Sample records for the query "recognize visual patterns"

  1. Perceptual learning in a non-human primate model of artificial vision

    PubMed Central

    Killian, Nathaniel J.; Vurro, Milena; Keith, Sarah B.; Kyada, Margee J.; Pezaris, John S.

    2016-01-01

    Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, not depending on the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation. PMID:27874058

  2. Voice response system of color and pattern on clothes for visually handicapped person.

    PubMed

    Miyake, Masao; Manabe, Yoshitsugu; Uranishi, Yuki; Imura, Masataka; Oshiro, Osamu

    2013-01-01

    For visually handicapped people, mental support is important for independent daily life and participation in society. A system that can recognize the colors and patterns on clothing would allow them to go out with fewer concerns. Building on a basic study of such a system, we developed a prototype that can reliably recognize colors and patterns and immediately report this information by voice when the user points the device at clothing. Evaluation experiments show that the prototype outperforms the system from the basic study in recognition accuracy for both color and pattern.

  3. Comparing the visual spans for faces and letters

    PubMed Central

    He, Yingchen; Scholz, Jennifer M.; Gage, Rachel; Kallie, Christopher S.; Liu, Tingting; Legge, Gordon E.

    2015-01-01

    The visual span—the number of adjacent text letters that can be reliably recognized on one fixation—has been proposed as a sensory bottleneck that limits reading speed (Legge, Mansfield, & Chung, 2001). Like reading, searching for a face is an important daily task that involves pattern recognition. Is there a similar limitation on the number of faces that can be recognized in a single fixation? Here we report on a study in which we measured and compared the visual-span profiles for letter and face recognition. A serial two-stage model for pattern recognition was developed to interpret the data. The first stage is characterized by factors limiting recognition of isolated letters or faces, and the second stage represents the interfering effect of nearby stimuli on recognition. Our findings show that the visual span for faces is smaller than that for letters. Surprisingly, however, when differences in first-stage processing for letters and faces are accounted for, the two visual spans become nearly identical. These results suggest that the concept of visual span may describe a common sensory bottleneck that underlies different types of pattern recognition. PMID:26129858

  4. Intramodal and Intermodal Functioning of Normal and LD Children

    ERIC Educational Resources Information Center

    Heath, Earl J.; Early, George H.

    1973-01-01

    Assessed were the abilities of 50 normal 5-to 9-year-old children and 30 learning disabled 7-to 9-year-old children to recognize temporal patterns presented visually and auditorially (intramodal abilities) and to vocally produce the patterns whether presentation was visual or auditory (intramodal and cross-modal abilities). (MC)

  5. Cognitive approaches for patterns analysis and security applications

    NASA Astrophysics Data System (ADS)

    Ogiela, Marek R.; Ogiela, Lidia

    2017-08-01

    This paper presents new opportunities for developing innovative solutions for semantic pattern classification and visual cryptography, based on cognitive and bio-inspired approaches. Such techniques can be used to evaluate the meaning of analyzed patterns or encrypted information and to incorporate that meaning into the classification task or encryption process. They also allow crypto-biometric solutions to extend personalized cryptography methodologies based on visual pattern analysis. In particular, the application of cognitive information systems to the semantic analysis of different patterns is presented, along with a novel application of such systems to visual secret sharing. Visual shares for divided information can be created using a threshold procedure, which may depend on personal ability to recognize image details visible in the divided images.

  6. Transformations in the Recognition of Visual Forms

    ERIC Educational Resources Information Center

    Charness, Neil; Bregman, Albert S.

    1973-01-01

    In a study which required college students to learn to recognize four flexible plastic shapes photographed on different backgrounds from different angles, the importance of a context-rich environment for the learning and recognition of visual patterns was illustrated. (Author)

  7. Is It a Pattern?

    ERIC Educational Resources Information Center

    McGarvey, Lynn M.

    2013-01-01

    This article describes how in early mathematics learning, young children are often asked to recognize and describe visual patterns in their environment--perhaps on their clothing, a toy, or the carpet; around a picture frame; or in the playground equipment. Exploring patterns in the early years is seen as an important introduction to algebraic…

  8. Think outside the Polygon

    ERIC Educational Resources Information Center

    Graf, Andrea B.

    2010-01-01

    Students appear to be able to see number relationships and patterns but have difficulty recognizing the visual properties of shapes, especially if the shapes are in different positions. Their difficulty in the visual and spatial realm is often linked to a lack of drawing experience and possibly undeveloped fine-motor skills. The author, as a…

  9. Learning to Recognize Patterns: Changes in the Visual Field with Familiarity

    NASA Astrophysics Data System (ADS)

    Bebko, James M.; Uchikawa, Keiji; Saida, Shinya; Ikeda, Mitsuo

    1995-01-01

    Two studies were conducted to investigate changes which take place in the visual information processing of novel stimuli as they become familiar. Japanese writing characters (Hiragana and Kanji) which were unfamiliar to two native English-speaking subjects were presented using a moving window technique to restrict their visual fields. Study time for visual recognition was recorded across repeated sessions, and with varying visual field restrictions. The critical visual field was defined as the size of the visual field beyond which further increases did not improve the speed of recognition performance. In the first study, when the Hiragana patterns were novel, subjects needed to see about half of the entire pattern simultaneously to maintain optimal performance. However, the critical visual field size decreased as familiarity with the patterns increased. These results were replicated in the second study with more complex Kanji characters. In addition, the critical field size decreased as pattern complexity decreased. We propose a three-component model of pattern perception. In the first stage a representation of the stimulus must be constructed by the subject, and restriction of the visual field interferes dramatically with this component when stimuli are unfamiliar. With increased familiarity, subjects become able to reconstruct a previous representation from very small, unique segments of the pattern, analogous to the informative areas hypothesized by Loftus and Mackworth [J. Exp. Psychol., 4 (1978) 565].

  10. Magnocellular pathway for rotation invariant Neocognitron.

    PubMed

    Ting, C H

    1993-03-01

    In the mammalian visual system, the magnocellular and parvocellular pathways cooperatively process visual information in parallel. The magnocellular pathway is more global and less particular about details, while the parvocellular pathway recognizes objects based on local features. In many respects, the Neocognitron may be regarded as an artificial analogue of the parvocellular pathway. It is interesting, then, to model the magnocellular pathway. In order to achieve rotation invariance for the Neocognitron, we propose a neural network model patterned after the magnocellular pathway and expand its role to include surmising the orientation of the input pattern prior to recognition. With the incorporation of the magnocellular pathway, a basic shift in the original paradigm has taken place: a pattern is now said to be recognized when and only when one of the winners of the magnocellular pathway is validated by the parvocellular pathway. We have implemented the magnocellular pathway coupled with the Neocognitron in parallel on transputers; our simulation programme is now able to recognize numerals in arbitrary orientation.

  11. Visual hallucinations in schizophrenia: confusion between imagination and perception.

    PubMed

    Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S

    2008-05-01

    An association between hallucinations and reality-monitoring deficit has been repeatedly observed in patients with schizophrenia. Most data concern auditory/verbal hallucinations. The aim of this study was to investigate the association between visual hallucinations and a specific type of reality-monitoring deficit, namely confusion between imagined and perceived pictures. Forty-one patients with schizophrenia and 43 healthy control participants completed a reality-monitoring task. Thirty-two items were presented either as written words or as pictures. After the presentation phase, participants had to recognize the target words and pictures among distractors, and then remember their mode of presentation. All groups of participants recognized the pictures better than the words, except the patients with visual hallucinations, who presented the opposite pattern. The participants with visual hallucinations made more misattributions to pictures than did the others, and higher ratings of visual hallucinations were correlated with increased tendency to remember words as pictures. No association with auditory hallucinations was revealed. Our data suggest that visual hallucinations are associated with confusion between visual mental images and perception.

  12. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences

    PubMed Central

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns. PMID:26147887

  13. Self-Organization of Spatio-Temporal Hierarchy via Learning of Dynamic Visual Image Patterns on Action Sequences.

    PubMed

    Jung, Minju; Hwang, Jungsik; Tani, Jun

    2015-01-01

    It is well known that the visual cortex efficiently processes high-dimensional spatial information by using a hierarchical structure. Recently, computational models that were inspired by the spatial hierarchy of the visual cortex have shown remarkable performance in image recognition. Up to now, however, most biological and computational modeling studies have mainly focused on the spatial domain and do not discuss temporal domain processing of the visual cortex. Several studies on the visual cortex and other brain areas associated with motor control support that the brain also uses its hierarchical structure as a processing mechanism for temporal information. Based on the success of previous computational models using spatial hierarchy and temporal hierarchy observed in the brain, the current report introduces a novel neural network model for the recognition of dynamic visual image patterns based solely on the learning of exemplars. This model is characterized by the application of both spatial and temporal constraints on local neural activities, resulting in the self-organization of a spatio-temporal hierarchy necessary for the recognition of complex dynamic visual image patterns. The evaluation with the Weizmann dataset in recognition of a set of prototypical human movement patterns showed that the proposed model is significantly robust in recognizing dynamically occluded visual patterns compared to other baseline models. Furthermore, an evaluation test for the recognition of concatenated sequences of those prototypical movement patterns indicated that the model is endowed with a remarkable capability for the contextual recognition of long-range dynamic visual image patterns.

  14. Self-organizing neural network models for visual pattern recognition.

    PubMed

    Fukushima, K

    1987-01-01

    Two neural network models for visual pattern recognition are discussed. The first model, called a "neocognitron", is a hierarchical multilayered network which has only afferent synaptic connections. It can acquire the ability to recognize patterns by "learning-without-a-teacher": the repeated presentation of a set of training patterns is sufficient, and no information about the categories of the patterns is necessary. The cells of the highest stage eventually become "gnostic cells", whose response shows the final result of the pattern recognition of the network. Pattern recognition is performed on the basis of similarity in shape between patterns, and is not affected by deformation, nor by changes in size, nor by shifts in the position of the stimulus pattern. The second model has not only afferent but also efferent synaptic connections, and is endowed with the function of selective attention. The afferent and the efferent signals interact with each other in the hierarchical network: the efferent signals, that is, the signals for selective attention, have a facilitating effect on the afferent signals, and at the same time, the afferent signals gate efferent signal flow. When a complex figure, consisting of two or more patterns, is presented to the model, it is segmented into individual patterns, and each pattern is recognized separately. Even if one of the patterns to which the model is paying selective attention is affected by noise or defects, the model can "recall" the complete pattern from which the noise has been eliminated and the defects corrected.
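
    A schematic sketch of the S-cell/C-cell arrangement at the heart of the first model is given below: S-layers respond where the input matches a local template, and C-layers pool those responses to tolerate small shifts. This is a minimal illustration of the mechanism, not Fukushima's network; the templates, threshold, and pooling size are arbitrary choices made here for illustration.

    ```python
    # Schematic sketch of one S-layer/C-layer pair of a neocognitron-like network:
    # S-cells respond where the input matches a local template, C-cells max-pool
    # over neighborhoods to tolerate small shifts. Templates and sizes are arbitrary.
    import numpy as np
    from scipy.signal import correlate2d

    def s_layer(image, templates, threshold=2.0):
        """One feature map per template: thresholded template correlation."""
        maps = [np.maximum(correlate2d(image, t, mode="same") - threshold, 0.0)
                for t in templates]
        return np.stack(maps)

    def c_layer(s_maps, pool=2):
        """Max-pool each S-map over pool x pool blocks (shift tolerance)."""
        n, h, w = s_maps.shape
        h, w = h - h % pool, w - w % pool
        trimmed = s_maps[:, :h, :w]
        return trimmed.reshape(n, h // pool, pool, w // pool, pool).max(axis=(2, 4))

    # Hypothetical usage: a 16x16 binary pattern and two 3x3 edge-like templates.
    image = np.zeros((16, 16))
    image[4:12, 8] = 1.0                                      # a vertical stroke
    templates = [np.array([[0, 1, 0]] * 3, float),            # vertical detector
                 np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], float)]  # horizontal
    print(c_layer(s_layer(image, templates)).shape)           # (2, 8, 8)
    ```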

  15. Learning to recognize face shapes through serial exploration.

    PubMed

    Wallraven, Christian; Whittingstall, Lisa; Bülthoff, Heinrich H

    2013-05-01

    Human observers are experts at visual face recognition due to specialized visual mechanisms for face processing that evolve with perceptual expertise. Such expertise has long been attributed to the use of configural processing, enabled by fast, parallel encoding of the visual information in the face. Here we tested whether participants can learn to efficiently recognize faces that are serially encoded, that is, when only partial visual information about the face is available at any given time. For this, ten participants were trained in gaze-restricted face recognition in which face masks were viewed through a small aperture controlled by the participant. Tests comparing trained with untrained performance revealed (1) a marked improvement in terms of speed and accuracy, (2) a gradual development of configural processing strategies, and (3) participants' ability to rapidly learn and accurately recognize novel exemplars. This performance pattern demonstrates that participants were able to learn new strategies to compensate for the serial nature of information encoding. The results are discussed in terms of expertise acquisition and relevance for other sensory modalities relying on serial encoding.

  16. A view of Kanerva's sparse distributed memory

    NASA Technical Reports Server (NTRS)

    Denning, P. J.

    1986-01-01

    Pentti Kanerva is working on a new class of computers called pattern computers. Pattern computers may close the gap between the capabilities of biological organisms to recognize and act on patterns (visual, auditory, tactile, or olfactory) and those of modern computers. Combinations of numeric, symbolic, and pattern computers may one day be capable of sustaining robots. An overview of the requirements for a pattern computer, a summary of Kanerva's Sparse Distributed Memory (SDM), and examples of tasks this computer can be expected to perform well are given.

  17. Male tawny dragons use throat patterns to recognize rivals.

    PubMed

    Osborne, Louise; Umbers, Kate D L; Backwell, Patricia R Y; Keogh, J Scott

    2012-10-01

    The ability to distinguish between familiar and unfamiliar conspecifics is important for many animals, especially territorial species since it allows them to avoid unnecessary interactions with individuals that pose little threat. There are very few studies, however, that identify the proximate cues that facilitate such recognition in visual systems. Here, we show that in tawny dragons (Ctenophorus decresii), males can recognize familiar and unfamiliar conspecific males based on morphological features alone, without the aid of chemical or behavioural cues. We further show that it is the colour pattern of the throat patches (gular) that facilitates this recognition.

  18. A Cortical Network for the Encoding of Object Change

    PubMed Central

    Hindy, Nicholas C.; Solomon, Sarah H.; Altmann, Gerry T.M.; Thompson-Schill, Sharon L.

    2015-01-01

    Understanding events often requires recognizing unique stimuli as alternative, mutually exclusive states of the same persisting object. Using fMRI, we examined the neural mechanisms underlying the representation of object states and object-state changes. We found that subjective ratings of visual dissimilarity between a depicted object and an unseen alternative state of that object predicted the corresponding multivoxel pattern dissimilarity in early visual cortex during an imagery task, while late visual cortex patterns tracked dissimilarity among distinct objects. Early visual cortex pattern dissimilarity for object states in turn predicted the level of activation in an area of left posterior ventrolateral prefrontal cortex (pVLPFC) most responsive to conflict in a separate Stroop color-word interference task, and an area of left ventral posterior parietal cortex (vPPC) implicated in the relational binding of semantic features. We suggest that when visualizing object states, representational content instantiated across early and late visual cortex is modulated by processes in left pVLPFC and left vPPC that support selection and binding, and ultimately event comprehension. PMID:24127425

  19. Knowledge Management for Command and Control

    DTIC Science & Technology

    2004-06-01

    ...interfaces relies on rich visual and conceptual understanding of what is sketched, rather than the pattern-recognition technologies that most systems use... recognizers) required by other approaches. The underlying conceptual representations that nuSketch uses enable it to serve as a front end to knowledge... constructing enemy-intent hypotheses via mixed visual and conceptual analogies. II.C. Multi-ViewPoint Clustering Analysis (MVP-CA) technology...

  20. You Look Familiar: How Malaysian Chinese Recognize Faces

    PubMed Central

    Tan, Chrystalle B. Y.; Stephen, Ian D.; Whitehead, Ross; Sheppard, Elizabeth

    2012-01-01

    East Asian and white Western observers employ different eye movement strategies for a variety of visual processing tasks, including face processing. Recent eye tracking studies on face recognition found that East Asians tend to integrate information holistically by focusing on the nose while white Westerners perceive faces featurally by moving between the eyes and mouth. The current study examines the eye movement strategy that Malaysian Chinese participants employ when recognizing East Asian, white Western, and African faces. Rather than adopting the Eastern or Western fixation pattern, Malaysian Chinese participants use a mixed strategy by focusing on the eyes and nose more than the mouth. The combination of Eastern and Western strategies proved advantageous in participants' ability to recognize East Asian and white Western faces, suggesting that individuals learn to use fixation patterns that are optimized for recognizing the faces with which they are more familiar. PMID:22253762

  1. Aging and visual 3-D shape recognition from motion.

    PubMed

    Norman, J Farley; Adkins, Olivia C; Dowell, Catherine J; Hoyng, Stevie C; Shain, Lindsey M; Pedersen, Lauren E; Kinnard, Jonathan D; Higginbotham, Alexia J; Gilliam, Ashley N

    2017-11-01

    Two experiments were conducted to evaluate the ability of younger and older adults to recognize 3-D object shape from patterns of optical motion. In Experiment 1, participants were required to identify dotted surfaces that rotated in depth (i.e., surface structure portrayed using the kinetic depth effect). The task difficulty was manipulated by limiting the surface point lifetimes within the stimulus apparent motion sequences. In Experiment 2, the participants identified solid, naturally shaped objects (replicas of bell peppers, Capsicum annuum) that were defined by occlusion boundary contours, patterns of specular highlights, or combined optical patterns containing both boundary contours and specular highlights. Significant and adverse effects of increased age were found in both experiments. Despite the fact that previous research has found that increases in age do not reduce solid shape discrimination, our current results indicated that the same conclusion does not hold for shape identification. We demonstrated that aging results in a reduction in the ability to visually recognize 3-D shape independent of how the 3-D structure is defined (motions of isolated points, deformations of smooth optical fields containing specular highlights, etc.).

  2. Preserved Haptic Shape Processing after Bilateral LOC Lesions.

    PubMed

    Snow, Jacqueline C; Goodale, Melvyn A; Culham, Jody C

    2015-10-07

    The visual and haptic perceptual systems are understood to share a common neural representation of object shape. A region thought to be critical for recognizing visual and haptic shape information is the lateral occipital complex (LOC). We investigated whether LOC is essential for haptic shape recognition in humans by studying behavioral responses and brain activation for haptically explored objects in a patient (M.C.) with bilateral lesions of the occipitotemporal cortex, including LOC. Despite severe deficits in recognizing objects using vision, M.C. was able to accurately recognize objects via touch. M.C.'s psychophysical response profile to haptically explored shapes was also indistinguishable from controls. Using fMRI, M.C. showed no object-selective visual or haptic responses in LOC, but her pattern of haptic activation in other brain regions was remarkably similar to healthy controls. Although LOC is routinely active during visual and haptic shape recognition tasks, it is not essential for haptic recognition of object shape. The lateral occipital complex (LOC) is a brain region regarded as critical for recognizing object shape, both in vision and in touch. However, causal evidence linking LOC with haptic shape processing is lacking. We studied recognition performance, psychophysical sensitivity, and brain response to touched objects in a patient (M.C.) with extensive lesions involving LOC bilaterally. Despite being severely impaired in visual shape recognition, M.C. was able to identify objects via touch and she showed normal sensitivity to a haptic shape illusion. M.C.'s brain response to touched objects in areas of undamaged cortex was also very similar to that observed in neurologically healthy controls. These results demonstrate that LOC is not necessary for recognizing objects via touch.

  3. Sciologer: Visualizing and Exploring Scientific Communities

    ERIC Educational Resources Information Center

    Bales, Michael Eliot

    2009-01-01

    Despite the recognized need to increase interdisciplinary collaboration, there are few information resources available to provide researchers with an overview of scientific communities--topics under investigation by various groups, and patterns of collaboration among groups. The tools that are available are designed for expert social network…

  4. Recognizing patterns of visual field loss using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher

    2014-03-01

    Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT), which tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian mixture model with expectation maximization (GEM), where EM is used to estimate the model parameters, to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center, clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity), and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2, and 3, the optimal numbers of PCA-identified axes were 2, 2, and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss.
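
    The clustering pipeline described in this record can be approximated with standard tools. The sketch below is a minimal illustration rather than the authors' code: the `fdt` matrix of 52-point threshold sensitivities is a placeholder, and the choice of 3 mixture components and up to 5 principal axes simply mirrors the numbers quoted in the abstract.

    ```python
    # Minimal sketch of GEM clustering + per-cluster PCA for 52-point FDT fields.
    # `fdt` is a placeholder (n_eyes, 52) array of threshold sensitivities; the
    # 3 components and 5 axes follow the abstract, not code from the authors.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    fdt = rng.normal(loc=30.0, scale=3.0, size=(1976, 52))  # placeholder data

    # Gaussian mixture fitted by expectation maximization (the "GEM" step).
    gem = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
    labels = gem.fit_predict(fdt)

    # Decompose each cluster into its principal axes (candidate defect patterns).
    for k in range(gem.n_components):
        cluster = fdt[labels == k]
        pca = PCA(n_components=min(5, len(cluster) - 1)).fit(cluster)
        print(f"cluster {k}: {len(cluster)} eyes, "
              f"explained variance {np.round(pca.explained_variance_ratio_, 2)}")
    ```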

  5. Comparing the minimum spatial-frequency content for recognizing Chinese and alphabet characters

    PubMed Central

    Wang, Hui; Legge, Gordon E.

    2018-01-01

    Visual blur is a common problem that causes difficulty in pattern recognition for normally sighted people under degraded viewing conditions (e.g., near the acuity limit, when defocused, or in fog) and also for people with impaired vision. For reliable identification, the spatial frequency content of an object needs to extend up to or exceed a minimum value in units of cycles per object, referred to as the critical spatial frequency. In this study, we investigated the critical spatial frequency for alphabet and Chinese characters, and examined the effect of pattern complexity. The stimuli were divided into seven categories based on their perimetric complexity, including the lowercase and uppercase alphabet letters, and five groups of Chinese characters. We found that the critical spatial frequency significantly increased with complexity, from 1.01 cycles per character for the simplest group to 2.00 cycles per character for the most complex group of Chinese characters. A second goal of the study was to test a space-bandwidth invariance hypothesis that would represent a tradeoff between the critical spatial frequency and the number of adjacent patterns that can be recognized at one time. We tested this hypothesis by comparing the critical spatial frequencies in cycles per character from the current study and visual-span sizes in number of characters (measured by Wang, He, & Legge, 2014) for sets of characters with different complexities. For the character size (1.2°) we used in the study, we found an invariant product of approximately 10 cycles, which may represent a capacity limitation on visual pattern recognition. PMID:29297056
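
    Combining the critical spatial frequencies reported here with the visual-span sizes from Wang, He, and Legge (2014) (see the visual-span record later in this list), the proposed space-bandwidth invariance reduces to a one-line check; pairing the simplest and most complex character groups across the two studies is an assumption made here for illustration.

    ```latex
    % Space-bandwidth invariance: critical spatial frequency f_c (cycles/character)
    % times visual-span size n (characters) stays near 10 cycles.
    f_c \cdot n \approx 10\ \text{cycles}:\qquad
    1.01 \times 10.5 \approx 10.6\ \text{cycles},\qquad
    2.00 \times 4.5 = 9.0\ \text{cycles}
    ```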

  6. Exploration of spatio-temporal patterns of students' movement in field trip by visualizing the log data

    NASA Astrophysics Data System (ADS)

    Cho, Nahye; Kang, Youngok

    2018-05-01

    As the number of mobile and web users continues to grow, large volumes of log data are being generated alongside user input data, and studies that explore the patterns and meanings of various movement activities using these log data are increasing rapidly. In the field of education, the importance of field trips has been recognized as creative education is emphasized, and examples of using mobile devices on field trips are growing with the development of information technology. In this study, we explore patterns of student activity by visualizing the log data generated during high school students' field trips with mobile devices.

  7. Facial Recognition in a Discus Fish (Cichlidae): Experimental Approach Using Digital Models

    PubMed Central

    Satoh, Shun; Tanaka, Hirokazu; Kohda, Masanori

    2016-01-01

    A number of mammals and birds are known to be capable of visually discriminating between familiar and unfamiliar individuals, depending on facial patterns in some species. Many fish also visually recognize other conspecifics individually, and previous studies report that facial color patterns can be an initial signal for individual recognition. For example, a cichlid fish and a damselfish will use individual-specific color patterns that develop only in the facial area. However, it remains to be determined whether the facial area is an especially favorable site for visual signals in fish, and if so, why. The monogamous discus fish, Symphysodon aequifasciatus (Cichlidae), is capable of visually distinguishing its pair-partner from other conspecifics. Discus fish have individual-specific coloration patterns on the entire body, including the facial area, frontal head, trunk, and vertical fins. If the facial area is an inherently important site for visual cues, this species will use facial patterns for individual recognition; otherwise it will use patterns on other body parts as well. We used modified digital models to examine whether discus fish use only facial coloration for individual recognition. Digital models of four different combinations of familiar and unfamiliar fish faces and bodies were displayed in frontal and lateral views. Focal fish frequently performed partner-specific displays towards partner-face models, and aggressive displays towards models of non-partners' faces. We conclude that, to identify individuals, this fish depends not on frontal color patterns but on lateral facial color patterns, although it has unique color patterns on other parts of the body. We discuss the significance of facial coloration for individual recognition in fish compared with birds and mammals. PMID:27191162

  8. Facial Recognition in a Discus Fish (Cichlidae): Experimental Approach Using Digital Models.

    PubMed

    Satoh, Shun; Tanaka, Hirokazu; Kohda, Masanori

    2016-01-01

    A number of mammals and birds are known to be capable of visually discriminating between familiar and unfamiliar individuals, depending on facial patterns in some species. Many fish also visually recognize other conspecifics individually, and previous studies report that facial color patterns can be an initial signal for individual recognition. For example, a cichlid fish and a damselfish will use individual-specific color patterns that develop only in the facial area. However, it remains to be determined whether the facial area is an especially favorable site for visual signals in fish, and if so, why. The monogamous discus fish, Symphysodon aequifasciatus (Cichlidae), is capable of visually distinguishing its pair-partner from other conspecifics. Discus fish have individual-specific coloration patterns on the entire body, including the facial area, frontal head, trunk, and vertical fins. If the facial area is an inherently important site for visual cues, this species will use facial patterns for individual recognition; otherwise it will use patterns on other body parts as well. We used modified digital models to examine whether discus fish use only facial coloration for individual recognition. Digital models of four different combinations of familiar and unfamiliar fish faces and bodies were displayed in frontal and lateral views. Focal fish frequently performed partner-specific displays towards partner-face models, and aggressive displays towards models of non-partners' faces. We conclude that, to identify individuals, this fish depends not on frontal color patterns but on lateral facial color patterns, although it has unique color patterns on other parts of the body. We discuss the significance of facial coloration for individual recognition in fish compared with birds and mammals.

  9. Rett syndrome: basic features of visual processing-a pilot study of eye-tracking.

    PubMed

    Djukic, Aleksandra; Valicenti McDermott, Maria; Mavrommatis, Kathleen; Martins, Cristina L

    2012-07-01

    Consistently observed "strong eye gaze" has not been validated as a means of communication in girls with Rett syndrome, who are ubiquitously affected by apraxia and unable to reply either verbally or manually to questions during formal psychologic assessment. We examined nonverbal cognitive abilities and basic features of visual processing (visual discrimination, attention/memory) by analyzing patterns of visual fixation in 44 girls with Rett syndrome, compared with typical control subjects. To determine features of visual fixation patterns, multiple pictures (with the location of the salient stimulus and the presence/absence of novel stimuli as variables) were presented on the screen of a TS120 eye-tracker. Of the 44, 35 (80%) calibrated and exhibited meaningful patterns of visual fixation. They looked longer at salient stimuli (cartoon, 2.8 ± 2 seconds S.D., vs shape, 0.9 ± 1.2 seconds S.D.; P = 0.02), regardless of their position on the screen. They recognized novel stimuli, decreasing the fixation time on the central image when another image appeared on the periphery of the slide (2.7 ± 1 seconds S.D. vs 1.8 ± 1 seconds S.D., P = 0.002). Eye-tracking provides a feasible method for cognitive assessment and new insights into the "hidden" abilities of individuals with Rett syndrome.

  10. Multiperson visual focus of attention from head pose and meeting contextual cues.

    PubMed

    Ba, Sileye O; Odobez, Jean-Marc

    2011-01-01

    This paper introduces a novel contextual model for the recognition of people's visual focus of attention (VFOA) in meetings from audio-visual perceptual cues. More specifically, instead of independently recognizing the VFOA of each meeting participant from his own head pose, we propose to jointly recognize the participants' visual attention in order to introduce context-dependent interaction models that relate to group activity and the social dynamics of communication. Meeting contextual information is represented by the location of people, conversational events identifying floor holding patterns, and a presentation activity variable. By modeling the interactions between the different contexts and their combined and sometimes contradictory impact on the gazing behavior, our model allows us to handle VFOA recognition in difficult task-based meetings involving artifacts, presentations, and moving people. We validated our model through rigorous evaluation on a publicly available and challenging data set of 12 real meetings (5 hours of data). The results demonstrated that the integration of the presentation and conversation dynamical context using our model can lead to significant performance improvements.

  11. Impaired Integration of Emotional Faces and Affective Body Context in a Rare Case of Developmental Visual Agnosia

    PubMed Central

    Aviezer, Hillel; Hassin, Ran. R.; Bentin, Shlomo

    2011-01-01

    In the current study we examined the recognition of facial expressions embedded in emotionally expressive bodies in case LG, an individual with a rare form of developmental visual agnosia who suffers from severe prosopagnosia. Neuropsychological testing demonstrated that LG's agnosia is characterized by profoundly impaired visual integration. Unlike individuals with typical developmental prosopagnosia, who display specific difficulties with face identity (but typically not expression) recognition, LG was also impaired at recognizing isolated facial expressions. By contrast, he successfully recognized the expressions portrayed by faceless emotional bodies handling affective paraphernalia. When presented with contextualized faces in emotional bodies, his ability to detect the emotion expressed by a face did not improve even if it was embedded in an emotionally-congruent body context. Furthermore, in contrast to controls, LG displayed an abnormal pattern of contextual influence from emotionally-incongruent bodies. The results are interpreted in the context of a general integration deficit in developmental visual agnosia, suggesting that impaired integration may extend from the level of the face to the level of the full person. PMID:21482423

  12. [Pattern recognition of decorative papers with different visual characteristics using visible spectroscopy coupled with principal component analysis (PCA)].

    PubMed

    Zhang, Mao-mao; Yang, Zhong; Lu, Bin; Liu, Ya-na; Sun, Xue-dong

    2015-02-01

    As one of the most important decorative materials for modern household products, decorative papers impregnated with melamine not only have good decorative performance but can also greatly improve the surface properties of materials. However, the appearance quality of decorative papers (such as color-difference evaluation and control), an important index of surface quality, has been a puzzle for manufacturers and consumers. At present, the human eye is used in factories to judge whether a color difference exists, which is not only inefficient but also prone to subjective error. It is therefore of great significance to find an effective method for the fast recognition and classification of decorative papers. In the present study, visible spectroscopy coupled with principal component analysis (PCA) was used for the pattern recognition of decorative papers with different visual characteristics, to investigate the feasibility of visible spectroscopy for rapidly recognizing the types of decorative papers. The results showed that the correlation between the visible spectra and the visual characteristics (L*, a*, and b*) was significant, with correlation coefficients up to 0.85 and some even above 0.99, suggesting that the visible spectra carry information about the visual characteristics of the decorative paper surface. When visible spectroscopy coupled with PCA was used to recognize the types of decorative papers, the accuracy reached 94%-100%, suggesting that visible spectroscopy is a promising new method for the rapid, objective, and accurate recognition of decorative papers with different visual characteristics.
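
    A pattern-recognition step of this kind, PCA on visible reflectance spectra followed by a simple classifier, can be sketched as follows. The array names, the nearest-centroid classifier, and the train/test split are illustrative assumptions; the paper itself does not specify this exact pipeline.

    ```python
    # Sketch: classify decorative-paper types from visible reflectance spectra
    # using PCA scores. `spectra` (n_samples, n_wavelengths) and `labels` are
    # hypothetical placeholders, as is the nearest-centroid classifier.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import NearestCentroid
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    spectra = rng.normal(size=(120, 81))      # e.g. 400-800 nm sampled every 5 nm
    labels = rng.integers(0, 4, size=120)     # four paper types (placeholder)

    X_train, X_test, y_train, y_test = train_test_split(
        spectra, labels, test_size=0.3, random_state=1)

    model = make_pipeline(StandardScaler(), PCA(n_components=5), NearestCentroid())
    model.fit(X_train, y_train)
    print("recognition accuracy:", model.score(X_test, y_test))
    ```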

  13. Effect of pattern complexity on the visual span for Chinese and alphabet characters

    PubMed Central

    Wang, Hui; He, Xuanzi; Legge, Gordon E.

    2014-01-01

    The visual span for reading is the number of letters that can be recognized without moving the eyes and is hypothesized to impose a sensory limitation on reading speed. Factors affecting the size of the visual span have been studied using alphabet letters. There may be common constraints applying to recognition of other scripts. The aim of this study was to extend the concept of the visual span to Chinese characters and to examine the effect of the greater complexity of these characters. We measured visual spans for Chinese characters and alphabet letters in the central vision of bilingual subjects. Perimetric complexity was used as a metric to quantify the pattern complexity of binary character images. The visual span tests were conducted with four sets of stimuli differing in complexity—lowercase alphabet letters and three groups of Chinese characters. We found that the size of visual spans decreased with increasing complexity, ranging from 10.5 characters for alphabet letters to 4.5 characters for the most complex Chinese characters studied. A decomposition analysis revealed that crowding was the dominant factor limiting the size of the visual span, and the amount of crowding increased with complexity. Errors in the spatial arrangement of characters (mislocations) had a secondary effect. We conclude that pattern complexity has a major effect on the size of the visual span, mediated in large part by crowding. Measuring the visual span for Chinese characters is likely to have high relevance to understanding visual constraints on Chinese reading performance. PMID:24993020

  14. Visualizing frequent patterns in large multivariate time series

    NASA Astrophysics Data System (ADS)

    Hao, M.; Marwah, M.; Janetzko, H.; Sharma, R.; Keim, D. A.; Dayal, U.; Patnaik, D.; Ramakrishnan, N.

    2011-01-01

    The detection of previously unknown, frequently occurring patterns in time series, often called motifs, has been recognized as an important task. However, it is difficult to discover and visualize these motifs as their numbers increase, especially in large multivariate time series. To find frequent motifs, we use several temporal data mining and event encoding techniques to cluster and convert a multivariate time series to a sequence of events. Then we quantify the efficiency of the discovered motifs by linking them with a performance metric. To visualize frequent patterns in a large time series with potentially hundreds of nested motifs on a single display, we introduce three novel visual analytics methods: (1) motif layout, using colored rectangles for visualizing the occurrences and hierarchical relationships of motifs in a multivariate time series, (2) motif distortion, for enlarging or shrinking motifs as appropriate for easy analysis and (3) motif merging, to combine a number of identical adjacent motif instances without cluttering the display. Analysts can interactively optimize the degree of distortion and merging to get the best possible view. A specific motif (e.g., the most efficient or least efficient motif) can be quickly detected from a large time series for further investigation. We have applied these methods to two real-world data sets: data center cooling and oil well production. The results provide important new insights into the recurring patterns.
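
    The preprocessing described in this record, encoding a multivariate time series as a sequence of events and then counting frequently recurring subsequences (motifs), can be illustrated with a small sketch. The quantile-based encoding and fixed window length below are stand-ins for the clustering and event-encoding techniques the authors actually use.

    ```python
    # Sketch: discretize a multivariate time series into event symbols, then
    # count frequently recurring windows ("motifs"). The encoding scheme is a
    # simplification of the paper's clustering/event-encoding steps.
    from collections import Counter
    import numpy as np

    rng = np.random.default_rng(2)
    series = rng.normal(size=(1000, 3))          # 1000 time steps, 3 variables

    # Encode each time step: per-variable tercile (0/1/2) joined into one symbol.
    terciles = np.quantile(series, [1 / 3, 2 / 3], axis=0)
    codes = (series > terciles[0]).astype(int) + (series > terciles[1]).astype(int)
    events = ["".join(map(str, row)) for row in codes]

    # Count all windows of length w; frequent ones are motif candidates.
    w = 5
    windows = Counter(tuple(events[i:i + w]) for i in range(len(events) - w + 1))
    for motif, count in windows.most_common(5):
        print(count, "x", "-".join(motif))
    ```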

  15. Brief Report: Face-Specific Recognition Deficits in Young Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Bradshaw, Jessica; Shic, Frederick; Chawarska, Katarzyna

    2011-01-01

    This study used eyetracking to investigate the ability of young children with autism spectrum disorders (ASD) to recognize social (faces) and nonsocial (simple objects and complex block patterns) stimuli using the visual paired comparison (VPC) paradigm. Typically developing (TD) children showed evidence for recognition of faces and simple…

  16. Television and Childhood Education.

    ERIC Educational Resources Information Center

    Hilliard, Robert L.

    To make adequate use of mass media for children's education, we must recognize that the medium is the message, that the conveyer is the content. The medium itself changes behavior, learning and growth patterns of the child. For example television itself teaches a special kind of visual awareness and enhances the ability to relate non-immediate…

  17. Perception of animacy in dogs and humans.

    PubMed

    Abdai, Judit; Ferdinandy, Bence; Terencio, Cristina Baño; Pogány, Ákos; Miklósi, Ádám

    2017-06-01

    Humans have a tendency to perceive inanimate objects as animate based on simple motion cues. Although animacy is considered a complex cognitive property, this recognition seems to be spontaneous. Researchers have found that young human infants discriminate between dependent and independent movement patterns. However, quick visual perception of animate entities may be crucial to non-human species as well. Based on general mammalian homology, dogs may possess similar skills to humans. Here, we investigated whether dogs and humans discriminate similarly between dependent and independent motion patterns performed by geometric shapes. We projected a side-by-side video display of the two patterns and measured looking times towards each side, in two trials. We found that in Trial 1, both dogs and humans were equally interested in the two patterns, but in Trial 2, in both species, looking times towards the dependent pattern decreased, whereas they increased towards the independent pattern. We argue that dogs and humans spontaneously recognized the specific pattern and habituated to it rapidly, but continued to show interest in the 'puzzling' pattern. This suggests that both species tend to recognize inanimate agents as animate relying solely on their motions.

  18. Learning To Recognize Visual Concepts: Development and Implementation of a Method for Texture Concept Acquisition Through Inductive Learning

    DTIC Science & Technology

    1993-01-01

    Maria and My Parents, Helena and Andrzej... ACKNOWLEDGMENTS: I would like to first of all thank my advisor, Dr. Ryszard Michalski, who introduced... represent the current state of the art in machine learning methodology. The most popular method, the minimization of Bayes risk [Duda and Hart, 1973], is a... Pattern Recognition, Vol. 23, no. 3-4, pp. 291-309, 1990. Duda, R. O. and P. Hart, Pattern Classification and Scene Analysis, John Wiley & Sons, 1973.

  19. Chess players' eye movements reveal rapid recognition of complex visual patterns: Evidence from a chess-related visual search task.

    PubMed

    Sheridan, Heather; Reingold, Eyal M

    2017-03-01

    To explore the perceptual component of chess expertise, we monitored the eye movements of expert and novice chess players during a chess-related visual search task that tested anecdotal reports that a key differentiator of chess skill is the ability to visualize the complex moves of the knight piece. Specifically, chess players viewed an array of four minimized chessboards, and they rapidly searched for the target board that allowed a knight piece to reach a target square in three moves. On each trial, there was only one target board (i.e., the "Yes" board), and for the remaining "lure" boards, the knight's path was blocked on either the first move (the "Easy No" board) or the second move (i.e., "the Difficult No" board). As evidence that chess experts can rapidly differentiate complex chess-related visual patterns, the experts (but not the novices) showed longer first-fixation durations on the "Yes" board relative to the "Difficult No" board. Moreover, as hypothesized, the task strongly differentiated chess skill: Reaction times were more than four times faster for the experts relative to novices, and reaction times were correlated with within-group measures of expertise (i.e., official chess ratings, number of hours of practice). These results indicate that a key component of chess expertise is the ability to rapidly recognize complex visual patterns.

  20. Common constraints limit Korean and English character recognition in peripheral vision.

    PubMed

    He, Yingchen; Kwon, MiYoung; Legge, Gordon E

    2018-01-01

    The visual span refers to the number of adjacent characters that can be recognized in a single glance. It is viewed as a sensory bottleneck in reading for both normal and clinical populations. In peripheral vision, the visual span for English characters can be enlarged after training with a letter-recognition task. Here, we examined the transfer of training from Korean to English characters for a group of bilingual Korean native speakers. In the pre- and posttests, we measured visual spans for Korean characters and English letters. Training (1.5 hours × 4 days) consisted of repetitive visual-span measurements for Korean trigrams (strings of three characters). Our training enlarged the visual spans for Korean single characters and trigrams, and the benefit transferred to untrained English symbols. The improvement was largely due to a reduction of within-character and between-character crowding in Korean recognition, as well as between-letter crowding in English recognition. We also found a negative correlation between the size of the visual span and the average pattern complexity of the symbol set. Together, our results showed that the visual span is limited by common sensory (crowding) and physical (pattern complexity) factors regardless of the language script, providing evidence that the visual span reflects a universal bottleneck for text recognition.

  1. Common constraints limit Korean and English character recognition in peripheral vision

    PubMed Central

    He, Yingchen; Kwon, MiYoung; Legge, Gordon E.

    2018-01-01

    The visual span refers to the number of adjacent characters that can be recognized in a single glance. It is viewed as a sensory bottleneck in reading for both normal and clinical populations. In peripheral vision, the visual span for English characters can be enlarged after training with a letter-recognition task. Here, we examined the transfer of training from Korean to English characters for a group of bilingual Korean native speakers. In the pre- and posttests, we measured visual spans for Korean characters and English letters. Training (1.5 hours × 4 days) consisted of repetitive visual-span measurements for Korean trigrams (strings of three characters). Our training enlarged the visual spans for Korean single characters and trigrams, and the benefit transferred to untrained English symbols. The improvement was largely due to a reduction of within-character and between-character crowding in Korean recognition, as well as between-letter crowding in English recognition. We also found a negative correlation between the size of the visual span and the average pattern complexity of the symbol set. Together, our results showed that the visual span is limited by common sensory (crowding) and physical (pattern complexity) factors regardless of the language script, providing evidence that the visual span reflects a universal bottleneck for text recognition. PMID:29327041

  2. The Effects of Syntactically Parsed Text Formats on Intensive Reading in EFL

    ERIC Educational Resources Information Center

    Herbert, John C.

    2014-01-01

    Separating text into meaningful language chunks, as with visual-syntactic text formatting, helps readers to process text more easily and language learners to recognize grammar and syntax patterns more quickly. Evidence of this exists in studies on native and non-native English speakers. However, recent studies question the role of VSTF in certain…

  3. A Hierarchical and Contextual Model for Learning and Recognizing Highly Variant Visual Categories

    DTIC Science & Technology

    2010-01-01

    ...neighboring pattern primitives, to create our model. We also present a minimax entropy framework for automatically learning which contextual constraints are... (table-of-contents fragments: Grammars; Markov Random Fields; Creating a Contextual...; Compositional Boosting; Top-down hallucinations of missing objects; The bottom-up to top-down...)

  4. Automatic speech recognition and training for severely dysarthric users of assistive technology: the STARDUST project.

    PubMed

    Parker, Mark; Cunningham, Stuart; Enderby, Pam; Hawley, Mark; Green, Phil

    2006-01-01

    The STARDUST project developed robust computer speech recognizers for use by eight people with severe dysarthria and concomitant physical disability to access assistive technologies. Independent computer speech recognizers trained with normal speech are of limited functional use to those with severe dysarthria due to limited and inconsistent proximity to "normal" articulatory patterns. Severe dysarthric output may also be characterized by a small mass of distinguishable phonetic tokens, making the acoustic differentiation of target words difficult. Speaker-dependent computer speech recognition using hidden Markov models was achieved by identifying robust phonetic elements within each individual speaker's output patterns. A new system of speech training using computer-generated visual and auditory feedback reduced the inconsistent production of key phonetic tokens over time.
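
    A speaker-dependent, whole-word recognizer of the kind described can be sketched with off-the-shelf tools: one Gaussian HMM is trained per target word on that speaker's recordings, and recognition picks the best-scoring model. The MFCC front end, file layout, and model sizes below are assumptions for illustration, not the STARDUST configuration.

    ```python
    # Sketch: speaker-dependent isolated-word recognition with one HMM per word.
    # MFCC features and model sizes are illustrative; STARDUST's actual front end
    # and topology are not specified here.
    import librosa
    import numpy as np
    from hmmlearn import hmm

    def mfcc_features(path):
        """Load a recording and return (frames, coefficients) MFCC features."""
        y, sr = librosa.load(path, sr=16000)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

    def train_word_model(paths, n_states=4):
        """Fit one Gaussian HMM on all training recordings of a single word."""
        feats = [mfcc_features(p) for p in paths]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                n_iter=50)
        model.fit(np.vstack(feats), lengths=[len(f) for f in feats])
        return model

    def recognize(path, word_models):
        """Return the word whose HMM gives the highest log-likelihood."""
        feats = mfcc_features(path)
        return max(word_models, key=lambda w: word_models[w].score(feats))

    # Hypothetical usage with per-word lists of this speaker's recordings:
    # models = {"lamp": train_word_model(["lamp_01.wav", "lamp_02.wav"]),
    #           "radio": train_word_model(["radio_01.wav", "radio_02.wav"])}
    # print(recognize("test_utterance.wav", models))
    ```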

  5. Functional specialization and convergence in the occipito-temporal cortex supporting haptic and visual identification of human faces and body parts: an fMRI study.

    PubMed

    Kitada, Ryo; Johnsrude, Ingrid S; Kochiyama, Takanori; Lederman, Susan J

    2009-10-01

    Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.

  6. Investigation of environmental change pattern in Japan

    NASA Technical Reports Server (NTRS)

    Maruyasu, T.; Ochiai, H.; Sugimori, Y.; Shoji, D.; Takeda, K.; Tsuchiya, K.; Nakajima, I.; Nakano, T.; Hayashi, S.; Horikawa, S. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A detailed land use classification for a large urban area of Tokyo was made using MSS digital data. It was found that residential, commercial, industrial, and wooded areas and grasslands can be successfully classified. A mesoscale vortex associated with the large ocean current Kuroshio, a rare phenomenon, was recognized visually through the analysis of MSS data. It was found that this vortex affects the effluent patterns of rivers. Lava flows from Sakurajima Volcano were clearly classified for three major eruptions (1779, 1914, and 1946) using MSS data.

  7. Older adults' decoding of emotions: age-related differences in interpreting dynamic emotional displays and the well-preserved ability to recognize happiness.

    PubMed

    Moraitou, Despina; Papantoniou, Georgia; Gkinopoulos, Theofilos; Nigritinou, Magdalini

    2013-09-01

    Although the ability to recognize emotions through bodily and facial muscular movements is vital to everyday life, numerous studies have found that older adults are less adept at identifying emotions than younger adults. The message gleaned from research has been one of greater decline in abilities to recognize specific negative emotions than positive ones. At the same time, these results raise methodological issues with regard to different modalities in which emotion decoding is measured. The main aim of the present study is to identify the pattern of age differences in the ability to decode basic emotions from naturalistic visual emotional displays. The sample comprised a total of 208 adults from Greece, aged from 18 to 86 years. Participants were examined using the Emotion Evaluation Test, which is the first part of a broader audiovisual tool, The Awareness of Social Inference Test. The Emotion Evaluation Test was designed to examine a person's ability to identify six emotions and discriminate these from neutral expressions, as portrayed dynamically by professional actors. The findings indicate that decoding of basic emotions occurs along the broad affective dimension of uncertainty, and a basic step in emotion decoding involves recognizing whether information presented is emotional or not. Age was found to negatively affect the ability to decode basic negatively valenced emotions as well as pleasant surprise. Happiness decoding is the only ability that was found well-preserved with advancing age. The main conclusion drawn from the study is that the pattern in which emotion decoding from visual cues is affected by normal ageing depends on the rate of uncertainty, which either is related to decoding difficulties or is inherent to a specific emotion. © 2013 The Authors. Psychogeriatrics © 2013 Japanese Psychogeriatric Society.

  8. Dietary Assessment on a Mobile Phone Using Image Processing and Pattern Recognition Techniques: Algorithm Design and System Prototyping.

    PubMed

    Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing

    2015-07-27

    Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work.
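
    A minimal sketch of a bag-of-visual-words pipeline of the kind described above, assuming OpenCV SIFT descriptors, a k-means visual vocabulary, and a linear SVM; vocabulary size, classifier choice, and the placeholder image paths are illustrative, not the authors' implementation.

    ```python
    # Sketch of a bag-of-visual-words pipeline for food image classification, assuming
    # OpenCV SIFT descriptors and a linear SVM. Parameters and file names are illustrative.
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def sift_descriptors(image_paths):
        sift = cv2.SIFT_create()
        all_desc = []
        for path in image_paths:
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            _, desc = sift.detectAndCompute(img, None)
            all_desc.append(desc if desc is not None else np.empty((0, 128), np.float32))
        return all_desc

    def bow_histograms(desc_list, kmeans):
        hists = np.zeros((len(desc_list), kmeans.n_clusters), dtype=np.float32)
        for i, desc in enumerate(desc_list):
            if len(desc):
                for w in kmeans.predict(desc):
                    hists[i, w] += 1
                hists[i] /= hists[i].sum()        # normalise to a word-frequency histogram
        return hists

    def train_bow_classifier(image_paths, labels, vocab_size=200):
        desc = sift_descriptors(image_paths)
        kmeans = KMeans(n_clusters=vocab_size, n_init=10).fit(np.vstack(desc))
        clf = SVC(kernel="linear").fit(bow_histograms(desc, kmeans), labels)
        return kmeans, clf

    def classify(image_paths, kmeans, clf):
        return clf.predict(bow_histograms(sift_descriptors(image_paths), kmeans))

    # Example usage (paths and labels are placeholders, not real files):
    # kmeans, clf = train_bow_classifier(["rice.jpg", "salad.jpg"], ["rice", "salad"])
    # print(classify(["new_photo.jpg"], kmeans, clf))
    ```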

  9. A System for Video Surveillance and Monitoring CMU VSAM Final Report

    DTIC Science & Technology

    1999-11-30

    motion-based skeletonization, neural network, spatio-temporal salience patterns inside image chips, spurious motion rejection, model-based... network of sensors with respect to the model coordinate system, computation of 3D geolocation estimates, and graphical display of object hypotheses... algorithms have been developed. The first uses view-dependent visual properties to train a neural network classifier to recognize four classes: single...

  10. Cultural differences in visual object recognition in 3-year-old children

    PubMed Central

    Kuwabara, Megumi; Smith, Linda B.

    2016-01-01

    Recent research indicates that culture penetrates fundamental processes of perception and cognition (e.g. Nisbett & Miyamoto, 2005). Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (n=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects in which only 3 diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S., but not Japanese, children when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children’s recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. PMID:26985576

  11. Cultural differences in visual object recognition in 3-year-old children.

    PubMed

    Kuwabara, Megumi; Smith, Linda B

    2016-07-01

    Recent research indicates that culture penetrates fundamental processes of perception and cognition. Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (N=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects where only three diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S. children, but not Japanese children, when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children's recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Recognition Alters the Spatial Pattern of fMRI Activation in Early Retinotopic Cortex

    PubMed Central

    Vul, E.; Kanwisher, N.

    2010-01-01

    Early retinotopic cortex has traditionally been viewed as containing a veridical representation of the low-level properties of the image, not imbued by high-level interpretation and meaning. Yet several recent results indicate that neural representations in early retinotopic cortex reflect not just the sensory properties of the image, but also the perceived size and brightness of image regions. Here we used functional magnetic resonance imaging pattern analyses to ask whether the representation of an object in early retinotopic cortex changes when the object is recognized compared with when the same stimulus is presented but not recognized. Our data confirmed this hypothesis: the pattern of response in early retinotopic visual cortex to a two-tone “Mooney” image of an object was more similar to the response to the full grayscale photo version of the same image when observers knew what the two-tone image represented than when they did not. Further, in a second experiment, high-level interpretations actually overrode bottom-up stimulus information, such that the pattern of response in early retinotopic cortex to an identified two-tone image was more similar to the response to the photographic version of that stimulus than it was to the response to the identical two-tone image when it was not identified. Our findings are consistent with prior results indicating that perceived size and brightness affect representations in early retinotopic visual cortex and, further, show that even higher-level information—knowledge of object identity—also affects the representation of an object in early retinotopic cortex. PMID:20071627

  13. The Functional Significance of Aposematic Signals: Geographic Variation in the Responses of Widespread Lizard Predators to Colourful Invertebrate Prey

    PubMed Central

    Tseng, Hui-Yun; Lin, Chung-Ping; Hsu, Jung-Ya; Pike, David A.; Huang, Wen-San

    2014-01-01

    Conspicuous colouration can evolve as a primary defence mechanism that advertises unprofitability and discourages predatory attacks. Geographic overlap is a primary determinant of whether individual predators encounter, and thus learn to avoid, such aposematic prey. We experimentally tested whether the conspicuous colouration displayed by Old World pachyrhynchid weevils (Pachyrhynchus tobafolius and Kashotonus multipunctatus) deters predation by visual predators (Swinhoe’s tree lizard; Agamidae, Japalura swinhonis). During staged encounters, sympatric lizards attacked weevils without conspicuous patterns at higher rates than weevils with intact conspicuous patterns, whereas allopatric lizards attacked weevils with intact patterns at higher rates than sympatric lizards. Sympatric lizards also attacked masked weevils at lower rates, suggesting that other attributes of the weevils (size/shape/smell) also facilitate recognition. Allopatric lizards rapidly learned to avoid weevils after only a single encounter, and maintained aversive behaviours for more than three weeks. The imperfect ability of visual predators to recognize potential prey as unpalatable, both in the presence and absence of the aposematic signal, may help explain how diverse forms of mimicry exploit the predator’s visual system to deter predation. PMID:24614681

  14. Octopuses (Enteroctopus dofleini) recognize individual humans.

    PubMed

    Anderson, Roland C; Mather, Jennifer A; Monette, Mathieu Q; Zimsen, Stephanie R M

    2010-01-01

    This study exposed 8 Enteroctopus dofleini separately to 2 unfamiliar individual humans over a 2-week period under differing circumstances. One person consistently fed the octopuses and the other touched them with a bristly stick. Each human recorded octopus body patterns, behaviors, and respiration rates directly after each treatment. At the end of 2 weeks, a body pattern (a dark Eyebar) and 2 behaviors (reaching arms toward or away from the tester and funnel direction) were significantly different in response to the 2 humans. The respiration rate of the 4 larger octopuses changed significantly in response to the 2 treatments; however, there was no significant difference in the 4 smaller octopuses' respiration. Octopuses' ability to recognize humans enlarges our knowledge of the perceptual ability of this nonhuman animal, which depends heavily on learning in response to visual information. Any training paradigm should take such individual recognition into consideration as it could significantly alter the octopuses' responses.

  15. Toward unsupervised outbreak detection through visual perception of new patterns

    PubMed Central

    Lévy, Pierre P; Valleron, Alain-Jacques

    2009-01-01

    Background Statistical algorithms are routinely used to detect outbreaks of well-defined syndromes, such as influenza-like illness. These methods cannot be applied to the detection of emerging diseases for which no preexisting information is available. This paper presents a method aimed at facilitating the detection of outbreaks, when there is no a priori knowledge of the clinical presentation of cases. Methods The method uses a visual representation of the symptoms and diseases coded during a patient consultation according to the International Classification of Primary Care 2nd version (ICPC-2). The surveillance data are transformed into color-coded cells, ranging from white to red, reflecting the increasing frequency of observed signs. They are placed in a graphic reference frame mimicking body anatomy. Simple visual observation of color-change patterns over time, concerning a single code or a combination of codes, enables detection in the setting of interest. Results The method is demonstrated through retrospective analyses of two data sets: description of the patients referred to the hospital by their general practitioners (GPs) participating in the French Sentinel Network and description of patients directly consulting at a hospital emergency department (HED). Informative image color-change alert patterns emerged in both cases: the health consequences of the August 2003 heat wave were visualized with GPs' data (but passed unnoticed with conventional surveillance systems), and the flu epidemics, which are routinely detected by standard statistical techniques, were recognized visually with HED data. Conclusion Using human visual pattern-recognition capacities to detect the onset of unexpected health events implies a convenient image representation of epidemiological surveillance and well-trained "epidemiology watchers". Once these two conditions are met, one could imagine that the epidemiology watchers could signal epidemiological alerts, based on "image walls" presenting the local, regional and/or national surveillance patterns, with specialized field epidemiologists assigned to validate the signals detected. PMID:19515246
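
    The colour-coding step can be sketched in a few lines: per-code counts mapped onto a white-to-red scale. The anatomical reference frame of the original tool is not reproduced here; a simple code-by-week grid with synthetic counts stands in for it.

    ```python
    # Sketch: render surveillance counts as colour-coded cells ranging from white (rare)
    # to red (frequent). The anatomical layout of the original tool is replaced by a
    # simple grid of ICPC-2 codes (rows) by week (columns); data are synthetic.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    codes = ["A03 fever", "R05 cough", "D10 vomiting", "S06 rash"]
    weeks = 20
    counts = rng.poisson(3, size=(len(codes), weeks)).astype(float)
    counts[1, 12:16] += 15          # simulate an unusual cluster of cough reports

    fig, ax = plt.subplots(figsize=(8, 2.5))
    im = ax.imshow(counts, cmap="Reds", aspect="auto", vmin=0)
    ax.set_yticks(range(len(codes)))
    ax.set_yticklabels(codes)
    ax.set_xlabel("week")
    fig.colorbar(im, label="reports")
    plt.show()   # an outbreak shows up as a run of dark red cells in one or more rows
    ```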

  16. The effects of perceptual priming on 4-year-olds' haptic-to-visual cross-modal transfer.

    PubMed

    Kalagher, Hilary

    2013-01-01

    Four-year-old children often have difficulty visually recognizing objects that were previously experienced only haptically. This experiment attempts to improve their performance in these haptic-to-visual transfer tasks. Sixty-two 4-year-old children participated in priming trials in which they explored eight unfamiliar objects visually, haptically, or visually and haptically together. Subsequently, all children participated in the same haptic-to-visual cross-modal transfer task. In this task, children haptically explored the objects that were presented in the priming phase and then visually identified a match from among three test objects, each matching the object on only one dimension (shape, texture, or color). Children in all priming conditions predominantly made shape-based matches; however, the most shape-based matches were made in the Visual and Haptic condition. All kinds of priming provided the necessary memory traces upon which subsequent haptic exploration could build a strong enough representation to enable subsequent visual recognition. Haptic exploration patterns during the cross-modal transfer task are discussed and the detailed analyses provide a unique contribution to our understanding of the development of haptic exploratory procedures.

  17. Normal Visual Acuity and Electrophysiological Contrast Gain in Adults with High-Functioning Autism Spectrum Disorder.

    PubMed

    Tebartz van Elst, Ludger; Bach, Michael; Blessing, Julia; Riedel, Andreas; Bubl, Emanuel

    2015-01-01

    A common neurodevelopmental disorder, autism spectrum disorder (ASD), is defined by specific patterns in social perception, social competence, communication, highly circumscribed interests, and a strong subjective need for behavioral routines. Furthermore, distinctive features of visual perception, such as markedly reduced eye contact and a tendency to focus more on small, visual items than on holistic perception, have long been recognized as typical ASD characteristics. Recent debate in the scientific community discusses whether the physiology of low-level visual perception might explain such higher visual abnormalities. While reports of this enhanced, "eagle-like" visual acuity contained methodological errors and could not be substantiated, several authors have reported alterations in even earlier stages of visual processing, such as contrast perception and motion perception at the occipital cortex level. Therefore, in this project, we have investigated the electrophysiology of very early visual processing by analyzing the pattern electroretinogram-based contrast gain, the background noise amplitude, and the psychophysical visual acuities of participants with high-functioning ASD and controls with equal education. Based on earlier findings, we hypothesized that alterations in early vision would be present in ASD participants. This study included 33 individuals with ASD (11 female) and 33 control individuals (12 female). The groups were matched in terms of age, gender, and education level. We found no evidence of altered electrophysiological retinal contrast processing or psychophysical measured visual acuities. There appears to be no evidence for abnormalities in retinal visual processing in ASD patients, at least with respect to contrast detection.

  18. Dorsal hippocampus is necessary for visual categorization in rats.

    PubMed

    Kim, Jangjin; Castro, Leyre; Wasserman, Edward A; Freeman, John H

    2018-02-23

    The hippocampus may play a role in categorization because of the need to differentiate stimulus categories (pattern separation) and to recognize category membership of stimuli from partial information (pattern completion). We hypothesized that the hippocampus would be more crucial for categorization of low-density (few relevant features) stimuli, due to the higher demand on pattern separation and pattern completion, than for categorization of high-density (many relevant features) stimuli. Using a touchscreen apparatus, rats were trained to categorize multiple abstract stimuli into two different categories. Each stimulus was a pentagonal configuration of five visual features; some of the visual features were relevant for defining the category whereas others were irrelevant. Two groups of rats were trained with either a high (dense, n = 8) or low (sparse, n = 8) number of category-relevant features. Upon reaching criterion discrimination (≥75% correct, on 2 consecutive days), bilateral cannulas were implanted in the dorsal hippocampus. The rats were then given either vehicle or muscimol infusions into the hippocampus just prior to various testing sessions. They were tested with: the previously trained stimuli (trained), novel stimuli involving new irrelevant features (novel), stimuli involving relocated features (relocation), and a single relevant feature (singleton). In training, the dense group reached criterion faster than the sparse group, indicating that the sparse task was more difficult than the dense task. In testing, accuracy of both groups was equally high for trained and novel stimuli. However, both groups showed impaired accuracy in the relocation and singleton conditions, with a greater deficit in the sparse group. The testing data indicate that rats encode both the relevant features and the spatial locations of the features. Hippocampal inactivation impaired visual categorization regardless of the density of the category-relevant features for the trained, novel, relocation, and singleton stimuli. Hippocampus-mediated pattern completion and pattern separation mechanisms may be necessary for visual categorization involving overlapping irrelevant features. © 2018 Wiley Periodicals, Inc.

  19. Protein-Protein Interaction Network and Gene Ontology

    NASA Astrophysics Data System (ADS)

    Choi, Yunkyu; Kim, Seok; Yi, Gwan-Su; Park, Jinah

    The evolution of computer technologies makes it possible to access a large amount and variety of biological data via the internet, such as DNA sequences, proteomics data, and information discovered about them. It is expected that combining various kinds of data could help researchers find further knowledge about them. The role of a visualization system is to invoke human abilities to integrate information and to recognize certain patterns in the data. Thus, when various kinds of data are examined and analyzed manually, an effective visualization system is essential. One instance of such integrated visualization is the combination of protein-protein interaction (PPI) data and Gene Ontology (GO), which could help enhance the analysis of PPI networks. We introduce a simple but comprehensive visualization system that integrates GO and PPI data, visualizing the GO and PPI graphs side by side and supporting quick cross-reference functions between them. Furthermore, the proposed system provides several interactive visualization methods for efficiently analyzing the PPI network and the GO directed acyclic graph, such as context-based browsing and common-ancestor finding.

  20. Computation of pattern invariance in brain-like structures.

    PubMed

    Ullman, S; Soloviev, S

    1999-10-01

    A fundamental capacity of the perceptual systems and the brain in general is to deal with the novel and the unexpected. In vision, we can effortlessly recognize a familiar object under novel viewing conditions, or recognize a new object as a member of a familiar class, such as a house, a face, or a car. This ability to generalize and deal efficiently with novel stimuli has long been considered a challenging example of brain-like computation that proved extremely difficult to replicate in artificial systems. In this paper we present an approach to generalization and invariant recognition. We focus our discussion on the problem of invariance to position in the visual field, but also sketch how similar principles could apply to other domains. The approach is based on the use of a large repertoire of partial generalizations that are built upon past experience. In the case of shift invariance, visual patterns are described as the conjunction of multiple overlapping image fragments. The invariance to the more primitive fragments is built into the system by past experience. Shift invariance of complex shapes is obtained from the invariance of their constituent fragments. We study by simulations aspects of this shift invariance method and then consider its extensions to invariant perception and classification by brain-like structures.

  1. Dietary Assessment on a Mobile Phone Using Image Processing and Pattern Recognition Techniques: Algorithm Design and System Prototyping

    PubMed Central

    Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing

    2015-01-01

    Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work. PMID:26225994

  2. Optical-Correlator Neural Network Based On Neocognitron

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin; Stoner, William W.

    1994-01-01

    Multichannel optical correlator implements shift-invariant, high-discrimination pattern-recognizing neural network based on paradigm of neocognitron. Selected as basic building block of this neural network because invariance under shifts is inherent advantage of Fourier optics included in optical correlators in general. Neocognitron is conceptual electronic neural-network model for recognition of visual patterns. Multilayer processing achieved by iteratively feeding back output of feature correlator to input spatial light modulator and updating Fourier filters. Neural network trained by use of characteristic features extracted from target images. Multichannel implementation enables parallel processing of large number of selected features.
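
    The shift invariance that motivates the optical design can be illustrated digitally: cross-correlation computed through the FFT produces a peak at the pattern's location wherever it appears in the input. The numpy sketch below shows the principle, not the optical implementation.

    ```python
    # Sketch: shift-invariant pattern matching via FFT-based cross-correlation, the digital
    # analogue of the optical correlator principle described above (not the optical system).
    import numpy as np

    def correlate_fft(scene, template):
        """Cross-correlate template with scene; the peak location gives the template's shift."""
        f_scene = np.fft.fft2(scene)
        f_templ = np.fft.fft2(template, s=scene.shape)     # zero-pad template to scene size
        return np.fft.ifft2(f_scene * np.conj(f_templ)).real

    rng = np.random.default_rng(0)
    template = rng.random((8, 8))
    scene = np.zeros((64, 64))
    scene[30:38, 17:25] = template                         # embed the pattern at (30, 17)

    corr = correlate_fft(scene, template)
    print(np.unravel_index(np.argmax(corr), corr.shape))   # -> (30, 17): peak at the shift
    ```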

  3. A model for optimizing file access patterns using spatio-temporal parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boonthanome, Nouanesengsy; Patchett, John; Geveci, Berk

    2013-01-01

    For many years now, I/O read time has been recognized as the primary bottleneck for parallel visualization and analysis of large-scale data. In this paper, we introduce a model that can estimate the read time for a file stored in a parallel filesystem when given the file access pattern. Read times ultimately depend on how the file is stored and the access pattern used to read the file. The file access pattern will be dictated by the type of parallel decomposition used. We employ spatio-temporal parallelism, which combines both spatial and temporal parallelism, to provide greater flexibility to possible file access patterns. Using our model, we were able to configure the spatio-temporal parallelism to design optimized read access patterns that resulted in a speedup factor of approximately 400 over traditional file access patterns.
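
    An illustrative cost model (not the one from the paper): estimate read time from the number of non-contiguous chunks an access pattern issues plus the bytes transferred, which is the kind of estimate that lets candidate spatio-temporal decompositions be compared before any data are read.

    ```python
    # Sketch of a simple file read-time estimate driven by the access pattern: each
    # non-contiguous chunk pays a fixed seek/request latency, and all bytes pay a
    # bandwidth cost. This is an illustrative model, not the one from the cited paper.
    def estimate_read_time(chunks, latency_s=0.005, bandwidth_bps=500e6):
        """chunks: list of (offset_bytes, length_bytes) requests issued by one reader."""
        chunks = sorted(chunks)
        contiguous_runs = 1
        for (prev_off, prev_len), (off, _) in zip(chunks, chunks[1:]):
            if off != prev_off + prev_len:          # a gap forces a new request/seek
                contiguous_runs += 1
        total_bytes = sum(length for _, length in chunks)
        return contiguous_runs * latency_s + total_bytes / bandwidth_bps

    # Two ways to read the same 512 MiB of data:
    strided = [(i * 8 * 2**20, 2**20) for i in range(512)]       # 512 scattered 1 MiB reads
    contiguous = [(0, 512 * 2**20)]                               # one large sequential read
    print(estimate_read_time(strided), estimate_read_time(contiguous))
    ```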

  4. Toward a hybrid brain-computer interface based on repetitive visual stimuli with missing events.

    PubMed

    Wu, Yingying; Li, Man; Wang, Jing

    2016-07-26

    Steady-state visually evoked potentials (SSVEPs) can be elicited by repetitive stimuli and extracted in the frequency domain with satisfactory performance. However, the temporal information of such stimuli is often ignored. In this study, we utilized repetitive visual stimuli with missing events to present a novel hybrid BCI paradigm based on SSVEP and omitted stimulus potential (OSP). Four discs flickering from black to white with missing flickers served as visual stimulators to simultaneously elicit subjects' SSVEPs and OSPs. Key parameters in the new paradigm, including flicker frequency, optimal electrodes, missing flicker duration, and intervals of missing events, were qualitatively discussed with offline data. Two omitted flicker patterns, including a missing black or white disc, were proposed and compared. Averaging times were optimized with Information Transfer Rate (ITR) in online experiments, where SSVEPs and OSPs were identified using Canonical Correlation Analysis in the frequency domain and Support Vector Machine (SVM)-Bayes fusion in the time domain, respectively. The online accuracy and ITR (mean ± standard deviation) over nine healthy subjects were 79.29 ± 18.14 % and 19.45 ± 11.99 bits/min with the missing black disc pattern, and 86.82 ± 12.91 % and 24.06 ± 10.95 bits/min with the missing white disc pattern, respectively. The proposed BCI paradigm, for the first time, demonstrated that SSVEPs and OSPs can be simultaneously elicited by a single visual stimulus pattern and recognized in real time with satisfactory performance. Besides frequency features such as the SSVEP elicited by repetitive stimuli, we found a new feature (OSP) in the time domain to design a novel hybrid BCI paradigm by adding missing events to repetitive stimuli.
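
    The frequency-domain SSVEP step can be sketched with canonical correlation between the multi-channel EEG and sine/cosine references at each candidate flicker frequency; the OSP branch and the SVM-Bayes fusion are not reproduced, and the sampling rate, channel count, and frequencies below are illustrative.

    ```python
    # Sketch: SSVEP target identification by canonical correlation analysis (CCA) between
    # multi-channel EEG and sinusoidal references at each candidate flicker frequency.
    # The OSP/SVM-Bayes time-domain branch of the hybrid paradigm is not reproduced here.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    def reference_signals(freq, n_samples, fs, n_harmonics=2):
        t = np.arange(n_samples) / fs
        refs = []
        for h in range(1, n_harmonics + 1):
            refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
        return np.column_stack(refs)

    def detect_frequency(eeg, candidate_freqs, fs):
        """eeg: (n_samples, n_channels). Returns the frequency with the largest canonical corr."""
        scores = []
        for f in candidate_freqs:
            refs = reference_signals(f, eeg.shape[0], fs)
            u, v = CCA(n_components=1).fit_transform(eeg, refs)
            scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
        return candidate_freqs[int(np.argmax(scores))]

    # Synthetic check: 8 Hz flicker response buried in noise across 4 channels, 2 s at 250 Hz.
    fs, n = 250, 500
    rng = np.random.default_rng(0)
    t = np.arange(n) / fs
    eeg = np.sin(2 * np.pi * 8 * t)[:, None] + rng.normal(0, 1, (n, 4))
    print(detect_frequency(eeg, [6.0, 8.0, 10.0, 12.0], fs))   # expected: 8.0
    ```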

  5. Seeing without knowing: task relevance dissociates between visual awareness and recognition.

    PubMed

    Eitam, Baruch; Shoval, Roy; Yeshurun, Yaffa

    2015-03-01

    We demonstrate that task relevance dissociates between visual awareness and knowledge activation to create a state of seeing without knowing: visual awareness of familiar stimuli without recognizing them. We rely on the fact that in order to experience a Kanizsa illusion, participants must be aware of its inducers. While people can indicate the orientation of the illusory rectangle with great ease (signifying that they have consciously experienced the illusion's inducers), almost 30% of them could not report the inducers' color. Thus, people can see, in the sense of phenomenally experiencing, but not know, in the sense of recognizing what the object is or activating appropriate knowledge about it. Experiment 2 tests whether relevance-based selection operates within objects and shows that, contrary to the pattern of results found with features of different objects in our previous studies and replicated in Experiment 1, selection does not occur when both relevant and irrelevant features belong to the same object. We discuss these findings in relation to the existing theories of consciousness and to attention and inattentional blindness, and the role of cognitive load, object-based attention, and the use of self-reports as measures of awareness. © 2015 New York Academy of Sciences.

  6. Patterns on serpentine shapes elicit visual attention in marmosets (Callithrix jacchus).

    PubMed

    Wombolt, Jessica R; Caine, Nancy G

    2016-09-01

    Given the prevalence of threatening snakes in the evolutionary history and modern-day environments of human and nonhuman primates, sensory and perceptual abilities that allow for quick detection of, and appropriate response to, snakes are likely to have evolved. Many studies have demonstrated that primates recognize snakes faster than other stimuli, and it is suggested that the unique serpentine shape is responsible for this quick detection. However, there are many serpentine shapes in the environment (e.g., vines) that are not threatening; therefore, other cues must be used to distinguish threatening from benign serpentine objects. In two experiments, we systematically evaluated how common marmosets (Callithrix jacchus) visually attend to specific snake-like features. In the first experiment, we examined if skin pattern is a cue that elicits increased visual inspection of serpentine shapes by measuring the amount of time the marmosets looked into a blind before, during, and after presentation of clay models with and without patterns. The marmosets spent the most time looking at the objects, both serpentine and triangle, that were etched with scales, suggesting that something may be uniquely salient about scales in evoking attention. In contrast, they showed relatively little interest in the unpatterned serpentine and control (a triangle) stimuli. In experiment 2, we replicated and extended the results of experiment 1 by adding additional stimulus conditions. We found that patterns on a serpentine shape generated more inspection than those same patterns on a triangle shape. We were unable to confirm that a scaled pattern is unique in its ability to elicit visual interest; the scaled models elicited similar looking times to line and star patterns. Our data provide a foundation for future research to examine how snakes are detected and identified by primates. Am. J. Primatol. 78:928-936, 2016. © 2016 Wiley Periodicals, Inc.

  7. Effects of Alzheimer’s Disease on Visual Target Detection: A “Peripheral Bias”

    PubMed Central

    Vallejo, Vanessa; Cazzoli, Dario; Rampa, Luca; Zito, Giuseppe A.; Feuerstein, Flurin; Gruber, Nicole; Müri, René M.; Mosimann, Urs P.; Nef, Tobias

    2016-01-01

    Visual exploration is an omnipresent activity in everyday life, and might represent an important determinant of visual attention deficits in patients with Alzheimer’s Disease (AD). The present study aimed at investigating visual search performance in AD patients, in particular target detection in the far periphery, in daily living scenes. Eighteen AD patients and 20 healthy controls participated in the study. They were asked to freely explore a hemispherical screen, covering ±90°, and to respond to targets presented at 10°, 30°, and 50° eccentricity, while their eye movements were recorded. Compared to healthy controls, AD patients recognized fewer targets appearing in the center. No difference was found in target detection in the periphery. This pattern was confirmed by the fixation distribution analysis. These results show a neglect of the central part of the visual field in AD patients and provide new insights by means of a search task involving a larger field of view. PMID:27582704

  8. Effects of Alzheimer's Disease on Visual Target Detection: A "Peripheral Bias".

    PubMed

    Vallejo, Vanessa; Cazzoli, Dario; Rampa, Luca; Zito, Giuseppe A; Feuerstein, Flurin; Gruber, Nicole; Müri, René M; Mosimann, Urs P; Nef, Tobias

    2016-01-01

    Visual exploration is an omnipresent activity in everyday life, and might represent an important determinant of visual attention deficits in patients with Alzheimer's Disease (AD). The present study aimed at investigating visual search performance in AD patients, in particular target detection in the far periphery, in daily living scenes. Eighteen AD patients and 20 healthy controls participated in the study. They were asked to freely explore a hemispherical screen, covering ±90°, and to respond to targets presented at 10°, 30°, and 50° eccentricity, while their eye movements were recorded. Compared to healthy controls, AD patients recognized fewer targets appearing in the center. No difference was found in target detection in the periphery. This pattern was confirmed by the fixation distribution analysis. These results show a neglect of the central part of the visual field in AD patients and provide new insights by means of a search task involving a larger field of view.

  9. Global ensemble texture representations are critical to rapid scene perception.

    PubMed

    Brady, Timothy F; Shafer-Skelton, Anna; Alvarez, George A

    2017-06-01

    Traditionally, recognizing the objects within a scene has been treated as a prerequisite to recognizing the scene itself. However, research now suggests that the ability to rapidly recognize visual scenes could be supported by global properties of the scene itself rather than the objects within the scene. Here, we argue for a particular instantiation of this view: That scenes are recognized by treating them as a global texture and processing the pattern of orientations and spatial frequencies across different areas of the scene without recognizing any objects. To test this model, we asked whether there is a link between how proficient individuals are at rapid scene perception and how proficiently they represent simple spatial patterns of orientation information (global ensemble texture). We find a significant and selective correlation between these tasks, suggesting a link between scene perception and spatial ensemble tasks but not nonspatial summary statistics. In a second and third experiment, we additionally show that global ensemble texture information is not only associated with scene recognition, but that preserving only global ensemble texture information from scenes is sufficient to support rapid scene perception; however, preserving the same information is not sufficient for object recognition. Thus, global ensemble texture alone is sufficient to allow activation of scene representations but not object representations. Together, these results provide evidence for a view of scene recognition based on global ensemble texture rather than a view based purely on objects or on nonspatially localized global properties. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Patterns of sleep behaviour.

    NASA Technical Reports Server (NTRS)

    Webb, W. B.

    1972-01-01

    Discussion of the electroencephalogram as the critical measurement procedure for sleep research, and survey of major findings that have emerged in the last decade on the presence of sleep within the twenty-four-hour cycle. Specifically, intrasleep processes, frequency of stage changes, sequence of stage events, sleep stage amounts, temporal patterns of sleep, and stability of intrasleep pattern in both man and lower animals are reviewed, along with some circadian aspects of sleep, temporal factors, and number of sleep episodes. It is felt that it is particularly critical to take the presence of sleep into account whenever performance is considered. When it is recognized that responsive performance is extremely limited during sleep, it is easy to visualize the extent to which performance is controlled by sleep itself.

  11. Visualization of melanoma tumor with lectin-conjugated rare-earth doped fluoride nanocrystals

    PubMed Central

    Dumych, Tetiana; Lutsyk, Maxym; Banski, Mateusz; Yashchenko, Antonina; Sojka, Bartlomiej; Horbay, Rostyslav; Lutsyk, Alexander; Stoika, Rostyslav; Misiewicz, Jan; Podhorodecki, Artur; Bilyy, Rostyslav

    2014-01-01

    Aim To develop specific fluorescent markers for melanoma tumor visualization, which would provide high selectivity and a reversible binding pattern, by the use of carbohydrate-recognizing proteins, lectins, combined with the physical ability for imaging deep in living tissues by utilizing the red and near-infrared fluorescent properties of specific rare-earth doped nanocrystals (NC). Methods B10F16 melanoma cells were inoculated into C57BL/6 mice to induce an experimental melanoma tumor. Tumors were removed and analyzed by lectin histochemistry using LABA, PFA, PNA, HPA, SNA, GNA, and NPL lectins and stained with hematoxylin and eosin. NPL lectin was conjugated to fluorescent NaGdF4:Eu3+-COOH nanoparticles (5 nm) via a zero-length cross-linking reaction, and the conjugates were purified from unbound substances and then used for further visualization of histological samples. Fluorescent microscopy was used to visualize NPL-NaGdF4:Eu3+ with fluorescent emission in the 600-720 nm range. Results NPL lectin selectively recognized regions of undifferentiated melanoblasts surrounding neoangiogenic foci inside the melanoma tumor, PNA lectin recognized differentiated melanoblasts, and LCA and WGA were bound to tumor stroma regions. NPL-NaGdF4:Eu3+-conjugated NC efficiently detected newly formed regions of the melanoma tumor, as confirmed by fluorescent microscopy in visible and near-infrared mode. These conjugates possessed high photostability, were compatible with convenient xylene-based mounting systems, and preserved an intense fluorescent signal during sample storage for at least 6 months. Conclusion NPL lectin-NaGdF4:Eu3+-conjugated NC permitted distinct identification of the contours of melanoma tissue on histological sections using red excitation at 590-610 nm and near-infrared emission at 700-720 nm. These data are of potential practical significance for the development of glycan-conjugated nanoparticles to be used for in vivo visualization of melanoma tumors. PMID:24891277

  12. SU-E-J-92: CERR: New Tools to Analyze Image Registration Precision.

    PubMed

    Apte, A; Wang, Y; Oh, J; Saleh, Z; Deasy, J

    2012-06-01

    To present new tools in CERR (the Computational Environment for Radiotherapy Research) to analyze image registration, along with other software updates/additions. CERR continues to be a key environment (cited more than 129 times to date) for numerous RT-research studies involving outcomes modeling, prototyping algorithms for segmentation and registration, experiments with phantom dosimetry, IMRT research, etc. Image registration is one of the key technologies required in many research studies. CERR has been interfaced with popular image registration frameworks like Plastimatch and ITK. Once the images have been autoregistered, CERR provides tools to analyze the accuracy of registration using the following innovative approaches: (1) Distance Discordance Histograms (DDH), described in detail in a separate paper, and (2) 'MirrorScope', explained as follows: for any view plane, the 2D image is broken up into a 2D grid of medium-sized squares. Each square contains a right half, which is the reference image, and a left half, which is the mirror-flipped version of the overlay image. The user can increase or decrease the size of this grid to control the resolution of the analysis. Other updates to CERR include tools to extract image and dosimetric features programmatically and store them in a central database, and tools to interface with statistical analysis software such as SPSS and the MATLAB Statistics Toolbox. MirrorScope was compared on various examples, including 'perfect' registration examples and 'artificially translated' registrations. For 'perfect' registrations, the patterns obtained within each circle are symmetric and are easily recognized visually as aligned. For registrations that are off, the patterns obtained in the circles located in the regions of imperfection are asymmetric and the misalignment is easily recognized. The new updates to CERR further increase its utility for RT research. MirrorScope is a visually intuitive method of monitoring the accuracy of image registration that improves on the visual confusion of standard methods. © 2012 American Association of Physicists in Medicine.
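
    One plausible reading of the MirrorScope description, sketched in numpy (not the CERR implementation): within every grid cell the right half shows the reference image and the left half shows the overlay mirror-flipped about the cell's vertical midline, so a well-registered pair produces locally symmetric patterns.

    ```python
    # Sketch of a MirrorScope-style composite, under one plausible reading of the
    # description above: in each grid cell the right half comes from the reference image
    # and the left half is the overlay mirrored about the cell's vertical midline, so good
    # registration yields locally symmetric patterns. This is not the CERR code.
    import numpy as np

    def mirrorscope(reference, overlay, grid=8):
        assert reference.shape == overlay.shape
        out = np.zeros_like(reference)
        h, w = reference.shape
        ys = np.linspace(0, h, grid + 1, dtype=int)
        xs = np.linspace(0, w, grid + 1, dtype=int)
        for y0, y1 in zip(ys, ys[1:]):
            for x0, x1 in zip(xs, xs[1:]):
                xm = (x0 + x1) // 2
                out[y0:y1, xm:x1] = reference[y0:y1, xm:x1]            # right half: reference
                left_w = xm - x0
                out[y0:y1, x0:xm] = np.fliplr(overlay[y0:y1, xm:xm + left_w])  # mirrored overlay
        return out

    # With identical images the composite is symmetric inside every cell; a shifted
    # overlay breaks that symmetry and the misregistration is easy to spot by eye.
    rng = np.random.default_rng(0)
    ref = rng.random((128, 128))
    composite = mirrorscope(ref, np.roll(ref, 5, axis=1))
    ```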

  13. Iconic-memory processing of unfamiliar stimuli by retarded and nonretarded individuals.

    PubMed

    Hornstein, H A; Mosley, J L

    1979-07-01

    The iconic-memory processing of unfamiliar stimuli was undertaken employing a visually cued partial-report procedure and a visual masking procedure. Subjects viewed stimulus arrays consisting of six Chinese characters arranged in a circular pattern for 100 msec. At variable stimulus-onset asynchronies, a teardrop indicator or an annulus was presented for 100 msec. Immediately upon cue offset, the subject was required to recognize the cued stimulus from a card containing a single character. Retarded subjects' performance was comparable to that of MA- and CA-matched subjects. We suggested that earlier reported iconic-memory differences between retarded and nonretarded individuals may be attributable to processes other than iconic memory.

  14. Repetition Is the Feature Behind the Attentional Bias for Recognizing Threatening Patterns.

    PubMed

    Shabbir, Maryam; Zon, Adelynn M Y; Thuppil, Vivek

    2018-01-01

    Animals attend to what is relevant in order to behave in an effective manner and succeed in their environments. In several nonhuman species, there is an evolved bias for attending to patterns indicative of threats in the natural environment such as dangerous animals. Because skins of many dangerous animals are typically repetitive, we propose that repetition is the key feature enabling recognition of evolutionarily important threats. The current study consists of two experiments where we measured participants' reactions to pictures of male and female models wearing clothing of various repeating (leopard skin, snakeskin, and floral print) and nonrepeating (camouflage, shiny, and plain) patterns. In Experiment 1, when models wearing patterns were presented side by side with total fixation duration as the measure, the repeating floral pattern was the most provocative, with total fixation duration significantly longer than all other patterns. Leopard and snakeskin patterns had total fixation durations that were significantly longer than the plain pattern. In Experiment 2, we employed a visual-search task where participants were required to find models wearing the various patterns in a setting of a crowded airport terminal. Participants detected leopard skin pattern and repetitive floral pattern significantly faster than two of the nonpatterned clothing styles. Our experimental findings support the hypothesis that repetition of specific visual features might facilitate target detection, especially those characterizing evolutionary important threats. Our findings that intricate, but nonthreatening repeating patterns can have similar attention-grabbing properties to animal skin patterns have important implications for the fashion industry and wildlife trade.

  15. When May a Child Who Is Visually Impaired Recognize a Face?

    ERIC Educational Resources Information Center

    Markham, R.; Wyver, S.

    1996-01-01

    The ability of 16 school-age children with visual impairments and their sighted peers to recognize faces was compared. Although no intergroup differences were found in ability to identify entire faces, the visually impaired children were at a disadvantage when part of the face, especially the eyes, was not visible. Degree of visual acuity also…

  16. Toward a Computational Neuropsychology of High-Level Vision.

    DTIC Science & Technology

    1984-08-20

    known as visual agnosia (also called "mindblindness"); this patient failed to recognize her nurses, got lost frequently when travelling familiar routes... visual agnosia are not blind: these patients can compare two shapes reliably when both are visible, but they cannot... visually recognize what an object is (although many can recognize objects by touch). This sort of agnosia has been well-documented in the literature (see...

  17. Radical “Visual Capture” Observed in a Patient with Severe Visual Agnosia

    PubMed Central

    Takaiwa, Akiko; Yoshimura, Hirokazu; Abe, Hirofumi; Terai, Satoshi

    2003-01-01

    We report the case of a 79-year-old female with visual agnosia due to brain infarction in the left posterior cerebral artery. She could recognize objects used in daily life rather well by touch (the number of objects correctly identified was 16 out of 20 presented objects), but she could not recognize them as well by vision (6 out of 20). In this case, it was expected that she would recognize them well when permitted to use touch and vision simultaneously. Our patient, however, performed poorly, producing 5 correct answers out of 20 in the Vision-and-Touch condition. It would be natural to think that visual capture functions when vision and touch provide contradictory information on concrete positions and shapes. However, in the present case, it functioned in spite of the visual deficit in recognizing objects. This should be called radical visual capture. By presenting detailed descriptions of her symptoms and neuropsychological and neuroradiological data, we clarify the characteristics of this type of capture. PMID:12719638

  18. The use of decision tree induction and artificial neural networks for recognizing the geochemical distribution patterns of LREE in the Choghart deposit, Central Iran

    NASA Astrophysics Data System (ADS)

    Zaremotlagh, S.; Hezarkhani, A.

    2017-04-01

    Some evidence of rare earth element (REE) concentration is found in iron oxide-apatite (IOA) deposits which are located in the Central Iranian microcontinent. There are many unsolved problems about the origin and metallogenesis of IOA deposits in this district. Although it is considered that felsic magmatism and mineralization were simultaneous in the district, the interaction of multi-stage hydrothermal-magmatic processes within the Early Cambrian volcano-sedimentary sequence probably caused some epigenetic mineralizations. Secondary geological processes (e.g., multi-stage mineralization, alteration, and weathering) have affected variations of major elements and the possible redistribution of REE in IOA deposits. Hence, the geochemical behaviors and distribution patterns of REE are expected to be complicated in different zones of these deposits. The aim of this paper is to recognize LREE distribution patterns based on whole-rock chemical compositions and to automatically discover their geochemical rules. For this purpose, pattern recognition techniques including decision trees and neural networks were applied to a high-dimensional geochemical dataset from the Choghart IOA deposit. Because some data features were irrelevant or redundant for recognizing the distribution patterns of each LREE, a greedy attribute subset selection technique was employed to select the best subset of predictors used in the classification tasks. The decision trees (CART algorithm) were pruned optimally to categorize independent test data more accurately than unpruned ones. The most effective classification rules were extracted from the pruned tree to describe the meaningful relationships between the predictors and different concentrations of LREE. A feed-forward artificial neural network was also applied to reliably predict the influence of various rock compositions on the spatial distribution patterns of LREE, with better performance than the decision tree induction. The findings of this study could be effectively used to visualize the LREE distribution patterns as geochemical maps.
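
    A sketch of the same workflow using sklearn stand-ins: greedy forward attribute selection, a CART tree, and cost-complexity pruning with the penalty chosen on a held-out split. Synthetic features replace the Choghart whole-rock geochemistry.

    ```python
    # Sketch of the workflow described above with sklearn stand-ins: greedy (forward)
    # attribute subset selection, a CART decision tree, and cost-complexity pruning chosen
    # by held-out accuracy. Synthetic data replace the Choghart whole-rock geochemistry.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                               n_redundant=4, n_classes=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # 1) Greedy forward selection of a small attribute subset.
    selector = SequentialFeatureSelector(DecisionTreeClassifier(random_state=0),
                                         n_features_to_select=6, direction="forward")
    selector.fit(X_tr, y_tr)
    X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

    # 2) Grow a CART tree, then prune via cost-complexity pruning (alpha picked on the split).
    path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr_sel, y_tr)
    best = max(path.ccp_alphas,
               key=lambda a: DecisionTreeClassifier(random_state=0, ccp_alpha=a)
                             .fit(X_tr_sel, y_tr).score(X_te_sel, y_te))
    pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=best).fit(X_tr_sel, y_tr)
    print("selected features:", np.flatnonzero(selector.get_support()))
    print("pruned-tree accuracy:", pruned.score(X_te_sel, y_te))
    ```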

  19. Effect of synapse dilution on the memory retrieval in structured attractor neural networks

    NASA Astrophysics Data System (ADS)

    Brunel, N.

    1993-08-01

    We investigate a simple model of a structured attractor neural network (ANN). In this network a module codes for the category of the stored information, while another group of neurons codes for the remaining information. The probability distribution of stabilities of the patterns and the prototypes of the categories are calculated for two different synaptic structures. The stability of the prototypes is shown to increase when the fraction of neurons coding for the category goes down. Then the effect of synapse destruction on retrieval is studied in two opposite situations: first analytically in sparsely connected networks, then numerically in completely connected ones. In both cases the behaviour of the structured network and that of the usual homogeneous networks are compared. As lesions increase, two transitions appear in the behaviour of the structured network when one of the patterns is presented to the network. After the first transition the network recognizes the category of the pattern but not the individual pattern. After the second transition the network recognizes nothing. These effects are similar to syndromes caused by lesions in the central visual system, namely prosopagnosia and agnosia. In both types of networks (structured or homogeneous) the stability of the prototype is greater than the stability of individual patterns; however, the first transition, for completely connected networks, occurs only when the network is structured.
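
    A minimal numpy sketch of the kind of experiment described: Hebbian storage of category-correlated patterns in a Hopfield-type network, random synapse dilution, and measurement of the retrieval overlap with an individual pattern versus its category prototype. The paper's specific structured architecture and analytical results are not reproduced.

    ```python
    # Minimal sketch: Hebbian storage in a Hopfield-type attractor network, random dilution
    # (destruction) of synapses, and measurement of the retrieval overlap with an individual
    # pattern versus its category prototype. Not the paper's structured architecture.
    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 400, 10
    prototype = rng.choice([-1, 1], size=N)
    flip = rng.random((P, N)) < 0.3                  # individual patterns: noisy prototype copies
    patterns = np.where(flip, -prototype, prototype)

    W = (patterns.T @ patterns) / N                  # Hebbian couplings
    np.fill_diagonal(W, 0)

    def retrieve(W, cue, steps=20):
        s = cue.copy()
        for _ in range(steps):                       # synchronous updates, enough to settle here
            s = np.sign(W @ s + 1e-12)
        return s

    for dilution in (0.0, 0.5, 0.8, 0.95):
        mask = rng.random((N, N)) >= dilution        # surviving synapses after lesioning
        cue = patterns[0] * np.where(rng.random(N) < 0.1, -1, 1)   # slightly corrupted cue
        s = retrieve(W * mask, cue)
        m_pattern = abs(s @ patterns[0]) / N
        m_proto = abs(s @ prototype) / N
        print(f"dilution {dilution:.2f}: overlap with pattern {m_pattern:.2f}, "
              f"with prototype {m_proto:.2f}")
    ```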

  20. Model-based analysis of pattern motion processing in mouse primary visual cortex

    PubMed Central

    Muir, Dylan R.; Roth, Morgane M.; Helmchen, Fritjof; Kampa, Björn M.

    2015-01-01

    Neurons in sensory areas of neocortex exhibit responses tuned to specific features of the environment. In visual cortex, information about features such as edges or textures with particular orientations must be integrated to recognize a visual scene or object. Connectivity studies in rodent cortex have revealed that neurons make specific connections within sub-networks sharing common input tuning. In principle, this sub-network architecture enables local cortical circuits to integrate sensory information. However, whether feature integration indeed occurs locally in rodent primary sensory areas has not been examined directly. We studied local integration of sensory features in primary visual cortex (V1) of the mouse by presenting drifting grating and plaid stimuli, while recording the activity of neuronal populations with two-photon calcium imaging. Using a Bayesian model-based analysis framework, we classified single-cell responses as being selective for either individual grating components or for moving plaid patterns. Rather than relying on trial-averaged responses, our model-based framework takes into account single-trial responses and can easily be extended to consider any number of arbitrary predictive models. Our analysis method was able to successfully classify significantly more responses than traditional partial correlation (PC) analysis, and provides a rigorous statistical framework to rank any number of models and reject poorly performing models. We also found a large proportion of cells that respond strongly to only one stimulus class. In addition, a quarter of selectively responding neurons had more complex responses that could not be explained by any simple integration model. Our results show that a broad range of pattern integration processes already take place at the level of V1. This diversity of integration is consistent with processing of visual inputs by local sub-networks within V1 that are tuned to combinations of sensory features. PMID:26300738
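
    For reference, the traditional partial correlation (PC) analysis that the Bayesian framework is compared against can be written in a few lines: the cell's plaid tuning is correlated with a pattern prediction and a component prediction, and partial correlations factor out the correlation between the two predictions. The tuning curves below are synthetic, and the Fisher-z classification step is only noted in a comment.

    ```python
    # Sketch of the traditional partial-correlation (PC) analysis mentioned above: correlate
    # a cell's plaid tuning with a "pattern" prediction (its grating tuning) and a "component"
    # prediction (sum of grating responses at the two component directions), then compute
    # partial correlations. Synthetic data; not the paper's Bayesian model-based framework.
    import numpy as np

    def partial_corrs(plaid, pattern_pred, component_pred):
        rp = np.corrcoef(plaid, pattern_pred)[0, 1]
        rc = np.corrcoef(plaid, component_pred)[0, 1]
        rpc = np.corrcoef(pattern_pred, component_pred)[0, 1]
        Rp = (rp - rc * rpc) / np.sqrt((1 - rc**2) * (1 - rpc**2))
        Rc = (rc - rp * rpc) / np.sqrt((1 - rp**2) * (1 - rpc**2))
        return Rp, Rc   # cells are then classified by comparing Fisher-z values of Rp and Rc

    # Synthetic example: a direction-tuned cell probed with gratings and 120-degree plaids.
    directions = np.arange(0, 360, 30)
    grating = np.exp(np.cos(np.deg2rad(directions - 90)) * 2)       # grating tuning curve
    pattern_pred = grating                                           # pattern prediction
    component_pred = np.roll(grating, -2) + np.roll(grating, 2)      # components at +/-60 deg
    plaid = component_pred + np.random.default_rng(0).normal(0, 1.0, len(directions))
    print(partial_corrs(plaid, pattern_pred, component_pred))        # Rc should exceed Rp here
    ```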

  1. Qualitative GIS and the Visualization of Narrative Activity Space Data

    PubMed Central

    Mennis, Jeremy; Mason, Michael J.; Cao, Yinghui

    2012-01-01

    Qualitative activity space data, i.e. qualitative data associated with the routine locations and activities of individuals, are recognized as increasingly useful by researchers in the social and health sciences for investigating the influence of environment on human behavior. However, there has been little research on techniques for exploring qualitative activity space data. This research illustrates the theoretical principles of combining qualitative and quantitative data and methodologies within the context of GIS, using visualization as the means of inquiry. Through the use of a prototype implementation of a visualization system for qualitative activity space data, and its application in a case study of urban youth, we show how these theoretical methodological principles are realized in applied research. The visualization system uses a variety of visual variables to simultaneously depict multiple qualitative and quantitative attributes of individuals’ activity spaces. The visualization is applied to explore the activity spaces of a sample of urban youth participating in a study on the geographic and social contexts of adolescent substance use. Examples demonstrate how the visualization may be used to explore individual activity spaces to generate hypotheses, investigate statistical outliers, and explore activity space patterns among subject subgroups. PMID:26190932

  2. Qualitative GIS and the Visualization of Narrative Activity Space Data.

    PubMed

    Mennis, Jeremy; Mason, Michael J; Cao, Yinghui

    Qualitative activity space data, i.e. qualitative data associated with the routine locations and activities of individuals, are recognized as increasingly useful by researchers in the social and health sciences for investigating the influence of environment on human behavior. However, there has been little research on techniques for exploring qualitative activity space data. This research illustrates the theoretical principles of combining qualitative and quantitative data and methodologies within the context of GIS, using visualization as the means of inquiry. Through the use of a prototype implementation of a visualization system for qualitative activity space data, and its application in a case study of urban youth, we show how these theoretical methodological principles are realized in applied research. The visualization system uses a variety of visual variables to simultaneously depict multiple qualitative and quantitative attributes of individuals' activity spaces. The visualization is applied to explore the activity spaces of a sample of urban youth participating in a study on the geographic and social contexts of adolescent substance use. Examples demonstrate how the visualization may be used to explore individual activity spaces to generate hypotheses, investigate statistical outliers, and explore activity space patterns among subject subgroups.

  3. USSR and Eastern Europe Scientific Abstracts, Cybernetics, Computers, and Automation Technology, Number 27

    DTIC Science & Technology

    1977-05-10

    apply this method of forecasting in the solution of all major scientific-technical problems of the national economy. Citing the slow...the future, however, computers will "mature" and learn to recognize patterns in what amounts to a much more complex language—the language of visual...images. Photoelectronic tracking devices or "eyes" will allow the computer to take in information in a much more complex form and to perform opera

  4. Artificial vision by multi-layered neural networks: neocognitron and its advances.

    PubMed

    Fukushima, Kunihiko

    2013-01-01

    The neocognitron is a neural network model proposed by Fukushima (1980). Its architecture was suggested by neurophysiological findings on the visual systems of mammals. It is a hierarchical multi-layered network. It acquires the ability to robustly recognize visual patterns through learning. Although the neocognitron has a long history, modifications of the network to improve its performance are still going on. For example, a recent neocognitron uses a new learning rule, named add-if-silent, which makes the learning process much simpler and more stable. Nevertheless, a high recognition rate can be kept with a smaller scale of the network. Referring to the history of the neocognitron, this paper discusses recent advances in the neocognitron. We also show that various new functions can be realized by, for example, introducing top-down connections to the neocognitron: mechanism of selective attention, recognition and completion of partly occluded patterns, restoring occluded contours, and so on. Copyright © 2012 Elsevier Ltd. All rights reserved.
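
    The add-if-silent rule mentioned above can be paraphrased as: generate a new feature-detecting cell only when a training stimulus leaves every existing cell in its competition area silent, and set the new cell's weights from that stimulus. The following sketch is a deliberately stripped-down, single-layer rendering of that idea, not Fukushima's full S-cell/C-cell hierarchy; the cosine-similarity activation and the 0.7 threshold are illustrative assumptions.

      import numpy as np

      class AddIfSilentLayer:
          """Toy competitive layer trained with an add-if-silent style rule.

          Each 'cell' stores a normalized weight vector and responds with the cosine
          similarity to the input. This is a single flat layer, not the neocognitron's
          hierarchical S-cell/C-cell architecture.
          """

          def __init__(self, threshold=0.7):
              self.threshold = threshold   # activation needed to count as "not silent"
              self.weights = []            # one normalized weight vector per cell

          def responses(self, x):
              x = np.asarray(x, dtype=float)
              x = x / (np.linalg.norm(x) + 1e-12)
              return np.array([w @ x for w in self.weights])

          def train_step(self, x):
              """Add a new cell only if every existing cell stays silent for x."""
              if not self.weights or self.responses(x).max() < self.threshold:
                  x = np.asarray(x, dtype=float)
                  self.weights.append(x / (np.linalg.norm(x) + 1e-12))

          def best_match(self, x):
              """Index of the most strongly responding cell, or None if the layer is empty."""
              return int(np.argmax(self.responses(x))) if self.weights else None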

  5. Two Trees: Migrating Fault Trees to Decision Trees for Real Time Fault Detection on International Space Station

    NASA Technical Reports Server (NTRS)

    Lee, Charles; Alena, Richard L.; Robinson, Peter

    2004-01-01

    We present a method for migrating ISS fault trees to decision trees, starting from an ISS fault-tree example. With decision trees, the root cause of a fault is easier to visualize, and the trees can be manipulated programmatically with available decision-tree software. The resulting visualizations are straightforward and easy to understand for diagnostics. For real-time fault diagnosis on ISS, the status of a system can be determined by mining the telemetry signals through the tree and observing where traversal stops. A further advantage of decision trees is that they can learn fault patterns and predict future faults from historical data. Learning is not restricted to static data sets: by accumulating real-time data, the trees can acquire and store fault patterns and recognize them when they recur.
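
    To make the diagnostic procedure concrete, the sketch below walks a small decision tree with a dictionary of live signal values and stops at the first leaf, which is the "see where traversal stops" idea described above. The node layout and the signal names are hypothetical and are not taken from ISS telemetry or the paper.

      # Hypothetical node layout: each internal node tests one telemetry signal against
      # a threshold; each leaf names a diagnosis. The signal names below are invented
      # for illustration and do not come from ISS data.
      tree = {
          "signal": "pump_pressure", "threshold": 30.0,
          "low":  {"diagnosis": "pump failure"},
          "high": {
              "signal": "coolant_flow", "threshold": 5.0,
              "low":  {"diagnosis": "blocked line"},
              "high": {"diagnosis": "nominal"},
          },
      }

      def diagnose(node, telemetry):
          """Walk the tree with a dict of live signal values and stop at the first leaf."""
          while "diagnosis" not in node:
              branch = "low" if telemetry[node["signal"]] < node["threshold"] else "high"
              node = node[branch]
          return node["diagnosis"]

      print(diagnose(tree, {"pump_pressure": 42.0, "coolant_flow": 3.2}))   # blocked line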

  6. [Artificial intelligence in sleep analysis (ARTISANA)--modelling visual processes in sleep classification].

    PubMed

    Schwaibold, M; Schöller, B; Penzel, T; Bolz, A

    2001-05-01

    We describe a novel approach to the problem of automated sleep stage recognition. The ARTISANA algorithm mimics the behaviour of a human expert visually scoring sleep stages (Rechtschaffen and Kales classification). It comprises a number of interacting components that imitate the stepwise approach of the human expert, and artificial intelligence components. On the basis of parameters extracted at 1-s intervals from the signal curves, artificial neural networks recognize the incidence of typical patterns, e.g. delta activity or K complexes. This is followed by a rule interpretation stage that identifies the sleep stage with the aid of a neuro-fuzzy system while taking account of the context. Validation studies based on the records of 8 patients with obstructive sleep apnoea have confirmed the potential of this approach. Further features of the system include the transparency of the decision-taking process, and the flexibility of the option for expanding the system to cover new patterns and criteria.

  7. Pictorial review of radiographic patterns of injury in modern warfare: imaging the conflict in Afghanistan.

    PubMed

    Peramaki, Ed R

    2011-05-01

    Radiographic assessment of combat injuries has been an important component of casualty care in every major conflict of the 20th and 21st centuries. The advent of multislice computed tomography scanners has provided physicians with the ability to visualize organ injury at submillimetre resolution, changing the way war wounds are treated. Modern wars are, for the most part, asymmetric conflicts where improvised explosive devices have replaced artillery as a major cause of casualties. Both bullets and explosive devices wreak distinctive patterns of injury on the human body. Being able to recognize these patterns and their potential associated morbidities will allow medical personnel to provide expert and timely care to some of the most severely injured patients on earth. This series of pictorial essays will review the radiographic patterns of combat-related injury encountered in southern Afghanistan in 2008-2009.

  8. Improving the discrimination of hand motor imagery via virtual reality based visual guidance.

    PubMed

    Liang, Shuang; Choi, Kup-Sze; Qin, Jing; Pang, Wai-Man; Wang, Qiong; Heng, Pheng-Ann

    2016-08-01

    While research on the brain-computer interface (BCI) has been active in recent years, how to get high-quality electrical brain signals to accurately recognize human intentions for reliable communication and interaction is still a challenging task. The evidence has shown that visually guided motor imagery (MI) can modulate sensorimotor electroencephalographic (EEG) rhythms in humans, but how to design and implement efficient visual guidance during MI in order to produce better event-related desynchronization (ERD) patterns is still unclear. The aim of this paper is to investigate the effect of using object-oriented movements in a virtual environment as visual guidance on the modulation of sensorimotor EEG rhythms generated by hand MI. To improve the classification accuracy on MI, we further propose an algorithm to automatically extract subject-specific optimal frequency and time bands for the discrimination of ERD patterns produced by left and right hand MI. The experimental results show that the average classification accuracy of object-directed scenarios is much better than that of non-object-directed scenarios (76.87% vs. 69.66%). The result of the t-test measuring the difference between them is statistically significant (p = 0.0207). When compared to algorithms based on fixed frequency and time bands, contralateral dominant ERD patterns can be enhanced by using the subject-specific optimal frequency and the time bands obtained by our proposed algorithm. These findings have the potential to improve the efficacy and robustness of MI-based BCI applications. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. The menagerie of neurology

    PubMed Central

    Beh, Shin C.; Frohman, Teresa; Frohman, Elliot M.

    2014-01-01

    Summary: Neurology is a field known for “eponymophilia.” While eponym use has been a controversial issue in medicine, animal-related metaphoric descriptions continue to flourish in neurologic practice, particularly with the advent of neuroimaging. To provide practicing and trainee neurologists with a useful reference for all these colorful eponyms, we performed a literature review and summarized the various animal eponyms in the practice of neurology (and their etiologic implications) to date. We believe that the ability to recognize animal-like attributes in clinical neurology and neuroradiology may be attributed to a visual phenomenon known as pareidolia. We propose that animal eponyms are a useful method of recognizing clinical and radiologic patterns that aid in the diagnostic process and therefore are effective aides-mémoire and communicative tools that enliven and improve the practice of neurology. PMID:29473555

  10. The Pale Blue Dot: Utilizing Real World Globes in High School and Undergraduate Oceanography Classrooms

    NASA Astrophysics Data System (ADS)

    Rogers, D. B.

    2017-12-01

    Geoscience classrooms have benefitted greatly from the use of interactive, dry-erasable globes to supplement instruction on topics that require three-dimensional visualization, such as seismic wave propagation and the large-scale movements of tectonic plates. Indeed, research by Bamford (2013) demonstrates that using three-dimensional visualization to illustrate complex processes enhances student comprehension. While some geoscience courses tend to bake in lessons on visualization, other disciplines of earth science that require three-dimensional visualization, such as oceanography, tend to rely on students' prior spatial abilities. In addition to spatial intelligence, education on the three-dimensional structure of the ocean requires knowledge of the external processes that govern the behavior of the ocean, as well as the vertical and lateral distribution of water properties around the globe. Presented here are two oceanographic activities that utilize RealWorldGlobes' dry-erase globes to supplement traditional oceanography lessons on thermohaline and surface ocean circulation. While simultaneously promoting basic plotting techniques, mathematical calculations, and unit conversions, these activities touch on the processes that govern global ocean circulation, the principles of radiocarbon dating, and the various patterns exhibited by surface ocean currents. These activities challenge students to recognize inherent patterns within their data and synthesize explanations for their occurrence. Spatial visualization and critical thinking are integral to any geoscience education, and the combination of these abilities with engaging hands-on activities has the potential to greatly enhance oceanography education in both secondary and postsecondary settings.

  11. Driver behavior in car-to-pedestrian incidents: An application of the Driving Reliability and Error Analysis Method (DREAM).

    PubMed

    Habibovic, Azra; Tivesten, Emma; Uchida, Nobuyuki; Bärgman, Jonas; Ljung Aust, Mikael

    2013-01-01

    To develop relevant road safety countermeasures, it is necessary to first obtain an in-depth understanding of how and why safety-critical situations such as incidents, near-crashes, and crashes occur. Video-recordings from naturalistic driving studies provide detailed information on events and circumstances prior to such situations that is difficult to obtain from traditional crash investigations, at least when it comes to the observable driver behavior. This study analyzed causation in 90 video-recordings of car-to-pedestrian incidents captured by onboard cameras in a naturalistic driving study in Japan. The Driving Reliability and Error Analysis Method (DREAM) was modified and used to identify contributing factors and causation patterns in these incidents. Two main causation patterns were found. In intersections, drivers failed to recognize the presence of the conflict pedestrian due to visual obstructions and/or because their attention was allocated towards something other than the conflict pedestrian. In incidents away from intersections, this pattern reoccurred along with another pattern showing that pedestrians often behaved in unexpected ways. These patterns indicate that an interactive advanced driver assistance system (ADAS) able to redirect the driver's attention could have averted many of the intersection incidents, while autonomous systems may be needed away from intersections. Cooperative ADAS may be needed to address issues raised by visual obstructions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. The Ability of Visually Impaired Children to Read Expressions and Recognize Faces.

    ERIC Educational Resources Information Center

    Ellis, H. D.; And Others

    1987-01-01

    Seventeen visually impaired children, aged 7-11 years, were compared with sighted children on a test of facial recognition and a test of expression identification. The visually impaired children were less able to recognize faces successfully but showed no disadvantage in discerning facial expressions such as happiness, anger, surprise, or fear.…

  13. Whole CMV Proteome Pattern Recognition Analysis after HSCT Identifies Unique Epitope Targets Associated with the CMV Status

    PubMed Central

    Pérez-Bercoff, Lena; Valentini, Davide; Gaseitsiwe, Simani; Mahdavifar, Shahnaz; Schutkowski, Mike; Poiret, Thomas; Pérez-Bercoff, Åsa; Ljungman, Per; Maeurer, Markus J.

    2014-01-01

    Cytomegalovirus (CMV) infection represents a serious complication after Hematopoietic Stem Cell Transplantation (HSCT). We screened the entire CMV proteome to visualize the humoral target epitope-focus profile in serum after HSCT. IgG profiling from four patient groups (donor and/or recipient +/− for CMV) was performed at 6, 12 and 24 months after HSCT using microarray slides containing 17174 15mer-peptides overlapping by 4 aa and covering 214 proteins from CMV. Data were analyzed using maSigPro, PAM and the ‘exclusive recognition analysis (ERA)’ to identify unique CMV epitope responses for each patient group. The ‘exclusive recognition analysis’ of serum epitope patterns segregated best 12 months after HSCT for the D+/R+ group (versus D−/R−). Epitopes were derived from UL123 (IE1), UL99 (pp28) and UL32 (pp150); this changed at 24 months to 2 strongly recognized peptides derived from UL123 and UL100. Strongly (IgG) recognized CMV targets also elicited robust cytokine production in T-cells from patients after HSCT, defined by intracellular cytokine staining (IL-2, TNF, IFN and IL-17). High-content peptide microarrays allow epitope profiling of entire viral proteomes; this approach can be useful to map relevant targets for diagnostics and therapy in patients with well-defined clinical endpoints. Peptide microarray analysis visualizes the breadth of B-cell immune reconstitution after HSCT and provides a useful tool to gauge immune reconstitution. PMID:24740411

  14. Multidimensional brain activity dictated by winner-take-all mechanisms.

    PubMed

    Tozzi, Arturo; Peters, James F

    2018-06-21

    A novel demon-based architecture is introduced to elucidate brain functions such as pattern recognition during human perception and mental interpretation of visual scenes. Starting from the topological concepts of invariance and persistence, we introduce a Selfridge pandemonium variant of brain activity that takes into account a novel feature, namely, demons that recognize short straight-line segments, curved lines and scene shapes, such as shape interior, density and texture. Low-level representations of objects can be mapped to higher-level views (our mental interpretations): a series of transformations can be gradually applied to a pattern in a visual scene, without affecting its invariant properties. This makes it possible to construct a symbolic multi-dimensional representation of the environment. These representations can be projected continuously to an object that we have seen and continue to see, thanks to the mapping from shapes in our memory to shapes in Euclidean space. Although perceived shapes are 3-dimensional (plus time), the evaluation of shape features (volume, color, contour, closeness, texture, and so on) leads to n-dimensional brain landscapes. Here we discuss the advantages of our parallel, hierarchical model in pattern recognition, computer vision and biological nervous system's evolution. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. The role of sensorimotor learning in the perception of letter-like forms: tracking the causes of neural specialization for letters.

    PubMed

    James, Karin H; Atwood, Thea P

    2009-02-01

    Functional specialization in the brain is considered a hallmark of efficient processing. It is therefore not surprising that there are brain areas specialized for processing letters. To better understand the causes of functional specialization for letters, we explore the emergence of this pattern of response in the ventral processing stream through a training paradigm. Previously, we hypothesized that the specialized response pattern seen during letter perception may be due in part to our experience in writing letters. The work presented here investigates whether or not this aspect of letter processing, the integration of sensorimotor systems through writing, leads to functional specialization in the visual system. To test this idea, we investigated whether or not different types of experiences with letter-like stimuli ("pseudoletters") led to functional specialization similar to that which exists for letters. Neural activation patterns were measured using functional magnetic resonance imaging (fMRI) before and after three different types of training sessions. Participants were trained to recognize pseudoletters by writing, typing, or purely visual practice. Results suggested that only after writing practice did neural activation patterns to pseudoletters resemble patterns seen for letters. That is, neural activation in the left fusiform and dorsal precentral gyrus was greater when participants viewed pseudoletters than other, similar stimuli but only after writing experience. Neural activation also increased after typing practice in the right fusiform and left precentral gyrus, suggesting that in some areas, any motor experience may change visual processing. The results of this experiment suggest an intimate interaction among perceptual and motor systems during pseudoletter perception that may be extended to everyday letter perception.

  16. Colour vision and response bias in a coral reef fish.

    PubMed

    Cheney, Karen L; Newport, Cait; McClure, Eva C; Marshall, N Justin

    2013-08-01

    Animals use coloured signals for a variety of communication purposes, including to attract potential mates, recognize individuals, defend territories and warn predators of secondary defences (aposematism). To understand the mechanisms that drive the evolution and design of such visual signals, it is important to understand the visual systems and potential response biases of signal receivers. Here, we provide raw data on the spectral capabilities of a coral reef fish, the Picasso triggerfish Rhinecanthus aculeatus, which is potentially trichromatic with three cone sensitivities of 413 nm (single cone), 480 nm (double cone, medium sensitivity) and 528 nm (double cone, long sensitivity), and a rod sensitivity of 498 nm. The ocular media have a 50% transmission cut off at 405 nm. Behavioural experiments confirmed colour vision over their spectral range; triggerfish were significantly more likely to choose coloured stimuli over grey distractors, irrespective of luminance. We then examined whether response biases existed towards coloured and patterned stimuli to provide insight into how visual signals - in particular, aposematic colouration - may evolve. Triggerfish showed a preferential foraging response bias to red and green stimuli, in contrast to blue and yellow, irrespective of pattern. There was no response bias to patterned over monochromatic non-patterned stimuli. A foraging response bias towards red in fish differs from that of avian predators, who often avoid red food items. Red is frequently associated with warning colouration in terrestrial environments (ladybirds, snakes, frogs), whilst blue is used in aquatic environments (blue-ringed octopus, nudibranchs); whether the design of warning (aposematic) displays is a cause or consequence of response biases is unclear.

  17. Smarter Instruments, Smarter Archives: Machine Learning for Tactical Science

    NASA Astrophysics Data System (ADS)

    Thompson, D. R.; Kiran, R.; Allwood, A.; Altinok, A.; Estlin, T.; Flannery, D.

    2014-12-01

    There has been a growing interest by Earth and Planetary Sciences in machine learning, visualization and cyberinfrastructure to interpret ever-increasing volumes of instrument data. Such tools are commonly used to analyze archival datasets, but they can also play a valuable real-time role during missions. Here we discuss ways that machine learning can benefit tactical science decisions during Earth and Planetary Exploration. Machine learning's potential begins at the instrument itself. Smart instruments endowed with pattern recognition can immediately recognize science features of interest. This allows robotic explorers to optimize their limited communications bandwidth, triaging science products and prioritizing the most relevant data. Smart instruments can also target their data collection on the fly, using principles of experimental design to reduce redundancy and generally improve sampling efficiency for time-limited operations. Moreover, smart instruments can respond immediately to transient or unexpected phenomena. Examples include detections of cometary plumes, terrestrial floods, or volcanism. We show recent examples of smart instruments from 2014 tests including: aircraft and spacecraft remote sensing instruments that recognize cloud contamination, field tests of a "smart camera" for robotic surface geology, and adaptive data collection by X-Ray fluorescence spectrometers. Machine learning can also assist human operators when tactical decision making is required. Terrestrial scenarios include airborne remote sensing, where the decision to re-fly a transect must be made immediately. Planetary scenarios include deep space encounters or planetary surface exploration, where the number of command cycles is limited and operators make rapid daily decisions about where next to collect measurements. Visualization and modeling can reveal trends, clusters, and outliers in new data. This can help operators recognize instrument artifacts or spot anomalies in real time. We show recent examples from science data pipelines deployed onboard aircraft as well as tactical visualizations for non-image instrument data.

  18. Learning and Recognition of Clothing Genres From Full-Body Images.

    PubMed

    Hidayati, Shintami C; You, Chuang-Wen; Cheng, Wen-Huang; Hua, Kai-Lung

    2018-05-01

    According to the theory of clothing design, the genres of clothes can be recognized based on a set of visually differentiable style elements, which exhibit salient features of visual appearance and reflect high-level fashion styles for better describing clothing genres. Instead of using less-discriminative low-level features or ambiguous keywords to identify clothing genres, we proposed a novel approach for automatically classifying clothing genres based on the visually differentiable style elements. A set of style elements that are crucial for recognizing specific visual styles of clothing genres was identified based on the clothing design theory. In addition, the corresponding salient visual features of each style element were identified and formulated with variables that can be computationally derived with various computer vision algorithms. To evaluate the performance of our algorithm, a dataset containing 3250 full-body shots crawled from popular online stores was built. Recognition results show that our proposed algorithms achieved promising overall precision, recall, and F-score of 88.76%, 88.53%, and 88.64% for recognizing upperwear genres, and 88.21%, 88.17%, and 88.19% for recognizing lowerwear genres, respectively. The effectiveness of each style element and its visual features on recognizing clothing genres was demonstrated through a set of experiments involving different sets of style elements or features. In summary, our experimental results demonstrate the effectiveness of the proposed method in clothing genre recognition.
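
    The precision, recall, and F-score figures quoted above are linked by the usual definitions, with the F-score being the harmonic mean of precision and recall; the reported upperwear F-score of 88.64% is exactly the harmonic mean of 88.76% and 88.53%. The snippet below shows those relationships and that arithmetic check only; it is not code from the proposed clothing-genre classifier.

      def precision_recall_f1(tp, fp, fn):
          """Standard definitions; the F-score is the harmonic mean of precision and recall."""
          precision = tp / (tp + fp)
          recall = tp / (tp + fn)
          f1 = 2 * precision * recall / (precision + recall)
          return precision, recall, f1

      # Arithmetic check against the reported upperwear figures:
      p, r = 0.8876, 0.8853
      print(round(2 * p * r / (p + r), 4))   # 0.8864, i.e. 88.64%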

  19. Honey bee cognition.

    PubMed

    Gould, J L

    1990-11-01

    The visual memory of honey bees is stored pictorially. Bees will accept a mirror-image reversal of a familiar pattern in the absence of the original, but prefer the original over the reversal; the matching system of bees, therefore, does not incorporate a mirror-image ambiguity. Bees will not accept a rotation of a familiar vertical pattern, but readily recognize any rotation of a horizontal pattern; the context-specific ability to make a mental transformation seems justified by natural contingencies. Bees are able to construct and use cognitive maps of their home area, though it is possible to create conditions under which they lack useful cues. Other experiments suggest that recruits, having attended a dance in the hive specifying the distance and direction of a food source, can evaluate the "plausibility" of the location without leaving the hive; this suggests a kind of imagination.

  20. Brief Communication: visual-field superiority as a function of stimulus type and content: further evidence.

    PubMed

    Basu, Anamitra; Mandal, Manas K

    2004-07-01

    The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a JAVA-based computer program with recognition accuracy as the dependent measure. Unilaterally, rather than bilaterally, presented stimuli were significantly better recognized. Words were significantly better recognized than faces in the right visual-field; the difference was nonsignificant in the left visual-field. Emotional content elicited left visual-field and neutral content elicited right visual-field advantages. Copyright Taylor and Francis Inc.

  1. Rhesus macaques recognize unique multi-modal face-voice relations of familiar individuals and not of unfamiliar ones

    PubMed Central

    Habbershon, Holly M.; Ahmed, Sarah Z.; Cohen, Yale E.

    2013-01-01

    Communication signals in non-human primates are inherently multi-modal. However, for laboratory-housed monkeys, there is relatively little evidence in support of the use of multi-modal communication signals in individual recognition. Here, we used a preferential-looking paradigm to test whether laboratory-housed rhesus could “spontaneously” (i.e., in the absence of operant training) use multi-modal communication stimuli to discriminate between known conspecifics. The multi-modal stimulus was a silent movie of two monkeys vocalizing and an audio file of the vocalization from one of the monkeys in the movie. We found that the gaze patterns of those monkeys that knew the individuals in the movie were reliably biased toward the individual that did not produce the vocalization. In contrast, there was not a systematic gaze pattern for those monkeys that did not know the individuals in the movie. These data are consistent with the hypothesis that laboratory-housed rhesus can recognize and distinguish between conspecifics based on auditory and visual communication signals. PMID:23774779

  2. Automated analysis and reannotation of subcellular locations in confocal images from the Human Protein Atlas.

    PubMed

    Li, Jieyue; Newberg, Justin Y; Uhlén, Mathias; Lundberg, Emma; Murphy, Robert F

    2012-01-01

    The Human Protein Atlas contains immunofluorescence images showing subcellular locations for thousands of proteins. These are currently annotated by visual inspection. In this paper, we describe automated approaches to analyze the images and their use to improve annotation. We began by training classifiers to recognize the annotated patterns. By ranking proteins according to the confidence of the classifier, we generated a list of proteins that were strong candidates for reexamination. In parallel, we applied hierarchical clustering to group proteins and identified proteins whose annotations were inconsistent with the remainder of the proteins in their cluster. These proteins were reexamined by the original annotators, and a significant fraction had their annotations changed. The results demonstrate that automated approaches can provide an important complement to visual annotation.
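
    One generic way to implement the "rank proteins by classifier confidence" step described above is to train a classifier on the existing annotations and flag the proteins whose current label disagrees most confidently with the classifier's out-of-bag prediction. The sketch below, using scikit-learn, is only an illustration of that idea under assumed inputs (a feature matrix and one label per protein); it is not the pipeline actually used for the Human Protein Atlas.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def flag_for_reexamination(features, annotations, n_flag=10):
          """Return indices of proteins whose current annotation disagrees most
          confidently with a classifier trained on all existing annotations.

          features: (n_proteins, n_features) image-derived feature matrix.
          annotations: current location label for each protein.
          """
          clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
          clf.fit(features, annotations)

          # Out-of-bag probabilities avoid scoring each protein with trees that saw it.
          proba = clf.oob_decision_function_
          classes = list(clf.classes_)
          own = np.array([proba[i, classes.index(label)] for i, label in enumerate(annotations)])
          disagreement = proba.max(axis=1) - own   # 0 when the annotation is also the top prediction

          return np.argsort(disagreement)[::-1][:n_flag]

      # Tiny synthetic usage: 60 'proteins', 5 features, 3 annotated classes.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(60, 5))
      y = np.repeat(["nucleus", "cytoplasm", "mitochondria"], 20)
      print(flag_for_reexamination(X, y, n_flag=5))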

  3. Visual scanning behavior is related to recognition performance for own- and other-age faces

    PubMed Central

    Proietti, Valentina; Macchi Cassia, Viola; dell’Amore, Francesca; Conte, Stefania; Bricolo, Emanuela

    2015-01-01

    It is well-established that our recognition ability is enhanced for faces belonging to familiar categories, such as own-race faces and own-age faces. Recent evidence suggests that, for race, the recognition bias is also accompanied by different visual scanning strategies for own- compared to other-race faces. Here, we tested the hypothesis that these differences in visual scanning patterns extend also to the comparison between own- and other-age faces and contribute to the own-age recognition advantage. Participants (young adults with limited experience with infants) were tested in an old/new recognition memory task where they encoded and subsequently recognized a series of adult and infant faces while their eye movements were recorded. Consistent with findings on the other-race bias, we found evidence of an own-age bias in recognition which was accompanied by differential scanning patterns, and consequently differential encoding strategies, for own- compared to other-age faces. Gaze patterns for own-age faces involved a more dynamic sampling of the internal features and longer viewing time on the eye region compared to the other regions of the face. This latter strategy was extensively employed during learning (vs. recognition) and was positively correlated to discriminability. These results suggest that deeply encoding the eye region is functional for recognition and that the own-age bias is evident not only in differential recognition performance, but also in the employment of different sampling strategies found to be effective for accurate recognition. PMID:26579056

  4. Children's and adults' memory for emotional pictures: examining age-related patterns using the Developmental Affective Photo System.

    PubMed

    Cordon, Ingrid M; Melinder, Annika M D; Goodman, Gail S; Edelstein, Robin S

    2013-02-01

    Two studies were conducted to examine theoretical questions about children's and adults' memory for emotional visual stimuli. In Study 1, 7- to 9-year-olds and adults (N=172) participated in the initial creation of the Developmental Affective Photo System (DAPS). Ratings of emotional valence, arousal, and complexity were obtained. In Study 2, DAPS pictures were presented to 20 8- to 12-year-olds and 30 adults, followed by a recognition memory test. Children and adults recognized aversive images better than neutral images. Moreover, children and adults recognized high and moderate arousal images more accurately than low arousal images. Adults' memory for neutral images exceeded that of children, but there were no developmental differences in memory for aversive pictures. Theoretical and methodological implications are discussed. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. The menagerie of neurology: Animal signs and the refinement of clinical acumen.

    PubMed

    Beh, Shin C; Frohman, Teresa; Frohman, Elliot M

    2014-06-01

    Neurology is a field known for "eponymophilia." While eponym use has been a controversial issue in medicine, animal-related metaphoric descriptions continue to flourish in neurologic practice, particularly with the advent of neuroimaging. To provide practicing and trainee neurologists with a useful reference for all these colorful eponyms, we performed a literature review and summarized the various animal eponyms in the practice of neurology (and their etiologic implications) to date. We believe that the ability to recognize animal-like attributes in clinical neurology and neuroradiology may be attributed to a visual phenomenon known as pareidolia. We propose that animal eponyms are a useful method of recognizing clinical and radiologic patterns that aid in the diagnostic process and therefore are effective aides-mémoire and communicative tools that enliven and improve the practice of neurology.

  6. Automated indirect immunofluorescence evaluation of antinuclear autoantibodies on HEp-2 cells.

    PubMed

    Voigt, Jörn; Krause, Christopher; Rohwäder, Edda; Saschenbrecker, Sandra; Hahn, Melanie; Danckwardt, Maick; Feirer, Christian; Ens, Konstantin; Fechner, Kai; Barth, Erhardt; Martinetz, Thomas; Stöcker, Winfried

    2012-01-01

    Indirect immunofluorescence (IIF) on human epithelial (HEp-2) cells is considered the gold standard screening method for the detection of antinuclear autoantibodies (ANA). However, in terms of automation and standardization, it has not been able to keep pace with most other analytical techniques used in diagnostic laboratories. Although there are already some automation solutions for IIF incubation on the market, the automation of result evaluation is still in its infancy. Therefore, the EUROPattern Suite has been developed as a comprehensive automated processing and interpretation system for standardized and efficient ANA detection by HEp-2 cell-based IIF. In this study, the automated pattern recognition was compared to conventional visual interpretation in a total of 351 sera. In the discrimination of positive from negative samples, concordant results between visual and automated evaluation were obtained for 349 sera (99.4%, kappa = 0.984). The system missed none of the 272 antibody-positive samples and identified 77 out of 79 visually negative samples (analytical sensitivity/specificity: 100%/97.5%). Moreover, 94.0% of all main antibody patterns were recognized correctly by the software. Owing to its performance characteristics, EUROPattern enables fast, objective, and economical IIF ANA analysis and has the potential to reduce intra- and interlaboratory variability.
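
    The agreement statistics quoted above can be reproduced from the stated counts (all 272 antibody-positive sera detected, 77 of 79 visually negative sera confirmed negative), which is a useful sanity check on the reported concordance, kappa, sensitivity, and specificity. The snippet below is just that arithmetic, not code from the EUROPattern software.

      # Confusion counts reconstructed from the abstract (visual reading as reference):
      tp, fn = 272, 0     # antibody-positive sera: none missed by the software
      tn, fp = 77, 2      # visually negative sera: 77 of 79 confirmed negative
      n = tp + fn + tn + fp                                  # 351 sera

      observed = (tp + tn) / n                               # 349/351, about 99.4%
      expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
      kappa = (observed - expected) / (1 - expected)

      print(round(observed, 3), round(kappa, 3))             # 0.994 0.984
      print(tp / (tp + fn), round(tn / (tn + fp), 3))        # 1.0 0.975 (sensitivity, specificity)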

  7. Automated Indirect Immunofluorescence Evaluation of Antinuclear Autoantibodies on HEp-2 Cells

    PubMed Central

    Voigt, Jörn; Krause, Christopher; Rohwäder, Edda; Saschenbrecker, Sandra; Hahn, Melanie; Danckwardt, Maick; Feirer, Christian; Ens, Konstantin; Fechner, Kai; Barth, Erhardt; Martinetz, Thomas; Stöcker, Winfried

    2012-01-01

    Indirect immunofluorescence (IIF) on human epithelial (HEp-2) cells is considered the gold standard screening method for the detection of antinuclear autoantibodies (ANA). However, in terms of automation and standardization, it has not been able to keep pace with most other analytical techniques used in diagnostic laboratories. Although there are already some automation solutions for IIF incubation on the market, the automation of result evaluation is still in its infancy. Therefore, the EUROPattern Suite has been developed as a comprehensive automated processing and interpretation system for standardized and efficient ANA detection by HEp-2 cell-based IIF. In this study, the automated pattern recognition was compared to conventional visual interpretation in a total of 351 sera. In the discrimination of positive from negative samples, concordant results between visual and automated evaluation were obtained for 349 sera (99.4%, kappa = 0.984). The system missed none of the 272 antibody-positive samples and identified 77 out of 79 visually negative samples (analytical sensitivity/specificity: 100%/97.5%). Moreover, 94.0% of all main antibody patterns were recognized correctly by the software. Owing to its performance characteristics, EUROPattern enables fast, objective, and economical IIF ANA analysis and has the potential to reduce intra- and interlaboratory variability. PMID:23251220

  8. 'Silent voices' in health services research: ethnicity and socioeconomic variation in participation in studies of quality of life in childhood visual disability.

    PubMed

    Tadic, Valerie; Hamblion, Esther Louise; Keeley, Sarah; Cumberland, Phillippa; Lewando Hundt, Gillian; Rahi, Jugnoo Sangeeta

    2010-04-01

    Purpose. To investigate patterns of participation of visually impaired (VI) children and their families in health services research. Methods. The authors compared clinical and sociodemographic characteristics of children and their families who participated with those who did not participate in two studies of quality of life (QoL) of VI children. In Study 1, the authors interviewed VI children and adolescents, aged 10 to 15 years, about their vision-related quality of life (VRQoL) as the first phase of a program to develop a VRQoL instrument for this population. One hundred seven children with visual impairment (visual acuity in the better eye logMAR worse than 0.51) were invited to participate in the interviews. Study 2 investigated health-related quality of life (HRQoL) of VI children using an existing generic instrument, administered in a postal survey. One hundred fifty-one VI children and adolescents, aged 2 to 16 years, with hereditary retinal disorders were invited to participate in the survey. Results. The overall participation level was below 50%. In both studies, participants from white ethnic and more affluent socioeconomic backgrounds were overrepresented. Participation did not vary by age, sex, or clinical characteristics. Conclusions. The authors suggest that there are barriers to participation in child- and family-centered research on childhood visual disability for children from socioeconomically deprived or ethnic minority groups. They urge assessment and reporting of participation patterns in further health services research on childhood visual disability. Failure to recognize that there are "silent voices" is likely to have important implications for equitable and appropriate service planning and provision for VI children.

  9. Higher-Order Neural Networks Recognize Patterns

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Spirkovska, Lilly; Ochoa, Ellen

    1996-01-01

    Networks of higher order have enhanced capabilities to distinguish between different two-dimensional patterns and to recognize those patterns. They also have enhanced capabilities to "learn" the patterns to be recognized: they can be "trained" with far fewer examples and, therefore, in less time than is necessary to train comparable first-order neural networks.
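
    The defining feature of a higher-order network is that inputs enter a unit not only individually but also as products of input pairs (or triples), which is what provides the extra capacity for distinguishing two-dimensional patterns. The sketch below is a generic second-order threshold unit trained with the perceptron rule, shown on XOR as a toy demonstration; it is not the specific invariant architecture described in this NASA brief.

      import numpy as np
      from itertools import combinations

      class SecondOrderUnit:
          """A single second-order threshold unit: inputs enter individually and as
          pairwise products, then a perceptron rule is applied to the expanded vector."""

          def __init__(self, n_inputs, lr=0.1, seed=0):
              self.pairs = list(combinations(range(n_inputs), 2))
              rng = np.random.default_rng(seed)
              self.w = rng.normal(scale=0.1, size=1 + n_inputs + len(self.pairs))
              self.lr = lr

          def _expand(self, x):
              x = np.asarray(x, dtype=float)
              products = np.array([x[i] * x[j] for i, j in self.pairs])
              return np.concatenate(([1.0], x, products))   # bias + first + second order

          def predict(self, x):
              return int(self._expand(x) @ self.w > 0)

          def train(self, X, y, epochs=200):
              for _ in range(epochs):
                  for xi, yi in zip(X, y):
                      err = yi - self.predict(xi)
                      self.w += self.lr * err * self._expand(xi)

      # XOR is not learnable by a first-order perceptron, but becomes linearly
      # separable once the pairwise product term is available:
      unit = SecondOrderUnit(2)
      X, y = [(0, 0), (0, 1), (1, 0), (1, 1)], [0, 1, 1, 0]
      unit.train(X, y)
      print([unit.predict(x) for x in X])   # should print [0, 1, 1, 0]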

  10. The effect of integration masking on visual processing in perceptual categorization.

    PubMed

    Hélie, Sébastien

    2017-08-01

    Learning to recognize and categorize objects is an essential cognitive skill allowing animals to function in the world. However, animals rarely have access to a canonical view of an object in an uncluttered environment. Hence, it is essential to study categorization under noisy, degraded conditions. In this article, we explore how the brain processes categorization stimuli in low signal-to-noise conditions using multivariate pattern analysis. We used an integration masking paradigm with mask opacity of 50%, 60%, and 70% inside a magnetic resonance imaging scanner. The results show that mask opacity affects blood-oxygen-level dependent (BOLD) signal in visual processing areas (V1, V2, V3, and V4) but does not affect the BOLD signal in brain areas traditionally associated with categorization (prefrontal cortex, striatum, hippocampus). This suggests that when a stimulus is difficult to extract from its background (e.g., low signal-to-noise ratio), the visual system extracts the stimulus and that activity in areas typically associated with categorization are not affected by the difficulty level of the visual conditions. We conclude with implications of this result for research on visual attention, categorization, and the integration of these fields. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Analyzing tree-shape anatomical structures using topological descriptors of branching and ensemble of classifiers.

    PubMed

    Skoura, Angeliki; Bakic, Predrag R; Megalooikonomou, Vasilis

    2013-01-01

    The analysis of anatomical tree-shape structures visualized in medical images provides insight into the relationship between tree topology and pathology of the corresponding organs. In this paper, we propose three methods to extract descriptive features of the branching topology: the asymmetry index, the encoding of branching patterns using a node labeling scheme, and an extension of the Sholl analysis. Based on these descriptors, we present classification schemes for tree topologies with respect to the underlying pathology. Moreover, we present a classifier ensemble approach which combines the predictions of the individual classifiers to optimize the classification accuracy. We applied the proposed methodology to a dataset of x-ray galactograms, medical images which visualize the breast ductal tree, in order to recognize images with radiological findings regarding breast cancer. The experimental results demonstrate the effectiveness of the proposed framework compared to state-of-the-art techniques suggesting that the proposed descriptors provide more valuable information regarding the topological patterns of ductal trees and indicating the potential of facilitating early breast cancer diagnosis.
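
    The exact definition of the asymmetry index is not given in the abstract; one widely used formulation for branching trees is the mean partition asymmetry, |r - s| / (r + s - 2), averaged over all bifurcations, where r and s are the numbers of terminal branches in the two subtrees. The sketch below computes that quantity for a toy nested-tuple tree representation; both the formulation and the representation are assumptions made for illustration, not necessarily the descriptor used in the paper.

      def leaf_count(tree):
          """Trees are nested tuples: a leaf is (), an internal node is (left, right)."""
          if not tree:
              return 1
          left, right = tree
          return leaf_count(left) + leaf_count(right)

      def partition_asymmetries(tree, acc=None):
          """Collect |r - s| / (r + s - 2) at every bifurcation with more than two leaves."""
          if acc is None:
              acc = []
          if tree:
              left, right = tree
              r, s = leaf_count(left), leaf_count(right)
              if r + s > 2:
                  acc.append(abs(r - s) / (r + s - 2))
              partition_asymmetries(left, acc)
              partition_asymmetries(right, acc)
          return acc

      def asymmetry_index(tree):
          """Mean partition asymmetry: 0 for a perfectly balanced tree, 1 for a 'vine'."""
          values = partition_asymmetries(tree)
          return sum(values) / len(values) if values else 0.0

      balanced = (((), ()), ((), ()))     # four leaves, fully balanced
      vine = ((((), ()), ()), ())         # four leaves, maximally unbalanced
      print(asymmetry_index(balanced), asymmetry_index(vine))   # 0.0 1.0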

  12. Analyzing tree-shape anatomical structures using topological descriptors of branching and ensemble of classifiers

    PubMed Central

    Skoura, Angeliki; Bakic, Predrag R.; Megalooikonomou, Vasilis

    2014-01-01

    The analysis of anatomical tree-shape structures visualized in medical images provides insight into the relationship between tree topology and pathology of the corresponding organs. In this paper, we propose three methods to extract descriptive features of the branching topology: the asymmetry index, the encoding of branching patterns using a node labeling scheme, and an extension of the Sholl analysis. Based on these descriptors, we present classification schemes for tree topologies with respect to the underlying pathology. Moreover, we present a classifier ensemble approach which combines the predictions of the individual classifiers to optimize the classification accuracy. We applied the proposed methodology to a dataset of x-ray galactograms, medical images which visualize the breast ductal tree, in order to recognize images with radiological findings regarding breast cancer. The experimental results demonstrate the effectiveness of the proposed framework compared to state-of-the-art techniques suggesting that the proposed descriptors provide more valuable information regarding the topological patterns of ductal trees and indicating the potential of facilitating early breast cancer diagnosis. PMID:25414850

  13. Bio-inspired approach to multistage image processing

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan

    2017-08-01

    Multistage integration of visual information in the brain allows people to respond quickly to most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing, described in this paper, comprises main types of cortical multistage convergence. One of these types occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image which encapsulates, in a computer manner, structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.

  14. Category-Specificity in Visual Object Recognition

    ERIC Educational Resources Information Center

    Gerlach, Christian

    2009-01-01

    Are all categories of objects recognized in the same manner visually? Evidence from neuropsychology suggests they are not: some brain damaged patients are more impaired in recognizing natural objects than artefacts whereas others show the opposite impairment. Category-effects have also been demonstrated in neurologically intact subjects, but the…

  15. Holistic neural coding of Chinese character forms in bilateral ventral visual system.

    PubMed

    Mo, Ce; Yu, Mengxia; Seger, Carol; Mo, Lei

    2015-02-01

    How are Chinese characters recognized and represented in the brain of skilled readers? Functional MRI fast adaptation technique was used to address this question. We found that neural adaptation effects were limited to identical characters in bilateral ventral visual system while no activation reduction was observed for partially overlapping characters regardless of the spatial location of the shared sub-character components, suggesting highly selective neuronal tuning to whole characters. The consistent neural profile across the entire ventral visual cortex indicates that Chinese characters are represented as mutually distinctive wholes rather than combinations of sub-character components, which presents a salient contrast to the left-lateralized, simple-to-complex neural representations of alphabetic words. Our findings thus revealed the cultural modulation effect on both local neuronal activity patterns and functional anatomical regions associated with written symbol recognition. Moreover, the cross-language discrepancy in written symbol recognition mechanism might stem from the language-specific early-stage learning experience. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Influence of the visual environment on the postural stability in healthy older women.

    PubMed

    Brooke-Wavell, K; Perrett, L K; Howarth, P A; Haslam, R A

    2002-01-01

    Poor postural stability in older people is associated with an increased risk of falling. It is recognized that visual environment factors (such as poor lighting and repeating patterns on escalators) may contribute to falls, but little is known about the effects of the visual environment on postural stability in the elderly. The aim was to determine whether the postural stability of older women (using body sway as a measure) differed under five visual environment conditions. Subjects were 33 healthy women aged 65-76 years. Body sway was measured using an electronic force platform which identified the location of their centre of gravity every 0.05 s. Maximal lateral sway and anteroposterior sway were determined and the sway velocity calculated over 1-min trial periods. Body sway was measured under each of the following conditions: (1) normal laboratory lighting (186 lx); (2) moderate lighting (10 lx); (3) dim lighting (1 lx); (4) eyes closed; and (5) repeating pattern projected onto a wall. Each measure of postural stability was significantly poorer in condition 4 (eyes closed) than in all other conditions. Anteroposterior sway was greater in condition 3 than in conditions 1 and 2, whilst the sway velocity was greater in condition 3 than in condition 2. Lateral sway did not differ significantly between different lighting levels (conditions 1-3). A projected repeating pattern (condition 5) did not significantly influence postural stability relative to condition 1. The substantially greater body sway with eyes closed than with eyes open confirms the importance of vision in maintaining postural stability. At the lowest light level, body sway was significantly increased compared with the other light levels, but was still substantially smaller than on closing the eyes. A projected repeating pattern did not influence postural stability. Dim lighting levels and removing visual input appear to be associated with poorer postural stability in older people and hence might be associated with an increased risk of falls. Copyright 2002 S. Karger AG, Basel
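
    The sway measures named above (maximal lateral sway, maximal anteroposterior sway, sway velocity) can be computed directly from the centre-of-gravity trace sampled every 0.05 s. The abstract does not give the exact definitions, so the range-based maxima and the path-length-per-second velocity in the sketch below are assumptions, shown only to make the quantities concrete.

      import numpy as np

      def sway_metrics(x_lateral, y_anteroposterior, dt=0.05):
          """Simple sway measures from a centre-of-gravity trace sampled every dt seconds.

          Returns the maximal lateral and anteroposterior excursions (as ranges) and
          the mean sway velocity (path length divided by trial duration).
          """
          x = np.asarray(x_lateral, dtype=float)
          y = np.asarray(y_anteroposterior, dtype=float)

          max_lateral = x.max() - x.min()
          max_anteroposterior = y.max() - y.min()

          path_length = np.sum(np.hypot(np.diff(x), np.diff(y)))
          velocity = path_length / (dt * (len(x) - 1))

          return max_lateral, max_anteroposterior, velocity

      # Hypothetical 1-minute trial sampled at 20 Hz (0.05 s), coordinates in mm:
      t = np.arange(0, 60, 0.05)
      x = 2.0 * np.sin(0.4 * t)
      y = 3.0 * np.sin(0.25 * t)
      print(sway_metrics(x, y))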

  17. Auditory preferences of young children with and without hearing loss for meaningful auditory-visual compound stimuli.

    PubMed

    Zupan, Barbra; Sussman, Joan E

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity only affected modality preferences in adults who showed a strong visual preference to unfamiliar stimuli only. The similar degree of auditory responses in children with hearing loss to those from children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) Describe the pattern of modality preferences reported in young children without hearing loss; (2) Recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) Understand the role of familiarity in modality preferences in children with and without hearing loss.

  18. More than one way to see it: Individual heuristics in avian visual computation

    PubMed Central

    Ravignani, Andrea; Westphal-Fitch, Gesche; Aust, Ulrike; Schlumpp, Martin M.; Fitch, W. Tecumseh

    2015-01-01

    Comparative pattern learning experiments investigate how different species find regularities in sensory input, providing insights into cognitive processing in humans and other animals. Past research has focused either on one species’ ability to process pattern classes or different species’ performance in recognizing the same pattern, with little attention to individual and species-specific heuristics and decision strategies. We trained and tested two bird species, pigeons (Columba livia) and kea (Nestor notabilis, a parrot species), on visual patterns using touch-screen technology. Patterns were composed of several abstract elements and had varying degrees of structural complexity. We developed a model selection paradigm, based on regular expressions, that allowed us to reconstruct the specific decision strategies and cognitive heuristics adopted by a given individual in our task. Individual birds showed considerable differences in the number, type and heterogeneity of heuristic strategies adopted. Birds’ choices also exhibited consistent species-level differences. Kea adopted effective heuristic strategies, based on matching learned bigrams to stimulus edges. Individual pigeons, in contrast, adopted an idiosyncratic mix of strategies that included local transition probabilities and global string similarity. Although performance was above chance and quite high for kea, no individual of either species provided clear evidence of learning exactly the rule used to generate the training stimuli. Our results show that similar behavioral outcomes can be achieved using dramatically different strategies and highlight the dangers of combining multiple individuals in a group analysis. These findings, and our general approach, have implications for the design of future pattern learning experiments, and the interpretation of comparative cognition research more generally. PMID:26113444

  19. Universal in vivo Textural Model for Human Skin based on Optical Coherence Tomograms.

    PubMed

    Adabi, Saba; Hosseinzadeh, Matin; Noei, Shahryar; Conforto, Silvia; Daveluy, Steven; Clayton, Anne; Mehregan, Darius; Nasiriavanaki, Mohammadreza

    2017-12-20

    Currently, diagnosis of skin diseases is based primarily on the visual pattern recognition skills and expertise of the physician observing the lesion. Even though dermatologists are trained to recognize patterns of morphology, it is still a subjective visual assessment. Tools for automated pattern recognition can provide objective information to support clinical decision-making. Noninvasive skin imaging techniques provide complementary information to the clinician. In recent years, optical coherence tomography (OCT) has become a powerful skin imaging technique. According to specific functional needs, skin architecture varies across different parts of the body, as do the textural characteristics in OCT images. There is, therefore, a critical need to systematically analyze OCT images from different body sites, to identify their significant qualitative and quantitative differences. Sixty-three optical and textural features extracted from OCT images of healthy and diseased skin are analyzed and, in conjunction with decision-theoretic approaches, used to create computational models of the diseases. We demonstrate that these models provide objective information to the clinician to assist in the diagnosis of abnormalities of cutaneous microstructure, and hence, aid in the determination of treatment. Specifically, we demonstrate the performance of this methodology on differentiating basal cell carcinoma (BCC) and squamous cell carcinoma (SCC) from healthy tissue.

  20. Believing is Seeing: Visual Conventions in Barr's Classification of the "Feeble-Minded"

    ERIC Educational Resources Information Center

    Elks, Martin A.

    2004-01-01

    The eugenics era (c. 1900–1930) produced a strong desire among mental retardation professionals to recognize and control "the feeble-minded." Some eugenicists believed it was possible to classify individuals visually by learning to recognize what they believed to be observable characteristics of idiocy and imbecility. In this paper I used…

  1. Adaptive Spatial Filter Based on Similarity Indices to Preserve the Neural Information on EEG Signals during On-Line Processing

    PubMed Central

    Villa-Parra, Ana Cecilia; Bastos-Filho, Teodiano; López-Delis, Alberto; Frizera-Neto, Anselmo; Krishnan, Sridhar

    2017-01-01

    This work presents a new on-line adaptive filter, which is based on a similarity analysis between standard electrode locations, in order to reduce artifacts and common interferences throughout electroencephalography (EEG) signals while preserving the useful information. Standard deviation and Concordance Correlation Coefficient (CCC) between target electrodes and their corresponding neighbor electrodes are analyzed over sliding windows to select those neighbors that are highly correlated. Afterwards, a model based on CCC is applied to provide higher values of weight to those correlated electrodes with lower similarity to the target electrode. The approach was applied to brain-computer interfaces (BCIs) based on Canonical Correlation Analysis (CCA) to recognize 40 targets of steady-state visual evoked potential (SSVEP), providing an accuracy (ACC) of 86.44 ± 2.81%. In addition, using this approach, low-frequency features were selected in the pre-processing stage of another BCI to recognize gait planning. In this case, the recognition was significantly (p<0.01) improved for most of the subjects (ACC≥74.79%), when compared with other BCIs based on Common Spatial Pattern, Filter Bank-Common Spatial Pattern, and Riemannian Geometry. PMID:29186848
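
    The Concordance Correlation Coefficient used above to compare a target electrode with its neighbors has a closed form (Lin's CCC), which penalizes both weak correlation and systematic offset or scale differences between two signal windows. The snippet below computes only that coefficient on hypothetical data; how the filter then converts CCC values into electrode weights is specific to the paper's model and is not reproduced here.

      import numpy as np

      def concordance_correlation(x, y):
          """Lin's concordance correlation coefficient between two signal windows.

          CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
          It equals 1 only for exact agreement, so it penalizes both weak correlation
          and systematic offset or scale differences.
          """
          x = np.asarray(x, dtype=float)
          y = np.asarray(y, dtype=float)
          sxy = np.mean((x - x.mean()) * (y - y.mean()))
          return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

      # Hypothetical example: a target channel and a scaled, offset, noisier neighbor.
      rng = np.random.default_rng(0)
      target = np.sin(np.linspace(0, 4 * np.pi, 500))
      neighbor = 0.9 * target + 0.2 + 0.1 * rng.standard_normal(500)
      print(round(concordance_correlation(target, neighbor), 3))   # a value somewhat below 1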

  2. Teaching school teachers to recognize respiratory distress in asthmatic children.

    PubMed

    Sapien, Robert E; Fullerton-Gleason, L; Allen, N

    2004-10-01

    To demonstrate that school teachers can be taught to recognize respiratory distress in asthmatic children. Forty-five school teachers received a one-hour educational session on childhood asthma. Each education session consisted of two portions: video footage of asthmatic children exhibiting respiratory distress, and a didactic presentation. Pre- and posttests on general asthma knowledge, signs of respiratory distress on video footage, and comfort level with asthma knowledge and medications were administered. General asthma knowledge median scores increased significantly: pre = 60% correct, post = 70% (p < 0.0001). The ability to visually recognize respiratory distress also significantly improved (pre-median = 66.7% correct, post = 88.9% [p < 0.0001]). Teachers' comfort level with asthma knowledge and medications improved. Using video footage, school teachers can be taught to visually recognize respiratory distress in asthmatic children. Improvement in visual recognition of respiratory distress was greater than improvement in didactic asthma information.

  3. [Trial of eye drops recognizer for visually disabled persons].

    PubMed

    Okamoto, Norio; Suzuki, Katsuhiko; Mimura, Osamu

    2009-01-01

    We describe the development of a device to enable the visually disabled to differentiate eye drops and their doses. The new instrument is composed of a voice generator and a two-dimensional bar-code reader (LS9208). We designed voice outputs for the visually disabled that state when (number of times) and where (right eye, left eye, or both) to administer eye drops. We then determined the minimum bar-code size that could be recognized. After attaching bar-codes of the appropriate size to the lateral or bottom surface of the eye drop containers, the readability of the bar-codes was compared. The minimum recognizable bar-code size was 6 mm high x 8.5 mm long. Bar-codes on the bottom surface could be recognized more easily than bar-codes on the side. Our newly developed device using bar-codes enables visually disabled persons to differentiate eye drops and their doses.

  4. When apperceptive agnosia is explained by a deficit of primary visual processing.

    PubMed

    Serino, Andrea; Cecere, Roberto; Dundon, Neil; Bertini, Caterina; Sanchez-Castaneda, Cristina; Làdavas, Elisabetta

    2014-03-01

    Visual agnosia is a deficit in shape perception, affecting figure, object, face and letter recognition. Agnosia is usually attributed to lesions to high-order modules of the visual system, which combine visual cues to represent the shape of objects. However, most previously reported agnosia cases presented visual field (VF) defects and poor primary visual processing. The present case study aims to verify whether form agnosia could be explained by a deficit in basic visual functions, rather than by a deficit in high-order shape recognition. Patient SDV suffered a bilateral lesion of the occipital cortex due to anoxia. When tested, he could navigate, interact with others, and was autonomous in daily life activities. However, he could not recognize objects from drawings and figures, read, or recognize familiar faces. He was able to recognize objects by touch and people from their voices. Assessments of visual functions showed blindness at the centre of the VF, up to almost 5°, bilaterally, with better stimulus detection in the periphery. Colour and motion perception was preserved. Psychophysical experiments showed that SDV's visual recognition deficits were not explained by poor spatial acuity or by the crowding effect. Rather, a severe deficit in line orientation processing might be a key mechanism explaining SDV's agnosia. Line orientation processing is a basic function of primary visual cortex neurons, necessary for detecting "edges" of visual stimuli to build up a "primal sketch" for object recognition. We propose, therefore, that some forms of visual agnosia may be explained by deficits in basic visual functions due to widespread lesions of the primary visual areas, affecting primary levels of visual processing. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Measuring the Speed of Newborn Object Recognition in Controlled Visual Worlds

    ERIC Educational Resources Information Center

    Wood, Justin N.; Wood, Samantha M. W.

    2017-01-01

    How long does it take for a newborn to recognize an object? Adults can recognize objects rapidly, but measuring object recognition speed in newborns has not previously been possible. Here we introduce an automated controlled-rearing method for measuring the speed of newborn object recognition in controlled visual worlds. We raised newborn chicks…

  6. Looking is not seeing: using art to improve observational skills.

    PubMed

    Pellico, Linda Honan; Friedlaender, Linda; Fennie, Kristopher P

    2009-11-01

    This project evaluated the effects of an art museum experience on the observational skills of nursing students. Half of a class of non-nurse college graduates entering an accelerated master's degree program (n = 34) were assigned to a museum experience, whereas the other half (n = 32) received traditional teaching methods. Using original works of art, students participated in focused observational experiences to visually itemize everything noted in the art piece, discriminate visual qualities, recognize patterns, and cluster observations. After organizing observed information, they drew conclusions to construct the object's meaning. Participants visiting the museum subsequently wrote more about what they saw, resulting in significantly more objective clinical findings when viewing patient photographs. In addition, participants demonstrated significantly more fluidity in their differential diagnosis by offering more alternative diagnoses than did the control group. The study supports the notion that focused viewing of works of art enhances observational skills. Copyright 2009, SLACK Incorporated.

  7. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories.

    PubMed

    Wang, Qiandong; Xiao, Naiqi G; Quinn, Paul C; Hu, Chao S; Qian, Miao; Fu, Genyue; Lee, Kang

    2015-02-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese, Caucasian, and racially ambiguous faces. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: Contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories

    PubMed Central

    Wang, Qiandong; Xiao, Naiqi G.; Quinn, Paul C.; Hu, Chao S.; Qian, Miao; Fu, Genyue; Lee, Kang

    2014-01-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese faces, Caucasian faces, and racially ambiguous morphed face stimuli. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. PMID:25497461

  9. More than one way to see it: Individual heuristics in avian visual computation.

    PubMed

    Ravignani, Andrea; Westphal-Fitch, Gesche; Aust, Ulrike; Schlumpp, Martin M; Fitch, W Tecumseh

    2015-10-01

    Comparative pattern learning experiments investigate how different species find regularities in sensory input, providing insights into cognitive processing in humans and other animals. Past research has focused either on one species' ability to process pattern classes or different species' performance in recognizing the same pattern, with little attention to individual and species-specific heuristics and decision strategies. We trained and tested two bird species, pigeons (Columba livia) and kea (Nestor notabilis, a parrot species), on visual patterns using touch-screen technology. Patterns were composed of several abstract elements and had varying degrees of structural complexity. We developed a model selection paradigm, based on regular expressions, that allowed us to reconstruct the specific decision strategies and cognitive heuristics adopted by a given individual in our task. Individual birds showed considerable differences in the number, type and heterogeneity of heuristic strategies adopted. Birds' choices also exhibited consistent species-level differences. Kea adopted effective heuristic strategies, based on matching learned bigrams to stimulus edges. Individual pigeons, in contrast, adopted an idiosyncratic mix of strategies that included local transition probabilities and global string similarity. Although performance was above chance and quite high for kea, no individual of either species provided clear evidence of learning exactly the rule used to generate the training stimuli. Our results show that similar behavioral outcomes can be achieved using dramatically different strategies and highlight the dangers of combining multiple individuals in a group analysis. These findings, and our general approach, have implications for the design of future pattern learning experiments, and the interpretation of comparative cognition research more generally. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
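
    The kea-like edge-bigram heuristic described above can be sketched in a few lines: a test string is accepted if both of its edge bigrams were seen at the edges of the training strings. The two-symbol alphabet and the toy stimuli are invented for illustration and are not the study's actual patterns.

```python
def learn_edge_bigrams(training_strings):
    """Collect the first and last bigrams of each training string."""
    bigrams = set()
    for s in training_strings:
        bigrams.add(s[:2])
        bigrams.add(s[-2:])
    return bigrams

def kea_like_choice(test_string, edge_bigrams):
    """Accept a string only if both of its edges match a learned bigram."""
    return test_string[:2] in edge_bigrams and test_string[-2:] in edge_bigrams

# Toy stimuli over an abstract two-symbol alphabet (A, B), invented here.
training = ["AABB", "AAAB", "ABBB"]
edges = learn_edge_bigrams(training)
print(kea_like_choice("AABB", edges))   # True: both edges were seen in training
print(kea_like_choice("BABA", edges))   # False: neither edge was seen
```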

  10. How do schizophrenia patients use visual information to decode facial emotion?

    PubMed

    Lee, Junghee; Gosselin, Frédéric; Wynn, Jonathan K; Green, Michael F

    2011-09-01

    Impairment in recognizing facial emotions is a prominent feature of schizophrenia patients, but the underlying mechanism of this impairment remains unclear. This study investigated the specific aspects of visual information that are critical for schizophrenia patients to recognize emotional expression. Using the Bubbles technique, we probed the use of visual information during a facial emotion discrimination task (fear vs. happy) in 21 schizophrenia patients and 17 healthy controls. Visual information was sampled through randomly located Gaussian apertures (or "bubbles") at 5 spatial frequency scales. Online calibration of the amount of face exposed through bubbles was used to ensure 75% overall accuracy for each subject. Least-squares multiple linear regression analyses between sampled information and accuracy were performed to identify critical visual information that was used to identify emotional expression. To accurately identify emotional expression, schizophrenia patients required more exposure of facial areas (i.e., more bubbles) compared with healthy controls. To identify fearful faces, schizophrenia patients relied less on bilateral eye regions at high-spatial frequency compared with healthy controls. For identification of happy faces, schizophrenia patients relied on the mouth and eye regions; healthy controls did not utilize eyes and used the mouth much less than patients did. Schizophrenia patients needed more facial information to recognize emotional expression of faces. In addition, patients differed from controls in their use of high-spatial frequency information from eye regions to identify fearful faces. This study provides direct evidence that schizophrenia patients employ an atypical strategy of using visual information to recognize emotional faces.
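
    A minimal sketch of how a Bubbles-style stimulus can be generated, assuming a single spatial-frequency scale: random Gaussian apertures are summed into a mask that reveals only parts of the face image. The image size, number of bubbles, and sigma are illustrative, not the study's parameters.

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Random Gaussian apertures ("bubbles") at one spatial-frequency scale."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0, 1)

rng = np.random.default_rng(0)
face = rng.random((128, 128))            # stand-in for a band-pass filtered face
mask = bubbles_mask(face.shape, n_bubbles=10, sigma=6, rng=rng)
stimulus = face * mask                   # only the "bubbled" regions are revealed
```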

  11. Autistic savants. [correction of artistic].

    PubMed

    Hou, C; Miller, B L; Cummings, J L; Goldberg, M; Mychack, P; Bottino, V; Benson, D F

    2000-01-01

    The objectives of this study were to examine common patterns in the lives and artwork of five artistic savants previously described and to report on the clinical, neuropsychological, and neuroimaging findings from one newly diagnosed artistic savant. The artistic savant syndrome has been recognized for centuries, although its neuroanatomic basis remains a mystery. The cardinal features, strengths, and weaknesses of the work of these six savants were analyzed and compared with those of children with autism in whom artistic talent was absent. An anatomic substrate for these behaviors was considered in the context of newly emerging theories related to paradoxical functional facilitation, visual thinking, and multiple intelligences. The artists had features of "pervasive developmental disorder," including impairment in social interaction and communication as well as restricted repetitive and stereotyped patterns of behavior, interest, and activities. All six demonstrated a strong preference for a single art medium and showed a restricted variation in artistic themes. None understood art theory. Some autistic features contributed to their success, including attention to visual detail, a tendency toward ritualistic compulsive repetition, the ability to focus on one topic at the expense of other interests, and intact memory and visuospatial skills. The artistic savant syndrome remains rare and mysterious in origin. Savants exhibit extraordinary visual talents along with profound linguistic and social impairment. The intense focus on and ability to remember visual detail contributes to the artistic product of the savant. The anatomic substrate for the savant syndrome may involve loss of function in the left temporal lobe with enhanced function of the posterior neocortex.

  12. Smartphone-Based Escalator Recognition for the Visually Impaired

    PubMed Central

    Nakamura, Daiki; Takizawa, Hotaka; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-01-01

    It is difficult for visually impaired individuals to recognize escalators in everyday environments. If the individuals ride on escalators in the wrong direction, they will stumble on the steps. This paper proposes a novel method to assist visually impaired individuals in finding available escalators by the use of smartphone cameras. Escalators are recognized by analyzing optical flows in video frames captured by the cameras, and auditory feedback is provided to the individuals. The proposed method was implemented on an Android smartphone and applied to actual escalator scenes. The experimental results demonstrate that the proposed method is promising for helping visually impaired individuals use escalators. PMID:28481270
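
    A rough sketch of the optical-flow analysis described above, using OpenCV's dense Farnebäck flow: the mean vertical flow inside a region of interest indicates whether the escalator steps move up or down in the image. The frame sizes, region of interest, and sign convention are illustrative assumptions, not the published implementation.

```python
import cv2
import numpy as np

def escalator_direction(prev_gray, curr_gray, roi):
    """Estimate dominant vertical motion inside a region of interest.

    A positive mean flow means the steps move downward in image coordinates,
    a negative mean means upward.
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    y0, y1, x0, x1 = roi
    vertical = flow[y0:y1, x0:x1, 1]     # dy component of the flow field
    return float(vertical.mean())

# Hypothetical frames; in practice these come from the smartphone camera.
prev_gray = np.zeros((240, 320), dtype=np.uint8)
curr_gray = np.zeros((240, 320), dtype=np.uint8)
print(escalator_direction(prev_gray, curr_gray, roi=(60, 180, 100, 220)))
```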

  13. Visual feedback for retuning to just intonation intervals

    NASA Astrophysics Data System (ADS)

    Ayers, R. Dean; Nordquist, Peter R.; Corn, Justin S.

    2005-04-01

    Musicians become used to equal temperament pitch intervals due to their widespread use in tuning pianos and other fixed-pitch instruments. For unaccompanied singing and some other performance situations, a more harmonious blending of sounds can be achieved by shifting to just intonation intervals. Lissajous figures provide immediate and striking visual feedback that emphasizes the frequency ratios and pitch intervals found among the first few members of a single harmonic series. Spirograph patterns (hypotrochoids) are also especially simple for ratios of small whole numbers, and their use for providing feedback to singers has been suggested previously [G. W. Barton, Jr., Am. J. Phys. 44(6), 593-594 (1976)]. A hybrid mixture of these methods for comparing two frequencies generates what appears to be a three-dimensional Lissajous figure: a cylindrical wire mesh that rotates about its tilted vertical axis, with zero tilt yielding the familiar Lissajous figure. Sine wave inputs work best, but the sounds of flute, recorder, whistling, and a sung "oo" are good enough approximations to work well. This initial study compares the three modes of presentation in terms of the ease with which a singer can obtain a desired pattern and recognize its shape.
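
    A minimal sketch of a Lissajous figure for a just-intonation interval, here a 3:2 perfect fifth (220 Hz against 330 Hz); the phase offset is arbitrary and only chosen so that the curve is open.

```python
import numpy as np
import matplotlib.pyplot as plt

# Two tones a just perfect fifth apart (3:2 frequency ratio).
f1, f2 = 220.0, 330.0
t = np.linspace(0, 1, 20000)
x = np.sin(2 * np.pi * f1 * t)
y = np.sin(2 * np.pi * f2 * t + np.pi / 4)   # phase offset opens the figure

plt.plot(x, y, linewidth=0.3)
plt.title("Lissajous figure for a 3:2 just-intonation fifth")
plt.axis("equal")
plt.show()
```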

  14. Sight and sound converge to form modality-invariant representations in temporo-parietal cortex

    PubMed Central

    Man, Kingson; Kaplan, Jonas T.; Damasio, Antonio; Meyer, Kaspar

    2013-01-01

    People can identify objects in the environment with remarkable accuracy, irrespective of the sensory modality they use to perceive them. This suggests that information from different sensory channels converges somewhere in the brain to form modality-invariant representations, i.e., representations that reflect an object independently of the modality through which it has been apprehended. In this functional magnetic resonance imaging study of human subjects, we first identified brain areas that responded to both visual and auditory stimuli and then used crossmodal multivariate pattern analysis to evaluate the neural representations in these regions for content-specificity (i.e., do different objects evoke different representations?) and modality-invariance (i.e., do the sight and the sound of the same object evoke a similar representation?). While several areas became activated in response to both auditory and visual stimulation, only the neural patterns recorded in a region around the posterior part of the superior temporal sulcus displayed both content-specificity and modality-invariance. This region thus appears to play an important role in our ability to recognize objects in our surroundings through multiple sensory channels and to process them at a supra-modal (i.e., conceptual) level. PMID:23175818

  15. Finding and recognizing objects in natural scenes: complementary computations in the dorsal and ventral visual systems

    PubMed Central

    Rolls, Edmund T.; Webb, Tristan J.

    2014-01-01

    Searching for and recognizing objects in complex natural scenes is implemented by multiple saccades until the eyes bring the object within the reduced receptive field sizes of inferior temporal cortex (IT) neurons. We analyze and model how the dorsal and ventral visual streams both contribute to this. Saliency detection in the dorsal visual system, including area LIP, is modeled by graph-based visual saliency and allows the eyes to fixate potential objects within several degrees. Visual information at the fixated location, subtending approximately 9° and corresponding to the receptive fields of IT neurons, is then passed through a four-layer hierarchical model of the ventral cortical visual system, VisNet. We show that VisNet can be trained, using a synaptic modification rule with a short-term memory trace of recent neuronal activity, to capture both the required view and translation invariances, allowing approximately 90% correct object recognition in the model for 4 objects shown in any view across a range of 135° anywhere in a scene. The model was able to generalize correctly within the four trained views and the 25 trained translations. This approach analyses the principles by which complementary computations in the dorsal and ventral visual cortical streams enable objects to be located and recognized in complex natural scenes. PMID:25161619
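
    A minimal sketch of a trace learning rule of the kind used in VisNet-style models: the weight update is driven by a short-term memory trace of recent postsynaptic activity rather than by the instantaneous response alone. The learning rate, trace constant, and weight normalization are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def trace_rule_update(w, x_t, y_t, trace, alpha=0.01, eta=0.8):
    """One step of a trace learning rule (illustrative parameter values).

    trace -- short-term memory of recent postsynaptic activity
    x_t   -- presynaptic firing rates at time t
    y_t   -- postsynaptic firing rate at time t
    """
    trace = (1 - eta) * y_t + eta * trace          # update the activity trace
    w = w + alpha * trace * x_t                    # Hebb-like update with the trace
    return w / np.linalg.norm(w), trace            # keep the weights bounded

rng = np.random.default_rng(0)
w = rng.random(100)                                # weights onto one output neuron
w /= np.linalg.norm(w)
trace = 0.0
for _ in range(50):                                # successive views of one object
    x_t = rng.random(100)                          # presynaptic input pattern
    y_t = float(w @ x_t)                           # linear postsynaptic response
    w, trace = trace_rule_update(w, x_t, y_t, trace)
```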

  16. Atypical form of Alzheimer's disease with prominent posterior cortical atrophy: a review of lesion distribution and circuit disconnection in cortical visual pathways

    NASA Technical Reports Server (NTRS)

    Hof, P. R.; Vogt, B. A.; Bouras, C.; Morrison, J. H.; Bloom, F. E. (Principal Investigator)

    1997-01-01

    In recent years, the existence of visual variants of Alzheimer's disease characterized by atypical clinical presentation at onset has been increasingly recognized. In many of these cases post-mortem neuropathological assessment revealed that correlations could be established between clinical symptoms and the distribution of neurodegenerative lesions. We have analyzed a series of Alzheimer's disease patients presenting with prominent visual symptomatology as a cardinal sign of the disease. In these cases, a shift in the distribution of pathological lesions was observed such that the primary visual areas and certain visual association areas within the occipito-parieto-temporal junction and posterior cingulate cortex had very high densities of lesions, whereas the prefrontal cortex had fewer lesions than usually observed in Alzheimer's disease. Previous quantitative analyses have demonstrated that in Alzheimer's disease, primary sensory and motor cortical areas are less damaged than the multimodal association areas of the frontal and temporal lobes, as indicated by the laminar and regional distribution patterns of neurofibrillary tangles and senile plaques. The distribution of pathological lesions in the cerebral cortex of Alzheimer's disease cases with visual symptomatology revealed that specific visual association pathways were disrupted, whereas these particular connections are likely to be affected to a less severe degree in the more common form of Alzheimer's disease. These data suggest that in some cases with visual variants of Alzheimer's disease, the neurological symptomatology may be related to the loss of certain components of the cortical visual pathways, as reflected by the particular distribution of the neuropathological markers of the disease.

  17. Crowding by Invisible Flankers

    PubMed Central

    Ho, Cristy; Cheung, Sing-Hang

    2011-01-01

    Background Human object recognition degrades sharply as the target object moves from central vision into peripheral vision. In particular, one's ability to recognize a peripheral target is severely impaired by the presence of flanking objects, a phenomenon known as visual crowding. Recent studies on how visual awareness of flanker existence influences crowding had shown mixed results. More importantly, it is not known whether conscious awareness of the existence of both the target and flankers are necessary for crowding to occur. Methodology/Principal Findings Here we show that crowding persists even when people are completely unaware of the flankers, which are rendered invisible through the continuous flash suppression technique. Contrast threshold for identifying the orientation of a grating pattern was elevated in the flanked condition, even when the subjects reported that they were unaware of the perceptually suppressed flankers. Moreover, we find that orientation-specific adaptation is attenuated by flankers even when both the target and flankers are invisible. Conclusions These findings complement the suggested correlation between crowding and visual awareness. What's more, our results demonstrate that conscious awareness and attention are not prerequisite for crowding. PMID:22194919

  18. A framework for interactive visual analysis of heterogeneous marine data in an integrated problem solving environment

    NASA Astrophysics Data System (ADS)

    Liu, Shuai; Chen, Ge; Yao, Shifeng; Tian, Fenglin; Liu, Wei

    2017-07-01

    This paper presents a novel integrated marine visualization framework that focuses on processing and analyzing multi-dimensional spatiotemporal marine data in one workflow. Effective marine data visualization is needed for extracting useful patterns, recognizing changes, and understanding physical processes in oceanographic research. However, the multi-source, multi-format, multi-dimensional characteristics of marine data pose a challenge for interactive and timely marine data analysis and visualization in one workflow. A global multi-resolution virtual terrain environment is also needed to give oceanographers and the public a realistic geographic background reference and to help them identify the geographical variation of ocean phenomena. This paper introduces a data integration and processing method to efficiently visualize and analyze heterogeneous marine data. Based on the processed data, several GPU-based visualization methods are explored to demonstrate marine data interactively. GPU-tessellated global terrain rendering using ETOPO1 data is realized, and video memory usage is controlled to ensure high efficiency. A modified ray-casting algorithm for the uneven multi-section Argo volume data is also presented, and the transfer function is designed to analyze the 3D structure of ocean phenomena. Based on the framework, an integrated visualization system is realized, and its effectiveness and efficiency are demonstrated. This system is expected to make a significant contribution to the demonstration and understanding of marine physical processes in a virtual global environment.

  19. The combined effect of visual impairment and cognitive impairment on disability in older people.

    PubMed

    Whitson, Heather E; Cousins, Scott W; Burchett, Bruce M; Hybels, Celia F; Pieper, Carl F; Cohen, Harvey J

    2007-06-01

    To determine the risk of disability in individuals with coexisting visual and cognitive impairment and to compare the magnitude of risk associated with visual impairment, cognitive impairment, or the multimorbidity. The design was a prospective cohort study set in North Carolina. Participants were 3,878 members of the North Carolina Established Populations for the Epidemiologic Studies of the Elderly with nonmissing visual status, cognitive status, and disability status data at baseline. Measurements included the Short Portable Mental Status Questionnaire (cognitive impairment defined as ≥4 errors), self-reported visual acuity (visual impairment defined as inability to see well enough to recognize a friend across the street or to read newspaper print), demographic and health-related variables, disability status (activities of daily living (ADLs), instrumental activities of daily living (IADLs), mobility), death, and time to nursing home placement. Participants with coexisting visual and cognitive impairment were at greater risk of IADL disability (odds ratio (OR)=6.50, 95% confidence interval (CI)=4.34-9.75), mobility disability (OR=4.04, 95% CI=2.49-6.54), ADL disability (OR=2.84, 95% CI=1.87-4.32), and incident ADL disability (OR=3.66, 95% CI=2.36-5.65). In each case, the estimated OR associated with the multimorbidity was greater than the estimated OR associated with visual or cognitive impairment alone, a pattern that was not observed for other adverse outcomes assessed. No significant interactions were observed between cognitive impairment and visual impairment as predictors of disability status. Individuals with coexisting visual impairment and cognitive impairment are at high risk of disability, with each condition contributing additively to disability risk. Further study is needed to improve functional trajectories in patients with this prevalent multimorbidity. When visual or cognitive impairment is present, efforts to maximize the other function may be beneficial.

  20. Idiopathic Pulmonary Fibrosis: The Association between the Adaptive Multiple Features Method and Fibrosis Outcomes.

    PubMed

    Salisbury, Margaret L; Lynch, David A; van Beek, Edwin J R; Kazerooni, Ella A; Guo, Junfeng; Xia, Meng; Murray, Susan; Anstrom, Kevin J; Yow, Eric; Martinez, Fernando J; Hoffman, Eric A; Flaherty, Kevin R

    2017-04-01

    Adaptive multiple features method (AMFM) lung texture analysis software recognizes high-resolution computed tomography (HRCT) patterns. The objective was to evaluate AMFM and visual quantification of HRCT patterns and their relationship with disease progression in idiopathic pulmonary fibrosis. Patients with idiopathic pulmonary fibrosis in a clinical trial of prednisone, azathioprine, and N-acetylcysteine underwent HRCT at study start and finish. The proportion of lung occupied by ground glass, ground glass-reticular (GGR), honeycombing, emphysema, and normal lung densities was measured by AMFM and by three radiologists, documenting baseline disease extent and postbaseline change. Disease progression was defined as a composite of mortality, hospitalization, and 10% FVC decline. Agreement between visual and AMFM measurements was moderate for GGR (Pearson's correlation r = 0.60, P < 0.0001; mean difference = -0.03 with 95% limits of agreement of -0.19 to 0.14). Baseline extent of GGR was independently associated with disease progression when adjusting for baseline Gender-Age-Physiology stage and smoking status (hazard ratio per 10% visual GGR increase = 1.98, 95% confidence interval [CI] = 1.20-3.28, P = 0.008; and hazard ratio per 10% AMFM GGR increase = 1.36, 95% CI = 1.01-1.84, P = 0.04). Postbaseline visual and AMFM GGR trajectories were correlated with the postbaseline FVC trajectory (r = -0.30, 95% CI = -0.46 to -0.11, P = 0.002; and r = -0.25, 95% CI = -0.42 to -0.06, P = 0.01, respectively). More extensive baseline visual and AMFM fibrosis (as measured by GGR densities) is independently associated with an elevated hazard of disease progression. Postbaseline changes in AMFM-measured and visually measured GGR densities are modestly correlated with change in FVC. AMFM-measured fibrosis is an automated adjunct to existing prognostic markers and may allow for study enrichment with subjects at increased risk of disease progression.
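
    The agreement statistics quoted above (mean difference with 95% limits of agreement) can be reproduced from paired measurements with a few lines of arithmetic; the sketch below uses invented GGR proportions purely for illustration.

```python
import numpy as np

def limits_of_agreement(visual, amfm):
    """Mean difference and Bland-Altman 95% limits of agreement."""
    diff = np.asarray(amfm) - np.asarray(visual)
    mean_diff = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return mean_diff, (mean_diff - half_width, mean_diff + half_width)

# Hypothetical paired ground glass-reticular (GGR) proportions per patient.
visual = [0.12, 0.20, 0.05, 0.30, 0.18]
amfm   = [0.10, 0.15, 0.06, 0.25, 0.16]
print(limits_of_agreement(visual, amfm))
```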

  1. Mapping the spatial patterns of field traffic and traffic intensity to predict soil compaction risks at the field scale

    NASA Astrophysics Data System (ADS)

    Duttmann, Rainer; Kuhwald, Michael; Nolde, Michael

    2015-04-01

    Soil compaction is one of the main threats to cropland soils today. In contrast to easily visible phenomena of soil degradation, soil compaction is obscured by other signals such as reduced crop yield, delayed crop growth, and the ponding of water, which makes it difficult to recognize and locate compacted areas directly. Although it is known that trafficking intensity is a key factor for soil compaction, until now only modest work has been concerned with the mapping of the spatially distributed patterns of field traffic and with the visual representation of the loads and pressures applied by farm traffic within single fields. A promising method for the spatial detection and mapping of soil compaction risks in individual fields is to process dGPS data collected from vehicle-mounted GPS receivers and to compare the soil stress induced by farm machinery with the load bearing capacity derived from given soil map data. The application of position-based machinery data enables the mapping of vehicle movements over time as well as the assessment of trafficking intensity. It also facilitates the calculation of the trafficked area and the modeling of the loads and pressures applied to the soil by individual vehicles. This paper focuses on the modeling and mapping of the spatial patterns of traffic intensity in silage maize fields during harvest, considering the spatio-temporal changes in wheel load and ground contact pressure along the loading sections. In addition to scenarios calculated for varying mechanical soil strengths, an example of visualizing the three-dimensional stress propagation inside the soil is given, using the Visualization Toolkit (VTK) to construct 2D and 3D maps that support decision making for sustainable field traffic management.

  2. Visual Pattern Analysis in Histopathology Images Using Bag of Features

    NASA Astrophysics Data System (ADS)

    Cruz-Roa, Angel; Caicedo, Juan C.; González, Fabio A.

    This paper presents a framework to analyse visual patterns in a collection of medical images in a two stage procedure. First, a set of representative visual patterns from the image collection is obtained by constructing a visual-word dictionary under a bag-of-features approach. Second, an analysis of the relationships between visual patterns and semantic concepts in the image collection is performed. The most important visual patterns for each semantic concept are identified using correlation analysis. A matrix visualization of the structure and organization of the image collection is generated using a cluster analysis. The experimental evaluation was conducted on a histopathology image collection and results showed clear relationships between visual patterns and semantic concepts, that in addition, are of easy interpretation and understanding.
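
    A minimal sketch of the bag-of-features step described above: local descriptors pooled from the image collection are clustered into a visual-word dictionary, and each image is then represented as a normalized histogram of visual-word occurrences. The descriptor dimensionality and dictionary size are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical local descriptors (e.g., patch features) pooled from a
# collection of histopathology images: n_descriptors x descriptor_dim.
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(5000, 64))

# Step 1: build the visual-word dictionary by clustering the descriptors.
n_words = 50
dictionary = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(descriptors)

# Step 2: represent one image as a histogram of visual-word occurrences.
image_descriptors = rng.normal(size=(120, 64))
words = dictionary.predict(image_descriptors)
histogram = np.bincount(words, minlength=n_words).astype(float)
histogram /= histogram.sum()              # normalized bag-of-features vector
```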

  3. MCAW-DB: A glycan profile database capturing the ambiguity of glycan recognition patterns.

    PubMed

    Hosoda, Masae; Takahashi, Yushi; Shiota, Masaaki; Shinmachi, Daisuke; Inomoto, Renji; Higashimoto, Shinichi; Aoki-Kinoshita, Kiyoko F

    2018-05-11

    Glycan-binding protein (GBP) interaction experiments, such as glycan microarrays, are often used to understand glycan recognition patterns. However, the interpretation of glycan array experimental data often makes it difficult to identify discrete GBP binding patterns due to their ambiguity. It is known that lectins, for example, are non-specific in their binding affinities; the same lectin can bind to different monosaccharides or even different glycan structures. In bioinformatics, several tools to mine the data generated from these sorts of experiments have been developed. These tools take a library of predefined motifs, which are commonly found glycan patterns such as sialyl-Lewis X, and attempt to identify the motif(s) that are specific to the GBP being analyzed. In our previous work, as opposed to using predefined motifs, we developed the Multiple Carbohydrate Alignment with Weights (MCAW) tool to visualize the state of the glycans being recognized by the GBP under analysis. We previously reported on the effectiveness of our tool and algorithm by analyzing several glycan array datasets from the Consortium for Functional Glycomics (CFG). In this work, we report on our analysis of 1081 data sets which we collected from the CFG, the results of which we have made publicly and freely available as a database called MCAW-DB. We introduce this database and its usage and describe several analysis results. We show how MCAW-DB can be used to analyze glycan-binding patterns of GBPs amidst their ambiguity. For example, the visualization of glycan-binding patterns in MCAW-DB shows how they correlate with the concentrations of the samples used in the array experiments. Using MCAW-DB, the patterns of glycans found to bind to various GBPs are visualized, indicating the binding "environment" of the glycans. Thus, the ambiguity of glycan recognition is numerically represented, along with the patterns of monosaccharides surrounding the binding region. The profiles in MCAW-DB could potentially be used as predictors of the affinity of unknown or novel glycans to particular GBPs by comparing how well they match the existing profiles for those GBPs. Moreover, as the glycan profiles of diseased tissues become available, glycan alignments could also be used to identify glycan biomarkers unique to that tissue. Databases of these alignments may be of great use for drug discovery. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. The re-emergence of felid camouflage with the decay of predator recognition in deer under relaxed selection

    PubMed Central

    Stankowich, Theodore; Coss, Richard G

    2006-01-01

    When a previously common predator disappears owing to local extinction, the strong source of natural selection on prey to visually recognize that predator becomes relaxed. At present, we do not know the extent to which recognition of a specific predator is generalized to similar looking predators or how a specific predator-recognition cue, such as coat pattern, degrades under prolonged relaxed selection. Using predator models, we show that deer exhibit a more rapid and stronger antipredator response to their current predator, the puma, than to a leopard displaying primitive rosettes similar to a locally extinct predator, an early jaguar. Presentation of a novel tiger with a striped coat engendered an intermediate speed of predator recognition and strength of antipredator behaviour. Responses to the leopard model slightly exceeded responses to a non-threatening deer model, suggesting that thousands of years of relaxed selection have led to the loss of recognition of the spotted coat as a jaguar-recognition cue, and that the spotted coat has regained its ability to camouflage the felid form. Our results shed light on the evolutionary arms race between adoption of camouflage to facilitate hunting and the ability of prey to quickly recognize predators by their formerly camouflaging patterns. PMID:17148247

  5. Establishing Visual Category Boundaries between Objects: A PET Study

    ERIC Educational Resources Information Center

    Saumier, Daniel; Chertkow, Howard; Arguin, Martin; Whatmough, Cristine

    2005-01-01

    Individuals with Alzheimer's disease (AD) often have problems in recognizing common objects. This visual agnosia may stem from difficulties in establishing appropriate visual boundaries between visually similar objects. In support of this hypothesis, Saumier, Arguin, Chertkow, and Renfrew (2001) showed that AD subjects have difficulties in…

  6. Plant features measurements for robotics

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1989-01-01

    Initial studies of the technical feasibility of using machine vision and color image processing to measure plant health were performed. Wheat plants were grown in nutrient solutions deficient in nitrogen, potassium, and iron. An additional treatment imposed water stress on wheat plants which received a full complement of nutrients. The results for juvenile (less than 2 weeks old) wheat plants show that imaging technology can be used to detect nutrient deficiencies. The relative amount of green color in a leaf declined with increased water stress. The absolute amount of green was higher for nitrogen deficient leaves compared to the control plants. Relative greenness was lower for iron deficient leaves, but the absolute green values were higher. The data showed patterns across the leaf consistent with visual symptoms. The development of additional color image processing routines to recognize these patterns would improve the performance of this sensor of plant health.
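
    A minimal sketch of the "relative greenness" measure discussed above, computed as green chromaticity G / (R + G + B) averaged over the leaf pixels; the synthetic image and the absence of leaf segmentation are simplifications for illustration.

```python
import numpy as np

def relative_greenness(rgb_image):
    """Green chromaticity G / (R + G + B), averaged over all pixels."""
    rgb = rgb_image.astype(float)
    total = rgb.sum(axis=2) + 1e-9        # avoid division by zero
    green_fraction = rgb[..., 1] / total
    return float(green_fraction.mean())

# Hypothetical 8-bit leaf image (height x width x RGB).
rng = np.random.default_rng(0)
leaf = rng.integers(0, 256, size=(100, 100, 3), dtype=np.uint8)
print(relative_greenness(leaf))
```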

  7. Intrasellar cysticercosis: a systematic review.

    PubMed

    Del Brutto, Oscar H; Del Brutto, Victor J

    2013-09-01

    The objective of this study was to review patients with intrasellar cysticercosis to outline the features of this form of neurocysticercosis. A MEDLINE and manual search of patients with intrasellar cysticercosis were done. Abstracted data included clinical manifestations, neuroimaging findings, therapy, and outcome. Twenty-three patients were reviewed. Ophthalmological disturbances, including diminution of visual acuity and visual field defects following a chiasmatic pattern, were recorded in 67 % of cases. Endocrine abnormalities were found in 56 % of patients (panhypopituitarism, hyperprolactinemia, diabetes insipidus, and isolated hypothyroidism). In addition, some patients complained of seizures or chronic headaches. Neuroimaging studies showed lesions confined to the sellar region in 47 % of cases. The remaining patients also had subarachnoid cysts associated or not with hydrocephalus, parenchymal brain cysts, or parenchymal brain calcifications. Thirteen patients underwent surgical resection of the sellar cyst through a craniotomy in nine cases and by the transsphenoidal approach in four. Visual acuity or visual field defects improved in only two of these patients. Five patients were treated with cysticidal drugs without improvement. Intrasellar cysticercosis is rare and probably under-recognized. Clinical manifestations resemble those caused by pituitary tumors, cysts, or other granulomatous lesions. Neuroimaging findings are of more value when intrasellar cysts are associated with other forms of neurocysticercosis, such as basal subarachnoid cysts or hydrocephalus. Prompt surgical resection is mandatory to reduce the risk of permanent loss of visual function. There seems to be no role for cysticidal drug therapy in these cases.

  8. Invariant visual object recognition and shape processing in rats

    PubMed Central

    Zoccolan, Davide

    2015-01-01

    Invariant visual object recognition is the ability to recognize visual objects despite the vastly different images that each object can project onto the retina during natural vision, depending on its position and size within the visual field, its orientation relative to the viewer, etc. Achieving invariant recognition represents such a formidable computational challenge that it is often assumed to be a unique hallmark of primate vision. Historically, this has limited the invasive investigation of its neuronal underpinnings to monkey studies, in spite of the narrow range of experimental approaches that these animal models allow. Meanwhile, rodents have been largely neglected as models of object vision, because of the widespread belief that they are incapable of advanced visual processing. However, the powerful array of experimental tools that have been developed to dissect neuronal circuits in rodents has made these species very attractive to vision scientists too, promoting a new tide of studies that have started to systematically explore visual functions in rats and mice. Rats, in particular, have been the subjects of several behavioral studies, aimed at assessing how advanced object recognition and shape processing is in this species. Here, I review these recent investigations, as well as earlier studies of rat pattern vision, to provide an historical overview and a critical summary of the status of the knowledge about rat object vision. The picture emerging from this survey is very encouraging with regard to the possibility of using rats as complementary models to monkeys in the study of higher-level vision. PMID:25561421

  9. Predicting Negative Emotions Based on Mobile Phone Usage Patterns: An Exploratory Study

    PubMed Central

    Yang, Pei-Ching; Chang, Chia-Chi; Chiang, Jung-Hsien; Chen, Ying-Yeh

    2016-01-01

    Background Prompt recognition of and intervention in negative emotions is crucial for patients with depression. Mobile phones and mobile apps are suitable technologies that can be used to recognize negative emotions and intervene if necessary. Objective Mobile phone usage patterns can be associated with concurrent emotional states. The objective of this study is to adapt machine-learning methods to analyze such patterns for the prediction of negative emotion. Methods We developed an Android-based app to capture emotional states and mobile phone usage patterns, which included call logs and app usage. Visual analog scales (VASs) were used to report negative emotions in dimensions of depression, anxiety, and stress. In the system-training phase, participants were requested to tag their emotions for 14 consecutive days. Five feature-selection methods were used to determine individual usage patterns and four machine-learning methods were tested. Finally, rank product scoring was used to select the best combination to construct the prediction model. In the system evaluation phase, participants were then requested to verify the predicted negative emotions for at least 5 days. Results Out of 40 enrolled healthy participants, we analyzed data from the 28 participants with sufficient emotion tags (30% women, 9/28; mean [SD] age 29.2 [5.1] years). The combination of 2-hour time slots, greedy forward selection, and the Naïve Bayes method was chosen for the prediction model. We further validated the personalized models in 18 participants who performed at least 5 days of model evaluation. Overall, the predictive accuracy for negative emotions was 86.17%. Conclusion We developed a system capable of predicting negative emotions based on mobile phone usage patterns. This system has potential for ecological momentary intervention (EMI) for depressive disorders by automatically recognizing negative emotions and providing people with preventive treatments before they escalate to clinical depression. PMID:27511748
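
    A rough sketch of the chosen model combination, greedy forward feature selection wrapped around a Naïve Bayes classifier scored by cross-validation; the feature matrix, label vector, and stopping rule are illustrative assumptions rather than the study's exact pipeline.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

def greedy_forward_selection(X, y, max_features=5, cv=5):
    """Add, one at a time, the feature that most improves CV accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    while remaining and len(selected) < max_features:
        scores = []
        for f in remaining:
            cols = selected + [f]
            score = cross_val_score(GaussianNB(), X[:, cols], y, cv=cv).mean()
            scores.append((score, f))
        score, f = max(scores)
        if score <= best_score:          # stop when no feature helps any more
            break
        best_score = score
        selected.append(f)
        remaining.remove(f)
    return selected, best_score

# Hypothetical usage features aggregated over 2-hour slots (call counts,
# app-launch counts, ...) with binary negative-emotion labels.
rng = np.random.default_rng(0)
X = rng.random((200, 12))
y = rng.integers(0, 2, size=200)
print(greedy_forward_selection(X, y))
```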

  10. The Effects of Visual Degradation on Attended Objects and the Ability to Process Unattended Objects within the Visual Array

    DTIC Science & Technology

    2010-09-01

    field at once (e.g., Biederman, Blickle, Teitelbaum, & Klatsky, 1988), and objects of interest typically receive the attention required to recognize them...field (Biederman & Cooper, 1991) and image size changes (Biederman & Cooper, 1992). Yet, only attended objects are recognized when mirror images...left-right reversals) occur (Biederman & Cooper, 1991). Due to these results, Hummel (2001) proposed that attended images are processed by both

  11. Toward a Shared Vocabulary for Visual Analysis: An Analytic Toolkit for Deconstructing the Visual Design of Graphic Novels

    ERIC Educational Resources Information Center

    Connors, Sean P.

    2012-01-01

    Literacy educators might advocate using graphic novels to develop students' visual literacy skills, but teachers who lack a vocabulary for engaging in close analysis of visual texts may be reluctant to teach them. Recognizing this, teacher educators should equip preservice teachers with a vocabulary for analyzing visual texts. This article…

  12. Visual Literacy and Reading: Let's Take a Closer Look.

    ERIC Educational Resources Information Center

    Castle, Marrietta Walden

    Based on the notion that visual decisions play an important role in what children recognize and interpret in books and that teachers have a special responsibility to help students become visually literate, this article draws parallels between visual and verbal concepts and suggests some activities for teaching "picture reading" skills in the…

  13. Effects of Computer-Based Visual Representation on Mathematics Learning and Cognitive Load

    ERIC Educational Resources Information Center

    Yung, Hsin I.; Paas, Fred

    2015-01-01

    Visual representation has been recognized as a powerful learning tool in many learning domains. Based on the assumption that visual representations can support deeper understanding, we examined the effects of visual representations on learning performance and cognitive load in the domain of mathematics. An experimental condition with visual…

  14. Development of robust behaviour recognition for an at-home biomonitoring robot with assistance of subject localization and enhanced visual tracking.

    PubMed

    Imamoglu, Nevrez; Dorronzoro, Enrique; Wei, Zhixuan; Shi, Huangjun; Sekine, Masashi; González, José; Gu, Dongyun; Chen, Weidong; Yu, Wenwei

    2014-01-01

    Our research is focused on the development of an at-home health care biomonitoring mobile robot for people in need of care. The main task of the robot is to detect and track a designated subject while recognizing his/her activity for analysis and to provide warnings in an emergency. To push the system towards real application, in this study we tested the robustness of the robot system under several major environment changes, control parameter changes, and subject variation. First, an improved color tracker was analyzed to find the limitations and constraints of the robot's visual tracking, considering suitable illumination values and tracking distance intervals. Then, with regard to subject safety and continuous robot-based subject tracking, various control parameters were tested on different layouts in a room. Finally, because the main objective of the system is to identify walking activities with different patterns for further analysis, we proposed a fast, simple, and person-specific activity recognition model that makes full use of localization information and is robust to partial occlusion. The proposed activity recognition algorithm was tested on different walking patterns with different subjects, and the results showed high recognition accuracy.

  15. Development of Robust Behaviour Recognition for an at-Home Biomonitoring Robot with Assistance of Subject Localization and Enhanced Visual Tracking

    PubMed Central

    Imamoglu, Nevrez; Dorronzoro, Enrique; Wei, Zhixuan; Shi, Huangjun; González, José; Gu, Dongyun; Yu, Wenwei

    2014-01-01

    Our research is focused on the development of an at-home health care biomonitoring mobile robot for people in need of care. The main task of the robot is to detect and track a designated subject while recognizing his/her activity for analysis and to provide warnings in an emergency. To push the system towards real application, in this study we tested the robustness of the robot system under several major environment changes, control parameter changes, and subject variation. First, an improved color tracker was analyzed to find the limitations and constraints of the robot's visual tracking, considering suitable illumination values and tracking distance intervals. Then, with regard to subject safety and continuous robot-based subject tracking, various control parameters were tested on different layouts in a room. Finally, because the main objective of the system is to identify walking activities with different patterns for further analysis, we proposed a fast, simple, and person-specific activity recognition model that makes full use of localization information and is robust to partial occlusion. The proposed activity recognition algorithm was tested on different walking patterns with different subjects, and the results showed high recognition accuracy. PMID:25587560

  16. Widespread correlation patterns of fMRI signal across visual cortex reflect eccentricity organization.

    PubMed

    Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan E B; Kastner, Sabine; Hasson, Uri

    2015-02-19

    The human visual system can be divided into more than two dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas in data collected during rest and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization, with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations, including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas.

  17. Visual capability to receive character information. Part I: How many characters can we recognize at a glance?

    PubMed

    Fukuda, T

    1992-01-01

    A study was made of the capability to receive character information and of the factors restricting it. The study showed that this capability, as indicated by memory span, was limited by the average number of characters of the words made up of the individual characters; calculated in terms of information quantity, there was no difference among the individual characters. The difference in memory span as a function of the size of the presented pattern was negligible. Differences in the way the characters were arranged produced differences in the point at which memory span saturated. This phenomenon is explained by the lateral interference effect acting among adjacent characters.

  18. Informal interprofessional learning: visualizing the clinical workplace.

    PubMed

    Wagter, Judith Martine; van de Bunt, Gerhard; Honing, Marina; Eckenhausen, Marina; Scherpbier, Albert

    2012-05-01

    The daily collaboration of senior doctors, residents, and nurses offers major potential for sharing knowledge between professionals. Therefore, more attention needs to be paid to informal learning in order to create strategies and appropriate conditions for enhancing and realizing informal learning in the workplace. The aim of this study is to visualize and describe patterns of informal interprofessional learning relations among staff in complex care. Questionnaires with four network questions, recognized as indicators of informal learning in the clinical workplace, were handed out to intensive and medium care unit (ICU/MCU) staff members (N = 108), of which 77% were completed and returned. Data were analyzed using social network analysis and Mokken scale analysis. The densities, tie strengths, and reciprocity of the four networks show the MCU and ICU nurses as subgroups within the ward and reveal central but relatively one-sided relations of the senior doctors with nurses and residents. Based on the analyses, we formulated a scale of the intensity of informal learning relations that can be used to understand and stimulate informal interprofessional learning.
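
    Two of the network measures reported above, density and reciprocity, can be computed directly with networkx on a directed learning-relations network; the toy edge list below is invented for illustration.

```python
import networkx as nx

# Hypothetical directed "whom do you ask for advice?" network among staff.
G = nx.DiGraph()
G.add_edges_from([
    ("nurse_1", "senior_doctor"), ("nurse_2", "senior_doctor"),
    ("resident_1", "senior_doctor"), ("senior_doctor", "resident_1"),
    ("nurse_1", "nurse_2"), ("nurse_2", "nurse_1"),
])

print("density:", nx.density(G))           # proportion of possible ties present
print("reciprocity:", nx.reciprocity(G))   # share of ties that are mutual
```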

  19. The Other-Race Effect Develops During Infancy

    PubMed Central

    Quinn, Paul C.; Slater, Alan M.; Lee, Kang; Ge, Liezhong; Pascalis, Olivier

    2008-01-01

    Experience plays a crucial role in the development of face processing. In the study reported here, we investigated how faces observed within the visual environment affect the development of the face-processing system during the 1st year of life. We assessed 3-, 6-, and 9-month-old Caucasian infants' ability to discriminate faces within their own racial group and within three other-race groups (African, Middle Eastern, and Chinese). The 3-month-old infants demonstrated recognition in all conditions, the 6-month-old infants were able to recognize Caucasian and Chinese faces only, and the 9-month-old infants' recognition was restricted to own-race faces. The pattern of preferences indicates that the other-race effect is emerging by 6 months of age and is present at 9 months of age. The findings suggest that facial input from the infant's visual environment is crucial for shaping the face-processing system early in infancy, resulting in differential recognition accuracy for faces of different races in adulthood. PMID:18031416

  20. A Spiking Neural Network Based Cortex-Like Mechanism and Application to Facial Expression Recognition

    PubMed Central

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured cortex-like model, which can be described simply as a feedforward, hierarchical simulation of the ventral stream of the visual cortex using a biologically plausible, computationally convenient spiking neural network (SNN) system. The motivation comes directly from recent pioneering work on detailed functional decomposition of the feedforward pathway of the ventral visual stream and from developments in artificial SNNs. By combining the logical structure of the cortical hierarchy with the computing power of the spiking neuron model, a practical framework is presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortex-like feedforward hierarchical framework is capable of dealing with complicated pattern recognition problems. This suggests that, by combining cognitive models with modern neurocomputational approaches, the neurosystematic approach to studying cortex-like mechanisms has the potential to extend our knowledge of the brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in rich, dynamic, and complex environments, providing a new starting point for improved models of visual cortex-like mechanisms. PMID:23193391
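
    A minimal sketch of the basic building block of such a system, a leaky integrate-and-fire (LIF) spiking neuron driven by an input current; the membrane parameters are generic textbook values, not the paper's model.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_threshold=-0.050, resistance=1e7):
    """Leaky integrate-and-fire neuron; returns membrane trace and spike times."""
    v = v_rest
    trace, spikes = [], []
    for step, i_t in enumerate(input_current):
        dv = (-(v - v_rest) + resistance * i_t) * dt / tau
        v += dv
        if v >= v_threshold:             # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_reset                  # membrane potential is reset
        trace.append(v)
    return np.array(trace), spikes

current = np.full(1000, 2e-9)            # constant 2 nA input for 1 second
trace, spikes = simulate_lif(current)
print(len(spikes), "spikes")
```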

  1. A biological hierarchical model based underwater moving object detection.

    PubMed

    Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen

    2014-01-01

    Underwater moving object detection is key to many underwater computer vision tasks, such as object recognition, localization, and tracking. Given the remarkable visual sensing abilities of underwater animals, the visual mechanisms of aquatic species are generally regarded as cues for building bionic models that are better adapted to underwater environments. However, low accuracy and the absence of prior-knowledge learning limit the use of existing models in underwater applications. To address the problems caused by inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing mechanism of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. First, the image is segmented into subblocks, and intensity information is extracted to establish a background model that roughly separates object and background regions. The texture of each pixel in the rough object region is then analyzed to generate a precise object contour. Experimental results demonstrate that the proposed method performs well: compared with the traditional Gaussian background model, object detection completeness is 97.92%, with only 0.94% of the background region included in the detection results.

  2. A Biological Hierarchical Model Based Underwater Moving Object Detection

    PubMed Central

    Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen

    2014-01-01

    Underwater moving object detection is key to many underwater computer vision tasks, such as object recognition, localization, and tracking. Given the remarkable visual sensing abilities of underwater animals, the visual mechanisms of aquatic species are generally regarded as cues for building bionic models that are better adapted to underwater environments. However, low accuracy and the absence of prior-knowledge learning limit the use of existing models in underwater applications. To address the problems caused by inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing mechanism of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. First, the image is segmented into subblocks, and intensity information is extracted to establish a background model that roughly separates object and background regions. The texture of each pixel in the rough object region is then analyzed to generate a precise object contour. Experimental results demonstrate that the proposed method performs well: compared with the traditional Gaussian background model, object detection completeness is 97.92%, with only 0.94% of the background region included in the detection results. PMID:25140194
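
    As a rough illustration of the two-stage idea described above (a block-wise intensity background model followed by a pixel-level texture check), the sketch below implements a simplified version with numpy. The function names, block size, and thresholds are invented for the example and do not reproduce the frog-eye model or the parameters of the paper.

```python
# Minimal two-stage, block-then-pixel background model sketch: block-wise intensity
# comparison gives a rough foreground, then a local-variance "texture" check refines it.
import numpy as np

def rough_foreground(frame, background, block=16, thresh=20.0):
    """Flag blocks whose mean intensity departs from the background model."""
    h, w = frame.shape
    mask = np.zeros_like(frame, dtype=bool)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            diff = abs(frame[y:y+block, x:x+block].mean()
                       - background[y:y+block, x:x+block].mean())
            if diff > thresh:
                mask[y:y+block, x:x+block] = True
    return mask

def refine_with_texture(frame, rough_mask, win=3, var_thresh=50.0):
    """Keep only rough-foreground pixels whose local intensity variance is high."""
    refined = np.zeros_like(rough_mask)
    r = win // 2
    for y, x in zip(*np.where(rough_mask)):
        patch = frame[max(0, y-r):y+r+1, max(0, x-r):x+r+1]
        refined[y, x] = patch.var() > var_thresh
    return refined

# toy frames: flat background with a brighter, textured "object" aligned to block boundaries
rng = np.random.default_rng(0)
bg = np.full((64, 64), 100.0)
frame = bg.copy()
frame[16:48, 16:48] = 100 + 60 * rng.random((32, 32))

mask = refine_with_texture(frame, rough_foreground(frame, bg))
print("foreground pixels detected:", int(mask.sum()))
```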

  3. A spiking neural network based cortex-like mechanism and application to facial expression recognition.

    PubMed

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured cortex-simulated model, which can be described simply as a feedforward, hierarchical simulation of the ventral stream of the visual cortex using a biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering work on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of the visual cortex and from developments in artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy with the computing power of the spiking neuron model, a practical framework is presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortex-like feedforward hierarchical framework is capable of dealing with complicated pattern recognition problems. This suggests that, by combining cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanisms has the potential to extend our knowledge of the brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in rich, dynamic, and complex environments, providing a new starting point for improved models of visual cortex-like mechanisms.

  4. Patterned light flash evoked short latency activity in the visual system of visually normal and in amblyopic subjects.

    PubMed

    Sjöström, A; Abrahamsson, M

    1994-04-01

    In a previous experimental study on the anaesthetized cat it was shown that a short latency (35-40 ms) cortical potential changed polarity depending on the presence or absence of a pattern in the flash stimulus. The results suggested one pathway of neuronal activation in the cortex for a pattern that was within the level of resolution and another for patterns that were not. It was implied that a similar difference in impulse transmission to pattern and non-pattern stimuli may be recorded in humans. The present paper describes recordings of the short-latency visual evoked response to varying high-intensity light flash checkerboard pattern stimuli in visually normal and amblyopic children and adults. When stimulating the normal eye, a visual evoked response potential with a peak latency between 35 and 40 ms showed a polarity change to patterned compared with non-patterned stimulation. The visual evoked response resolution limit could be correlated with a visual acuity of 0.5 and below. In amblyopic eyes the shift in polarity was recorded at the acuity limit level. The latency of the pattern-dependent potential was increased in patients with amblyopia compared with normal subjects, but was not directly related to the degree of amblyopia. It is concluded that the short-latency visual evoked response, which mainly represents retino-geniculo-cortical activation, may be used to estimate visual resolution below the 0.5 acuity level. (ABSTRACT TRUNCATED AT 250 WORDS)

  5. Do Dyslexic Individuals Present a Reduced Visual Attention Span? Evidence from Visual Recognition Tasks of Non-Verbal Multi-Character Arrays

    ERIC Educational Resources Information Center

    Yeari, Menahem; Isser, Michal; Schiff, Rachel

    2017-01-01

    A controversy has recently developed regarding the hypothesis that developmental dyslexia may be caused, in some cases, by a reduced visual attention span (VAS). To examine this hypothesis, independent of phonological abilities, researchers tested the ability of dyslexic participants to recognize arrays of unfamiliar visual characters. Employing…

  6. Multiphase magnetic systems: Measurement and simulation

    NASA Astrophysics Data System (ADS)

    Cao, Yue; Ahmadzadeh, Mostafa; Xu, Ke; Dodrill, Brad; McCloy, John S.

    2018-01-01

    Multiphase magnetic systems are common in nature and are increasingly being recognized in technical applications. One characterization method which has shown great promise for determining separate and collective effects of multiphase magnetic systems is first order reversal curves (FORCs). Several examples are given of FORC patterns which provide distinguishing evidence of multiple phases. In parallel, a visualization method for understanding multiphase magnetic interaction is given, which allocates Preisach magnetic elements as an input "Preisach hysteron distribution pattern" to enable simulation of different "wasp-waisted" magnetic behaviors. These simulated systems allow reproduction of different major hysteresis loops and FORC patterns of real systems and parameterized theoretical systems. The experimental FORC measurements and FORC diagrams of four commercially obtained magnetic materials, particularly those sold as nanopowders, show that these materials are often not phase pure. They exhibit complex hysteresis behaviors that are not predictable based on relative phase fraction obtained by characterization methods such as diffraction. These multiphase materials, consisting of various fractions of BaFe12O19, ɛ-Fe2O3, and γ-Fe2O3, are discussed.
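
    To make the "Preisach hysteron distribution pattern" idea concrete, the toy sketch below sums a population of rectangular hysterons, each switching up at a field alpha and down at a field beta, while the applied field is swept; mixing a low-coercivity and a high-coercivity population is one simple way to obtain a constricted, wasp-waisted major loop. The distributions and the field sweep are invented and are not the parameters used in the paper.

```python
# Toy Preisach-type simulation: each hysteron switches to +1 when H >= alpha and to -1
# when H <= beta (alpha > beta). Mixing two hysteron populations with very different
# coercivities produces a constricted ("wasp-waisted") major loop. Values are illustrative.
import numpy as np

def preisach_loop(hysterons, field_sweep):
    """hysterons: (N, 2) array of (alpha, beta) switching fields; returns M per field step."""
    state = -np.ones(len(hysterons))                          # start fully negative
    M = []
    for H in field_sweep:
        state = np.where(H >= hysterons[:, 0], 1.0, state)    # switch up at alpha
        state = np.where(H <= hysterons[:, 1], -1.0, state)   # switch down at beta
        M.append(state.mean())
    return np.array(M)

rng = np.random.default_rng(0)
soft = np.stack([rng.normal(5, 1, 500), rng.normal(-5, 1, 500)], axis=1)    # low-coercivity phase
hard = np.stack([rng.normal(40, 5, 500), rng.normal(-40, 5, 500)], axis=1)  # high-coercivity phase
hysterons = np.vstack([soft, hard])

H = np.concatenate([np.linspace(-60, 60, 300), np.linspace(60, -60, 300)])  # up then down sweep
M = preisach_loop(hysterons, H)
idx = 300 + int(np.argmin(np.abs(H[300:])))                  # H closest to zero, descending branch
print("normalized remanence (M at H≈0):", round(float(M[idx]), 3))
```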

  7. Dietary Patterns Seem to Influence the Development of Perfusion Changes in Cardiac Syndrome X Patients.

    PubMed

    Szot, Wojciech; Zając, Joanna; Kostkiewicz, Magdalena; Kolarzyk, Emilia

    2015-01-01

    Cardiac syndrome X (CSX) is linked with changes in the heart's micro-vasculature, without significant changes in the main coronary vessels. According to the ESC 2013 stable coronary artery disease criteria, CSX was replaced by microvascular angina (MA). While no changes in the main coronary vessels are present, most patients still suffer from angina-like chest pains, which significantly diminish their quality of life. CSX is recognized among other coronary diseases and is now considered to be a form of stable angina. In most CSX patients we can visualize perfusion changes in the left ventricle. Since it is well known that diet can greatly influence the development of coronary disease, our aim was to evaluate the influence of diet on myocardial perfusion in a group of patients diagnosed with CSX. In addition, we tried to verify whether there is any correlation between dietary patterns and the perfusion changes visualized in this group of patients. Toward this goal, we screened for the presence of CSX in a group of 436 women who suffered from angina-like symptoms and whose routinely performed angiography revealed no changes in the coronary vessels. Of these, 55 women with a CSX diagnosis completed questionnaires regarding their nutritional patterns and underwent both myocardial perfusion imaging (MPI) and exercise tests. In the studied group, dietary patterns were far from normal values, with the majority of women consuming too much protein, animal fat and sugar in their daily diet, and too little complex carbohydrate and oil. We were not able to find definite correlations between diet and perfusion changes; however, women whose diets were high in fat and protein seemed to have worse perfusion patterns on MPI. Nutritional patterns seem to have an impact on the development of myocardial perfusion changes in CSX patients.

  8. Our World Their World

    ERIC Educational Resources Information Center

    Brisco, Nicole

    2011-01-01

    Build, create, make, blog, develop, organize, structure, perform. These are just a few verbs that illustrate the visual world. These words create images that allow students to respond to their environment. Visual culture studies recognize the predominance of visual forms of media, communication, and information in the postmodern world. This…

  9. The man who mistook his neuropsychologist for a popstar: when configural processing fails in acquired prosopagnosia

    PubMed Central

    Jansari, Ashok; Miller, Scott; Pearce, Laura; Cobb, Stephanie; Sagiv, Noam; Williams, Adrian L.; Tree, Jeremy J.; Hanley, J. Richard

    2015-01-01

    We report the case of an individual with acquired prosopagnosia who experiences extreme difficulties in recognizing familiar faces in everyday life despite excellent object recognition skills. Formal testing indicates that he is also severely impaired at remembering pre-experimentally unfamiliar faces and that he takes an extremely long time to identify famous faces and to match unfamiliar faces. Nevertheless, he performs as accurately and quickly as controls at identifying inverted familiar and unfamiliar faces and can recognize famous faces from their external features. He also performs as accurately as controls at recognizing famous faces when fracturing conceals the configural information in the face. He shows evidence of impaired global processing but normal local processing of Navon figures. This case appears to reflect the clearest example yet of an acquired prosopagnosic patient whose familiar face recognition deficit is caused by a severe configural processing deficit in the absence of any problems in featural processing. These preserved featural skills together with apparently intact visual imagery for faces allow him to identify a surprisingly large number of famous faces when unlimited time is available. The theoretical implications of this pattern of performance for understanding the nature of acquired prosopagnosia are discussed. PMID:26236212

  10. Margined winner-take-all: New learning rule for pattern recognition.

    PubMed

    Fukushima, Kunihiko

    2018-01-01

    The neocognitron is a deep (multi-layered) convolutional neural network that can be trained to recognize visual patterns robustly. In the intermediate layers of the neocognitron, local features are extracted from input patterns. In the deepest layer, input patterns are classified into classes based on the features extracted in the intermediate layers; a method called IntVec (interpolating-vector) is used for this purpose. This paper proposes a new learning rule, margined winner-take-all (mWTA), for training the deepest layer. Each time a training pattern is presented during learning, if the result of recognition by WTA (winner-take-all) is an error, a new cell is generated in the deepest layer. A certain amount of margin is added to the WTA: during learning only, a handicap is given to cells of classes other than that of the training vector, and the winner is chosen under this handicap. By introducing the margin to the WTA, a compact set of cells can be generated with which a high recognition rate is obtained at a small computational cost. The ability of mWTA is demonstrated by computer simulation.
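
    The sketch below illustrates one way to implement the margined WTA step described above, assuming the deepest-layer cells are reference vectors compared by cosine similarity; full IntVec recognition is replaced by a plain nearest-reference rule for brevity. The direction of the handicap (rival-class similarities are boosted, so the training class must win by at least the margin before cell generation is skipped) is our reading of the description and should be treated as an assumption.

```python
# Minimal margined winner-take-all (mWTA) learning-step sketch, not the full neocognitron:
# cells are reference vectors, similarity is cosine, and during learning rival classes get a
# margin boost, so a new cell is recruited unless the true class wins by at least the margin.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def mwta_train_step(cells, labels, x, y, margin=0.1):
    """cells: list of reference vectors; labels: their class labels; (x, y): training pair."""
    if cells:
        sims = np.array([cosine(c, x) for c in cells])
        handicapped = sims + margin * (np.array(labels) != y)   # boost rival-class cells
        winner = int(np.argmax(handicapped))
        if labels[winner] == y:
            return cells, labels          # true class wins even under the handicap: no new cell
    cells.append(x.copy())                # otherwise recruit a new cell for class y
    labels.append(y)
    return cells, labels

def recognize(cells, labels, x):
    sims = [cosine(c, x) for c in cells]  # plain WTA at test time (no margin)
    return labels[int(np.argmax(sims))]

# toy usage with 2-D feature vectors from two classes
rng = np.random.default_rng(1)
cells, labels = [], []
for _ in range(200):
    y = int(rng.integers(0, 2))
    x = rng.normal([1.0, 0.0] if y == 0 else [0.0, 1.0], 0.3)
    cells, labels = mwta_train_step(cells, labels, x, y)
print("cells recruited:", len(cells), "| prediction for [0.9, 0.1]:", recognize(cells, labels, np.array([0.9, 0.1])))
```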

  11. Widespread correlation patterns of fMRI signal across visual cortex reflect eccentricity organization

    PubMed Central

    Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan EB; Kastner, Sabine; Hasson, Uri

    2015-01-01

    The human visual system can be divided into over two-dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas. DOI: http://dx.doi.org/10.7554/eLife.03952.001 PMID:25695154

  12. A rodent model for the study of invariant visual object recognition

    PubMed Central

    Zoccolan, Davide; Oertelt, Nadja; DiCarlo, James J.; Cox, David D.

    2009-01-01

    The human visual system is able to recognize objects despite tremendous variation in their appearance on the retina resulting from variation in view, size, lighting, etc. This ability—known as “invariant” object recognition—is central to visual perception, yet its computational underpinnings are poorly understood. Traditionally, nonhuman primates have been the animal model-of-choice for investigating the neuronal substrates of invariant recognition, because their visual systems closely mirror our own. Meanwhile, simpler and more accessible animal models such as rodents have been largely overlooked as possible models of higher-level visual functions, because their brains are often assumed to lack advanced visual processing machinery. As a result, little is known about rodents' ability to process complex visual stimuli in the face of real-world image variation. In the present work, we show that rats possess more advanced visual abilities than previously appreciated. Specifically, we trained pigmented rats to perform a visual task that required them to recognize objects despite substantial variation in their appearance, due to changes in size, view, and lighting. Critically, rats were able to spontaneously generalize to previously unseen transformations of learned objects. These results provide the first systematic evidence for invariant object recognition in rats and argue for an increased focus on rodents as models for studying high-level visual processing. PMID:19429704

  13. From Iconic to Lingual: Interpreting Visual Statements.

    ERIC Educational Resources Information Center

    Curtiss, Deborah

    In this age of proliferating visual communications, there is a permissiveness in subject matter, content, and meaning that is exhilarating, yet overwhelming to interpret in a meaningful or consensual way. By recognizing visual statements, whether a piece of sculpture, an advertisement, a video, or a building, as communication, one can approach…

  14. Visual Factors in Reading

    ERIC Educational Resources Information Center

    Singleton, Chris; Henderson, Lisa-Marie

    2006-01-01

    This article reviews current knowledge about how the visual system recognizes letters and words, and the impact on reading when parts of the visual system malfunction. The physiology of eye and brain places important constraints on how we process text, and the efficient organization of the neurocognitive systems involved is not inherent but…

  15. Professional Standards for Visual Arts Educators

    ERIC Educational Resources Information Center

    National Art Education Association, 2009

    2009-01-01

    The National Art Education Association (NAEA) is committed to ensuring that all students have access to a high quality, certified visual arts educator in every K-12 public school across the United States, recognizing that effective arts instruction is a core component of 21st-century education. "Professional Standards for Visual Arts…

  16. Decoding brain responses to pixelized images in the primary visual cortex: implications for visual cortical prostheses

    PubMed Central

    Guo, Bing-bing; Zheng, Xiao-lin; Lu, Zhen-gang; Wang, Xing; Yin, Zheng-qin; Hou, Wen-sheng; Meng, Ming

    2015-01-01

    Visual cortical prostheses have the potential to restore partial vision. Still limited by the low-resolution visual percepts provided by visual cortical prostheses, implant wearers can currently only “see” pixelized images, and how to obtain the specific brain responses to different pixelized images in the primary visual cortex (the implant area) is still unknown. We conducted a functional magnetic resonance imaging experiment on normal human participants to investigate the brain activation patterns in response to 18 different pixelized images. Each brain activation pattern consisted of 100 voxels selected from the primary visual cortex, with a voxel size of 4 mm × 4 mm × 4 mm. Multi-voxel pattern analysis was used to test whether these 18 different brain activation patterns were specific. We chose a Linear Support Vector Machine (LSVM) as the classifier in this study. The results showed that the classification accuracies of different brain activation patterns were significantly above chance level, which suggests that the classifier can successfully distinguish the brain activation patterns. Our results suggest that the specific brain activation patterns to different pixelized images can be obtained in the primary visual cortex using a 4 mm × 4 mm × 4 mm voxel size and a 100-voxel pattern. PMID:26692860
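
    A minimal sketch of this kind of multi-voxel pattern analysis is shown below: cross-validated classification of 100-voxel patterns into 18 classes with a linear SVM (scikit-learn's LinearSVC). The data are random stand-ins rather than fMRI responses, so accuracy should sit near the 1/18 chance level; the preprocessing and permutation testing of the actual study are not reproduced.

```python
# Minimal MVPA-style sketch: cross-validated linear-SVM classification of simulated
# 100-voxel activation patterns into 18 image classes. Data are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_classes, n_trials_per_class, n_voxels = 18, 10, 100
X = rng.normal(size=(n_classes * n_trials_per_class, n_voxels))   # voxel patterns (trials x voxels)
y = np.repeat(np.arange(n_classes), n_trials_per_class)           # image labels

clf = LinearSVC(max_iter=20000)
scores = cross_val_score(clf, X, y, cv=5)                          # stratified 5-fold accuracy
print(f"mean accuracy: {scores.mean():.3f} (chance = {1 / n_classes:.3f})")
```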

  17. A review of visual perception mechanisms that regulate rapid adaptive camouflage in cuttlefish.

    PubMed

    Chiao, Chuan-Chin; Chubb, Charles; Hanlon, Roger T

    2015-09-01

    We review recent research on the visual mechanisms of rapid adaptive camouflage in cuttlefish. These neurophysiologically complex marine invertebrates can camouflage themselves against almost any background, yet their ability to quickly (0.5-2 s) alter their body patterns on different visual backgrounds poses a vexing challenge: how to pick the correct body pattern amongst their repertoire. The ability of cuttlefish to change appropriately requires a visual system that can rapidly assess complex visual scenes and produce the motor responses (the neurally controlled body patterns) that achieve camouflage. Using specifically designed visual backgrounds and assessing the corresponding body patterns quantitatively, we and others have uncovered several aspects of scene variation that are important in regulating cuttlefish patterning responses. These include spatial scale of background pattern, background intensity, background contrast, object edge properties, object contrast polarity, object depth, and the presence of 3D objects. Moreover, arm postures and skin papillae are also regulated visually for additional aspects of concealment. By integrating these visual cues, cuttlefish are able to rapidly select appropriate body patterns for concealment throughout diverse natural environments. This sensorimotor approach of studying cuttlefish camouflage thus provides unique insights into the mechanisms of visual perception in an invertebrate image-forming eye.

  18. Seeing with ears: Sightless humans' perception of dog bark provides a test for structural rules in vocal communication.

    PubMed

    Molnár, Csaba; Pongrácz, Péter; Miklósi, Adám

    2010-05-01

    Prerecorded family dog (Canis familiaris) barks were played back to groups of congenitally sightless, sightless with prior visual experience, and sighted people (none of whom had ever owned a dog). We found that blind people without any previous canine visual experience can accurately categorize various dog barks recorded in different contexts, and their results are very close to those of sighted people in characterizing the emotional content of barks. These findings suggest that humans can recognize some of the most important motivational states reflecting, for example, fear or aggression in a dog's bark without any visual experience. It is very likely that this result can be generalized to other mammalian species; that is, no visual experience of another individual is needed for recognizing some of the most important motivational states of the caller.

  19. A Comparison of the Visual Attention Patterns of People with Aphasia and Adults without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes

    ERIC Educational Resources Information Center

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-01-01

    Purpose: The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Method: Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological…

  20. Image-enhanced endoscopy with I-scan technology for the evaluation of duodenal villous patterns.

    PubMed

    Cammarota, Giovanni; Ianiro, Gianluca; Sparano, Lucia; La Mura, Rossella; Ricci, Riccardo; Larocca, Luigi M; Landolfi, Raffaele; Gasbarrini, Antonio

    2013-05-01

    I-scan technology is a newly developed endoscopic tool that works in real time and uses a digital contrast method to enhance the endoscopic image. We performed a feasibility study aimed at determining the diagnostic accuracy of i-scan technology for the evaluation of duodenal villous patterns, with histology as the reference standard. In this prospective, single-center, open study, patients undergoing upper endoscopy for a histological evaluation of the duodenal mucosa were enrolled. All patients underwent upper endoscopy using high-resolution view in association with i-scan technology. During endoscopy, duodenal villous patterns were evaluated and classified as normal, partial villous atrophy, or marked villous atrophy. Results were then compared with histology. One hundred fifteen subjects were recruited in this study. The endoscopist found marked villous atrophy of the duodenum in 12 subjects, partial villous atrophy in 25, and normal villi in the remaining 78 individuals. The i-scan system demonstrated very high accuracy (100%) in the detection of marked villous atrophy, and somewhat lower accuracy in identifying partial villous atrophy or normal villous patterns (90% for both). Image-enhancing endoscopic technology allows clear visualization of villous patterns in the duodenum. By switching from the standard to the i-scan view, it is possible to optimize the accuracy of endoscopy in recognizing villous alterations in subjects undergoing endoscopic evaluation.

  1. PROTERAN: animated terrain evolution for visual analysis of patterns in protein folding trajectory.

    PubMed

    Zhou, Ruhong; Parida, Laxmi; Kapila, Kush; Mudur, Sudhir

    2007-01-01

    The mechanism of protein folding remains largely a mystery in molecular biology, despite enormous effort from many groups over the past decades. Currently, the protein folding mechanism is often characterized by calculating the free energy landscape versus various reaction coordinates, such as the fraction of native contacts, the radius of gyration, and so on. In this paper, we present an integrated approach towards understanding the folding process via visual analysis of patterns in these reaction coordinates. The three disparate processes, (1) protein folding simulation, (2) pattern elicitation, and (3) visualization of patterns, work in tandem. Thus, as the protein folds, the changing landscape in the pattern space can be viewed via the visualization tool PROTERAN, a program we developed for this purpose. We first present an incremental (on-line) trie-based pattern discovery algorithm to elicit the patterns and then describe the terrain-metaphor-based visualization tool. Using two small example proteins, a beta-hairpin and the designed protein Trp-cage, we then demonstrate that this combined pattern discovery and visualization approach extracts crucial information about protein folding intermediates and mechanism.
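
    As a simplified illustration of the trie idea (not the actual PROTERAN algorithm), the sketch below incrementally counts short patterns over a stream of discretized reaction-coordinate symbols, updating the trie as each new frame of the trajectory arrives. The symbols, window length, and frequency threshold are invented.

```python
# Minimal incremental trie-based pattern counting over a stream of discretized
# reaction-coordinate symbols (e.g., binned fraction-of-native-contacts values).
# Each new frame inserts every pattern (length <= max_len) that ends at that frame.
class TrieNode:
    def __init__(self):
        self.children = {}
        self.count = 0

class PatternTrie:
    def __init__(self, max_len=3):
        self.root = TrieNode()
        self.max_len = max_len
        self.window = []

    def add_frame(self, symbol):
        """Incrementally count every substring of the sliding window ending at this frame."""
        self.window = (self.window + [symbol])[-self.max_len:]
        for start in range(len(self.window)):
            node = self.root
            for s in self.window[start:]:
                node = node.children.setdefault(s, TrieNode())
            node.count += 1          # one occurrence of the pattern window[start:]

    def frequent(self, min_count=2):
        """Return all patterns seen at least min_count times."""
        out, stack = [], [(self.root, [])]
        while stack:
            node, prefix = stack.pop()
            for sym, child in node.children.items():
                if child.count >= min_count:
                    out.append(("".join(prefix + [sym]), child.count))
                stack.append((child, prefix + [sym]))
        return out

# toy trajectory of binned reaction-coordinate states
trie = PatternTrie(max_len=3)
for symbol in "AABABAABA":
    trie.add_frame(symbol)
print(trie.frequent(min_count=3))
```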

  2. Visual Equivalence and Amodal Completion in Cuttlefish

    PubMed Central

    Lin, I-Rong; Chiao, Chuan-Chin

    2017-01-01

    Modern cephalopods are notably the most intelligent invertebrates and this is accompanied by keen vision. Despite extensive studies investigating the visual systems of cephalopods, little is known about their visual perception and object recognition. In the present study, we investigated the visual processing of the cuttlefish Sepia pharaonis, including visual equivalence and amodal completion. Cuttlefish were trained to discriminate images of shrimp and fish using the operant conditioning paradigm. After cuttlefish reached the learning criteria, a series of discrimination tasks were conducted. In the visual equivalence experiment, several transformed versions of the training images, such as images reduced in size, images reduced in contrast, sketches of the images, the contours of the images, and silhouettes of the images, were used. In the amodal completion experiment, partially occluded views of the original images were used. The results showed that cuttlefish were able to treat the training images of reduced size and sketches as the visual equivalence. Cuttlefish were also capable of recognizing partially occluded versions of the training image. Furthermore, individual differences in performance suggest that some cuttlefish may be able to recognize objects when visual information was partly removed. These findings support the hypothesis that the visual perception of cuttlefish involves both visual equivalence and amodal completion. The results from this research also provide insights into the visual processing mechanisms used by cephalopods. PMID:28220075

  3. Assistive Technologies for Library Patrons with Visual Disabilities

    ERIC Educational Resources Information Center

    Sunrich, Matthew; Green, Ravonne

    2007-01-01

    This study provides an overview of the various products available for library patrons with blindness or visual impairments. To provide some insight into the status of library services for patrons with blindness, a sample of American universities that are recognized for their programs for students with visual impairments was surveyed to discern…

  4. African American Youth and the Artist's Identity: Cultural Models and Aspirational Foreclosure

    ERIC Educational Resources Information Center

    Charland, William

    2010-01-01

    The decision to participate in visual arts studies in college and visual arts professions in adult life is the product of multiple factors, including the influences of family, community, peer group, mass culture, and K-12 schooling. Recognizing African American underrepresentation in visual arts studies and professions, this article explores how…

  5. Enhanced learning of natural visual sequences in newborn chicks.

    PubMed

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  6. Brain-Computer Interface Based on Generation of Visual Images

    PubMed Central

    Bobrov, Pavel; Frolov, Alexander; Cantor, Charles; Fedulova, Irina; Bakhnyan, Mikhail; Zhavoronkov, Alexander

    2011-01-01

    This paper examines the task of recognizing EEG patterns that correspond to performing three mental tasks: relaxation and imagining two types of pictures, faces and houses. The experiments were performed using two EEG headsets: BrainProducts ActiCap and Emotiv EPOC. The Emotiv headset is becoming widely used in consumer BCI applications, allowing large-scale EEG experiments to be conducted in the future. Since classification accuracy significantly exceeded the level of random classification during the first three days of the experiment with the EPOC headset, a control experiment was performed on the fourth day using the ActiCap. The control experiment showed that the use of high-quality research equipment can enhance classification accuracy (up to 68% in some subjects) and that the accuracy is independent of the presence of EEG artifacts related to blinking and eye movement. This study also shows that a computationally inexpensive Bayesian classifier based on covariance matrix analysis yields classification accuracy in this problem similar to that of a more sophisticated Multi-class Common Spatial Patterns (MCSP) classifier. PMID:21695206
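
    One common way to build a covariance-based Bayesian classifier is sketched below: each class is modeled as a zero-mean multivariate Gaussian over EEG channels, its covariance is estimated from training trials, and a test trial is assigned to the class with the highest Gaussian log-likelihood. This is a generic illustration under those assumptions, not necessarily the exact classifier used in the paper; the two-channel toy data are invented.

```python
# Covariance-based Bayesian classification sketch: zero-mean Gaussian per class, class
# covariance estimated from training trials, test trial assigned by maximum log-likelihood.
import numpy as np

def class_covariances(trials, labels, reg=1e-6):
    """trials: list of (n_samples, n_channels) arrays. Returns {label: regularized covariance}."""
    covs = {}
    for lab in set(labels):
        data = np.vstack([t for t, l in zip(trials, labels) if l == lab])
        c = np.cov(data, rowvar=False)
        covs[lab] = c + reg * np.eye(c.shape[0])
    return covs

def classify(trial, covs):
    """Pick the class whose zero-mean Gaussian gives the trial samples the highest log-likelihood."""
    best, best_ll = None, -np.inf
    for lab, c in covs.items():
        _, logdet = np.linalg.slogdet(c)
        inv = np.linalg.inv(c)
        ll = -0.5 * (len(trial) * logdet + np.einsum("ij,jk,ik->", trial, inv, trial))
        if ll > best_ll:
            best, best_ll = lab, ll
    return best

# toy data: two "mental states" with opposite channel covariance structure
rng = np.random.default_rng(0)
make = lambda cov, n: [rng.multivariate_normal(np.zeros(2), cov, size=100) for _ in range(n)]
trials = make([[1.0, 0.8], [0.8, 1.0]], 20) + make([[1.0, -0.8], [-0.8, 1.0]], 20)
labels = ["faces"] * 20 + ["houses"] * 20

covs = class_covariances(trials[:15] + trials[20:35], labels[:15] + labels[20:35])
test, truth = trials[15:20] + trials[35:40], labels[15:20] + labels[35:40]
acc = np.mean([classify(t, covs) == y for t, y in zip(test, truth)])
print("held-out accuracy:", acc)
```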

  7. Evaluating structural pattern recognition for handwritten math via primitive label graphs

    NASA Astrophysics Data System (ADS)

    Zanibbi, Richard; Mouchère, Harold; Viard-Gaudin, Christian

    2013-01-01

    Currently, structural pattern recognizer evaluations compare graphs of detected structure to target structures (i.e. ground truth) using recognition rates, recall and precision for object segmentation, classification and relationships. In document recognition, these target objects (e.g. symbols) are frequently comprised of multiple primitives (e.g. connected components, or strokes for online handwritten data), but current metrics do not characterize errors at the primitive level, from which object-level structure is obtained. Primitive label graphs are directed graphs defined over primitives and primitive pairs. We define new metrics obtained by Hamming distances over label graphs, which allow classification, segmentation and parsing errors to be characterized separately, or using a single measure. Recall and precision for detected objects may also be computed directly from label graphs. We illustrate the new metrics by comparing a new primitive-level evaluation to the symbol-level evaluation performed for the CROHME 2012 handwritten math recognition competition. A Python-based set of utilities for evaluating, visualizing and translating label graphs is publicly available.
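
    The sketch below illustrates the flavor of such label-graph metrics: a primitive label graph is stored as a dictionary of primitive (stroke) labels plus a dictionary of labeled primitive pairs, and Hamming distances are computed separately for classification errors and for segmentation/relationship errors. The strokes, labels, and edge-label scheme are invented simplifications of the metrics defined in the paper.

```python
# Compare two primitive label graphs (here for a handwritten "2 + 2") by Hamming distance,
# separating primitive-classification errors from primitive-pair (segmentation/relationship)
# errors. The primitives, labels, and edge-label scheme are illustrative only.
def hamming_errors(truth_nodes, pred_nodes, truth_edges, pred_edges):
    node_err = sum(truth_nodes[p] != pred_nodes.get(p) for p in truth_nodes)
    pairs = set(truth_edges) | set(pred_edges)
    edge_err = sum(truth_edges.get(pr, "none") != pred_edges.get(pr, "none") for pr in pairs)
    return node_err, edge_err

# ground truth: strokes s1, s2 form one symbol "2"; s3 is "+"; s4 is "2"
truth_nodes = {"s1": "2", "s2": "2", "s3": "+", "s4": "2"}
truth_edges = {("s1", "s2"): "same_symbol", ("s2", "s3"): "right", ("s3", "s4"): "right"}

# recognizer output: mislabels s3 and misses the segmentation edge between s1 and s2
pred_nodes = {"s1": "2", "s2": "2", "s3": "t", "s4": "2"}
pred_edges = {("s2", "s3"): "right", ("s3", "s4"): "right"}

node_err, edge_err = hamming_errors(truth_nodes, pred_nodes, truth_edges, pred_edges)
print(f"classification errors: {node_err}, segmentation/relationship errors: {edge_err}")
```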

  8. Correlation of pattern reversal visual evoked potential parameters with the pattern standard deviation in primary open angle glaucoma.

    PubMed

    Kothari, Ruchi; Bokariya, Pradeep; Singh, Ramji; Singh, Smita; Narang, Purvasha

    2014-01-01

    To evaluate whether glaucomatous visual field defects, particularly the pattern standard deviation (PSD) of the Humphrey visual field, are associated with visual evoked potential (VEP) parameters in patients with primary open angle glaucoma (POAG). Visual fields by Humphrey perimetry and simultaneous recordings of pattern reversal visual evoked potentials (PRVEP) were assessed in 100 patients with POAG. The stimulus configuration for VEP recordings consisted of the transient pattern reversal method, in which a full-field black and white checkerboard pattern was generated and displayed on a VEP monitor (14″ colour) by an electronic pattern regenerator built into an evoked potential recorder (RMS EMG EP MARK II). The results of our study indicate a highly significant (P<0.001) negative correlation of P100 amplitude, and statistically significant (P<0.05) positive correlations of N70 latency, P100 latency and N155 latency, with the PSD of the Humphrey visual field in subjects with POAG across the various age groups, as evaluated by Student's t-test. Prolongation of VEP latencies was mirrored by a corresponding increase in PSD values. Conversely, as PSD increased, the magnitude of VEP excursions diminished.

  9. Visual Inspection Research Project Report on Benchmark Inspections

    DOT National Transportation Integrated Search

    1996-10-01

    Recognizing the importance of visual inspection in the maintenance of the civil air fleet, the FAA tasked the Aging Aircraft Nondestructive Inspection Validation Center (AANC) at Sandia National Labs in Albuquerque, NM, to establish a ...

  10. Visual-auditory integration during speech imitation in autism.

    PubMed

    Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.

  11. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    PubMed

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face various hardships in shopping, reading, finding objects, and so on. We therefore developed a portable auditory guide system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals for the blind user through an earphone. The user is able to recognize the type, motion state and location of objects of interest with the help of SoundView. Compared with other visual assistance techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption and easy customization.

  12. Comparison of Pixel-Based and Object-Based Classification Using Parameters and Non-Parameters Approach for the Pattern Consistency of Multi Scale Landcover

    NASA Astrophysics Data System (ADS)

    Juniati, E.; Arrofiqoh, E. N.

    2017-09-01

    Information extraction from remote sensing data, especially land cover, can be achieved by digital classification. In practice, some people are more comfortable using visual interpretation to retrieve land cover information; however, visual interpretation is highly influenced by the subjectivity and knowledge of the interpreter, and the process is time-consuming. Digital classification can be done in several ways, depending on the chosen mapping approach and the assumptions made about the data distribution. This study compared several classification methods applied to different data types at the same location. The data used were Landsat 8 satellite imagery, SPOT 6 imagery and orthophotos. In practice, these data are used to produce land cover maps at 1:50,000 scale for Landsat, 1:25,000 scale for SPOT and 1:5,000 scale for orthophotos, but with visual interpretation used to retrieve the information. A maximum likelihood classifier (MLC), a pixel-based parametric approach, and an artificial neural network classifier, a pixel-based non-parametric approach, were applied to these data. Moreover, this study applied object-based classifiers to the data. The classification system implemented is the land cover classification of the Indonesian topographic map. Classification was applied to each data source in order to recognize land cover patterns and to assess the consistency of the land cover maps produced from each dataset. Furthermore, the study analyses the benefits and limitations of each method.

  13. Natural course of visual field loss in patients with Type 2 Usher syndrome.

    PubMed

    Fishman, Gerald A; Bozbeyoglu, Simge; Massof, Robert W; Kimberling, William

    2007-06-01

    To evaluate the natural course of visual field loss in patients with Type 2 Usher syndrome and different patterns of visual field loss. Fifty-eight patients with Type 2 Usher syndrome who had at least three visual field measurements during a period of at least 3 years were studied. Kinetic visual fields measured on a standard calibrated Goldmann perimeter with II4e and V4e targets were analyzed. The visual field areas in both eyes were determined by planimetry with the use of a digitizing tablet and computer software and expressed in square inches. The data for each visual field area measurement were transformed to natural log units. Using a mixed model regression analysis, values for the half-life of field loss (the time during which half of the remaining field area is lost) were estimated. Three different patterns of visual field loss were identified, and the half-life time for each pattern of loss was calculated. Of the 58 patients, 11 were classified as having pattern type I, 12 as having pattern type II, and 14 as having pattern type III. Of 21 patients whose visual field loss was so advanced that they could not be classified, 15 showed only a small residual central field (Group A) and 6 showed a residual central field with a peripheral island (Group B). The average half-life times varied between 3.85 and 7.37 for the II4e test target and between 4.59 and 6.42 for the V4e target. There was no statistically significant difference in half-life times between the various patterns of field loss or between the test targets. The average half-life times for visual field loss in patients with Usher syndrome Type 2 were statistically similar among patients with different patterns of visual field loss. These findings will be useful for counseling patients with Type 2 Usher syndrome as to their prognosis for anticipated visual field loss.
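
    The half-life calculation follows from the slope of the log-transformed field areas over time: if ln(area) declines linearly with slope b, the half-life is ln(2)/|b|. The sketch below shows the arithmetic with an ordinary least-squares fit standing in for the mixed-model regression used in the study; the field areas and visit times are invented.

```python
# Half-life of field loss from a log-linear fit: t_half = ln(2) / |slope of ln(area) vs time|.
# Ordinary least squares stands in for the mixed-model analysis; the data below are invented.
import numpy as np

years = np.array([0.0, 1.5, 3.0, 4.5, 6.0, 7.5])
area_sq_in = np.array([10.0, 8.3, 7.1, 5.9, 5.0, 4.2])        # Goldmann field area (square inches)

slope, intercept = np.polyfit(years, np.log(area_sq_in), 1)   # log-linear fit
half_life = np.log(2) / abs(slope)
print(f"slope = {slope:.3f} ln-units/year, half-life ≈ {half_life:.1f} years")
```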

  14. Visual field progression in glaucoma: total versus pattern deviation analyses.

    PubMed

    Artes, Paul H; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C

    2005-12-01

    To compare visual field progression with total and pattern deviation analyses in a prospective longitudinal study of patients with glaucoma and healthy control subjects. A group of 101 patients with glaucoma (168 eyes) with early to moderately advanced visual field loss at baseline (average mean deviation [MD], -3.9 dB) and no clinical evidence of media opacity were selected from a prospective longitudinal study on visual field progression in glaucoma. Patients were examined with static automated perimetry at 6-month intervals for a median follow-up of 9 years. At each test location, change was established with event and trend analyses of total and pattern deviation. The event analyses compared each follow-up test to a baseline obtained from averaging the first two tests, and visual field progression was defined as deterioration beyond the 5th percentile of test-retest variability at three test locations, observed on three consecutive tests. The trend analyses were based on point-wise linear regression, and visual field progression was defined as statistically significant deterioration (P < 5%) worse than -1 dB/year at three locations, confirmed by independently omitting the last and the penultimate observation. The incidence and the time-to-progression were compared between total and pattern deviation analyses. To estimate the specificity of the progression analyses, identical criteria were applied to visual fields obtained in 102 healthy control subjects, and the rate of visual field improvement was established in the patients with glaucoma and the healthy control subjects. With both event and trend methods, pattern deviation analyses classified approximately 15% fewer eyes as having progressed than did the total deviation analyses. In eyes classified as progressing by both the total and pattern deviation methods, total deviation analyses tended to detect progression earlier than the pattern deviation analyses. A comparison of the changes observed in MD and the visual fields' general height (estimated by the 85th percentile of the total deviation values) confirmed that change in the glaucomatous eyes almost always comprised a diffuse component. Pattern deviation analyses of progression may therefore underestimate the true amount of glaucomatous visual field progression. Pattern deviation analyses of visual field progression may underestimate visual field progression in glaucoma, particularly when there is no clinical evidence of increasing media opacity. Clinicians should have access to both total and pattern deviation analyses to make informed decisions on visual field progression in glaucoma.
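
    The trend criterion described above can be sketched as follows: fit a linear regression of sensitivity on time at every test location, flag locations deteriorating significantly (P < 0.05) faster than -1 dB/year, and call the field progressing when at least three locations are flagged. The sketch below simulates such a series; the 52-location field, the noise level, and the omission of the confirmation step (re-fitting with the last and penultimate visits removed) are simplifying assumptions.

```python
# Point-wise linear regression (trend) criterion: flag locations with slope < -1 dB/year and
# p < 0.05, and call the field "progressing" if at least three locations are flagged.
# The simulated visual field series below is invented; the confirmation step is omitted.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n_locations, n_visits = 52, 18
years = np.arange(n_visits) * 0.5                          # 6-monthly visits over ~9 years

true_slopes = np.zeros(n_locations)
true_slopes[:5] = -1.5                                     # a few locations truly deteriorating
sens = 30 + true_slopes[:, None] * years + rng.normal(0, 1.5, (n_locations, n_visits))

flagged = 0
for loc in range(n_locations):
    fit = linregress(years, sens[loc])
    if fit.slope < -1.0 and fit.pvalue < 0.05:
        flagged += 1

print(f"flagged locations: {flagged}  ->  progressing: {flagged >= 3}")
```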

  15. Visual pattern recognition based on spatio-temporal patterns of retinal ganglion cells’ activities

    PubMed Central

    Jing, Wei; Liu, Wen-Zhong; Gong, Xin-Wei; Gong, Hai-Qing

    2010-01-01

    Neural information is processed based on the integrated activities of relevant neurons. Concerted population activity is one of the important ways for retinal ganglion cells to efficiently organize and process visual information. In the present study, the spike activities of bullfrog retinal ganglion cells in response to three different visual patterns (checker-board, vertical gratings and horizontal gratings) were recorded using multi-electrode arrays. A measurement of subsequence distribution discrepancy (MSDD) was applied to identify the spatio-temporal patterns of retinal ganglion cells’ activities in response to different stimulation patterns. The results show that the population activity patterns differed in response to different stimulation patterns; this difference was consistently detectable even when visual adaptation occurred during repeated experimental trials. Therefore, the stimulus pattern can be reliably discriminated from the spatio-temporal pattern of neuronal activities calculated using the MSDD algorithm. PMID:21886670

  16. Manta Matcher: automated photographic identification of manta rays using keypoint features.

    PubMed

    Town, Christopher; Marshall, Andrea; Sethasathien, Nutthaporn

    2013-07-01

    For species which bear unique markings, such as natural spot patterning, field work has become increasingly more reliant on visual identification to recognize and catalog particular specimens or to monitor individuals within populations. While many species of interest exhibit characteristic markings that in principle allow individuals to be identified from photographs, scientists are often faced with the task of matching observations against databases of hundreds or thousands of images. We present a novel technique for automated identification of manta rays (Manta alfredi and Manta birostris) by means of a pattern-matching algorithm applied to images of their ventral surface area. Automated visual identification has recently been developed for several species. However, such methods are typically limited to animals that can be photographed above water, or whose markings exhibit high contrast and appear in regular constellations. While manta rays bear natural patterning across their ventral surface, these patterns vary greatly in their size, shape, contrast, and spatial distribution. Our method is the first to have proven successful at achieving high matching accuracies on a large corpus of manta ray images taken under challenging underwater conditions. Our method is based on automated extraction and matching of keypoint features using the Scale-Invariant Feature Transform (SIFT) algorithm. In order to cope with the considerable variation in quality of underwater photographs, we also incorporate preprocessing and image enhancement steps. Furthermore, we use a novel pattern-matching approach that results in better accuracy than the standard SIFT approach and other alternative methods. We present quantitative evaluation results on a data set of 720 images of manta rays taken under widely different conditions. We describe a novel automated pattern representation and matching method that can be used to identify individual manta rays from photographs. The method has been incorporated into a website (mantamatcher.org) which will serve as a global resource for ecological and conservation research. It will allow researchers to manage and track sightings data to establish important life-history parameters as well as determine other ecological data such as abundance, range, movement patterns, and structure of manta ray populations across the world.

  17. Manta Matcher: automated photographic identification of manta rays using keypoint features

    PubMed Central

    Town, Christopher; Marshall, Andrea; Sethasathien, Nutthaporn

    2013-01-01

    For species which bear unique markings, such as natural spot patterning, field work has become increasingly more reliant on visual identification to recognize and catalog particular specimens or to monitor individuals within populations. While many species of interest exhibit characteristic markings that in principle allow individuals to be identified from photographs, scientists are often faced with the task of matching observations against databases of hundreds or thousands of images. We present a novel technique for automated identification of manta rays (Manta alfredi and Manta birostris) by means of a pattern-matching algorithm applied to images of their ventral surface area. Automated visual identification has recently been developed for several species. However, such methods are typically limited to animals that can be photographed above water, or whose markings exhibit high contrast and appear in regular constellations. While manta rays bear natural patterning across their ventral surface, these patterns vary greatly in their size, shape, contrast, and spatial distribution. Our method is the first to have proven successful at achieving high matching accuracies on a large corpus of manta ray images taken under challenging underwater conditions. Our method is based on automated extraction and matching of keypoint features using the Scale-Invariant Feature Transform (SIFT) algorithm. In order to cope with the considerable variation in quality of underwater photographs, we also incorporate preprocessing and image enhancement steps. Furthermore, we use a novel pattern-matching approach that results in better accuracy than the standard SIFT approach and other alternative methods. We present quantitative evaluation results on a data set of 720 images of manta rays taken under widely different conditions. We describe a novel automated pattern representation and matching method that can be used to identify individual manta rays from photographs. The method has been incorporated into a website (mantamatcher.org) which will serve as a global resource for ecological and conservation research. It will allow researchers to manage and track sightings data to establish important life-history parameters as well as determine other ecological data such as abundance, range, movement patterns, and structure of manta ray populations across the world. PMID:23919138
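
    A minimal OpenCV sketch of the core matching step is given below: SIFT keypoints are extracted from two photographs, matched by brute force with Lowe's ratio test, and the number of surviving matches is used as a crude similarity score for ranking catalogued individuals. The preprocessing, image enhancement, and the authors' improved matching scheme are not reproduced, and the image paths are placeholders.

```python
# Crude SIFT-based similarity score between a query photograph and a catalogued individual:
# detect keypoints, brute-force match descriptors, keep matches passing Lowe's ratio test.
import cv2

def spot_pattern_score(path_query, path_candidate, ratio=0.75):
    img1 = cv2.imread(path_query, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path_candidate, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < ratio * n.distance]
    return len(good)            # more surviving matches -> more likely the same animal

# rank catalogued individuals for one query photo (paths are placeholders)
catalog = ["manta_001.jpg", "manta_002.jpg"]
scores = {c: spot_pattern_score("query_sighting.jpg", c) for c in catalog}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```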

  18. Newborn chickens generate invariant object representations at the onset of visual object experience

    PubMed Central

    Wood, Justin N.

    2013-01-01

    To recognize objects quickly and accurately, mature visual systems build invariant object representations that generalize across a range of novel viewing conditions (e.g., changes in viewpoint). To date, however, the origins of this core cognitive ability have not yet been established. To examine how invariant object recognition develops in a newborn visual system, I raised chickens from birth for 2 weeks within controlled-rearing chambers. These chambers provided complete control over all visual object experiences. In the first week of life, subjects’ visual object experience was limited to a single virtual object rotating through a 60° viewpoint range. In the second week of life, I examined whether subjects could recognize that virtual object from novel viewpoints. Newborn chickens were able to generate viewpoint-invariant representations that supported object recognition across large, novel, and complex changes in the object’s appearance. Thus, newborn visual systems can begin building invariant object representations at the onset of visual object experience. These abstract representations can be generated from sparse data, in this case from a visual world containing a single virtual object seen from a limited range of viewpoints. This study shows that powerful, robust, and invariant object recognition machinery is an inherent feature of the newborn brain. PMID:23918372

  19. Neural Mechanisms of Recognizing Camouflaged Objects: A Human fMRI Study

    DTIC Science & Technology

    2015-07-30

    Final report; distribution unlimited. Keywords: visual search, camouflage, functional magnetic resonance imaging (fMRI), perceptual learning.

  20. Beyond sensory images: Object-based representation in the human ventral pathway

    PubMed Central

    Pietrini, Pietro; Furey, Maura L.; Ricciardi, Emiliano; Gobbini, M. Ida; Wu, W.-H. Carolyn; Cohen, Leonardo; Guazzelli, Mario; Haxby, James V.

    2004-01-01

    We investigated whether the topographically organized, category-related patterns of neural response in the ventral visual pathway are a representation of sensory images or a more abstract representation of object form that is not dependent on sensory modality. We used functional MRI to measure patterns of response evoked during visual and tactile recognition of faces and manmade objects in sighted subjects and during tactile recognition in blind subjects. Results showed that visual and tactile recognition evoked category-related patterns of response in a ventral extrastriate visual area in the inferior temporal gyrus that were correlated across modality for manmade objects. Blind subjects also demonstrated category-related patterns of response in this “visual” area, and in more ventral cortical regions in the fusiform gyrus, indicating that these patterns are not due to visual imagery and, furthermore, that visual experience is not necessary for category-related representations to develop in these cortices. These results demonstrate that the representation of objects in the ventral visual pathway is not simply a representation of visual images but, rather, is a representation of more abstract features of object form. PMID:15064396

  1. The Flowering of Identity: Tracing the History of Cuba through the Visual Arts

    ERIC Educational Resources Information Center

    Smith, Noel

    2007-01-01

    Teaching history through the visual arts is one way of bringing the past into the present. In Cuba, the visual arts and architecture have reflected the country's "flowering of identity" through time, as a multi-ethnic population has grown to recognize its own distinct history, values and attributes, and Cuban artists have portrayed the…

  2. A Visual Training Tool for Teaching Kanji to Children with Developmental Dyslexia

    ERIC Educational Resources Information Center

    Ikeshita-Yamazoe, Hanae; Miyao, Masutomo

    2016-01-01

    We developed a visual training tool to assist children with developmental dyslexia in learning to recognize and understand Chinese characters (kanji). The visual training tool presents the strokes of a kanji character as separate shapes and requires students to use these fragments to construct the character. Two types of experiments were conducted…

  3. Viewfinders: A Visual Environmental Literacy Curriculum. Elementary Unit: Exploring Community Appearance and the Environment.

    ERIC Educational Resources Information Center

    Dunn Foundation, Warwick, RI.

    Recognizing that community growth and change are inevitable, Viewfinders' goals are as follows: to introduce students and teachers to the concept of the visual environment; enhance an understanding of the interrelationship between the built and natural environment; create an awareness that the visual environment affects the economy and quality of…

  4. Ingredients to Successful Students Presentations: It's More Than Just a Sum of Raw Materials.

    ERIC Educational Resources Information Center

    Kerns, H. Dan; Johnson, Nial

    Recognizing the decline in student visual communication skills, faculty from different disciplines collaborated in the design of a visual literacy course. Students develop visual literacy skills in the course in the following ways: (1) through faculty presentation and demonstration of the various tools available; (2) with…

  5. Image remapping strategies applied as prostheses for the visually impaired

    NASA Technical Reports Server (NTRS)

    Johnson, Curtis D.

    1993-01-01

    Maculopathy and retinitis pigmentosa (RP) are two vision defects that leave the afflicted person with an impaired ability to read and recognize visual patterns. For some time there has been interest and work on the use of image remapping techniques to provide a visual aid for individuals with these impairments. The basic concept is to remap an image according to some mathematical transformation such that the image is warped around a maculopathic defect (scotoma) or, in RP, into the remaining foveal region of retinal sensitivity. NASA/JSC has been pursuing this research using angle-invariant transformations, with testing of the resulting remapping using subjects and facilities of the University of Houston, College of Optometry. Testing is facilitated by use of a hardware device, the Programmable Remapper, to provide the remapping of video images. This report presents the results of studies of alternative remapping transformations with the objective of improving subject reading rates and pattern recognition. In particular, a form of conformal transformation was developed which provides for a smooth warping of an image around a scotoma. In such a case it is shown that distortion of characters and lines of characters is minimized, which should lead to enhanced character recognition. In addition, studies were made of alternative transformations which, although not conformal, provide for similarly low character distortion. A second, non-conformal transformation was studied for remapping of images to aid RP impairments. In this case a transformation was investigated which allows remapping of a visual field into a circular area representing the foveal retinal region. The size and spatial representation of the image are selectable. It is shown that parametric adjustments allow for a wide variation in how a visual field is presented to the sensitive retina. This study also presents some preliminary considerations of how a prosthetic device could be implemented in a practical sense, vis-à-vis size, weight and portability.
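
    The remapping idea above lends itself to a short illustration. The Python/NumPy sketch below radially pushes image content into the annulus outside a central scotoma using a simple nearest-neighbour warp; it is not the conformal or angle-invariant transformation developed in the report, and the function name, parameters and zero-fill convention are placeholder assumptions.

    ```python
    import numpy as np

    def remap_around_scotoma(img, center, scotoma_radius):
        """Push image content radially outward so it clears a central scotoma.

        A simple illustrative warp (not the report's conformal transform):
        source radii in [0, r_max] are mapped onto destination radii in
        [scotoma_radius, r_max], so no content lands in the blind region.
        Nearest-neighbour sampling keeps the sketch short.
        """
        h, w = img.shape[:2]
        cy, cx = center
        yy, xx = np.mgrid[0:h, 0:w]
        dy, dx = yy - cy, xx - cx
        r_dst = np.hypot(dy, dx)
        theta = np.arctan2(dy, dx)
        r_max = r_dst.max()

        # Inverse map: for each output pixel, find the source radius to sample.
        r_src = (r_dst - scotoma_radius) * r_max / (r_max - scotoma_radius)
        r_src = np.clip(r_src, 0, r_max)

        src_y = np.clip(np.round(cy + r_src * np.sin(theta)).astype(int), 0, h - 1)
        src_x = np.clip(np.round(cx + r_src * np.cos(theta)).astype(int), 0, w - 1)

        out = img[src_y, src_x]
        out[r_dst < scotoma_radius] = 0  # nothing useful falls inside the scotoma
        return out
    ```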

  6. Early Sign Language Experience Goes Along with an Increased Cross-modal Gain for Affective Prosodic Recognition in Congenitally Deaf CI Users.

    PubMed

    Fengler, Ineke; Delfau, Pia-Céline; Röder, Brigitte

    2018-04-01

    It is yet unclear whether congenitally deaf cochlear implant (CD CI) users' visual and multisensory emotion perception is influenced by their history in sign language acquisition. We hypothesized that early-signing CD CI users, relative to late-signing CD CI users and hearing, non-signing controls, show better facial expression recognition and rely more on the facial cues of audio-visual emotional stimuli. Two groups of young adult CD CI users, early signers (ES CI users; n = 11) and late signers (LS CI users; n = 10), and a group of hearing, non-signing, age-matched controls (n = 12) performed an emotion recognition task with auditory, visual, and cross-modal emotionally congruent and incongruent speech stimuli. On different trials, participants categorized either the facial or the vocal expressions. The ES CI users more accurately recognized affective prosody than the LS CI users in the presence of congruent facial information. Furthermore, the ES CI users, but not the LS CI users, gained more than the controls from congruent visual stimuli when recognizing affective prosody. Both CI groups performed overall worse than the controls in recognizing affective prosody. These results suggest that early sign language experience affects multisensory emotion perception in CD CI users.

  7. Universal brain systems for recognizing word shapes and handwriting gestures during reading

    PubMed Central

    Nakamura, Kimihiro; Kuo, Wen-Jui; Pegado, Felipe; Cohen, Laurent; Tzeng, Ovid J. L.; Dehaene, Stanislas

    2012-01-01

    Do the neural circuits for reading vary across culture? Reading of visually complex writing systems such as Chinese has been proposed to rely on areas outside the classical left-hemisphere network for alphabetic reading. Here, however, we show that, once potential confounds in cross-cultural comparisons are controlled for by presenting handwritten stimuli to both Chinese and French readers, the underlying network for visual word recognition may be more universal than previously suspected. Using functional magnetic resonance imaging in a semantic task with words written in cursive font, we demonstrate that two universal circuits, a shape recognition system (reading by eye) and a gesture recognition system (reading by hand), are similarly activated and show identical patterns of activation and repetition priming in the two language groups. These activations cover most of the brain regions previously associated with culture-specific tuning. Our results point to an extended reading network that invariably comprises the occipitotemporal visual word-form system, which is sensitive to well-formed static letter strings, and a distinct left premotor region, Exner’s area, which is sensitive to the forward or backward direction with which cursive letters are dynamically presented. These findings suggest that cultural effects in reading merely modulate a fixed set of invariant macroscopic brain circuits, depending on surface features of orthographies. PMID:23184998

  8. Evaluation of Alternative Conceptual Models Using Interdisciplinary Information: An Application in Shallow Groundwater Recharge and Discharge

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Bajcsy, P.; Valocchi, A. J.; Kim, C.; Wang, J.

    2007-12-01

    Natural systems are complex, thus extensive data are needed for their characterization. However, data acquisition is expensive; consequently we develop models using sparse, uncertain information. When all uncertainties in the system are considered, the number of alternative conceptual models is large. Traditionally, the development of a conceptual model has relied on subjective professional judgment. Good judgment is based on experience in coordinating and understanding auxiliary information which is correlated to the model but difficult to quantify within the mathematical model. For example, groundwater recharge and discharge (R&D) processes are known to relate to multiple information sources such as soil type, river and lake location, irrigation patterns and land use. Although hydrologists have been trying to understand and model the interaction between each of these information sources and R&D processes, it is extremely difficult to quantify their correlations using a universal approach due to the complexity of the processes, the spatiotemporal distribution and uncertainty. There is currently no single method capable of estimating R&D rates and patterns for all practical applications. Chamberlin (1890) recommended use of "multiple working hypotheses" (alternative conceptual models) for rapid advancement in understanding of applied and theoretical problems. Therefore, cross analyzing R&D rates and patterns from various estimation methods and related field information will likely be superior to using only a single estimation method. We have developed the Pattern Recognition Utility (PRU) to help GIS users recognize spatial patterns from noisy 2D images. This GIS plug-in utility has been applied to help hydrogeologists establish alternative R&D conceptual models in a more efficient way than conventional methods. The PRU uses numerical methods and image processing algorithms to estimate and visualize shallow R&D patterns and rates. It can provide a fast initial estimate prior to planning labor intensive and time consuming field R&D measurements. Furthermore, the Spatial Pattern 2 Learn (SP2L) was developed to cross analyze results from the PRU with ancillary field information, such as land coverage, soil type, topographic maps and previous estimates. The learning process of SP2L cross examines each initially recognized R&D pattern with the ancillary spatial dataset, and then calculates a quantifiable reliability index for each R&D map using a supervised machine-learning technique, the decision tree. This Java-based software package is capable of generating alternative R&D maps if the user decides to apply certain conditions recognized by the learning process. The reliability indices from SP2L will improve the traditionally subjective approach to initiating conceptual models by providing objectively quantifiable conceptual bases for further probabilistic and uncertainty analyses. Both the PRU and SP2L have been designed to be user-friendly and universal utilities for pattern recognition and learning to improve model predictions from sparse measurements by computer-assisted integration of spatially dense geospatial image data and machine learning of model dependencies.
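
    The abstract does not spell out how SP2L turns the decision-tree analysis into a reliability index, so the Python sketch below is only a rough guess at the general shape of such a computation: a decision tree (scikit-learn) is asked to predict a candidate R&D map from co-registered ancillary layers, and its cross-validated accuracy serves as the index. The function names, the accuracy-as-index choice and the grid layout are assumptions, not the published method.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    def reliability_index(candidate_rd_map, ancillary_layers, cv=5):
        """Score one candidate recharge/discharge (R&D) map.

        candidate_rd_map : 2D integer array of recognized pattern labels
                           (e.g. 0 = discharge, 1 = recharge).
        ancillary_layers : list of 2D arrays on the same grid (land cover,
                           soil type, topography, previous estimates, ...).

        Idea sketched here: if a supervised decision tree can predict the
        recognized pattern from the ancillary data, the pattern is consistent
        with the independent field information and receives a higher index.
        """
        X = np.column_stack([layer.ravel() for layer in ancillary_layers])
        y = candidate_rd_map.ravel()
        tree = DecisionTreeClassifier(max_depth=5, random_state=0)
        return cross_val_score(tree, X, y, cv=cv).mean()

    # Rank alternative conceptual models by their index (names hypothetical):
    # indices = {name: reliability_index(m, layers) for name, m in candidate_maps.items()}
    ```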

  9. Bumblebees distinguish floral scent patterns, and can transfer these to corresponding visual patterns.

    PubMed

    Lawson, David A; Chittka, Lars; Whitney, Heather M; Rands, Sean A

    2018-06-13

    Flowers act as multisensory billboards to pollinators by using a range of sensory modalities such as visual patterns and scents. Different floral organs release differing compositions and quantities of the volatiles contributing to floral scent, suggesting that scent may be patterned within flowers. Early experiments suggested that pollinators can distinguish between the scents of differing floral regions, but little is known about how these potential scent patterns might influence pollinators. We show that bumblebees can learn different spatial patterns of the same scent, and that they are better at learning to distinguish between flowers when the scent pattern corresponds to a matching visual pattern. Surprisingly, once bees have learnt the spatial arrangement of a scent pattern, they subsequently prefer to visit novel unscented flowers that have an identical arrangement of visual marks, suggesting that multimodal floral signals may exploit the mechanisms by which learnt information is stored by the bee. © 2018 The Authors.

  10. Visual versus semi-quantitative analysis of 18F-FDG-PET in amnestic MCI: a European Alzheimer's Disease Consortium (EADC) project.

    PubMed

    Morbelli, Silvia; Brugnolo, Andrea; Bossert, Irene; Buschiazzo, Ambra; Frisoni, Giovanni B; Galluzzi, Samantha; van Berckel, Bart N M; Ossenkoppele, Rik; Perneczky, Robert; Drzezga, Alexander; Didic, Mira; Guedj, Eric; Sambuceti, Gianmario; Bottoni, Gianluca; Arnaldi, Dario; Picco, Agnese; De Carli, Fabrizio; Pagani, Marco; Nobili, Flavio

    2015-01-01

    We aimed to investigate the accuracy of FDG-PET in detecting the Alzheimer's disease (AD) brain glucose hypometabolic pattern in 142 patients with amnestic mild cognitive impairment (aMCI) and 109 healthy controls. aMCI patients were followed for at least two years or until conversion to dementia. Images were evaluated by visual read, by either moderately skilled or expert readers, and by a summary metric of AD-like hypometabolism (the PALZ score). Seventy-seven patients converted to AD dementia after 28.6 ± 19.3 months of follow-up. Expert reading was the most accurate tool for distinguishing these MCI converters from healthy controls (sensitivity 89.6%, specificity 89.0%, accuracy 89.2%), while two moderately skilled readers were less (p < 0.05) specific (sensitivity 85.7%, specificity 79.8%, accuracy 82.3%) and the PALZ score was less (p < 0.001) sensitive (sensitivity 62.3%, specificity 91.7%, accuracy 79.6%). Among the remaining 67 aMCI patients, 50 were confirmed as aMCI after an average of 42.3 months, 12 developed other dementias, and 3 reverted to normalcy. In 30/50 persistent MCI patients, the expert recognized the AD hypometabolic pattern. In 13/50 aMCI patients, both the expert and the PALZ score were negative, while in 7/50 only the PALZ score was positive, due to sparse hypometabolic clusters mainly in the frontal lobes. Visual FDG-PET reading by an expert is the most accurate method, but an automated, validated system may be particularly helpful to moderately skilled readers because of its high specificity, and should be mandatory when even a moderately skilled reader is unavailable.

  11. Immunolocalization of choline acetyltransferase of common type in the central brain mass of Octopus vulgaris

    PubMed Central

    Casini, A.; Vaccaro, R.; D'Este, L.; Sakaue, Y.; Bellier, J.P.; Kimura, H.; Renda, T.G.

    2012-01-01

    Acetylcholine, the first neurotransmitter to be identified in the vertebrate frog, is widely distributed throughout the animal kingdom. The presence of a large amount of acetylcholine in the nervous system of cephalopods is well known from several biochemical and physiological studies. However, little is known about the precise distribution of cholinergic structures due to a lack of a suitable histochemical technique for detecting acetylcholine. The most reliable method to visualize the cholinergic neurons is the immunohistochemical localization of the enzyme choline acetyltransferase, the synthetic enzyme of acetylcholine. Following our previous study on the distribution patterns of cholinergic neurons in the Octopus vulgaris visual system, using a novel antibody that recognizes choline acetyltransferase of the common type (cChAT), we now extend our investigation to the octopus central brain mass. When applied on sections of octopus central ganglia, immunoreactivity for cChAT was detected in cell bodies of all central brain mass lobes with the notable exception of the subfrontal and subvertical lobes. Positive varicose nerve fibers were observed in the neuropil of all central brain mass lobes. PMID:23027350

  12. Immunolocalization of choline acetyltransferase of common type in the central brain mass of Octopus vulgaris.

    PubMed

    Casini, A; Vaccaro, R; D'Este, L; Sakaue, Y; Bellier, J P; Kimura, H; Renda, T G

    2012-07-19

    Acetylcholine, the first neurotransmitter to be identified in the vertebrate frog, is widely distributed throughout the animal kingdom. The presence of a large amount of acetylcholine in the nervous system of cephalopods is well known from several biochemical and physiological studies. However, little is known about the precise distribution of cholinergic structures due to a lack of a suitable histochemical technique for detecting acetylcholine. The most reliable method to visualize the cholinergic neurons is the immunohistochemical localization of the enzyme choline acetyltransferase, the synthetic enzyme of acetylcholine. Following our previous study on the distribution patterns of cholinergic neurons in the Octopus vulgaris visual system, using a novel antibody that recognizes choline acetyltransferase of the common type (cChAT), we now extend our investigation to the octopus central brain mass. When applied on sections of octopus central ganglia, immunoreactivity for cChAT was detected in cell bodies of all central brain mass lobes with the notable exception of the subfrontal and subvertical lobes. Positive varicose nerve fibers were observed in the neuropil of all central brain mass lobes.

  13. Spatiotemporal dynamics in human visual cortex rapidly encode the emotional content of faces.

    PubMed

    Dima, Diana C; Perry, Gavin; Messaritaki, Eirini; Zhang, Jiaxiang; Singh, Krish D

    2018-06-08

    Recognizing emotion in faces is important in human interaction and survival, yet existing studies do not paint a consistent picture of the neural representation supporting this task. To address this, we collected magnetoencephalography (MEG) data while participants passively viewed happy, angry and neutral faces. Using time-resolved decoding of sensor-level data, we show that responses to angry faces can be discriminated from happy and neutral faces as early as 90 ms after stimulus onset and only 10 ms later than faces can be discriminated from scrambled stimuli, even in the absence of differences in evoked responses. Time-resolved relevance patterns in source space track expression-related information from the visual cortex (100 ms) to higher-level temporal and frontal areas (200-500 ms). Together, our results point to a system optimised for rapid processing of emotional faces and preferentially tuned to threat, consistent with the important evolutionary role that such a system must have played in the development of human social interactions. © 2018 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
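
    A minimal Python sketch of time-resolved decoding in the spirit of the sensor-level analysis above: a classifier is trained and cross-validated separately at every time point, and above-chance accuracy at a given latency indicates that the sensor patterns carry expression information by that time. The classifier, scaling and cross-validation choices shown (scikit-learn) are assumptions, not the study's exact pipeline.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def time_resolved_decoding(epochs, labels, cv=5):
        """Decode expression (e.g. angry vs. happy) at every time point.

        epochs : array (n_trials, n_sensors, n_times) of MEG data
        labels : array (n_trials,) of condition codes
        Returns an array of cross-validated accuracies, one per time point.
        """
        n_trials, n_sensors, n_times = epochs.shape
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        accuracy = np.empty(n_times)
        for t in range(n_times):
            accuracy[t] = cross_val_score(clf, epochs[:, :, t], labels, cv=cv).mean()
        return accuracy
    ```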

  14. Superior voice recognition in a patient with acquired prosopagnosia and object agnosia.

    PubMed

    Hoover, Adria E N; Démonet, Jean-François; Steeves, Jennifer K E

    2010-11-01

    Anecdotally, it has been reported that individuals with acquired prosopagnosia compensate for their inability to recognize faces by using other person identity cues such as hair, gait or the voice. Are they therefore superior at the use of non-face cues, specifically voices, to person identity? Here, we empirically measure person and object identity recognition in a patient with acquired prosopagnosia and object agnosia. We quantify person identity (face and voice) and object identity (car and horn) recognition for visual, auditory, and bimodal (visual and auditory) stimuli. The patient is unable to recognize faces or cars, consistent with his prosopagnosia and object agnosia, respectively. He is perfectly able to recognize people's voices and car horns and bimodal stimuli. These data show a reverse shift in the typical weighting of visual over auditory information for audiovisual stimuli in a compromised visual recognition system. Moreover, the patient shows selectively superior voice recognition compared to the controls revealing that two different stimulus domains, persons and objects, may not be equally affected by sensory adaptation effects. This also implies that person and object identity recognition are processed in separate pathways. These data demonstrate that an individual with acquired prosopagnosia and object agnosia can compensate for the visual impairment and become quite skilled at using spared aspects of sensory processing. In the case of acquired prosopagnosia it is advantageous to develop a superior use of voices for person identity recognition in everyday life. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Altered anatomical patterns of depression in relation to antidepressant treatment: Evidence from a pattern recognition analysis on the topological organization of brain networks.

    PubMed

    Qin, Jiaolong; Wei, Maobin; Liu, Haiyan; Chen, Jianhuai; Yan, Rui; Yao, Zhijian; Lu, Qing

    2015-07-15

    Accumulated evidence has illuminated the topological infrastructure of major depressive disorder (MDD). However, how the topological properties of anatomical brain networks change in remitted major depressive disorder (rMDD) patients remains an open question. The present study provides an exploratory examination of pattern changes among current major depressive disorder (cMDD) patients, rMDD patients and healthy controls (HC) by means of a pattern recognition analysis. Twenty-eight cMDD patients (age range: 22-54, mean age: 39.57), 15 rMDD patients (age range: 23-53, mean age: 38.40) and 30 HC (age range: 23-54, mean age: 35.57) were enrolled. For each subject, we computed five kinds of weighted white matter (WM) networks using five physiological parameters (i.e., fractional anisotropy, mean diffusivity, λ1, λ2 and λ3) and then calculated three network measures for these weighted networks. We treated these measures as features and fed them into a feature selection mechanism to choose the most discriminative features for linear support vector machine (SVM) classifiers. Linear SVMs distinguished the three groups excellently, with 100% classification accuracy for recognizing cMDD/rMDD versus HC and 97.67% accuracy for recognizing cMDD versus rMDD. Further pattern analysis revealed two types of discriminative patterns among cMDD, rMDD and HC. (i) Compared with HC, both cMDD and rMDD exhibited similar deficit patterns of node strength primarily involving the salience network (SN), default mode network (DMN) and frontoparietal network (FPN). (ii) Compared with cMDD, rMDD showed an altered pattern of intra-communicability within the DMN and of inter-communicability between the DMN and the other sub-networks, including the visual recognition network (VRN) and SN. The present study had a limited sample size and lacked a larger independent data set with which to validate the methods and confirm the findings. These findings imply that the impairment in MDD is closely associated with alterations of connections within the SN, DMN and FPN, whereas remission of MDD benefits from compensatory intra-communication within the DMN and inter-communication between the DMN and the other sub-networks (i.e., VRN and SN). Copyright © 2015 Elsevier B.V. All rights reserved.
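
    A schematic of the classification pipeline described above, using node strength as one example network measure computed from each weighted white-matter network. The feature-selection step, SVM settings and validation scheme below (scikit-learn) are illustrative assumptions rather than the study's exact configuration.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    def node_strength(weighted_adj):
        """Node strength of a weighted network: sum of edge weights per node."""
        return weighted_adj.sum(axis=1)

    def classify_groups(adj_matrices, group_labels, k_features=20, cv=5):
        """Distinguish groups (e.g. cMDD / rMDD / HC) from weighted WM networks.

        adj_matrices : array (n_subjects, n_nodes, n_nodes)
        group_labels : array (n_subjects,) of group codes
        k_features must not exceed the number of nodes.
        """
        X = np.stack([node_strength(a) for a in adj_matrices])
        clf = make_pipeline(SelectKBest(f_classif, k=k_features),
                            LinearSVC(C=1.0, max_iter=10000))
        return cross_val_score(clf, X, group_labels, cv=cv).mean()
    ```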

  16. [Discrimination between pain-induced head movement disturbances after whiplash injuries and their simulation].

    PubMed

    Berger, M; Lechner-Steinleitner, S; Hoffmann, F; Schönegger, J

    1998-12-09

    Neck pain after whiplash injury of the cervical spine often induces typical changes in head motion patterns (amplitude, velocity). These changes in kinematics may help to recognize malingerers. We investigated the hypothesis that malingerers are not able to reproduce their simulated head movement disturbances across three repetitions. The kinematics of head movements of 23 patients with neck pain after whiplash injury and of 22 healthy subjects trying to act as malingerers were compared. The healthy subjects were informed about the symptomatology of whiplash injury and were asked to simulate painful head movements. Two different kinds of head movements were registered and analyzed by Cervicomotography: (1) slow free axial head rotation (yaw) and (2) axial head rotation (yaw) tracking a moving visual target. Each experimental condition was presented three times, on the expectation that malingerers would not be able to produce, and then reproduce, the same head movement disturbances time after time. In patients, as a consequence of their distinct pain patterns, we expected less variance between test repetitions. The statistical analysis showed significant differences in the calculated kinematic parameters between the two groups and demonstrated the inability of healthy subjects to convincingly simulate and reproduce distinct pain patterns.

  17. Three-dimensional vortex patterns in a starting flow

    NASA Astrophysics Data System (ADS)

    Freymuth, P.; Finaish, F.; Bank, W.

    1985-12-01

    Freymuth et al. (1983, 1984, 1985) have conducted investigations involving chordwise vortical-pattern visualizations in a starting flow of constant acceleration around an airfoil. Detailed resolution of vortical shapes in two dimensions could be obtained. No visualization in the third spanwise dimension is needed as long as the flow remains two-dimensional. However, some time after flow startup, chordwise vortical patterns become blurred, indicating the onset of turbulence. The present investigation is concerned with an extension of the flow visualization from a chordwise cross section to the spanwise dimension. The investigation has the objective to look into the two-dimensionality of the initial vortical developments and to resolve three-dimensional effects during the transition to turbulence. Attention is given to the visualization method, the chordwise vs spanwise visualization in the two-dimensional regime, the spanwise visualization of transition, and the visualization of vortical patterns behind the trailing edge.

  18. Distinct Visual Evoked Potential Morphological Patterns for Apparent Motion Processing in School-Aged Children.

    PubMed

    Campbell, Julia; Sharma, Anu

    2016-01-01

    Measures of visual cortical development in children demonstrate high variability and inconsistency throughout the literature. This is partly due to the specificity of the visual system in processing certain features. It may then be advantageous to activate multiple cortical pathways in order to observe maturation of coinciding networks. Visual stimuli eliciting the percept of apparent motion and shape change are designed to simultaneously activate both dorsal and ventral visual streams. However, research has shown that such stimuli also elicit variable visual evoked potential (VEP) morphology in children. The aim of this study was to describe developmental changes in VEPs, including morphological patterns, and underlying visual cortical generators, elicited by apparent motion and shape change in school-aged children. Forty-one typically developing children underwent high-density EEG recordings in response to a continuously morphing, radially modulated, circle-star grating. VEPs were then compared across the age groups of 5-7, 8-10, and 11-15 years according to latency and amplitude. Current density reconstructions (CDR) were performed on VEP data in order to observe activated cortical regions. It was found that two distinct VEP morphological patterns occurred in each age group. However, there were no major developmental differences between the age groups according to each pattern. CDR further demonstrated consistent visual generators across age and pattern. These results describe two novel VEP morphological patterns in typically developing children, but with similar underlying cortical sources. The importance of these morphological patterns is discussed in terms of future studies and the investigation of a relationship to visual cognitive performance.

  19. Distinct Visual Evoked Potential Morphological Patterns for Apparent Motion Processing in School-Aged Children

    PubMed Central

    Campbell, Julia; Sharma, Anu

    2016-01-01

    Measures of visual cortical development in children demonstrate high variability and inconsistency throughout the literature. This is partly due to the specificity of the visual system in processing certain features. It may then be advantageous to activate multiple cortical pathways in order to observe maturation of coinciding networks. Visual stimuli eliciting the percept of apparent motion and shape change are designed to simultaneously activate both dorsal and ventral visual streams. However, research has shown that such stimuli also elicit variable visual evoked potential (VEP) morphology in children. The aim of this study was to describe developmental changes in VEPs, including morphological patterns, and underlying visual cortical generators, elicited by apparent motion and shape change in school-aged children. Forty-one typically developing children underwent high-density EEG recordings in response to a continuously morphing, radially modulated, circle-star grating. VEPs were then compared across the age groups of 5–7, 8–10, and 11–15 years according to latency and amplitude. Current density reconstructions (CDR) were performed on VEP data in order to observe activated cortical regions. It was found that two distinct VEP morphological patterns occurred in each age group. However, there were no major developmental differences between the age groups according to each pattern. CDR further demonstrated consistent visual generators across age and pattern. These results describe two novel VEP morphological patterns in typically developing children, but with similar underlying cortical sources. The importance of these morphological patterns is discussed in terms of future studies and the investigation of a relationship to visual cognitive performance. PMID:27445738

  20. [The development of the skin-optical perception of color and images in blind schoolchildren on an "internal visual screen"].

    PubMed

    Mizrakhi, V M; Protsiuk, R G

    2000-03-01

    In profound impairment of vision, the perception of colour and of seen objects is lost and the person is unable to orient himself in space. The sensory sensations of colour uncovered in this work allowed their use in training the blind to recognize the colour of paper, fabric, etc. Further study of those who have become blind will, we believe, help in identifying eligible people and suitable approaches to educating the blind, fostering the development of the trainee's ability to recognize images on the "inner visual screen".

  1. The Dynamics of Visual Experience, an EEG Study of Subjective Pattern Formation

    PubMed Central

    Elliott, Mark A.; Twomey, Deirdre; Glennon, Mark

    2012-01-01

    Background: Since the origin of psychological science a number of studies have reported visual pattern formation in the absence of either physiological stimulation or direct visual-spatial references. Subjective patterns range from simple phosphenes to complex patterns but are highly specific and reported reliably across studies. Methodology/Principal Findings: Using independent-component analysis (ICA) we report a reduction in amplitude variance consistent with subjective-pattern formation in ventral posterior areas of the electroencephalogram (EEG). The EEG exhibits significantly increased power at delta/theta and gamma-frequencies (point and circle patterns) or a series of high-frequency harmonics of a delta oscillation (spiral patterns). Conclusions/Significance: Subjective-pattern formation may be described in a way entirely consistent with identical pattern formation in fluids or granular flows. In this manner, we propose subjective-pattern structure to be represented within a spatio-temporal lattice of harmonic oscillations which bind topographically organized visual-neuronal assemblies by virtue of low frequency modulation. PMID:22292053

  2. Image-based fall detection and classification of a user with a walking support system

    NASA Astrophysics Data System (ADS)

    Taghvaei, Sajjad; Kosuge, Kazuhiro

    2017-10-01

    The classification of visual human action is important in the development of systems that interact with humans. This study investigates an image-based classification of the human state while using a walking support system to improve the safety and dependability of these systems. We categorize the possible human behavior while utilizing a walker robot into eight states (i.e., sitting, standing, walking, and five falling types), and propose two different methods, namely, normal distribution and hidden Markov models (HMMs), to detect and recognize these states. The visual feature for the state classification is the centroid position of the upper body, which is extracted from the user's depth images. The first method shows that the centroid position follows a normal distribution while walking, which can be adopted to detect any non-walking state. The second method implements HMMs to detect and recognize these states. We then measure and compare the performance of both methods. The classification results are employed to control the motion of a passive-type walker (called "RT Walker") by activating its brakes in non-walking states. Thus, the system can be used for sit/stand support and fall prevention. The experiments are performed with four subjects, including an experienced physiotherapist. Results show that the algorithm can be adapted to the new user's motion pattern within 40 s, with a fall detection rate of 96.25% and state classification rate of 81.0%. The proposed method can be applied to other abnormality detection/classification applications that employ depth image-sensing devices.
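
    To make the first (normal-distribution) method concrete, here is a minimal Python sketch that assumes the upper-body centroid has already been extracted from each depth frame: a Gaussian is fitted to centroids recorded during normal walking, and frames whose likelihood falls below a threshold are flagged as non-walking (e.g. to trigger the brakes). The class interface and threshold are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    class WalkingModel:
        """Flag non-walking states from the upper-body centroid position."""

        def fit(self, walking_centroids):
            # walking_centroids: array (n_frames, n_dims) collected while walking
            self.mean = walking_centroids.mean(axis=0)
            self.cov = np.cov(walking_centroids, rowvar=False)
            return self

        def is_walking(self, centroid, threshold=1e-4):
            # Low likelihood under the walking model suggests a non-walking state.
            return multivariate_normal.pdf(centroid, self.mean, self.cov) > threshold
    ```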

  3. Visual analytics in healthcare education: exploring novel ways to analyze and represent big data in undergraduate medical education

    PubMed Central

    Nilsson, Gunnar; Zary, Nabil

    2014-01-01

    Introduction. The big data present in the medical curriculum that informs undergraduate medical education is beyond human abilities to perceive and analyze. The medical curriculum is the main tool used by teachers and directors to plan, design, and deliver teaching and assessment activities and student evaluations in medical education in a continuous effort to improve it. Big data remains largely unexploited for medical education improvement purposes. The emerging research field of visual analytics has the advantage of combining data analysis and manipulation techniques, information and knowledge representation, and human cognitive strength to perceive and recognize visual patterns. Nevertheless, there is a lack of research on the use and benefits of visual analytics in medical education. Methods. The present study is based on analyzing the data in the medical curriculum of an undergraduate medical program as it concerns teaching activities, assessment methods and learning outcomes in order to explore visual analytics as a tool for finding ways of representing big data from undergraduate medical education for improvement purposes. Cytoscape software was employed to build networks of the identified aspects and visualize them. Results. After the analysis of the curriculum data, eleven aspects were identified. Further analysis and visualization of the identified aspects with Cytoscape resulted in building an abstract model of the examined data that presented three different approaches; (i) learning outcomes and teaching methods, (ii) examination and learning outcomes, and (iii) teaching methods, learning outcomes, examination results, and gap analysis. Discussion. This study identified aspects of medical curriculum that play an important role in how medical education is conducted. The implementation of visual analytics revealed three novel ways of representing big data in the undergraduate medical education context. It appears to be a useful tool to explore such data with possible future implications on healthcare education. It also opens a new direction in medical education informatics research. PMID:25469323

  4. Visual analytics in healthcare education: exploring novel ways to analyze and represent big data in undergraduate medical education.

    PubMed

    Vaitsis, Christos; Nilsson, Gunnar; Zary, Nabil

    2014-01-01

    Introduction. The big data present in the medical curriculum that informs undergraduate medical education is beyond human abilities to perceive and analyze. The medical curriculum is the main tool used by teachers and directors to plan, design, and deliver teaching and assessment activities and student evaluations in medical education in a continuous effort to improve it. Big data remains largely unexploited for medical education improvement purposes. The emerging research field of visual analytics has the advantage of combining data analysis and manipulation techniques, information and knowledge representation, and human cognitive strength to perceive and recognize visual patterns. Nevertheless, there is a lack of research on the use and benefits of visual analytics in medical education. Methods. The present study is based on analyzing the data in the medical curriculum of an undergraduate medical program as it concerns teaching activities, assessment methods and learning outcomes in order to explore visual analytics as a tool for finding ways of representing big data from undergraduate medical education for improvement purposes. Cytoscape software was employed to build networks of the identified aspects and visualize them. Results. After the analysis of the curriculum data, eleven aspects were identified. Further analysis and visualization of the identified aspects with Cytoscape resulted in building an abstract model of the examined data that presented three different approaches; (i) learning outcomes and teaching methods, (ii) examination and learning outcomes, and (iii) teaching methods, learning outcomes, examination results, and gap analysis. Discussion. This study identified aspects of medical curriculum that play an important role in how medical education is conducted. The implementation of visual analytics revealed three novel ways of representing big data in the undergraduate medical education context. It appears to be a useful tool to explore such data with possible future implications on healthcare education. It also opens a new direction in medical education informatics research.

  5. [Visual Texture Agnosia in Humans].

    PubMed

    Suzuki, Kyoko

    2015-06-01

    Visual object recognition requires the processing of both geometric and surface properties. Patients with occipital lesions may have visual agnosia, which is impairment in the recognition and identification of visually presented objects primarily through their geometric features. An analogous condition involving the failure to recognize an object by its texture may exist, which can be called visual texture agnosia. Here we present two cases with visual texture agnosia. Case 1 had left homonymous hemianopia and right upper quadrantanopia, along with achromatopsia, prosopagnosia, and texture agnosia, because of damage to his left ventromedial occipitotemporal cortex and right lateral occipito-temporo-parietal cortex due to multiple cerebral embolisms. Although he showed difficulty matching and naming textures of real materials, he could readily name visually presented objects by their contours. Case 2 had right lower quadrantanopia, along with impairment in stereopsis and recognition of texture in 2D images, because of subcortical hemorrhage in the left occipitotemporal region. He failed to recognize shapes based on texture information, whereas shape recognition based on contours was well preserved. Our findings, along with those of three reported cases with texture agnosia, indicate that there are separate channels for processing texture, color, and geometric features, and that the regions around the left collateral sulcus are crucial for texture processing.

  6. Orthographic processing in pigeons (Columba livia)

    PubMed Central

    Scarf, Damian; Boy, Karoline; Uber Reinert, Anelisie; Devine, Jack; Güntürkün, Onur; Colombo, Michael

    2016-01-01

    Learning to read involves the acquisition of letter–sound relationships (i.e., decoding skills) and the ability to visually recognize words (i.e., orthographic knowledge). Although decoding skills are clearly human-unique, given they are seated in language, recent research and theory suggest that orthographic processing may derive from the exaptation or recycling of visual circuits that evolved to recognize everyday objects and shapes in our natural environment. An open question is whether orthographic processing is limited to visual circuits that are similar to our own or a product of plasticity common to many vertebrate visual systems. Here we show that pigeons, organisms that separated from humans more than 300 million y ago, process words orthographically. Specifically, we demonstrate that pigeons trained to discriminate words from nonwords picked up on the orthographic properties that define words and used this knowledge to identify words they had never seen before. In addition, the pigeons were sensitive to the bigram frequencies of words (i.e., the common co-occurrence of certain letter pairs), the edit distance between nonwords and words, and the internal structure of words. Our findings demonstrate that visual systems organizationally distinct from the primate visual system can also be exapted or recycled to process the visual word form. PMID:27638211

  7. Learning piano melodies in visuo-motor or audio-motor training conditions and the neural correlates of their cross-modal transfer.

    PubMed

    Engel, Annerose; Bangert, Marc; Horbank, David; Hijmans, Brenda S; Wilkens, Katharina; Keller, Peter E; Keysers, Christian

    2012-11-01

    To investigate the cross-modal transfer of movement patterns necessary to perform melodies on the piano, 22 non-musicians learned to play short sequences on a piano keyboard by (1) merely listening and replaying (vision of own fingers occluded) or (2) merely observing silent finger movements and replaying (on a silent keyboard). After training, participants recognized with above chance accuracy (1) audio-motor learned sequences upon visual presentation (89±17%), and (2) visuo-motor learned sequences upon auditory presentation (77±22%). The recognition rates for visual presentation significantly exceeded those for auditory presentation (p<.05). fMRI revealed that observing finger movements corresponding to audio-motor trained melodies is associated with stronger activation in the left rolandic operculum than observing untrained sequences. This region was also involved in silent execution of sequences, suggesting that a link to motor representations may play a role in cross-modal transfer from audio-motor training condition to visual recognition. No significant differences in brain activity were found during listening to visuo-motor trained compared to untrained melodies. Cross-modal transfer was stronger from the audio-motor training condition to visual recognition and this is discussed in relation to the fact that non-musicians are familiar with how their finger movements look (motor-to-vision transformation), but not with how they sound on a piano (motor-to-sound transformation). Copyright © 2012 Elsevier Inc. All rights reserved.

  8. Sustained Attention in Real Classroom Settings: An EEG Study.

    PubMed

    Ko, Li-Wei; Komarov, Oleksii; Hairston, W David; Jung, Tzyy-Ping; Lin, Chin-Teng

    2017-01-01

    Sustained attention is a process that enables the maintenance of response persistence and continuous effort over extended periods of time. Performing attention-related tasks in real life involves the need to ignore a variety of distractions and inhibit attention shifts to irrelevant activities. This study investigates electroencephalography (EEG) spectral changes during a sustained attention task within a real classroom environment. Eighteen healthy students were instructed to recognize as fast as possible special visual targets that were displayed during regular university lectures. Sorting their EEG spectra with respect to response times, which indicated the level of visual alertness to randomly introduced visual stimuli, revealed significant changes in the brain oscillation patterns. The results of power-frequency analysis demonstrated a relationship between variations in the EEG spectral dynamics and impaired performance in the sustained attention task. Across subjects and sessions, prolongation of the response time was preceded by an increase in the delta and theta EEG powers over the occipital region, and decrease in the beta power over the occipital and temporal regions. Meanwhile, implementation of the complex attention task paradigm into a real-world classroom setting makes it possible to investigate specific mutual links between brain activities and factors that cause impaired behavioral performance, such as development and manifestation of classroom mental fatigue. The findings of the study set a basis for developing a system capable of estimating the level of visual attention during real classroom activities by monitoring changes in the EEG spectra.
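
    As a rough illustration of the spectral analysis described above, the Python sketch below computes mean delta, theta and beta power for one pre-stimulus EEG segment with Welch's method (SciPy), so that the band powers can later be related to the response time of the following target. Band edges, sampling rate and windowing are assumptions, not the study's exact parameters.

    ```python
    import numpy as np
    from scipy.signal import welch

    BANDS = {"delta": (1, 4), "theta": (4, 8), "beta": (13, 30)}  # Hz, assumed edges

    def band_powers(eeg_segment, fs=250.0):
        """Mean spectral power per band for one EEG segment.

        eeg_segment : array (n_channels, n_samples), e.g. occipital channels.
        Returns a dict mapping band name to power averaged over channels.
        """
        freqs, psd = welch(eeg_segment, fs=fs,
                           nperseg=min(512, eeg_segment.shape[-1]))
        powers = {}
        for band, (lo, hi) in BANDS.items():
            mask = (freqs >= lo) & (freqs < hi)
            powers[band] = psd[:, mask].mean()
        return powers
    ```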

  9. Sustained Attention in Real Classroom Settings: An EEG Study

    PubMed Central

    Ko, Li-Wei; Komarov, Oleksii; Hairston, W. David; Jung, Tzyy-Ping; Lin, Chin-Teng

    2017-01-01

    Sustained attention is a process that enables the maintenance of response persistence and continuous effort over extended periods of time. Performing attention-related tasks in real life involves the need to ignore a variety of distractions and inhibit attention shifts to irrelevant activities. This study investigates electroencephalography (EEG) spectral changes during a sustained attention task within a real classroom environment. Eighteen healthy students were instructed to recognize as fast as possible special visual targets that were displayed during regular university lectures. Sorting their EEG spectra with respect to response times, which indicated the level of visual alertness to randomly introduced visual stimuli, revealed significant changes in the brain oscillation patterns. The results of power-frequency analysis demonstrated a relationship between variations in the EEG spectral dynamics and impaired performance in the sustained attention task. Across subjects and sessions, prolongation of the response time was preceded by an increase in the delta and theta EEG powers over the occipital region, and decrease in the beta power over the occipital and temporal regions. Meanwhile, implementation of the complex attention task paradigm into a real-world classroom setting makes it possible to investigate specific mutual links between brain activities and factors that cause impaired behavioral performance, such as development and manifestation of classroom mental fatigue. The findings of the study set a basis for developing a system capable of estimating the level of visual attention during real classroom activities by monitoring changes in the EEG spectra. PMID:28824396

  10. Monitoring the growth or decline of vegetation on mine dumps

    NASA Technical Reports Server (NTRS)

    Gilbertson, B. P. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. It was established that particular mine dumps throughout the entire test area can be detected and identified. It was also established that patterns of vegetative growth on the mine dumps can be recognized from a simple visual analysis of photographic images. Because vegetation tends to occur in patches on many mine dumps, it is unsatisfactory to classify complete dumps into categories of percentage vegetative cover. A more desirable approach is to classify the patches of vegetation themselves. The coarse resolution of conventional densitometers restricts the accuracy of this procedure, and consequently a direct analysis of ERTS CCTs is preferred. A set of computer programs was written to perform the data reading and manipulating functions required for basic CCT analysis.

  11. Abnormal uterine bleeding unrelated to structural uterine abnormalities: management in the perimenopausal period.

    PubMed

    Sabbioni, Lorenzo; Zanetti, Isabella; Orlandini, Cinzia; Petraglia, Felice; Luisi, Stefano

    2017-02-01

    Abnormal uterine bleeding (AUB) is one of the commonest health problems encountered by women and a frequent phenomenon during the menopausal transition. The clinical management of AUB must follow a standardized classification system to establish the best diagnostic pathway and the optimal therapy. The PALM-COEIN classification system has been approved by the International Federation of Gynecology and Obstetrics (FIGO); it recognizes structural causes of AUB, which can be assessed visually with imaging techniques or histopathology, and non-structural entities such as coagulopathies, ovulatory dysfunction, endometrial and iatrogenic causes, and disorders not yet classified. In this review we aim to evaluate the management of non-structural causes of AUB during the menopausal transition, when women commonly experience changes in menstrual bleeding patterns and unexpected bleeding episodes that affect their quality of life.

  12. How Temporal and Spatial Aspects of Presenting Visualizations Affect Learning about Locomotion Patterns

    ERIC Educational Resources Information Center

    Imhof, Birgit; Scheiter, Katharina; Edelmann, Jorg; Gerjets, Peter

    2012-01-01

    Two studies investigated the effectiveness of dynamic and static visualizations for a perceptual learning task (locomotion pattern classification). In Study 1, seventy-five students viewed either dynamic, static-sequential, or static-simultaneous visualizations. For tasks of intermediate difficulty, dynamic visualizations led to better…

  13. The Human is the Loop: New Directions for Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Hossain, Shahriar H.; Ramakrishnan, Naren

    2014-01-28

    Visual analytics is the science of marrying interactive visualizations and analytic algorithms to support exploratory knowledge discovery in large datasets. We argue for a shift from a ‘human in the loop’ philosophy for visual analytics to a ‘human is the loop’ viewpoint, where the focus is on recognizing analysts’ work processes, and seamlessly fitting analytics into that existing interactive process. We survey a range of projects that provide visual analytic support contextually in the sensemaking loop, and outline a research agenda along with future challenges.

  14. Developmental Changes in the Visual Span for Reading

    PubMed Central

    Kwon, MiYoung; Legge, Gordon E.; Dubbels, Brock R.

    2007-01-01

    The visual span for reading refers to the range of letters, formatted as in text, that can be recognized reliably without moving the eyes. It is likely that the size of the visual span is determined primarily by characteristics of early visual processing. It has been hypothesized that the size of the visual span imposes a fundamental limit on reading speed (Legge, Mansfield, & Chung, 2001). The goal of the present study was to investigate developmental changes in the size of the visual span in school-age children, and the potential impact of these changes on children’s reading speed. The study design included groups of 10 children in 3rd, 5th, and 7th grade, and 10 adults. Visual span profiles were measured by asking participants to recognize letters in trigrams (random strings of three letters) flashed for 100 ms at varying letter positions left and right of the fixation point. Two print sizes (0.25° and 1.0°) were used. Over a block of trials, a profile was built up showing letter recognition accuracy (% correct) versus letter position. The area under this profile was defined to be the size of the visual span. Reading speed was measured in two ways: with Rapid Serial Visual Presentation (RSVP) and with short blocks of text (termed Flashcard presentation). Consistent with our prediction, we found that the size of the visual span increased linearly with grade level and it was significantly correlated with reading speed for both presentation methods. Regression analysis using the size of the visual span as a predictor indicated that 34% to 52% of variability in reading speeds can be accounted for by the size of the visual span. These findings are consistent with a significant role of early visual processing in the development of reading skills. PMID:17845810
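
    The span-size definition used above (the area under the accuracy-versus-position profile) can be written down in a few lines. The Python sketch below is illustrative only; subtracting chance accuracy before integrating is an assumption, not necessarily the paper's exact procedure.

    ```python
    import numpy as np

    def visual_span_size(positions, proportion_correct, chance=1 / 26):
        """Area under a visual-span profile.

        positions          : letter positions relative to fixation, in ascending
                             order (e.g. -6 .. +6)
        proportion_correct : recognition accuracy (0-1) at each position
        """
        above_chance = np.clip(np.asarray(proportion_correct) - chance, 0, None)
        return np.trapz(above_chance, positions)
    ```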

  15. Beyond Recipe: Leading Edges for Teaching Spelling.

    ERIC Educational Resources Information Center

    Garmston, Robert; Zimmerman, Diane

    A good spelling teacher teaches by "taste" rather than by "recipe": instead of strictly adhering to procedural outlines, good teachers alter their lessons according to students' needs. In addition, good teachers: (1) recognize the importance of visualization for spelling; (2) understand the two kinds of visualization--for…

  16. The Nature and Process of Development in Averaged Visually Evoked Potentials: Discussion on Pattern Structure.

    ERIC Educational Resources Information Center

    Izawa, Shuji; Mizutani, Tohru

    This paper examines the development of visually evoked EEG patterns in retarded and normal subjects. The paper focuses on the averaged visually evoked potentials (AVEP) in the central and occipital regions of the brain in eyes closed and eyes open conditions. Wave pattern, amplitude, and latency are examined. The first section of the paper reviews…

  17. Cross-Modal Decoding of Neural Patterns Associated with Working Memory: Evidence for Attention-Based Accounts of Working Memory

    PubMed Central

    Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica

    2016-01-01

    Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high–low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. PMID:25146374
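
    A compact sketch of the between-task prediction scheme described above: a classifier trained on one task's load labels is tested on the other task, in both directions, with above-chance accuracy taken as evidence of shared load-related patterns. The linear SVM and the data shapes are assumptions made for illustration.

    ```python
    from sklearn.svm import SVC

    def cross_task_decoding(X_visual, y_visual, X_verbal, y_verbal):
        """Between-task prediction of working-memory load (high vs. low).

        X_* : arrays (n_trials, n_voxels) of delay-period activation patterns
        y_* : binary load labels (0 = low, 1 = high)
        Returns (visual-to-verbal accuracy, verbal-to-visual accuracy).
        """
        clf = SVC(kernel="linear")
        acc_vis_to_verb = clf.fit(X_visual, y_visual).score(X_verbal, y_verbal)
        acc_verb_to_vis = clf.fit(X_verbal, y_verbal).score(X_visual, y_visual)
        return acc_vis_to_verb, acc_verb_to_vis
    ```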

  18. Do Pattern-Focused Visuals Improve Skin Self-Examination Performance? Explicating the Visual Skill Acquisition Model

    PubMed Central

    JOHN, KEVIN K.; JENSEN, JAKOB D.; KING, ANDY J.; RATCLIFF, CHELSEA L.; GROSSMAN, DOUGLAS

    2017-01-01

    Skin self-examination (SSE) consists of routinely checking the body for atypical moles that might be cancerous. Identifying atypical moles is a visual task; thus, SSE training materials utilize pattern-focused visuals to cultivate this skill. Despite widespread use, researchers have yet to explicate how pattern-focused visuals cultivate visual skill. Using eye tracking to capture the visual scanpaths of a sample of laypersons (N = 92), the current study employed a 2 (pattern: ABCDE vs. ugly duckling sign [UDS]) × 2 (presentation: photorealistic images vs. illustrations) factorial design to assess whether and how pattern-focused visuals can increase layperson accuracy in identifying atypical moles. Overall, illustrations resulted in greater sensitivity, while photos resulted in greater specificity. The UDS × photorealistic condition showed greatest specificity. For those in the photo condition with high self-efficacy, UDS increased specificity directly. For those in the photo condition with self-efficacy levels at the mean or lower, there was a conditional indirect effect such that these individuals spent a larger amount of their viewing time observing the atypical moles, and time on target was positively related to specificity. Illustrations provided significant gains in specificity for those with low-to-moderate self-efficacy by increasing total fixation time on the atypical moles. Findings suggest that maximizing visual processing efficiency could enhance existing SSE training techniques. PMID:28759333

  19. Pericentral retinopathy and racial differences in hydroxychloroquine toxicity.

    PubMed

    Melles, Ronald B; Marmor, Michael F

    2015-01-01

    To describe patterns of hydroxychloroquine retinopathy distinct from the classic parafoveal (bull's eye) maculopathy. Retrospective case series. Patients from a large multi-provider group practice and a smaller university referral practice diagnosed with hydroxychloroquine retinopathy. Patients with widespread or "end-stage" retinopathy were excluded. Review of ophthalmic studies (fundus photography, spectral-domain optical coherence tomography, fundus autofluorescence, multifocal electroretinography, visual fields) and classification of retinopathy into 1 of 3 patterns: parafoveal (retinal changes 2°-6° from the fovea), pericentral (retinal changes ≥ 8° from the fovea), or mixed (retinal changes in both parafoveal and pericentral areas). Relative frequency of different patterns of hydroxychloroquine retinopathy and comparison of risk factors. Of 201 total patients (18% Asian) with hydroxychloroquine retinopathy, 153 (76%) had typical parafoveal changes, 24 (12%) also had a zone of pericentral damage, and 24 (12%) had pericentral retinopathy without any parafoveal damage. Pericentral retinopathy alone was seen in 50% of Asian patients but only in 2% of white patients. Patients with the pericentral pattern were taking hydroxychloroquine for a somewhat longer duration (19.5 vs. 15.0 years, P < 0.01) and took a larger cumulative dose (2186 vs. 1813 g, P = 0.02) than patients with the parafoveal pattern, but they were diagnosed at a more severe stage of toxicity. Hydroxychloroquine retinopathy does not always develop in a parafoveal (bull's eye) pattern, and a pericentral pattern of damage is especially prevalent among Asian patients. Screening practices may need to be adjusted to recognize pericentral and parafoveal hydroxychloroquine retinopathy. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  20. Treatment of a patient with posterior cortical atrophy (PCA) with chiropractic manipulation and Dynamic Neuromuscular Stabilization (DNS): A case report.

    PubMed

    Francio, Vinicius T; Boesch, Ron; Tunning, Michael

    2015-03-01

    Posterior cortical atrophy (PCA) is a rare progressive neurodegenerative syndrome whose unusual symptoms include deficits of balance, bodily orientation, chronic pain syndrome and dysfunctional motor patterns. Current research provides minimal guidance on support, education and recommended evidence-based patient care. This case reports the utilization of chiropractic spinal manipulation, dynamic neuromuscular stabilization (DNS), and other adjunctive procedures along with medical treatment of PCA. A 54-year-old male presented to a chiropractic clinic with non-specific back pain associated with visual disturbances, slight memory loss, and inappropriate cognitive motor control. After physical examination, brain MRI and PET scan, the diagnosis of PCA was established. Chiropractic spinal manipulation and dynamic neuromuscular stabilization were utilized as adjunctive care to conservative pharmacological treatment of PCA. Outcome measurements showed a 60% improvement in the patient's perception of health, with restored functional neuromuscular patterns and improvements in locomotion, posture, pain control, mood, tolerance to activities of daily living (ADLs) and overall satisfactory progress in quality of life. Yet, no changes in memory loss progression, visuospatial orientation, or speech were observed. PCA is a progressive and debilitating condition. Because of poor awareness of PCA among physicians, patients usually receive incomplete care. Additional efforts must be centered on the musculoskeletal features of PCA, aiming to enhance quality of life and functional improvement (FI). Adjunctive rehabilitative treatment is considered essential for individuals with cognitive and motor disturbances, and manual medicine procedures may be considered a viable option.

  1. Decoding complex flow-field patterns in visual working memory.

    PubMed

    Christophel, Thomas B; Haynes, John-Dylan

    2014-05-01

    There has been a long history of research on visual working memory. Whereas early studies have focused on the role of lateral prefrontal cortex in the storage of sensory information, this has been challenged by research in humans that has directly assessed the encoding of perceptual contents, pointing towards a role of visual and parietal regions during storage. In a previous study we used pattern classification to investigate the storage of complex visual color patterns across delay periods. This revealed coding of such contents in early visual and parietal brain regions. Here we aim to investigate whether the involvement of visual and parietal cortex is also observable for other types of complex, visuo-spatial pattern stimuli. Specifically, we used a combination of fMRI and multivariate classification to investigate the retention of complex flow-field stimuli defined by the spatial patterning of motion trajectories of random dots. Subjects were trained to memorize the precise spatial layout of these stimuli and to retain this information during an extended delay. We used a multivariate decoding approach to identify brain regions where spatial patterns of activity encoded the memorized stimuli. Content-specific memory signals were observable in motion sensitive visual area MT+ and in posterior parietal cortex that might encode spatial information in a modality independent manner. Interestingly, we also found information about the memorized visual stimulus in somatosensory cortex, suggesting a potential crossmodal contribution to memory. Our findings thus indicate that working memory storage of visual percepts might be distributed across unimodal, multimodal and even crossmodal brain regions. Copyright © 2014 Elsevier Inc. All rights reserved.
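
    The core analysis step described here, training a classifier on voxel-wise activity patterns and testing it with cross-validation, can be sketched as follows. The data are synthetic stand-ins for delay-period fMRI patterns, and the sizes are arbitrary assumptions.

```python
# Minimal sketch (synthetic data): cross-validated decoding of which stimulus
# was memorized from distributed "voxel" patterns recorded during the delay.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_voxels, n_stimuli = 120, 200, 4          # hypothetical sizes
labels = rng.integers(0, n_stimuli, n_trials)        # which flow field was memorized
stim_patterns = rng.normal(0, 1, (n_stimuli, n_voxels))   # stimulus-specific signal
data = stim_patterns[labels] + rng.normal(0, 3, (n_trials, n_voxels))  # noisy trials

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = cross_val_score(clf, data, labels, cv=5)       # held-out folds, as in MVPA
print(f"decoding accuracy: {acc.mean():.2f} (chance = {1 / n_stimuli:.2f})")
```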

  2. How visual search relates to visual diagnostic performance: a narrative systematic review of eye-tracking research in radiology.

    PubMed

    van der Gijp, A; Ravesloot, C J; Jarodzka, H; van der Schaaf, M F; van der Schaaf, I C; van Schaik, J P J; Ten Cate, Th J

    2017-08-01

    Eye tracking research has been conducted for decades to gain understanding of visual diagnosis such as in radiology. For educational purposes, it is important to identify visual search patterns that are related to high perceptual performance and to identify effective teaching strategies. This review of eye-tracking literature in the radiology domain aims to identify visual search patterns associated with high perceptual performance. Databases PubMed, EMBASE, ERIC, PsycINFO, Scopus and Web of Science were searched using 'visual perception' OR 'eye tracking' AND 'radiology' and synonyms. Two authors independently screened search results and included eye tracking studies concerning visual skills in radiology published between January 1, 1994 and July 31, 2015. Two authors independently assessed study quality with the Medical Education Research Study Quality Instrument, and extracted study data with respect to design, participant and task characteristics, and variables. A thematic analysis was conducted to extract and arrange study results, and a textual narrative synthesis was applied for data integration and interpretation. The search resulted in 22 relevant full-text articles. Thematic analysis resulted in six themes that informed the relation between visual search and level of expertise: (1) time on task, (2) eye movement characteristics of experts, (3) differences in visual attention, (4) visual search patterns, (5) search patterns in cross sectional stack imaging, and (6) teaching visual search strategies. Expert search was found to be characterized by a global-focal search pattern, which represents an initial global impression, followed by a detailed, focal search-to-find mode. Specific task-related search patterns, like drilling through CT scans and systematic search in chest X-rays, were found to be related to high expert levels. One study investigated teaching of visual search strategies, and did not find a significant effect on perceptual performance. Eye tracking literature in radiology indicates several search patterns are related to high levels of expertise, but teaching novices to search as an expert may not be effective. Experimental research is needed to find out which search strategies can improve image perception in learners.

  3. Endogenous Sequential Cortical Activity Evoked by Visual Stimuli

    PubMed Central

    Miller, Jae-eun Kang; Hamm, Jordan P.; Jackson, Jesse; Yuste, Rafael

    2015-01-01

    Although the functional properties of individual neurons in primary visual cortex have been studied intensely, little is known about how neuronal groups could encode changing visual stimuli using temporal activity patterns. To explore this, we used in vivo two-photon calcium imaging to record the activity of neuronal populations in primary visual cortex of awake mice in the presence and absence of visual stimulation. Multidimensional analysis of the network activity allowed us to identify neuronal ensembles defined as groups of cells firing in synchrony. These synchronous groups of neurons were themselves activated in sequential temporal patterns, which repeated at much higher proportions than chance and were triggered by specific visual stimuli such as natural visual scenes. Interestingly, sequential patterns were also present in recordings of spontaneous activity without any sensory stimulation and were accompanied by precise firing sequences at the single-cell level. Moreover, intrinsic dynamics could be used to predict the occurrence of future neuronal ensembles. Our data demonstrate that visual stimuli recruit similar sequential patterns to the ones observed spontaneously, consistent with the hypothesis that already existing Hebbian cell assemblies firing in predefined temporal sequences could be the microcircuit substrate that encodes visual percepts changing in time. PMID:26063915
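
    One standard way to operationalize "repeated at much higher proportions than chance" is to compare observed coactivity against shuffled surrogate data. The sketch below illustrates that logic for synchronous-frame detection on synthetic, binarized activity; it is an illustration of the general approach, not the authors' analysis pipeline.

```python
# Illustrative sketch (synthetic binarized activity): frames whose coactivity
# exceeds a circular-shift surrogate threshold are flagged as candidate ensembles.
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_frames = 50, 2000
spikes = (rng.random((n_cells, n_frames)) < 0.02).astype(int)

observed = spikes.sum(axis=0)                        # co-active cells per frame

def surrogate_maxima(spikes, n_surrogates=500):
    """Max per-frame coactivity after independently circular-shifting each cell."""
    maxima = np.empty(n_surrogates)
    for s in range(n_surrogates):
        shifted = np.vstack([np.roll(row, rng.integers(row.size)) for row in spikes])
        maxima[s] = shifted.sum(axis=0).max()
    return maxima

threshold = np.percentile(surrogate_maxima(spikes), 95)
ensemble_frames = np.flatnonzero(observed > threshold)
print(f"coactivity threshold: {threshold:.0f} cells; "
      f"{ensemble_frames.size} candidate ensemble frames")
```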

  4. Idiosyncratic characteristics of saccadic eye movements when viewing different visual environments.

    PubMed

    Andrews, T J; Coppola, D M

    1999-08-01

    Eye position was recorded in different viewing conditions to assess whether the temporal and spatial characteristics of saccadic eye movements in different individuals are idiosyncratic. Our aim was to determine the degree to which oculomotor control is based on endogenous factors. A total of 15 naive subjects viewed five visual environments: (1) The absence of visual stimulation (i.e. a dark room); (2) a repetitive visual environment (i.e. simple textured patterns); (3) a complex natural scene; (4) a visual search task; and (5) reading text. Although differences in visual environment had significant effects on eye movements, idiosyncrasies were also apparent. For example, the mean fixation duration and size of an individual's saccadic eye movements when passively viewing a complex natural scene covaried significantly with those same parameters in the absence of visual stimulation and in a repetitive visual environment. In contrast, an individual's spatio-temporal characteristics of eye movements during active tasks such as reading text or visual search covaried together, but did not correlate with the pattern of eye movements detected when viewing a natural scene, simple patterns or in the dark. These idiosyncratic patterns of eye movements in normal viewing reveal an endogenous influence on oculomotor control. The independent covariance of eye movements during different visual tasks shows that saccadic eye movements during active tasks like reading or visual search differ from those engaged during the passive inspection of visual scenes.

  5. Neural Networks for the Beginner.

    ERIC Educational Resources Information Center

    Snyder, Robin M.

    Motivated by the brain, neural networks are a right-brained approach to artificial intelligence that is used to recognize patterns based on previous training. In practice, one would not program an expert system to recognize a pattern and one would not train a neural network to make decisions from rules; but one could combine the best features of…

  6. Effects of aging on identifying emotions conveyed by point-light walkers.

    PubMed

    Spencer, Justine M Y; Sekuler, Allison B; Bennett, Patrick J; Giese, Martin A; Pilz, Karin S

    2016-02-01

    The visual system is able to recognize human motion simply from point lights attached to the major joints of an actor. Moreover, it has been shown that younger adults are able to recognize emotions from such dynamic point-light displays. Previous research has suggested that the ability to perceive emotional stimuli changes with age. For example, it has been shown that older adults are impaired in recognizing emotional expressions from static faces. In addition, it has been shown that older adults have difficulties perceiving visual motion, which might be helpful to recognize emotions from point-light displays. In the current study, 4 experiments were completed in which older and younger adults were asked to identify 3 emotions (happy, sad, and angry) displayed by 4 types of point-light walkers: upright and inverted normal walkers, which contained both local motion and global form information; upright scrambled walkers, which contained only local motion information; and upright random-position walkers, which contained only global form information. Overall, emotion discrimination accuracy was lower in older participants compared with younger participants, specifically when identifying sad and angry point-light walkers. In addition, observers in both age groups were able to recognize emotions from all types of point-light walkers, suggesting that both older and younger adults are able to recognize emotions from point-light walkers on the basis of local motion or global form. (c) 2016 APA, all rights reserved.

  7. Karen and George: Face Recognition by Visually Impaired Children.

    ERIC Educational Resources Information Center

    Ellis, Hadyn D.; And Others

    1988-01-01

    Two visually impaired children, aged 8 and 10, appeared to have severe difficulty in recognizing faces. After assessment, it became apparent that only one had unusually poor facial recognition skills. After training, which included matching face photographs, schematic faces, and digitized faces, there was no evidence of any improvement.…

  8. Breaking Snake Camouflage: Humans Detect Snakes More Accurately than Other Animals under Less Discernible Visual Conditions.

    PubMed

    Kawai, Nobuyuki; He, Hongshen

    2016-01-01

    Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions.

  9. Breaking Snake Camouflage: Humans Detect Snakes More Accurately than Other Animals under Less Discernible Visual Conditions

    PubMed Central

    Kawai, Nobuyuki; He, Hongshen

    2016-01-01

    Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions. PMID:27783686
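
    The degradation procedure can be approximated, for illustration, by linearly blending an image with random noise at decreasing noise levels; the actual RISE technique is more sophisticated, so treat the following as a rough sketch with synthetic data.

```python
# Rough sketch (not the RISE implementation): generate a sequence of images
# blended with random noise, from fully degraded to fully clear.
import numpy as np

def degrade_sequence(image, steps=10, seed=0):
    """Return images interpolated between pure noise and the original."""
    rng = np.random.default_rng(seed)
    noise = rng.random(image.shape)
    levels = np.linspace(1.0, 0.0, steps)        # 1.0 = pure noise, 0.0 = original
    return [(1 - a) * image + a * noise for a in levels]

img = np.clip(np.random.default_rng(1).normal(0.5, 0.1, (64, 64)), 0, 1)  # placeholder image
frames = degrade_sequence(img)
print(len(frames), frames[0].shape)
```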

  10. The effect of flower-like and non-flower-like visual properties on choice of unrewarding patterns by bumblebees

    NASA Astrophysics Data System (ADS)

    Orbán, Levente L.; Plowright, Catherine M. S.

    2013-07-01

    How do distinct visual stimuli help bumblebees discover flowers before they have experienced any reward outside of their nest? Two visual floral properties, type of pattern (concentric vs radial) and its position on unrewarding artificial flowers (central vs peripheral on corolla), were manipulated in two experiments. Both visual properties showed significant effects on floral choice. When pitted against each other, pattern was more important than position. Experiment 1 shows a significant effect of concentric pattern position, and experiment 2 shows a significant preference towards radial patterns regardless of their position. These results show that the presence of markings at the center of a flower is not as important as the presence of markings that will direct bees there.

  11. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats

    PubMed Central

    Rosselli, Federica B.; Alemi, Alireza; Ansuini, Alessio; Zoccolan, Davide

    2015-01-01

    In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning. PMID:25814936

  12. The anatomy of object recognition--visual form agnosia caused by medial occipitotemporal stroke.

    PubMed

    Karnath, Hans-Otto; Rüter, Johannes; Mandler, André; Himmelbach, Marc

    2009-05-06

    The influential model of visual information processing by Milner and Goodale (1995) has suggested a dissociation between action- and perception-related processing in a dorsal versus ventral stream projection. It was inspired substantially by the observation of a double dissociation of disturbed visual action versus perception in patients with optic ataxia on the one hand and patients with visual form agnosia (VFA) on the other. Unfortunately, almost all cases with VFA reported so far suffered from inhalational intoxication, the majority with carbon monoxide (CO). Since CO induces a diffuse and widespread pattern of neuronal and white matter damage throughout the whole brain, precise conclusions from these patients with VFA on the selective role of ventral stream structures for shape and orientation perception were difficult. Here, we report patient J.S., who demonstrated VFA after a well-circumscribed brain lesion of stroke etiology. Like the famous patient D.F. with VFA after CO intoxication studied by Milner, Goodale, and coworkers (Goodale et al., 1991, 1994; Milner et al., 1991; Servos et al., 1995; Mon-Williams et al., 2001a,b; Wann et al., 2001; Westwood et al., 2002; McIntosh et al., 2004; Schenk and Milner, 2006), J.S. showed an obvious dissociation between disturbed visual perception of shape and orientation information on the one hand and preserved visuomotor abilities based on the same information on the other. In both hemispheres, damage primarily affected the fusiform and the lingual gyri as well as the adjacent posterior cingulate gyrus. We conclude that these medial structures of the ventral occipitotemporal cortex are integral to the normal flow of shape and contour information into the ventral stream system that allows objects to be recognized.

  13. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy.

    PubMed

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W Tecumseh

    2012-07-19

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups.

  14. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy

    PubMed Central

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W. Tecumseh

    2012-01-01

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups. PMID:22688636

  15. Deaf-And-Mute Sign Language Generation System

    NASA Astrophysics Data System (ADS)

    Kawai, Hideo; Tamura, Shinichi

    1984-08-01

    We have developed a system which can recognize speech and generate the corresponding animation-like sign language sequence. The system is implemented on a popular personal computer. It has three video RAMs and a voice recognition board which can recognize only the registered voice of a specific speaker. Presently, forty sign language patterns and fifty finger spellings are stored on two floppy disks. Each sign pattern is composed of one to four sub-patterns. That is, if the pattern is composed of one sub-pattern, it is displayed as a still pattern. If not, it is displayed as a motion pattern. This system will help communication between deaf-and-mute persons and healthy persons. In order to display at high speed, most programs are written in machine language.

  16. Face-selective neurons maintain consistent visual responses across months

    PubMed Central

    McMahon, David B. T.; Jones, Adam P.; Bondar, Igor V.; Leopold, David A.

    2014-01-01

    Face perception in both humans and monkeys is thought to depend on neurons clustered in discrete, specialized brain regions. Because primates are frequently called upon to recognize and remember new individuals, the neuronal representation of faces in the brain might be expected to change over time. The functional properties of neurons in behaving animals are typically assessed over time periods ranging from minutes to hours, which amounts to a snapshot compared to a lifespan of a neuron. It therefore remains unclear how neuronal properties observed on a given day predict that same neuron's activity months or years later. Here we show that the macaque inferotemporal cortex contains face-selective cells that show virtually no change in their patterns of visual responses over time periods as long as one year. Using chronically implanted microwire electrodes guided by functional MRI targeting, we obtained distinct profiles of selectivity for face and nonface stimuli that served as fingerprints for individual neurons in the anterior fundus (AF) face patch within the superior temporal sulcus. Longitudinal tracking over a series of daily recording sessions revealed that face-selective neurons maintain consistent visual response profiles across months-long time spans despite the influence of ongoing daily experience. We propose that neurons in the AF face patch are specialized for aspects of face perception that demand stability as opposed to plasticity. PMID:24799679

  17. Teaching Tectonics to Undergraduates with Web GIS

    NASA Astrophysics Data System (ADS)

    Anastasio, D. J.; Bodzin, A.; Sahagian, D. L.; Rutzmoser, S.

    2013-12-01

    Geospatial reasoning skills provide a means for manipulating, interpreting, and explaining structured information and are involved in higher-order cognitive processes that include problem solving and decision-making. Appropriately designed tools, technologies, and curriculum can support spatial learning. We present Web-based visualization and analysis tools developed with Javascript APIs to enhance tectonic curricula while promoting geospatial thinking and scientific inquiry. The Web GIS interface integrates graphics, multimedia, and animations that allow users to explore and discover geospatial patterns that are not easily recognized. Features include a swipe tool that enables users to see underneath layers, query tools useful in exploration of earthquake and volcano data sets, a subduction and elevation profile tool which facilitates visualization between map and cross-sectional views, drafting tools, a location function, and interactive image dragging functionality on the Web GIS. The Web GIS is platform independent and can be used on tablets or computers. The GIS tool set enables learners to view, manipulate, and analyze rich data sets from local to global scales, including such data as geology, population, heat flow, land cover, seismic hazards, fault zones, continental boundaries, and elevation using two- and three-dimensional visualization and analytical software. Coverages that allow users to explore plate boundaries and global heat flow processes aided learning in a Lehigh University Earth and environmental science Structural Geology and Tectonics class and are freely available on the Web.

  18. Face-selective neurons maintain consistent visual responses across months.

    PubMed

    McMahon, David B T; Jones, Adam P; Bondar, Igor V; Leopold, David A

    2014-06-03

    Face perception in both humans and monkeys is thought to depend on neurons clustered in discrete, specialized brain regions. Because primates are frequently called upon to recognize and remember new individuals, the neuronal representation of faces in the brain might be expected to change over time. The functional properties of neurons in behaving animals are typically assessed over time periods ranging from minutes to hours, which amounts to a snapshot compared to a lifespan of a neuron. It therefore remains unclear how neuronal properties observed on a given day predict that same neuron's activity months or years later. Here we show that the macaque inferotemporal cortex contains face-selective cells that show virtually no change in their patterns of visual responses over time periods as long as one year. Using chronically implanted microwire electrodes guided by functional MRI targeting, we obtained distinct profiles of selectivity for face and nonface stimuli that served as fingerprints for individual neurons in the anterior fundus (AF) face patch within the superior temporal sulcus. Longitudinal tracking over a series of daily recording sessions revealed that face-selective neurons maintain consistent visual response profiles across months-long time spans despite the influence of ongoing daily experience. We propose that neurons in the AF face patch are specialized for aspects of face perception that demand stability as opposed to plasticity.

  19. Object Representations in Human Visual Cortex Formed Through Temporal Integration of Dynamic Partial Shape Views.

    PubMed

    Orlov, Tanya; Zohary, Ehud

    2018-01-17

    We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. These findings provide strong evidence for a global encoding of shape in the LOC regardless of integration processes required to generate the shape percept. SIGNIFICANCE STATEMENT Visual objects are recognized through spatial integration of features available simultaneously on the retina. The lateral occipital complex (LOC) represents shape faithfully in such conditions even if the object is partially occluded. However, shape must sometimes be reconstructed over both space and time. Such is the case in anorthoscopic perception, when an object is moving behind a narrow slit. In this scenario, spatial information is limited at any moment so the whole-shape percept can only be inferred by integration of successive shape views over time. We find that LOC carries shape-specific information recovered using such temporal integration processes. The shape representation is invariant to slit orientation and is similar to that evoked by a fully viewed image. Existing models of object recognition lack such capabilities. Copyright © 2018 the authors 0270-6474/18/380659-20$15.00/0.

  20. Identification of superficial defects in reconstructed 3D objects using phase-shifting fringe projection

    NASA Astrophysics Data System (ADS)

    Madrigal, Carlos A.; Restrepo, Alejandro; Branch, John W.

    2016-09-01

    3D reconstruction of small objects is used in applications of surface analysis, forensic analysis and tissue reconstruction in medicine. In this paper, we propose a strategy for the 3D reconstruction of small objects and the identification of some superficial defects. We applied a technique of structured light pattern projection, specifically sinusoidal fringes, together with a phase unwrapping algorithm. A CMOS camera was used to capture images and a DLP digital light projector for synchronous projection of the sinusoidal pattern onto the objects. We implemented a technique based on a 2D flat pattern as the calibration process, so the intrinsic and extrinsic parameters of the camera and the DLP were defined. Experimental tests were performed on samples of artificial teeth, coal particles, welding defects and surfaces tested with Vickers indentation. Areas of less than 5 cm were studied. The objects were reconstructed in 3D with densities of about one million points per sample. In addition, the steps of 3D description, identification of primitives, training and classification were implemented to recognize defects such as holes, cracks, roughness textures and bumps. We found that pattern recognition strategies are useful for quality supervision of surfaces when enough points are available to evaluate the defective region, because the identification of defects in small objects is a demanding visual inspection task.
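
    The phase-extraction step at the heart of this approach is well defined: with four fringe images shifted by 90 degrees, the wrapped phase follows from a closed-form arctangent and is then unwrapped. A minimal sketch with simulated frames (not the authors' calibration or reconstruction code):

```python
# Minimal sketch (simulated frames): four-step phase-shifting analysis.
import numpy as np

h, w = 128, 128
_, xx = np.mgrid[0:h, 0:w]
true_phase = 0.02 * (xx - w / 2) ** 2 / w + 8 * np.pi * xx / w   # smooth "height" phase

# Four fringe images with phase shifts of 0, pi/2, pi and 3*pi/2.
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
frames = [0.5 + 0.5 * np.cos(true_phase + s) for s in shifts]

# Wrapped phase from the classical four-step formula.
wrapped = np.arctan2(frames[3] - frames[1], frames[0] - frames[2])

# Simple row-wise unwrapping (robust 2-D unwrapping is a topic of its own).
unwrapped = np.unwrap(wrapped, axis=1)
print("phase gradients recovered:",
      np.allclose(np.diff(unwrapped, axis=1), np.diff(true_phase, axis=1)))
```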

  1. Evidence of different underlying processes in pattern recall and decision-making.

    PubMed

    Gorman, Adam D; Abernethy, Bruce; Farrow, Damian

    2015-01-01

    The visual search characteristics of expert and novice basketball players were recorded during pattern recall and decision-making tasks to determine whether the two tasks shared common visual-perceptual processing strategies. The order in which participants entered the pattern elements in the recall task was also analysed to further examine the nature of the visual-perceptual strategies and the relative emphasis placed upon particular pattern features. The experts demonstrated superior performance across the recall and decision-making tasks [see also Gorman, A. D., Abernethy, B., & Farrow, D. (2012). Classical pattern recall tests and the prospective nature of expert performance. The Quarterly Journal of Experimental Psychology, 65, 1151-1160; Gorman, A. D., Abernethy, B., & Farrow, D. (2013a). Is the relationship between pattern recall and decision-making influenced by anticipatory recall? The Quarterly Journal of Experimental Psychology, 66, 2219-2236], but a number of significant differences in the visual search data highlighted disparities in the processing strategies, suggesting that recall skill may utilize different underlying visual-perceptual processes than those required for accurate decision-making performance in the natural setting. Performance on the recall task was characterized by a proximal-to-distal order of entry of the pattern elements, with participants tending to enter the players located closest to the ball carrier earlier than those located more distal to the ball carrier. The results provide further evidence of the underlying perceptual processes employed by experts when extracting visual information from complex and dynamic patterns.

  2. Anosognosia for obvious visual field defects in stroke patients.

    PubMed

    Baier, Bernhard; Geber, Christian; Müller-Forell, Wiebke; Müller, Notger; Dieterich, Marianne; Karnath, Hans-Otto

    2015-01-01

    Patients with anosognosia for visual field defect (AVFD) fail to consciously recognize their visual field defect. It remains unclear whether specific neural correlates are associated with AVFD. We studied AVFD in 54 patients with acute stroke and a visual field defect. Nineteen percent of this unselected sample showed AVFD. By using modern voxelwise lesion-behaviour mapping techniques we found an association between AVFD and parts of the lingual gyrus, the cuneus as well as the posterior cingulate and corpus callosum. Damage to these regions appears to induce unawareness of visual field defects and thus may play a significant role in conscious visual perception.

  3. Developing Signal-Pattern-Recognition Programs

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.; Hammen, David

    2006-01-01

    Pattern Interpretation and Recognition Application Toolkit Environment (PIRATE) is a block-oriented software system that aids the development of application programs that analyze signals in real time in order to recognize signal patterns that are indicative of conditions or events of interest. PIRATE was originally intended for use in writing application programs to recognize patterns in space-shuttle telemetry signals received at Johnson Space Center's Mission Control Center: application programs were sought to (1) monitor electric currents on shuttle ac power busses to recognize activations of specific power-consuming devices, (2) monitor various pressures and infer the states of affected systems by applying a Kalman filter to the pressure signals, (3) determine fuel-leak rates from sensor data, (4) detect faults in gyroscopes through analysis of system measurements in the frequency domain, and (5) determine drift rates in inertial measurement units by regressing measurements against time. PIRATE can also be used to develop signal-pattern-recognition software for different purposes -- for example, to monitor and control manufacturing processes.
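
    One of the signal-analysis building blocks mentioned above is Kalman filtering of pressure signals to infer system state. The following scalar filter is a generic textbook sketch with invented pressure data, not PIRATE's implementation.

```python
# Generic scalar Kalman filter sketch (invented data, not PIRATE code): smooth a
# noisy pressure signal to estimate the underlying state.
import numpy as np

def kalman_1d(measurements, process_var=1e-4, meas_var=0.04):
    """Constant-state model: x_k = x_{k-1} + w,  z_k = x_k + v."""
    x, p = measurements[0], 1.0          # initial state estimate and variance
    estimates = []
    for z in measurements:
        p += process_var                 # predict step
        k = p / (p + meas_var)           # Kalman gain
        x += k * (z - x)                 # update with the measurement residual
        p *= 1 - k
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(3)
true_pressure = np.concatenate([np.full(100, 2.0), np.full(100, 2.5)])  # hypothetical step change
noisy = true_pressure + rng.normal(0, 0.2, true_pressure.size)
smoothed = kalman_1d(noisy)
print(f"final estimate: {smoothed[-1]:.2f} (true value 2.50)")
```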

  4. TOPICAL REVIEW: Prosthetic interfaces with the visual system: biological issues

    NASA Astrophysics Data System (ADS)

    Cohen, Ethan D.

    2007-06-01

    The design of effective visual prostheses for the blind represents a challenge for biomedical engineers and neuroscientists. Significant progress has been made in the miniaturization and processing power of prosthesis electronics; however development lags in the design and construction of effective machine brain interfaces with visual system neurons. This review summarizes what has been learned about stimulating neurons in the human and primate retina, lateral geniculate nucleus and visual cortex. Each level of the visual system presents unique challenges for neural interface design. Blind patients with the retinal degenerative disease retinitis pigmentosa (RP) are a common population in clinical trials of visual prostheses. The visual performance abilities of normals and RP patients are compared. To generate pattern vision in blind patients, the visual prosthetic interface must effectively stimulate the retinotopically organized neurons in the central visual field to elicit patterned visual percepts. The development of more biologically compatible methods of stimulating visual system neurons is critical to the development of finer spatial percepts. Prosthesis electrode arrays need to adapt to different optimal stimulus locations, stimulus patterns, and patient disease states.

  5. The uncrowded window of object recognition

    PubMed Central

    Pelli, Denis G; Tillman, Katharine A

    2009-01-01

    It is now emerging that vision is usually limited by object spacing rather than size. The visual system recognizes an object by detecting and then combining its features. ‘Crowding’ occurs when objects are too close together and features from several objects are combined into a jumbled percept. Here, we review the explosion of studies on crowding—in grating discrimination, letter and face recognition, visual search, selective attention, and reading—and find a universal principle, the Bouma law. The critical spacing required to prevent crowding is equal for all objects, although the effect is weaker between dissimilar objects. Furthermore, critical spacing at the cortex is independent of object position, and critical spacing at the visual field is proportional to object distance from fixation. The region where object spacing exceeds critical spacing is the ‘uncrowded window’. Observers cannot recognize objects outside of this window and its size limits the speed of reading and search. PMID:18828191
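
    The Bouma law lends itself to a simple worked example: critical spacing grows in proportion to eccentricity. In the sketch below the proportionality constant b = 0.5 is an assumed, commonly cited approximate value, not a figure taken from this abstract.

```python
# Toy illustration of the Bouma law; b = 0.5 is an assumed, commonly cited value.
def critical_spacing(eccentricity_deg, b=0.5):
    """Approximate spacing (deg) needed to escape crowding at a given eccentricity."""
    return b * eccentricity_deg

def is_crowded(spacing_deg, eccentricity_deg, b=0.5):
    return spacing_deg < critical_spacing(eccentricity_deg, b)

for ecc in (2, 5, 10):
    print(f"eccentricity {ecc:>2} deg: critical spacing ~{critical_spacing(ecc):.1f} deg, "
          f"objects 1 deg apart crowded: {is_crowded(1.0, ecc)}")
```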

  6. Recognition of emotion with temporal lobe epilepsy and asymmetrical amygdala damage.

    PubMed

    Fowler, Helen L; Baker, Gus A; Tipples, Jason; Hare, Dougal J; Keller, Simon; Chadwick, David W; Young, Andrew W

    2006-08-01

    Impairments in emotion recognition occur when there is bilateral damage to the amygdala. In this study, ability to recognize auditory and visual expressions of emotion was investigated in people with asymmetrical amygdala damage (AAD) and temporal lobe epilepsy (TLE). Recognition of five emotions was tested across three participant groups: those with right AAD and TLE, those with left AAD and TLE, and a comparison group. Four tasks were administered: recognition of emotion from facial expressions, sentences describing emotion-laden situations, nonverbal sounds, and prosody. Accuracy scores for each task and emotion were analysed, and no consistent overall effect of AAD on emotion recognition was found. However, some individual participants with AAD were significantly impaired at recognizing emotions, in both auditory and visual domains. The findings indicate that a minority of individuals with AAD have impairments in emotion recognition, but no evidence of specific impairments (e.g., visual or auditory) was found.

  7. Complex Event Recognition Architecture

    NASA Technical Reports Server (NTRS)

    Fitzgerald, William A.; Firby, R. James

    2009-01-01

    Complex Event Recognition Architecture (CERA) is the name of a computational architecture, and software that implements the architecture, for recognizing complex event patterns that may be spread across multiple streams of input data. One of the main components of CERA is an intuitive event pattern language that simplifies what would otherwise be the complex, difficult tasks of creating logical descriptions of combinations of temporal events and defining rules for combining information from different sources over time. In this language, recognition patterns are defined in simple, declarative statements that combine point events from given input streams with those from other streams, using conjunction, disjunction, and negation. Patterns can be built on one another recursively to describe very rich, temporally extended combinations of events. Thereafter, a run-time matching algorithm in CERA efficiently matches these patterns against input data and signals when patterns are recognized. CERA can be used to monitor complex systems and to signal operators or initiate corrective actions when anomalous conditions are recognized. CERA can be run as a stand-alone monitoring system, or it can be integrated into a larger system to automatically trigger responses to changing environments or problematic situations.
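
    The idea of declaratively combining point events with conjunction, disjunction, and negation can be illustrated with a toy matcher; the combinators and event names below are invented for illustration and are not CERA's pattern language or API.

```python
# Toy pattern combinators (invented, not CERA's language): conjunction,
# disjunction and negation over events accumulated from an input stream.
def match(name):
    return lambda seen: name in seen

def all_of(*patterns):                  # conjunction
    return lambda seen: all(p(seen) for p in patterns)

def any_of(*patterns):                  # disjunction
    return lambda seen: any(p(seen) for p in patterns)

def not_(pattern):                      # negation
    return lambda seen: not pattern(seen)

# "Bus power came on and either the heater or the pump started, with no fault."
pattern = all_of(match("bus_power_on"),
                 any_of(match("heater_on"), match("pump_on")),
                 not_(match("fault")))

stream = ["telemetry_ok", "bus_power_on", "pump_on"]   # hypothetical event stream
seen = set()
for event in stream:                    # run-time matching as events arrive
    seen.add(event)
    if pattern(seen):
        print(f"pattern recognized after event '{event}'")
        break
```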

  8. Retinal Wave Patterns Are Governed by Mutual Excitation among Starburst Amacrine Cells and Drive the Refinement and Maintenance of Visual Circuits

    PubMed Central

    Xu, Hong-Ping; Burbridge, Timothy J.; Ye, Meijun; Chen, Minggang; Ge, Xinxin; Zhou, Z. Jimmy

    2016-01-01

    Retinal waves are correlated bursts of spontaneous activity whose spatiotemporal patterns are critical for early activity-dependent circuit elaboration and refinement in the mammalian visual system. Three separate developmental wave epochs or stages have been described, but the mechanism(s) of pattern generation of each and their distinct roles in visual circuit development remain incompletely understood. We used neuroanatomical, in vitro and in vivo electrophysiological, and optical imaging techniques in genetically manipulated mice to examine the mechanisms of wave initiation and propagation and the role of wave patterns in visual circuit development. Through deletion of β2 subunits of nicotinic acetylcholine receptors (β2-nAChRs) selectively from starburst amacrine cells (SACs), we show that mutual excitation among SACs is critical for Stage II (cholinergic) retinal wave propagation, supporting models of wave initiation and pattern generation from within a single retinal cell type. We also demonstrate that β2-nAChRs in SACs, and normal wave patterns, are necessary for eye-specific segregation. Finally, we show that Stage III (glutamatergic) retinal waves are not themselves necessary for normal eye-specific segregation, but elimination of both Stage II and Stage III retinal waves dramatically disrupts eye-specific segregation. This suggests that persistent Stage II retinal waves can adequately compensate for Stage III retinal wave loss during the development and refinement of eye-specific segregation. These experiments confirm key features of the “recurrent network” model for retinal wave propagation and clarify the roles of Stage II and Stage III retinal wave patterns in visual circuit development. SIGNIFICANCE STATEMENT Spontaneous activity drives early mammalian circuit development, but the initiation and patterning of activity vary across development and among modalities. Cholinergic “retinal waves” are initiated in starburst amacrine cells and propagate to retinal ganglion cells and higher-order visual areas, but the mechanism responsible for creating their unique and critical activity pattern is incompletely understood. We demonstrate that cholinergic wave patterns are dictated by recurrent connectivity within starburst amacrine cells, and retinal ganglion cells act as “readouts” of patterned activity. We also show that eye-specific segregation occurs normally without glutamatergic waves, but elimination of both cholinergic and glutamatergic waves completely disrupts visual circuit development. These results suggest that each retinal wave pattern during development is optimized for concurrently refining multiple visual circuits. PMID:27030771

  9. Cross-Modal Decoding of Neural Patterns Associated with Working Memory: Evidence for Attention-Based Accounts of Working Memory.

    PubMed

    Majerus, Steve; Cowan, Nelson; Péters, Frédéric; Van Calster, Laurens; Phillips, Christophe; Schrouff, Jessica

    2016-01-01

    Recent studies suggest common neural substrates involved in verbal and visual working memory (WM), interpreted as reflecting shared attention-based, short-term retention mechanisms. We used a machine-learning approach to determine more directly the extent to which common neural patterns characterize retention in verbal WM and visual WM. Verbal WM was assessed via a standard delayed probe recognition task for letter sequences of variable length. Visual WM was assessed via a visual array WM task involving the maintenance of variable amounts of visual information in the focus of attention. We trained a classifier to distinguish neural activation patterns associated with high- and low-visual WM load and tested the ability of this classifier to predict verbal WM load (high-low) from their associated neural activation patterns, and vice versa. We observed significant between-task prediction of load effects during WM maintenance, in posterior parietal and superior frontal regions of the dorsal attention network; in contrast, between-task prediction in sensory processing cortices was restricted to the encoding stage. Furthermore, between-task prediction of load effects was strongest in those participants presenting the highest capacity for the visual WM task. This study provides novel evidence for common, attention-based neural patterns supporting verbal and visual WM. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
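
    The between-task prediction described here amounts to training a load classifier on one task's activation patterns and scoring it on the other task's patterns. A minimal sketch with synthetic data standing in for fMRI patterns:

```python
# Minimal sketch (synthetic patterns): train a high/low-load classifier on the
# visual WM task and test it on the verbal WM task.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_trials, n_voxels = 80, 150
shared_axis = rng.normal(0, 1, n_voxels)          # hypothetical shared load signal

def simulate(load_labels, noise=2.0):
    return np.outer(load_labels, shared_axis) + rng.normal(0, noise, (load_labels.size, n_voxels))

visual_load = rng.integers(0, 2, n_trials)        # 0 = low load, 1 = high load
verbal_load = rng.integers(0, 2, n_trials)
visual_patterns = simulate(visual_load)
verbal_patterns = simulate(verbal_load)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(visual_patterns, visual_load)             # train on visual WM load
print(f"between-task prediction accuracy: {clf.score(verbal_patterns, verbal_load):.2f}")
```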

  10. Banknote recognition: investigating processing and cognition framework using competitive neural network.

    PubMed

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-02-01

    Humans are adept at recognizing patterns and discovering even abstract features which are sometimes embedded therein. Our ability to use the banknotes in circulation for business transactions lies in the effortlessness with which we can recognize the different banknote denominations after seeing them over a period of time. More significant is that we can usually recognize these banknote denominations irrespective of what parts of the banknotes are exposed to us visually. Furthermore, our recognition ability is largely unaffected even when these banknotes are partially occluded. By analogy, the robustness of intelligent systems performing banknote recognition should not collapse under some minimum level of partial occlusion. Artificial neural networks are intelligent systems which from inception have taken many important cues related to structure and learning rules from the human nervous/cognition processing system. Likewise, it has been shown that advances in artificial neural network simulations can help us understand the human nervous/cognition system even further. In this paper, we investigate three hypothetical cognition frameworks for vision-based recognition of banknote denominations using competitive neural networks. In order to make the task more challenging and stress-test the investigated hypotheses, we also consider the recognition of occluded banknotes. The implemented hypothetical systems are tasked to perform fast recognition of banknotes with up to 75% occlusion. The investigated hypothetical systems are trained on Nigeria's Naira banknotes, and several experiments are performed to demonstrate the findings presented within this work.
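
    A much-simplified sketch of the winner-take-all idea and an occlusion stress test is given below; the feature vectors and the single-layer network are illustrative assumptions, not the study's banknote images or competitive network architectures.

```python
# Much-simplified sketch (synthetic features, not the study's networks): a
# winner-take-all layer of class units, then a recognition test with 75% of the
# input features occluded and matching restricted to the visible features.
import numpy as np

rng = np.random.default_rng(5)
n_classes, dim = 4, 100                            # hypothetical denominations, feature length
true_prototypes = rng.random((n_classes, dim))

def sample(k, noise=0.05):
    """Noisy feature vector for denomination k."""
    return np.clip(true_prototypes[k] + rng.normal(0, noise, dim), 0, 1)

# Supervised winner-take-all training: the correct class unit moves toward each input.
weights = rng.random((n_classes, dim))
for _ in range(2000):
    k = rng.integers(n_classes)
    weights[k] += 0.1 * (sample(k) - weights[k])

def classify(x, visible):
    """Nearest unit, measured only over the visible (non-occluded) features."""
    return int(np.argmin(np.linalg.norm((weights - x)[:, visible], axis=1)))

trials, correct = 200, 0
for _ in range(trials):
    k = rng.integers(n_classes)
    visible = np.ones(dim, dtype=bool)
    visible[rng.permutation(dim)[: int(0.75 * dim)]] = False   # occlude 75% of features
    correct += classify(sample(k), visible) == k
print(f"recognition accuracy with 75% occlusion: {correct / trials:.2f}")
```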

  11. Spontaneous generalization of abstract multimodal patterns in young domestic chicks.

    PubMed

    Versace, Elisabetta; Spierings, Michelle J; Caffini, Matteo; Ten Cate, Carel; Vallortigara, Giorgio

    2017-05-01

    From the early stages of life, learning the regularities associated with specific objects is crucial for making sense of experiences. Through filial imprinting, young precocial birds quickly learn the features of their social partners by mere exposure. It is not clear though to what extent chicks can extract abstract patterns of the visual and acoustic stimuli present in the imprinting object, and how they combine them. To investigate this issue, we exposed chicks (Gallus gallus) to three days of visual and acoustic imprinting, using either patterns with two identical items or patterns with two different items, presented visually, acoustically or in both modalities. Next, chicks were given a choice between the familiar and the unfamiliar pattern, present in either the multimodal, visual or acoustic modality. The responses to the novel stimuli were affected by their imprinting experience, and the effect was stronger for chicks imprinted with multimodal patterns than for the other groups. Interestingly, males and females adopted a different strategy, with males more attracted by unfamiliar patterns and females more attracted by familiar patterns. Our data show that chicks can generalize abstract patterns by mere exposure through filial imprinting and that multimodal stimulation is more effective than unimodal stimulation for pattern learning.

  12. Automated numerical simulation of biological pattern formation based on visual feedback simulation framework

    PubMed Central

    Sun, Mingzhu; Xu, Hui; Zeng, Xingjuan; Zhao, Xin

    2017-01-01

    Biological pattern formation encompasses a variety of remarkable phenomena. Mathematical modeling using reaction-diffusion partial differential equation systems is employed to study the mechanism of pattern formation. However, model parameter selection is both difficult and time-consuming. In this paper, a visual feedback simulation framework is proposed to calculate the parameters of a mathematical model automatically, based on the basic principle of feedback control. In the simulation framework, the simulation results are visualized, and the image features are extracted as the system feedback. Then, the unknown model parameters are obtained by comparing the image features of the simulation image and the target biological pattern. Considering two typical applications, the visual feedback simulation framework is applied to perform pattern formation simulations for vascular mesenchymal cells and lung development. In the simulation framework, the spot, stripe and labyrinthine patterns of vascular mesenchymal cells, as well as the normal branching pattern and the branching pattern lacking side branching for lung development, are obtained in a finite number of iterations. The simulation results indicate that it is easy to achieve the simulation targets, especially when the simulation patterns are sensitive to the model parameters. Moreover, this simulation framework can be extended to other types of biological pattern formation. PMID:28225811

  13. Automated numerical simulation of biological pattern formation based on visual feedback simulation framework.

    PubMed

    Sun, Mingzhu; Xu, Hui; Zeng, Xingjuan; Zhao, Xin

    2017-01-01

    Biological pattern formation encompasses a variety of remarkable phenomena. Mathematical modeling using reaction-diffusion partial differential equation systems is employed to study the mechanism of pattern formation. However, model parameter selection is both difficult and time-consuming. In this paper, a visual feedback simulation framework is proposed to calculate the parameters of a mathematical model automatically, based on the basic principle of feedback control. In the simulation framework, the simulation results are visualized, and the image features are extracted as the system feedback. Then, the unknown model parameters are obtained by comparing the image features of the simulation image and the target biological pattern. Considering two typical applications, the visual feedback simulation framework is applied to perform pattern formation simulations for vascular mesenchymal cells and lung development. In the simulation framework, the spot, stripe and labyrinthine patterns of vascular mesenchymal cells, as well as the normal branching pattern and the branching pattern lacking side branching for lung development, are obtained in a finite number of iterations. The simulation results indicate that it is easy to achieve the simulation targets, especially when the simulation patterns are sensitive to the model parameters. Moreover, this simulation framework can be extended to other types of biological pattern formation.
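
    The feedback principle described in these two records can be sketched with a generic reaction-diffusion model: simulate, extract an image feature, compare it with a target, and adjust a parameter. The Gray-Scott model, the coverage feature, and the gain below are illustrative choices, not the authors' framework.

```python
# Generic sketch of the feedback loop (Gray-Scott model, invented feature and
# gain; not the authors' framework): simulate, measure an image feature, and
# nudge a parameter toward a target feature value.
import numpy as np

def laplacian(z):
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)

def simulate(feed, kill=0.062, du=0.16, dv=0.08, size=64, steps=3000, seed=0):
    rng = np.random.default_rng(seed)
    u = np.ones((size, size)) + 0.02 * rng.random((size, size))
    v = np.zeros((size, size))
    c = size // 2
    u[c - 5:c + 5, c - 5:c + 5] = 0.5              # seed a small patch
    v[c - 5:c + 5, c - 5:c + 5] = 0.25
    for _ in range(steps):
        uvv = u * v * v
        u += du * laplacian(u) - uvv + feed * (1 - u)
        v += dv * laplacian(v) + uvv - (feed + kill) * v
    return v

def coverage(v, threshold=0.2):
    """Image feature used as feedback: fraction of the domain covered by pattern."""
    return float((v > threshold).mean())

target, feed, gain = 0.25, 0.030, 0.02
for step in range(6):                              # visual-feedback iterations
    feature = coverage(simulate(feed))
    print(f"iteration {step}: feed={feed:.4f}, coverage={feature:.3f}")
    feed += gain * (target - feature)              # proportional correction
```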

  14. Circadian timed episodic-like memory - a bee knows what to do when, and also where.

    PubMed

    Pahl, Mario; Zhu, Hong; Pix, Waltraud; Tautz, Juergen; Zhang, Shaowu

    2007-10-01

    This study investigates how the colour, shape and location of patterns could be memorized within a time frame. Bees were trained to visit two Y-mazes, one of which presented yellow vertical (rewarded) versus horizontal (non-rewarded) gratings at one site in the morning, while another presented blue horizontal (rewarded) versus vertical (non-rewarded) gratings at another site in the afternoon. The bees could perform well in the learning tests and various transfer tests, in which (i) all contextual cues from the learning test were present; (ii) the colour cues of the visual patterns were removed, but the location cue, the orientation of the visual patterns and the temporal cue still existed; (iii) the location cue was removed, but other contextual cues, i.e. the colour and orientation of the visual patterns and the temporal cue still existed; (iv) the location cue and the orientation cue of the visual patterns were removed, but the colour cue and temporal cue still existed; (v) the location cue, and the colour cue of the visual patterns were removed, but the orientation cue and the temporal cue still existed. The results reveal that the honeybee can recall the memory of the correct visual patterns by using spatial and/or temporal information. The relative importance of different contextual cues is compared and discussed. The bees' ability to integrate elements of circadian time, place and visual stimuli is akin to episodic-like memory; we have therefore named this kind of memory circadian timed episodic-like memory.

  15. Is race erased? Decoding race from patterns of neural activity when skin color is not diagnostic of group boundaries.

    PubMed

    Ratner, Kyle G; Kaul, Christian; Van Bavel, Jay J

    2013-10-01

    Several theories suggest that people do not represent race when it does not signify group boundaries. However, race is often associated with visually salient differences in skin tone and facial features. In this study, we investigated whether race could be decoded from distributed patterns of neural activity in the fusiform gyri and early visual cortex when visual features that often covary with race were orthogonal to group membership. To this end, we used multivariate pattern analysis to examine an fMRI dataset that was collected while participants assigned to mixed-race groups categorized own-race and other-race faces as belonging to their newly assigned group. Whereas conventional univariate analyses provided no evidence of race-based responses in the fusiform gyri or early visual cortex, multivariate pattern analysis suggested that race was represented within these regions. Moreover, race was represented in the fusiform gyri to a greater extent than early visual cortex, suggesting that the fusiform gyri results do not merely reflect low-level perceptual information (e.g. color, contrast) from early visual cortex. These findings indicate that patterns of activation within specific regions of the visual cortex may represent race even when overall activation in these regions is not driven by racial information.
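
    The decoding approach referred to here, multivariate pattern analysis, amounts to training a classifier on trial-by-voxel activation patterns and testing with cross-validation whether a condition label can be predicted above chance. A minimal sketch assuming scikit-learn, with random placeholder data (so accuracy should sit near 0.50); real ROI patterns and condition labels would replace X and y.

      import numpy as np
      from sklearn.svm import SVC                       # assumed dependency: scikit-learn
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_trials, n_voxels = 80, 200
      X = rng.standard_normal((n_trials, n_voxels))     # one activation pattern per trial (ROI voxels)
      y = rng.integers(0, 2, size=n_trials)             # condition label per trial (e.g., own- vs other-race)

      scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
      print(f"mean cross-validated decoding accuracy: {scores.mean():.2f} (chance = 0.50)")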

  16. Patterned Video Sensors For Low Vision

    NASA Technical Reports Server (NTRS)

    Juday, Richard D.

    1996-01-01

    Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns are proposed to compensate partly for some visual defects. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.

  17. Neuropareidolia: diagnostic clues apropos of visual illusions.

    PubMed

    Maranhão-Filho, Péricles; Vincent, Maurice B

    2009-12-01

    Diagnosis in neuroimaging involves the recognition of specific patterns indicative of particular diseases. Pareidolia, the misperception of vague or obscure stimuli as something clear and distinct, is somewhat beneficial for the physician in the pursuit of diagnostic strategies. Animals may be pareidolically recognized in neuroimages according to the presence of specific diseases. By associating a given radiological aspect with an animal, doctors improve their diagnostic skills and reinforce mnemonic strategies in radiology practice. The most important pareidolical perceptions of animals in neuroimaging are the hummingbird sign in progressive supranuclear palsy, the panda sign in Wilson's disease, the panda sign in sarcoidosis, the butterfly sign in glioblastomas, the butterfly sign in progressive scoliosis and horizontal gaze palsy, the elephant sign in Alzheimer's disease and the eye-of-the-tiger sign in pantothenate kinase-associated neurodegenerative disease.

  18. Mandible behaviour interpretation during wakefulness, sleep and sleep-disordered breathing.

    PubMed

    Maury, Gisèle; Senny, Frédéric; Cambron, Laurent; Albert, Adelin; Seidel, Laurence; Poirrier, Robert

    2014-12-01

    The mandible movement (MM) signal provides information on mandible activity. It can be read visually to assess sleep-wake state and respiratory events. This study aimed to assess (1) the training of independent scorers to recognize the signal specificities; (2) intrascorer reproducibility and (3) interscorer variability. MM was collected in the mid-sagittal plane of the face of 40 patients. The typical MM was extracted and classified into seven distinct pattern classes: active wakefulness (AW), quiet wakefulness or quiet sleep (QW/S), sleep snoring (SS), sleep obstructive events (OAH), sleep mixed apnea (MA), respiratory related arousal (RERA) and sleep central events (CAH). Four scorers were trained; their diagnostic capacities were assessed on two reading sessions. The intra- and interscorer agreements were assessed using Cohen's κ. Intrascorer reproducibility for the two sessions ranged from 0.68 [95% confidence interval (CI): 0.59-0.77] to 0.88 (95% CI: 0.82-0.94), while the between-scorer agreement amounted to 0.68 (95% CI: 0.65-0.71) and 0.74 (95% CI: 0.72-0.77), respectively. The overall accuracy of the scorers was 75.2% (range: 72.4-80.7%). CAH MMs were the most difficult to discern (overall accuracy 65.6%). For the two sessions, the recognition rate of abnormal respiratory events (OAH, CAH, MA and RERA) was excellent: the interscorer mean agreement was 90.7% (Cohen's κ: 0.83; 95% CI: 0.79-0.88). The discrimination of OAH, CAH, MA characteristics was good, with an interscorer agreement of 80.8% (Cohen's κ: 0.65; 95% CI: 0.62-0.68). Visual analysis of isolated MMs can successfully diagnose sleep-wake state, normal and abnormal respiration and recognize the presence of respiratory effort. © 2014 European Sleep Research Society.
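
    The agreement statistics reported in this record can be reproduced for any pair of categorical label sequences with Cohen's kappa. A minimal sketch assuming scikit-learn; the two scorers' labels below are invented for illustration and simply reuse the study's seven MM pattern classes.

      from sklearn.metrics import cohen_kappa_score   # assumed dependency: scikit-learn

      classes = ["AW", "QW/S", "SS", "OAH", "MA", "RERA", "CAH"]

      # Hypothetical labels assigned by two scorers to the same ten MM epochs.
      scorer_1 = ["AW", "SS", "OAH", "CAH", "MA", "SS", "RERA", "OAH", "QW/S", "CAH"]
      scorer_2 = ["AW", "SS", "OAH", "MA",  "MA", "SS", "RERA", "OAH", "QW/S", "CAH"]

      kappa = cohen_kappa_score(scorer_1, scorer_2, labels=classes)
      accuracy = sum(a == b for a, b in zip(scorer_1, scorer_2)) / len(scorer_1)
      print(f"inter-scorer agreement: {accuracy:.0%}, Cohen's kappa: {kappa:.2f}")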

  19. Evaluation of New Visualization Approaches for Representing Uncertainty in the Recognized Maritime Picture

    DTIC Science & Technology

    2008-10-01

    ...and Risley, 2006) provided an invaluable insight into the scope and capability of emerging visualization techniques. While the latter provided some... Richmond BC (CAN); MacDonald Dettwiler and Associates Ltd, Dartmouth NS (CAN). Davenport, M. and Risley, C. (2006). Information Visualization: The... [table excerpt: uncertainty attributes (spatial, time late) mapped to visual encodings such as color of square, size of ellipse/circle, color of circle, color of wedge, angular width of wedge, and hourglass fill]

  20. Expression patterns of Eph genes in the "dual visual development" of the lamprey and their significance in the evolution of vision in vertebrates.

    PubMed

    Suzuki, Daichi G; Murakami, Yasunori; Yamazaki, Yuji; Wada, Hiroshi

    2015-01-01

    Image-forming vision is crucial to animals for recognizing objects in their environment. In vertebrates, this type of vision is achieved with paired camera eyes and topographic projection of the optic nerve. Topographic projection is established by an orthogonal gradient of axon guidance molecules, such as Ephs. To explore the evolution of image-forming vision in vertebrates, lampreys, which belong to the basal lineage of vertebrates, are key animals because they show unique "dual visual development." In the embryonic and pre-ammocoete larval stage (the "primary" phase), photoreceptive "ocellus-like" eyes develop, but there is no retinotectal optic nerve projection. In the late ammocoete larval stage (the "secondary" phase), the eyes grow and form into camera eyes, and retinotectal projection is newly formed. After metamorphosis, this retinotectal projection in adult lampreys is topographic, similar to that of gnathostomes. In this study, we explored the involvement of Ephs in lamprey "dual visual development" and establishment of image-forming vision. We found that gnathostome-like orthogonal gradient expression was present in the retina during the "secondary" phase; i.e., EphB showed a gradient of expression along the dorsoventral axis, while EphC was expressed along the anteroposterior axis. However, no orthogonal gradient expression was observed during the "primary" phase. These observations suggest that Ephs are likely recruited de novo for the guidance of topographical "second" optic nerve projection. Transformations during lamprey "dual visual development" may represent "recapitulation" from a protochordate-like ancestor to a gnathostome-like vertebrate ancestor. © 2015 Wiley Periodicals, Inc.

  1. A bio-inspired system for spatio-temporal recognition in static and video imagery

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas

    2007-04-01

    This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., office vs. kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe dataset. The ERE was tested on real world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.

  2. Correlative Light- and Electron Microscopy Using Quantum Dot Nanoparticles.

    PubMed

    Killingsworth, Murray C; Bobryshev, Yuri V

    2016-08-07

    A method is described whereby quantum dot (QD) nanoparticles can be used for correlative immunocytochemical studies of human pathology tissue using widefield fluorescence light microscopy and transmission electron microscopy (TEM). To demonstrate the protocol we have immunolabeled ultrathin epoxy sections of human somatostatinoma tumor using a primary antibody to somatostatin, followed by a biotinylated secondary antibody and visualization with streptavidin conjugated 585 nm cadmium-selenium (CdSe) quantum dots (QDs). The sections are mounted on a TEM specimen grid then placed on a glass slide for observation by widefield fluorescence light microscopy. Light microscopy reveals 585 nm QD labeling as bright orange fluorescence forming a granular pattern within the tumor cell cytoplasm. At low to mid-range magnification by light microscopy the labeling pattern can be easily recognized and the level of non-specific or background labeling assessed. This is a critical step for subsequent interpretation of the immunolabeling pattern by TEM and evaluation of the morphological context. The same section is then blotted dry and viewed by TEM. QD probes are seen to be attached to amorphous material contained in individual secretory granules. Images are acquired from the same region of interest (ROI) seen by light microscopy for correlative analysis. Corresponding images from each modality may then be blended to overlay fluorescence data on TEM ultrastructure of the corresponding region.

  3. Shape Recognition in Infancy: Visual Integration of Sequential Information.

    ERIC Educational Resources Information Center

    Rose, Susan A

    1988-01-01

    Investigated infants' integration of visual information across space and time. In four experiments, infants aged 12 months and 6 months viewed objects after watching light trace similar and dissimilar shapes. Infants looked longer at novel shapes, although six-month-olds did not recognize figures taking more than 10 seconds to trace. One-year-old…

  4. Use of Closed-Circuit Television with a Severely Visually Impaired Young Child.

    ERIC Educational Resources Information Center

    Miller-Wood, D. J.; And Others

    1990-01-01

    A closed-circuit television system was used with a five-year-old girl with severely limited vision to develop visual skills, especially skills related to concept formation. At the end of training, the girl could recognize lines, forms, shapes, letters, numbers, and words and could read short sentences. (Author/JDD)

  5. 3D Visual Proxemics: Recognizing Human Interactions in 3D from a Single Image (Open Access)

    DTIC Science & Technology

    2013-06-28

    ...accurate tracking and identity associations of people’s motions in videos. Proxemics is a subfield of anthropology that involves the study of people... cinematography where the shot composition and camera viewpoint is optimized for visual weight [1]. In cinema, a shot is either a long shot, a medium...

  6. The Relationship between Visual Metaphor Comprehension and Recognition of Similarities in Children with Learning Disabilities

    ERIC Educational Resources Information Center

    Mashal, Nira; Kasirer, Anat

    2012-01-01

    Previous studies have shown metaphoric comprehension deficits in children with learning disabilities. To understand metaphoric language, children must have enough semantic knowledge about the metaphorical terms and the ability to recognize similarity between two different domains. In the current study visual and verbal metaphor understanding was…

  7. Computing with Connections in Visual Recognition of Origami Objects.

    ERIC Educational Resources Information Center

    Sabbah, Daniel

    1985-01-01

    Summarizes an initial foray in tackling artificial intelligence problems using a connectionist approach. The task chosen is visual recognition of Origami objects, and the questions answered are how to construct a connectionist network to represent and recognize projected Origami line drawings and the advantages such an approach would have. (30…

  8. Screening Students with Visual Impairments for Intellectual Giftedness: A Pilot Study

    ERIC Educational Resources Information Center

    Besnoy, Kevin D; Manning, Sandra; Karnes, Frances A.

    2005-01-01

    Children with visual impairments who are gifted may be one of the most underserved student populations in our educational system (Johnsen & Corn, 1989). A paucity of current research exists concerning appropriate practices for recognizing these students in publicly funded school settings. In 2004, the American Foundation for the Blind reported…

  9. Developmental Changes in Visual Object Recognition between 18 and 24 Months of Age

    ERIC Educational Resources Information Center

    Pereira, Alfredo F.; Smith, Linda B.

    2009-01-01

    Two experiments examined developmental changes in children's visual recognition of common objects during the period of 18 to 24 months. Experiment 1 examined children's ability to recognize common category instances that presented three different kinds of information: (1) richly detailed and prototypical instances that presented both local and…

  10. Intrusive effects of implicitly processed information on explicit memory.

    PubMed

    Sentz, Dustin F; Kirkhart, Matthew W; LoPresto, Charles; Sobelman, Steven

    2002-02-01

    This study described the interference of implicitly processed information on the memory for explicitly processed information. Participants studied a list of words either auditorily or visually under instructions to remember the words (explicit study). They were then visually presented another word list under instructions that facilitated implicit but not explicit processing. Following a distractor task, memory for the explicit study list was tested with either a visual or auditory recognition task that included new words, words from the explicit study list, and words implicitly processed. Analysis indicated participants both failed to recognize words from the explicit study list and falsely recognized words that were implicitly processed as originating from the explicit study list. However, this effect only occurred when the testing modality was visual, thereby matching the modality for the implicitly processed information, regardless of the modality of the explicit study list. This "modality effect" for explicit memory was interpreted as poor source memory for implicitly processed information and in light of the procedures used, as well as illustrating an example of "remembering causing forgetting."

  11. A novel role for visual perspective cues in the neural computation of depth.

    PubMed

    Kim, HyungGoo R; Angelaki, Dora E; DeAngelis, Gregory C

    2015-01-01

    As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extraretinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We found that incorporating these 'dynamic perspective' cues allowed the visual system to generate selectivity for depth sign from motion parallax in macaque cortical area MT, a computation that was previously thought to require extraretinal signals regarding eye velocity. Our findings suggest neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations.

  12. Visual experience sculpts whole-cortex spontaneous infraslow activity patterns through an Arc-dependent mechanism

    PubMed Central

    Kraft, Andrew W.; Mitra, Anish; Bauer, Adam Q.; Raichle, Marcus E.; Culver, Joseph P.; Lee, Jin-Moo

    2017-01-01

    Decades of work in experimental animals has established the importance of visual experience during critical periods for the development of normal sensory-evoked responses in the visual cortex. However, much less is known concerning the impact of early visual experience on the systems-level organization of spontaneous activity. Human resting-state fMRI has revealed that infraslow fluctuations in spontaneous activity are organized into stereotyped spatiotemporal patterns across the entire brain. Furthermore, the organization of spontaneous infraslow activity (ISA) is plastic in that it can be modulated by learning and experience, suggesting heightened sensitivity to change during critical periods. Here we used wide-field optical intrinsic signal imaging in mice to examine whole-cortex spontaneous ISA patterns. Using monocular or binocular visual deprivation, we examined the effects of critical period visual experience on the development of ISA correlation and latency patterns within and across cortical resting-state networks. Visual modification with monocular lid suturing reduced correlation between left and right cortices (homotopic correlation) within the visual network, but had little effect on internetwork correlation. In contrast, visual deprivation with binocular lid suturing resulted in increased visual homotopic correlation and increased anti-correlation between the visual network and several extravisual networks, suggesting cross-modal plasticity. These network-level changes were markedly attenuated in mice with genetic deletion of Arc, a gene known to be critical for activity-dependent synaptic plasticity. Taken together, our results suggest that critical period visual experience induces global changes in spontaneous ISA relationships, both within the visual network and across networks, through an Arc-dependent mechanism. PMID:29087327
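
    The key measurement here, homotopic correlation, is the correlation between band-limited spontaneous time series drawn from mirror-symmetric left and right regions. A minimal sketch on synthetic signals, assuming SciPy; the 10 Hz frame rate, the 0.009-0.08 Hz infraslow band, and the toy signals are assumptions, not the study's imaging pipeline.

      import numpy as np
      from scipy.signal import butter, filtfilt   # assumed dependency: SciPy

      fs = 10.0                                   # imaging frame rate in Hz (assumed)
      t = np.arange(0, 300, 1 / fs)               # five minutes of "resting-state" data
      rng = np.random.default_rng(0)

      shared = np.sin(2 * np.pi * 0.02 * t)       # shared infraslow fluctuation
      left_v1 = shared + 0.5 * rng.standard_normal(t.size)
      right_v1 = shared + 0.5 * rng.standard_normal(t.size)

      b, a = butter(3, [0.009, 0.08], btype="bandpass", fs=fs)   # infraslow band-pass
      bandpass = lambda x: filtfilt(b, a, x)

      homotopic_r = np.corrcoef(bandpass(left_v1), bandpass(right_v1))[0, 1]
      print(f"homotopic correlation (left vs right visual ROI): {homotopic_r:.2f}")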

  13. Development of visual cortical function in infant macaques: A BOLD fMRI study

    PubMed Central

    Meeson, Alan; Munk, Matthias H. J.; Kourtzi, Zoe; Movshon, J. Anthony; Logothetis, Nikos K.; Kiorpes, Lynne

    2017-01-01

    Functional brain development is not well understood. In the visual system, neurophysiological studies in nonhuman primates show quite mature neuronal properties near birth although visual function is itself quite immature and continues to develop over many months or years after birth. Our goal was to assess the relative development of two main visual processing streams, dorsal and ventral, using BOLD fMRI in an attempt to understand the global mechanisms that support the maturation of visual behavior. Seven infant macaque monkeys (Macaca mulatta) were repeatedly scanned, while anesthetized, over an age range of 102 to 1431 days. Large rotating checkerboard stimuli induced BOLD activation in visual cortices at early ages. Additionally we used static and dynamic Glass pattern stimuli to probe BOLD responses in primary visual cortex and two extrastriate areas: V4 and MT-V5. The resulting activations were analyzed with standard GLM and multivoxel pattern analysis (MVPA) approaches. We analyzed three contrasts: Glass pattern present/absent, static/dynamic Glass pattern presentation, and structured/random Glass pattern form. For both GLM and MVPA approaches, robust coherent BOLD activation appeared relatively late in comparison to the maturation of known neuronal properties and the development of behavioral sensitivity to Glass patterns. Robust differential activity to Glass pattern present/absent and dynamic/static stimulus presentation appeared first in V1, followed by V4 and MT-V5 at older ages; there was no reliable distinction between the two extrastriate areas. A similar pattern of results was obtained with the two analysis methods, although MVPA analysis showed reliable differential responses emerging at later ages than GLM. Although BOLD responses to large visual stimuli are detectable, our results with more refined stimuli indicate that global BOLD activity changes as behavioral performance matures. This reflects an hierarchical development of the visual pathways. Since fMRI BOLD reflects neural activity on a population level, our results indicate that, although individual neurons might be adult-like, a longer maturation process takes place on a population level. PMID:29145469

  14. Designing informative warning signals: Effects of indicator type, modality, and task demand on recognition speed and accuracy

    PubMed Central

    Stevens, Catherine J.; Brennan, David; Petocz, Agnes; Howell, Clare

    2009-01-01

    An experiment investigated the assumption that natural indicators which exploit existing learned associations between a signal and an event make more effective warnings than previously unlearned symbolic indicators. Signal modality (visual, auditory) and task demand (low, high) were also manipulated. Warning effectiveness was indexed by accuracy and reaction time (RT) recorded during training and dual task test phases. Thirty-six participants were trained to recognize 4 natural and 4 symbolic indicators, either visual or auditory, paired with critical incidents from an aviation context. As hypothesized, accuracy was greater and RT was faster in response to natural indicators during the training phase. This pattern of responding was upheld in test phase conditions with respect to accuracy but observed in RT only in test phase conditions involving high demand and the auditory modality. Using the experiment as a specific example, we argue for the importance of considering the cognitive contribution of the user (viz., prior learned associations) in the warning design process. Drawing on semiotics and cognitive psychology, we highlight the indexical nature of so-called auditory icons or natural indicators and argue that the cogniser is an indispensable element in the tripartite nature of signification. PMID:20523852

  15. Deep Filter Banks for Texture Recognition, Description, and Segmentation.

    PubMed

    Cimpoi, Mircea; Maji, Subhransu; Kokkinos, Iasonas; Vedaldi, Andrea

    Visual textures have played a key role in image understanding because they convey important semantics of images, and because texture representations that pool local image descriptors in an orderless manner have had a tremendous impact in diverse applications. In this paper we make several contributions to texture understanding. First, instead of focusing on texture instance and material category recognition, we propose a human-interpretable vocabulary of texture attributes to describe common texture patterns, complemented by a new describable texture dataset for benchmarking. Second, we look at the problem of recognizing materials and texture attributes in realistic imaging conditions, including when textures appear in clutter, developing corresponding benchmarks on top of the recently proposed OpenSurfaces dataset. Third, we revisit classic texture representations, including bag-of-visual-words and Fisher vectors, in the context of deep learning and show that these have excellent efficiency and generalization properties if the convolutional layers of a deep model are used as filter banks. We obtain in this manner state-of-the-art performance in numerous datasets well beyond textures, an efficient method to apply deep features to image regions, as well as benefit in transferring features from one domain to another.

  16. Posterior cerebral atrophy in the absence of medial temporal lobe atrophy in pathologically-confirmed Alzheimer's disease

    PubMed Central

    Lehmann, Manja; Koedam, Esther L.G.E.; Barnes, Josephine; Bartlett, Jonathan W.; Ryan, Natalie S.; Pijnenburg, Yolande A.L.; Barkhof, Frederik; Wattjes, Mike P.; Scheltens, Philip; Fox, Nick C.

    2012-01-01

    Medial temporal lobe atrophy (MTA) is a recognized marker of Alzheimer's disease (AD); however, it can be prominent in frontotemporal lobar degeneration (FTLD). There is an increasing awareness that posterior atrophy (PA) is important in AD and may aid the differentiation of AD from FTLD. Visual rating scales are a convenient way of assessing atrophy in a clinical setting. In this study, 2 visual rating scales measuring MTA and PA were used to compare atrophy patterns in 62 pathologically-confirmed AD and 40 FTLD patients. Anatomical correspondence of MTA and PA was assessed using manually-delineated regions of the hippocampus and posterior cingulate gyrus, respectively. Both MTA and PA scales showed good inter- and intrarater reliabilities (kappa > 0.8). MTA scores showed a good correspondence with manual hippocampal volumes. Thirty percent of the AD patients showed PA in the absence of MTA. Adding the PA to the MTA scale improved discrimination of AD from FTLD, and early-onset AD from normal aging. These results underline the importance of considering PA in AD diagnosis, particularly in younger patients where medial temporal atrophy may be less conspicuous. PMID:21596458

  17. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  18. Passive method of eliminating accommodation/convergence disparity in stereoscopic head-mounted displays

    NASA Astrophysics Data System (ADS)

    Eichenlaub, Jesse B.

    2005-03-01

    The difference in accommodation and convergence distance experienced when viewing stereoscopic displays has long been recognized as a source of visual discomfort. It is especially problematic in head-mounted virtual reality and enhanced reality displays, where images must often be displayed across a large depth range or superimposed on real objects. DTI has demonstrated a novel method of creating stereoscopic images in which the focus and fixation distances are closely matched for all parts of the scene from close distances to infinity. The method is passive in the sense that it does not rely on eye tracking, moving parts, variable focus optics, vibrating optics, or feedback loops. The method uses a rapidly changing illumination pattern in combination with a high speed microdisplay to create cones of light that converge at different distances to form the voxels of a high resolution space filling image. A bench model display was built and a series of visual tests were performed in order to demonstrate the concept and investigate both its capabilities and limitations. Results proved conclusively that real optical images were being formed and that observers had to change their focus to read text or see objects at different distances.

  19. Temporal stability of visual search-driven biometrics

    NASA Astrophysics Data System (ADS)

    Yoon, Hong-Jun; Carmichael, Tandy R.; Tourassi, Georgia

    2015-03-01

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, temporally stable personalized fingerprint of perceptual organization.
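
    The identification scheme outlined in this record, one HMM per participant scored against a new gaze sequence, can be sketched as below. This is a minimal illustration, not the study's cross-validated pipeline: it assumes the hmmlearn package, uses invented (x, y) gaze sequences, and enrolls only two participants.

      import numpy as np
      from hmmlearn.hmm import GaussianHMM   # assumed dependency: hmmlearn

      rng = np.random.default_rng(1)

      def fake_scanpath(center, n=150):
          """Placeholder gaze sequence: noisy (x, y) samples around a participant-specific bias."""
          return center + 20 * rng.standard_normal((n, 2))

      participants = {"P01": np.array([300.0, 200.0]), "P02": np.array([500.0, 350.0])}

      # Enrollment: fit one HMM "fingerprint" per participant on their training scanpaths.
      models = {}
      for pid, center in participants.items():
          train = np.vstack([fake_scanpath(center) for _ in range(3)])
          models[pid] = GaussianHMM(n_components=3, covariance_type="diag",
                                    n_iter=50, random_state=0).fit(train)

      # Identification: choose the model that assigns the probe sequence the highest log-likelihood.
      probe = fake_scanpath(participants["P02"])
      predicted = max(models, key=lambda pid: models[pid].score(probe))
      print("probe identified as", predicted)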

  20. Compression and reflection of visually evoked cortical waves

    PubMed Central

    Xu, Weifeng; Huang, Xiaoying; Takagaki, Kentaroh; Wu, Jian-young

    2007-01-01

    Summary Neuronal interactions between primary and secondary visual cortical areas are important for visual processing, but the spatiotemporal patterns of the interaction are not well understood. We used voltage-sensitive dye imaging to visualize neuronal activity in rat visual cortex and found novel visually evoked waves propagating from V1 to other visual areas. A primary wave originated in the monocular area of V1 and was “compressed” when propagating to V2. A reflected wave initiated after compression and propagated backward into V1. The compression occurred at the V1/V2 border, and local GABAA inhibition is important for the compression. The compression/reflection pattern provides a two-phase modulation: V1 is first depolarized by the primary wave and then V1 and V2 are simultaneously depolarized by the reflected and primary waves, respectively. The compression/reflection pattern only occurred for evoked but not for spontaneous waves, suggesting that it is organized by an internal mechanism associated with visual processing. PMID:17610821

  1. Automated objective characterization of visual field defects in 3D

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor)

    2006-01-01

    A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.
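
    Two of the automated processes described, the defect area and the rapidity of change at the defect boundary, can be approximated from a gridded visual field representation. The sketch below works under stated assumptions (a 2-degree test grid, a 20 dB defect threshold, a synthetic field) and is not the patented method itself.

      import numpy as np

      deg_per_cell = 2.0                                   # test-grid spacing in degrees (assumed)
      x, y = np.meshgrid(np.arange(-20, 22, deg_per_cell),
                         np.arange(-20, 22, deg_per_cell))
      field = np.full(x.shape, 30.0)                       # healthy sensitivity ~30 dB
      field[(x - 8) ** 2 + (y - 4) ** 2 < 64] = 5.0        # synthetic focal defect

      defect = field < 20.0                                # defect mask (below threshold)
      area_deg2 = defect.sum() * deg_per_cell ** 2
      print(f"defect area: {area_deg2:.0f} deg^2")

      # Rapidity of change at the boundary: gradient magnitude on the defect's edge cells.
      gy, gx = np.gradient(field, deg_per_cell)
      grad_mag = np.hypot(gx, gy)                          # dB per degree
      eroded = (defect & np.roll(defect, 1, 0) & np.roll(defect, -1, 0)
                       & np.roll(defect, 1, 1) & np.roll(defect, -1, 1))
      boundary = defect & ~eroded
      print(f"boundary slope: mean {grad_mag[boundary].mean():.1f}, "
            f"max {grad_mag[boundary].max():.1f} dB/deg")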

  2. Identification of Geostructures of the Continental Crust Particularly as They Relate to Mineral Resource Evaluation. [Alaska

    NASA Technical Reports Server (NTRS)

    Lathram, E. H. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. A pattern of very old geostructures was recognized, reflecting structures in the crust. This pattern is not peculiar to Alaska, but can be recognized throughout the northern cordillera. A new metallogenic hypothesis for Alaska was developed, based on the relationship of space image linears to known mineral deposits. Using image linear analysis, regional geologic features were also recognized; these features may be used to guide in the location of undiscovered oil and/or gas accumulations in northern Alaska. The effectiveness of ERTS data in enhancing medium and small scale mapping was demonstrated. ERTS data were also used to recognize and monitor the state of large scale vehicular scars on Arctic tundra.

  3. The Role of Visual Cues in Microgravity Spatial Orientation

    NASA Technical Reports Server (NTRS)

    Oman, Charles M.; Howard, Ian P.; Smith, Theodore; Beall, Andrew C.; Natapoff, Alan; Zacher, James E.; Jenkin, Heather L.

    2003-01-01

    In weightlessness, astronauts must rely on vision to remain spatially oriented. Although gravitational down cues are missing, most astronauts maintain a subjective vertical -a subjective sense of which way is up. This is evidenced by anecdotal reports of crewmembers feeling upside down (inversion illusions) or feeling that a floor has become a ceiling and vice versa (visual reorientation illusions). Instability in the subjective vertical direction can trigger disorientation and space motion sickness. On Neurolab, a virtual environment display system was used to conduct five interrelated experiments, which quantified: (a) how the direction of each person's subjective vertical depends on the orientation of the surrounding visual environment, (b) whether rolling the virtual visual environment produces stronger illusions of circular self-motion (circular vection) and more visual reorientation illusions than on Earth, (c) whether a virtual scene moving past the subject produces a stronger linear self-motion illusion (linear vection), and (d) whether deliberate manipulation of the subjective vertical changes a crewmember's interpretation of shading or the ability to recognize objects. None of the crew's subjective vertical indications became more independent of environmental cues in weightlessness. Three who were either strongly dependent on or independent of stationary visual cues in preflight tests remained so inflight. One other became more visually dependent inflight, but recovered postflight. Susceptibility to illusions of circular self-motion increased in flight. The time to the onset of linear self-motion illusions decreased and the illusion magnitude significantly increased for most subjects while free floating in weightlessness. These decreased toward one-G levels when the subject 'stood up' in weightlessness by wearing constant force springs. For several subjects, changing the relative direction of the subjective vertical in weightlessness-either by body rotation or by simply cognitively initiating a visual reorientation-altered the illusion of convexity produced when viewing a flat, shaded disc. It changed at least one person's ability to recognize previously presented two-dimensional shapes. Overall, results show that most astronauts become more dependent on dynamic visual motion cues and some become responsive to stationary orientation cues. The direction of the subjective vertical is labile in the absence of gravity. This can interfere with the ability to properly interpret shading, or to recognize complex objects in different orientations.

  4. Use of Sine Shaped High-Frequency Rhythmic Visual Stimuli Patterns for SSVEP Response Analysis and Fatigue Rate Evaluation in Normal Subjects

    PubMed Central

    Keihani, Ahmadreza; Shirzhiyan, Zahra; Farahi, Morteza; Shamsi, Elham; Mahnam, Amin; Makkiabadi, Bahador; Haidari, Mohsen R.; Jafari, Amir H.

    2018-01-01

    Background: Recent EEG-SSVEP signal based BCI studies have used high frequency square pulse visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared to CRT and LCD monitors, LED screen displays high-frequency wave with better refresh rate. In this study, we present high frequency sine wave simple and rhythmic patterns with low THD rate by LED to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We used patterns of 3-sequence high-frequency sine waves (25, 30, and 35 Hz) to design our visual stimuli. Nine stimuli patterns, 3 simple (repetition of each of above 3 frequencies e.g., P25-25-25) and 6 rhythmic (all of the frequencies in 6 different sequences e.g., P25-30-35) were chosen. A hardware setup with low THD rate (<0.1%) was designed to present these patterns on LED. Twenty two normal subjects (aged 23–30 (25 ± 2.1) yrs) were enrolled. Visual analog scale (VAS) was used for subjective fatigue evaluation after presentation of each stimulus pattern. PSD, CCA, and LASSO methods were employed to analyze SSVEP responses. The data including SSVEP features and fatigue rate for different visual stimuli patterns were statistically evaluated. Results: All 9 visual stimuli patterns elicited SSVEP responses. Overall, obtained accuracy rates were 88.35% for PSD and > 90% for CCA and LASSO (for TWs > 1 s). High frequency rhythmic patterns group with low THD rate showed higher accuracy rate (99.24%) than simple patterns group (98.48%). Repeated measure ANOVA showed significant difference between rhythmic pattern features (P < 0.0005). Overall, there was no significant difference between the VAS of rhythmic [3.85 ± 2.13] compared to the simple patterns group [3.96 ± 2.21], (P = 0.63). Rhythmic group had lower within group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as least individual pattern VAS (P25-30-35). Discussion and Conclusion: Overall, rhythmic and simple pattern groups had higher and similar accuracy rates. Rhythmic stimuli patterns showed insignificantly lower fatigue rate than simple patterns. We conclude that both rhythmic and simple visual high frequency sine wave stimuli require further research for human subject SSVEP-BCI studies. PMID:29892219

  5. Use of Sine Shaped High-Frequency Rhythmic Visual Stimuli Patterns for SSVEP Response Analysis and Fatigue Rate Evaluation in Normal Subjects.

    PubMed

    Keihani, Ahmadreza; Shirzhiyan, Zahra; Farahi, Morteza; Shamsi, Elham; Mahnam, Amin; Makkiabadi, Bahador; Haidari, Mohsen R; Jafari, Amir H

    2018-01-01

    Background: Recent EEG-SSVEP signal based BCI studies have used high frequency square pulse visual stimuli to reduce subjective fatigue. However, the effect of total harmonic distortion (THD) has not been considered. Compared to CRT and LCD monitors, LED screen displays high-frequency wave with better refresh rate. In this study, we present high frequency sine wave simple and rhythmic patterns with low THD rate by LED to analyze SSVEP responses and evaluate subjective fatigue in normal subjects. Materials and Methods: We used patterns of 3-sequence high-frequency sine waves (25, 30, and 35 Hz) to design our visual stimuli. Nine stimuli patterns, 3 simple (repetition of each of above 3 frequencies e.g., P25-25-25) and 6 rhythmic (all of the frequencies in 6 different sequences e.g., P25-30-35) were chosen. A hardware setup with low THD rate (<0.1%) was designed to present these patterns on LED. Twenty two normal subjects (aged 23-30 (25 ± 2.1) yrs) were enrolled. Visual analog scale (VAS) was used for subjective fatigue evaluation after presentation of each stimulus pattern. PSD, CCA, and LASSO methods were employed to analyze SSVEP responses. The data including SSVEP features and fatigue rate for different visual stimuli patterns were statistically evaluated. Results: All 9 visual stimuli patterns elicited SSVEP responses. Overall, obtained accuracy rates were 88.35% for PSD and > 90% for CCA and LASSO (for TWs > 1 s). High frequency rhythmic patterns group with low THD rate showed higher accuracy rate (99.24%) than simple patterns group (98.48%). Repeated measure ANOVA showed significant difference between rhythmic pattern features ( P < 0.0005). Overall, there was no significant difference between the VAS of rhythmic [3.85 ± 2.13] compared to the simple patterns group [3.96 ± 2.21], ( P = 0.63). Rhythmic group had lower within group VAS variation (min = P25-30-35 [2.90 ± 2.45], max = P35-25-30 [4.81 ± 2.65]) as well as least individual pattern VAS (P25-30-35). Discussion and Conclusion: Overall, rhythmic and simple pattern groups had higher and similar accuracy rates. Rhythmic stimuli patterns showed insignificantly lower fatigue rate than simple patterns. We conclude that both rhythmic and simple visual high frequency sine wave stimuli require further research for human subject SSVEP-BCI studies.
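
    The CCA step used for SSVEP recognition in these two records can be sketched as follows: correlate a multichannel EEG epoch with sine/cosine reference signals at each candidate stimulation frequency and select the frequency with the largest canonical correlation. The synthetic 4-channel epoch, 250 Hz sampling rate, and two harmonics below are assumptions for illustration; this is not the authors' full PSD/CCA/LASSO pipeline.

      import numpy as np
      from sklearn.cross_decomposition import CCA   # assumed dependency: scikit-learn

      fs, duration = 250, 2.0                       # sampling rate (Hz) and epoch length (s), assumed
      t = np.arange(0, duration, 1 / fs)
      rng = np.random.default_rng(0)

      # Synthetic 4-channel EEG epoch containing a 30 Hz SSVEP plus noise.
      eeg = (0.5 * np.sin(2 * np.pi * 30 * t)[:, None] @ np.ones((1, 4))
             + rng.standard_normal((t.size, 4)))

      def reference(freq, n_harmonics=2):
          """Sine/cosine reference set for one candidate stimulation frequency."""
          cols = []
          for h in range(1, n_harmonics + 1):
              cols += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
          return np.column_stack(cols)

      def canonical_corr(X, Y):
          """Largest canonical correlation between the EEG epoch and a reference set."""
          u, v = CCA(n_components=1).fit(X, Y).transform(X, Y)
          return float(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

      candidates = [25, 30, 35]                     # the study's stimulation frequencies (Hz)
      scores = {f: canonical_corr(eeg, reference(f)) for f in candidates}
      detected = max(scores, key=scores.get)
      print({f: round(r, 3) for f, r in scores.items()}, "-> detected", detected, "Hz")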

  6. Basic visual function and cortical thickness patterns in posterior cortical atrophy.

    PubMed

    Lehmann, Manja; Barnes, Josephine; Ridgway, Gerard R; Wattam-Bell, John; Warrington, Elizabeth K; Fox, Nick C; Crutch, Sebastian J

    2011-09-01

    Posterior cortical atrophy (PCA) is characterized by a progressive decline in higher-visual object and space processing, but the extent to which these deficits are underpinned by basic visual impairments is unknown. This study aimed to assess basic and higher-order visual deficits in 21 PCA patients. Basic visual skills including form detection and discrimination, color discrimination, motion coherence, and point localization were measured, and associations and dissociations between specific basic visual functions and measures of higher-order object and space perception were identified. All participants showed impairment in at least one aspect of basic visual processing. However, a number of dissociations between basic visual skills indicated a heterogeneous pattern of visual impairment among the PCA patients. Furthermore, basic visual impairments were associated with particular higher-order object and space perception deficits, but not with nonvisual parietal tasks, suggesting the specific involvement of visual networks in PCA. Cortical thickness analysis revealed trends toward lower cortical thickness in occipitotemporal (ventral) and occipitoparietal (dorsal) regions in patients with visuoperceptual and visuospatial deficits, respectively. However, there was also a lot of overlap in their patterns of cortical thinning. These findings suggest that different presentations of PCA represent points in a continuum of phenotypical variation.

  7. Visualizing Patterns of Drug Prescriptions with EventFlow: A Pilot Study of Asthma Medications in the Military Health System

    DTIC Science & Technology

    2013-06-01

    ...asthmatics within the Military Health System (MHS). Visualizing the patterns of asthma medication use surrounding a LABA prescription is a quick way to... random sample of 100 asthma patients under age 65 with a new LABA prescription from January 1, 2006-March 1, 2010 in MHS healthcare claims. Analysis was...

  8. Biases in rhythmic sensorimotor coordination: effects of modality and intentionality.

    PubMed

    Debats, Nienke B; Ridderikhoff, Arne; de Boer, Betteco J; Peper, C Lieke E

    2013-08-01

    Sensorimotor biases were examined for intentional (tracking task) and unintentional (distractor task) rhythmic coordination. The tracking task involved unimanual tracking of either an oscillating visual signal or the passive movements of the contralateral hand (proprioceptive signal). In both conditions the required coordination patterns (isodirectional and mirror-symmetric) were defined relative to the body midline and the hands were not visible. For proprioceptive tracking the two patterns did not differ in stability, whereas for visual tracking the isodirectional pattern was performed more stably than the mirror-symmetric pattern. However, when visual feedback about the unimanual hand movements was provided during visual tracking, the isodirectional pattern ceased to be dominant. Together these results indicated that the stability of the coordination patterns did not depend on the modality of the target signal per se, but on the combination of sensory signals that needed to be processed (unimodal vs. cross-modal). The distractor task entailed rhythmic unimanual movements during which a rhythmic visual or proprioceptive distractor signal had to be ignored. The observed biases were similar as for intentional coordination, suggesting that intentionality did not affect the underlying sensorimotor processes qualitatively. Intentional tracking was characterized by active sensory pursuit, through muscle activity in the passively moved arm (proprioceptive tracking task) and rhythmic eye movements (visual tracking task). Presumably this pursuit afforded predictive information serving the coordination process. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Sensory factors limiting horizontal and vertical visual span for letter recognition

    PubMed Central

    Yu, Deyue; Legge, Gordon E.; Wagoner, Gunther; Chung, Susana T. L.

    2014-01-01

    Reading speed for English text is slower for text oriented vertically than horizontally. Yu, Park, Gerold, and Legge (2010) showed that slower reading of vertical text is associated with a smaller visual span (the number of letters recognized with high accuracy without moving the eyes). Three possible sensory determinants of the size of the visual span are: resolution (decreasing acuity at letter positions farther from the midline), mislocations (uncertainty about the relative position of letters in strings), and crowding (interference from flanking letters in recognizing the target letter). In the present study, we asked which of these factors is most important in determining the size of the visual span, and likely in turn in determining the horizontal/vertical difference in reading when letter size is above the critical print size for reading. We used a decomposition analysis to represent constraints due to resolution, mislocations, and crowding as losses in information transmitted (in bits) about letter recognition. Across vertical and horizontal conditions, crowding accounted for 75% of the loss in information, mislocations accounted for 19% of the loss, and declining acuity away from fixation accounted for only 6%. We conclude that crowding is the major factor limiting the size of the visual span, and that the horizontal/vertical difference in the size of the visual span is associated with stronger crowding along the vertical midline. PMID:25187253
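
    The decomposition analysis mentioned here expresses each constraint as a loss of information transmitted (in bits) about letter identity. The underlying quantity can be computed from a stimulus-response confusion matrix as in the sketch below; the 3-letter confusion counts are invented, and real visual-span data would contribute one row per presented letter.

      import numpy as np

      def information_transmitted(confusion):
          """Mutual information I(stimulus; response) in bits, from a matrix of response counts."""
          p = confusion / confusion.sum()
          ps = p.sum(axis=1, keepdims=True)          # stimulus (row) marginal
          pr = p.sum(axis=0, keepdims=True)          # response (column) marginal
          with np.errstate(divide="ignore", invalid="ignore"):
              terms = np.where(p > 0, p * np.log2(p / (ps * pr)), 0.0)
          return float(terms.sum())

      # Hypothetical 3-letter confusion matrix: rows = presented letter, columns = reported letter.
      confusion = np.array([[45,  3,  2],
                            [ 4, 40,  6],
                            [ 1,  5, 44]])
      print(f"information transmitted: {information_transmitted(confusion):.2f} bits "
            f"(upper bound log2(3) = {np.log2(3):.2f} bits for 3 equiprobable letters)")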

  11. Identification and intensity of disgust: Distinguishing visual, linguistic and facial expressions processing in Parkinson disease.

    PubMed

    Sedda, Anna; Petito, Sara; Guarino, Maria; Stracciari, Andrea

    2017-07-14

    Most studies to date show an impairment in recognizing facial displays of disgust in Parkinson disease. A general impairment in disgust processing in patients with Parkinson disease might adversely affect their social interactions, given the relevance of this emotion for human relations. However, despite the importance of faces, disgust is also expressed through other formats of visual stimuli such as sentences and visual images. The aim of our study was to explore disgust processing in a sample of patients affected by Parkinson disease, by means of various tests tackling not only facial recognition but also other formats of visual stimuli through which disgust can be recognized. Our results confirm that patients are impaired in recognizing facial displays of disgust. Further analyses show that patients are also impaired and slower for other facial expressions, with the only exception of happiness. Notably, however, patients with Parkinson disease processed visual images and sentences as controls did. Our findings show a dissociation within different formats of visual stimuli of disgust, suggesting that Parkinson disease is not characterized by a general compromising of disgust processing, as often suggested. The involvement of the basal ganglia-frontal cortex system might spare some cognitive components of emotional processing, related to memory and culture, at least for disgust. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Fingerprint pattern restoration by digital image processing techniques.

    PubMed

    Wen, Che-Yen; Yu, Chiu-Chung

    2003-09-01

    Fingerprint evidence plays an important role in solving criminal problems. However, defective (lacking information needed for completeness) or contaminated (undesirable information included) fingerprint patterns make identifying and recognizing processes difficult. Unfortunately, this is the usual case. In the recognizing process (enhancement of patterns, or elimination of "false alarms" so that a fingerprint pattern can be searched in the Automated Fingerprint Identification System (AFIS)), chemical and physical techniques have been proposed to improve pattern legibility. In the identifying process, a fingerprint examiner can enhance contaminated (but not defective) fingerprint patterns under guidelines provided by the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST), the Scientific Working Group on Imaging Technology (SWGIT), and an AFIS working group within the National Institute of Justice. Recently, image processing techniques have been successfully applied in forensic science. For example, we have applied image enhancement methods to improve the legibility of digital images such as fingerprints and vehicle plate numbers. In this paper, we propose a novel digital image restoration technique based on the AM (amplitude modulation)-FM (frequency modulation) reaction-diffusion method to restore defective or contaminated fingerprint patterns. This method shows its potential application to fingerprint pattern enhancement in the recognizing process (but not for the identifying process). Synthetic and real images are used to show the capability of the proposed method. The results of enhancing fingerprint patterns by the manual process and our method are evaluated and compared.

  13. StreamMap: Smooth Dynamic Visualization of High-Density Streaming Points.

    PubMed

    Li, Chenhui; Baciu, George; Han, Yu

    2018-03-01

    Interactive visualization of streaming points for real-time scatterplots and linear blending of correlation patterns is increasingly becoming the dominant mode of visual analytics for both big data and streaming data from active sensors and broadcasting media. To better visualize and interact with inter-stream patterns, it is generally necessary to smooth out gaps or distortions in the streaming data. Previous approaches either animate the points directly or present a sampled static heat-map. We propose a new approach, called StreamMap, to smoothly blend high-density streaming points and create a visual flow that emphasizes the density pattern distributions. In essence, we present three new contributions for the visualization of high-density streaming points. The first contribution is a density-based method called super kernel density estimation that aggregates streaming points using an adaptive kernel to solve the overlapping problem. The second contribution is a robust density morphing algorithm that generates several smooth intermediate frames for a given pair of frames. The third contribution is a trend representation design that can help convey the flow directions of the streaming points. The experimental results on three datasets demonstrate the effectiveness of StreamMap when dynamic visualization and visual analysis of trend patterns on streaming points are required.
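
    A simplified sketch of the two ideas named here, density aggregation and smooth inter-frame transitions (not the paper's super kernel density estimation or density-morphing algorithms): each batch of streaming points is turned into a kernel density grid, and intermediate frames are produced by linearly blending consecutive grids. The fixed-bandwidth Gaussian KDE, the 64x64 grid, and the synthetic drifting batches are assumptions.

      import numpy as np
      from scipy.stats import gaussian_kde   # assumed dependency: SciPy

      grid = np.mgrid[0:1:64j, 0:1:64j]                 # 64x64 evaluation grid on the unit square
      coords = grid.reshape(2, -1)
      rng = np.random.default_rng(0)

      def density_frame(points):
          """Evaluate a Gaussian KDE of one batch of 2-D points on the fixed grid."""
          return gaussian_kde(points.T)(coords).reshape(64, 64)

      # Two consecutive batches of streaming points whose centre drifts over time.
      batch_a = rng.normal([0.3, 0.5], 0.08, size=(400, 2))
      batch_b = rng.normal([0.6, 0.5], 0.08, size=(400, 2))
      frame_a, frame_b = density_frame(batch_a), density_frame(batch_b)

      # Smooth intermediate frames by linear blending between consecutive density grids.
      for w in np.linspace(0, 1, 5):
          frame = (1 - w) * frame_a + w * frame_b
          i, j = np.unravel_index(frame.argmax(), frame.shape)
          print(f"blend {w:.2f}: density peak near ({grid[0][i, j]:.2f}, {grid[1][i, j]:.2f})")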

  14. Developmental trajectory of neural specialization for letter and number visual processing.

    PubMed

    Park, Joonkoo; van den Berg, Berry; Chiang, Crystal; Woldorff, Marty G; Brannon, Elizabeth M

    2018-05-01

    Adult neuroimaging studies have demonstrated dissociable neural activation patterns in the visual cortex in response to letters (Latin alphabet) and numbers (Arabic numerals), which suggest a strong experiential influence of reading and mathematics on the human visual system. Here, developmental trajectories in the event-related potential (ERP) patterns evoked by visual processing of letters, numbers, and false fonts were examined in four different age groups (7-, 10-, 15-year-olds, and young adults). The 15-year-olds and adults showed greater neural sensitivity to letters over numbers in the left visual cortex and the reverse pattern in the right visual cortex, extending previous findings in adults to teenagers. In marked contrast, 7- and 10-year-olds did not show this dissociable neural pattern. Furthermore, the contrast of familiar stimuli (letters or numbers) versus unfamiliar ones (false fonts) showed stark ERP differences between the younger (7- and 10-year-olds) and the older (15-year-olds and adults) participants. These results suggest that both coarse (familiar versus unfamiliar) and fine (letters versus numbers) tuning for letters and numbers continue throughout childhood and early adolescence, demonstrating a profound impact of uniquely human cultural inventions on visual cognition and its development. © 2017 John Wiley & Sons Ltd.

  15. Application of local binary pattern and human visual Fibonacci texture features for classification of different medical images

    NASA Astrophysics Data System (ADS)

    Sanghavi, Foram; Agaian, Sos

    2017-05-01

    The goal of this paper is to (a) test the nuclei-based Computer Aided Cancer Detection system using the Human Visual based system on histopathology images and (b) compare the results of the proposed system with the Local Binary Pattern and modified Fibonacci p-pattern systems. The system performance is evaluated using different parameters such as accuracy, specificity, sensitivity, positive predictive value, and negative predictive value on 251 prostate histopathology images. An accuracy of 96.69% was observed for cancer detection using the proposed human visual based system, compared to 87.42% and 94.70% observed for the Local Binary Pattern and the modified Fibonacci p-pattern systems, respectively.
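
    A minimal Python sketch of the baseline local binary pattern descriptor referenced above (the human-visual and modified Fibonacci p-pattern descriptors are not reproduced here):

        # Basic 8-neighbour local binary pattern (LBP) codes.
        import numpy as np

        def lbp_codes(gray):
            """gray: 2-D grayscale array; returns 8-bit LBP codes for interior pixels."""
            gray = np.asarray(gray, dtype=float)
            c = gray[1:-1, 1:-1]                                   # centre pixels
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]           # clockwise neighbours
            codes = np.zeros(c.shape, dtype=np.int32)
            h, w = gray.shape
            for bit, (dy, dx) in enumerate(offsets):
                neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                codes += (neighbour >= c).astype(np.int32) * (1 << bit)
            return codes

        # A texture descriptor is then the normalised histogram of the codes:
        # hist = np.bincount(codes.ravel(), minlength=256) / codes.size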

  16. Retinal Wave Patterns Are Governed by Mutual Excitation among Starburst Amacrine Cells and Drive the Refinement and Maintenance of Visual Circuits.

    PubMed

    Xu, Hong-Ping; Burbridge, Timothy J; Ye, Meijun; Chen, Minggang; Ge, Xinxin; Zhou, Z Jimmy; Crair, Michael C

    2016-03-30

    Retinal waves are correlated bursts of spontaneous activity whose spatiotemporal patterns are critical for early activity-dependent circuit elaboration and refinement in the mammalian visual system. Three separate developmental wave epochs or stages have been described, but the mechanism(s) of pattern generation of each and their distinct roles in visual circuit development remain incompletely understood. We used neuroanatomical, in vitro and in vivo electrophysiological, and optical imaging techniques in genetically manipulated mice to examine the mechanisms of wave initiation and propagation and the role of wave patterns in visual circuit development. Through deletion of β2 subunits of nicotinic acetylcholine receptors (β2-nAChRs) selectively from starburst amacrine cells (SACs), we show that mutual excitation among SACs is critical for Stage II (cholinergic) retinal wave propagation, supporting models of wave initiation and pattern generation from within a single retinal cell type. We also demonstrate that β2-nAChRs in SACs, and normal wave patterns, are necessary for eye-specific segregation. Finally, we show that Stage III (glutamatergic) retinal waves are not themselves necessary for normal eye-specific segregation, but elimination of both Stage II and Stage III retinal waves dramatically disrupts eye-specific segregation. This suggests that persistent Stage II retinal waves can adequately compensate for Stage III retinal wave loss during the development and refinement of eye-specific segregation. These experiments confirm key features of the "recurrent network" model for retinal wave propagation and clarify the roles of Stage II and Stage III retinal wave patterns in visual circuit development. Spontaneous activity drives early mammalian circuit development, but the initiation and patterning of activity vary across development and among modalities. Cholinergic "retinal waves" are initiated in starburst amacrine cells and propagate to retinal ganglion cells and higher-order visual areas, but the mechanism responsible for creating their unique and critical activity pattern is incompletely understood. We demonstrate that cholinergic wave patterns are dictated by recurrent connectivity within starburst amacrine cells, and retinal ganglion cells act as "readouts" of patterned activity. We also show that eye-specific segregation occurs normally without glutamatergic waves, but elimination of both cholinergic and glutamatergic waves completely disrupts visual circuit development. These results suggest that each retinal wave pattern during development is optimized for concurrently refining multiple visual circuits. Copyright © 2016 the authors 0270-6474/16/363872-16$15.00/0.

  17. Fast and Famous: Looking for the Fastest Speed at Which a Face Can be Recognized

    PubMed Central

    Barragan-Jason, Gladys; Besson, Gabriel; Ceccaldi, Mathieu; Barbeau, Emmanuel J.

    2012-01-01

    Face recognition is supposed to be fast. However, the actual speed at which faces can be recognized remains unknown. To address this issue, we report two experiments run with speed constraints. In both experiments, famous faces had to be recognized among unknown ones using a large set of stimuli to prevent pre-activation of features which would speed up recognition. In the first experiment (31 participants), recognition of famous faces was investigated using a rapid go/no-go task. In the second experiment, 101 participants performed a highly time constrained recognition task using the Speed and Accuracy Boosting procedure. Results indicate that the fastest speed at which a face can be recognized is around 360–390 ms. Such latencies are about 100 ms longer than the latencies recorded in similar tasks in which subjects have to detect faces among other stimuli. We discuss which model of activation of the visual ventral stream could account for such latencies. These latencies are not consistent with a purely feed-forward pass of activity throughout the visual ventral stream. An alternative is that face recognition relies on the core network underlying face processing identified in fMRI studies (OFA, FFA, and pSTS) and reentrant loops to refine face representation. However, the model of activation favored is that of an activation of the whole visual ventral stream up to anterior areas, such as the perirhinal cortex, combined with parallel and feed-back processes. Further studies are needed to assess which of these three models of activation can best account for face recognition. PMID:23460051

  18. A smart sensor architecture based on emergent computation in an array of outer-totalistic cells

    NASA Astrophysics Data System (ADS)

    Dogaru, Radu; Dogaru, Ioana; Glesner, Manfred

    2005-06-01

    A novel smart-sensor architecture is proposed, capable of segmenting and recognizing characters in a monochrome image. It is capable of providing a list of ASCII codes representing the recognized characters from the monochrome visual field, and it can operate as an aid for the blind or for industrial applications. A bio-inspired cellular model with simple linear neurons was found to be the best at performing the nontrivial task of cropping isolated compact objects such as handwritten digits or characters. By attaching a simple outer-totalistic cell to each pixel sensor, emergent computation in the resulting cellular automata lattice provides a straightforward and compact solution to the otherwise computationally intensive problem of character segmentation. A simple and robust recognition algorithm is built into a compact sequential controller accessing the array of cells, so that the integrated device can directly provide a list of codes of the recognized characters. Preliminary simulation tests indicate good performance and robustness to various distortions of the visual field.
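
    The abstract does not specify the outer-totalistic rule; the Python sketch below shows the general form of such an update on a binary pixel lattice, with assumed birth/survival thresholds chosen only to illustrate how compact regions can persist while isolated noise dies out.

        # Outer-totalistic cellular automaton step on a binary image lattice.
        # The thresholds are assumptions for illustration, not the paper's rule.
        import numpy as np
        from scipy.ndimage import convolve

        NEIGH = np.array([[1, 1, 1],
                          [1, 0, 1],
                          [1, 1, 1]])          # 8-neighbourhood, centre excluded

        def ca_step(cells, birth_min=3, survive_min=2):
            """cells: 2-D {0,1} array. Outer-totalistic update: the next state of a
            cell depends only on its own state and the sum of its 8 neighbours."""
            n = convolve(cells, NEIGH, mode="constant", cval=0)
            born = (cells == 0) & (n >= birth_min)       # empty cell joins a dense region
            survive = (cells == 1) & (n >= survive_min)  # isolated noise pixels die out
            return (born | survive).astype(np.uint8)

        def run(cells, steps=10):
            for _ in range(steps):
                cells = ca_step(cells)
            return cells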

  19. Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF

    NASA Astrophysics Data System (ADS)

    Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)

    A conceptual Guide-Dog Robot prototype to lead and to recognize a visually-handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, a human-machine interface, and the capability of avoiding obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion as well as to detect the center of the corridor is proposed and implemented in the robot's human-machine interface. It is demonstrated that, using the proposed novel leading and detecting algorithm along with a rapid scanning laser range finder (LRF) sensor, the robot is able to successfully and effectively lead a human walking in a corridor without running into obstacles such as trash boxes or adjacent walking persons. Position and trajectory of the robot leading a human maneuvering in a common corridor environment are measured by an independent LRF observer. The measured data suggest that the proposed algorithms are effective in enabling the robot to detect the center of the corridor and the position of its follower correctly.

  20. Symbol Recognition Using a Concept Lattice of Graphical Patterns

    NASA Astrophysics Data System (ADS)

    Rusiñol, Marçal; Bertet, Karell; Ogier, Jean-Marc; Lladós, Josep

    In this paper we propose a new approach to recognizing symbols by the use of a concept lattice. We propose to build a concept lattice in terms of graphical patterns. Each model symbol is decomposed into a set of composing graphical patterns taken as primitives. Each of these primitives is described by boundary moment invariants. The obtained concept lattice relates which symbolic patterns compose a given graphical symbol. A Hasse diagram is derived from the context and is used to recognize symbols affected by noise. We present some preliminary results over a variation of the dataset of symbols from the GREC 2005 symbol recognition contest.
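
    The particular boundary moment invariants are not named in the abstract; as one plausible stand-in, the Python sketch below describes a primitive by Hu's seven moment invariants computed on its outer boundary using OpenCV (all parameter choices are assumptions).

        # Describe a graphical primitive by moment invariants of its outer boundary.
        import cv2
        import numpy as np

        def primitive_descriptor(binary_patch):
            """binary_patch: 2-D uint8 mask of one graphical primitive.
            Returns a rotation/scale/translation-invariant 7-vector."""
            contours, _ = cv2.findContours(binary_patch, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_NONE)
            boundary = max(contours, key=cv2.contourArea)       # outer boundary
            hu = cv2.HuMoments(cv2.moments(boundary)).ravel()
            # Log-scale for numerical stability, preserving sign.
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)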

  1. Increasing elevation of fire in the Sierra Nevada and implications for forest change

    USGS Publications Warehouse

    Schwartz, Mark W.; Butt, Nathalie; Dolanc, Christopher R.; Holguin, Andrew; Moritz, Max A.; North, Malcolm P.; Safford, Hugh D.; Stephenson, Nathan L.; Thorne, James H.; van Mantgem, Phillip J.

    2015-01-01

    Fire in high-elevation forest ecosystems can have severe impacts on forest structure, function and biodiversity. Using a 105-year data set, we found increasing elevation extent of fires in the Sierra Nevada, and pose five hypotheses to explain this pattern. Beyond the recognized pattern of increasing fire frequency in the Sierra Nevada since the late 20th century, we find that the upper elevation extent of those fires has also been increasing. Factors such as fire season climate and fuel build up are recognized potential drivers of changes in fire regimes. Patterns of warming climate and increasing stand density are consistent with both the direction and magnitude of increasing elevation of wildfire. Reduction in high elevation wildfire suppression and increasing ignition frequencies may also contribute to the observed pattern. Historical biases in fire reporting are recognized, but not likely to explain the observed patterns. The four plausible mechanistic hypotheses (changes in fire management, climate, fuels, ignitions) are not mutually exclusive, and likely have synergistic interactions that may explain the observed changes. Irrespective of mechanism, the observed pattern of increasing occurrence of fire in these subalpine forests may have significant impacts on their resilience to changing climatic conditions.

  2. Using Molecular Visualization to Explore Protein Structure and Function and Enhance Student Facility with Computational Tools

    ERIC Educational Resources Information Center

    Terrell, Cassidy R.; Listenberger, Laura L.

    2017-01-01

    Recognizing that undergraduate students can benefit from analysis of 3D protein structure and function, we have developed a multiweek, inquiry-based molecular visualization project for Biochemistry I students. This project uses a virtual model of cyclooxygenase-1 (COX-1) to guide students through multiple levels of protein structure analysis. The…

  3. Effects of Visual and Auditory Perceptual Aptitudes and Letter Discrimination Pretraining on Word Recognition.

    ERIC Educational Resources Information Center

    Janssen, David Rainsford

    This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…

  4. Handwriting Error Patterns of Children with Mild Motor Difficulties.

    ERIC Educational Resources Information Center

    Malloy-Miller, Theresa; And Others

    1995-01-01

    A test of handwriting legibility and 6 perceptual-motor tests were completed by 66 children ages 7-12. Among handwriting error patterns, execution was associated with visual-motor skill and sensory discrimination, aiming with visual-motor and fine-motor skills. The visual-spatial factor had no significant association with perceptual-motor…

  5. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of the static VPIC in the reduction of information from an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  6. Treatment of a patient with posterior cortical atrophy (PCA) with chiropractic manipulation and Dynamic Neuromuscular Stabilization (DNS): A case report

    PubMed Central

    Francio, Vinicius T.; Boesch, Ron; Tunning, Michael

    2015-01-01

    Objective: Posterior cortical atrophy (PCA) is a rare progressive neurodegenerative syndrome whose unusual symptoms include deficits of balance, bodily orientation, chronic pain syndrome and dysfunctional motor patterns. Current research provides minimal guidance on support, education and recommended evidence-based patient care. This case reports the utilization of chiropractic spinal manipulation, dynamic neuromuscular stabilization (DNS), and other adjunctive procedures along with medical treatment of PCA. Clinical features: A 54-year-old male presented to a chiropractic clinic with non-specific back pain associated with visual disturbances, slight memory loss, and inappropriate cognitive motor control. After physical examination, brain MRI and PET scan, the diagnosis of PCA was recognized. Intervention and Outcome: Chiropractic spinal manipulation and dynamic neuromuscular stabilization were utilized as adjunctive care to conservative pharmacological treatment of PCA. Outcome measurements showed a 60% improvement in the patient's perception of health, with restored functional neuromuscular patterns, improvements in locomotion, posture, pain control, mood, tolerance to activities of daily living (ADLs) and overall satisfactory progress in quality of life. Yet, no changes in memory loss progression, visual space orientation, or speech were observed. Conclusion: PCA is a progressive and debilitating condition. Because of poor awareness of PCA among physicians, patients usually receive incomplete care. Additional efforts must be centered on the musculoskeletal features of PCA, aiming at enhancement of quality of life and functional improvements (FI). Adjunctive rehabilitative treatment is considered essential for individuals with cognitive and motor disturbances, and manual medicine procedures may be considered a viable option. PMID:25729084

  7. Visual Object Pattern Separation Varies in Older Adults

    ERIC Educational Resources Information Center

    Holden, Heather M.; Toner, Chelsea; Pirogovsky, Eva; Kirwan, C. Brock; Gilbert, Paul E.

    2013-01-01

    Young and nondemented older adults completed a visual object continuous recognition memory task in which some stimuli (lures) were similar but not identical to previously presented objects. The lures were hypothesized to result in increased interference and increased pattern separation demand. To examine variability in object pattern separation…

  8. Steady-state pattern electroretinogram and short-duration transient visual evoked potentials in glaucomatous and healthy eyes.

    PubMed

    Amarasekera, Dilru C; Resende, Arthur F; Waisbourd, Michael; Puri, Sanjeev; Moster, Marlene R; Hark, Lisa A; Katz, L Jay; Fudemberg, Scott J; Mantravadi, Anand V

    2018-01-01

    This study evaluates two rapid electrophysiological glaucoma diagnostic tests that may add a functional perspective to glaucoma diagnosis. This study aimed to determine the ability of two office-based electrophysiological diagnostic tests, steady-state pattern electroretinogram and short-duration transient visual evoked potentials, to discern between glaucomatous and healthy eyes. This is a cross-sectional study in a hospital setting. Forty-one patients with glaucoma and 41 healthy volunteers participated in the study. Steady-state pattern electroretinogram and short-duration transient visual evoked potential testing was conducted in glaucomatous and healthy eyes. A 64-bar-size stimulus with both a low-contrast and high-contrast setting was used to compare steady-state pattern electroretinogram parameters in both groups. A low-contrast and high-contrast checkerboard stimulus was used to measure short-duration transient visual evoked potential parameters in both groups. Steady-state pattern electroretinogram parameters compared were MagnitudeD, MagnitudeD/Magnitude ratio, and the signal-to-noise ratio. Short-duration transient visual evoked potential parameters compared were amplitude and latency. MagnitudeD was significantly lower in glaucoma patients when using a low-contrast (P = 0.001) and high-contrast (P < 0.001) 64-bar-size steady-state pattern electroretinogram stimulus. MagnitudeD/Magnitude ratio and SNR were significantly lower in the glaucoma group when using a high-contrast 64-bar-size stimulus (P < 0.001 and P = 0.010, respectively). Short-duration transient visual evoked potential amplitude and latency were not significantly different between the two groups. Steady-state pattern electroretinogram was effectively able to discern between glaucomatous and healthy eyes. Steady-state pattern electroretinogram may thus have a role as a clinically useful electrophysiological diagnostic tool. © 2017 Royal Australian and New Zealand College of Ophthalmologists.

  9. Fractal analysis of radiologists' visual scanning pattern in screening mammography

    NASA Astrophysics Data System (ADS)

    Alamudun, Folami T.; Yoon, Hong-Jun; Hudson, Kathy; Morin-Ducote, Garnetta; Tourassi, Georgia

    2015-03-01

    Several researchers have investigated radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of the radiologists' visual scanning pattern when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied on the composite 4- view scanpaths. For each case, the complexity of each radiologist's scanpath was measured using fractal dimension estimated with the box counting method. The association between the fractal dimension of the radiologists' visual scanpath, case pathology, case density, and radiologist experience was evaluated using fixed effects ANOVA. ANOVA showed that the complexity of the radiologists' visual search pattern in screening mammography is dependent on case specific attributes (breast parenchyma density and case pathology) as well as on reader attributes, namely experience level. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases. There is also substantial inter-observer variability which cannot be explained only by experience level.
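
    A minimal Python sketch of the box-counting estimate of fractal dimension named above, applied to a binary image of a gaze scanpath (box sizes are illustrative):

        # Box-counting fractal dimension of a binary scanpath mask.
        import numpy as np

        def box_count(mask, size):
            """Count boxes of side `size` that contain at least one scanpath pixel."""
            h, w = mask.shape
            hc, wc = h - h % size, w - w % size              # crop to a multiple of size
            blocks = mask[:hc, :wc].reshape(hc // size, size, wc // size, size)
            return np.count_nonzero(blocks.any(axis=(1, 3)))

        def fractal_dimension(mask, sizes=(2, 4, 8, 16, 32, 64)):
            """mask: 2-D boolean array marking gaze-path pixels.
            Fits log N(s) = -D log s + c and returns the slope magnitude D."""
            counts = [box_count(mask, s) for s in sizes]
            slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
            return -slope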

  10. Transformations of visual memory induced by implied motions of pattern elements.

    PubMed

    Finke, R A; Freyd, J J

    1985-10-01

    Four experiments measured distortions in short-term visual memory induced by displays depicting independent translations of the elements of a pattern. In each experiment, observers saw a sequence of 4 dot patterns and were instructed to remember the third pattern and to compare it with the fourth. The first three patterns depicted translations of the dots in consistent, but separate directions. Error rates and reaction times for rejecting the fourth pattern as different from the third were substantially higher when the dots in that pattern were displaced slightly forward, in the same directions as the implied motions, compared with when the dots were displaced in the opposite, backward directions. These effects showed little variation across interstimulus intervals ranging from 250 to 2,000 ms, and did not depend on whether the displays gave rise to visual apparent motion. However, they were eliminated when the dots in the fourth pattern were displaced by larger amounts in each direction, corresponding to the dot positions in the next and previous patterns in the same inducing sequence. These findings extend our initial report of the phenomenon of "representational momentum" (Freyd & Finke, 1984a), and help to rule out alternatives to the proposal that visual memories tend to undergo, at least to some extent, the transformations implied by a prior sequence of observed events.

  11. Comparative study of visual pathways in owls (Aves: Strigiformes).

    PubMed

    Gutiérrez-Ibáñez, Cristián; Iwaniuk, Andrew N; Lisney, Thomas J; Wylie, Douglas R

    2013-01-01

    Although they are usually regarded as nocturnal, owls exhibit a wide range of activity patterns, from strictly nocturnal, to crepuscular or cathemeral, to diurnal. Several studies have shown that these differences in the activity pattern are reflected in differences in eye morphology and retinal organization. Despite the evidence that differences in activity pattern among owl species are reflected in the peripheral visual system, there has been no attempt to correlate these differences with changes in the visual regions in the brain. In this study, we compare the relative size of nuclei in the main visual pathways in nine species of owl that exhibit a wide range of activity patterns. We found marked differences in the relative size of all visual structures among the species studied, both in the tectofugal and the thalamofugal pathway, as well in other retinorecipient nuclei, including the nucleus lentiformis mesencephali, the nucleus of the basal optic root and the nucleus geniculatus lateralis, pars ventralis. We show that the barn owl (Tyto alba), a species widely used in the study of the integration of visual and auditory processing, has reduced visual pathways compared to strigid owls. Our results also suggest there could be a trade-off between the relative size of visual pathways and auditory pathways, similar to that reported in mammals. Finally, our results show that although there is no relationship between activity pattern and the relative size of either the tectofugal or the thalamofugal pathway, there is a positive correlation between the relative size of both visual pathways and the relative number of cells in the retinal ganglion layer. Copyright © 2012 S. Karger AG, Basel.

  12. Visual Analytics for Pattern Discovery in Home Care

    PubMed Central

    Monsen, Karen A.; Bae, Sung-Heui; Zhang, Wenhui

    2016-01-01

    Summary Background Visualization can reduce the cognitive load of information, allowing users to easily interpret and assess large amounts of data. The purpose of our study was to examine home health data using visual analysis techniques to discover clinically salient associations between patient characteristics and problem-oriented health outcomes of older adult home health patients during the home health service period. Methods Knowledge, Behavior and Status ratings at discharge, as well as change from admission to discharge, coded using the Omaha System, were collected from a dataset of 988 de-identified patient records from 15 home health agencies. SPSS Visualization Designer v1.0 was used to visually analyze patterns between independent and outcome variables using heat maps and histograms. Visualizations suggesting clinical salience were tested for significance using correlation analysis. Results The mean age of the patients was 80 years, with the majority female (66%). Of the 150 visualizations, 69 potentially meaningful patterns were statistically evaluated through bivariate associations, revealing 21 significant associations. Further, 14 associations between episode length and the Charlson comorbidity index, mainly with urinary-related diagnoses and problems, remained significant after adjustment analyses. Through visual analysis, the adverse association of longer home health episode length and higher Charlson comorbidity index with behavior or status outcomes for patients with impaired urinary function was revealed. Conclusions We have demonstrated the use of visual analysis to discover novel patterns that described high-needs subgroups among the older home health patient population. The effective presentation of these data patterns can allow clinicians to identify areas of patient improvement, and time periods that are most effective for implementing home health interventions to improve patient outcomes. PMID:27466053

  13. Visual Learning Induces Changes in Resting-State fMRI Multivariate Pattern of Information.

    PubMed

    Guidotti, Roberto; Del Gratta, Cosimo; Baldassarre, Antonello; Romani, Gian Luca; Corbetta, Maurizio

    2015-07-08

    When measured with functional magnetic resonance imaging (fMRI) in the resting state (R-fMRI), spontaneous activity is correlated between brain regions that are anatomically and functionally related. Learning and/or task performance can induce modulation of the resting synchronization between brain regions. Moreover, at the neuronal level spontaneous brain activity can replay patterns evoked by a previously presented stimulus. Here we test whether visual learning/task performance can induce a change in the patterns of coded information in R-fMRI signals consistent with a role of spontaneous activity in representing task-relevant information. Human subjects underwent R-fMRI before and after perceptual learning on a novel visual shape orientation discrimination task. Task-evoked fMRI patterns to trained versus novel stimuli were recorded after learning was completed, and before the second R-fMRI session. Using multivariate pattern analysis on task-evoked signals, we found patterns in several cortical regions, as follows: visual cortex, V3/V3A/V7; within the default mode network, precuneus, and inferior parietal lobule; and, within the dorsal attention network, intraparietal sulcus, which discriminated between trained and novel visual stimuli. The accuracy of classification was strongly correlated with behavioral performance. Next, we measured multivariate patterns in R-fMRI signals before and after learning. The frequency and similarity of resting states representing the task/visual stimuli states increased post-learning in the same cortical regions recruited by the task. These findings support a representational role of spontaneous brain activity. Copyright © 2015 the authors 0270-6474/15/359786-13$15.00/0.

  14. Research of Daily Conversation Transmitting System Based on Mouth Part Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Watanabe, Mutsumi; Nishi, Natsuko

    The authors are developing a vision-based intention transfer technique that recognizes the user's facial expressions and movements, to help free and convenient communication with aged or disabled persons who find it difficult to talk, to discriminate small character prints, or to operate keyboards with their hands and fingers. In this paper we report a prototype system in which layered daily conversations are successively selected by recognizing transitions in the shape of the user's mouth from image sequences captured by a camera placed in front of the user. Four mouth part patterns are used in the system. A method that automatically recognizes these patterns by analyzing the intensity histogram data around the mouth region is newly developed. Confirmation of a selection along the way is executed by detecting open and shut movements of the mouth through temporal changes in the intensity histogram data. The method has been implemented on a desktop PC in VC++. Experimental results of mouth shape pattern recognition by twenty-five persons have shown the effectiveness of the method.
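
    The paper's matching rule is not detailed in the abstract; the Python sketch below shows one simple way such an intensity-histogram comparison could work, classifying a mouth region against four reference histograms by chi-square distance (the reference histograms and the distance measure are assumptions).

        # Histogram-matching sketch for classifying mouth-shape patterns.
        import numpy as np

        def intensity_histogram(mouth_roi, bins=32):
            """Normalised intensity histogram of the mouth region-of-interest."""
            hist, _ = np.histogram(mouth_roi, bins=bins, range=(0, 256), density=True)
            return hist

        def classify_mouth(mouth_roi, references):
            """references: dict mapping pattern name -> reference histogram.
            Returns the pattern whose histogram is closest in chi-square distance."""
            h = intensity_histogram(mouth_roi)

            def chi2(a, b):
                return np.sum((a - b) ** 2 / (a + b + 1e-9))

            return min(references, key=lambda name: chi2(h, references[name]))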

  15. A novel role for visual perspective cues in the neural computation of depth

    PubMed Central

    Kim, HyungGoo R.; Angelaki, Dora E.; DeAngelis, Gregory C.

    2014-01-01

    As we explore a scene, our eye movements add global patterns of motion to the retinal image, complicating visual motion produced by self-motion or moving objects. Conventionally, it has been assumed that extra-retinal signals, such as efference copy of smooth pursuit commands, are required to compensate for the visual consequences of eye rotations. We consider an alternative possibility: namely, that the visual system can infer eye rotations from global patterns of image motion. We visually simulated combinations of eye translation and rotation, including perspective distortions that change dynamically over time. We demonstrate that incorporating these “dynamic perspective” cues allows the visual system to generate selectivity for depth sign from motion parallax in macaque area MT, a computation that was previously thought to require extra-retinal signals regarding eye velocity. Our findings suggest novel neural mechanisms that analyze global patterns of visual motion to perform computations that require knowledge of eye rotations. PMID:25436667

  16. Electrophysiological evidence of altered visual processing in adults who experienced visual deprivation during infancy.

    PubMed

    Segalowitz, Sidney J; Sternin, Avital; Lewis, Terri L; Dywan, Jane; Maurer, Daphne

    2017-04-01

    We examined the role of early visual input in visual system development by testing adults who had been born with dense bilateral cataracts that blocked all patterned visual input during infancy until the cataractous lenses were removed surgically and the eyes fitted with compensatory contact lenses. Patients viewed checkerboards and textures to explore early processing regions (V1, V2), Glass patterns to examine global form processing (V4), and moving stimuli to explore global motion processing (V5). Patients' ERPs differed from those of controls in that (1) the V1 component was much smaller for all but the simplest stimuli and (2) extrastriate components did not differentiate amongst texture stimuli, Glass patterns, or motion stimuli. The results indicate that early visual deprivation contributes to permanent abnormalities at early and mid levels of visual processing, consistent with enduring behavioral deficits in the ability to process complex textures, global form, and global motion. © 2017 Wiley Periodicals, Inc.

  17. Adding a visualization feature to web search engines: it's time.

    PubMed

    Wong, Pak Chung

    2008-01-01

    It's widely recognized that all Web search engines today are almost identical in presentation layout and behavior. In fact, the same presentation approach has been applied to depicting search engine results pages (SERPs) since the first Web search engine launched in 1993. In this Visualization Viewpoints article, I propose to add a visualization feature to Web search engines and suggest that the new addition can improve search engines' performance and capabilities, which in turn lead to better Web search technology.

  18. Design Fragments

    DTIC Science & Technology

    2007-04-19

    define the patterns and are better at analyzing behavior. SPQR (System for Pattern Query and Recognition) [18, 58] can recognize pattern variants... Stotts. SPQR: Flexible automated design pattern extraction from source code. ASE, 00:215, 2003. ISSN 1527-1366. doi: http://doi.ieeecomputersociety.org

  19. Love is in the gaze: an eye-tracking study of love and sexual desire.

    PubMed

    Bolmont, Mylene; Cacioppo, John T; Cacioppo, Stephanie

    2014-09-01

    Reading other people's eyes is a valuable skill during interpersonal interaction. Although a number of studies have investigated visual patterns in relation to the perceiver's interest, intentions, and goals, little is known about eye gaze when it comes to differentiating intentions to love from intentions to lust (sexual desire). To address this question, we conducted two experiments: one testing whether the visual pattern related to the perception of love differs from that related to lust and one testing whether the visual pattern related to the expression of love differs from that related to lust. Our results show that a person's eye gaze shifts as a function of his or her goal (love vs. lust) when looking at a visual stimulus. Such identification of distinct visual patterns for love and lust could have theoretical and clinical importance in couples therapy when these two phenomena are difficult to disentangle from one another on the basis of patients' self-reports. © The Author(s) 2014.

  20. Visualizing Dynamic Bitcoin Transaction Patterns.

    PubMed

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J

    2016-06-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network.

  1. Visualizing Dynamic Bitcoin Transaction Patterns

    PubMed Central

    McGinn, Dan; Birch, David; Akroyd, David; Molina-Solana, Miguel; Guo, Yike; Knottenbelt, William J.

    2016-01-01

    This work presents a systemic top-down visualization of Bitcoin transaction activity to explore dynamically generated patterns of algorithmic behavior. Bitcoin dominates the cryptocurrency markets and presents researchers with a rich source of real-time transactional data. The pseudonymous yet public nature of the data presents opportunities for the discovery of human and algorithmic behavioral patterns of interest to many parties such as financial regulators, protocol designers, and security analysts. However, retaining visual fidelity to the underlying data to retain a fuller understanding of activity within the network remains challenging, particularly in real time. We expose an effective force-directed graph visualization employed in our large-scale data observation facility to accelerate this data exploration and derive useful insight among domain experts and the general public alike. The high-fidelity visualizations demonstrated in this article allowed for collaborative discovery of unexpected high frequency transaction patterns, including automated laundering operations, and the evolution of multiple distinct algorithmic denial of service attacks on the Bitcoin network. PMID:27441715
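
    The article's system is a custom large-scale, real-time renderer; purely as an illustration of the force-directed idea on toy data, a small transaction graph could be laid out with networkx as in the following Python sketch (the addresses and amounts are made up).

        # Toy force-directed layout of a small transaction graph.
        import networkx as nx
        import matplotlib.pyplot as plt

        def draw_transaction_graph(edges):
            """edges: iterable of (input_address, output_address, amount) tuples."""
            g = nx.DiGraph()
            for src, dst, amount in edges:
                g.add_edge(src, dst, weight=amount)
            pos = nx.spring_layout(g, seed=42)          # Fruchterman-Reingold force model
            nx.draw_networkx(g, pos, node_size=120, font_size=6, arrowsize=8)
            plt.axis("off")
            plt.show()

        # draw_transaction_graph([("addr_a", "addr_b", 0.5), ("addr_b", "addr_c", 0.2)])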

  2. Recognizing lexical and semantic change patterns in evolving life science ontologies to inform mapping adaptation.

    PubMed

    Dos Reis, Julio Cesar; Dinh, Duy; Da Silveira, Marcos; Pruski, Cédric; Reynaud-Delaître, Chantal

    2015-03-01

    Mappings established between life science ontologies require significant effort to keep them up to date, due to the size and frequent evolution of these ontologies. In consequence, automatic methods for applying modifications to mappings are in high demand. The accuracy of such methods relies on the available description of how the ontologies evolve, especially regarding concepts involved in mappings. However, from one ontology version to another, a deeper understanding of the ontology changes relevant for supporting mapping adaptation is typically lacking. This research work defines a set of change patterns at the level of concept attributes, and proposes original methods to automatically recognize instances of these patterns based on the similarity between attributes denoting the evolving concepts. This investigation evaluates the benefits of the proposed methods and the influence of the recognized change patterns on the selection of strategies for mapping adaptation. The summary of the findings is as follows: (1) the Precision (>60%) and Recall (>35%) achieved by comparing manually identified change patterns with the automatic ones; (2) a set of potential impacts of recognized change patterns on the way mappings are adapted. We found that the detected correlations cover ∼66% of the mapping adaptation actions with a positive impact; and (3) the influence of the similarity coefficient calculated between concept attributes on the performance of the recognition algorithms. The experimental evaluations conducted with real life science ontologies showed the effectiveness of our approach to accurately characterize ontology evolution at the level of concept attributes. This investigation confirmed the relevance of the proposed change patterns to support decisions on mapping adaptation. Copyright © 2014 Elsevier B.V. All rights reserved.
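
    As a toy illustration of recognizing a change pattern from attribute similarity (the paper defines its own pattern set and similarity coefficients; the threshold and labels below are assumptions), a Python sketch:

        # Classify a concept-attribute change by string similarity between versions.
        from difflib import SequenceMatcher

        def change_pattern(old_label, new_label, threshold=0.6):
            """Very rough stand-in for attribute-level change-pattern recognition."""
            if old_label == new_label:
                return "unchanged"
            sim = SequenceMatcher(None, old_label.lower(), new_label.lower()).ratio()
            return "revision" if sim >= threshold else "replacement"

        # e.g. change_pattern("myocardial infarct", "myocardial infarction") -> "revision"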

  3. Patterned-string tasks: relation between fine motor skills and visual-spatial abilities in parrots.

    PubMed

    Krasheninnikova, Anastasia

    2013-01-01

    String-pulling and patterned-string tasks are often used to analyse perceptual and cognitive abilities in animals. In addition, the paradigm can be used to test the interrelation between visual-spatial and motor performance. Two Australian parrot species, the galah (Eolophus roseicapilla) and the cockatiel (Nymphicus hollandicus), forage on the ground, but only the galah uses its feet to manipulate food. I used a set of string pulling and patterned-string tasks to test whether usage of the feet during foraging is a prerequisite for solving the vertical string pulling problem. Indeed, the two species used techniques that clearly differed in the extent of beak-foot coordination but did not differ in terms of their success in solving the string pulling task. However, when the visual-spatial skills of the subjects were tested, the galahs outperformed the cockatiels. This supports the hypothesis that the fine motor skills needed for advanced beak-foot coordination may be interrelated with certain visual-spatial abilities needed for solving patterned-string tasks. This pattern was also found within each of the two species on the individual level: higher motor abilities positively correlated with performance in patterned-string tasks. This is the first evidence of an interrelation between visual-spatial and motor abilities in non-mammalian animals.

  4. Intelligent Control for Future Autonomous Distributed Sensor Systems

    DTIC Science & Technology

    2007-03-26

    recognized, the use of a pre-computed reconfiguration solution that fits the recognized scenario could allow reconfiguration to take place without... This data was loaded into the program developed to visualize the seabed and then the simulation was performed using frames to denote the target... to generate separate images for each eye. Users wear lightweight, inexpensive polarized eyeglasses and see a stereoscopic image.

  5. Differential Roles of the Fan-Shaped Body and the Ellipsoid Body in "Drosophila" Visual Pattern Memory

    ERIC Educational Resources Information Center

    Pan, Yufeng; Zhou, Yanqiong; Guo, Chao; Gong, Haiyun; Gong, Zhefeng; Liu, Li

    2009-01-01

    The central complex is a prominent structure in the "Drosophila" brain. Visual learning experiments in the flight simulator, with flies with genetically altered brains, revealed that two groups of horizontal neurons in one of its substructures, the fan-shaped body, were required for "Drosophila" visual pattern memory. However,…

  6. Brief Report: Early VEPs to Pattern-Reversal in Adolescents and Adults with Autism

    ERIC Educational Resources Information Center

    Kovarski, K.; Thillay, A.; Houy-Durand, E.; Roux, S.; Bidet-Caulet, A.; Bonnet-Brilhault, F.; Batty, M.

    2016-01-01

    Autism spectrum disorder (ASD) is characterized by atypical visual perception both in the social and nonsocial domain. In order to measure a reliable visual response, visual evoked potentials were recorded during a passive pattern-reversal stimulation in adolescents and adults with and without ASD. While the present results show the same…

  7. Visual Field Asymmetries in Attention Vary with Self-Reported Attention Deficits

    ERIC Educational Resources Information Center

    Poynter, William; Ingram, Paul; Minor, Scott

    2010-01-01

    The purpose of this study was to determine whether an index of self-reported attention deficits predicts the pattern of visual field asymmetries observed in behavioral measures of attention. Studies of "normal" subjects do not present a consistent pattern of asymmetry in attention functions, with some studies showing better left visual field (LVF)…

  8. A new method for text detection and recognition in indoor scene for assisting blind people

    NASA Astrophysics Data System (ADS)

    Jabnoun, Hanen; Benzarti, Faouzi; Amiri, Hamid

    2017-03-01

    Developing assistive systems for handicapped persons has become a challenging task in research projects. Recently, a variety of tools have been designed to help visually impaired or blind people as visual substitution systems. The majority of these tools are based on the conversion of input information into auditory or tactile sensory information. Furthermore, object recognition and text retrieval are exploited in visual substitution systems. Text detection and recognition provides a description of the surrounding environment, so that the blind person can readily recognize the scene. In this work, we aim to introduce a method for detecting and recognizing text in indoor scenes. The process consists of detecting the regions of interest that should contain text using connected components. Then, the text is recognized by employing image correlation. This component of an assistive system for blind persons should be simple, so that users are able to obtain the most informative feedback within the shortest time.

  9. Vertical visual features have a strong influence on cuttlefish camouflage.

    PubMed

    Ulmer, K M; Buresch, K C; Kossodo, M M; Mäthger, L M; Siemann, L A; Hanlon, R T

    2013-04-01

    Cuttlefish and other cephalopods use visual cues from their surroundings to adaptively change their body pattern for camouflage. Numerous previous experiments have demonstrated the influence of two-dimensional (2D) substrates (e.g., sand and gravel habitats) on camouflage, yet many marine habitats have varied three-dimensional (3D) structures among which cuttlefish camouflage from predators, including benthic predators that view cuttlefish horizontally against such 3D backgrounds. We conducted laboratory experiments, using Sepia officinalis, to test the relative influence of horizontal versus vertical visual cues on cuttlefish camouflage: 2D patterns on benthic substrates were tested versus 2D wall patterns and 3D objects with patterns. Specifically, we investigated the influence of (i) quantity and (ii) placement of high-contrast elements on a 3D object or a 2D wall, as well as (iii) the diameter and (iv) number of 3D objects with high-contrast elements on cuttlefish body pattern expression. Additionally, we tested the influence of high-contrast visual stimuli covering the entire 2D benthic substrate versus the entire 2D wall. In all experiments, visual cues presented in the vertical plane evoked the strongest body pattern response in cuttlefish. These experiments support field observations that, in some marine habitats, cuttlefish will respond to vertically oriented background features even when the preponderance of visual information in their field of view seems to be from the 2D surrounding substrate. Such choices highlight the selective decision-making that occurs in cephalopods with their adaptive camouflage capability.

  10. Unsupervised Clustering of Subcellular Protein Expression Patterns in High-Throughput Microscopy Images Reveals Protein Complexes and Functional Relationships between Proteins

    PubMed Central

    Handfield, Louis-François; Chong, Yolanda T.; Simmons, Jibril; Andrews, Brenda J.; Moses, Alan M.

    2013-01-01

    Protein subcellular localization has been systematically characterized in budding yeast using fluorescently tagged proteins. Based on the fluorescence microscopy images, subcellular localization of many proteins can be classified automatically using supervised machine learning approaches that have been trained to recognize predefined image classes based on statistical features. Here, we present an unsupervised analysis of protein expression patterns in a set of high-resolution, high-throughput microscope images. Our analysis is based on 7 biologically interpretable features which are evaluated on automatically identified cells, and whose cell-stage dependency is captured by a continuous model for cell growth. We show that it is possible to identify most previously identified localization patterns in a cluster analysis based on these features and that similarities between the inferred expression patterns contain more information about protein function than can be explained by a previous manual categorization of subcellular localization. Furthermore, the inferred cell-stage associated to each fluorescence measurement allows us to visualize large groups of proteins entering the bud at specific stages of bud growth. These correspond to proteins localized to organelles, revealing that the organelles must be entering the bud in a stereotypical order. We also identify and organize a smaller group of proteins that show subtle differences in the way they move around the bud during growth. Our results suggest that biologically interpretable features based on explicit models of cell morphology will yield unprecedented power for pattern discovery in high-resolution, high-throughput microscopy images. PMID:23785265
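
    A minimal Python sketch of unsupervised clustering over per-protein feature vectors, in the spirit of the seven interpretable features described above (k-means and the cluster count are assumptions, not the paper's exact analysis):

        # Unsupervised clustering of per-protein feature vectors.
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        def cluster_expression_patterns(features, n_clusters=20, seed=0):
            """features: (n_proteins, n_features) array; returns integer cluster labels."""
            z = StandardScaler().fit_transform(features)   # put features on a common scale
            model = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
            return model.fit_predict(z)

        # labels = cluster_expression_patterns(protein_feature_matrix)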

  11. Validation of a freshwater Otolith microstructure pattern for Nisqually Chinook Salmon (Oncorhynchus tshawytscha)

    USGS Publications Warehouse

    Lind-Null, Angie; Larsen, Kim

    2011-01-01

    The Nisqually Fall Chinook salmon (Oncorhynchus tshawytscha) population is one of 27 stocks in the Puget Sound (Washington) evolutionarily significant unit listed as threatened under the federal Endangered Species Act (ESA). Extensive restoration of the Nisqually River delta ecosystem has taken place to assist in recovery of the stock since estuary habitat is a critical transition zone for juvenile fall Chinook salmon. A pre-restoration baseline that includes the characterization of life history strategies, estuary residence times, growth rates and habitat use is needed to evaluate the potential response of hatchery and natural origin Chinook salmon to restoration efforts and to determine restoration success. Otolith microstructure analysis was selected as a tool to examine Chinook salmon life history, growth and residence in the Nisqually River estuary. The purpose of the current study is to incorporate microstructural analysis from the otoliths of juvenile Nisqually Chinook salmon collected at the downstream migrant trap within true freshwater (FW) habitat of the Nisqually River. The results from this analysis confirmed the previously documented Nisqually-specific FW microstructure pattern and revealed a Nisqually-specific microstructure pattern early in development (“developmental pattern”). No inter-annual variation in the microstructure pattern was visually observed when compared to samples from previous years. Furthermore, the Nisqually-specific “developmental pattern” and the FW microstructure pattern used in combination during analysis will allow us to recognize and separate with further confidence future unmarked Chinook salmon otolith collections into Nisqually-origin (natural or unmarked hatchery) and non-Nisqually origin categories. Freshwater mean increment width, growth rate and residence time were also calculated.

  12. Artificial intelligence in radiology.

    PubMed

    Hosny, Ahmed; Parmar, Chintan; Quackenbush, John; Schwartz, Lawrence H; Aerts, Hugo J W L

    2018-05-17

    Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.

  13. Frapbot: An open-source application for FRAP data.

    PubMed

    Kohze, Robin; Dieteren, Cindy E J; Koopman, Werner J H; Brock, Roland; Schmidt, Samuel

    2017-08-01

    We introduce Frapbot, a free-of-charge, open-source software web application written in R, which provides manual and automated analyses of fluorescence recovery after photobleaching (FRAP) datasets. For automated operation, starting from data tables containing columns of time-dependent intensity values for various regions of interest within the images, a pattern recognition algorithm recognizes the relevant columns and identifies the presence or absence of prebleach values and the time point of photobleaching. Raw data, residuals, normalization, and boxplots indicating the distribution of half times of recovery (t1/2) of all uploaded files are visualized instantly in a batch-wise manner using a variety of user-definable fitting options. The fitted results are provided as a .zip file, which contains .csv formatted output tables. Alternatively, the user can manually control any of the options described earlier. © 2017 International Society for Advancement of Cytometry.
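
    Frapbot itself is written in R; purely to illustrate the kind of fit it automates, the Python sketch below fits a single-exponential recovery model to post-bleach intensities and reports the half time of recovery (the model form and parameter names are assumptions for illustration).

        # Single-exponential FRAP recovery fit and half-time estimate.
        import numpy as np
        from scipy.optimize import curve_fit

        def recovery(t, i0, a, k):
            """Post-bleach intensity model: I(t) = i0 + a * (1 - exp(-k t))."""
            return i0 + a * (1.0 - np.exp(-k * t))

        def fit_frap(t_post, intensity_post):
            """t_post, intensity_post: 1-D arrays starting at the bleach time point.
            Returns fitted parameters and the half time of recovery t1/2 = ln(2)/k."""
            p0 = (intensity_post[0], intensity_post[-1] - intensity_post[0], 0.1)
            (i0, a, k), _ = curve_fit(recovery, t_post, intensity_post, p0=p0)
            return {"i0": i0, "amplitude": a, "rate": k, "t_half": np.log(2) / k}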

  14. Comparison of animated jet stream visualizations

    NASA Astrophysics Data System (ADS)

    Nocke, Thomas; Hoffmann, Peter

    2016-04-01

    The visualization of 3D atmospheric phenomena in space and time is still a challenging problem. In particular, multiple solutions of animated jet stream visualizations have been produced in recent years, which were designed to visually analyze and communicate the jet and related impacts on weather circulation patterns and extreme weather events. This PICO integrates popular and new jet animation solutions and inter-compares them. The applied techniques (e.g. stream lines or line integral convolution) and parametrizations (color mapping, line lengths) are discussed with respect to visualization quality criteria and their suitability for certain visualization tasks (e.g. jet patterns and jet anomaly analysis, communicating its relevance for climate change).

  15. Encoder: A Connectionist Model of How Learning to Visually Encode Fixated Text Images Improves Reading Fluency

    ERIC Educational Resources Information Center

    Martin, Gale L.

    2004-01-01

    This article proposes that visual encoding learning improves reading fluency by widening the span over which letters are recognized from a fixated text image so that fewer fixations are needed to cover a text line. Encoder is a connectionist model that learns to convert images like the fixated text images human readers encode into the…

  16. Automatic lip reading by using multimodal visual features

    NASA Astrophysics Data System (ADS)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been researched for a long time, though it does not work well in noisy places such as in a car or on a train. In addition, people who are hearing-impaired or have difficulty hearing cannot receive the benefits of speech recognition. To recognize speech automatically, visual information is also important. People understand speech not only from audio information, but also from visual information such as temporal changes in the lip shape. A vision-based speech recognition method could work well in noisy places, and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method for recognizing speech by using multimodal visual information without using any audio information. First, the ASM (Active Shape Model) is used to track and detect the face and lip in a video sequence. Second, the shape, optical flow and spatial frequencies of the lip features are extracted from the lip detected by the ASM. Next, the extracted multimodal features are ordered chronologically so that a Support Vector Machine can be used to learn and classify the spoken words. Experiments on classifying several words show promising results for the proposed method.
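
    A Python sketch of the final classification stage described above, with chronologically ordered lip features flattened into one vector per utterance and classified by a Support Vector Machine (feature extraction via ASM and optical flow is assumed to be done elsewhere; the kernel and parameters are illustrative):

        # Word classification from chronologically ordered lip-feature sequences.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def train_word_classifier(sequences, labels):
            """sequences: list of (n_frames, n_features) arrays, all the same length;
            labels: spoken-word label per sequence."""
            X = np.stack([seq.ravel() for seq in sequences])     # preserve temporal order
            clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
            return clf.fit(X, labels)

        # word = train_word_classifier(train_seqs, train_words).predict(
        #     np.stack([test_seq.ravel()]))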

  17. Beauty and the beholder: the role of visual sensitivity in visual preference

    PubMed Central

    Spehar, Branka; Wong, Solomon; van de Klundert, Sarah; Lui, Jessie; Clifford, Colin W. G.; Taylor, Richard P.

    2015-01-01

    For centuries, the essence of aesthetic experience has remained one of the most intriguing mysteries for philosophers, artists, art historians and scientists alike. Recently, views emphasizing the link between aesthetics, perception and brain function have become increasingly prevalent (Ramachandran and Hirstein, 1999; Zeki, 1999; Livingstone, 2002; Ishizu and Zeki, 2013). The link between art and the fractal-like structure of natural images has also been highlighted (Spehar et al., 2003; Graham and Field, 2007; Graham and Redies, 2010). Motivated by these claims and our previous findings that humans display a consistent preference across various images with fractal-like statistics, here we explore the possibility that observers’ preference for visual patterns might be related to their sensitivity for such patterns. We measure sensitivity to simple visual patterns (sine-wave gratings varying in spatial frequency and random textures with varying scaling exponent) and find that they are highly correlated with visual preferences exhibited by the same observers. Although we do not attempt to offer a comprehensive neural model of aesthetic experience, we demonstrate a strong relationship between visual sensitivity and preference for simple visual patterns. Broadly speaking, our results support assertions that there is a close relationship between aesthetic experience and the sensory coding of natural stimuli. PMID:26441611

  18. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    PubMed Central

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  19. Immune functions of insect βGRPs and their potential application.

    PubMed

    Rao, Xiang-Jun; Zhan, Ming-Yue; Pan, Yue-Min; Liu, Su; Yang, Pei-Jin; Yang, Li-Ling; Yu, Xiao-Qiang

    2018-06-01

    Insects rely completely on the innate immune system to sense foreign bodies and to mount immune responses. Germ-line encoded pattern recognition receptors play crucial roles in recognizing pathogen-associated molecular patterns. Among them, β-1,3-glucan recognition proteins (βGRPs) and gram-negative bacteria-binding proteins (GNBPs) belong to the same pattern recognition receptor family, which can recognize β-1,3-glucans. Typical insect βGRPs are composed of an N-terminal tandem carbohydrate-binding module and a C-terminal glucanase-like domain. The former can recognize triple-helical β-1,3-glucans, whereas the latter, which normally lacks enzymatic activity, can recruit adapter proteins to initiate the protease cascade. Studies indicate that insect βGRPs have at least three types of functions. Firstly, some βGRPs cooperate with peptidoglycan recognition proteins to recognize the lysine-type peptidoglycans upstream of the Toll pathway. Secondly, some directly recognize fungal β-1,3-glucans to activate the Toll pathway and melanization. Thirdly, some form 'attack complexes' with other immune effectors to promote antifungal defenses. The current review will focus on the discovery of insect βGRPs, functions of some well-characterized members, structure-function studies and their potential application. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Fractal Analysis of Radiologists' Visual Scanning Pattern in Screening Mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alamudun, Folami T; Yoon, Hong-Jun; Hudson, Kathy

    2015-01-01

    Several investigators have studied radiologists' visual scanning patterns with respect to features such as total time examining a case, time to initially hit true lesions, number of hits, etc. The purpose of this study was to examine the complexity of radiologists' visual scanning patterns when viewing 4-view mammographic cases, as they typically do in clinical practice. Gaze data were collected from 10 readers (3 breast imaging experts and 7 radiology residents) while reviewing 100 screening mammograms (24 normal, 26 benign, 50 malignant). The radiologists' scanpaths across the 4 mammographic views were mapped to a single 2-D image plane. Then, fractal analysis was applied to the derived scanpaths using the box counting method. For each case, the complexity of each radiologist's scanpath was estimated using fractal dimension. The association between gaze complexity, case pathology, case density, and radiologist experience was evaluated using a 3-factor fixed-effects ANOVA. The ANOVA showed that case pathology, breast density, and experience level are all independent predictors of visual scanning pattern complexity. Visual scanning patterns are significantly different for benign and malignant cases than for normal cases, as well as when breast parenchyma density changes.
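
    For readers unfamiliar with the box counting method named in this record, the sketch below estimates the fractal dimension of a toy scanpath on a 2-D grid; it is a generic illustration, not the study's analysis code, and the grid size and box sizes are assumptions.

    ```python
    # Hypothetical sketch: box-counting estimate of the fractal dimension of a
    # scanpath (sequence of gaze points) mapped onto a single 2-D image plane.
    import numpy as np

    def box_count(points, size):
        """Number of boxes of the given size that contain at least one point."""
        return len(np.unique((points // size).astype(int), axis=0))

    def fractal_dimension(points, grid=1024):
        # Scale gaze coordinates into a fixed-resolution grid.
        pts = (points - points.min(axis=0)) / np.ptp(points, axis=0) * (grid - 1)
        sizes = 2 ** np.arange(1, 8)                    # box sizes: 2..128 pixels
        counts = [box_count(pts, s) for s in sizes]
        # Slope of log(count) versus log(1/size) is the box-counting dimension.
        slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
        return slope

    rng = np.random.default_rng(1)
    scanpath = np.cumsum(rng.normal(size=(500, 2)), axis=0)   # toy random-walk gaze
    print("estimated fractal dimension:", round(fractal_dimension(scanpath), 2))
    ```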

  1. Micro-Valences: Perceiving Affective Valence in Everyday Objects

    PubMed Central

    Lebrecht, Sophie; Bar, Moshe; Barrett, Lisa Feldman; Tarr, Michael J.

    2012-01-01

    Perceiving the affective valence of objects influences how we think about and react to the world around us. Conversely, the speed and quality with which we visually recognize objects in a visual scene can vary dramatically depending on that scene’s affective content. Although typical visual scenes contain mostly “everyday” objects, affect perception in visual objects has been studied using somewhat atypical stimuli with strong affective valences (e.g., guns or roses). Here we explore whether affective valence must be strong or overt to exert an effect on our visual perception. We conclude that everyday objects carry subtle affective valences – “micro-valences” – which are intrinsic to their perceptual representation. PMID:22529828

  2. Visualizing Oceans of Data: Using learning research to inform the design of student interfaces to climate data (Invited)

    NASA Astrophysics Data System (ADS)

    Krumhansl, R.; Peach, C. L.; Busey, A.; Foster, J.; Baker, I.

    2013-12-01

    To be climate literate, students must be data-literate. To connect with the evidence behind scientists' assertions about climate change, students (and other novices) must be able to distinguish long-term trends from short-term variability in graphs, recognize the distribution of sea surface temperature or precipitation changes on maps, and discern important patterns in animations that display changes in data over time. Although the development of cyberinfrastructure for accessing digital, sharable, near-real-time and archived earth systems data has the potential to transform how climate science is taught by connecting students directly with evidence to support their understanding, online interfaces to scientific data are typically industrial-strength - built by scientists for scientists - and their design can significantly impede broad use by novices. To inform efforts at bridging scientific data portals to the classroom, Education Development Center, Inc. (EDC) and the Scripps Institution of Oceanography conducted an NSF-funded 2-year interdisciplinary review of literature and expert opinion pertinent to making interfaces to large scientific databases accessible to and usable by student learners and their instructors. The >70 cross-cutting and specific guidelines in our project report are grounded in the fundamentals of Cognitive Load Theory, Visual Perception, Schema theory and Universal Design for Learning. The components of the human visual system and associated cognitive processes are highly specialized and have evolved in response to survival demands of the three-dimensional world humans have lived in for thousands of years. Because the use of two-dimensional representations, such as maps and graphs, and the use and navigation of Web interfaces has developed quite recently in human history, our visual perception system is not specifically adapted to these tasks. Therefore, it's critical to understand how to design two-dimensional media to take advantage of the strengths of our highly evolved and complex visual system and to compensate for its weaknesses. Looking at the design of data interfaces through this lens helps us understand, for example, why red stands out (finding ripe berries in a bush), why movement grabs our attention (hunting and avoiding predators), and why variations in light luminance and shading work better than variations in color hue for perceiving shape and form. This presentation will, through specific examples, explain how to avoid the pitfalls and make scientific databases more broadly accessible by: 1) adjusting the cognitive load imposed by the user interface and visualizations so that it doesn't exceed the amount of information the learner can actively process; 2) drawing attention to important features and patterns; and 3) enabling customization of visualizations and tools to meet the needs of diverse learners.

  3. Visualizing Oceans of Data: Using learning research to inform the design of student interfaces to climate data (Invited)

    NASA Astrophysics Data System (ADS)

    Krumhansl, R.; Peach, C. L.; Busey, A.; Foster, J.; Baker, I.

    2011-12-01

    To be climate literate, students must be data-literate. To connect with the evidence behind scientists' assertions about climate change, students (and other novices) must be able to distinguish long-term trends from short-term variability in graphs, recognize the distribution of sea surface temperature or precipitation changes on maps, and discern important patterns in animations that display changes in data over time. Although the development of cyberinfrastructure for accessing digital, sharable, near-real-time and archived earth systems data has the potential to transform how climate science is taught by connecting students directly with evidence to support their understanding, online interfaces to scientific data are typically industrial-strength - built by scientists for scientists - and their design can significantly impede broad use by novices. To inform efforts at bridging scientific data portals to the classroom, Education Development Center, Inc. (EDC) and the Scripps Institution of Oceanography conducted an NSF-funded 2-year interdisciplinary review of literature and expert opinion pertinent to making interfaces to large scientific databases accessible to and usable by student learners and their instructors. The >70 cross-cutting and specific guidelines in our project report are grounded in the fundamentals of Cognitive Load Theory, Visual Perception, Schema theory and Universal Design for Learning. The components of the human visual system and associated cognitive processes are highly specialized and have evolved in response to survival demands of the three-dimensional world humans have lived in for thousands of years. Because the use of two-dimensional representations, such as maps and graphs, and the use and navigation of Web interfaces has developed quite recently in human history, our visual perception system is not specifically adapted to these tasks. Therefore, it's critical to understand how to design two-dimensional media to take advantage of the strengths of our highly evolved and complex visual system and to compensate for its weaknesses. Looking at the design of data interfaces through this lens helps us understand, for example, why red stands out (finding ripe berries in a bush), why movement grabs our attention (hunting and avoiding predators), and why variations in light luminance and shading work better than variations in color hue for perceiving shape and form. This presentation will, through specific examples, explain how to avoid the pitfalls and make scientific databases more broadly accessible by: 1) adjusting the cognitive load imposed by the user interface and visualizations so that it doesn't exceed the amount of information the learner can actively process; 2) drawing attention to important features and patterns; and 3) enabling customization of visualizations and tools to meet the needs of diverse learners.

  4. Spatial Analysis of “Crazy Quilts”, a Class of Potentially Random Aesthetic Artefacts

    PubMed Central

    Westphal-Fitch, Gesche; Fitch, W. Tecumseh

    2013-01-01

    Human artefacts in general are highly structured and often display ordering principles such as translational, reflectional or rotational symmetry. In contrast, human artefacts that are intended to appear random and non symmetrical are very rare. Furthermore, many studies show that humans find it extremely difficult to recognize or reproduce truly random patterns or sequences. Here, we attempt to model two-dimensional decorative spatial patterns produced by humans that show no obvious order. “Crazy quilts” represent a historically important style of quilt making that became popular in the 1870s, and lasted about 50 years. Crazy quilts are unusual because unlike most human artefacts, they are specifically intended to appear haphazard and unstructured. We evaluate the degree to which this intention was achieved by using statistical techniques of spatial point pattern analysis to compare crazy quilts with regular quilts from the same region and era and to evaluate the fit of various random distributions to these two quilt classes. We found that the two quilt categories exhibit fundamentally different spatial characteristics: The patch areas of crazy quilts derive from a continuous random distribution, while area distributions of regular quilts consist of Gaussian mixtures. These Gaussian mixtures derive from regular pattern motifs that are repeated and we suggest that such a mixture is a distinctive signature of human-made visual patterns. In contrast, the distribution found in crazy quilts is shared with many other naturally occurring spatial patterns. Centroids of patches in the two quilt classes are spaced differently and in general, crazy quilts but not regular quilts are well-fitted by a random Strauss process. These results indicate that, within the constraints of the quilt format, Victorian quilters indeed achieved their goal of generating random structures. PMID:24066095
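
    To make the distributional contrast above concrete, the sketch below compares a Gaussian-mixture fit against a single continuous (exponential) fit for synthetic patch areas; the data and the specific model choices are assumptions for illustration, not the authors' analysis.

    ```python
    # Hypothetical sketch: regular-quilt patch areas cluster around repeated motif
    # sizes (well fit by a Gaussian mixture), while crazy-quilt areas follow one
    # continuous skewed distribution. Data here are synthetic.
    import numpy as np
    from scipy import stats
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    regular = np.concatenate([rng.normal(m, 0.5, 200) for m in (4.0, 8.0, 16.0)])
    crazy = rng.exponential(scale=8.0, size=600)

    def compare_fits(areas, label):
        gmm = GaussianMixture(n_components=3, random_state=0).fit(areas[:, None])
        loglik_gmm = gmm.score(areas[:, None]) * len(areas)
        loc, scale = stats.expon.fit(areas, floc=0)
        loglik_exp = stats.expon.logpdf(areas, loc, scale).sum()
        print(f"{label}: GMM loglik = {loglik_gmm:.0f}, exponential loglik = {loglik_exp:.0f}")

    compare_fits(regular, "regular-quilt areas")
    compare_fits(crazy, "crazy-quilt areas")
    ```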

  5. Spatial analysis of "crazy quilts", a class of potentially random aesthetic artefacts.

    PubMed

    Westphal-Fitch, Gesche; Fitch, W Tecumseh

    2013-01-01

    Human artefacts in general are highly structured and often display ordering principles such as translational, reflectional or rotational symmetry. In contrast, human artefacts that are intended to appear random and non symmetrical are very rare. Furthermore, many studies show that humans find it extremely difficult to recognize or reproduce truly random patterns or sequences. Here, we attempt to model two-dimensional decorative spatial patterns produced by humans that show no obvious order. "Crazy quilts" represent a historically important style of quilt making that became popular in the 1870s, and lasted about 50 years. Crazy quilts are unusual because unlike most human artefacts, they are specifically intended to appear haphazard and unstructured. We evaluate the degree to which this intention was achieved by using statistical techniques of spatial point pattern analysis to compare crazy quilts with regular quilts from the same region and era and to evaluate the fit of various random distributions to these two quilt classes. We found that the two quilt categories exhibit fundamentally different spatial characteristics: The patch areas of crazy quilts derive from a continuous random distribution, while area distributions of regular quilts consist of Gaussian mixtures. These Gaussian mixtures derive from regular pattern motifs that are repeated and we suggest that such a mixture is a distinctive signature of human-made visual patterns. In contrast, the distribution found in crazy quilts is shared with many other naturally occurring spatial patterns. Centroids of patches in the two quilt classes are spaced differently and in general, crazy quilts but not regular quilts are well-fitted by a random Strauss process. These results indicate that, within the constraints of the quilt format, Victorian quilters indeed achieved their goal of generating random structures.

  6. Amplifying the helicopter drift in a conformal HMD

    NASA Astrophysics Data System (ADS)

    Schmerwitz, Sven; Knabl, Patrizia M.; Lueken, Thomas; Doehler, Hans-Ullrich

    2016-05-01

    Helicopter operations require a well-controlled and minimal lateral drift shortly before ground contact. Any lateral speed exceeding this small threshold can cause a dangerous momentum around the roll axis, which may lead to a total rollover of the helicopter. As long as pilots can observe visual cues from the ground, they are able to easily control the helicopter drift. But whenever natural vision is reduced or even obscured, e.g. due to night, fog, or dust, this controllability diminishes. Therefore helicopter operators could benefit from some type of "drift indication" that mitigates the influence of a degraded visual environment. Generally, humans derive ego motion from the perceived environmental object flow. The visual cues perceived are located close to the helicopter; therefore, even small movements can be recognized. This fact was used to investigate a modified drift indication. To enhance the perception of ego motion in a conformal HMD symbol set, the measured movement was used to generate a pattern motion in the forward field of view close to or on the landing pad. The paper will discuss the method of amplified ego motion drift indication. Aspects concerning impact factors like visualization type, location, gain and more will be addressed. Further conclusions from previous studies, a high-fidelity experiment and a part-task experiment, will be provided. A part-task study will be presented that compared different amplified drift indications against a predictor. Twenty-four participants, 15 holding a fixed-wing license and 4 of them helicopter pilots, had to perform a dual task on a virtual reality headset. A simplified control model was used to steer a "helicopter" down to a landing pad while acknowledging randomly placed characters.

  7. Embodied learning of a generative neural model for biological motion perception and inference

    PubMed Central

    Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V.

    2015-01-01

    Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons. PMID:26217215

  8. Emerging Object Representations in the Visual System Predict Reaction Times for Categorization

    PubMed Central

    Ritchie, J. Brendan; Tovar, David A.; Carlson, Thomas A.

    2015-01-01

    Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, in high temporal resolution, the emerging representation of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behavior? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in an activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation for objects in the visual system is partially constitutive of the decision process in recognition. PMID:26107634
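
    A toy sketch of the distance-to-boundary analysis summarized above: a linear classifier is trained on simulated activation patterns, and the distance from its decision boundary is correlated with simulated reaction times. The data, classifier choice and correlation measure are illustrative assumptions, not the authors' pipeline.

    ```python
    # Hypothetical sketch of the "distance hypothesis": trials whose activation
    # patterns lie farther from the category boundary should be categorized faster.
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(3)
    n_trials, n_sensors = 400, 50
    category = rng.integers(0, 2, n_trials)              # two object categories
    patterns = rng.normal(size=(n_trials, n_sensors)) + category[:, None] * 0.8

    clf = LinearSVC(C=1.0, dual=False).fit(patterns, category)
    distance = np.abs(clf.decision_function(patterns))   # distance from the boundary

    # Simulated behaviour: larger distance -> faster (smaller) reaction time.
    rt = 600 - 40 * distance + rng.normal(0, 30, n_trials)
    rho, p = spearmanr(distance, rt)
    print(f"distance vs RT: Spearman rho = {rho:.2f}, p = {p:.3g}")
    ```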

  9. Embodied learning of a generative neural model for biological motion perception and inference.

    PubMed

    Schrodt, Fabian; Layher, Georg; Neumann, Heiko; Butz, Martin V

    2015-01-01

    Although an action observation network and mirror neurons for understanding the actions and intentions of others have been under deep, interdisciplinary consideration over recent years, it remains largely unknown how the brain manages to map visually perceived biological motion of others onto its own motor system. This paper shows how such a mapping may be established, even if the biological motion is visually perceived from a new vantage point. We introduce a learning artificial neural network model and evaluate it on full body motion tracking recordings. The model implements an embodied, predictive inference approach. It first learns to correlate and segment multimodal sensory streams of own bodily motion. In doing so, it becomes able to anticipate motion progression, to complete missing modal information, and to self-generate learned motion sequences. When biological motion of another person is observed, this self-knowledge is utilized to recognize similar motion patterns and predict their progress. Due to the relative encodings, the model shows strong robustness in recognition despite observing rather large varieties of body morphology and posture dynamics. By additionally equipping the model with the capability to rotate its visual frame of reference, it is able to deduce the visual perspective onto the observed person, establishing full consistency to the embodied self-motion encodings by means of active inference. In further support of its neuro-cognitive plausibility, we also model typical bistable perceptions when crucial depth information is missing. In sum, the introduced neural model proposes a solution to the problem of how the human brain may establish correspondence between observed bodily motion and its own motor system, thus offering a mechanism that supports the development of mirror neurons.

  10. Hemispheric specialization in quantification processes.

    PubMed

    Pasini, M; Tessari, A

    2001-01-01

    Three experiments were carried out to study hemispheric specialization for subitizing (the rapid enumeration of small patterns) and counting (the serial quantification process based on some formal principles). The experiments involved numerosity identification of dot patterns presented in one visual field (with a tachistoscopic technique, or with eye movements monitored through glasses) and comparison between centrally presented dot patterns and lateralized, tachistoscopically presented digits. Our experiments show a left visual field advantage in the identification and comparison tasks in the subitizing range, whereas a right visual field advantage was found in the comparison task for the counting range.

  11. Implications of Sustained and Transient Channels for Theories of Visual Pattern Masking, Saccadic Suppression, and Information Processing

    ERIC Educational Resources Information Center

    Breitmeyer, Bruno G.; Ganz, Leo

    1976-01-01

    This paper reviewed briefly the major types of masking effects obtained with various methods and the major theories or models that have been proposed to account for these effects, and outlined a three-mechanism model of visual pattern masking based on psychophysical and neurophysiological properties of the visual system. (Author/RK)

  12. Why Do We Move Our Eyes while Trying to Remember? The Relationship between Non-Visual Gaze Patterns and Memory

    ERIC Educational Resources Information Center

    Micic, Dragana; Ehrlichman, Howard; Chen, Rebecca

    2010-01-01

    Non-visual gaze patterns (NVGPs) involve saccades and fixations that spontaneously occur in cognitive activities that are not ostensibly visual. While reasons for their appearance remain obscure, convergent empirical evidence suggests that NVGPs change according to processing requirements of tasks. We examined NVGPs in tasks with long-term memory…

  13. Architecture of the parallel hierarchical network for fast image recognition

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid; Wójcik, Waldemar; Kokriatskaia, Natalia; Kutaev, Yuriy; Ivasyuk, Igor; Kotyra, Andrzej; Smailova, Saule

    2016-09-01

    Multistage integration of visual information in the brain allows humans to respond quickly to the most significant stimuli while maintaining their ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing includes the main types of cortical multistage convergence. The input images are mapped into a flexible hierarchy that reflects the complexity of the image data. Procedures for temporal image decomposition and hierarchy formation are described with mathematical expressions. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates structure at different hierarchical levels of the image. At each processing stage, a single output result is computed to allow a quick response from the system. The result is presented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match. The forecasting method works as follows: in the results-synchronization block, network-processed data arrive at the database, where a sample of the most correlated data is drawn using service parameters of the parallel-hierarchical network.
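
    The following is a highly simplified sketch of the coarse-to-fine idea described above: an image is decomposed into a pyramid, a small activity pattern is computed at each level, and recognition proceeds by closest match against stored patterns. The pyramid construction, per-level summaries and matching rule are assumptions, not the authors' architecture.

    ```python
    # Hypothetical sketch: multistage (coarse-to-fine) coding of an image into a
    # compact activity pattern, matched against stored patterns by closest match.
    import numpy as np

    def pyramid_code(image, levels=4):
        """Concatenate one small activity summary per pyramid level."""
        codes, img = [], image.astype(float)
        for _ in range(levels):
            gy, gx = np.gradient(img)
            # Per-level summary: intensity spread and local detail energy.
            codes.append([img.std(), np.hypot(gx, gy).mean()])
            h, w = img.shape[0] // 2, img.shape[1] // 2
            img = img[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))
        return np.array(codes).ravel()

    def closest_match(code, stored):
        """Index of the stored activity pattern nearest to `code`."""
        return int(np.argmin([np.linalg.norm(code - s) for s in stored]))

    rng = np.random.default_rng(4)
    x = np.linspace(0, 2 * np.pi, 64)
    xx, _ = np.meshgrid(x, x)
    templates = [np.sin(f * xx) for f in (2, 6, 14)]      # gratings of 3 frequencies
    stored = [pyramid_code(t) for t in templates]

    probe = templates[1] + rng.normal(0, 0.05, (64, 64))  # noisy view of image 1
    print("closest match:", closest_match(pyramid_code(probe), stored))
    ```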

  14. Preliminary study of first motion from nuclear explosions recorded on seismograms in the first zone

    USGS Publications Warehouse

    Healy, J.H.; Mangan, G.B.

    1963-01-01

    The U.S. Geological Survey has recorded more than 300 seismograms from more than 50 underground nuclear explosions. Most were recorded at distances of less than 1,000 km. These seismograms have been studied to obtain travel times and amplitudes which have been presented in reports on crustal structure and in a new series of nuclear shot reports. This report describes preliminary studies of first motion of seismic waves generated by underground nuclear explosions. Visual inspection of all seismograms was made in an attempt to identify the direction of first motion, and to estimate the probability of recording detectable first motion at various distances for various charge sizes and in different geologic environments. In this study, a characteristic pattern of the first phase became apparent on seismograms where first motion was clearly recorded. When an interpreter became familiar with this pattern, he was frequently able to identify the polarity of the first arrival even though the direction of first motion could not be seen clearly on the seismogram. In addition, it was sometimes possible to recognize this pattern for secondary arrivals of larger amplitude. These qualitative visual observations suggest that it might be possible to define a simple criterion that could be used in a digital computer to identify polarity, not only of the first phase, but of secondary phases as well. A short segment of recordings near the first motion on 56 seismograms was digitized on an optical digitizer. Spectral analyses of these digitized recordings were made to determine the range of frequencies present, and studies were made with various simple digital filters to explore the nature of polarity as a function of frequency. These studies have not yet led to conclusive results, partly because of inaccuracies resulting from optical digitization. The work is continuing, using an electronic digitizer that will allow study of a much larger sample of more accurately digitized data.
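
    The abstract above suggests that a simple, computable polarity criterion might exist. Purely as an illustration of that idea (not the USGS procedure), the sketch below smooths a digitized trace and reads the sign of the first excursion that clearly exceeds the pre-arrival noise level; the thresholds and the synthetic trace are assumptions.

    ```python
    # Hypothetical sketch: a simple first-motion polarity criterion for a digitized
    # seismogram segment.
    import numpy as np

    def first_motion_polarity(trace, noise_samples=100, k=4.0, smooth=5):
        """Return +1 (up), -1 (down), or 0 if no clear first motion is found."""
        kernel = np.ones(smooth) / smooth
        smoothed = np.convolve(trace, kernel, mode="same")   # crude low-pass filter
        noise = smoothed[:noise_samples]
        threshold = k * noise.std()
        for sample in smoothed[noise_samples:]:
            if abs(sample - noise.mean()) > threshold:
                return int(np.sign(sample - noise.mean()))
        return 0

    # Toy trace: pre-arrival noise followed by a downward first break at sample 150.
    rng = np.random.default_rng(5)
    t = np.arange(400)
    trace = rng.normal(0, 0.1, t.size)
    trace[150:] -= np.sin(0.2 * (t[150:] - 150)) * np.exp(-0.01 * (t[150:] - 150))
    print("first-motion polarity:", first_motion_polarity(trace))
    ```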

  15. A probabilistic model for analysing the effect of performance levels on visual behaviour patterns of young sailors in simulated navigation.

    PubMed

    Manzanares, Aarón; Menayo, Ruperto; Segado, Francisco; Salmerón, Diego; Cano, Juan Antonio

    2015-01-01

    Visual behaviour is a determining factor in sailing due to the influence of environmental conditions. The aim of this research was to determine the visual behaviour pattern of sailors with different practice time in one star race, applying a probabilistic model based on Markov chains. The sample consisted of 20 sailors, distributed in two groups, top ranking (n = 10) and bottom ranking (n = 10), all of whom competed in the Optimist Class. An automated measurement system, which integrates the VSail-Trainer sail simulator and the Eye Tracking System(TM), was used. The variables under consideration were the sequence of fixations and the fixation recurrence time on each location. The event consisted of a simulated regatta start, with stable wind, competitor and sea conditions. Results show that top ranking sailors have low recurrence times on relevant locations and higher recurrence times on irrelevant locations, while bottom ranking sailors have low recurrence times at most locations. The visual pattern of the bottom ranking sailors is focused around two visual pivots, which is not the case in the top ranking sailors' pattern. In conclusion, the Markov chain analysis made it possible to characterize and compare the visual behaviour patterns of the top and bottom ranking sailors.
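
    As a minimal sketch of the Markov chain treatment of fixation sequences mentioned above, the code below estimates a transition matrix between areas of interest from an observed sequence of fixation locations; the location labels and the sequence itself are invented for illustration.

    ```python
    # Hypothetical sketch: first-order Markov model of a fixation sequence over
    # areas of interest (AOIs), estimated from observed transition counts.
    import numpy as np

    aois = ["sail", "bow", "rivals", "start_line", "wind_indicator"]
    rng = np.random.default_rng(6)
    sequence = rng.integers(0, len(aois), size=300)   # invented fixation sequence

    def transition_matrix(seq, n_states):
        counts = np.zeros((n_states, n_states))
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
        # Row-normalize counts into transition probabilities (small prior keeps
        # rows with no observations well defined).
        counts += 1e-9
        return counts / counts.sum(axis=1, keepdims=True)

    P = transition_matrix(sequence, len(aois))
    print("P(next fixation on start_line | currently on rivals) =",
          round(P[aois.index("rivals"), aois.index("start_line")], 3))
    ```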

  16. Zif268 mRNA Expression Patterns Reveal a Distinct Impact of Early Pattern Vision Deprivation on the Development of Primary Visual Cortical Areas in the Cat.

    PubMed

    Laskowska-Macios, Karolina; Zapasnik, Monika; Hu, Tjing-Tjing; Kossut, Malgorzata; Arckens, Lutgarde; Burnat, Kalina

    2015-10-01

    Pattern vision deprivation (BD) can induce permanent deficits in global motion perception. The impact of timing and duration of BD on the maturation of the central and peripheral visual field representations in cat primary visual areas 17 and 18 remains unknown. We compared early BD, from eye opening for 2, 4, or 6 months, with late onset BD, after 2 months of normal vision, using the expression pattern of the visually driven activity reporter gene zif268 as readout. Decreasing zif268 mRNA levels between months 2 and 4 characterized the normal maturation of the (supra)granular layers of the central and peripheral visual field representations in areas 17 and 18. In general, all BD conditions had higher than normal zif268 levels. In area 17, early BD induced a delayed decrease, beginning later in peripheral than in central area 17. In contrast, the decrease occurred between months 2 and 4 throughout area 18. Lack of pattern vision stimulation during the first 4 months of life therefore has a different impact on the development of areas 17 and 18. A high zif268 expression level at a time when normal vision is restored seems to predict the capacity of a visual area to compensate for BD. © The Author 2014. Published by Oxford University Press.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoon, Hong-Jun; Carmichael, Tandy; Tourassi, Georgia

    Previously, we have shown the potential of using an individual's visual search pattern as a possible biometric. That study focused on viewing images displaying dot-patterns with different spatial relationships to determine which pattern can be more effective in establishing the identity of an individual. In this follow-up study we investigated the temporal stability of this biometric. We performed an experiment with 16 individuals asked to search for a predetermined feature of a random-dot pattern as we tracked their eye movements. Each participant completed four testing sessions consisting of two dot patterns repeated twice. One dot pattern displayed concentric circles shifted to the left or right side of the screen overlaid with visual noise, and participants were asked which side the circles were centered on. The second dot-pattern displayed a number of circles (between 0 and 4) scattered on the screen overlaid with visual noise, and participants were asked how many circles they could identify. Each session contained 5 untracked tutorial questions and 50 tracked test questions (200 total tracked questions per participant). To create each participant's "fingerprint", we constructed a Hidden Markov Model (HMM) from the gaze data representing the underlying visual search and cognitive process. The accuracy of the derived HMM models was evaluated using cross-validation for various time-dependent train-test conditions. Subject identification accuracy ranged from 17.6% to 41.8% for all conditions, which is significantly higher than random guessing (1/16 = 6.25%). The results suggest that visual search pattern is a promising, fairly stable personalized fingerprint of perceptual organization.
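
    A schematic sketch of the identification idea described in this record: fit one Gaussian HMM per participant on their gaze data and attribute a new recording to the participant whose model gives it the highest log-likelihood. The use of the third-party hmmlearn package, the feature layout and the synthetic gaze data are assumptions for illustration.

    ```python
    # Hypothetical sketch: one Gaussian HMM per participant as a gaze "fingerprint".
    import numpy as np
    from hmmlearn import hmm   # third-party package, assumed available

    rng = np.random.default_rng(7)
    n_participants, n_sessions, seq_len = 4, 3, 200

    def fake_gaze(pid, length):
        """Toy 2-D gaze samples with a participant-specific bias and spread."""
        bias = np.array([pid * 50.0, pid * 30.0])
        return rng.normal(loc=bias, scale=20.0 + 5.0 * pid, size=(length, 2))

    # Train one model per participant on one session of gaze data.
    models = []
    for pid in range(n_participants):
        m = hmm.GaussianHMM(n_components=3, covariance_type="diag", random_state=0)
        m.fit(fake_gaze(pid, seq_len))
        models.append(m)

    # Identify held-out sessions by maximum log-likelihood.
    correct = 0
    for pid in range(n_participants):
        for _ in range(n_sessions):
            test = fake_gaze(pid, seq_len)
            correct += int(np.argmax([m.score(test) for m in models])) == pid
    print("identification accuracy:", correct / (n_participants * n_sessions))
    ```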

  18. How do expert soccer players encode visual information to make decisions in simulated game situations?

    PubMed

    Poplu, Gérald; Ripoll, Hubert; Mavromatis, Sébastien; Baratgin, Jean

    2008-09-01

    The aim of this study was to determine what visual information expert soccer players encode when they are asked to make a decision. We used a repetition-priming paradigm to test the hypothesis that experts encode a soccer pattern's structure independently of the players' physical characteristics (i.e., posture and morphology). The participants were given either realistic (digital photos) or abstract (three-dimensional schematic representations) soccer game patterns. The results showed that the experts benefited from priming effects regardless of how abstract the stimuli were. This suggests that an abstract representation of a realistic pattern (i.e., one that does not include visual information related to the players' physical characteristics) is sufficient to activate experts' specific knowledge during decision making. These results seem to show that expert soccer players encode and store abstract representations of visual patterns in memory.

  19. A description of discrete internal representation schemes for visual pattern discrimination.

    PubMed

    Foster, D H

    1980-01-01

    A general description of a class of schemes for pattern vision is outlined in which the visual system is assumed to form a discrete internal representation of the stimulus. These representations are discrete in that they are considered to comprise finite combinations of "components" which are selected from a fixed and finite repertoire, and which designate certain simple pattern properties or features. In the proposed description it is supposed that the construction of an internal representation is a probabilistic process. A relationship is then formulated associating the probability density functions governing this construction and performance in visually discriminating patterns when differences in pattern shape are small. Some questions related to the application of this relationship to the experimental investigation of discrete internal representations are briefly discussed.

  20. Comparison of productive house spatial planning in Kampung Batik - Central Java object of observation: Pekalongan and Lasem

    NASA Astrophysics Data System (ADS)

    Kridarso, E. R.

    2018-01-01

    A home is a basic human need, alongside clothing and food. As one of these basic needs, it has a variety of functions: it is a place where occupants protect and develop themselves, and it is also an asset with economic and non-economic value. A house with economic value can be used as capital to earn a living by devoting part of its rooms to working space; such a house is called a productive house. Batik products are the focus of this observation, given that batik is a uniquely Indonesian cultural richness that has been recognized internationally. Pekalongan and Lasem are coastal cities located on the north coast of Java Island, and both have become benchmarks for batik produced in the coastal area. The Kampung Batik in Pekalongan and Lasem were the locations used as objects of observation for comparing the layout patterns of productive houses, using a qualitative method. The data were obtained from primary and secondary sources, in the form of visual recordings, maps and sketches of the productive layout patterns of the batik houses. The comparison shows many similarities in the productive layout patterns of the batik houses in Pekalongan and Lasem; differences exist in the current occupants. The similarities are due to the activities undertaken, and the differences are due to the culture that has developed at each location of observation.

  1. A Comparison of the Visual Attention Patterns of People With Aphasia and Adults Without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes.

    PubMed

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-04-01

    The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than camera-engaged scenes; however, the differences in their responses to these scenes were not as pronounced as those observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.

  2. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  3. Halftone visual cryptography.

    PubMed

    Zhou, Zhi; Arce, Gonzalo R; Di Crescenzo, Giovanni

    2006-08-01

    Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning and hinder the objectives of visual cryptography. Extended visual cryptography [1] was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on the blue-noise dithering principles, the proposed method utilizes the void and cluster algorithm [2] to encode a secret binary image into n halftone shares (images) carrying significant visual information. The simulation shows that the visual quality of the obtained halftone shares is observably better than that attained by any available visual cryptography method known to date.
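
    For orientation, the sketch below implements basic (2, 2) visual cryptography with random, meaningless shares and OR-superposition; the halftone method above builds on this by embedding the shares in meaningful halftone images, which is not reproduced here.

    ```python
    # Hypothetical sketch of (2, 2) visual cryptography with 2x2 subpixel expansion:
    # white secret pixels get identical subpixel patterns in both shares, black
    # pixels get complementary patterns, so stacking reveals the secret.
    import numpy as np

    rng = np.random.default_rng(8)

    def make_shares(secret):
        """secret: binary array with 1 = black. Returns two expanded shares."""
        patterns = np.array([[1, 0, 0, 1], [0, 1, 1, 0]])   # complementary 2x2 codes
        h, w = secret.shape
        s1 = np.zeros((2 * h, 2 * w), dtype=int)
        s2 = np.zeros_like(s1)
        for i in range(h):
            for j in range(w):
                p = patterns[rng.integers(2)]
                q = p if secret[i, j] == 0 else 1 - p
                s1[2*i:2*i+2, 2*j:2*j+2] = p.reshape(2, 2)
                s2[2*i:2*i+2, 2*j:2*j+2] = q.reshape(2, 2)
        return s1, s2

    secret = (rng.random((8, 8)) > 0.5).astype(int)
    share1, share2 = make_shares(secret)
    stacked = share1 | share2                    # superimposing the transparencies
    # Black secret pixels become fully black 2x2 blocks; white ones stay half black.
    darkness = stacked.reshape(8, 2, 8, 2).sum(axis=(1, 3))
    print("decoded correctly:", np.array_equal((darkness == 4).astype(int), secret))
    ```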

  4. Phonological Concept Learning.

    PubMed

    Moreton, Elliott; Pater, Joe; Pertsova, Katya

    2017-01-01

    Linguistic and non-linguistic pattern learning have been studied separately, but we argue for a comparative approach. Analogous inductive problems arise in phonological and visual pattern learning. Evidence from three experiments shows that human learners can solve them in analogous ways, and that human performance in both cases can be captured by the same models. We test GMECCS (Gradual Maximum Entropy with a Conjunctive Constraint Schema), an implementation of the Configural Cue Model (Gluck & Bower, ) in a Maximum Entropy phonotactic-learning framework (Goldwater & Johnson, ; Hayes & Wilson, ) with a single free parameter, against the alternative hypothesis that learners seek featurally simple algebraic rules ("rule-seeking"). We study the full typology of patterns introduced by Shepard, Hovland, and Jenkins () ("SHJ"), instantiated as both phonotactic patterns and visual analogs, using unsupervised training. Unlike SHJ, Experiments 1 and 2 found that both phonotactic and visual patterns that depended on fewer features could be more difficult than those that depended on more features, as predicted by GMECCS but not by rule-seeking. GMECCS also correctly predicted performance differences between stimulus subclasses within each pattern. A third experiment tried supervised training (which can facilitate rule-seeking in visual learning) to elicit simple rule-seeking phonotactic learning, but cue-based behavior persisted. We conclude that similar cue-based cognitive processes are available for phonological and visual concept learning, and hence that studying either kind of learning can lead to significant insights about the other. Copyright © 2015 Cognitive Science Society, Inc.

  5. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067
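
    As a rough illustration of detecting correspondence from shared temporal structure rather than exact synchrony, the sketch below cross-correlates two binary event streams over a range of lags; the sampling rate, lag range and streams are assumptions, not the study's stimuli or analysis.

    ```python
    # Hypothetical sketch: an audio stream matches a visual stream if their event
    # structure correlates strongly at some lag within ~200 ms.
    import numpy as np

    rng = np.random.default_rng(10)
    fs, n = 100, 500                               # assumed 100 Hz sampling (10 ms bins)

    audio = (rng.random(n) < 0.15).astype(float)   # stochastic event stream
    visual_match = np.roll(audio, 12)              # same structure, 120 ms lag
    visual_nonmatch = (rng.random(n) < 0.15).astype(float)

    def best_lag_correlation(a, v, max_lag_ms=200):
        max_lag = int(max_lag_ms * fs / 1000)
        corrs = [np.corrcoef(a[:n - lag], v[lag:])[0, 1] for lag in range(max_lag + 1)]
        return max(corrs), int(np.argmax(corrs))

    for name, v in [("matching", visual_match), ("non-matching", visual_nonmatch)]:
        r, lag = best_lag_correlation(audio, v)
        print(f"{name} stream: peak r = {r:.2f} at lag {lag * 1000 // fs} ms")
    ```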

  6. An evaluation-guided approach for effective data visualization on tablets

    NASA Astrophysics Data System (ADS)

    Games, Peter S.; Joshi, Alark

    2015-01-01

    There is a rising trend of data analysis and visualization tasks being performed on a tablet device. Apps with interactive data visualization capabilities are available for a wide variety of domains. We investigate whether users grasp how to effectively interpret and interact with visualizations. We conducted a detailed user evaluation to study the abilities of individuals with respect to analyzing data on a tablet through an interactive visualization app. Based upon the results of the user evaluation, we find that most subjects performed well at understanding and interacting with simple visualizations, specifically tables and line charts. A majority of the subjects struggled with identifying interactive widgets, recognizing interactive widgets with overloaded functionality, and understanding visualizations which do not display data for sorted attributes. Based on our study, we identify guidelines for designers and developers of mobile data visualization apps that include recommendations for effective data representation and interaction.

  7. Con-Text: Text Detection for Fine-grained Object Classification.

    PubMed

    Karaoglu, Sezer; Tao, Ran; van Gemert, Jan C; Gevers, Theo

    2017-05-24

    This work focuses on fine-grained object classification using recognized scene text in natural images. While the state-of-the-art relies on visual cues only, this paper is the first to propose combining textual and visual cues. Another novelty is the textual cue extraction. Unlike the state-of-the-art text detection methods, we focus more on the background instead of text regions. Once text regions are detected, they are further processed by two methods to perform text recognition: the ABBYY commercial OCR engine and a state-of-the-art character recognition algorithm. Then, to perform textual cue encoding, bi- and trigrams are formed between the recognized characters by considering the proposed spatial pairwise constraints. Finally, extracted visual and textual cues are combined for fine-grained classification. The proposed method is validated on four publicly available datasets: ICDAR03, ICDAR13, Con-Text and Flickr-logo. We improve the state-of-the-art end-to-end character recognition by a large margin of 15% on ICDAR03. We show that textual cues are useful in addition to visual cues for fine-grained classification. We show that textual cues are also useful for logo retrieval. Adding textual cues outperforms visual-only and textual-only approaches in fine-grained classification (70.7% vs. 60.3%) and logo retrieval (57.4% vs. 54.8%).
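
    A simplified sketch of the textual-cue encoding step described above: bigrams are formed between recognized characters only when they satisfy a spatial pairwise constraint (here, left-to-right order, a small horizontal gap and roughly the same text line); the constraint values and data structures are assumptions, not the paper's exact rules.

    ```python
    # Hypothetical sketch: bag of spatially constrained character bigrams.
    from collections import Counter

    # Each recognized character: (char, x_center, y_center); values are invented.
    detections = [("c", 10, 52), ("a", 22, 50), ("f", 35, 51), ("e", 47, 50),
                  ("x", 300, 240)]                  # an isolated false detection

    def spatial_bigrams(chars, max_dx=20, max_dy=8):
        bigrams = Counter()
        for i, (ch1, x1, y1) in enumerate(chars):
            for j, (ch2, x2, y2) in enumerate(chars):
                if i == j:
                    continue
                # Pairwise constraint: left-to-right order, small gap, same line.
                if 0 < x2 - x1 <= max_dx and abs(y2 - y1) <= max_dy:
                    bigrams[ch1 + ch2] += 1
        return bigrams

    print(spatial_bigrams(detections))   # Counter({'ca': 1, 'af': 1, 'fe': 1})
    ```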

  8. Visual working memory for global, object, and part-based information.

    PubMed

    Patterson, Michael D; Bly, Benjamin Martin; Porcelli, Anthony J; Rypma, Bart

    2007-06-01

    We investigated visual working memory for novel objects and parts of novel objects. After a delay period, participants showed strikingly more accurate performance recognizing a single whole object than the parts of that object. This bias to remember whole objects, rather than parts, persisted even when the division between parts was clearly defined and the parts were disconnected from each other so that, in order to remember the single whole object, the participants needed to mentally combine the parts. In addition, the bias was confirmed when the parts were divided by color. These experiments indicated that holistic perceptual-grouping biases are automatically used to organize storage in visual working memory. In addition, our results suggested that the bias was impervious to top-down consciously directed control, because when task demands were manipulated through instruction and catch trials, the participants still recognized whole objects more quickly and more accurately than their parts. This bias persisted even when the whole objects were novel and the parts were familiar. We propose that visual working memory representations depend primarily on the global configural properties of whole objects, rather than part-based representations, even when the parts themselves can be clearly perceived as individual objects. This global configural bias beneficially reduces memory load on a capacity-limited system operating in a complex visual environment, because fewer distinct items must be remembered.

  9. Automated Steering Control Design by Visual Feedback Approach —System Identification and Control Experiments with a Radio-Controlled Car—

    NASA Astrophysics Data System (ADS)

    Fujiwara, Yukihiro; Yoshii, Masakazu; Arai, Yasuhito; Adachi, Shuichi

    An advanced safety vehicle (ASV) assists the driver's manipulation to avoid traffic accidents. A variety of research on automatic driving systems is necessary as an element of ASV. Among these, we focus on the visual feedback approach, in which the automatic driving system is realized by recognizing the road trajectory using image information. The purpose of this paper is to examine the validity of this approach through experiments using a radio-controlled car. First, a practical image processing algorithm to recognize white lines on the road is proposed. Second, a model of the radio-controlled car is built by system identification experiments. Third, an automatic steering control system is designed based on H∞ control theory. Finally, the effectiveness of the designed control system is examined via traveling experiments.
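
    As a rough, hypothetical stand-in for the pipeline above (the paper itself uses a system-identified vehicle model and an H∞ controller), the sketch below estimates the lateral offset of white lane lines by thresholding and applies a simple proportional steering law.

    ```python
    # Hypothetical sketch: threshold-based white-line offset estimate plus a
    # proportional steering command. This is a simplification of the approach above.
    import numpy as np

    def lane_offset(image, threshold=200):
        """Signed offset (pixels) of the lane centre from the image centre."""
        band = image[-40:, :]                                # bottom of the frame
        cols = np.where((band > threshold).any(axis=0))[0]   # columns with white pixels
        if cols.size == 0:
            return 0.0
        lane_centre = 0.5 * (cols.min() + cols.max())
        return lane_centre - image.shape[1] / 2.0

    def steering_command(offset, k_p=0.01, max_angle=0.5):
        """Proportional steering (radians), limited to the actuator range."""
        return float(np.clip(-k_p * offset, -max_angle, max_angle))

    # Toy frame: two bright lane lines whose centre sits 25 px right of the image centre.
    frame = np.zeros((120, 160), dtype=np.uint8)
    frame[:, 60] = 255
    frame[:, 150] = 255
    off = lane_offset(frame)
    print(f"offset = {off:+.1f} px, steering = {steering_command(off):+.3f} rad")
    ```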

  10. Spatial and temporal coherence in perceptual binding

    PubMed Central

    Blake, Randolph; Yang, Yuede

    1997-01-01

    Component visual features of objects are registered by distributed patterns of activity among neurons comprising multiple pathways and visual areas. How these distributed patterns of activity give rise to unified representations of objects remains unresolved, although one recent, controversial view posits temporal coherence of neural activity as a binding agent. Motivated by the possible role of temporal coherence in feature binding, we devised a novel psychophysical task that requires the detection of temporal coherence among features comprising complex visual images. Results show that human observers can more easily detect synchronized patterns of temporal contrast modulation within hybrid visual images composed of two components when those components are drawn from the same original picture. Evidently, time-varying changes within spatially coherent features produce more salient neural signals. PMID:9192701

  11. The barista on the bus: cellular and synaptic mechanisms for visual recognition memory.

    PubMed

    Barth, Alison L; Wheeler, Mark E

    2008-04-24

    Our ability to recognize that something is familiar, often referred to as visual recognition memory, has been correlated with a reduction in neural activity in the perirhinal cortex. In this issue of Neuron, Griffiths et al. now provide evidence that this form of memory requires AMPA receptor endocytosis and long-term depression of excitatory synapses in this brain area.

  12. When Seeing Depends on Knowing: Adults with Autism Spectrum Conditions Show Diminished Top-Down Processes in the Visual Perception of Degraded Faces but Not Degraded Objects

    ERIC Educational Resources Information Center

    Loth, Eva; Gomez, Juan Carlos; Happe, Francesca

    2010-01-01

    Behavioural, neuroimaging and neurophysiological approaches emphasise the active and constructive nature of visual perception, determined not solely by the environmental input, but modulated top-down by prior knowledge. For example, degraded images, which at first appear as meaningless "blobs", can easily be recognized as, say, a face, after…

  13. Eye Movement Correlates of Expertise in Visual Arts.

    PubMed

    Francuz, Piotr; Zaniewski, Iwo; Augustynowicz, Paweł; Kopiś, Natalia; Jankowski, Tomasz

    2018-01-01

    The aim of this study was to search for oculomotor correlates of expertise in visual arts, in particular with regard to paintings. Achieving this goal was possible by gathering data on eye movements of two groups of participants: experts and non-experts in visual arts who viewed and appreciated the aesthetics of paintings. In particular, we were interested in whether visual arts experts more accurately recognize a balanced composition in one of the two paintings being compared simultaneously, and whether people who correctly recognize harmonious paintings are characterized by a different visual scanning strategy than those who do not recognize them. For the purposes of this study, 25 paintings with an almost ideal balanced composition have been chosen. Some of these paintings are masterpieces of the world cultural heritage, and some of them are unknown. Using Photoshop, the artist developed three additional versions of each of these paintings, differing from the original in the degree of destruction of its harmonious composition: slight, moderate, or significant. The task of the participants was to look at all versions of the same painting in pairs (including the original) and decide which of them looked more pleasing. The study involved 23 experts in art, students of art history, art education or the Academy of Fine Arts, and 19 non-experts, students in the social sciences and the humanities. The experimental manipulation of comparing pairs of paintings, whose composition is at different levels of harmony, has proved to be an effective tool for differentiating people according to their ability to distinguish paintings with a balanced composition from those with an unbalanced one. It turned out that this ability only partly coincides with expertise understood as the effect of education in the field of visual arts. We also found that the eye movements of people who more accurately appreciated paintings with balanced composition differ from those of people who preferred the altered versions in terms of dwell time, first and average fixation duration, and number of fixations. The familiarity of the paintings turned out to be a factor that significantly affects both the aesthetic evaluation of paintings and eye movements.

  14. Eye Movement Correlates of Expertise in Visual Arts

    PubMed Central

    Francuz, Piotr; Zaniewski, Iwo; Augustynowicz, Paweł; Kopiś, Natalia; Jankowski, Tomasz

    2018-01-01

    The aim of this study was to search for oculomotor correlates of expertise in visual arts, in particular with regard to paintings. Achieving this goal was possible by gathering data on eye movements of two groups of participants: experts and non-experts in visual arts who viewed and appreciated the aesthetics of paintings. In particular, we were interested in whether visual arts experts more accurately recognize a balanced composition in one of the two paintings being compared simultaneously, and whether people who correctly recognize harmonious paintings are characterized by a different visual scanning strategy than those who do not recognize them. For the purposes of this study, 25 paintings with an almost ideal balanced composition have been chosen. Some of these paintings are masterpieces of the world cultural heritage, and some of them are unknown. Using Photoshop, the artist developed three additional versions of each of these paintings, differing from the original in the degree of destruction of its harmonious composition: slight, moderate, or significant. The task of the participants was to look at all versions of the same painting in pairs (including the original) and decide which of them looked more pleasing. The study involved 23 experts in art, students of art history, art education or the Academy of Fine Arts, and 19 non-experts, students in the social sciences and the humanities. The experimental manipulation of comparing pairs of paintings, whose composition is at different levels of harmony, has proved to be an effective tool for differentiating people according to their ability to distinguish paintings with a balanced composition from those with an unbalanced one. It turned out that this ability only partly coincides with expertise understood as the effect of education in the field of visual arts. We also found that the eye movements of people who more accurately appreciated paintings with balanced composition differ from those of people who preferred the altered versions in terms of dwell time, first and average fixation duration, and number of fixations. The familiarity of the paintings turned out to be a factor that significantly affects both the aesthetic evaluation of paintings and eye movements. PMID:29632478

  15. Covisualization by computational optical-sectioning microscopy of integrin and associated proteins at the cell membrane of living onion protoplasts

    NASA Technical Reports Server (NTRS)

    Gens, J. S.; Reuzeau, C.; Doolittle, K. W.; McNally, J. G.; Pickard, B. G.; Evans, M. L. (Principal Investigator)

    1996-01-01

    Using higher-resolution wide-field computational optical-sectioning fluorescence microscopy, the distribution of antigens recognized by antibodies against animal beta 1 integrin, fibronectin, and vitronectin has been visualized at the outer surface of enzymatically protoplasted onion epidermis cells and in depectinated cell wall fragments. On the protoplast all three antigens are colocalized in an array of small spots, as seen in raw images, in Gaussian filtered images, and in images restored by two different algorithms. Fibronectin and vitronectin but not beta 1 integrin antigenicities colocalize as puncta in comparably prepared and processed images of the wall fragments. Several control visualizations suggest considerable specificity of antibody recognition. Affinity purification of onion cell extract with the same anti-integrin used for visualization has yielded protein that separates in SDS-PAGE into two bands of about 105-110 and 115-125 kDa. These bands are again recognized by the visualization antibody, which was raised against the extracellular domain of chicken beta 1 integrin, and are also recognized by an antibody against the intracellular domain of chicken beta 1 integrin. Because beta 1 integrin is a key protein in numerous animal adhesion sites, it appears that the punctate distribution of this protein in the cell membranes of onion epidermis represents the adhesion sites long known to occur in cells of this tissue. Because vitronectin and fibronectin are matrix proteins that bind to integrin in animals, the punctate occurrence of antigenically similar proteins both in the wall (matrix) and on enzymatically prepared protoplasts reinforces the concept that onion cells have adhesion sites with some similarity to certain kinds of adhesion sites in animals.

  16. Gestalt Effects in Visual Working Memory.

    PubMed

    Kałamała, Patrycja; Sadowska, Aleksandra; Ordziniak, Wawrzyniec; Chuderski, Adam

    2017-01-01

    Four experiments investigated whether conforming to Gestalt principles, well known to drive visual perception, also facilitates the active maintenance of information in visual working memory (VWM). We used the change detection task, which required the memorization of visual patterns composed of several shapes. We observed no effects of symmetry of visual patterns on VWM performance. However, there was a moderate positive effect when a particular shape that was probed matched the shape of the whole pattern (the whole-part similarity effect). Data support the models assuming that VWM encodes not only particular objects of the perceptual scene but also the spatial relations between them (the ensemble representation). The ensemble representation may prime objects similar to its shape and thereby boost access to them. In contrast, the null effect of symmetry relates to the fact that this very feature of an ensemble does not yield any useful additional information for VWM.

  17. Allothetic and idiothetic sensor fusion in rat-inspired robot localization

    NASA Astrophysics Data System (ADS)

    Weitzenfeld, Alfredo; Fellous, Jean-Marc; Barrera, Alejandra; Tejera, Gonzalo

    2012-06-01

    We describe a spatial cognition model based on the rat's brain neurophysiology as a basis for new robotic navigation architectures. The model integrates allothetic (external visual landmarks) and idiothetic (internal kinesthetic information) cues to train either rat or robot to learn a path enabling it to reach a goal from multiple starting positions. It stands in contrast to most robotic architectures based on SLAM, where a map of the environment is built to provide probabilistic localization information computed from robot odometry and landmark perception. Allothetic cues suffer in general from perceptual ambiguity when trying to distinguish between places with equivalent visual patterns, while idiothetic cues suffer from imprecise motions and limited memory recalls. We experiment with both types of cues in different maze configurations by training rats and robots to find the goal starting from a fixed location, and then testing them to reach the same target from new starting locations. We show that the robot, after having pre-explored a maze, can find a goal with improved efficiency, and is able to (1) learn the correct route to reach the goal, (2) recognize places already visited, and (3) exploit allothetic and idiothetic cues to improve on its performance. We finally contrast our biologically-inspired approach to more traditional robotic approaches and discuss current work in progress.

  18. Cortical regions activated by the subjective sense of perceptual coherence of environmental sounds: a proposal for a neuroscience of intuition.

    PubMed

    Volz, Kirsten G; Rübsamen, Rudolf; von Cramon, D Yves

    2008-09-01

    According to the Oxford English Dictionary, intuition is "the ability to understand or know something immediately, without conscious reasoning." In other words, people continuously, without conscious attention, recognize patterns in the stream of sensations that impinge upon them. The result is a vague perception of coherence, which subsequently biases thought and behavior accordingly. Within the visual domain, research using paradigms with difficult recognition has suggested that the orbitofrontal cortex (OFC) serves as a fast detector and predictor of potential content that utilizes coarse facets of the input. To investigate whether the OFC is crucial in biasing task-specific processing, and hence subserves intuitive judgments in various modalities, we used a difficult-recognition paradigm in the auditory domain. Participants were presented with short sequences of distorted, nonverbal, environmental sounds and had to perform a sound categorization task. Imaging results revealed rostral medial OFC activation for such auditory intuitive coherence judgments. By means of a conjunction analysis between the present results and those from a previous study on visual intuitive coherence judgments, the rostral medial OFC was shown to be activated via both modalities. We conclude that rostral OFC activation during intuitive coherence judgments subserves the detection of potential content on the basis of only coarse facets of the input.

  19. Emotional Devaluation of Distracting Patterns and Faces: A Consequence of Attentional Inhibition during Visual Search?

    ERIC Educational Resources Information Center

    Raymond, Jane E.; Fenske, Mark J.; Westoby, Nikki

    2005-01-01

    Visual search has been studied extensively, yet little is known about how its constituent processes affect subsequent emotional evaluation of searched-for and searched-through items. In 3 experiments, the authors asked observers to locate a colored pattern or tinted face in an array of other patterns or faces. Shortly thereafter, either the target…

  20. A Simple Model-Based Approach to Inferring and Visualizing Cancer Mutation Signatures

    PubMed Central

    Shiraishi, Yuichi; Tremmel, Georg; Miyano, Satoru; Stephens, Matthew

    2015-01-01

    Recent advances in sequencing technologies have enabled the production of massive amounts of data on somatic mutations from cancer genomes. These data have led to the detection of characteristic patterns of somatic mutations or “mutation signatures” at an unprecedented resolution, with the potential for new insights into the causes and mechanisms of tumorigenesis. Here we present new methods for modelling, identifying and visualizing such mutation signatures. Our methods greatly simplify mutation signature models compared with existing approaches, reducing the number of parameters by orders of magnitude even while increasing the contextual factors (e.g. the number of flanking bases) that are accounted for. This improves both sensitivity and robustness of inferred signatures. We also provide a new intuitive way to visualize the signatures, analogous to the use of sequence logos to visualize transcription factor binding sites. We illustrate our new method on somatic mutation data from urothelial carcinoma of the upper urinary tract, and a larger dataset from 30 diverse cancer types. The results illustrate several important features of our methods, including the ability of our new visualization tool to clearly highlight the key features of each signature, the improved robustness of signature inferences from small sample sizes, and more detailed inference of signature characteristics such as strand biases and sequence context effects at the base two positions 5′ to the mutated site. The overall framework of our work is based on probabilistic models that are closely connected with “mixed-membership models” which are widely used in population genetic admixture analysis, and in machine learning for document clustering. We argue that recognizing these relationships should help improve understanding of mutation signature extraction problems, and suggests ways to further improve the statistical methods. Our methods are implemented in an R package pmsignature (https://github.com/friend1ws/pmsignature) and a web application available at https://friend1ws.shinyapps.io/pmsignature_shiny/. PMID:26630308
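
    The scale of the parameter reduction described above can be illustrated with simple combinatorics. The sketch below uses hypothetical feature counts (6 substitution types, 4 flanking bases) purely for illustration; it is not taken from the paper or the pmsignature package.

    ```python
    # Illustrative combinatorics only (hypothetical feature counts, not from the paper):
    # one substitution type (6 possibilities) plus 4 flanking bases (4 possibilities each).
    n_substitution_types = 6
    n_flanking_positions = 4
    n_bases = 4

    # A full joint multinomial over every (substitution, flanking-context) combination:
    full_joint_params = n_substitution_types * n_bases ** n_flanking_positions - 1  # 1535

    # Treating the substitution type and each flanking position as independent factors,
    # the kind of simplification the abstract alludes to:
    independent_params = (n_substitution_types - 1) + n_flanking_positions * (n_bases - 1)  # 17

    print(full_joint_params, independent_params)  # 1535 vs 17 free parameters per signature
    ```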

  1. Independent voluntary correction and savings in locomotor learning.

    PubMed

    Leech, Kristan A; Roemmich, Ryan T

    2018-06-14

    People can acquire new walking patterns in many different ways. For example, we can change our gait voluntarily in response to instruction or adapt by sensing our movement errors. Here we investigated how acquisition of a new walking pattern through simultaneous voluntary correction and adaptive learning affected the resulting motor memory of the learned pattern. We studied adaptation to split-belt treadmill walking with and without visual feedback of stepping patterns. As expected, visual feedback enabled faster acquisition of the new walking pattern. However, upon later re-exposure to the same split-belt perturbation, participants exhibited similar motor memories whether they had learned with or without visual feedback. Participants who received feedback did not re-engage the mechanism used to accelerate initial acquisition of the new walking pattern to similarly accelerate subsequent relearning. These findings reveal that voluntary correction neither benefits nor interferes with the ability to save a new walking pattern over time. © 2018. Published by The Company of Biologists Ltd.

  2. Data Flow Analysis and Visualization for Spatiotemporal Statistical Data without Trajectory Information.

    PubMed

    Kim, Seokyeon; Jeong, Seongmin; Woo, Insoo; Jang, Yun; Maciejewski, Ross; Ebert, David S

    2018-03-01

    Geographic visualization research has focused on a variety of techniques to represent and explore spatiotemporal data. The goal of those techniques is to enable users to explore events and interactions over space and time in order to facilitate the discovery of patterns, anomalies and relationships within the data. However, it is difficult to extract and visualize data flow patterns over time for non-directional statistical data without trajectory information. In this work, we develop a novel flow analysis technique to extract, represent, and analyze flow maps of non-directional spatiotemporal data unaccompanied by trajectory information. We estimate a continuous distribution of these events over space and time, and extract flow fields for spatial and temporal changes utilizing a gravity model. Then, we visualize the spatiotemporal patterns in the data by employing flow visualization techniques. The user is presented with temporal trends of geo-referenced discrete events on a map. As such, overall spatiotemporal data flow patterns help users analyze geo-referenced temporal events, such as disease outbreaks, crime patterns, etc. To validate our model, we discard the trajectory information in an origin-destination dataset and apply our technique to the data and compare the derived trajectories and the original. Finally, we present spatiotemporal trend analysis for statistical datasets including twitter data, maritime search and rescue events, and syndromic surveillance.

  3. Multitask visual learning using genetic programming.

    PubMed

    Jaśkowski, Wojciech; Krawiec, Krzysztof; Wieloch, Bartosz

    2008-01-01

    We propose a multitask learning method of visual concepts within the genetic programming (GP) framework. Each GP individual is composed of several trees that process visual primitives derived from input images. Two trees solve two different visual tasks and are allowed to share knowledge with each other by commonly calling the remaining GP trees (subfunctions) included in the same individual. The performance of a particular tree is measured by its ability to reproduce the shapes contained in the training images. We apply this method to visual learning tasks of recognizing simple shapes and compare it to a reference method. The experimental verification demonstrates that such multitask learning often leads to performance improvements in one or both solved tasks, without extra computational effort.

  4. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.

  5. fMRI-based Multivariate Pattern Analyses Reveal Imagery Modality and Imagery Content Specific Representations in Primary Somatosensory, Motor and Auditory Cortices.

    PubMed

    de Borst, Aline W; de Gelder, Beatrice

    2017-08-01

    Previous studies have shown that the early visual cortex contains content-specific representations of stimuli during visual imagery, and that these representational patterns of imagery content have a perceptual basis. To date, there is little evidence for the presence of a similar organization in the auditory and tactile domains. Using fMRI-based multivariate pattern analyses we showed that primary somatosensory, auditory, motor, and visual cortices are discriminative for imagery of touch versus sound. In the somatosensory, motor and visual cortices the imagery modality discriminative patterns were similar to perception modality discriminative patterns, suggesting that top-down modulations in these regions rely on similar neural representations as bottom-up perceptual processes. Moreover, we found evidence for content-specific representations of the stimuli during auditory imagery in the primary somatosensory and primary motor cortices. Both the imagined emotions and the imagined identities of the auditory stimuli could be successfully classified in these regions. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. Brain activation in response to randomized visual stimulation as obtained from conjunction and differential analysis: an fMRI study

    NASA Astrophysics Data System (ADS)

    Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.

    2014-11-01

    The objective of this multiple-subjects functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard pattern stimuli of various shapes, pattern and size and to investigate specific brain areas that are involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hour glass and bowtie and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli had a black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used in generating brain activation. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, pattern and size activated multiple brain areas mostly in the left hemisphere. The activation in the right middle temporal gyrus (MTG) was found to be significantly higher in processing moving visual stimuli as compared to the static stimulus. In contrast, the activation in the left calcarine sulcus and left lingual gyrus was significantly higher for the static stimulus as compared to moving stimuli. Visual stimulation of various shapes, pattern and size used in this study indicated left lateralization of activation. The involvement of the right MTG in processing moving visual information was evident from differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas that are involved in the processing of static visual stimulus.

  7. Storyline Visualization: A Compelling Way to Understand Patterns over Time and Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2017-10-16

    Storyline visualization is a compelling way to understand patterns over time and space. Much effort has been spent developing efficient and aesthetically pleasing layout optimization algorithms. But what if those algorithms are optimizing the wrong things? To answer this question, we conducted a design study with different storyline layout algorithms. We found that users perform better with layouts that follow our new design principles for storyline visualization than with those produced by existing methods.

  8. Obstacle detection by recognizing binary expansion patterns

    NASA Technical Reports Server (NTRS)

    Baram, Yoram; Barniv, Yair

    1993-01-01

    This paper describes a technique for obstacle detection, based on the expansion of the image-plane projection of a textured object, as its distance from the sensor decreases. Information is conveyed by vectors whose components represent first-order temporal and spatial derivatives of the image intensity, which are related to the time to collision through the local divergence. Such vectors may be characterized as patterns corresponding to 'safe' or 'dangerous' situations. We show that essential information is conveyed by single-bit vector components, representing the signs of the relevant derivatives. We use two recently developed, high capacity classifiers, employing neural learning techniques, to recognize the imminence of collision from such patterns.
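
    For reference, the divergence relation alluded to above is commonly written, for pure approach toward a frontoparallel surface, as time-to-collision ≈ 2 / (divergence of the image velocity field). The sketch below illustrates only that generic relation, not the paper's sign-based neural classifier; the function name and the toy flow field are placeholders.

    ```python
    import numpy as np

    # Generic looming relation (not the paper's classifier): for pure approach to a
    # frontoparallel surface the divergence of the image velocity field equals
    # 2 / (time to collision), so tau ~= 2 / divergence.
    def time_to_collision(vx, vy, spacing=1.0):
        """Estimate time to collision (in frames) from a dense image-velocity field."""
        dvx_dx = np.gradient(vx, spacing, axis=1)
        dvy_dy = np.gradient(vy, spacing, axis=0)
        mean_div = float((dvx_dx + dvy_dy).mean())
        if mean_div <= 0:            # no net expansion -> nothing approaching
            return np.inf
        return 2.0 / mean_div

    # Toy check: a purely expanding flow field constructed with tau = 50 frames.
    h, w = 64, 64
    y, x = np.mgrid[0:h, 0:w]
    vx = (x - (w - 1) / 2) / 50.0
    vy = (y - (h - 1) / 2) / 50.0
    print(round(time_to_collision(vx, vy), 1))  # ~50.0
    ```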

  9. Retinotopically specific reorganization of visual cortex for tactile pattern recognition

    PubMed Central

    Cheung, Sing-Hang; Fang, Fang; He, Sheng; Legge, Gordon E.

    2009-01-01

    Although previous studies have shown that Braille reading and other tactile-discrimination tasks activate the visual cortex of blind and sighted people [1–5], it is not known whether this kind of cross-modal reorganization is influenced by retinotopic organization. We have addressed this question by studying S, a visually impaired adult with the rare ability to read print visually and Braille by touch. S had normal visual development until age six years, and thereafter severe acuity reduction due to corneal opacification, but no evidence of visual-field loss. Functional magnetic resonance imaging (fMRI) revealed that, in S’s early visual areas, tactile information processing activated what would be the foveal representation for normally-sighted individuals, and visual information processing activated what would be the peripheral representation. Control experiments showed that this activation pattern was not due to visual imagery. S’s high-level visual areas which correspond to shape- and object-selective areas in normally-sighted individuals were activated by both visual and tactile stimuli. The retinotopically specific reorganization in early visual areas suggests an efficient redistribution of neural resources in the visual cortex. PMID:19361999

  10. Effects of musical training on sound pattern processing in high-school students.

    PubMed

    Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse

    2009-05-01

    Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited in different stimulus onset asynchrony (SOA) conditions in non-musicians than in musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than age-matched peers. Musical training facilitates detection of auditory patterns, allowing sequential sound patterns to be recognized automatically over longer time periods than in non-musical counterparts.

  11. Integrating visual learning within a model-based ATR system

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark; Nebrich, Mark

    2017-05-01

    Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.

  12. Measured Visual Motion Sensitivity at Fixed Contrast in the Periphery and Far Periphery

    DTIC Science & Technology

    2017-08-01

    ... group Soldier performance. Soldier performance depends on visual detection of enemy personnel and materiel. Vision modeling in IWARS is neither ... a highly time-critical and order-dependent activity, these unrealistic characterizations of target detection time and order severely limit the ... recognize that MVTs should depend on target contrast, so we selected a target design different from that used in the Monaco et al. (2007) study. Based ...

  13. Looking but Not Seeing: Atypical Visual Scanning and Recognition of Faces in 2 and 4-Year-Old Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Chawarska, Katarzyna; Shic, Frederick

    2009-01-01

    This study used eye-tracking to examine visual scanning and recognition of faces by 2- and 4-year-old children with autism spectrum disorder (ASD) (N = 44) and typically developing (TD) controls (N = 30). TD toddlers at both age levels scanned and recognized faces similarly. Toddlers with ASD looked increasingly away from faces with age,…

  14. Atoms of recognition in human and computer vision.

    PubMed

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-08

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  15. Target recognition and scene interpretation in image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-08-01

    Vision is only a part of a system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, which is an interpretation of visual information in terms of these knowledge models. These mechanisms provide reliable recognition if the object is occluded or cannot be recognized as a whole. It is hard to split the entire system apart, and reliable solutions to the target recognition problems are possible only within the solution of a more generic Image Understanding Problem. The brain reduces informational and computational complexities, using implicit symbolic coding of features, hierarchical compression, and selective processing of visual information. Biologically inspired Network-Symbolic representation, where both systematic structural/logical methods and neural/statistical methods are parts of a single mechanism, is the most feasible for such models. It converts visual information into relational Network-Symbolic structures, avoiding artificial precise computations of 3-dimensional models. Network-Symbolic Transformations derive abstract structures, which allows for invariant recognition of an object as an exemplar of a class. Active vision helps create consistent models. Attention, separation of figure from ground and perceptual grouping are special kinds of network-symbolic transformations. Such Image/Video Understanding Systems will reliably recognize targets.

  16. I see/hear what you mean: semantic activation in visual word recognition depends on perceptual attention.

    PubMed

    Connell, Louise; Lynott, Dermot

    2014-04-01

    How does the meaning of a word affect how quickly we can recognize it? Accounts of visual word recognition allow semantic information to facilitate performance but have neglected the role of modality-specific perceptual attention in activating meaning. We predicted that modality-specific semantic information would differentially facilitate lexical decision and reading aloud, depending on how perceptual attention is implicitly directed by each task. Large-scale regression analyses showed the perceptual modalities involved in representing a word's referent concept influence how easily that word is recognized. Both lexical decision and reading-aloud tasks direct attention toward vision, and are faster and more accurate for strongly visual words. Reading aloud additionally directs attention toward audition and is faster and more accurate for strongly auditory words. Furthermore, the overall semantic effects are as large for reading aloud as lexical decision and are separable from age-of-acquisition effects. These findings suggest that implicitly directing perceptual attention toward a particular modality facilitates representing modality-specific perceptual information in the meaning of a word, which in turn contributes to the lexical decision or reading-aloud response.

  17. Sensitivity to timing and order in human visual cortex

    PubMed Central

    Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.

    2014-01-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116

  18. Visual analysis and exploration of complex corporate shareholder networks

    NASA Astrophysics Data System (ADS)

    Tekušová, Tatiana; Kohlhammer, Jörn

    2008-01-01

    The analysis of large corporate shareholder network structures is an important task in corporate governance, in financing, and in financial investment domains. In a modern economy, large structures of cross-corporation, cross-border shareholder relationships exist, forming complex networks. These networks are often difficult to analyze with traditional approaches. An efficient visualization of the networks helps to reveal the interdependent shareholding formations and the controlling patterns. In this paper, we propose an effective visualization tool that supports the financial analyst in understanding complex shareholding networks. We develop an interactive visual analysis system by combining state-of-the-art visualization technologies with economic analysis methods. Our system is capable of revealing patterns in large corporate shareholder networks, allows the visual identification of the ultimate shareholders, and supports the visual analysis of integrated cash flow and control rights. We apply our system to an extensive real-world database of shareholder relationships, showing its usefulness for effective visual analysis.

  19. Cell-Based Odorant Sensor Array for Odor Discrimination Based on Insect Odorant Receptors.

    PubMed

    Termtanasombat, Maneerat; Mitsuno, Hidefumi; Misawa, Nobuo; Yamahira, Shinya; Sakurai, Takeshi; Yamaguchi, Satoshi; Nagamune, Teruyuki; Kanzaki, Ryohei

    2016-07-01

    The olfactory system of living organisms can accurately discriminate numerous odors by recognizing the pattern of activation of several odorant receptors (ORs). Thus, development of an odorant sensor array based on multiple ORs presents the possibility of mimicking biological odor discrimination mechanisms. Recently, we developed novel odorant sensor elements with high sensitivity and selectivity based on insect OR-expressing Sf21 cells that respond to target odorants by displaying increased fluorescence intensity. Here we introduce the development of an odorant sensor array composed of several Sf21 cell lines expressing different ORs. In this study, an array pattern of four cell lines expressing Or13a, Or56a, BmOR1, and BmOR3 was successfully created using a patterned polydimethylsiloxane film template and cell-immobilizing reagents, termed biocompatible anchor for membrane (BAM). We demonstrated that BAM could create a clear pattern of Sf21 sensor cells without impacting their odorant-sensing performance. Our sensor array showed odorant-specific response patterns toward both odorant mixtures and single odorant stimuli, allowing us to visualize the presence of 1-octen-3-ol, geosmin, bombykol, and bombykal as an increased fluorescence intensity in the region of Or13a, Or56a, BmOR1, and BmOR3 cell lines, respectively. Therefore, we successfully developed a new methodology for creating a cell-based odorant sensor array that enables us to discriminate multiple target odorants. Our method might be expanded into the development of an odorant sensor capable of detecting a large range of environmental odorants that might become a promising tool used in various applications including the study of insect semiochemicals and food contamination.

  20. The Pattern Glare Test: a review and determination of normative values.

    PubMed

    Evans, B J W; Stevenson, S J

    2008-07-01

    Pattern glare is characterised by symptoms of visual perceptual distortions and visual stress on viewing striped patterns. People with migraine or Meares-Irlen syndrome (visual stress) are especially prone to pattern glare. The literature on pattern glare is reviewed, and the goal of this study was to develop clinical norms for the Wilkins and Evans Pattern Glare Test. This comprises three test plates of square wave patterns of spatial frequency 0.5, 3 and 12 cycles per degree (cpd). Patients are shown the 0.5 cpd grating and the number of distortions that are reported in response to a list of questions is recorded. This is repeated for the other patterns. People who are prone to pattern glare experience visual perceptual distortions on viewing the 3 cpd grating, and pattern glare can be quantified as either the sum of distortions reported with the 3 cpd pattern or as the difference between the number of distortions with the 3 and 12 cpd gratings, the '3-12 cpd difference'. In study 1, 100 patients consulting an optometrist performed the Pattern Glare Test and the 95th percentile of responses was calculated as the limit of the normal range. The normal range for the number of distortions was found to be <4 on the 3 cpd grating and <2 for the 3-12 cpd difference. Pattern glare was similar in both genders but decreased with age. In study 2, 30 additional participants were given the test in the reverse of the usual testing order and were compared with a sub-group from study 1, matched for age and gender. Participants experienced more distortions with the 12 cpd grating if it was presented after the 3 cpd grating. However, the order did not influence the two key measures of pattern glare. In study 3, 30 further participants who reported a medical diagnosis of migraine were compared with a sub-group of the participants in study 1 who did not report migraine or frequent headaches, matched for age and gender. The migraine group reported more symptoms on viewing all gratings, particularly the 3 cpd grating. The only variable to be significantly different between the groups was the 3-12 cpd difference. In conclusion, people have an abnormal degree of pattern glare if they have a Pattern Glare Test score of >3 on the 3 cpd grating or a score of >1 on the 3-12 cpd difference. The literature suggests that these people are likely to have visual stress in everyday life and may therefore benefit from interventions designed to alleviate visual stress, such as precision tinted lenses.
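
    A minimal sketch of the cutoff rule reported above, assuming the distortion counts from the 3 and 12 cpd gratings are available as integers (illustrative only, not clinical software):

    ```python
    # Abnormal pattern glare per the norms above: more than 3 distortions on the
    # 3 cpd grating, or a 3 minus 12 cpd difference greater than 1.
    def pattern_glare_abnormal(distortions_3cpd: int, distortions_12cpd: int) -> bool:
        difference_3_12 = distortions_3cpd - distortions_12cpd
        return distortions_3cpd > 3 or difference_3_12 > 1

    print(pattern_glare_abnormal(distortions_3cpd=5, distortions_12cpd=2))  # True  (score >3 on 3 cpd)
    print(pattern_glare_abnormal(distortions_3cpd=3, distortions_12cpd=3))  # False (within the normal range)
    ```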

  1. Isolated cortical visual loss with subtle brain MRI abnormalities in a case of hypoxic-ischemic encephalopathy.

    PubMed

    Margolin, Edward; Gujar, Sachin K; Trobe, Jonathan D

    2007-12-01

    A 16-year-old boy who was briefly asystolic and hypotensive after a motor vehicle accident complained of abnormal vision after recovering consciousness. Visual acuity was normal, but visual fields were severely constricted without clear hemianopic features. The ophthalmic examination was otherwise normal. Brain MRI performed 11 days after the accident showed no pertinent abnormalities. At 6 months after the event, brain MRI demonstrated brain volume loss in the primary visual cortex and no other abnormalities. One year later, visual fields remained severely constricted; neurologic examination, including formal neuropsychometric testing, was normal. This case emphasizes the fact that hypoxic-ischemic encephalopathy (HIE) may cause enduring damage limited to primary visual cortex and that the MRI abnormalities may be subtle. These phenomena should be recognized in the management of patients with HIE.

  2. Properties of visual evoked potentials to onset of movement on a television screen.

    PubMed

    Kubová, Z; Kuba, M; Hubacek, J; Vít, F

    1990-08-01

    In 80 subjects the dependence of movement-onset visual evoked potentials on some measures of stimulation was examined, and these responses were compared with pattern-reversal visual evoked potentials to verify the effectiveness of pattern movement application for visual evoked potential acquisition. Horizontally moving vertical gratings were generated on a television screen. The typical movement-onset reactions were characterized by one marked negative peak only, with a peak time between 140 and 200 ms. In all subjects the sufficient stimulus duration for acquisition of movement-onset-related visual evoked potentials was 100 ms; in some cases it was only 20 ms. Higher velocity (5.6 degrees/s) produced higher amplitudes of movement-onset visual evoked potentials than did the lower velocity (2.8 degrees/s). In 80% of subjects, the more distinct reactions were found in the leads from lateral occipital areas (in 60% from the right hemisphere), with no correlation to handedness of subjects. Unlike pattern-reversal visual evoked potentials, the movement-onset responses tended to be larger to extramacular stimulation (annular target of 5 degrees-9 degrees) than to macular stimulation (circular target of 5 degrees diameter).

  3. Magnifying visual target information and the role of eye movements in motor sequence learning.

    PubMed

    Massing, Matthias; Blandin, Yannick; Panzer, Stefan

    2016-01-01

    An experiment investigated the influence of eye movements on learning a simple motor sequence task when the visual display was magnified. The task was to reproduce a 1300 ms spatial-temporal pattern of elbow flexions and extensions. The spatial-temporal pattern was displayed in front of the participants. Participants were randomly assigned to four groups differing on eye movements (free to use their eyes/instructed to fixate) and the visual display (small/magnified). All participants had to perform a pre-test, an acquisition phase, a delayed retention test, and a transfer test. The results indicated that participants in each practice condition increased their performance during acquisition. The participants who were permitted to use their eyes in the magnified visual display outperformed those who were instructed to fixate on the magnified visual display. When a small visual display was used, the instruction to fixate induced no performance decrements compared to participants who were permitted to use their eyes during acquisition. The findings demonstrated that a spatial-temporal pattern can be learned without eye movements, but being permitted to use eye movements facilitates response production when the visual angle is increased. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Letters persistence after physical offset: visual word form area and left planum temporale. An fMRI study.

    PubMed

    Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A

    2013-06-01

    Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.

  5. Neural representation of form-contingent color filling-in in the early visual cortex.

    PubMed

    Hong, Sang Wook; Tong, Frank

    2017-11-01

    Perceptual filling-in exemplifies the constructive nature of visual processing. Color, a prominent surface property of visual objects, can appear to spread to neighboring areas that lack any color. We investigated cortical responses to a color filling-in illusion that effectively dissociates perceived color from the retinal input (van Lier, Vergeer, & Anstis, 2009). Observers adapted to a star-shaped stimulus with alternating red- and cyan-colored points to elicit a complementary afterimage. By presenting an achromatic outline that enclosed one of the two afterimage colors, perceptual filling-in of that color was induced in the unadapted central region. Visual cortical activity was monitored with fMRI, and analyzed using multivariate pattern analysis. Activity patterns in early visual areas (V1-V4) reliably distinguished between the two color-induced filled-in conditions, but only higher extrastriate visual areas showed the predicted correspondence with color perception. Activity patterns allowed for reliable generalization between filled-in colors and physical presentations of perceptually matched colors in areas V3 and V4, but not in earlier visual areas. These findings suggest that the perception of filled-in surface color likely requires more extensive processing by extrastriate visual areas, in order for the neural representation of surface color to become aligned with perceptually matched real colors.

  6. Recognizing molecular patterns by machine learning: An agnostic structural definition of the hydrogen bond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasparotto, Piero; Ceriotti, Michele, E-mail: michele.ceriotti@epfl.ch

    The concept of chemical bonding can ultimately be seen as a rationalization of the recurring structural patterns observed in molecules and solids. Chemical intuition is nothing but the ability to recognize and predict such patterns, and how they transform into one another. Here, we discuss how to use a computer to identify atomic patterns automatically, so as to provide an algorithmic definition of a bond based solely on structural information. We concentrate in particular on hydrogen bonding – a central concept to our understanding of the physical chemistry of water, biological systems, and many technologically important materials. Since the hydrogen bond is a somewhat fuzzy entity that covers a broad range of energies and distances, many different criteria have been proposed and used over the years, based either on sophisticated electronic structure calculations followed by an energy decomposition analysis, or on somewhat arbitrary choices of a range of structural parameters that is deemed to correspond to a hydrogen-bonded configuration. We introduce here a definition that is univocal, unbiased, and adaptive, based on our machine-learning analysis of an atomistic simulation. The strategy we propose could be easily adapted to similar scenarios, where one has to recognize or classify structural patterns in a material or chemical compound.

  7. Recognizing molecular patterns by machine learning: an agnostic structural definition of the hydrogen bond.

    PubMed

    Gasparotto, Piero; Ceriotti, Michele

    2014-11-07

    The concept of chemical bonding can ultimately be seen as a rationalization of the recurring structural patterns observed in molecules and solids. Chemical intuition is nothing but the ability to recognize and predict such patterns, and how they transform into one another. Here, we discuss how to use a computer to identify atomic patterns automatically, so as to provide an algorithmic definition of a bond based solely on structural information. We concentrate in particular on hydrogen bonding--a central concept to our understanding of the physical chemistry of water, biological systems, and many technologically important materials. Since the hydrogen bond is a somewhat fuzzy entity that covers a broad range of energies and distances, many different criteria have been proposed and used over the years, based either on sophisticated electronic structure calculations followed by an energy decomposition analysis, or on somewhat arbitrary choices of a range of structural parameters that is deemed to correspond to a hydrogen-bonded configuration. We introduce here a definition that is univocal, unbiased, and adaptive, based on our machine-learning analysis of an atomistic simulation. The strategy we propose could be easily adapted to similar scenarios, where one has to recognize or classify structural patterns in a material or chemical compound.

  8. Computer Program Recognizes Patterns in Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hand, Charles

    2003-01-01

    A computer program recognizes selected patterns in time-series data like digitized samples of seismic or electrophysiological signals. The program implements an artificial neural network (ANN) and a set of N clocks for the purpose of determining whether N or more instances of a certain waveform, W, occur within a given time interval, T. The ANN must be trained to recognize W in the incoming stream of data. The first time the ANN recognizes W, it sets clock 1 to count down from T to zero; the second time it recognizes W, it sets clock 2 to count down from T to zero, and so forth through the Nth instance. On the N + 1st instance, the cycle is repeated, starting with clock 1. If any clock has not reached zero when it is reset, then N instances of W have been detected within time T, and the program so indicates. The program can readily be encoded in a field-programmable gate array or an application-specific integrated circuit that could be used, for example, to detect electroencephalographic or electrocardiographic waveforms indicative of epileptic seizures or heart attacks, respectively.
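
    A hedged sketch of the counting logic described above, with the trained ANN replaced by a hypothetical threshold detector (detect_waveform and monitor are placeholder names, not from the program). Keeping the times of the last N detections is equivalent to the N countdown clocks: if the oldest of the last N detections is still less than T old when a new detection arrives, then N instances of W fell within a window of length T.

    ```python
    from collections import deque

    def detect_waveform(sample) -> bool:
        """Placeholder for the trained waveform recognizer (an ANN in the original)."""
        return sample > 0.8  # hypothetical threshold detector

    def monitor(stream, n_required: int, window_t: float, dt: float = 1.0):
        """Yield the times at which n_required detections of W occur within window_t."""
        recent = deque(maxlen=n_required)  # times of the most recent detections
        t = 0.0
        for sample in stream:
            if detect_waveform(sample):
                recent.append(t)
                if len(recent) == n_required and t - recent[0] <= window_t:
                    yield t  # N instances of W detected within time T
            t += dt

    # Toy usage: three detections within 5 time units trigger an alert.
    signal = [0.1, 0.9, 0.2, 0.95, 0.9, 0.1, 0.0]
    print(list(monitor(signal, n_required=3, window_t=5.0)))  # [4.0]
    ```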

  9. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Data Analysis and Visualization; International Research Training Group "Visualization of Large and Unstructured Data Sets," University of Kaiserslautern, Germany; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.

  10. Structural salience and the nonaccidentality of a Gestalt.

    PubMed

    Strother, Lars; Kubovy, Michael

    2012-08-01

    We perceive structure through a process of perceptual organization. Here we report a new perceptual organization phenomenon: the facilitation of visual grouping by global curvature. Observers viewed patterns that they perceived as organized into collections of curves. The patterns were perceptually ambiguous such that the perceived orientation of the patterns varied from trial to trial. When patterns were sufficiently dense and proximity was equated for the predominant perceptual alternatives, observers tended to perceive the organization with the greatest curvature. This effect is tantamount to visual grouping by maximal curvature and thus demonstrates an unprecedented effect of global structure on perceptual organization. We account for this result with a model that predicts the perceived organization of a pattern as a function of its nonaccidentality, which we define as the probability that it could have occurred by chance. Our findings demonstrate a novel relationship between the geometry of a pattern and the visual salience of global structure. (c) 2012 APA, all rights reserved.

  11. Functional Characterization and Differential Coactivation Patterns of Two Cytoarchitectonic Visual Areas on the Human Posterior Fusiform Gyrus

    PubMed Central

    Caspers, Julian; Zilles, Karl; Amunts, Katrin; Laird, Angela R.; Fox, Peter T.; Eickhoff, Simon B.

    2016-01-01

    The ventral stream of the human extrastriate visual cortex shows a considerable functional heterogeneity from early visual processing (posterior) to higher, domain-specific processing (anterior). The fusiform gyrus hosts several of those “high-level” functional areas. We recently found a subdivision of the posterior fusiform gyrus on the microstructural level, that is, two distinct cytoarchitectonic areas, FG1 and FG2 (Caspers et al., Brain Structure & Function, 2013). To gain a first insight into the function of these two areas, here we studied their behavioral involvement and coactivation patterns by means of meta-analytic connectivity modeling based on the BrainMap database (www.brainmap.org), using probabilistic maps of these areas as seed regions. The coactivation patterns of the areas support the concept of a common involvement in a core network subserving different cognitive tasks, that is, object recognition, visual language perception, or visual attention. In addition, the analysis supports the previous cytoarchitectonic parcellation, indicating that FG1 appears as a transitional area between early and higher visual cortex and FG2 as a higher-order one. The latter area is furthermore lateralized, as it shows strong relations to the visual language processing system in the left hemisphere, while its right side is more strongly associated with face-selective regions. These findings indicate that functional lateralization of area FG2 relies on a different pattern of connectivity rather than side-specific cytoarchitectonic features. PMID:24038902

  12. Migraine with aura: visual disturbances and interrelationship with the pain phase. Vågå study of headache epidemiology.

    PubMed

    Sjaastad, Ottar; Bakketeig, Leiv S; Petersen, Hans C

    2006-06-01

    In the Vågå study of headache epidemiology, 1838 or 88.6% of the available 18-65-year-old inhabitants of the commune were included. Everyone was questioned and examined personally by the principal investigator (OS). There were 178 cases of various types of visual disturbances during the migraine attack, which corresponds to 9.7% of the study group. The prevalence among females was 11.9% and among males 7.4%; female/male ratio was 1.70, as against 1.05 in the total Vågå study population. By far the most frequently occurring visual disturbance pattern was (A) 1. Visual disturbances --> 2. pain-free interlude --> 3. pain phase (in 78% of the cases). Other frequent patterns were: (B). Visual disturbances, but no pain phase (24%); and: (C) 1. Pain phase --> 2. visual disturbances (23%). Evidently, in the solitary case, there might be more than one visual disturbance pattern. The most frequently occurring solitary visual disturbances were: scintillating scotoma (62%) and obscuration (33%); but also more rare ones were identified, like anopsia, autokinesis (movement of stationary objects), tunnel vision and micropsia. Among the non-visual aura disturbances, paraesthesias and speech disturbances were the most frequent ones. The prevalence of migraine with aura seemed to be considerably higher than in similar studies. This also includes studies that have been carried out with a face-to-face interview technique.

  13. A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition.

    PubMed

    Einhäuser, Wolfgang; Mundhenk, T Nathan; Baldi, Pierre; Koch, Christof; Itti, Laurent

    2007-07-20

    Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom-up visual processing (attentional selection and/or recognition) or top-down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of "surprise" in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.
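
    The "surprise" measure referred to above is commonly formalized, following Itti and Baldi, as the Kullback-Leibler divergence between posterior and prior beliefs over models after new data arrive. The sketch below is a toy discrete illustration of that definition, not the study's stimulus-analysis code.

    ```python
    import numpy as np

    # Toy Bayesian surprise: KL(posterior || prior) in bits over a discrete model set.
    # Assumes strictly positive prior and posterior probabilities.
    def surprise(prior, likelihood):
        posterior = prior * likelihood
        posterior = posterior / posterior.sum()
        return float(np.sum(posterior * np.log2(posterior / prior)))

    prior = np.array([0.5, 0.5])                  # two candidate models, equally likely
    print(surprise(prior, np.array([0.9, 0.1])))  # data strongly favour model 1 -> high surprise
    print(surprise(prior, np.array([0.5, 0.5])))  # uninformative data -> zero surprise
    ```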

  14. Food Web Designer: a flexible tool to visualize interaction networks.

    PubMed

    Sint, Daniela; Traugott, Michael

    Species are embedded in complex networks of ecological interactions, and assessing these networks provides a powerful approach to understanding their consequences for ecosystem functioning and services; such assessment is also essential for developing and evaluating strategies for the management and control of pests. Graphical representations of networks can help recognize patterns that might otherwise be overlooked. However, there is a lack of software for visualizing these complex interaction networks. Food Web Designer is a stand-alone, highly flexible and user-friendly software tool for quantitatively visualizing trophic and other types of bipartite and tripartite interaction networks. It is offered free of charge for use on Microsoft Windows platforms. Thanks to its graphical user interface, Food Web Designer is easy to use without the need to learn a specific syntax. Up to three (trophic) levels can be connected using links cascading from or pointing towards the taxa within each level to illustrate top-down and bottom-up connections. Link width/strength and the abundance of taxa can be quantified, allowing fully quantitative networks to be generated. Network datasets can be imported and saved for later adjustment, and the interaction webs can be exported as pictures in different file formats. We show how Food Web Designer can be used to draw predator-prey and host-parasitoid food webs, demonstrating that this software is a simple and straightforward tool for graphically displaying interaction networks when assessing pest control, or any other type of interaction, in both managed and natural ecosystems from an ecological network perspective.
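
    Food Web Designer itself is a point-and-click Windows tool, so the sketch below is not its API; it is only a rough illustration, under assumed data, of how a quantitative tripartite web (plants, herbivores, parasitoids) with weighted links can be drawn programmatically, here with networkx's multipartite layout.

      import networkx as nx
      import matplotlib.pyplot as plt

      # Hypothetical tripartite web: layer 0 = plants, 1 = herbivores, 2 = parasitoids.
      # Edge weights stand in for interaction strength.
      edges = [
          ("wheat", "aphid", 8), ("wheat", "beetle", 3), ("clover", "aphid", 5),
          ("aphid", "wasp A", 6), ("aphid", "wasp B", 2), ("beetle", "wasp B", 4),
      ]
      layers = {"wheat": 0, "clover": 0, "aphid": 1, "beetle": 1,
                "wasp A": 2, "wasp B": 2}

      G = nx.Graph()
      for u, v, w in edges:
          G.add_edge(u, v, weight=w)
      nx.set_node_attributes(G, layers, "layer")

      # Arrange the three trophic levels in parallel rows.
      pos = nx.multipartite_layout(G, subset_key="layer", align="horizontal")
      widths = [G[u][v]["weight"] for u, v in G.edges()]

      nx.draw_networkx_nodes(G, pos, node_size=900, node_color="lightgrey")
      nx.draw_networkx_labels(G, pos, font_size=8)
      nx.draw_networkx_edges(G, pos, width=widths)   # link width encodes strength
      plt.axis("off")
      plt.savefig("tripartite_web.png", dpi=150)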

  15. Advanced simulation technology used to reduce accident rates through a better understanding of human behaviors and human perception

    NASA Astrophysics Data System (ADS)

    Manser, Michael P.; Hancock, Peter A.

    1996-06-01

    Human beings and technology have attained a mutually dependent and symbiotic relationship. It is easy to recognize how each depends on the other for survival, and how technology advances through human activity. However, the role technology plays in advancing humankind is seldom examined. This presentation examines two research areas in which advanced visual simulation systems play an integral and essential role in understanding human perception and behavior. The ultimate goal of this research is the betterment of humankind through reduced accident and death rates in transportation environments. The first research area examined involved the estimation of time-to-contact. A high-fidelity wrap-around simulator (RAS) was used to examine people's ability to estimate time-to-contact. The ability to estimate the amount of time before an oncoming vehicle will collide with oneself is a necessary skill for avoiding collisions. A vehicle approached participants at one of three velocities and, while en route to the participant, disappeared. The participants' task was to respond at the moment they judged the vehicle would have reached them. Results are discussed in terms of the accuracy of the time-to-contact estimates and the practical applications of these results. The second area of research investigated the effects of various visual stimuli on underground transportation tunnel walls on the perception of vehicle speed. An RAS is essential for creating visual patterns in peripheral vision, an ability that flat-screen or front-screen simulators lack. Results are discussed in terms of speed perception and the application of these results to real-world environments.

  16. Mixed Pattern Matching-Based Traffic Abnormal Behavior Recognition

    PubMed Central

    Cui, Zhiming; Zhao, Pengpeng

    2014-01-01

    A motion trajectory is an intuitive representation, in the time-space domain, of the micro-motion behavior of a moving target, and trajectory analysis is an important approach to recognizing abnormal behaviors of moving targets. To address the complexity of vehicle trajectories, this paper first proposes a trajectory pattern learning method based on dynamic time warping (DTW) and spectral clustering. The DTW distance is introduced to measure the distances between vehicle trajectories, the number of clusters is determined automatically by a spectral clustering algorithm applied to the distance matrix, and the sample trajectories are then grouped into clusters. After spatial patterns and direction patterns are learned from the clusters, a recognition method for detecting abnormal vehicle behaviors based on mixed pattern matching is proposed. The experimental results show that the proposed scheme can recognize the main types of abnormal traffic behaviors effectively and has good robustness, and a real-world application verified its feasibility and validity. PMID:24605045
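
    The pipeline sketched in the abstract, pairwise DTW distances turned into an affinity matrix and fed to spectral clustering, can be illustrated as follows. The toy trajectories, the Gaussian affinity with a median-distance bandwidth, and the fixed number of clusters are assumptions for illustration; in particular, the paper selects the number of clusters automatically, which this sketch does not reproduce.

      import numpy as np
      from sklearn.cluster import SpectralClustering

      def dtw_distance(a, b):
          """Dynamic time warping distance between two 2-D trajectories
          (arrays of shape (len, 2)), using Euclidean point distances."""
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = np.linalg.norm(a[i - 1] - b[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      def cluster_trajectories(trajs, n_clusters=2, sigma=None):
          """Cluster trajectories via spectral clustering on a DTW affinity matrix."""
          k = len(trajs)
          dist = np.zeros((k, k))
          for i in range(k):
              for j in range(i + 1, k):
                  dist[i, j] = dist[j, i] = dtw_distance(trajs[i], trajs[j])
          sigma = sigma or np.median(dist[dist > 0])      # heuristic bandwidth
          affinity = np.exp(-(dist ** 2) / (2 * sigma ** 2))
          model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                                     random_state=0)
          return model.fit_predict(affinity)

      if __name__ == "__main__":
          t = np.linspace(0, 1, 30)
          straight = [np.c_[t, 0.02 * np.random.randn(30)] for _ in range(5)]
          turning = [np.c_[t, t ** 2 + 0.02 * np.random.randn(30)] for _ in range(5)]
          print(cluster_trajectories(straight + turning))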

  17. Techniques for recognizing identity of several response functions from the data of visual inspection

    NASA Astrophysics Data System (ADS)

    Nechval, Nicholas A.

    1996-08-01

    The purpose of this paper is to present some efficient techniques for recognizing from the observed data whether several response functions are identical to each other. For example, in an industrial setting the problem may be to determine whether the production coefficients established in a small-scale pilot study apply to each of several large-scale production facilities. The techniques proposed here combine sensor information from automated visual inspection of manufactured products, which is carried out by means of pixel-by-pixel comparison of the sensed image of the product to be inspected with some reference pattern (or image). Let a1, ..., am be p-dimensional parameters associated with m response models of the same type. This study is concerned with the simultaneous comparison of a1, ..., am. A generalized maximum likelihood ratio (GMLR) test is derived for testing equality of these parameters, where each of the parameters represents a corresponding vector of regression coefficients. The GMLR test reduces to an equivalent test based on a statistic that has an F distribution. The main advantage of the test lies in its relative simplicity and the ease with which it can be applied. Another interesting test for the same problem is an application of Fisher's method of combining independent test statistics, which can be considered as a parallel procedure to the GMLR test. The combination of independent test statistics does not appear to have been used very much in applied statistics. There does, however, seem to be potential data-analytic value in techniques for combining distributional assessments in relation to statistically independent samples which are of joint experimental relevance. In addition, a new iterated test for the problem defined above is presented. A rejection of the null hypothesis by this test provides some indication of why the parameters are not all equal. A numerical example is discussed in the context of the proposed procedures for hypothesis testing.
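
    A likelihood-ratio test of equal regression coefficient vectors across m groups reduces, under normal errors, to the familiar F comparison of a pooled (common-coefficient) fit against separate per-group fits. The sketch below implements that textbook F test on synthetic data; it is offered as an illustration of the kind of statistic involved, not as the author's exact GMLR derivation, and the data, group count and dimensions are assumptions.

      import numpy as np
      from scipy import stats

      def equal_coefficients_test(X_groups, y_groups):
          """F test of H0: the p regression coefficient vectors are identical
          across the m groups (pooled fit) against separate fits per group."""
          m = len(X_groups)
          p = X_groups[0].shape[1]
          N = sum(len(y) for y in y_groups)

          def rss(X, y):
              beta, *_ = np.linalg.lstsq(X, y, rcond=None)
              r = y - X @ beta
              return float(r @ r)

          rss_pooled = rss(np.vstack(X_groups), np.concatenate(y_groups))
          rss_separate = sum(rss(X, y) for X, y in zip(X_groups, y_groups))

          df1 = (m - 1) * p
          df2 = N - m * p
          F = ((rss_pooled - rss_separate) / df1) / (rss_separate / df2)
          p_value = stats.f.sf(F, df1, df2)
          return F, p_value

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          # three hypothetical "production facilities", p = 2 coefficients each
          beta_true = np.array([1.5, -0.7])
          X_groups, y_groups = [], []
          for shift in (0.0, 0.0, 0.4):              # third facility deviates
              X = rng.normal(size=(50, 2))
              y = X @ (beta_true + shift) + rng.normal(scale=0.3, size=50)
              X_groups.append(X)
              y_groups.append(y)
          print("F = %.2f, p = %.4f" % equal_coefficients_test(X_groups, y_groups))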

  18. A method for enhancing gunshot residue patterns on dark and multicolored fabrics compared with the modified Griess test.

    PubMed

    Bailey, James A; Casanova, Ruby S; Bufkin, Kim

    2006-07-01

    When infrared or infrared-enhanced photography is used to examine gunshot residue (GSR) on dark-colored clothing, the GSR particles are microscopically examined directly on the fabric, followed by the modified Griess test (MGT) for nitrites. In conducting the MGT, the GSR is transferred to treated photographic paper for visualization; a positive reaction yields an orange color on the specially treated paper. The examiner also evaluates the size of the powder pattern based on the distribution or density of nitrite reaction sites. A false-positive reaction can occur with the MGT when contaminants or dyes also produce an orange cloud reaction. An alternative method for enhancing visualization of the pattern produced by burned and partially unburned powder is to treat the fabric with a solution of sodium hypochlorite. In order to evaluate the results of sodium hypochlorite treatment for GSR visualization, the MGT was used as a reference pattern. GSR patterns on dark or multicolored clothing were enhanced by treating the fabric with a 5.25% solution of sodium hypochlorite. Bleaching the dyes in the fabric enhances visualization of the GSR pattern by eliminating the background color. Some dyes are not affected by sodium hypochlorite; therefore, bleaching may not enhance the GSR patterns on some fabrics. Sodium hypochlorite provides the investigator with a method for enhancing GSR patterns directly on the fabric. However, it is not intended as a substitute for the MGT or the sodium rhodizonate test.

  19. Recognition of bacterial plant pathogens: local, systemic and transgenerational immunity.

    PubMed

    Henry, Elizabeth; Yadeta, Koste A; Coaker, Gitta

    2013-09-01

    Bacterial pathogens can cause multiple plant diseases and plants rely on their innate immune system to recognize and actively respond to these microbes. The plant innate immune system comprises extracellular pattern recognition receptors that recognize conserved microbial patterns and intracellular nucleotide binding leucine-rich repeat (NLR) proteins that recognize specific bacterial effectors delivered into host cells. Plants lack the adaptive immune branch present in animals, but still afford flexibility to pathogen attack through systemic and transgenerational resistance. Here, we focus on current research in plant immune responses against bacterial pathogens. Recent studies shed light onto the activation and inactivation of pattern recognition receptors and systemic acquired resistance. New research has also uncovered additional layers of complexity surrounding NLR immune receptor activation, cooperation and sub-cellular localizations. Taken together, these recent advances bring us closer to understanding the web of molecular interactions responsible for coordinating defense responses and ultimately resistance. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.

  20. On the role of spatial phase and phase correlation in vision, illusion, and cognition

    PubMed Central

    Gladilin, Evgeny; Eils, Roland

    2015-01-01

    Numerous findings indicate that spatial phase bears important cognitive information. Distortion of phase affects the topology of edge structures and makes images unrecognizable. In turn, appropriately phase-structured patterns give rise to various illusions of virtual image content and apparent motion. Despite a large body of phenomenological evidence, not much is yet known about the role of phase information in the neural mechanisms of visual perception and cognition. Here, we are concerned with analysis of the role of spatial phase in computational and biological vision, the emergence of visual illusions, and pattern recognition. We hypothesize that the fundamental importance of phase information for the invariant retrieval of structural image features and for motion detection promoted the development of phase-based mechanisms of neural image processing in the course of the evolution of biological vision. Using an extension of the Fourier phase correlation technique, we show that core functions of the visual system such as motion detection and pattern recognition can be facilitated by the same basic mechanism. Our analysis suggests that the emergence of visual illusions can be attributed to the presence of coherently phase-shifted repetitive patterns as well as to the effects of acuity compensation by saccadic eye movements. We speculate that biological vision relies on perceptual mechanisms effectively similar to phase correlation, and predict neural features of visual pattern (dis)similarity that can be used for experimental validation of our hypothesis of “cognition by phase correlation.” PMID:25954190

  1. On the role of spatial phase and phase correlation in vision, illusion, and cognition.

    PubMed

    Gladilin, Evgeny; Eils, Roland

    2015-01-01

    Numerous findings indicate that spatial phase bears important cognitive information. Distortion of phase affects the topology of edge structures and makes images unrecognizable. In turn, appropriately phase-structured patterns give rise to various illusions of virtual image content and apparent motion. Despite a large body of phenomenological evidence, not much is yet known about the role of phase information in the neural mechanisms of visual perception and cognition. Here, we are concerned with analysis of the role of spatial phase in computational and biological vision, the emergence of visual illusions, and pattern recognition. We hypothesize that the fundamental importance of phase information for the invariant retrieval of structural image features and for motion detection promoted the development of phase-based mechanisms of neural image processing in the course of the evolution of biological vision. Using an extension of the Fourier phase correlation technique, we show that core functions of the visual system such as motion detection and pattern recognition can be facilitated by the same basic mechanism. Our analysis suggests that the emergence of visual illusions can be attributed to the presence of coherently phase-shifted repetitive patterns as well as to the effects of acuity compensation by saccadic eye movements. We speculate that biological vision relies on perceptual mechanisms effectively similar to phase correlation, and predict neural features of visual pattern (dis)similarity that can be used for experimental validation of our hypothesis of "cognition by phase correlation."
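
    Fourier phase correlation, which the work above extends, recovers the translation between two images from the phase of their normalized cross-power spectrum. A minimal sketch (pure integer translation, a synthetic image, no windowing or subpixel refinement) might look like this:

      import numpy as np

      def phase_correlation_shift(a, b):
          """Estimate the integer (dy, dx) shift such that image a equals image b
          translated by (dy, dx), from the peak of the inverse FFT of the
          normalized cross-power spectrum."""
          A = np.fft.fft2(a)
          B = np.fft.fft2(b)
          R = A * np.conj(B)
          R /= np.maximum(np.abs(R), 1e-12)          # keep phase, discard magnitude
          corr = np.fft.ifft2(R).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          # map peak coordinates to signed shifts
          if dy > a.shape[0] // 2:
              dy -= a.shape[0]
          if dx > a.shape[1] // 2:
              dx -= a.shape[1]
          return int(dy), int(dx)

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          img = rng.random((64, 64))
          shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
          print(phase_correlation_shift(shifted, img))   # expect (5, -3)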

  2. Distributed patterns of activity in sensory cortex reflect the precision of multiple items maintained in visual short-term memory.

    PubMed

    Emrich, Stephen M; Riggall, Adam C; Larocque, Joshua J; Postle, Bradley R

    2013-04-10

    Traditionally, load sensitivity of sustained, elevated activity has been taken as an index of storage for a limited number of items in visual short-term memory (VSTM). Recently, studies have demonstrated that the contents of a single item held in VSTM can be decoded from early visual cortex, despite the fact that these areas do not exhibit elevated, sustained activity. It is unknown, however, whether the patterns of neural activity decoded from sensory cortex change as a function of load, as one would expect from a region storing multiple representations. Here, we use multivoxel pattern analysis to examine the neural representations of VSTM in humans across multiple memory loads. In an important extension of previous findings, our results demonstrate that the contents of VSTM can be decoded from areas that exhibit a transient response to visual stimuli, but not from regions that exhibit elevated, sustained load-sensitive delay-period activity. Moreover, the neural information present in these transiently activated areas decreases significantly with increasing load, indicating load sensitivity of the patterns of activity that support VSTM maintenance. Importantly, the decrease in classification performance as a function of load is correlated with within-subject changes in mnemonic resolution. These findings indicate that distributed patterns of neural activity in putatively sensory visual cortex support the representation and precision of information in VSTM.
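
    Multivoxel pattern analysis of the kind described above boils down to training a classifier on trial-by-trial voxel patterns and measuring its cross-validated decoding accuracy per region and memory load. The sketch below is a generic illustration with a linear SVM on synthetic 'voxel' data; the classifier choice, feature counts and the way load is simulated are assumptions and do not reproduce the study's preprocessing or analysis settings.

      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      def decoding_accuracy(patterns, labels, folds=5):
          """Cross-validated accuracy of decoding stimulus labels from
          (trials x voxels) activity patterns with a linear SVM."""
          clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
          return cross_val_score(clf, patterns, labels, cv=folds).mean()

      if __name__ == "__main__":
          rng = np.random.default_rng(3)
          n_trials, n_voxels = 120, 200
          labels = rng.integers(0, 2, n_trials)        # two remembered stimuli
          signal = np.outer(labels - 0.5, rng.normal(size=n_voxels))
          # lower "load" = stronger pattern information, higher load = weaker
          for load, gain in [(1, 1.0), (2, 0.5), (3, 0.25)]:
              patterns = gain * signal + rng.normal(size=(n_trials, n_voxels))
              print("load %d: accuracy = %.2f"
                    % (load, decoding_accuracy(patterns, labels)))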

  3. Distance-dependent pattern blending can camouflage salient aposematic signals.

    PubMed

    Barnett, James B; Cuthill, Innes C; Scott-Samuel, Nicholas E

    2017-07-12

    The effect of viewing distance on the perception of visual texture is well known: spatial frequencies higher than the resolution limit of an observer's visual system will be summed and perceived as a single combined colour. In animal defensive colour patterns, distance-dependent pattern blending may allow aposematic patterns, salient at close range, to match the background to distant observers. Indeed, recent research has indicated that reducing the distance from which a salient signal can be detected can increase survival over camouflage or conspicuous aposematism alone. We investigated whether the spatial frequency of conspicuous and cryptically coloured stripes affects the rate of avian predation. Our results are consistent with pattern blending acting to camouflage salient aposematic signals effectively at a distance. Experiments into the relative rate of avian predation on edible model caterpillars found that increasing spatial frequency (thinner stripes) increased survival. Similarly, visual modelling of avian predators showed that pattern blending increased the similarity between caterpillar and background. These results show how a colour pattern can be tuned to reveal or conceal different information at different distances, and produce tangible survival benefits. © 2017 The Author(s).

  4. Role of visual and non-visual cues in constructing a rotation-invariant representation of heading in parietal cortex

    PubMed Central

    Sunkara, Adhira

    2015-01-01

    As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues. DOI: http://dx.doi.org/10.7554/eLife.04693.001 PMID:25693417

  5. Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.

    PubMed

    Põder, Endel

    2014-11-06

    Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly in a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. It appears that such a crowding bar has very strong and regular effects on the identification of the target orientation. The observer's responses are determined by approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data. © 2014 ARVO.

  6. Object based implicit contextual learning: a study of eye movements.

    PubMed

    van Asselen, Marieke; Sampaio, Joana; Pina, Ana; Castelo-Branco, Miguel

    2011-02-01

    Implicit contextual cueing refers to a top-down mechanism in which visual search is facilitated by learned contextual features. In the current study we aimed to investigate the mechanism underlying implicit contextual learning using object information as a contextual cue. Therefore, we measured eye movements during an object-based contextual cueing task. We demonstrated that visual search is facilitated by repeated object information and that this reduction in response times is associated with shorter fixation durations. This indicates that by memorizing associations between objects in our environment we can recognize objects faster, thereby facilitating visual search.

  7. Visual skills in airport-security screening.

    PubMed

    McCarley, Jason S; Kramer, Arthur F; Wickens, Christopher D; Vidoni, Eric D; Boot, Walter R

    2004-05-01

    An experiment examined visual performance in a simulated luggage-screening task. Observers participated in five sessions of a task requiring them to search for knives hidden in x-ray images of cluttered bags. Sensitivity and response times improved reliably as a result of practice. Eye movement data revealed that sensitivity increases were produced entirely by changes in observers' ability to recognize target objects, and not by changes in the effectiveness of visual scanning. Moreover, recognition skills were in part stimulus-specific, such that performance was degraded by the introduction of unfamiliar target objects. Implications for screener training are discussed.

  8. The Role of Visual and Semantic Properties in the Emergence of Category-Specific Patterns of Neural Response in the Human Brain.

    PubMed

    Coggan, David D; Baker, Daniel H; Andrews, Timothy J

    2016-01-01

    Brain-imaging studies have found distinct spatial and temporal patterns of response to different object categories across the brain. However, the extent to which these categorical patterns of response reflect higher-level semantic or lower-level visual properties of the stimulus remains unclear. To address this question, we measured patterns of EEG response to intact and scrambled images in the human brain. Our rationale for using scrambled images is that they have many of the visual properties found in intact images, but do not convey any semantic information. Images from different object categories (bottle, face, house) were briefly presented (400 ms) in an event-related design. A multivariate pattern analysis revealed that categorical patterns of response to intact images emerged ∼80-100 ms after stimulus onset and were still evident when the stimulus was no longer present (∼800 ms). Next, we measured the patterns of response to scrambled images. Categorical patterns of response to scrambled images also emerged ∼80-100 ms after stimulus onset. However, in contrast to the intact images, distinct patterns of response to scrambled images were mostly evident while the stimulus was present (∼400 ms). Moreover, scrambled images were able to account for all of the variance in the responses to intact images only at early stages of processing. This direct manipulation of visual and semantic content provides new insights into the temporal dynamics of object perception and the extent to which different stages of processing are dependent on lower-level or higher-level properties of the image.

  9. EventThread: Visual Summarization and Stage Analysis of Event Sequence Data.

    PubMed

    Guo, Shunan; Xu, Ke; Zhao, Rongwen; Gotz, David; Zha, Hongyuan; Cao, Nan

    2018-01-01

    Event sequence data such as electronic health records, a person's academic records, or car service records, are ordered series of events which have occurred over a period of time. Analyzing collections of event sequences can reveal common or semantically important sequential patterns. For example, event sequence analysis might reveal frequently used care plans for treating a disease, typical publishing patterns of professors, and the patterns of service that result in a well-maintained car. It is challenging, however, to visually explore large numbers of event sequences, or sequences with large numbers of event types. Existing methods focus on extracting explicitly matching patterns of events using statistical analysis to create stages of event progression over time. However, these methods fail to capture latent clusters of similar but not identical evolutions of event sequences. In this paper, we introduce a novel visualization system named EventThread which clusters event sequences into threads based on tensor analysis and visualizes the latent stage categories and evolution patterns by interactively grouping the threads by similarity into time-specific clusters. We demonstrate the effectiveness of EventThread through usage scenarios in three different application domains and via interviews with an expert user.

  10. Thinking Visually about Algebra

    ERIC Educational Resources Information Center

    Baroudi, Ziad

    2015-01-01

    Many introductions to algebra in high school begin with teaching students to generalise linear numerical patterns. This article argues that this approach needs to be changed so that students encounter variables in the context of modelling visual patterns so that the variables have a meaning. The article presents sample classroom activities,…

  11. Nurses' Behaviors and Visual Scanning Patterns May Reduce Patient Identification Errors

    ERIC Educational Resources Information Center

    Marquard, Jenna L.; Henneman, Philip L.; He, Ze; Jo, Junghee; Fisher, Donald L.; Henneman, Elizabeth A.

    2011-01-01

    Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20)…

  12. Floor-fractured crater models of the Sudbury structure, Canada

    NASA Technical Reports Server (NTRS)

    Wichman, R. W.; Schultz, P. H.

    1992-01-01

    The Sudbury structure in Ontario, Canada, is one of the oldest and largest impact structures recognized in the geological record. It is also one of the most extensively deformed and volcanically modified impact structures on Earth. Although few other terrestrial craters are recognized as volcanically modified, numerous impact craters on the Moon have been volcanically and tectonically modified and provide possible analogs for the observed pattern of modification at Sudbury. We correlate the pattern of early deformation at Sudbury to fracture patterns in two alternative lunar analogs and then use these analogs both to estimate the initial size of the Sudbury structure and to model the nature of early crater modification at Sudbury.

  13. Frequency spectrum might act as communication code between retina and visual cortex I

    PubMed Central

    Yang, Xu; Gong, Bo; Lu, Jian-Wei

    2015-01-01

    AIM: To explore changes in, and the possible communication relationship between, local potential signals recorded simultaneously from the retina and visual cortex I (V1). METHODS: Fourteen C57BL/6J mice were measured with the pattern electroretinogram (PERG) and pattern visually evoked potential (PVEP), and the fast Fourier transform was used to analyze the frequency components of those signals. RESULTS: The amplitudes of the PERG and PVEP were about 36.7 µV and 112.5 µV, respectively; the dominant frequency of the PERG and PVEP, however, stayed unchanged, and neither signal showed second or other harmonic generation. CONCLUSION: The results suggested that the retina encodes visual information in the form of a frequency spectrum and then transfers it to the primary visual cortex, which accepts and deciphers the coded visual information received from the retina. The frequency spectrum may act as a communication code between the retina and V1. PMID:26682156

  14. Frequency spectrum might act as communication code between retina and visual cortex I.

    PubMed

    Yang, Xu; Gong, Bo; Lu, Jian-Wei

    2015-01-01

    To explore changes in, and the possible communication relationship between, local potential signals recorded simultaneously from the retina and visual cortex I (V1), fourteen C57BL/6J mice were measured with the pattern electroretinogram (PERG) and pattern visually evoked potential (PVEP), and the fast Fourier transform was used to analyze the frequency components of those signals. The amplitudes of the PERG and PVEP were about 36.7 µV and 112.5 µV, respectively; the dominant frequency of the PERG and PVEP, however, stayed unchanged, and neither signal showed second or other harmonic generation. The results suggested that the retina encodes visual information in the form of a frequency spectrum and then transfers it to the primary visual cortex, which accepts and deciphers the coded visual information received from the retina. The frequency spectrum may act as a communication code between the retina and V1.
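
    Finding the dominant frequency of a PERG or PVEP trace with the FFT, as described above, amounts to locating the largest non-DC peak in the amplitude spectrum. Below is a minimal sketch on a synthetic waveform; the sampling rate, stimulation frequency and noise level are illustrative assumptions.

      import numpy as np

      def dominant_frequency(signal, fs):
          """Return the frequency (Hz) of the largest non-DC peak in the
          amplitude spectrum of a 1-D signal sampled at fs Hz."""
          signal = np.asarray(signal, dtype=float)
          spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
          freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
          return freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin

      if __name__ == "__main__":
          fs = 1000.0                                  # assumed sampling rate, Hz
          t = np.arange(0, 2.0, 1.0 / fs)
          # toy steady-state response at 4 Hz buried in noise
          trace = 30e-6 * np.sin(2 * np.pi * 4.0 * t) + 5e-6 * np.random.randn(len(t))
          print("dominant frequency: %.2f Hz" % dominant_frequency(trace, fs))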

  15. Examining drivers' eye glance patterns during distracted driving: Insights from scanning randomness and glance transition matrix.

    PubMed

    Wang, Yuan; Bao, Shan; Du, Wenjun; Ye, Zhirui; Sayer, James R

    2017-12-01

    Visual attention to the driving environment is of great importance for road safety, and eye glance behavior has been used as an indicator of distracted driving. This study examined and quantified drivers' glance patterns and features during distracted driving. Data from an existing naturalistic driving study were used. The entropy rate was calculated and used to assess the randomness associated with drivers' scanning patterns. A glance-transition proportion matrix was defined to quantify visual search patterns transitioning among four main eye glance locations while driving (i.e., forward on-road, phone, mirrors and others). All measurements were calculated within a 5 s time window under both cell phone and non-cell phone use conditions. Results of the glance data analyses showed different patterns between distracted and non-distracted driving, characterized by a higher entropy rate and highly biased attention transitioning between the forward and phone locations during distracted driving. Drivers in general had a higher number of glance transitions, and their on-road glance duration was significantly shorter during distracted driving when compared to non-distracted driving. Results suggest that drivers have a higher scanning randomness/disorder level and shift their main attention from surrounding areas towards the phone area when engaging in visual-manual tasks. Drivers' visual search patterns during visual-manual distraction, with a high scanning randomness and a high proportion of eye glance transitions towards the location of the phone, provide insight into driver distraction detection. This will help to inform the design of in-vehicle human-machine interfaces/systems. Copyright © 2017. Published by Elsevier Ltd.
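
    The two measures used in this study, a glance-transition proportion matrix over the four AOI categories and the entropy rate of the scanning process, can be sketched as follows. The example glance sequence is invented, and the entropy rate is computed here as the stationary-distribution-weighted average of row entropies of the empirical first-order transition matrix, which is one common definition and not necessarily the exact estimator used by the authors.

      import numpy as np

      AOIS = ["road", "phone", "mirrors", "other"]

      def transition_matrix(glances):
          """Row-normalized first-order transition proportions between AOIs."""
          idx = {a: i for i, a in enumerate(AOIS)}
          counts = np.zeros((len(AOIS), len(AOIS)))
          for a, b in zip(glances[:-1], glances[1:]):
              counts[idx[a], idx[b]] += 1
          rows = counts.sum(axis=1, keepdims=True)
          return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

      def entropy_rate(P):
          """Entropy rate (bits/transition) of a Markov chain with transition
          matrix P, weighted by its stationary distribution."""
          eigvals, eigvecs = np.linalg.eig(P.T)
          stat = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
          stat = stat / stat.sum()
          with np.errstate(divide="ignore", invalid="ignore"):
              logs = np.where(P > 0, np.log2(P), 0.0)
          row_entropy = -(P * logs).sum(axis=1)
          return float((stat * row_entropy).sum())

      if __name__ == "__main__":
          seq = ["road", "phone", "road", "phone", "road", "mirrors",
                 "road", "phone", "phone", "road", "other", "road"]
          P = transition_matrix(seq)
          print(np.round(P, 2))
          print("entropy rate: %.2f bits/transition" % entropy_rate(P))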

  16. The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.

    PubMed

    Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard

    2016-10-01

    The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and this integration improves facial emotion recognition. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Physiological Laterality of Superficial Cerebral Veins on Susceptibility-Weighted Imaging.

    PubMed

    Matsushima, Satoshi; Shimizu, Tetsuya; Gomi, Taku; Fukuda, Kunihiko

    The purpose of this study is to evaluate whether laterality of the superficial cerebral veins can be seen on susceptibility-weighted imaging (SWI) in patients with no intracranial lesions that affect venous visualization. We retrospectively evaluated 386 patients who underwent brain magnetic resonance imaging including SWI in our institute. Patients with a lesion with the potential to affect venous visualization on SWI were excluded. Two neuroradiologists visually evaluated the findings and scored the visualization of the superficial cerebral veins. Of the 386 patients, 315 (81.6%) showed no obvious laterality on venous visualization, 64 (16.6%) showed left-side dominant laterality, and 7 (1.8%) showed right-side dominant laterality. Left-side dominant physiological laterality exists in the visualization of the superficial cerebral veins on SWI. Therefore, when recognizing left-side dominant laterality of the superficial cerebral veins on SWI, the radiologist must also consider the possibility of physiological laterality.

  18. [Microcomputer control of a LED stimulus display device].

    PubMed

    Ohmoto, S; Kikuchi, T; Kumada, T

    1987-02-01

    A visual stimulus display system controlled by a microcomputer was constructed at low cost. The system consists of an LED stimulus display device, a microcomputer, two interface boards, a pointing device (a "mouse") and two kinds of software. The first software package is written in BASIC. Its functions are: to construct stimulus patterns using the mouse, to construct letter patterns (alphabet, digits, symbols and Japanese letters--kanji, hiragana, katakana), to modify the patterns, to store the patterns on a floppy disc, and to translate the patterns into integer data that are used to display the patterns by the second software package. The second software package, written in BASIC and machine language, controls the display of a sequence of stimulus patterns according to predetermined time schedules in visual experiments.

  19. Is Abdominal Fetal Electrocardiography an Alternative to Doppler Ultrasound for FHR Variability Evaluation?

    PubMed Central

    Jezewski, Janusz; Wrobel, Janusz; Matonia, Adam; Horoba, Krzysztof; Martinek, Radek; Kupka, Tomasz; Jezewski, Michal

    2017-01-01

    Great expectations are connected with the application of indirect fetal electrocardiography (FECG), especially for home telemonitoring of pregnancy. Evaluation of fetal heart rate (FHR) variability, when determined from the FECG, uses the same criteria as for the FHR signal acquired classically through the Doppler ultrasound method (US). Therefore, the equivalence of those two methods has to be confirmed, both in terms of recognizing classical FHR patterns (baseline, accelerations/decelerations (A/D), long-term variability (LTV)) and in terms of evaluating FHR variability with beat-to-beat accuracy (short-term variability, STV). The research material consisted of recordings collected from 60 patients in physiological and complicated pregnancies. FHR signals of at least 30 min duration were acquired dually, using two systems for fetal and maternal monitoring based on the US and FECG methods. Recordings were retrospectively divided into normal (41) and abnormal (19) fetal outcomes, and a complex process of data synchronization and validation was performed. The low level of signal loss obtained (4.5% for the US and 1.8% for the FECG method) enabled both a direct comparison of the FHR signals and an indirect one using clinically relevant parameters. The direct comparison showed that there is no measurement bias between the acquisition methods, whereas the mean absolute difference, important for both visual and computer-aided signal analysis, was equal to 1.2 bpm. Such low differences do not affect the visual assessment of the FHR signal. In the indirect comparison, however, inconsistencies of several percent were noted; these mainly affected the acceleration (7.8%) and particularly the deceleration (54%) patterns. In the signals acquired using electrocardiography, the obtained STV and LTV indices showed significant overestimation, by 10% and 50% respectively. It also turned out that the ability of the clinical parameters to distinguish between the normal and abnormal groups does not depend on the acquisition method. The obtained results prove that abdominal FECG, considered as an alternative to the ultrasound approach, does not change the interpretation of the FHR signal, which was confirmed during both visual assessment and automated analysis. PMID:28559852
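
    As an illustration of the kind of beat-to-beat and longer-term indices compared in this study, the sketch below computes a simple short-term variability measure (mean absolute difference between successive interbeat intervals) and a crude long-term measure (mean peak-to-peak FHR range per window of beats). These particular formulas and the synthetic beat series are assumptions; published STV/LTV definitions vary between monitoring systems, and the paper's exact estimators may differ.

      import numpy as np

      def short_term_variability(intervals_ms):
          """Mean absolute difference between successive interbeat intervals (ms).
          One common beat-to-beat STV formulation; definitions vary between systems."""
          d = np.abs(np.diff(np.asarray(intervals_ms, dtype=float)))
          return float(d.mean())

      def long_term_variability(intervals_ms, beats_per_window=60):
          """Mean peak-to-peak FHR range (bpm) over consecutive windows of beats,
          used here only as a simple long-term variability proxy."""
          fhr = 60000.0 / np.asarray(intervals_ms, dtype=float)   # bpm per beat
          n = len(fhr) // beats_per_window
          windows = fhr[: n * beats_per_window].reshape(n, beats_per_window)
          return float((windows.max(axis=1) - windows.min(axis=1)).mean())

      if __name__ == "__main__":
          rng = np.random.default_rng(4)
          base = 60000.0 / 140.0                               # ~140 bpm baseline
          intervals = base + rng.normal(scale=8.0, size=600)   # synthetic beat series
          print("STV: %.2f ms" % short_term_variability(intervals))
          print("LTV: %.2f bpm" % long_term_variability(intervals))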

  20. Insensitivity of visual short-term memory to irrelevant visual information.

    PubMed

    Andrade, Jackie; Kemps, Eva; Werniers, Yves; May, Jon; Szmalec, Arnaud

    2002-07-01

    Several authors have hypothesized that visuo-spatial working memory is functionally analogous to verbal working memory. Irrelevant background speech impairs verbal short-term memory. We investigated whether irrelevant visual information has an analogous effect on visual short-term memory, using a dynamic visual noise (DVN) technique known to disrupt visual imagery (Quinn & McConnell, 1996b). Experiment I replicated the effect of DVN on pegword imagery. Experiments 2 and 3 showed no effect of DVN on recall of static matrix patterns, despite a significant effect of a concurrent spatial tapping task. Experiment 4 showed no effect of DVN on encoding or maintenance of arrays of matrix patterns, despite testing memory by a recognition procedure to encourage visual rather than spatial processing. Serial position curves showed a one-item recency effect typical of visual short-term memory. Experiment 5 showed no effect of DVN on short-term recognition of Chinese characters, despite effects of visual similarity and a concurrent colour memory task that confirmed visual processing of the characters. We conclude that irrelevant visual noise does not impair visual short-term memory. Visual working memory may not be functionally analogous to verbal working memory, and different cognitive processes may underlie visual short-term memory and visual imagery.

  1. A novel method for flow pattern identification in unstable operational conditions using gamma ray and radial basis function.

    PubMed

    Roshani, G H; Nazemi, E; Roshani, M M

    2017-05-01

    Changes in fluid properties (especially density) strongly affect the performance of radiation-based multiphase flow meters and can cause errors in recognizing the flow pattern and determining the void fraction. In this work, we propose a methodology based on a combination of multi-beam gamma-ray attenuation and dual-modality densitometry techniques, using RBF neural networks, in order to recognize the flow regime and determine the void fraction in gas-liquid two-phase flows independent of changes in the liquid phase. The proposed system consists of one 137Cs source, two transmission detectors and one scattering detector. The registered counts in the two transmission detectors were used as the inputs of a primary Radial Basis Function (RBF) neural network for recognizing the flow regime independent of liquid phase density. Then, after flow regime identification, three RBF neural networks were utilized to determine the void fraction independent of liquid phase density; the registered counts in the scattering detector and the first transmission detector were used as their inputs. Using this simple methodology, all the flow patterns were correctly recognized and the void fraction was predicted independent of liquid phase density with a mean relative error (MRE) of less than 3.28%. Copyright © 2017 Elsevier Ltd. All rights reserved.
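
    An RBF network of the kind referred to above can be assembled from a layer of Gaussian basis functions, centred for example by k-means, followed by a linear readout. The sketch below is such a generic classifier trained on synthetic two-detector counts; it is not the authors' trained model, and the feature layout, centre count and kernel width are assumptions.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import LogisticRegression

      class RBFNetClassifier:
          """Gaussian RBF layer (k-means centres) followed by a linear classifier."""

          def __init__(self, n_centers=10, gamma=1.0):
              self.n_centers = n_centers
              self.gamma = gamma

          def _features(self, X):
              # squared distances from every sample to every centre
              d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
              return np.exp(-self.gamma * d2)

          def fit(self, X, y):
              self.centers_ = KMeans(n_clusters=self.n_centers, n_init=10,
                                     random_state=0).fit(X).cluster_centers_
              self.readout_ = LogisticRegression(max_iter=1000).fit(self._features(X), y)
              return self

          def predict(self, X):
              return self.readout_.predict(self._features(X))

      if __name__ == "__main__":
          rng = np.random.default_rng(5)
          # toy stand-in for two transmission-detector counts per flow regime
          X, y = [], []
          for k, centre in enumerate([(2.0, 1.0), (1.0, 2.0), (1.5, 1.5)]):
              X.append(rng.normal(centre, 0.15, size=(100, 2)))
              y += [k] * 100
          X, y = np.vstack(X), np.array(y)
          model = RBFNetClassifier(n_centers=9, gamma=5.0).fit(X, y)
          print("training accuracy:", (model.predict(X) == y).mean())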

  2. Essential Use Cases for Pedagogical Patterns

    ERIC Educational Resources Information Center

    Derntl, Michael; Botturi, Luca

    2006-01-01

    Coming from architecture, through computer science, pattern-based design spread into other disciplines and is nowadays recognized as a powerful way of capturing and reusing effective design practice. However, current pedagogical pattern approaches lack widespread adoption, both by users and authors, and are still limited to individual initiatives.…

  3. Case-Based Behavior Recognition in Beyond Visual Range Air Combat

    DTIC Science & Technology

    2015-05-01

    Auslander et al. 2014), which can then be given to a plan generator . Recognizing high-level behaviors, which is our focus, should also help to recognize...kinematically avoid a missile by flying away from it), and Crank (an agent flies at the maximum offset but tries to keep its target in radar). For the UAV and...3.4 the pruning algorithm must take into account the limitations of the acquisition system, which generally results in All Out Aggressive cases being

  4. Eye tracking to evaluate evidence recognition in crime scene investigations.

    PubMed

    Watalingam, Renuka Devi; Richetelli, Nicole; Pelz, Jeff B; Speir, Jacqueline A

    2017-11-01

    Crime scene analysts are the core of criminal investigations; decisions made at the scene greatly affect the speed of analysis and the quality of conclusions, thereby directly impacting the successful resolution of a case. If an examiner fails to recognize the pertinence of an item on scene, the analyst's theory regarding the crime will be limited. Conversely, unselective evidence collection will most likely include irrelevant material, thus increasing a forensic laboratory's backlog and potentially sending the investigation into an unproductive and costly direction. Therefore, it is critical that analysts recognize and properly evaluate forensic evidence that can assess the relative support of differing hypotheses related to event reconstruction. With this in mind, the aim of this study was to determine if quantitative eye tracking data and qualitative reconstruction accuracy could be used to distinguish investigator expertise. In order to assess this, 32 participants were successfully recruited and categorized as experts or trained novices based on their practical experiences and educational backgrounds. Each volunteer then processed a mock crime scene while wearing a mobile eye tracker, wherein visual fixations, durations, search patterns, and reconstruction accuracy were evaluated. The eye tracking data (dwell time and task percentage on areas of interest or AOIs) were compared using Earth Mover's Distance (EMD) and the Needleman-Wunsch (N-W) algorithm, revealing significant group differences for both search duration (EMD), as well as search sequence (N-W). More specifically, experts exhibited greater dissimilarity in search duration, but greater similarity in search sequences than their novice counterparts. In addition to the quantitative visual assessment of examiner variability, each participant's reconstruction skill was assessed using a 22-point binary scoring system, in which significant group differences were detected as a function of total reconstruction accuracy. This result, coupled with the fact that the study failed to detect a significant difference between the groups when evaluating the total time needed to complete the investigation, indicates that experts are more efficient and effective. Finally, the results presented here provide a basis for continued research in the use of eye trackers to assess expertise in complex and distributed environments, including suggestions for future work, and cautions regarding the degree to which visual attention can infer cognitive understanding. Copyright © 2017 Elsevier B.V. All rights reserved.
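
    The Needleman-Wunsch score used above to compare AOI fixation sequences is a global alignment score computed by dynamic programming. A minimal implementation with a simple match/mismatch/gap scheme follows; the scoring parameters and the example sequences are assumptions, since the study's settings are not given in the abstract.

      def needleman_wunsch(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
          """Global alignment score between two sequences of AOI labels."""
          n, m = len(seq_a), len(seq_b)
          score = [[0] * (m + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              score[i][0] = i * gap
          for j in range(1, m + 1):
              score[0][j] = j * gap
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1]
                                                else mismatch)
                  score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
          return score[n][m]

      if __name__ == "__main__":
          # hypothetical AOI visit sequences for two investigators
          expert = ["entry", "body", "weapon", "blood", "exit"]
          novice = ["entry", "weapon", "body", "body", "exit"]
          print("alignment score:", needleman_wunsch(expert, novice))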

  5. TreeNetViz: revealing patterns of networks over tree structures.

    PubMed

    Gou, Liang; Zhang, Xiaolong Luke

    2011-12-01

    Network data often contain important attributes from various dimensions such as social affiliations and areas of expertise in a social network. If such attributes exhibit a tree structure, visualizing a compound graph consisting of tree and network structures becomes complicated. How to visually reveal patterns of a network over a tree has not been fully studied. In this paper, we propose a compound graph model, TreeNet, to support visualization and analysis of a network at multiple levels of aggregation over a tree. We also present a visualization design, TreeNetViz, to offer multiscale and cross-scale exploration of, and interaction with, a TreeNet graph. TreeNetViz uses a Radial, Space-Filling (RSF) visualization to represent the tree structure, a circle layout with novel optimization to show aggregated networks derived from TreeNet, and an edge bundling technique to reduce visual complexity. Our circular layout algorithm reduces both total edge-crossings and edge length and also considers hierarchical structure constraints and edge weight in a TreeNet graph. Experiments illustrate that the algorithm can reduce visual clutter in TreeNet graphs. Our case study also shows that TreeNetViz has the potential to support the analysis of a compound graph by revealing multiscale and cross-scale network patterns. © 2011 IEEE

  6. Patterns of species richness and the center of diversity in modern Indo-Pacific larger foraminifera.

    PubMed

    Förderer, Meena; Rödder, Dennis; Langer, Martin R

    2018-05-29

    Symbiont-bearing Larger Benthic Foraminifera (LBF) are ubiquitous components of shallow tropical and subtropical environments and contribute substantially to carbonaceous reef and shelf sediments. Climate change is dramatically affecting carbonate producing organisms and threatens the diversity and structural integrity of coral reef ecosystems. Recent invertebrate and vertebrate surveys have identified the Coral Triangle as the planet's richest center of marine life delineating the region as a top priority for conservation. We compiled and analyzed extensive occurrence records for 68 validly recognized species of LBF from the Indian and Pacific Ocean, established individual range maps and applied Minimum Convex Polygon (MCP) and Species Distribution Model (SDM) methodologies to create the first ocean-wide species richness maps. SDM output was further used for visualizing latitudinal and longitudinal diversity gradients. Our findings provide strong support for assigning the tropical Central Indo-Pacific as the world's species-richest marine region with the Central Philippines emerging as the bullseye of LBF diversity. Sea surface temperature and nutrient content were identified as the most influential environmental constraints exerting control over the distribution of LBF. Our findings contribute to the completion of worldwide research on tropical marine biodiversity patterns and the identification of targeting centers for conservation efforts.
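
    The Minimum Convex Polygon range estimate mentioned above is simply the convex hull of a species' occurrence points. The sketch below computes one with scipy from hypothetical longitude/latitude records (treated as planar coordinates, so the area is only a rough proxy at large extents); the point data are invented.

      import numpy as np
      from scipy.spatial import ConvexHull

      def minimum_convex_polygon(lonlat):
          """Convex hull of occurrence points (lon, lat); returns the hull vertices
          and its planar area (deg^2 -- only an indicative value at large extents)."""
          pts = np.asarray(lonlat, dtype=float)
          hull = ConvexHull(pts)
          return pts[hull.vertices], hull.volume      # 'volume' is the area in 2-D

      if __name__ == "__main__":
          rng = np.random.default_rng(6)
          # hypothetical occurrence records scattered around the Central Philippines
          records = rng.normal(loc=(122.0, 11.0), scale=(3.0, 2.0), size=(40, 2))
          vertices, area = minimum_convex_polygon(records)
          print("MCP vertices:", len(vertices), "area (deg^2): %.1f" % area)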

  7. Patterns of Negotiation

    NASA Astrophysics Data System (ADS)

    Sood, Suresh; Pattinson, Hugh

    Traditionally, face-to-face negotiations in the real world have not been looked at as a complex-systems interaction of actors resulting in a dynamic and potentially emergent system. If negotiations are indeed the outcome of a dynamic interaction of simpler behaviors, just as in a complex system, we should be able to see the patterns contributing to the complexity of a negotiation under study. This paper and the supporting research set out to show that B2B (business-to-business) negotiations are complex systems of interacting actors exhibiting dynamic and emergent behavior. The paper discusses exploratory research based on negotiation simulations in which a large number of business students participate as buyers and sellers. Students negotiate with partners, their interactions are captured on video, and a purpose-built research method looks for patterns of interaction between actors using visualization techniques traditionally reserved for observing the algorithmic complexity of complex systems. Each video is tagged according to a recognized classification and coding scheme for negotiations; the classification relates to the phases through which any particular negotiation might pass, such as laughter, aggression, compromise, and so forth, through some 30 possible categories. Were negotiations more or less successful if they progressed through the categories in different ways? Furthermore, do the data depict emergent pathway segments considered to be more or less successful? This focus on emergence within the data provides further strong support for construing face-to-face (F2F) negotiations as complex systems.

  8. Severe South American Ocular Toxoplasmosis Is Associated with Decreased Ifn-γ/Il-17a and Increased Il-6/Il-13 Intraocular Levels

    PubMed Central

    de-la-Torre, Alejandra; Sauer, Arnaud; Pfaff, Alexander W.; Bourcier, Tristan; Brunet, Julie; Speeg-Schatz, Claude; Ballonzoli, Laurent; Villard, Odile; Ajzenberg, Daniel; Sundar, Natarajan; Grigg, Michael E.

    2013-01-01

    In a cross sectional study, 19 French and 23 Colombian cases of confirmed active ocular toxoplasmosis (OT) were evaluated. The objective was to compare clinical, parasitological and immunological responses and relate them to the infecting strains. A complete ocular examination was performed in each patient. The infecting strain was characterized by genotyping when intraocular Toxoplasma DNA was detectable, as well as by peptide-specific serotyping for each patient. To characterize the immune response, we assessed Toxoplasma protein recognition patterns by intraocular antibodies and the intraocular profile of cytokines, chemokines and growth factors. Significant differences were found for size of active lesions, unilateral macular involvement, unilateral visual impairment, vitreous inflammation, synechiae, and vasculitis, with higher values observed throughout for Colombian patients. Multilocus PCR-DNA sequence genotyping was only successful in three Colombian patients revealing one type I and two atypical strains. The Colombian OT patients possessed heterogeneous atypical serotypes whereas the French were uniformly reactive to type II strain peptides. The protein patterns recognized by intraocular antibodies and the cytokine patterns were strikingly different between the two populations. Intraocular IFN-γ and IL-17 expression was lower, while higher levels of IL-13 and IL-6 were detected in aqueous humor of Colombian patients. Our results are consistent with the hypothesis that South American strains may cause more severe OT due to an inhibition of the protective effect of IFN-γ. PMID:24278490

  9. Localized direction selective responses in the dendrites of visual interneurons of the fly

    PubMed Central

    2010-01-01

    Background: The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has as yet received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results: Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions: Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983

  10. Asymmetric top-down modulation of ascending visual pathways in pigeons.

    PubMed

    Freund, Nadja; Valencia-Alfonso, Carlos E; Kirsch, Janina; Brodmann, Katja; Manns, Martina; Güntürkün, Onur

    2016-03-01

    Cerebral asymmetries are a ubiquitous phenomenon evident in many species, incl. humans, and they display some similarities in their organization across vertebrates. In many species the left hemisphere is associated with the ability to categorize objects based on abstract or experience-based behaviors. Using the asymmetrically organized visual system of pigeons as an animal model, we show that descending forebrain pathways asymmetrically modulate visually evoked responses of single thalamic units. Activity patterns of neurons within the nucleus rotundus, the largest thalamic visual relay structure in birds, were differently modulated by left and right hemispheric descending systems. Thus, visual information ascending towards the left hemisphere was modulated by forebrain top-down systems at thalamic level, while right thalamic units were strikingly less modulated. This asymmetry of top-down control could promote experience-based processes within the left hemisphere, while biasing the right side towards stimulus-bound response patterns. In a subsequent behavioral task we tested the possible functional impact of this asymmetry. Under monocular conditions, pigeons learned to discriminate color pairs, so that each hemisphere was trained on one specific discrimination. Afterwards the animals were presented with stimuli that put the hemispheres in conflict. Response patterns on the conflicting stimuli revealed a clear dominance of the left hemisphere. Transient inactivation of left hemispheric top-down control reduced this dominance while inactivation of right hemispheric top-down control had no effect on response patterns. Functional asymmetries of descending systems that modify visual ascending pathways seem to play an important role in the superiority of the left hemisphere in experience-based visual tasks. Copyright © 2015. Published by Elsevier Ltd.

  11. Use of a twin dataset to identify AMD-related visual patterns controlled by genetic factors

    NASA Astrophysics Data System (ADS)

    Quellec, Gwénolé; Abràmoff, Michael D.; Russell, Stephen R.

    2010-03-01

    The mapping of genotype to the phenotype of age-related macular degeneration (AMD) is expected to improve the diagnosis and treatment of the disease in the near future. In this study, we focused on the first step toward discovering this mapping: we identified visual patterns related to AMD which seem to be controlled by genetic factors, without explicitly relating them to the genes. For this purpose, we used a dataset of eye fundus photographs from 74 twin pairs, either monozygotic twins, who have the same genotype, or dizygotic twins, whose genes responsible for AMD are less likely to be identical. If we are able to differentiate monozygotic twins from dizygotic twins based on a given visual pattern, then this pattern is likely to be controlled by genetic factors. The main visible consequence of AMD is the appearance of drusen between the retinal pigment epithelium and Bruch's membrane. We developed two automated drusen detectors based on the wavelet transform: a shape-based detector for hard drusen, and a texture- and color-based detector for soft drusen. Forty visual features were evaluated at the location of the automatically detected drusen. These features characterize the texture, the shape, the color, the spatial distribution, or the amount of drusen. A distance measure between twin pairs was defined for each visual feature; a smaller distance should be measured between monozygotic twins for visual features controlled by genetic factors. The predictions based on several visual features (75.7% accuracy) are comparable to or better than the predictions of human experts.
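
    The underlying statistical idea, deciding whether a drusen-derived feature is under genetic control by testing whether monozygotic pairs are closer on it than dizygotic pairs, can be sketched as a simple separability check. The feature values, pair counts and the use of an ROC AUC below are illustrative assumptions rather than the authors' exact procedure.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      def pair_distances(features_a, features_b):
          """Absolute within-pair difference of a scalar visual feature."""
          return np.abs(np.asarray(features_a) - np.asarray(features_b))

      def genetic_control_score(mz_pairs, dz_pairs):
          """AUC for separating monozygotic from dizygotic pairs by within-pair
          distance; values well above 0.5 suggest the feature tracks zygosity."""
          d_mz = pair_distances(*mz_pairs)
          d_dz = pair_distances(*dz_pairs)
          distances = np.concatenate([d_mz, d_dz])
          labels = np.concatenate([np.zeros(len(d_mz)), np.ones(len(d_dz))])
          # DZ pairs are expected to be farther apart, so "DZ" is the positive class
          return roc_auc_score(labels, distances)

      if __name__ == "__main__":
          rng = np.random.default_rng(7)
          base_mz = rng.normal(50, 10, 37)          # e.g. % of macula covered by drusen
          mz = (base_mz + rng.normal(0, 2, 37), base_mz + rng.normal(0, 2, 37))
          base_dz = rng.normal(50, 10, 37)
          dz = (base_dz + rng.normal(0, 6, 37), base_dz + rng.normal(0, 6, 37))
          print("AUC: %.2f" % genetic_control_score(mz, dz))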

  12. Aging effect in pattern, motion and cognitive visual evoked potentials.

    PubMed

    Kuba, Miroslav; Kremláček, Jan; Langrová, Jana; Kubová, Zuzana; Szanyi, Jana; Vít, František

    2012-06-01

    An electrophysiological study on the effect of aging on the visual pathway and various levels of visual information processing (primary cortex, associate visual motion processing cortex and cognitive cortical areas) was performed. We examined visual evoked potentials (VEPs) to pattern-reversal, motion-onset (translation and radial motion) and visual stimuli with a cognitive task (cognitive VEPs - P300 wave) at a luminance of 17 cd/m². The most significant age-related change in a group of 150 healthy volunteers (15-85 years of age) was the increase in the P300 wave latency (2 ms per 1 year of age). Delays of the motion-onset VEPs (0.47 ms/year in translation and 0.46 ms/year in radial motion) and the pattern-reversal VEPs (0.26 ms/year) and the reductions of their amplitudes with increasing subject age (primarily in P300) were also found to be significant. The amplitude of the motion-onset VEPs to radial motion remained the most constant parameter with increasing age. Age-related changes were stronger in males. Our results indicate that cognitive VEPs, despite larger variability of their parameters, could be a useful criterion for an objective evaluation of the aging processes within the CNS. Possible differences in aging between the motion-processing system and the form-processing system within the visual pathway might be indicated by the more pronounced delay in the motion-onset VEPs and by their preserved size for radial motion (a biologically significant variant of motion) compared to the changes in pattern-reversal VEPs. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf

    PubMed Central

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao

    2016-01-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
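
    Decoding a stimulus dimension from multivoxel activity patterns is commonly done with a cross-validated linear classifier. The sketch below illustrates that general approach on synthetic data; the voxel counts, classifier choice, and cross-validation scheme are illustrative assumptions, not the pipeline used in the study.

      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)

      # Synthetic "auditory cortex" voxel patterns: 80 trials x 200 voxels,
      # with 4 visual-field locations as class labels (illustrative numbers).
      n_trials, n_voxels, n_locations = 80, 200, 4
      y = np.repeat(np.arange(n_locations), n_trials // n_locations)
      X = rng.normal(size=(n_trials, n_voxels))
      # Inject a weak location-dependent signal into a subset of voxels.
      X[:, :10] += y[:, None] * 0.4

      # Cross-validated decoding accuracy; chance level is 1 / n_locations = 25%.
      clf = make_pipeline(StandardScaler(), LinearSVC())
      acc = cross_val_score(clf, X, y, cv=5).mean()
      print(f"decoding accuracy: {acc:.2f} (chance = {1 / n_locations:.2f})")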

  14. An association between auditory-visual synchrony processing and reading comprehension: Behavioral and electrophysiological evidence

    PubMed Central

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2016-01-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060
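
    Because variance explained is the square of the correlation coefficient, the 16% and 25% figures correspond to correlations of roughly 0.40 and 0.50; the conversion is shown below.

      import math

      # Variance explained (R^2) reported in the abstract for the behavioral and
      # electrophysiological predictors of reading comprehension.
      for label, r_squared in [("behavioral synchrony judgment", 0.16),
                               ("electrophysiological marker", 0.25)]:
          print(f"{label}: R^2 = {r_squared:.2f} -> |r| ≈ {math.sqrt(r_squared):.2f}")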

  15. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    PubMed

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  16. Elastic, Cottage Cheese, and Gasoline: Visualizing Division of Fractions

    ERIC Educational Resources Information Center

    Peck, Sallie; Wood, Japheth

    2008-01-01

    Teachers must be prepared to recognize valid alternative representations of arithmetic problems. Challenging examples involving mixed fractions and division are presented along with teacher's discussion from a professional development workshop. (Contains 6 figures and 1 table.)

  17. Superconductive neuristor R-junction

    NASA Technical Reports Server (NTRS)

    Reible, S. A.

    1976-01-01

    A device incorporating a specially configured pure-metal transition region can be developed to simulate a nerve cell. Combinations of such cells may be formed to simulate an eye or brain and can be used to recognize characters and other visual images.

  18. On vegetation mapping in Alaska using LANDSAT imagery with primary concerns for method and purpose in satellite image-based vegetation and land-use mapping and the visual interpretation of imagery in photographic format

    NASA Technical Reports Server (NTRS)

    Anderson, J. H. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. A simulated color infrared LANDSAT image covering the western Seward Peninsula was used for identifying and mapping vegetation by direct visual examination. The 1:1,083,400 scale print used was prepared by a color additive process using positive transparencies from MSS bands 4, 5, and 7. Seven color classes were recognized. A vegetation map of a 3200 sq km area just west of Fairbanks, Alaska, was made. Five colors were recognized on the image and assigned to vegetation types roughly equivalent to formations in the UNESCO classification: orange - broadleaf deciduous forest; gray - needleleaf evergreen forest; light violet - subarctic alpine tundra vegetation; violet - broadleaf deciduous shrub thicket; and dull violet - bog vegetation.

  19. Interference with facial emotion recognition by verbal but not visual loads.

    PubMed

    Reed, Phil; Steed, Ian

    2015-12-01

    The ability to recognize emotions through facial characteristics is critical for social functioning, but is often impaired in those with a developmental or intellectual disability. The current experiments explored the degree to which interfering with the processing capacities of typically-developing individuals would produce a similar inability to recognize emotions through the facial elements of faces displaying particular emotions. It was found that increasing the cognitive load (in an attempt to model learning impairments in a typically developing population) produced deficits in correctly identifying emotions from facial elements. However, this effect was much more pronounced when using a concurrent verbal task than when employing a concurrent visual task, suggesting that there is a substantial verbal element to the labeling and subsequent recognition of emotions. This concurs with previous work conducted with those with developmental disabilities that suggests emotion recognition deficits are connected with language deficits. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    NASA Astrophysics Data System (ADS)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.
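
    One generic way to combine two such modality-specific recognizers is weighted late fusion of their per-emotion probability outputs. The sketch below only illustrates that idea; the weights, probability values, and fusion rule are assumptions, not the authors' system.

      import numpy as np

      EMOTIONS = ["happiness", "sadness", "surprise", "anger", "disgust", "neutral"]

      def fuse(p_visual, p_keyboard, w_visual=0.6):
          """Weighted late fusion of two per-emotion probability vectors."""
          p = w_visual * np.asarray(p_visual) + (1 - w_visual) * np.asarray(p_keyboard)
          return p / p.sum()  # renormalize

      # Hypothetical outputs of the two unimodal recognizers for one user interaction.
      p_visual = [0.30, 0.05, 0.35, 0.10, 0.05, 0.15]    # facial-expression subsystem
      p_keyboard = [0.10, 0.05, 0.55, 0.10, 0.05, 0.15]  # keystroke-pattern subsystem

      fused = fuse(p_visual, p_keyboard)
      print("fused estimate:", EMOTIONS[int(np.argmax(fused))])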

  1. Creating a meaningful visual perception in blind volunteers by optic nerve stimulation

    NASA Astrophysics Data System (ADS)

    Brelén, M. E.; Duret, F.; Gérard, B.; Delbeke, J.; Veraart, C.

    2005-03-01

    A blind volunteer suffering from retinitis pigmentosa has been chronically implanted with an optic nerve visual prosthesis. Vision rehabilitation with this volunteer has concentrated on the development of a stimulation strategy according to which video camera images are converted into stimulation pulses. The aim is to convey as much information as possible about the visual scene within the limits of the device's capabilities. Pattern recognition tasks were used to assess the effectiveness of the stimulation strategy. The results demonstrate how even a relatively basic algorithm can efficiently convey useful information about the visual scene. Increasing the number of phosphenes used in the algorithm yields better performance but requires a longer training period. After a learning period, the volunteer achieved a pattern recognition score of 85%, taking 54 s on average per pattern. After nine evaluation sessions with a stimulation strategy exploiting all available phosphenes, no saturation effect has yet been observed.

  2. Bayesian learning of visual chunks by human observers

    PubMed Central

    Orbán, Gergő; Fiser, József; Aslin, Richard N.; Lengyel, Máté

    2008-01-01

    Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input. PMID:18268353
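
    The flavor of such a Bayesian model comparison can be conveyed with a drastically simplified toy example: two shapes either always co-occur (a "chunk") or appear independently, and the marginal likelihoods of a chunk model and an independence model are compared under Beta-Bernoulli priors. This sketch is not the paper's ideal learner; the leak term and priors are assumptions made only for illustration.

      import numpy as np
      from scipy.special import betaln

      def log_marginal_bernoulli(k, n, a=1.0, b=1.0):
          """Log marginal likelihood of k successes in n trials under a Beta(a, b) prior."""
          return betaln(k + a, n - k + b) - betaln(a, b)

      # Toy scenes: does shape A / shape B appear in each of n scenes?
      rng = np.random.default_rng(2)
      n = 100
      A = rng.random(n) < 0.5
      B = A.copy()  # A and B always co-occur -> a true "chunk"

      # Independence model: A and B each have their own appearance probability.
      log_indep = (log_marginal_bernoulli(A.sum(), n)
                   + log_marginal_bernoulli(B.sum(), n))

      # Chunk model: the pair (A, B) appears as a unit with a single probability;
      # scenes where only one of them appears are nearly unexplained, approximated
      # here by a tiny leak probability per such scene.
      both = int(np.sum(A & B))
      either_only = int(np.sum(A ^ B))
      log_chunk = log_marginal_bernoulli(both, n) + either_only * np.log(1e-6)

      print("log Bayes factor (chunk vs independent):", log_chunk - log_indep)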

  3. Discovering Visual Scanning Patterns in a Computerized Cancellation Test

    ERIC Educational Resources Information Center

    Huang, Ho-Chuan; Wang, Tsui-Ying

    2013-01-01

    The purpose of this study was to develop an attention sequential mining mechanism for investigating the sequential patterns of children's visual scanning process in a computerized cancellation test. Participants had to locate and cancel the target amongst other non-targets in a structured form, and a random form with Chinese stimuli. Twenty-three…

  4. Gaze Patterns of Gross Anatomy Students Change with Classroom Learning

    ERIC Educational Resources Information Center

    Zumwalt, Ann C.; Iyer, Arjun; Ghebremichael, Abenet; Frustace, Bruno S.; Flannery, Sean

    2015-01-01

    Numerous studies have documented that experts exhibit more efficient gaze patterns than those of less experienced individuals. In visual search tasks, experts use fewer, longer fixations to fixate for relatively longer on salient regions of the visual field while less experienced observers spend more time examining nonsalient regions. This study…

  5. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  6. Visualization of dietary patterns and their associations with age-related macular degeneration

    USDA-ARS?s Scientific Manuscript database

    PURPOSE: We aimed to visualize the relationship of predominant dietary patterns and their associations with AMD. METHODS: A total of 8103 eyes from 4088 participants in the baseline Age-Related Eye Disease Study (AREDS) were classified into three groups: control (n=2739), early AMD (n=4599), and adv...

  7. Experience improves feature extraction in Drosophila.

    PubMed

    Peng, Yueqing; Xi, Wang; Zhang, Wei; Zhang, Ke; Guo, Aike

    2007-05-09

    Previous exposure to a pattern in the visual scene can enhance subsequent recognition of that pattern in many species, from honeybees to humans. However, whether previous experience with a visual feature of an object, such as color or shape, can also facilitate later recognition of that particular feature among multiple visual features is largely unknown. Visual feature extraction is the ability to select the key component from multiple visual features. Using a visual flight simulator, we designed a novel protocol for visual feature extraction to investigate the effects of previous experience on visual reinforcement learning in Drosophila. We found that, after conditioning with a visual feature of objects among combinatorial shape-color features, wild-type flies exhibited poor ability to extract the correct visual feature. However, the ability for visual feature extraction was greatly enhanced in flies previously trained with that visual feature alone. Moreover, we demonstrated that flies might possess the ability to extract the abstract category of "shape" but not a particular shape. Finally, this experience-dependent feature extraction is absent in flies with defective mushroom bodies (MBs), one of the central brain structures in Drosophila. Our results indicate that previous experience can enhance visual feature extraction in Drosophila and that MBs are required for this experience-dependent visual cognition.

  8. Visual Attention Patterns of Women with Androphilic and Gynephilic Sexual Attractions.

    PubMed

    Dawson, Samantha J; Fretz, Katherine M; Chivers, Meredith L

    2017-01-01

    Women who report exclusive sexual attractions to men (i.e., androphilia) exhibit gender-nonspecific patterns of sexual response: similar magnitude of genital response to both male and female targets. Interestingly, women reporting any degree of attraction to women (i.e., gynephilia) show significantly greater sexual responses to stimuli depicting female targets compared to male targets. At present, the mechanism(s) underlying these patterns are unknown. According to the information processing model (IPM), attentional processing of sexual cues initiates sexual responding; thus, attention to sexual cues may be one mechanism to explain the observed within-gender differences in specificity findings among women. The purpose of the present study was to examine patterns of initial and controlled visual attention among women with varying sexual attractions. We used eye tracking to assess visual attention to sexually preferred and nonpreferred cues in a sample of 164 women who differed in their degree of androphilia and gynephilia. We found that both exclusively and predominantly androphilic women showed gender-nonspecific patterns of initial attention. In contrast, ambiphilic (i.e., concurrent androphilia and gynephilia) and predominantly/exclusively gynephilic women oriented more quickly toward female targets. Controlled attention patterns mirrored patterns of self-reported sexual attractions for three of these four groups of women, such that gender-specific patterns of visual attention were found for androphilic and gynephilic women. Ambiphilic women looked significantly longer at female targets compared to male targets. These findings support predictions from the IPM and suggest that both initial and controlled attention to sexual cues may be mechanisms contributing to within-gender variation in sexual responding.

  9. [Plot analysis in the dark coniferous ecosystem using GPS and GIS techniques].

    PubMed

    Guan, Wenbin; Xie, Chunhua; Wu, Jian'an; Yu, Xinxiao; Chen, Gengwei; Li, Tongyang

    2002-07-01

    Surveying primary forests in high-altitude regions is generally difficult. With GPS and GIS techniques, however, plots can be conveniently identified and delineated, and the spatial pattern of trees can be displayed precisely. Using rapid-static positioning combined with tape measurements, the positioning was relatively precise except for a few points: the mean RMS value was 2.84 (variance 2.96), and delta B, delta L, and delta H were 1.2, 1.2, and 4.3 m, with variances of +/- 0.6, +/- 1.1, and +/- 21.1, respectively, which is sufficient for forestry management purposes. Combined with other models, many ecological processes at small and medium scales, such as the dynamics of gap succession, could also be simulated visually in GIS. The "2S" techniques are therefore well suited to fine-scale forest ecosystem management, especially in high-altitude areas.
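
    The kind of summary reported here (mean coordinate offsets, their variances, and an RMS figure) can be computed from paired GPS and tape measurements as in the sketch below; the offsets are synthetic and only mimic the magnitudes quoted in the abstract.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic paired offsets between GPS fixes and tape-measured positions (m):
      # delta_B (latitude), delta_L (longitude), delta_H (height) for 30 plot corners.
      offsets = np.column_stack([
          rng.normal(1.2, 0.8, 30),   # delta_B
          rng.normal(1.2, 1.0, 30),   # delta_L
          rng.normal(4.3, 4.5, 30),   # delta_H (height is usually the weakest GPS axis)
      ])

      means = offsets.mean(axis=0)
      variances = offsets.var(axis=0, ddof=1)
      rms = np.sqrt((offsets ** 2).mean(axis=0))

      for name, m, v, r in zip(["delta_B", "delta_L", "delta_H"], means, variances, rms):
          print(f"{name}: mean = {m:.1f} m, variance = {v:.1f}, RMS = {r:.1f} m")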

  10. Misophonia: diagnostic criteria for a new psychiatric disorder.

    PubMed

    Schröder, Arjan; Vulink, Nienke; Denys, Damiaan

    2013-01-01

    Some patients report a preoccupation with a specific aversive human sound that triggers impulsive aggression. This condition is relatively unknown and has hitherto not been described, although the phenomenon has anecdotally been named misophonia. Forty-two patients who reported misophonia were recruited via our hospital website. All patients were interviewed by an experienced psychiatrist and were screened with an adapted version of the Y-BOCS, HAM-D, HAM-A, SCL-90 and SCID II. The misophonia patients shared a similar pattern of symptoms in which an auditory or visual stimulus provoked an immediate aversive physical reaction with anger, disgust and impulsive aggression. The intensity of these emotions caused subsequent obsessions with the cue, avoidance, and social dysfunction with intense suffering. The symptoms cannot be classified in the current nosological DSM-IV TR or ICD-10 systems. We suggest that misophonia should be classified as a discrete psychiatric disorder. Diagnostic criteria could help to officially recognize the patients and the disorder, improve its identification by health care professionals, and encourage scientific research.

  11. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  12. The dual function of barred plumage in birds: camouflage and communication.

    PubMed

    Gluckman, T L; Cardoso, G C

    2010-11-01

    A commonly held principle in visual ecology is that communication compromises camouflage: while visual signals are often conspicuous, camouflage provides concealment. However, some traits may have evolved for communication and camouflage simultaneously, thereby overcoming this functional compromise. Visual patterns generally provide camouflage, but it was suggested that a particular type of visual pattern – avian barred plumage – could also be a signal of individual quality. Here, we test if the evolution of sexual dimorphism in barred plumage, as well as differences between juvenile and adult plumage, indicate camouflage and/or signalling functions across the class Aves. We found a higher frequency of female- rather than male-biased sexual dimorphism in barred plumage, indicating that camouflage is its most common function. But we also found that, compared to other pigmentation patterns, barred plumage is more frequently biased towards males and its expression more frequently restricted to adulthood, suggesting that barred plumage often evolves or is maintained as a sexual communication signal. This illustrates how visual traits can accommodate the apparently incompatible functions of camouflage and communication, which has implications for our understanding of avian visual ecology and sexual ornamentation.

  13. Visual symptomatology and referral patterns for Operation Iraqi Freedom and Operation Enduring Freedom veterans with traumatic brain injury.

    PubMed

    Bulson, Ryan; Jun, Weon; Hayes, John

    2012-01-01

    Advances in protective armor technology and changes in the "patterns of war" have created a population of Operation Iraqi Freedom/Operation Enduring Freedom (OIF/OEF) veterans with traumatic brain injury (TBI) that presents a unique challenge to Department of Veterans Affairs (VA) healthcare practitioners. The purpose of the study was to determine the frequency of symptomatic ocular and visual sequelae of TBI in OIF/OEF veterans at the Portland VA Medical Center, a Polytrauma Support Clinic Team site. A retrospective analysis of 100 OIF/OEF veterans with TBI was conducted to determine the prevalence of ocular and visual complaints. Referral patterns were also investigated. Visual symptoms were reported by approximately 50% of veterans with TBI. Loss of consciousness, but not the number of deployments or the number of blast exposures, was found to have a statistically significant association with the severity of reported visual symptoms. The most commonly reported symptoms included blurred vision (67%), photosensitivity (50%), and accommodative problems (40%). Visual symptoms of OIF/OEF veterans at the Portland VA Medical Center are reported at slightly lower rates than in similar studies conducted at the Palo Alto and Edward Hines Jr VA facilities.

  14. The Common Prescription Patterns Based on the Hierarchical Clustering of Herb-Pairs Efficacies

    PubMed Central

    2016-01-01

    Prescription patterns are rules or regularities used to generate, recognize, or judge a prescription. Most existing studies have focused on specific prescription patterns for particular diseases or syndromes, while little attention has been paid to the common patterns, which reflect a global view of the regularities of prescriptions. In this paper, we designed a method, CPPM, to find the common prescription patterns. CPPM is based on hierarchical clustering of herb-pair efficacies (HPEs). First, the HPEs were hierarchically clustered; second, individual herbs were labeled by their HPEC (the cluster of HPEs); then prescription patterns were extracted from combinations of HPECs; finally, the common patterns were identified statistically. The results showed that HPEs have a hierarchical clustering structure. When the clustering level was 2 and the HPEs were divided into two clusters, the common prescription patterns became obvious: among 332 candidate prescriptions, 319 followed the common patterns. The patterns can be described as follows: if a prescription contains herbs from one cluster (C1), it is very likely to also contain herbs from the other cluster (C2), whereas a prescription containing herbs from C2 may contain no herbs from C1. Finally, we discuss how the common patterns are mathematically consistent with the Blood-Qi theory. PMID:27190534
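
    A rough sketch of the two-cluster idea, using generic hierarchical clustering and a "contains C1 implies contains C2" check, is given below; the efficacy vectors and prescriptions are synthetic, and the real CPPM feature encoding and statistics are not reproduced.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(4)

      # Synthetic efficacy vectors for 20 herbs (rows); in CPPM the clustering is done
      # on herb-pair efficacies, but the same two-cluster idea applies.
      herbs = [f"herb{i}" for i in range(20)]
      efficacy = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(3, 1, (10, 5))])

      # Hierarchical clustering cut at two clusters (clustering "level 2" in the paper).
      labels = fcluster(linkage(efficacy, method="ward"), t=2, criterion="maxclust")
      cluster_of = dict(zip(herbs, labels))

      # Synthetic prescriptions: sets of herbs.
      prescriptions = [set(rng.choice(herbs, size=5, replace=False)) for _ in range(50)]

      # Common pattern check: a prescription containing any C1 herb also contains a C2 herb.
      def follows_pattern(p):
          has_c1 = any(cluster_of[h] == 1 for h in p)
          has_c2 = any(cluster_of[h] == 2 for h in p)
          return (not has_c1) or has_c2

      n_follow = sum(follows_pattern(p) for p in prescriptions)
      print(f"{n_follow}/{len(prescriptions)} prescriptions follow the C1 -> C2 pattern")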

  15. Performance of normal adults and children on central auditory diagnostic tests and their corresponding visual analogs.

    PubMed

    Bellis, Teri James; Ross, Jody

    2011-09-01

    It has been suggested that, in order to validate a diagnosis of (C)APD (central auditory processing disorder), testing using direct cross-modal analogs should be performed to demonstrate that deficits exist solely or primarily in the auditory modality (McFarland and Cacace, 1995; Cacace and McFarland, 2005). This modality-specific viewpoint is controversial and not universally accepted (American Speech-Language-Hearing Association [ASHA], 2005; Musiek et al, 2005). Further, no such analogs have been developed to date, and neither the feasibility of such testing in normally functioning individuals nor the concurrent validity of cross-modal analogs has been established. The purpose of this study was to investigate the feasibility of cross-modal testing by examining the performance of normal adults and children on four tests of central auditory function and their corresponding visual analogs. In addition, this study investigated the degree to which concurrent validity of auditory and visual versions of these tests could be demonstrated. An experimental repeated measures design was employed. Participants consisted of two groups (adults, n=10; children, n=10) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of auditory/otologic, language, learning, neurologic, or related disorders. Visual analogs of four tests in common clinical use for the diagnosis of (C)APD were developed (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; Duration Patterns [Pinheiro and Musiek, 1985]; and the Random Gap Detection Test [RGDT; Keith, 2000]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. ANOVAs (analyses of variance) were used to examine effects of group, modality, and laterality (for the Dichotic/Dichoptic Digits tests) or response condition (for the auditory and visual Frequency Patterns and Duration Patterns tests). Pearson product-moment correlations were used to investigate relationships between auditory and visual performance. Adults performed significantly better than children on the Dichotic/Dichoptic Digits tests. Results also revealed a significant effect of modality, with auditory better than visual, and a significant modality×laterality interaction, with a right-ear advantage seen for the auditory task and a left-visual-field advantage seen for the visual task. For the Frequency Patterns test and its visual analog, results revealed a significant modality×response condition interaction, with humming better than labeling for the auditory version but the reversed effect for the visual version. For Duration Patterns testing, visual performance was significantly poorer than auditory performance. Due to poor test-retest reliability and ceiling effects for the auditory and visual gap-detection tasks, analyses could not be performed. No cross-modal correlations were observed for any test. Results demonstrated that cross-modal testing is at least feasible using easily accessible computer hardware and software. The lack of any cross-modal correlations suggests independent processing mechanisms for auditory and visual versions of each task. Examination of performance in individuals with central auditory and pan-sensory disorders is needed to determine the utility of cross-modal analogs in the differential diagnosis of (C)APD. American Academy of Audiology.
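
    The concurrent-validity question reduces to Pearson correlations between each auditory test and its visual analog. The sketch below computes one such cross-modal correlation on synthetic scores, chosen here to be uncorrelated in order to mirror the null result reported in the abstract; it is not the study's data or analysis code.

      import numpy as np
      from scipy.stats import pearsonr

      rng = np.random.default_rng(5)

      # Synthetic percent-correct scores for 20 participants on an auditory test and
      # its visual analog (uncorrelated here, mimicking the reported outcome).
      auditory = np.clip(rng.normal(90, 5, 20), 0, 100)
      visual = np.clip(rng.normal(80, 8, 20), 0, 100)

      r, p = pearsonr(auditory, visual)
      print(f"cross-modal correlation: r = {r:.2f}, p = {p:.3f}")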

  16. The effect of vertical and horizontal symmetry on memory for tactile patterns in late blind individuals.

    PubMed

    Cattaneo, Zaira; Vecchi, Tomaso; Fantino, Micaela; Herbert, Andrew M; Merabet, Lotfi B

    2013-02-01

    Visual stimuli that exhibit vertical symmetry are easier to remember than stimuli symmetric along other axes, an advantage that extends to the haptic modality as well. Critically, the vertical symmetry memory advantage has not been found in early blind individuals, despite their overall superior memory, as compared with sighted individuals, and the presence of an overall advantage for identifying symmetric over asymmetric patterns. The absence of the vertical axis memory advantage in the early blind may depend on their total lack of visual experience or on the effect of prolonged visual deprivation. To disentangle this issue, in this study, we measured the ability of late blind individuals to remember tactile spatial patterns that were either vertically or horizontally symmetric or asymmetric. Late blind participants showed better memory performance for symmetric patterns. An additional advantage for the vertical axis of symmetry over the horizontal one was reported, but only for patterns presented in the frontal plane. In the horizontal plane, no difference was observed between vertical and horizontal symmetric patterns, due to the latter being recalled particularly well. These results are discussed in terms of the influence of the spatial reference frame adopted during exploration. Overall, our data suggest that prior visual experience is sufficient to drive the vertical symmetry memory advantage, at least when an external reference frame based on geocentric cues (i.e., gravity) is adopted.

  17. Imaging diagnosis--pulmonary metastases in New World camelids.

    PubMed

    Gall, David A; Zekas, Lisa J; Van Metre, David; Holt, Timothy

    2006-01-01

    The radiographic appearance of pulmonary metastatic disease from carcinoma is described in a llama and an alpaca. In one, a diffuse miliary pattern was seen. In the other, a more atypical unstructured interstitial pattern was recognized. Metastatic pulmonary neoplasia in camelids may assume a generalized miliary or unstructured pattern.

  18. Contextual cueing: implicit learning and memory of visual context guides spatial attention.

    PubMed

    Chun, M M; Jiang, Y

    1998-06-01

    Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
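
    The contextual cueing effect is typically quantified as the reaction-time advantage for repeated over novel configurations, assessed with a paired test. The sketch below does this on synthetic per-participant reaction times; the values and the roughly 60 ms benefit are illustrative assumptions, not the study's data.

      import numpy as np
      from scipy.stats import ttest_rel

      rng = np.random.default_rng(6)

      # Synthetic mean search RTs (ms) per participant for repeated ("old") and
      # novel ("new") spatial configurations; values are illustrative only.
      n_subjects = 16
      rt_new = rng.normal(900, 80, n_subjects)
      rt_old = rt_new - rng.normal(60, 25, n_subjects)  # ~60 ms cueing benefit

      effect = rt_new - rt_old
      t, p = ttest_rel(rt_new, rt_old)
      print(f"contextual cueing effect: {effect.mean():.0f} ms, t = {t:.2f}, p = {p:.4f}")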

  19. Effect of disease stage on progression of hydroxychloroquine retinopathy.

    PubMed

    Marmor, Michael F; Hu, Julia

    2014-09-01

    Hydroxychloroquine sulfate retinopathy can progress after the drug is stopped. It is not clear how this relates to the stage of retinopathy or whether early screening with modern imaging technology can prevent progression and visual loss. To determine the relationship between progression of retinopathy and the severity of disease using objective data from optical coherence tomography and assess the value of early screening for the toxic effects of hydroxychloroquine. Clinical findings in patients with hydroxychloroquine retinopathy were monitored with repeated anatomical and functional examinations for 13 to 40 months after the drug was stopped in a referral practice in a university medical center. Eleven patients participated, with the severity of toxic effects categorized as early (patchy parafoveal damage shown on field or objective testing), moderate (a 50%-100% parafoveal ring of optical coherence tomography thinning but intact retinal pigment epithelium), and severe (visible bull's-eye damage). Visual acuity, white 10-2 visual field pattern deviation plots, fundus autofluorescence, spectral-domain optical coherence tomography cross sections, thickness (from cube diagrams), and ellipsoid zone length. Visual acuity and visual fields showed no consistent change. Fundus autofluorescence showed little or no change except in severe cases in which the bull's-eye damage expanded progressively. Optical coherence tomography cross sections showed little visible change in early and moderate cases but progressive foveal thinning (approximately 7 μm/y) and loss of ellipsoid zone (in the range of 100 μm/y) in severe cases, which was confirmed by quantitative measurements. The measurements also showed some foveal thinning (approximately 4 μm/y) and deepening of parafoveal loss in moderate cases, but the breadth of the ellipsoid zone remained constant in both early and moderate cases. A few cases showed a suggestion of ellipsoid zone improvement. Patients with hydroxychloroquine retinopathy involving the retinal pigment epithelium demonstrated progressive damage on optical coherence tomography for at least 3 years after the drug was discontinued, including loss of foveal thickness and cone structure. Cases recognized before retinal pigment epithelium damage retained foveal architecture with little retinal thinning. Early recognition of hydroxychloroquine toxic effects before any fundus changes are visible, using visual fields and optical coherence tomography (along with fundus autofluorescence and multifocal electroretinography as indicated), will greatly minimize late progression and the risk of visual loss.

  20. Sensitivity to timing and order in human visual cortex.

    PubMed

    Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2015-03-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.
