Two Methods for Teaching Simple Visual Discriminations to Learners with Severe Disabilities
ERIC Educational Resources Information Center
Graff, Richard B.; Green, Gina
2004-01-01
Simple discriminations are involved in many functional skills; additionally, they are components of conditional discriminations (identity and arbitrary matching-to-sample), which are involved in a wide array of other important performances. Many individuals with severe disabilities have difficulty acquiring simple discriminations with standard…
Dynamic functional brain networks involved in simple visual discrimination learning.
Fidalgo, Camino; Conejo, Nélida María; González-Pardo, Héctor; Arias, Jorge Luis
2014-10-01
Visual discrimination tasks have been widely used to evaluate many types of learning and memory processes. However, little is known about the brain regions involved at different stages of visual discrimination learning. We used cytochrome c oxidase histochemistry to evaluate changes in regional brain oxidative metabolism during visual discrimination learning in a water-T maze at different time points during training. As compared with control groups, the results of the present study reveal the gradual activation of cortical (prefrontal and temporal cortices) and subcortical brain regions (including the striatum and the hippocampus) associated with the mastery of a simple visual discrimination task. In addition, the brain regions involved and their functional interactions changed progressively over days of training. Regions associated with novelty, emotion, visuo-spatial orientation and motor aspects of the behavioral task seem to be relevant during the earlier phase of training, whereas a brain network comprising the prefrontal cortex was engaged throughout the learning process. This study highlights the relevance of functional interactions among brain regions to investigate learning and memory processes. Copyright © 2014 Elsevier Inc. All rights reserved.
Detection and recognition of simple spatial forms
NASA Technical Reports Server (NTRS)
Watson, A. B.
1983-01-01
A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.
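The model described above combines a bank of linear feature sensors with an ideal Bayesian classifier. The following is a minimal numerical sketch of that two-stage idea, not Watson's actual implementation: a small bank of Gabor-like sensors differing in spatial frequency, orientation, and phase, followed by a nearest-template decision (maximum likelihood under equal-variance Gaussian noise) between two candidate images. All parameter values (filter size, frequencies, noise level) are illustrative assumptions.

```python
# Minimal sketch (not Watson's 1983 implementation): a bank of linear
# Gabor-like sensors followed by a noisy template-matching decision rule.
import numpy as np

def gabor(size, freq, theta, phase):
    """One linear sensor: a Gabor patch of given spatial frequency, orientation, and phase."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * (size / 6.0) ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr + phase)

def sensor_bank(size=32):
    """Sensors differing in frequency, orientation, and phase (positional sampling omitted)."""
    return [gabor(size, f, t, p)
            for f in (0.05, 0.1, 0.2)             # cycles/pixel, roughly octave spacing
            for t in np.deg2rad([0, 45, 90, 135])
            for p in (0, np.pi / 2)]

def responses(image, bank, noise_sd=20.0, rng=None):
    """Noisy linear responses of every sensor to an image (noise level is arbitrary)."""
    rng = rng or np.random.default_rng()
    r = np.array([np.sum(s * image) for s in bank])
    return r + rng.normal(0, noise_sd, r.shape)

def discriminate(image_a, image_b, bank, trials=200):
    """Proportion correct when a nearest-template observer picks A vs. B from noisy responses."""
    rng = np.random.default_rng(0)
    ra, rb = responses(image_a, bank, 0, rng), responses(image_b, bank, 0, rng)
    correct = 0
    for _ in range(trials):
        obs = responses(image_a, bank, rng=rng)   # the stimulus is actually A
        correct += np.linalg.norm(obs - ra) < np.linalg.norm(obs - rb)
    return correct / trials

bank = sensor_bank()
grating = gabor(32, 0.1, 0, 0)                    # a simple test pattern
blank = np.zeros_like(grating)
print("detection proportion correct:", discriminate(grating, blank, bank))
```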
Lomber, S G; Payne, B R; Cornwell, P
1996-01-01
Extrastriate visual cortex of the ventral-posterior suprasylvian gyrus (vPS cortex) of freely behaving cats was reversibly deactivated with cooling to determine its role in performance on a battery of simple or masked two-dimensional pattern discriminations, and three-dimensional object discriminations. Deactivation of vPS cortex by cooling profoundly impaired the ability of the cats to recall the difference between all previously learned pattern and object discriminations. However, the cats' ability to learn or relearn pattern and object discriminations while vPS was deactivated depended upon the nature of the pattern or object and the cats' prior level of exposure to them. During cooling of vPS cortex, the cats could neither learn the novel object discriminations nor relearn a highly familiar masked or partially occluded pattern discrimination, although they could relearn both the highly familiar object and simple pattern discriminations. These cooling-induced deficits resemble those induced by cooling of the topologically equivalent inferotemporal cortex of monkeys and provide evidence that the equivalent regions contribute to visual processing in similar ways. PMID:8643686
Crowding with detection and coarse discrimination of simple visual features.
Põder, Endel
2008-04-24
Some recent studies have suggested that there are actually no crowding effects with detection and coarse discrimination of simple visual features. The present study tests the generality of this idea. A target Gabor patch, surrounded by either 2 or 6 flanker Gabors, was presented briefly at 4 deg eccentricity of the visual field. Each Gabor patch was oriented either vertically or horizontally (selected randomly). Observers' task was either to detect the presence of the target (presented with probability 0.5) or to identify the orientation of the target. The target-flanker distance was varied. Results were similar for the two tasks but different for 2 and 6 flankers. The idea that feature detection and coarse discrimination are immune to crowding may be valid for the two-flanker condition only. With six flankers, a normal crowding effect was observed. It is suggested that the complexity of the full pattern (target plus flankers) could explain the difference.
Simple and conditional visual discrimination with wheel running as reinforcement in rats.
Iversen, I H
1998-09-01
Three experiments explored whether access to wheel running is sufficient as reinforcement to establish and maintain simple and conditional visual discriminations in nondeprived rats. In Experiment 1, 2 rats learned to press a lit key to produce access to running; responding was virtually absent when the key was dark, but latencies to respond were longer than for customary food and water reinforcers. Increases in the intertrial interval did not improve the discrimination performance. In Experiment 2, 3 rats acquired a go-left/go-right discrimination with a trial-initiating response and reached an accuracy that exceeded 80%; when two keys showed a steady light, pressing the left key produced access to running whereas pressing the right key produced access to running when both keys showed blinking light. Latencies to respond to the lights shortened when the trial-initiation response was introduced and became much shorter than in Experiment 1. In Experiment 3, 1 rat acquired a conditional discrimination task (matching to sample) with steady versus blinking lights at an accuracy exceeding 80%. A trial-initiation response allowed self-paced trials as in Experiment 2. When the rat was exposed to the task for 19 successive 24-hr periods with access to food and water, the discrimination performance settled in a typical circadian pattern and peak accuracy exceeded 90%. When the trial-initiation response was under extinction, without access to running, the circadian activity pattern determined the time of spontaneous recovery. The experiments demonstrate that wheel-running reinforcement can be used to establish and maintain simple and conditional visual discriminations in nondeprived rats.
Lack of power enhances visual perceptual discrimination.
Weick, Mario; Guinote, Ana; Wilkinson, David
2011-09-01
Powerless individuals face considerable challenge and uncertainty. As a consequence, they are highly vigilant and closely scrutinize their social environments. The aim of the present research was to determine whether these qualities enhance performance in more basic cognitive tasks involving simple visual feature discrimination. To test this hypothesis, participants performed a series of perceptual matching and search tasks involving colour, texture, and size discrimination. As predicted, those primed with powerlessness generated shorter reaction times and made fewer eye movements than either powerful or control participants. The results indicate that the heightened vigilance shown by powerless individuals is associated with an advantage in performing simple types of psychophysical discrimination. These findings highlight, for the first time, an underlying competency in perceptual cognition that sets powerless individuals above their powerful counterparts, an advantage that may reflect functional adaptation to the environmental challenge and uncertainty that they face. © 2011 Canadian Psychological Association
Seki, Yoshimasa; Okanoya, Kazuo
2008-02-01
Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.
[Changes in the parameters of the visual analyzer in child users of mobile communication: a longitudinal study].
Khorseva, N I; Grigor'ev, Iu G; Gorbunova, N V
2014-01-01
The paper presents the results of longitudinal monitoring of changes in the parameters of simple visual-motor reaction, visual acuity, and the rate of visual discrimination in child users of mobile communication, which indicate the multivariability of the possible effects of mobile phone radiation on the visual system of children.
Honeybees can discriminate between Monet and Picasso paintings.
Wu, Wen; Moreno, Antonio M; Tangen, Jason M; Reinhard, Judith
2013-01-01
Honeybees (Apis mellifera) have remarkable visual learning and discrimination abilities that extend beyond learning simple colours, shapes or patterns. They can discriminate landscape scenes, types of flowers, and even human faces. This suggests that in spite of their small brain, honeybees have a highly developed capacity for processing complex visual information, comparable in many respects to vertebrates. Here, we investigated whether this capacity extends to complex images that humans distinguish on the basis of artistic style: Impressionist paintings by Monet and Cubist paintings by Picasso. We show that honeybees learned to simultaneously discriminate between five different Monet and Picasso paintings, and that they do not rely on luminance, colour, or spatial frequency information for discrimination. When presented with novel paintings of the same style, the bees even demonstrated some ability to generalize. This suggests that honeybees are able to discriminate Monet paintings from Picasso ones by extracting and learning the characteristic visual information inherent in each painting style. Our study further suggests that discrimination of artistic styles is not a higher cognitive function that is unique to humans, but simply due to the capacity of animals, from insects to humans, to extract and categorize the visual characteristics of complex images.
THE ROLE OF THE HIPPOCAMPUS IN OBJECT DISCRIMINATION BASED ON VISUAL FEATURES.
Levcik, David; Nekovarova, Tereza; Antosova, Eliska; Stuchlik, Ales; Klement, Daniel
2018-06-07
The role of the rodent hippocampus has been intensively studied in different cognitive tasks. However, its role in the discrimination of objects remains controversial due to conflicting findings. We tested whether the number and type of features available for the identification of objects might affect the strategy (hippocampal-independent vs. hippocampal-dependent) that rats adopt to solve object discrimination tasks. We trained rats to discriminate 2D visual objects presented on a computer screen. The objects were defined either by their shape only or by multiple features (a combination of filling pattern and brightness in addition to the shape). Our data showed that objects displayed as simple geometric shapes are not discriminated by trained rats after their hippocampi have been bilaterally inactivated by the GABA(A) agonist muscimol. On the other hand, objects containing a specific combination of non-geometric features in addition to the shape are discriminated even without the hippocampus. Our results suggest that the involvement of the hippocampus in visual object discrimination depends on the abundance of the object's features. Copyright © 2018. Published by Elsevier Inc.
Fengler, Ineke; Nava, Elena; Röder, Brigitte
2015-01-01
Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4 week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination emerged merely as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks unrelated to the later tested abilities (i.e., tactile tasks). The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination or in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which may persist over longer durations. PMID:25954166
Spatial Probability Cuing and Right Hemisphere Damage
ERIC Educational Resources Information Center
Shaqiri, Albulena; Anderson, Britt
2012-01-01
In this experiment we studied statistical learning, inter-trial priming, and visual attention. We assessed healthy controls and right brain damaged (RBD) patients with and without neglect, on a simple visual discrimination task designed to measure priming effects and probability learning. All participants showed a preserved priming effect for item…
Do rats use shape to solve “shape discriminations”?
Minini, Loredana; Jeffery, Kathryn J.
2006-01-01
Visual discrimination tasks are increasingly used to explore the neurobiology of vision in rodents, but it remains unclear how the animals solve these tasks: Do they process shapes holistically, or by using low-level features such as luminance and angle acuity? In the present study we found that when discriminating triangles from squares, rats did not use shape but instead relied on local luminance differences in the lower hemifield. A second experiment prevented this strategy by using stimuli—squares and rectangles—that varied in size and location, and for which the only constant predictor of reward was aspect ratio (ratio of height to width: a simple descriptor of “shape”). Rats eventually learned to use aspect ratio but only when no other discriminand was available, and performance remained very poor even at asymptote. These results suggest that although rats can process both dimensions simultaneously, they do not naturally solve shape discrimination tasks this way. This may reflect either a failure to visually process global shape information or a failure to discover shape as the discriminative stimulus in a simultaneous discrimination. Either way, our results suggest that simultaneous shape discrimination is not a good task for studies of visual perception in rodents. PMID:16705141
Enhanced pure-tone pitch discrimination among persons with autism but not Asperger syndrome.
Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A; Mottron, Laurent
2010-07-01
Persons with Autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis. Based on these findings, Samson, Mottron, Jemel, Belin, and Ciocca (2006) proposed to extend the neural complexity hypothesis to the auditory modality. They hypothesized that persons with ASD should display enhanced performance for simple tones that are processed in primary auditory cortical regions, but diminished performance for complex tones that require additional processing in associative auditory regions, in comparison to typically developing individuals. To assess this hypothesis, we designed four auditory discrimination experiments targeting pitch, non-vocal and vocal timbre, and loudness. Stimuli consisted of spectro-temporally simple and complex tones. The participants were adolescents and young adults with autism, Asperger syndrome, and typical developmental histories, all with IQs in the normal range. Consistent with the neural complexity hypothesis and the enhanced perceptual functioning model of ASD (Mottron, Dawson, Soulières, Hubert, & Burack, 2006), the participants with autism, but not with Asperger syndrome, displayed enhanced pitch discrimination for simple tones. However, no discrimination-threshold differences were found between the participants with ASD and the typically developing persons across spectrally and temporally complex conditions. These findings indicate that enhanced pure-tone pitch discrimination may be a cognitive correlate of speech delay among persons with ASD. However, auditory discrimination among this group does not appear to be directly contingent on the spectro-temporal complexity of the stimuli. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Hager, Audrey M; Dringenberg, Hans C
2012-12-01
The rat visual system is structured such that the large (>90 %) majority of retinal ganglion axons reach the contralateral lateral geniculate nucleus (LGN) and visual cortex (V1). This anatomical design allows for the relatively selective activation of one cerebral hemisphere under monocular viewing conditions. Here, we describe the design of a harness and face mask allowing simple and noninvasive monocular occlusion in rats. The harness is constructed from synthetic fiber (shoelace-type material) and fits around the girth region and neck, allowing for easy adjustments to fit rats of various weights. The face mask consists of soft rubber material that is attached to the harness by Velcro strips. Eyeholes in the mask can be covered by additional Velcro patches to occlude either one or both eyes. Rats readily adapt to wearing the device, allowing behavioral testing under different types of viewing conditions. We show that rats successfully acquire a water-maze-based visual discrimination task under monocular viewing conditions. Following task acquisition, interocular transfer was assessed. Performance with the previously occluded, "untrained" eye was impaired, suggesting that training effects were partially confined to one cerebral hemisphere. The method described herein provides a simple and noninvasive means to restrict visual input for studies of visual processing and learning in various rodent species.
A comparison of methods for teaching receptive labeling to children with autism spectrum disorders.
Grow, Laura L; Carr, James E; Kodak, Tiffany M; Jostad, Candice M; Kisamore, April N
2011-01-01
Many early intervention curricular manuals recommend teaching auditory-visual conditional discriminations (i.e., receptive labeling) using the simple-conditional method in which component simple discriminations are taught in isolation and in the presence of a distracter stimulus before the learner is required to respond conditionally. Some have argued that this procedure might be susceptible to faulty stimulus control such as stimulus overselectivity (Green, 2001). Consequently, there has been a call for the use of alternative teaching procedures such as the conditional-only method, which involves conditional discrimination training from the onset of intervention. The purpose of the present study was to compare the simple-conditional and conditional-only methods for teaching receptive labeling to 3 young children diagnosed with autism spectrum disorders. The data indicated that the conditional-only method was a more reliable and efficient teaching procedure. In addition, several error patterns emerged during training using the simple-conditional method. The implications of the results with respect to current teaching practices in early intervention programs are discussed.
Plescia, Fulvio; Sardo, Pierangelo; Rizzo, Valerio; Cacace, Silvana; Marino, Rosa Anna Maria; Brancato, Anna; Ferraro, Giuseppe; Carletti, Fabio; Cannizzaro, Carla
2014-01-01
Neurosteroids can alter neuronal excitability by interacting with specific neurotransmitter receptors, thus affecting several functions such as cognition and emotionality. In this study we investigated, in adult male rats, the effects of the acute administration of pregnenolone-sulfate (PREGS) (10 mg/kg, s.c.) on cognitive processes using the Can test, a non-aversive spatial/visual task which allows the assessment of both spatial orientation-acquisition and object discrimination in a simple and in a complex version of the visual task. Electrophysiological recordings were also performed in vivo, after acute systemic PREGS administration, in order to investigate the neuronal activation in the hippocampus and the perirhinal cortex. Our results indicate that PREGS induces an improvement in spatial orientation-acquisition and in object discrimination in the simple and in the complex visual task; the behavioural responses were also confirmed by electrophysiological recordings showing a potentiation in the neuronal activity of the hippocampus and the perirhinal cortex. In conclusion, this study demonstrates that systemic PREGS administration in rats exerts cognitive-enhancing properties which involve both the acquisition and utilization of spatial information and object discrimination memory, and it relates the observed behavioural potentiation to an increase in the neuronal firing of discrete cerebral areas critical for spatial learning and object recognition. This provides further evidence in support of a protective and enhancing role of PREGS on human memory. Copyright © 2013. Published by Elsevier B.V.
Image Discrimination Models for Object Detection in Natural Backgrounds
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.
2000-01-01
This paper reviews work accomplished and in progress at NASA Ames relating to visual target detection. The focus is on image discrimination models, starting with Watson's pioneering development of a simple spatial model and progressing through this model's descendants and extensions. The application of image discrimination models to target detection will be described and results reviewed for Rohaly's vehicle target data and the Search 2 data. The paper concludes with a description of work we have done to model the process by which observers learn target templates and methods for elucidating those templates.
A description of discrete internal representation schemes for visual pattern discrimination.
Foster, D H
1980-01-01
A general description of a class of schemes for pattern vision is outlined in which the visual system is assumed to form a discrete internal representation of the stimulus. These representations are discrete in that they are considered to comprise finite combinations of "components" which are selected from a fixed and finite repertoire, and which designate certain simple pattern properties or features. In the proposed description it is supposed that the construction of an internal representation is a probabilistic process. A relationship is then formulated associating the probability density functions governing this construction and performance in visually discriminating patterns when differences in pattern shape are small. Some questions related to the application of this relationship to the experimental investigation of discrete internal representations are briefly discussed.
Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2013-02-16
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.
Heterogeneity effects in visual search predicted from the group scanning model.
Macquistan, A D
1994-12-01
The group scanning model of feature integration theory (Treisman & Gormican, 1988) suggests that subjects search visual displays serially by groups, but process items within each group in parallel. The size of these groups is determined by the discriminability of the targets in the background of distractors. When the target is poorly discriminable, the size of the scanned group will be small, and search will be slow. The model predicts that group size will be smallest when targets of an intermediate value on a perceptual dimension are presented in a heterogeneous background of distractors that have higher and lower values on the same dimension. Experiment 1 demonstrates this effect. Experiment 2 controls for a possible confound of decision complexity in Experiment 1. For simple feature targets, the group scanning model provides a good account of the visual search process.
Intellectual Abilities That Discriminate Good and Poor Problem Solvers.
ERIC Educational Resources Information Center
Meyer, Ruth Ann
1981-01-01
This study compared good and poor fourth-grade problem solvers on a battery of 19 "reference" tests for verbal, induction, numerical, word fluency, memory, perceptual speed, and simple visualization abilities. Results suggest verbal, numerical, and especially induction abilities are important to successful mathematical problem solving.…
Truppa, Valentina; Carducci, Paola; Trapanese, Cinzia; Hanus, Daniel
2015-01-01
Most experimental paradigms to study visual cognition in humans and non-human species are based on discrimination tasks involving the choice between two or more visual stimuli. To this end, different types of stimuli and procedures for stimuli presentation are used, which highlights the necessity to compare data obtained with different methods. The present study assessed whether, and to what extent, capuchin monkeys’ ability to solve a size discrimination problem is influenced by the type of procedure used to present the problem. Capuchins’ ability to generalise knowledge across different tasks was also evaluated. We trained eight adult tufted capuchin monkeys to select the larger of two stimuli of the same shape and different sizes by using pairs of food items (Experiment 1), computer images (Experiment 1) and objects (Experiment 2). Our results indicated that monkeys achieved the learning criterion faster with food stimuli compared to both images and objects. They also required consistently fewer trials with objects than with images. Moreover, female capuchins had higher levels of acquisition accuracy with food stimuli than with images. Finally, capuchins did not immediately transfer the solution of the problem acquired in one task condition to the other conditions. Overall, these findings suggest that – even in relatively simple visual discrimination problems where a single perceptual dimension (i.e., size) has to be judged – learning speed strongly depends on the mode of presentation. PMID:25927363
Spatial vision in older adults: perceptual changes and neural bases.
McKendrick, Allison M; Chan, Yu Man; Nguyen, Bao N
2018-05-17
The number of older adults is rapidly increasing internationally, leading to a significant increase in research on how healthy ageing impacts vision. Most clinical assessments of spatial vision involve simple detection (letter acuity, grating contrast sensitivity, perimetry). However, most natural visual environments are more spatially complicated, requiring contrast discrimination, and the delineation of object boundaries and contours, which are typically present on non-uniform backgrounds. In this review we discuss recent research that reports on the effects of normal ageing on these more complex visual functions, specifically in the context of recent neurophysiological studies. Recent research has concentrated on understanding the effects of healthy ageing on neural responses within the visual pathway in animal models. Such neurophysiological research has led to numerous, subsequently tested, hypotheses regarding the likely impact of healthy human ageing on specific aspects of spatial vision. Healthy normal ageing impacts significantly on spatial visual information processing from the retina through to visual cortex. Some human data validates that obtained from studies of animal physiology, however some findings indicate that rethinking of presumed neural substrates is required. Notably, not all spatial visual processes are altered by age. Healthy normal ageing impacts significantly on some spatial visual processes (in particular centre-surround tasks), but leaves contrast discrimination, contrast adaptation, and orientation discrimination relatively intact. The study of older adult vision contributes to knowledge of the brain mechanisms altered by the ageing process, can provide practical information regarding visual environments that older adults may find challenging, and may lead to new methods of assessing visual performance in clinical environments. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.
NASA Astrophysics Data System (ADS)
Takamatsu, K.; Tanaka, H.; Shoji, D.
2012-04-01
The Fukushima Daiichi nuclear disaster was a series of equipment failures and nuclear meltdowns following the Tōhoku earthquake and tsunami on 11 March 2011. We present a new method for visualizing nuclear reactors. Muon radiography based on the multiple Coulomb scattering of cosmic-ray muons has been performed. In this work, we discuss experimental results obtained with a cost-effective simple detection system assembled from three plastic scintillator strips. Specifically, we counted the number of muons that were not largely deflected by restricting the zenith angle in one direction to 0.8°. The system could discriminate Fe, Pb and C. Materials lighter than Pb can also be discriminated with this system. This method only resolves the average material distribution along the muon path. Therefore, the user must make assumptions or interpretations about the structure, or must use more than one detector to resolve the three-dimensional material distribution. By applying this method to time-dependent muon radiography, we can detect changes with time, rendering the method suitable for real-time monitoring applications, possibly providing useful information about the reaction process in a nuclear reactor such as the burnup of fuels. In nuclear power technology, burnup (also known as fuel utilization) is a measure of how much energy is extracted from a primary nuclear fuel source. Monitoring the burnup of fuels as a nondestructive inspection technique can contribute to safer operation. In a nuclear reactor the total mass is conserved, so the system cannot be monitored by conventional muon radiography. A plastic scintillator is relatively small and easy to set up compared to a gas or layered scintillation system. Thus, we think this simple radiographic method has the potential to visualize a core directly in cases of normal operation or meltdown accidents. Finally, we considered only three materials as a first step in this work. Further research is required to improve the ability to image the material distribution in a mass-conserved system.
Basic quantitative assessment of visual performance in patients with very low vision.
Bach, Michael; Wilke, Michaela; Wilhelm, Barbara; Zrenner, Eberhart; Wilke, Robert
2010-02-01
A variety of approaches to developing visual prostheses are being pursued: subretinal, epiretinal, via the optic nerve, or via the visual cortex. This report presents a method of comparing their efficacy at genuinely improving visual function, starting at no light perception (NLP). A test battery (a computer program, Basic Assessment of Light and Motion [BaLM]) was developed in four basic visual dimensions: (1) light perception (light/no light), with an unstructured large-field stimulus; (2) temporal resolution, with single versus double flash discrimination; (3) localization of light, where a wedge extends from the center into four possible directions; and (4) motion, with a coarse pattern moving in one of four directions. Two- or four-alternative, forced-choice paradigms were used. The participants' responses were self-paced and delivered with a keypad. The feasibility of the BaLM was tested in 73 eyes of 51 patients with low vision. The light and time test modules discriminated between NLP and light perception (LP). The localization and motion modules showed no significant response for NLP but discriminated between LP and hand movement (HM). All four modules reached their ceilings in the acuity categories higher than HM. BaLM results systematically differed between the very-low-acuity categories NLP, LP, and HM. Light and time yielded similar results, as did localization and motion; still, for assessing the visual prostheses with differing temporal characteristics, they are not redundant. The results suggest that this simple test battery provides a quantitative assessment of visual function in the very-low-vision range from NLP to HM.
Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency
Sripati, Arun P.; Olson, Carl R.
2010-01-01
Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054
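The abstract attributes much of the neuronal and behavioral result to a simple physical measure: the degree of overlap between the coarse footprints of a pair of images. The sketch below shows one plausible way such a measure could be computed; the Gaussian blur scale and the normalized inner-product metric are assumptions for illustration, not the exact measure used by Sripati and Olson.

```python
# Hedged sketch of a "coarse footprint overlap" measure: blur each image so that only
# its global layout remains, then quantify how much the two blurred footprints overlap.
import numpy as np
from scipy.ndimage import gaussian_filter

def coarse_footprint(image, sigma=8.0):
    """Heavily low-pass filter the image so only its coarse spatial envelope remains."""
    fp = gaussian_filter(image.astype(float), sigma)
    return fp / (np.linalg.norm(fp) + 1e-12)

def footprint_overlap(img_a, img_b, sigma=8.0):
    """Normalized inner product of the two coarse footprints (1 = identical layout)."""
    return float(np.sum(coarse_footprint(img_a, sigma) * coarse_footprint(img_b, sigma)))

# Two images made of the same local element in different global arrangements
rng = np.random.default_rng(1)
patch = rng.random((16, 16))
img1 = np.zeros((64, 64)); img1[:16, :16] = patch; img1[48:, 48:] = patch
img2 = np.zeros((64, 64)); img2[:16, 48:] = patch; img2[48:, :16] = patch
print("overlap:", footprint_overlap(img1, img2))  # higher overlap -> harder search predicted
```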
Cell-assembly coding in several memory processes.
Sakurai, Y
1998-01-01
The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are the overlap of neurons among different assemblies and the connection dynamics within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information. Individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information. Correlations of activity among some of the recorded neurons appear to change in multiple information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.
Clery, Stephane; Cumming, Bruce G.
2017-01-01
Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal “noise” correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. SIGNIFICANCE STATEMENT Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. Recent theoretical work developed simple predictions to differentiate decoding schemes, and found support for optimal linear readout of early sensory populations with information-limiting correlations. Here, we observed decision-related activity for neurons in visual area V2 of macaques performing fine disparity discrimination, as yet the earliest site for this task. These findings, and previously reported results from V2 in a different task, deviated from the predictions for optimal linear readout of a population with information-limiting correlations. Our results suggest that optimal linear decoding of early sensory information is not a general decoding strategy used by the brain. PMID:28100751
Herrera-Guzmán, I; Peña-Casanova, J; Lara, J P; Gudayol-Ferré, E; Böhm, P
2004-08-01
The assessment of visual perception and cognition forms an important part of any general cognitive evaluation. We studied the possible influence of age, sex, and education on performance in visual perception tasks in a normal elderly Spanish population (90 healthy subjects). To evaluate visual perception and cognition, we used the subjects' performance on the Visual Object and Space Perception Battery (VOSP). The test consists of 8 subtests: 4 measure visual object perception (Incomplete Letters, Silhouettes, Object Decision, and Progressive Silhouettes) while the other 4 measure visual space perception (Dot Counting, Position Discrimination, Number Location, and Cube Analysis). The statistical procedures employed were either simple or multiple linear regression analyses (subtests with normal distribution) or Mann-Whitney tests followed by ANOVA with Scheffé correction (subtests without normal distribution). Age and sex were found to be significant modifying factors in the Silhouettes, Object Decision, Progressive Silhouettes, Position Discrimination, and Cube Analysis subtests. Educational level was found to be a significant predictor of function for the Silhouettes and Object Decision subtests. The results of the sample were adjusted in line with the differences observed. Our study also offers preliminary normative data for the administration of the VOSP to an elderly Spanish population. The results are discussed and compared with similar studies performed in different cultural backgrounds.
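For readers unfamiliar with regression-based norming of the kind described (adjusting subtest scores for age, sex, and education), the following sketch with synthetic data illustrates the general procedure; the variable names, coding, and coefficients are hypothetical and not taken from the study.

```python
# Minimal sketch of demographic adjustment for a test score: regress the score on
# age, sex, and education, then express an individual's raw score as a z-score
# relative to its demographically predicted value. Synthetic data; made-up coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 90
age = rng.uniform(60, 90, n)
sex = rng.integers(0, 2, n)               # 0 = female, 1 = male (coding assumed)
education = rng.uniform(2, 16, n)         # years of schooling
score = 18 - 0.08 * age + 0.6 * sex + 0.15 * education + rng.normal(0, 1.5, n)

X = sm.add_constant(np.column_stack([age, sex, education]))
model = sm.OLS(score, X).fit()
print(model.params)                        # which demographics predict the subtest

def adjusted_z(raw, age_i, sex_i, edu_i):
    """Z-score of a raw subtest score relative to its demographically predicted value."""
    pred = model.predict(np.array([[1.0, age_i, sex_i, edu_i]]))[0]
    return (raw - pred) / np.sqrt(model.scale)   # model.scale = residual variance

print(adjusted_z(12, 75, 1, 8))
```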
Variability in visual working memory ability limits the efficiency of perceptual decision making.
Ester, Edward F; Ho, Tiffany C; Brown, Scott D; Serences, John T
2014-04-02
The ability to make rapid and accurate decisions based on limited sensory information is a critical component of visual cognition. Available evidence suggests that simple perceptual discriminations are based on the accumulation and integration of sensory evidence over time. However, the memory system(s) mediating this accumulation are unclear. One candidate system is working memory (WM), which enables the temporary maintenance of information in a readily accessible state. Here, we show that individual variability in WM capacity is strongly correlated with the speed of evidence accumulation in speeded two-alternative forced choice tasks. This relationship generalized across different decision-making tasks, and could not be easily explained by variability in general arousal or vigilance. Moreover, we show that performing a difficult discrimination task while maintaining a concurrent memory load has a deleterious effect on the latter, suggesting that WM storage and decision making are directly linked.
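The accumulation-of-evidence account referenced here is commonly formalized as a drift-diffusion process. The toy simulation below illustrates the core idea that a higher accumulation (drift) rate, here standing in for the working-memory-related individual differences reported in the study, yields faster and more accurate two-alternative decisions; it is an illustration of the framework, not the model fitting used by the authors.

```python
# Toy drift-diffusion simulation of 2AFC decisions: evidence accumulates noisily
# toward one of two bounds, and drift rate controls both speed and accuracy.
import numpy as np

def simulate_2afc(drift, bound=1.0, noise=1.0, dt=0.001, max_t=3.0, n_trials=500, seed=0):
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound and t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t)
        correct.append(x >= bound)        # upper bound = correct response
    return np.mean(rts), np.mean(correct)

for drift in (0.5, 1.0, 2.0):             # slow vs. fast evidence accumulation
    rt, acc = simulate_2afc(drift)
    print(f"drift {drift:.1f}: mean RT {rt:.2f} s, accuracy {acc:.2f}")
```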
Verhaeghe, Pieter-Paul; Van der Bracht, Koen; Van de Putte, Bart
2016-04-01
According to the social model of disability, physical 'impairments' become disabilities through exclusion in social relations. An obvious form of social exclusion might be discrimination, for instance on the rental housing market. Although discrimination has detrimental health effects, very few studies have examined discrimination against people with a visual impairment. We aim to study (1) the extent of discrimination against individuals with a visual impairment on the rental housing market and (2) differences in rates of discrimination between landowners and real estate agents. We conducted correspondence tests among 268 properties on the Belgian rental housing market. Using matched tests, we compared reactions by realtors and landowners to tenants with and tenants without a visual impairment. The results show that individuals with a visual impairment are substantially discriminated against in the rental housing market: at least one in three lessors discriminate against individuals with a visual impairment. We further discern differences in the propensity toward discrimination according to the type of lessor. Private landlords are at least twice as likely to discriminate against tenants with a visual impairment as real estate agents. At the same time, realtors still discriminate against one in five tenants with a visual impairment. This study shows the substantial discrimination against people with a visual impairment. Given the important consequences discrimination might have for physical and mental health, further research into this topic is needed. Copyright © 2016 Elsevier Inc. All rights reserved.
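Matched correspondence tests of the kind described are usually summarized with a net discrimination rate. The sketch below shows that computation on hypothetical paired outcomes; the numbers and the exact definition of the rate are assumptions for illustration, not the study's data or analysis.

```python
# Net-discrimination computation for matched correspondence tests (illustrative only).
# Each matched test records whether the lessor invited the control tenant and whether
# they invited the tenant with a visual impairment to view the property.
from collections import Counter

# (control_invited, impaired_invited) per matched test -- synthetic example data
tests = ([(True, False)] * 30 + [(True, True)] * 50 +
         [(False, False)] * 15 + [(False, True)] * 5)

counts = Counter(tests)
only_control = counts[(True, False)]       # unfavourable treatment of the impaired tenant
only_impaired = counts[(False, True)]
both = counts[(True, True)]

# Net rate among usable tests (at least one candidate received a positive response)
usable = only_control + only_impaired + both
net_rate = (only_control - only_impaired) / usable
print(f"net discrimination rate: {net_rate:.2f}")
```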
Simple device for the direct visualization of oral-cavity tissue fluorescence
NASA Astrophysics Data System (ADS)
Lane, Pierre M.; Gilhuly, Terence; Whitehead, Peter D.; Zeng, Haishan; Poh, Catherine; Ng, Samson; Williams, Michelle; Zhang, Lewei; Rosin, Miriam; MacAulay, Calum E.
2006-03-01
Early identification of high-risk disease could greatly reduce both mortality and morbidity due to oral cancer. We describe a simple handheld device that facilitates the direct visualization of oral-cavity fluorescence for the detection of high-risk precancerous and early cancerous lesions. Blue excitation light (400 to 460 nm) is employed to excite green-red fluorescence from fluorophores in the oral tissues. Tissue fluorescence is viewed directly along an optical axis collinear with the axis of excitation to reduce inter- and intraoperator variability. This robust, field-of-view device enables the direct visualization of fluorescence in the context of surrounding normal tissue. Results from a pilot study of 44 patients are presented. Using histology as the gold standard, the device achieves a sensitivity of 98% and specificity of 100% when discriminating normal mucosa from severe dysplasia/carcinoma in situ (CIS) or invasive carcinoma. We envisage this device as a suitable adjunct for oral cancer screening, biopsy guidance, and margin delineation.
Discriminative components of data.
Peltonen, Jaakko; Kaski, Samuel
2005-01-01
A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed, in addition to more classical methods, a Renyi entropy-based alternative while having essentially equivalent computational cost.
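As a point of reference for the generalization described above, the sketch below extracts class-discriminative components with classical LDA (via scikit-learn) and uses them for a two-dimensional visualization; the paper's probabilistic model and Fisher-metric formulation are not reproduced here, only the kind of class-informative projection it generalizes.

```python
# Baseline sketch: class-discriminative components via classical LDA, used here for
# 2-D visualization and dimensionality reduction on a standard example dataset.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)
lda = LinearDiscriminantAnalysis(n_components=2)
Z = lda.fit_transform(X, y)                # components that maximize class separability

plt.scatter(Z[:, 0], Z[:, 1], c=y, cmap="viridis", s=15)
plt.xlabel("discriminative component 1")
plt.ylabel("discriminative component 2")
plt.title("Class-informative projection (classical LDA baseline)")
plt.show()
```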
Visual discrimination in an orangutan (Pongo pygmaeus): measuring visual preference.
Hanazuka, Yuki; Kurotori, Hidetoshi; Shimizu, Mika; Midorikawa, Akira
2012-04-01
Although previous studies have confirmed that trained orangutans visually discriminate between mammals and artificial objects, whether orangutans without operant conditioning can discriminate remains unknown. The visual discrimination ability in an orangutan (Pongo pygmaeus) with no experience in operant learning was examined using measures of visual preference. Sixteen color photographs of inanimate objects and of mammals with four legs were randomly presented to an orangutan. The results showed that the mean looking time at photographs of mammals with four legs was longer than that for inanimate objects, suggesting that the orangutan discriminated mammals with four legs from inanimate objects. The results implied that orangutans who have not experienced operant conditioning may possess the ability to discriminate visually.
Renfroe, Jenna B; Turner, Travis H; Hinson, Vanessa K
2017-02-01
The Judgment of Line Orientation (JOLO) test is widely used in assessing visuospatial deficits in Parkinson's disease (PD). The Neuropsychological Assessment Battery (NAB) offers the Visual Discrimination test, with age and education correction, parallel forms, and a co-normed standardization sample for comparisons within and between domains. However, NAB Visual Discrimination has not been validated in PD, and may not measure the same construct as JOLO. A heterogeneous sample of 47 PD patients completed the JOLO and NAB Visual Discrimination within a broader neuropsychological evaluation. Pearson correlations assessed relationships between JOLO and NAB Visual Discrimination performances. Raw and demographically corrected scores from JOLO and Visual Discrimination were only weakly correlated. The NAB Visual Discrimination subtest was moderately correlated with overall cognitive functioning, whereas the JOLO was not. Despite its apparent virtues, the results do not support NAB Visual Discrimination as an alternative to JOLO in assessing visuospatial functioning in PD. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru
2014-01-01
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
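A loose sketch of the underlying idea, not the GerDA training scheme itself, is given below: a small neural network maps inputs to a low-dimensional space while a Fisher-style criterion (small within-class scatter relative to between-class scatter) is optimized on synthetic data. The network size, data, and optimizer settings are illustrative assumptions.

```python
# Loose sketch (PyTorch, synthetic data) of nonlinear discriminant feature extraction:
# a network is trained so that its low-dimensional outputs have small within-class and
# large between-class scatter. This is NOT the paper's GerDA procedure, only the idea.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_classes, dim_in, dim_feat = 3, 20, 2        # feature dim bounded by n_classes - 1, as in LDA
X = torch.randn(600, dim_in)
y = torch.randint(0, n_classes, (600,))
X += 2.0 * nn.functional.one_hot(y, n_classes).float() @ torch.randn(n_classes, dim_in)

net = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(), nn.Linear(64, dim_feat))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

def fisher_loss(z, labels):
    """Minimize within-class scatter relative to between-class scatter of the features."""
    mu = z.mean(0)
    sw, sb = 0.0, 0.0
    for c in range(n_classes):
        zc = z[labels == c]
        mc = zc.mean(0)
        sw = sw + ((zc - mc) ** 2).sum()
        sb = sb + len(zc) * ((mc - mu) ** 2).sum()
    return sw / (sb + 1e-8)

for step in range(300):
    opt.zero_grad()
    loss = fisher_loss(net(X), y)
    loss.backward()
    opt.step()

print("final Fisher-style loss:", float(loss))   # net(X) now gives class-separated features
```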
Murawski, Nathen J; Asok, Arun
2017-01-10
The precise contribution of visual information to contextual fear learning and discrimination has remained elusive. To better understand this contribution, we coupled the context pre-exposure facilitation effect (CPFE) fear conditioning paradigm with presentations of distinct visual scenes displayed on 4 LCD screens surrounding a conditioning chamber. Adult male Long-Evans rats received non-reinforced context pre-exposure on Day 1, an immediate 1.5 mA foot shock on Day 2, and a non-reinforced context test on Day 3. Rats were pre-exposed to either digital Context (dCtx) A, dCtx B, a distinct Ctx C, or no context on Day 1. Digital Contexts A and B were identical except for the visual image displayed on the LCD screens. Immediate shock and retention testing occurred in dCtx A. Rats pre-exposed to dCtx A showed the CPFE with significantly higher levels of freezing compared to controls. Rats pre-exposed to dCtx B failed to show the CPFE, with freezing that did not differ substantially from controls. These results suggest that visual information contributes to contextual fear learning and that visual components of the context can be manipulated via LCD screens. Our approach offers a simple modification to contextual fear conditioning paradigms whereby the visual features of a context can be manipulated to better understand the factors that contribute to contextual fear discrimination and generalization. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Tong, Frank; Harrison, Stephenie A; Dewey, John A; Kamitani, Yukiyasu
2012-11-15
Orientation-selective responses can be decoded from fMRI activity patterns in the human visual cortex, using multivariate pattern analysis (MVPA). To what extent do these feature-selective activity patterns depend on the strength and quality of the sensory input, and might the reliability of these activity patterns be predicted by the gross amplitude of the stimulus-driven BOLD response? Observers viewed oriented gratings that varied in luminance contrast (4, 20 or 100%) or spatial frequency (0.25, 1.0 or 4.0 cpd). As predicted, activity patterns in early visual areas led to better discrimination of orientations presented at high than low contrast, with greater effects of contrast found in area V1 than in V3. A second experiment revealed generally better decoding of orientations at low or moderate as compared to high spatial frequencies. Interestingly however, V1 exhibited a relative advantage at discriminating high spatial frequency orientations, consistent with the finer scale of representation in the primary visual cortex. In both experiments, the reliability of these orientation-selective activity patterns was well predicted by the average BOLD amplitude in each region of interest, as indicated by correlation analyses, as well as decoding applied to a simple model of voxel responses to simulated orientation columns. Moreover, individual differences in decoding accuracy could be predicted by the signal-to-noise ratio of an individual's BOLD response. Our results indicate that decoding accuracy can be well predicted by incorporating the amplitude of the BOLD response into simple simulation models of cortical selectivity; such models could prove useful in future applications of fMRI pattern classification. Copyright © 2012 Elsevier Inc. All rights reserved.
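As an illustration of the kind of linear MVPA decoding described above (not the study's actual pipeline or data), the sketch below decodes two "orientations" from simulated voxel patterns and shows accuracy rising as the simulated response amplitude, a stand-in for BOLD signal strength, increases.

```python
# Illustrative sketch only: linear MVPA decoding of orientation from simulated
# voxel patterns. Signal amplitude stands in for BOLD response strength; a
# stronger stimulus drive should yield higher decoding accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_voxels, n_trials = 200, 80
pattern = rng.normal(size=(2, n_voxels))          # idealized patterns for two orientations

def simulate(amplitude):
    labels = rng.integers(0, 2, n_trials)
    data = amplitude * pattern[labels] + rng.normal(size=(n_trials, n_voxels))
    return data, labels

for amp in (0.1, 0.3, 1.0):                       # weak -> strong stimulus drive
    X, y = simulate(amp)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"amplitude {amp:.1f}: decoding accuracy {acc:.2f}")
```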
Clery, Stephane; Cumming, Bruce G; Nienborg, Hendrikje
2017-01-18
Fine judgments of stereoscopic depth rely mainly on relative judgments of depth (relative binocular disparity) between objects, rather than judgments of the distance to where the eyes are fixating (absolute disparity). In macaques, visual area V2 is the earliest site in the visual processing hierarchy for which neurons selective for relative disparity have been observed (Thomas et al., 2002). Here, we found that, in macaques trained to perform a fine disparity discrimination task, disparity-selective neurons in V2 were highly selective for the task, and their activity correlated with the animals' perceptual decisions (unexplained by the stimulus). This may partially explain similar correlations reported in downstream areas. Although compatible with a perceptual role of these neurons for the task, the interpretation of such decision-related activity is complicated by the effects of interneuronal "noise" correlations between sensory neurons. Recent work has developed simple predictions to differentiate decoding schemes (Pitkow et al., 2015) without needing measures of noise correlations, and found that data from early sensory areas were compatible with optimal linear readout of populations with information-limiting correlations. In contrast, our data here deviated significantly from these predictions. We additionally tested this prediction for previously reported results of decision-related activity in V2 for a related task, coarse disparity discrimination (Nienborg and Cumming, 2006), thought to rely on absolute disparity. Although these data followed the predicted pattern, they violated the prediction quantitatively. This suggests that optimal linear decoding of sensory signals is not generally a good predictor of behavior in simple perceptual tasks. Activity in sensory neurons that correlates with an animal's decision is widely believed to provide insights into how the brain uses information from sensory neurons. Recent theoretical work developed simple predictions to differentiate decoding schemes, and found support for optimal linear readout of early sensory populations with information-limiting correlations. Here, we observed decision-related activity for neurons in visual area V2 of macaques performing fine disparity discrimination, as yet the earliest site for this task. These findings, and previously reported results from V2 in a different task, deviated from the predictions for optimal linear readout of a population with information-limiting correlations. Our results suggest that optimal linear decoding of early sensory information is not a general decoding strategy used by the brain. Copyright © 2017 the authors 0270-6474/17/370715-11$15.00/0.
NASA Astrophysics Data System (ADS)
Rohaeti, Eti; Rafi, Mohamad; Syafitri, Utami Dyah; Heryanto, Rudi
2015-02-01
Turmeric (Curcuma longa), java turmeric (Curcuma xanthorrhiza) and cassumunar ginger (Zingiber cassumunar) are widely used in traditional Indonesian medicines (jamu). They have similar color for their rhizome and possess some similar uses, so it is possible to substitute one for the other. The identification and discrimination of these closely-related plants is a crucial task to ensure the quality of the raw materials. Therefore, an analytical method which is rapid, simple and accurate for discriminating these species using Fourier transform infrared spectroscopy (FTIR) combined with some chemometrics methods was developed. FTIR spectra were acquired in the mid-IR region (4000-400 cm⁻¹). Standard normal variate, first and second order derivative spectra were compared for the spectral data. Principal component analysis (PCA) and canonical variate analysis (CVA) were used for the classification of the three species. Samples could be discriminated by visual analysis of the FTIR spectra by using their marker bands. Discrimination of the three species was also possible through the combination of the pre-processed FTIR spectra with PCA and CVA, in which CVA gave clearer discrimination. Subsequently, the developed method could be used for the identification and discrimination of the three closely-related plant species.
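A minimal sketch of the preprocessing-plus-chemometrics pipeline named above, assuming simulated spectra in place of real FTIR data; SNV (standard normal variate) scaling and PCA follow the standard definitions, and a CVA/LDA step could be applied to the resulting scores.

```python
# Hedged sketch of the chemometrics pipeline: SNV pre-processing followed by
# PCA. Spectra here are random stand-ins, not measured rhizome spectra.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
wavenumbers = np.linspace(4000, 400, 1800)
spectra = rng.normal(size=(30, wavenumbers.size))          # stand-in for 30 sample spectra

def snv(X):
    """Standard normal variate: centre and scale each spectrum by its own mean and SD."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

scores = PCA(n_components=2).fit_transform(snv(spectra))   # score plot for class separation
print(scores.shape)  # (30, 2); CVA/LDA on these scores would sharpen class boundaries
```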
de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.
2016-01-01
The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target action and direction discrimination specific visual processes. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633
Sakimoto, Yuya; Sakata, Shogo
2014-01-01
It has been shown that both a simple discrimination task (A+, B−) and a simultaneous feature-negative (FN) task (A+, AB−) can be solved using a hippocampal-independent strategy. Recently, we showed that the number of sessions required for a rat to completely learn a task differed between the FN and simple discrimination tasks, and that there was a difference in hippocampal theta activity between these tasks. These results suggested that solving the FN task relied on a different strategy than the simple discrimination task. In this study, we provided supportive evidence that solving the FN and simple discrimination tasks involved different strategies by examining changes in performance and hippocampal theta activity in the FN task after transfer from the simple discrimination task (A+, B− → A+, AB−). The results of this study showed that performance on the FN task was impaired and that there was a difference in hippocampal theta activity between the simple discrimination task and the FN task. Thus, we concluded that solving the FN task uses a different strategy than the simple discrimination task. PMID:24917797
Chromatic information and feature detection in fast visual analysis
Del Viva, Maria M.; Punzi, Giovanni; Shevell, Steven K.; ...
2016-08-01
The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in the visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artists' sketches are usually monochromatic, and black-and-white movies provide compelling representations of real-world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysical measurements of fast-viewing discrimination of natural scenes. As a result, we conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in.
Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue
2009-06-15
Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.
Convergent-Discriminant Validity of the Jewish Employment Vocational System (JEVS).
ERIC Educational Resources Information Center
Tryjankowski, Elaine M.
This study investigated the construct validity of five perceptual traits (auditory discrimination, visual discrimination, visual memory, visual-motor coordination, and auditory to visual-motor coordination) with five simulated work samples (union assembly, resistor reading, budgette assembly, lock assembly, and nail and screw sort) from the Jewish…
Stimulus novelty, task relevance and the visual evoked potential in man
NASA Technical Reports Server (NTRS)
Courchesne, E.; Hillyard, S. A.; Galambos, R.
1975-01-01
The effect of task relevance on P3 waves (a component of the human evoked potential) and the methodologies used to deal with them are outlined. Visual evoked potentials (VEPs) were recorded from normal adult subjects performing in a visual discrimination task. Subjects counted the number of presentations of the numeral 4, which was interposed rarely and randomly within a sequence of tachistoscopically flashed background stimuli (the numeral 2). Intrusive, task-irrelevant (not counted) stimuli were also interspersed rarely and randomly in the sequence of 2s; these stimuli were of two types: simples, which were easily recognizable, and novels, which were completely unrecognizable. It was found that the simples and the counted 4s evoked posteriorly distributed P3 waves, while the irrelevant novels evoked large, frontally distributed P3 waves. These large, frontal P3 waves to novels were also found to be preceded by large N2 waves. These findings indicate that the P3 wave is not a unitary phenomenon but should be considered in terms of a family of waves, differing in their brain generators and in their psychological correlates.
Dynamic and predictive links between touch and vision.
Gray, Rob; Tan, Hong Z
2002-07-01
We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.
Contrast discrimination, non-uniform patterns and change blindness.
Scott-Brown, K C; Orbach, H S
1998-01-01
Change blindness--our inability to detect large changes in natural scenes when saccades, blinks and other transients interrupt visual input--seems to contradict psychophysical evidence for our exquisite sensitivity to contrast changes. Can the type of effects described as 'change blindness' be observed with simple, multi-element stimuli, amenable to psychophysical analysis? Such stimuli, composed of five mixed contrast elements, elicited a striking increase in contrast increment thresholds compared to those for an isolated element. Cue presentation prior to the stimulus substantially reduced thresholds, as for change blindness with natural scenes. On one hand, explanations for change blindness based on abstract and sketchy representations in short-term visual memory seem inappropriate for this low-level image property of contrast where there is ample evidence for exquisite performance on memory tasks. On the other hand, the highly increased thresholds for mixed contrast elements, and the decreased thresholds when a cue is present, argue against any simple early attentional or sensory explanation for change blindness. Thus, psychophysical results for very simple patterns cannot straightforwardly predict results even for the slightly more complicated patterns studied here. PMID:9872004
The oblique effect is both allocentric and egocentric
Mikellidou, Kyriaki; Cicchini, Guido Marco; Thompson, Peter G.; Burr, David C.
2016-01-01
Despite continuous movements of the head, humans maintain a stable representation of the visual world, which seems to remain always upright. The mechanisms behind this stability are largely unknown. To gain some insight on how head tilt affects visual perception, we investigate whether a well-known orientation-dependent visual phenomenon, the oblique effect—superior performance for stimuli at cardinal orientations (0° and 90°) compared with oblique orientations (45°)—is anchored in egocentric or allocentric coordinates. To this aim, we measured orientation discrimination thresholds at various orientations for different head positions both in body upright and in supine positions. We report that, in the body upright position, the oblique effect remains anchored in allocentric coordinates irrespective of head position. When lying supine, gravitational effects in the plane orthogonal to gravity are discounted. Under these conditions, the oblique effect was less marked than when upright, and anchored in egocentric coordinates. The results are well explained by a simple “compulsory fusion” model in which the head-based and the gravity-based signals are combined with different weightings (30% and 70%, respectively), even when this leads to reduced sensitivity in orientation discrimination. PMID:26129862
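A minimal reading of the "compulsory fusion" account quoted above is a weighted combination of head-based and gravity-based orientation references; the sketch below uses the reported 30%/70% weights and is an illustrative simplification, not the authors' fitted model.

```python
# Minimal sketch of the "compulsory fusion" idea: the internal orientation
# reference is a weighted combination of head-based and gravity-based
# estimates (weights of roughly 30% and 70% in the study's fit).
def fused_reference(head_tilt_deg, gravity_deg=0.0, w_head=0.3, w_gravity=0.7):
    """Return the effective reference orientation in degrees."""
    return w_head * head_tilt_deg + w_gravity * gravity_deg

# With the head tilted 45 deg while the body is upright, the reference rotates
# only part of the way, so cardinal axes align fully with neither frame:
print(fused_reference(45.0))  # 13.5 deg
```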
Deep neural networks for modeling visual perceptual learning.
Wenliang, Li; Seitz, Aaron R
2018-05-23
Understanding visual perceptual learning (VPL) has become increasingly more challenging as new phenomena are discovered with novel stimuli and training paradigms. While existing models aid our knowledge of critical aspects of VPL, the connections shown by these models between behavioral learning and plasticity across different brain areas are typically superficial. Most models explain VPL as readout from simple perceptual representations to decision areas and are not easily adaptable to explain new findings. Here, we show that a well-known instance of deep neural network (DNN), while not designed specifically for VPL, provides a computational model of VPL with enough complexity to be studied at many levels of analyses. After learning a Gabor orientation discrimination task, the DNN model reproduced key behavioral results, including increasing specificity with higher task precision, and also suggested that learning precise discriminations could asymmetrically transfer to coarse discriminations when the stimulus conditions varied. In line with the behavioral findings, the distribution of plasticity moved towards lower layers when task precision increased, and this distribution was also modulated by tasks with different stimulus types. Furthermore, learning in the network units demonstrated close resemblance to extant electrophysiological recordings in monkey visual areas. Altogether, the DNN fulfilled predictions of existing theories regarding specificity and plasticity, and reproduced findings of tuning changes in neurons of the primate visual areas. Although the comparisons were mostly qualitative, the DNN provides a new method of studying VPL and can serve as a testbed for theories and assist in generating predictions for physiological investigations. SIGNIFICANCE STATEMENT Visual perceptual learning (VPL) has been found to cause changes at multiple stages of the visual hierarchy. We found that training a deep neural network (DNN) on an orientation discrimination task produced similar behavioral and physiological patterns found in human and monkey experiments. Unlike existing VPL models, the DNN was pre-trained on natural images to reach high performance in object recognition but was not designed specifically for VPL, and yet it fulfilled predictions of existing theories regarding specificity and plasticity, and reproduced findings of tuning changes in neurons of the primate visual areas. When used with care, this unbiased and deep-hierarchical model can provide new ways of studying VPL from behavior to physiology. Copyright © 2018 the authors.
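The sketch below illustrates only the task structure, a Gabor orientation discrimination, using a plain linear readout on pixel values; the study itself fine-tuned a deep network pretrained on natural images, which is not reproduced here. Stimulus parameters (spatial frequency, envelope, noise level) are arbitrary choices for the demonstration.

```python
# Illustrative sketch of a Gabor orientation-discrimination task with a simple
# linear readout (not the study's pretrained deep network).
import numpy as np
from sklearn.linear_model import LogisticRegression

def gabor(theta_deg, size=32, freq=0.2, sigma=6.0, phase=0.0):
    ax = np.arange(size) - size / 2
    x, y = np.meshgrid(ax, ax)
    t = np.deg2rad(theta_deg)
    xr = x * np.cos(t) + y * np.sin(t)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr + phase)

rng = np.random.default_rng(2)
thetas = np.array([45.0, 55.0])                     # a coarse 10-degree discrimination
labels = rng.integers(0, 2, 200)
X = np.stack([gabor(thetas[l], phase=rng.uniform(0, 2 * np.pi)).ravel()
              + 0.3 * rng.normal(size=32 * 32) for l in labels])

clf = LogisticRegression(max_iter=2000).fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```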
ERIC Educational Resources Information Center
Squire, Larry R.; Levy, Daniel A.; Shrager, Yael
2005-01-01
The perirhinal cortex is known to be important for memory, but there has recently been interest in the possibility that it might also be involved in visual perceptual functions. In four experiments, we assessed visual discrimination ability and visual discrimination learning in severely amnesic patients with large medial temporal lobe lesions that…
ERIC Educational Resources Information Center
Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve
2018-01-01
To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…
ERIC Educational Resources Information Center
Kodak, Tiffany; Clements, Andrea; Paden, Amber R.; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A.
2015-01-01
The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The…
Petruno, Sarah K; Clark, Robert E; Reinagel, Pamela
2013-01-01
The pigmented Long-Evans rat has proven to be an excellent subject for studying visually guided behavior including quantitative visual psychophysics. This observation, together with its experimental accessibility and its close homology to the mouse, has made it an attractive model system in which to dissect the thalamic and cortical circuits underlying visual perception. Given that visually guided behavior in the absence of primary visual cortex has been described in the literature, however, it is an empirical question whether specific visual behaviors will depend on primary visual cortex in the rat. Here we tested the effects of cortical lesions on performance of two-alternative forced-choice visual discriminations by Long-Evans rats. We present data from one highly informative subject that learned several visual tasks and then received a bilateral lesion ablating >90% of primary visual cortex. After the lesion, this subject had a profound and persistent deficit in complex image discrimination, orientation discrimination, and full-field optic flow motion discrimination, compared with both pre-lesion performance and sham-lesion controls. Performance was intact, however, on another visual two-alternative forced-choice task that required approaching a salient visual target. A second highly informative subject learned several visual tasks prior to receiving a lesion ablating >90% of medial extrastriate cortex. This subject showed no impairment on any of the four task categories. Taken together, our data provide evidence that these image, orientation, and motion discrimination tasks require primary visual cortex in the Long-Evans rat, whereas approaching a salient visual target does not.
Visual discrimination training improves Humphrey perimetry in chronic cortically induced blindness.
Cavanaugh, Matthew R; Huxlin, Krystel R
2017-05-09
To assess if visual discrimination training improves performance on visual perimetry tests in chronic stroke patients with visual cortex involvement. 24-2 and 10-2 Humphrey visual fields were analyzed for 17 chronic cortically blind stroke patients prior to and following visual discrimination training, as well as in 5 untrained, cortically blind controls. Trained patients practiced direction discrimination, orientation discrimination, or both, at nonoverlapping, blind field locations. All pretraining and posttraining discrimination performance and Humphrey fields were collected with online eye tracking, ensuring gaze-contingent stimulus presentation. Trained patients recovered ∼108 deg² of vision on average, while untrained patients spontaneously improved over an area of ∼16 deg². Improvement was not affected by patient age, time since lesion, size of initial deficit, or training type, but was proportional to the amount of training performed. Untrained patients counterbalanced their improvements with worsening of sensitivity over ∼9 deg² of their visual field. Worsening was minimal in trained patients. Finally, although discrimination performance improved at all trained locations, changes in Humphrey sensitivity occurred both within trained regions and beyond, extending over a larger area along the blind field border. In adults with chronic cortical visual impairment, the blind field border appears to have enhanced plastic potential, which can be recruited by gaze-controlled visual discrimination training to expand the visible field. Our findings underscore a critical need for future studies to measure the effects of vision restoration approaches on perimetry in larger cohorts of patients. Copyright © 2017 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the American Academy of Neurology.
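As a rough illustration of how a recovered area might be tallied from perimetry, the sketch below counts 24-2 test locations whose sensitivity improved by at least a criterion amount and converts the count to deg²; the 6 dB criterion and the 36 deg² per test point (from the grid's 6° spacing) are assumptions for illustration, not the paper's exact analysis, and the numbers are invented.

```python
# Hedged sketch of tallying an "area of improvement" in deg^2 from pre- and
# post-training Humphrey sensitivity values. Criterion and per-point area are
# illustrative assumptions; the data are made up.
import numpy as np

def improved_area(pre_db, post_db, criterion_db=6.0, deg2_per_point=36.0):
    """Count test locations whose sensitivity rose by >= criterion_db."""
    improved = (np.asarray(post_db) - np.asarray(pre_db)) >= criterion_db
    return improved.sum() * deg2_per_point

pre = [0, 0, 2, 14, 25, 27]        # dB sensitivities at six blind-border locations
post = [8, 1, 10, 15, 31, 27]
print(improved_area(pre, post))    # 3 improved locations x 36 deg^2 = 108 deg^2
```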
The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study
Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.
2008-01-01
Background Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use. PMID:18958150
To call a cloud 'cirrus': sound symbolism in names for categories or items.
Ković, Vanja; Sučević, Jelena; Styles, Suzy J
2017-01-01
The aim of the present paper is to experimentally test whether sound symbolism has selective effects on labels with different ranges-of-reference within a simple noun-hierarchy. In two experiments, adult participants learned the make up of two categories of unfamiliar objects ('alien life forms'), and were passively exposed to either category-labels or item-labels, in a learning-by-guessing categorization task. Following category training, participants were tested on their visual discrimination of object pairs. For different groups of participants, the labels were either congruent or incongruent with the objects. In Experiment 1, when trained on items with individual labels, participants were worse (made more errors) at detecting visual object mismatches when trained labels were incongruent. In Experiment 2, when participants were trained on items in labelled categories, participants were faster at detecting a match if the trained labels were congruent, and faster at detecting a mismatch if the trained labels were incongruent. This pattern of results suggests that sound symbolism in category labels facilitates later similarity judgments when congruent, and discrimination when incongruent, whereas for item labels incongruence generates error in judgements of visual object differences. These findings reveal that sound symbolic congruence has a different outcome at different levels of labelling within a noun hierarchy. These effects emerged in the absence of the label itself, indicating subtle but pervasive effects on visual object processing.
Bonetti, Jennifer; Quarino, Lawrence
2014-05-01
This study has shown that the combination of simple techniques with the use of multivariate statistics offers the potential for the comparative analysis of soil samples. Five samples were obtained from each of twelve state parks across New Jersey in both the summer and fall seasons. Each sample was examined using particle-size distribution, pH analysis in both water and 1 M CaCl2 , and a loss on ignition technique. Data from each of the techniques were combined, and principal component analysis (PCA) and canonical discriminant analysis (CDA) were used for multivariate data transformation. Samples from different locations could be visually differentiated from one another using these multivariate plots. Hold-one-out cross-validation analysis showed error rates as low as 3.33%. Ten blind study samples were analyzed resulting in no misclassifications using Mahalanobis distance calculations and visual examinations of multivariate plots. Seasonal variation was minimal between corresponding samples, suggesting potential success in forensic applications. © 2014 American Academy of Forensic Sciences.
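The sketch below shows the general form of such an analysis, a linear (canonical) discriminant classifier evaluated with leave-one-out cross-validation, on simulated placeholder data rather than the study's soil measurements.

```python
# Sketch only: hold-one-out (leave-one-out) cross-validation of a linear
# discriminant classifier, the kind of analysis behind the error rates above.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(3)
n_sites, n_per_site, n_features = 12, 5, 8           # e.g., size fractions, pH, loss on ignition
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_site, n_features))
               for i in range(n_sites)])
y = np.repeat(np.arange(n_sites), n_per_site)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut()).mean()
print(f"hold-one-out error rate: {(1 - acc):.2%}")
```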
Using Prosopagnosia to Test and Modify Visual Recognition Theory.
O'Brien, Alexander M
2018-02-01
Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions or categorical object differences in visual discrimination tasks versus the control participants' additional ability to use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.
Development and validation of a brief, descriptive Danish pain questionnaire (BDDPQ).
Perkins, F M; Werner, M U; Persson, F; Holte, K; Jensen, T S; Kehlet, H
2004-04-01
A new pain questionnaire should be simple, be documented to have discriminative function, and be related to previously used questionnaires. Word meaning was validated by using bilingual Danish medical students and asking them to translate words taken from the Danish version of the McGill pain questionnaire into English. Evaluative word value was estimated using a visual analog scale (VAS). Discriminative function was assessed by having patients with one of six painful conditions (postherpetic neuralgia, phantom limb pain, rheumatoid arthritis, ankle fracture, appendicitis, or labor pain) complete the questionnaire. We were not able to find Danish words that were reliably back-translated to the English words 'splitting' or 'gnawing'. A simple three-word set of evaluative terms had good separation when rated on a VAS scale ('let' 17.5+/-6.5 mm; 'moderat' 42.7+/-8.6 mm; and 'staerk' 74.9+/-9.7 mm). The questionnaire was able to discriminate among the six painful conditions with 77% accuracy by just using the descriptive words. The accuracy of the questionnaire increased to 96% with the addition of evaluative terms (for pain at rest and with activity), chronicity (acute vs. chronic), and location of the pain. A Danish pain questionnaire that subjects and patients can self-administer has been developed and validated relative to the words used in the English McGill Pain questionnaire. The discriminative ability of the questionnaire among some common painful conditions has been tested and documented. The questionnaire may be of use in patient care and research.
Takajo, Ichiro; Yamada, Akiteru; Umeki, Kazumi; Saeki, Yuji; Hashikura, Yuuki; Yamamoto, Ikuo; Umekita, Kunihiko; Urayama-Kawano, Midori; Yamasaki, Shogo; Taniguchi, Takako; Misawa, Naoaki; Okayama, Akihiko
2018-01-01
Vibrio furnissii and V. fluvialis are closely related, the discrimination of which by conventional biochemical assay remains a challenge. Investigation of the sequence of the 16S rRNA genes in a clinical isolate of V. furnissii by visual inspection of a sequencing electropherogram revealed two sites of single-nucleotide polymorphisms (SNPs; positions 460 A/G and 1261 A/G) in these genes. A test of 12 strains each of V. fluvialis and V. furnissii revealed these SNPs to be common in V. furnissii but not in V. fluvialis. Divergence of SNP frequency was observed among the strains of V. furnissii tested. Because the SNPs described in V. furnissii produce a difference in the target sequence of restriction enzymes, a combination of polymerase chain reaction (PCR) of the 16S rRNA genes using conventional primers and restriction fragment length polymorphism analysis using EcoRV and EaeI was shown to discriminate between V. fluvialis and V. furnissii. This method is simple and alleviates the need for expensive equipment or primer sets specific to these bacteria. Therefore, we believe that this method can be useful, alongside specific PCR and mass spectrometry, when there is a need to discriminate between V. fluvialis and V. furnissii. Copyright © 2017 Elsevier B.V. All rights reserved.
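An in-silico sketch of the PCR-RFLP logic: scan an amplicon for the EcoRV (GATATC) and EaeI (YGGCCR) recognition sequences that the reported SNPs create or destroy. The sequence used below is an invented placeholder, not a real V. furnissii 16S fragment.

```python
# Minimal in-silico RFLP check: find EcoRV and EaeI recognition sites in an
# amplicon sequence. The sequence below is a placeholder for illustration.
import re

SITES = {"EcoRV": "GATATC", "EaeI": "[CT]GGCC[AG]"}   # IUPAC Y = [CT], R = [AG]

def cut_sites(seq):
    seq = seq.upper()
    return {enzyme: [m.start() for m in re.finditer(pattern, seq)]
            for enzyme, pattern in SITES.items()}

amplicon = "ACGTGATATCGGTTCGGCCATTAGGCCGA"            # placeholder sequence
print(cut_sites(amplicon))   # positions where each enzyme recognition site occurs
```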
Additional Remarks on Designing Category-Level Attributes for Discriminative Visual Recognition
2013-01-01
Felix X. Yu†, Liangliang Cao§, Rogerio S. Feris§, John R. Smith§, Shih-Fu Chang† († Columbia University, § IBM T. J...). These additional remarks accompany "Designing Category-Level Attributes for Discriminative Visual Recognition" [3], and first provide an overview of the proposed approach in...
Preschoolers Benefit From Visually Salient Speech Cues
Holt, Rachael Frush
2015-01-01
Purpose This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336
Simultaneous Visual Discrimination in Asian Elephants
ERIC Educational Resources Information Center
Nissani, Moti; Hoefler-Nissani, Donna; Lay, U. Tin; Htun, U. Wan
2005-01-01
Two experiments explored the behavior of 20 Asian elephants ("Elephas aximus") in simultaneous visual discrimination tasks. In Experiment 1, 7 Burmese logging elephants acquired a white+/black- discrimination, reaching criterion in a mean of 2.6 sessions and 117 discrete trials, whereas 4 elephants acquired a black+/white- discrimination in 5.3…
Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning.
Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro
2014-01-01
Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1-5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat.
Visual speech discrimination and identification of natural and synthetic consonant stimuli
Files, Benjamin T.; Tjan, Bosco S.; Jiang, Jintao; Bernstein, Lynne E.
2015-01-01
From phonetic features to connected discourse, every level of psycholinguistic structure including prosody can be perceived through viewing the talking face. Yet a longstanding notion in the literature is that visual speech perceptual categories comprise groups of phonemes (referred to as visemes), such as /p, b, m/ and /f, v/, whose internal structure is not informative to the visual speech perceiver. This conclusion has not to our knowledge been evaluated using a psychophysical discrimination paradigm. We hypothesized that perceivers can discriminate the phonemes within typical viseme groups, and that discrimination measured with d-prime (d’) and response latency is related to visual stimulus dissimilarities between consonant segments. In Experiment 1, participants performed speeded discrimination for pairs of consonant-vowel spoken nonsense syllables that were predicted to be same, near, or far in their perceptual distances, and that were presented as natural or synthesized video. Near pairs were within-viseme consonants. Natural within-viseme stimulus pairs were discriminated significantly above chance (except for /k/-/h/). Sensitivity (d’) increased and response times decreased with distance. Discrimination and identification were superior with natural stimuli, which comprised more phonetic information. We suggest that the notion of the viseme as a unitary perceptual category is incorrect. Experiment 2 probed the perceptual basis for visual speech discrimination by inverting the stimuli. Overall reductions in d’ with inverted stimuli but a persistent pattern of larger d’ for far than for near stimulus pairs are interpreted as evidence that visual speech is represented by both its motion and configural attributes. The methods and results of this investigation open up avenues for understanding the neural and perceptual bases for visual and audiovisual speech perception and for development of practical applications such as visual lipreading/speechreading speech synthesis. PMID:26217249
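For reference, the d' measure used above can be computed from hit and false-alarm counts as in the sketch below; the 0.5-count (log-linear) correction and the example counts are illustrative choices, not the study's data.

```python
# Hedged sketch: d-prime for a same/different discrimination from hit and
# false-alarm counts, with a log-linear correction to avoid infinite z-scores.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1)          # add 0.5 to each count
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g., a within-viseme ("near") pair vs. a between-viseme ("far") pair
print(d_prime(30, 20, 12, 38))   # near pair: modest d'
print(d_prime(45, 5, 4, 46))     # far pair: larger d'
```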
Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.
2018-01-01
Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292
Kodak, Tiffany; Clements, Andrea; Paden, Amber R; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A
2015-01-01
The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The results of the skills assessment showed that 4 participants failed to demonstrate mastery of at least 1 of the skills. We compared the outcomes of the assessment to the results of auditory-visual conditional discrimination training and found that training outcomes were related to the assessment outcomes for 7 of the 9 participants. One participant who did not demonstrate mastery of all assessment skills subsequently learned several conditional discriminations when blocked training trials were conducted. Another participant who did not demonstrate mastery of the auditory discrimination skill subsequently acquired conditional discriminations in 1 of the training conditions. We discuss the implications of the assessment for practice and suggest additional areas of research on this topic. © Society for the Experimental Analysis of Behavior.
The Simplest Chronoscope V: A Theory of Dual Primary and Secondary Reaction Time Systems.
Montare, Alberto
2016-12-01
Extending work by Montare, visual simple reaction time, choice reaction time, discriminative reaction time, and overall reaction time scores obtained from college students by the simplest chronoscope (a falling meterstick) method were significantly faster as well as significantly less variable than scores of the same individuals from electromechanical reaction timers (machine method). Results supported the existence of dual reaction time systems: an ancient primary reaction time system theoretically activating the V5 parietal area of the dorsal visual stream that evolved to process significantly faster sensory-motor reactions to sudden stimulations arising from environmental objects in motion, and a secondary reaction time system theoretically activating the V4 temporal area of the ventral visual stream that subsequently evolved to process significantly slower sensory-perceptual-motor reactions to sudden stimulations arising from motionless colored objects. © The Author(s) 2016.
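The physics behind the falling-meterstick chronoscope is simply free fall, d = ½·g·t², so the catch distance converts to a reaction time of t = √(2d/g); the sketch below applies this with illustrative catch distances.

```python
# Sketch of the distance-to-time conversion for a dropped-and-caught meterstick.
# Catch distances below are illustrative, not the study's data.
import math

def reaction_time_s(catch_distance_cm, g=9.81):
    d = catch_distance_cm / 100.0            # centimetres on the meterstick -> metres
    return math.sqrt(2.0 * d / g)            # t = sqrt(2d / g)

for d_cm in (15, 20, 25):
    print(f"caught at {d_cm} cm -> {reaction_time_s(d_cm) * 1000:.0f} ms")
```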
Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter
2018-05-01
Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli, and it is well established at both the behavioral and neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues to pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
Lambert, Anthony J; Wootton, Adrienne
2017-08-01
Different patterns of high-density EEG activity were elicited by the same peripheral stimuli, in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parietal-occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task, compared to the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171-270 ms after stimulus onset) increased temporal-occipital negativity, and stronger recruitment of ITG and FFG, were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams, to support rapid shifts of attention in response to contextual landmarks, and conscious discrimination, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.
Allon, Ayala S.; Balaban, Halely; Luria, Roy
2014-01-01
In three experiments we manipulated the resolution of novel complex objects in visual working memory (WM) by changing task demands. Previous studies that investigated the trade-off between quantity and resolution in visual WM yielded mixed results for simple familiar stimuli. We used the contralateral delay activity as an electrophysiological marker to directly track the deployment of visual WM resources while participants preformed a change-detection task. Across three experiments we presented the same novel complex items but changed the task demands. In Experiment 1 we induced a medium resolution task by using change trials in which a random polygon changed to a different type of polygon and replicated previous findings showing that novel complex objects are represented with higher resolution relative to simple familiar objects. In Experiment 2 we induced a low resolution task that required distinguishing between polygons and other types of stimulus categories, but we failed in finding a corresponding decrease in the resolution of the represented item. Finally, in Experiment 3 we induced a high resolution task that required discriminating between highly similar polygons with somewhat different contours. This time, we observed an increase in the item’s resolution. Our findings indicate that the resolution for novel complex objects can be increased but not decreased according to task demands, suggesting that minimal resolution is required in order to maintain these items in visual WM. These findings support studies claiming that capacity and resolution in visual WM reflect different mechanisms. PMID:24734026
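A minimal sketch of how a CDA amplitude is typically quantified, the contralateral-minus-ipsilateral difference at posterior electrodes averaged over a retention window; the 300-900 ms window, the epoch layout and the toy data below are assumptions for illustration, not this study's parameters.

```python
# Minimal sketch of a contralateral delay activity (CDA) measure from epoched
# EEG amplitudes. Window, layout and toy data are illustrative assumptions.
import numpy as np

def cda(contra_epochs, ipsi_epochs, times_ms, window=(300, 900)):
    """contra/ipsi_epochs: (n_trials, n_times) arrays in microvolts."""
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    return contra_epochs[:, mask].mean() - ipsi_epochs[:, mask].mean()
    # more negative values = larger CDA = more visual WM resources deployed

times = np.arange(-200, 1000, 4)                        # 250 Hz sampling, in ms
rng = np.random.default_rng(4)
contra = rng.normal(-1.0, 2.0, size=(100, times.size))  # toy data with a -1 uV shift
ipsi = rng.normal(0.0, 2.0, size=(100, times.size))
print(f"CDA amplitude: {cda(contra, ipsi, times):.2f} uV")
```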
Silvoniemi, Antti; Din, Mueez U; Suilamo, Sami; Shepherd, Tony; Minn, Heikki
2016-11-01
Delineation of gross tumour volume in 3D is a critical step in the radiotherapy (RT) treatment planning for oropharyngeal cancer (OPC). Static [18F]-FDG PET/CT imaging has been suggested as a method to improve the reproducibility of tumour delineation, but it suffers from low specificity. We undertook this pilot study in which dynamic features in time-activity curves (TACs) of [18F]-FDG PET/CT images were applied to help the discrimination of tumour from inflammation and adjacent normal tissue. Five patients with OPC underwent dynamic [18F]-FDG PET/CT imaging in treatment position. Voxel-by-voxel analysis was performed to evaluate seven dynamic features developed with the knowledge of differences in glucose metabolism in different tissue types and visual inspection of TACs. The Gaussian mixture model and K-means algorithms were used to evaluate the performance of the dynamic features in discriminating tumour voxels compared to the performance of standardized uptake values obtained from static imaging. Some dynamic features showed a trend towards discrimination of different metabolic areas but lack of consistency means that clinical application is not recommended based on these results alone. Impact of inflammatory tissue remains a problem for volume delineation in RT of OPC, but a simple dynamic imaging protocol proved practicable and enabled simple data analysis techniques that show promise for complementing the information in static uptake values.
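As a rough sketch of the clustering step described above, the snippet below fits scikit-learn's Gaussian mixture model and K-means to a voxel-by-feature matrix of dynamic TAC features. The synthetic feature matrix, the example feature names in the comment, and the choice of three clusters are assumptions for illustration, not the authors' protocol:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

# tac_features: (n_voxels, n_features) matrix of per-voxel dynamic features
# derived from time-activity curves (e.g., late-uptake slope, time to peak,
# early/late activity ratio); random numbers stand in for real data here.
rng = np.random.default_rng(0)
tac_features = rng.normal(size=(5000, 7))

gmm_labels = GaussianMixture(n_components=3, covariance_type='full',
                             random_state=0).fit_predict(tac_features)
km_labels = KMeans(n_clusters=3, n_init=10,
                   random_state=0).fit_predict(tac_features)
# Cluster labels could then be compared voxel-wise against tumour,
# inflammation, and normal-tissue regions of interest.
```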
Object recognition with hierarchical discriminant saliency networks.
Han, Sunhyoung; Vasconcelos, Nuno
2014-01-01
The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. As a model of neural computation, the HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a convolutional neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. The performance of the network in saliency and object recognition tasks is compared to those of models from the biological and computer vision literatures. This demonstrates benefits for all the functional enhancements of the HDSN, the class tuning inherent to discriminant saliency, and saliency layers based on templates of increasing target selectivity and invariance. Altogether, these experiments suggest that there are non-trivial benefits in integrating attention and recognition.
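The abstract does not spell out the parametric extension of the ReLU used in the HDSN; one generic parametric rectifier with a tunable threshold and slope is sketched below purely as a stand-in, not as the network's actual nonlinearity:

```python
import numpy as np

def parametric_relu(x, threshold=0.0, slope=1.0):
    """Generic parametric rectifier: zero below a tunable threshold and linear
    with a tunable slope above it. Illustrative only; the HDSN's saliency-derived
    parameterization is not specified in the abstract."""
    return slope * np.maximum(x - threshold, 0.0)

# Example: raising the threshold suppresses weak (non-salient) responses
x = np.linspace(-2, 2, 9)
print(parametric_relu(x))                         # plain ReLU behaviour
print(parametric_relu(x, threshold=0.5, slope=2.0))
```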
Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria
2013-08-01
In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks, the visual field is typically occluded, forcing stiffness perception to be dependent exclusively on haptic information. No studies to date have addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy improved slightly, but decision time deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinctive phases: A single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgical procedures, when the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.
ERIC Educational Resources Information Center
Behrmann, Polly; Millman, Joan
The activities collected in this handbook are planned for parents to use with their children in a learning experience. They can also be used in the classroom. Sections contain games designed to develop visual discrimination, auditory discrimination, motor coordination and oral expression. An objective is given for each game, and directions for…
Associative visual learning by tethered bees in a controlled visual environment.
Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin
2017-10-10
Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS-). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.
Vermaercke, Ben; Van den Bergh, Gert; Gerich, Florian; Op de Beeck, Hans
2015-01-01
Recent studies have revealed a surprising degree of functional specialization in rodent visual cortex. It is unknown to what degree this functional organization is related to the well-known hierarchical organization of the visual system in primates. We designed a study in rats that targets one of the hallmarks of the hierarchical object vision pathway in primates: selectivity for behaviorally relevant dimensions. We compared behavioral performance in a visual water maze with neural discriminability in five visual cortical areas. We tested behavioral discrimination in two independent batches of six rats using six pairs of shapes used previously to probe shape selectivity in monkey cortex (Lehky and Sereno, 2007). The relative difficulty (error rate) of shape pairs was strongly correlated between the two batches, indicating that some shape pairs were more difficult to discriminate than others. Then, we recorded in naive rats from five visual areas from primary visual cortex (V1) over areas LM, LI, LL, up to lateral occipito-temporal cortex (TO). Shape selectivity in the upper layers of V1, where the information enters cortex, correlated mostly with physical stimulus dissimilarity and not with behavioral performance. In contrast, neural discriminability in lower layers of all areas was strongly correlated with behavioral performance. These findings, in combination with the results from Vermaercke et al. (2014b), suggest that the functional specialization in rodent lateral visual cortex reflects a processing hierarchy resulting in the emergence of complex selectivity that is related to behaviorally relevant stimulus differences.
NASA Astrophysics Data System (ADS)
Asiedu, Mercy Nyamewaa; Simhal, Anish; Lam, Christopher T.; Mueller, Jenna; Chaudhary, Usamah; Schmitt, John W.; Sapiro, Guillermo; Ramanujam, Nimmi
2018-02-01
The World Health Organization recommends visual inspection with acetic acid (VIA) and/or Lugol's Iodine (VILI) for cervical cancer screening in low-resource settings. Human interpretation of diagnostic indicators for visual inspection is qualitative, subjective, and has high inter-observer discordance, which could lead both to adverse outcomes for the patient and unnecessary follow-ups. In this work, we present a simple method for automatic feature extraction and classification of Lugol's Iodine cervigrams acquired with a low-cost, miniature, digital colposcope. Algorithms to preprocess expert physician-labelled cervigrams and to extract simple but powerful color-based features are introduced. The features are used to train a support vector machine model to classify cervigrams based on expert physician labels. The selected framework achieved a sensitivity, specificity, and accuracy of 89.2%, 66.7%, and 80.6%, respectively, against the majority diagnosis of the expert physicians in discriminating cervical intraepithelial neoplasia (CIN+) from normal tissue. The proposed classifier also achieved an area under the curve of 84 when trained with the majority diagnosis of the expert physicians. The results suggest that utilizing simple color-based features may enable unbiased automation of VILI cervigrams, opening the door to a full system of low-cost data acquisition complemented with automatic interpretation.
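As an illustration of the classification framework described above (color-based features feeding a support vector machine, evaluated by sensitivity, specificity, and accuracy against expert labels), a minimal scikit-learn sketch follows. The features and labels are synthetic placeholders, not cervigram data, and the kernel and parameters are assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# X: per-image color features (e.g., channel means/standard deviations inside
# the cervix region); y: expert majority label (1 = CIN+, 0 = normal).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = rng.integers(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = SVC(kernel='rbf', C=1.0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te), labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(sensitivity, specificity, accuracy)
```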
Sensory adaptation for timing perception.
Roseboom, Warrick; Linares, Daniel; Nishida, Shin'ya
2015-04-22
Recent sensory experience modifies subjective timing perception. For example, when visual events repeatedly lead auditory events, such as when the sound and video tracks of a movie are out of sync, subsequent vision-leads-audio presentations are reported as more simultaneous. This phenomenon could provide insights into the fundamental problem of how timing is represented in the brain, but the underlying mechanisms are poorly understood. Here, we show that the effect of recent experience on timing perception is not just subjective; recent sensory experience also modifies relative timing discrimination. This result indicates that recent sensory history alters the encoding of relative timing in sensory areas, excluding explanations of the subjective phenomenon based only on decision-level changes. The pattern of changes in timing discrimination suggests the existence of two sensory components, similar to those previously reported for visual spatial attributes: a lateral shift in the nonlinear transducer that maps relative timing into perceptual relative timing and an increase in transducer slope around the exposed timing. The existence of these components would suggest that previous explanations of how recent experience may change the sensory encoding of timing, such as changes in sensory latencies or simple implementations of neural population codes, cannot account for the effect of sensory adaptation on timing perception.
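A toy rendering of the two proposed sensory components might model the transducer as a saturating nonlinearity that adaptation shifts laterally and steepens. The snippet below is illustrative only; the tanh shape, the global (rather than local) slope gain, and all parameter values are assumptions rather than the authors' model:

```python
import numpy as np

def timing_transducer(asynchrony_ms, shift_ms=0.0, slope_gain=1.0, scale_ms=80.0):
    """Maps physical audiovisual asynchrony onto a perceptual timing axis.
    Adaptation is caricatured as a lateral shift (shift_ms) plus a slope
    increase (slope_gain); in the proposed account the steepening is local
    to the exposed asynchrony, which this global gain does not capture."""
    return np.tanh(slope_gain * (asynchrony_ms - shift_ms) / scale_ms)

asynchronies = np.linspace(-300, 300, 7)  # negative: audio leads; positive: vision leads
baseline = timing_transducer(asynchronies)
adapted = timing_transducer(asynchronies, shift_ms=50.0, slope_gain=1.5)
print(np.round(adapted - baseline, 3))
```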
Carrara, Verena I; Darakomon, Mue Chae; Thin, Nant War War; Paw, Naw Ta Kaw; Wah, Naw; Wah, Hser Gay; Helen, Naw; Keereecharoen, Suporn; Paw, Naw Ta Mlar; Jittamala, Podjanee; Nosten, François H; Ricci, Daniela; McGready, Rose
2016-01-01
Neurological examination, including visual fixation and tracking of a target, is routinely performed in the Shoklo Malaria Research Unit postnatal care units on the Thailand-Myanmar border. We aimed to evaluate a simple visual newborn test developed in Italy and performed by non-specialized personnel working in neonatal care units. An intensive training of local health staff in Thailand was conducted prior to performing assessments at 24, 48 and 72 hours of life in healthy, low-risk term singletons. The 48- and 72-hour results were then compared with the values obtained in Italy. Parents and staff administering the test reported on acceptability. One hundred and seventy-nine newborns participated in the study between June 2011 and October 2012. The test was rapidly completed if the infant remained in an optimal behavioral stage (7 ± 2 minutes) but the test duration increased significantly (12 ± 4 minutes, p < 0.001) if its behavior changed. Infants were able to fix a target and to discriminate a colored face at 24 hours of life. Horizontal tracking of a target was achieved by 96% (152/159) of the infants at 48 hours. Circular tracking, stripe discrimination and attention to distance significantly improved between each 24-hour test period. The test was easily performed by non-specialized local staff and well accepted by the parents. Healthy term singletons in this limited-resource setting have a visual response similar to that obtained from gestational-age-matched newborns in Italy. It is possible to use these results as a reference set of values for the visual assessment in Karen and Burmese infants in the first 72 hours of life. The utility of the 24-hour test should be pursued.
Retter, Talia L; Rossion, Bruno
2016-07-01
Discrimination of facial identities is a fundamental function of the human brain that is challenging to examine with macroscopic measurements of neural activity, such as those obtained with functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). Although visual adaptation or repetition suppression (RS) stimulation paradigms have been successfully implemented to this end with such recording techniques, objective evidence of an identity-specific discrimination response due to adaptation at the level of the visual representation is lacking. Here, we addressed this issue with fast periodic visual stimulation (FPVS) and EEG recording combined with a symmetry/asymmetry adaptation paradigm. Adaptation to one facial identity is induced through repeated presentation of that identity at a rate of 6 images per second (6 Hz) over 10 sec. Subsequently, this identity is presented in alternation with another facial identity (i.e., its anti-face, both faces being equidistant from an average face), producing an identity repetition rate of 3 Hz over a 20 sec testing sequence. A clear EEG response at 3 Hz is observed over the right occipito-temporal (ROT) cortex, indexing discrimination between the two facial identities in the absence of an explicit behavioral discrimination measure. This face identity discrimination occurs immediately after adaptation and disappears rapidly within 20 sec. Importantly, this 3 Hz response is not observed in a control condition without the single-identity 10 sec adaptation period. These results indicate that visual adaptation to a given facial identity produces an objective (i.e., at a pre-defined stimulation frequency) electrophysiological index of visual discrimination between that identity and another, and provides a unique behavior-free quantification of the effect of visual adaptation. Copyright © 2016 Elsevier Ltd. All rights reserved.
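The FPVS analysis implied above amounts to reading out the EEG amplitude at the 3 Hz identity-alternation frequency relative to neighboring frequency bins. A minimal sketch under that assumption follows; the sampling rate, baseline correction, and simulated signal are illustrative, not the authors' pipeline:

```python
import numpy as np

def response_at_frequency(eeg, fs, target_hz=3.0, n_neighbors=10):
    """Amplitude at the target frequency, baseline-corrected by the mean of
    the surrounding frequency bins (a common FPVS readout; details here are
    simplified for illustration)."""
    amps = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    idx = int(np.argmin(np.abs(freqs - target_hz)))
    neighbors = np.r_[amps[idx - n_neighbors:idx], amps[idx + 1:idx + n_neighbors + 1]]
    return amps[idx] - neighbors.mean()

# 20 s of a simulated occipito-temporal channel with a weak 3 Hz component
fs = 512
t = np.arange(0, 20, 1.0 / fs)
eeg = 0.5 * np.sin(2 * np.pi * 3.0 * t) + np.random.default_rng(2).normal(size=t.size)
print(response_at_frequency(eeg, fs))
```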
Zhao, Mingming; Shi, Yuhua; Wu, Lan; Guo, Licheng; Liu, Wei; Xiong, Chao; Yan, Song; Sun, Wei; Chen, Shilin
2016-01-01
Saffron is one of the most expensive species of Chinese herbs and has been subjected to various types of adulteration because of its high price and limited production. The present study introduces a loop-mediated isothermal amplification (LAMP) technique for the differentiation of saffron from its adulterants. This novel technique is sensitive, efficient and simple. Six specific LAMP primers were designed on the basis of the nucleotide sequence of the internal transcribed spacer 2 (ITS2) nuclear ribosomal DNA of Crocus sativus. All LAMP amplifications were performed successfully, and visual detection occurred within 60 min at isothermal conditions of 65 °C. The results indicated that the LAMP primers are accurate and highly specific for the discrimination of saffron from its adulterants. In particular, 10 fg of genomic DNA was determined to be the limit for template accuracy of LAMP in saffron. Thus, the proposed novel, simple, and sensitive LAMP assay is well suited for immediate on-site discrimination of herbal materials. Based on the study, a practical standard operating procedure (SOP) for utilizing the LAMP protocol for herbal authentication is provided. PMID:27146605
Sustained attention in language production: an individual differences investigation.
Jongman, Suzanne R; Roelofs, Ardi; Meyer, Antje S
2015-01-01
Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that these processes do require some form of attention. Here we investigated the contribution of sustained attention: the ability to maintain alertness over time. In Experiment 1, participants' sustained attention ability was measured using auditory and visual continuous performance tasks. Subsequently, employing a dual-task procedure, participants described pictures using simple noun phrases and performed an arrow-discrimination task while their vocal and manual response times (RTs) and the durations of their gazes to the pictures were measured. Earlier research has demonstrated that gaze duration reflects language planning processes up to and including phonological encoding. The speakers' sustained attention ability correlated with the magnitude of the tail of the vocal RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. This suggests that sustained attention was most important after phonological encoding. Experiment 2 showed that the involvement of sustained attention was significantly stronger in a dual-task situation (picture naming and arrow discrimination) than in simple naming. Thus, individual differences in maintaining attention on the production processes become especially apparent when a simultaneous second task also requires attentional resources.
Campbell, Dana L M; Hauber, Mark E
2009-08-01
Female zebra finches (Taeniopygia guttata) use visual and acoustic traits for accurate recognition of male conspecifics. Evidence from video playbacks confirms that both sensory modalities are important for conspecific and species discrimination, but experimental evidence of the individual roles of these cue types affecting live conspecific recognition is limited. In a spatial paradigm to test discrimination, the authors used live male zebra finch stimuli of 2 color morphs, wild-type (conspecific) and white with a painted black beak (foreign), producing 1 of 2 vocalization types: songs and calls learned from zebra finch parents (conspecific) or cross-fostered songs and calls learned from Bengalese finch (Lonchura striata vars. domestica) foster parents (foreign). The authors found that female zebra finches consistently preferred males with conspecific visual and acoustic cues over males with foreign cues, but did not discriminate when the conspecific and foreign visual and acoustic cues were mismatched. These results indicate the importance of both visual and acoustic features for female zebra finches when discriminating between live conspecific males. Copyright 2009 APA, all rights reserved.
Short-Term Visual Deprivation, Tactile Acuity, and Haptic Solid Shape Discrimination
Crabtree, Charles E.; Norman, J. Farley
2014-01-01
Previous psychophysical studies have reported conflicting results concerning the effects of short-term visual deprivation upon tactile acuity. Some studies have found that 45 to 90 minutes of total light deprivation produce significant improvements in participants' tactile acuity as measured with a grating orientation discrimination task. In contrast, a single 2011 study found no such improvement while attempting to replicate these earlier findings. A primary goal of the current experiment was to resolve this discrepancy in the literature by evaluating the effects of a 90-minute period of total light deprivation upon tactile grating orientation discrimination. We also evaluated the potential effect of short-term deprivation upon haptic 3-D shape discrimination using a set of naturally-shaped solid objects. According to previous research, short-term deprivation enhances performance in a tactile 2-D shape discrimination task – perhaps a similar improvement also occurs for haptic 3-D shape discrimination. The results of the current investigation demonstrate that not only does short-term visual deprivation not enhance tactile acuity, it additionally has no effect upon haptic 3-D shape discrimination. While visual deprivation had no effect in our study, there was a significant effect of experience and learning for the grating orientation task – the participants' tactile acuity improved over time, independent of whether they had, or had not, experienced visual deprivation. PMID:25397327
Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Hervé
2017-01-01
Objectives Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Methods Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/–B/aa or /–B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/–B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same—as opposed to different—responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /–B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz—as opposed to az— responses in the audiovisual than auditory mode. Results Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/–B/aa) and more intact onset responses for nonword repetition (Baz for/–B/az). Thus visual speech altered both discrimination and identification in the CHL—to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children’s discrimination skills (i.e., d’ analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in CHL significantly predicted their identification of the onsets—even after variation due to the other variables was controlled. Conclusions These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. PMID:28167003
ERIC Educational Resources Information Center
Vause, Tricia; Martin, Garry L.; Yu, C.T.; Marion, Carole; Sakko, Gina
2005-01-01
The relationship between language, performance on the Assessment of Basic Learning Abilities (ABLA) test, and stimulus equivalence was examined. Five participants with minimal verbal repertoires were studied; 3 who passed up to ABLA Level 4, a visual quasi-identity discrimination and 2 who passed ABLA Level 6, an auditory-visual nonidentity…
Improved Discrimination of Visual Stimuli Following Repetitive Transcranial Magnetic Stimulation
Waterston, Michael L.; Pack, Christopher C.
2010-01-01
Background Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently rTMS is often assumed to introduce a “virtual lesion” in stimulated brain regions, with correspondingly diminished behavioral performance. Methodology/Principal Findings Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. Conclusions/Significance Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception. PMID:20442776
Visual Discrimination of Color Normals and Color Deficients. Final Report.
ERIC Educational Resources Information Center
Chen, Yih-Wen
Since visual discrimination is one of the factors involved in learning from instructional media, the present study was designed (1) to investigate the effects of hue contrast, illuminant intensity, brightness contrast, and viewing distance on the discrimination accuracy of those who see color normally and those who do not, and (2) to investigate…
Visual body recognition in a prosopagnosic patient.
Moro, V; Pernigo, S; Avesani, R; Bulgarelli, C; Urgesi, C; Candidi, M; Aglioti, S M
2012-01-01
Conspicuous deficits in face recognition characterize prosopagnosia. Information on whether agnosic deficits may extend to non-facial body parts is lacking. Here we report the neuropsychological description of FM, a patient affected by a complete deficit in face recognition in the presence of mild clinical signs of visual object agnosia. His deficit involves both overt and covert recognition of faces (i.e. recognition of familiar faces, but also categorization of faces for gender or age) as well as the visual mental imagery of faces. By means of a series of matching-to-sample tasks we investigated: (i) a possible association between prosopagnosia and disorders in visual body perception; (ii) the effect of the emotional content of stimuli on the visual discrimination of faces, bodies and objects; (iii) the existence of a dissociation between identity recognition and the emotional discrimination of faces and bodies. Our results document, for the first time, the co-occurrence of body agnosia, i.e. the visual inability to discriminate body forms and body actions, and prosopagnosia. Moreover, the results show better performance in the discrimination of emotional face and body expressions with respect to body identity and neutral actions. Since FM's lesions involve bilateral fusiform areas, it is unlikely that the amygdala-temporal projections explain the relative sparing of emotion discrimination performance. Indeed, the emotional content of the stimuli did not improve the discrimination of their identity. The results hint at the existence of two segregated brain networks involved in identity and emotional discrimination that are at least partially shared by face and body processing. Copyright © 2011 Elsevier Ltd. All rights reserved.
Sequential Ideal-Observer Analysis of Visual Discriminations.
ERIC Educational Resources Information Center
Geisler, Wilson S.
1989-01-01
A new analysis, based on the concept of the ideal observer in signal detection theory, is described. It allows: tracing of the flow of discrimination information through the initial physiological stages of visual processing for arbitrary spatio-chromatic stimuli, and measurement of the information content of said visual stimuli. (TJH)
Investigating the role of the superior colliculus in active vision with the visual search paradigm.
Shen, Kelly; Valero, Jerome; Day, Gregory S; Paré, Martin
2011-06-01
We review here both the evidence that the functional visuomotor organization of the optic tectum is conserved in the primate superior colliculus (SC) and the evidence for the linking proposition that SC discriminating activity instantiates saccade target selection. We also present new data in response to questions that arose from recent SC visual search studies. First, we observed that SC discriminating activity predicts saccade initiation when monkeys perform an unconstrained search for a target defined by either a single visual feature or a conjunction of two features. Quantitative differences between the results in these two search tasks suggest, however, that SC discriminating activity does not only reflect saccade programming. This finding concurs with visual search studies conducted in posterior parietal cortex and the idea that, during natural active vision, visual attention is shifted concomitantly with saccade programming. Second, the analysis of a large neuronal sample recorded during feature search revealed that visual neurons in the superficial layers do possess discriminating activity. In addition, the hypotheses that there are distinct types of SC neurons in the deeper layers and that they are differently involved in saccade target selection were not substantiated. Third, we found that the discriminating quality of single-neuron activity substantially surpasses the ability of the monkeys to discriminate the target from distracters, raising the possibility that saccade target selection is a noisy process. We discuss these new findings in light of the visual search literature and the view that the SC is a visual salience map for orienting eye movements. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Is improved contrast sensitivity a natural consequence of visual training?
Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.
2015-01-01
Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736
Norman, J Farley; Phillips, Flip; Holmin, Jessica S; Norman, Hideko F; Beers, Amanda M; Boswell, Alexandria M; Cheeseman, Jacob R; Stethen, Angela G; Ronning, Cecilia
2012-10-01
A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.
Time course of discrimination between emotional facial expressions: the role of visual saliency.
Calvo, Manuel G; Nummenmaa, Lauri
2011-08-01
Saccadic and manual responses were used to investigate the speed of discrimination between happy and non-happy facial expressions in two-alternative-forced-choice tasks. The minimum latencies of correct saccadic responses indicated that the earliest time point at which discrimination occurred ranged between 200 and 280 ms, depending on the type of expression. Corresponding minimum latencies for manual responses ranged between 440 and 500 ms. For both response modalities, visual saliency of the mouth region was a critical factor in facilitating discrimination: The more salient the mouth was in happy face targets in comparison with non-happy distracters, the faster discrimination was. Global image characteristics (e.g., luminance) and semantic factors (i.e., categorical similarity and affective valence of expression) made minor or no contribution to discrimination efficiency. This suggests that visual saliency of distinctive facial features, rather than the significance of expression, is used to make both early and later expression discrimination decisions. Copyright © 2011 Elsevier Ltd. All rights reserved.
Sadato, Norihiro; Okada, Tomohisa; Kubota, Kiyokazu; Yonekura, Yoshiharu
2004-04-08
The occipital cortex of blind subjects is known to be activated during tactile discrimination tasks such as Braille reading. To investigate whether this is due to long-term learning of Braille or to sensory deafferentation, we used fMRI to study tactile discrimination tasks in subjects who had recently lost their sight and never learned Braille. The occipital cortex of the blind subjects without Braille training was activated during the tactile discrimination task, whereas that of control sighted subjects was not. This finding suggests that the activation of the visual cortex of the blind during performance of a tactile discrimination task may be due to sensory deafferentation, wherein a competitive imbalance favors the tactile over the visual modality.
Visual discrimination transfer and modulation by biogenic amines in honeybees.
Vieira, Amanda Rodrigues; Salles, Nayara; Borges, Marco; Mota, Theo
2018-05-10
For more than a century, visual learning and memory have been studied in the honeybee Apis mellifera using operant appetitive conditioning. Although honeybees show impressive visual learning capacities in this well-established protocol, operant training of free-flying animals cannot be combined with invasive protocols for studying the neurobiological basis of visual learning. In view of this, different attempts have been made to develop new classical conditioning protocols for studying visual learning in harnessed honeybees, though learning performance remains considerably poorer than that for free-flying animals. Here, we investigated the ability of honeybees to use visual information acquired during classical conditioning in a new operant context. We performed differential visual conditioning of the proboscis extension reflex (PER) followed by visual orientation tests in a Y-maze. Classical conditioning and Y-maze retention tests were performed using the same pair of perceptually isoluminant chromatic stimuli, to avoid the influence of phototaxis during free-flying orientation. Visual discrimination transfer was clearly observed, with pre-trained honeybees significantly orienting their flights towards the former positive conditioned stimulus (CS+), thus showing that visual memories acquired by honeybees are resistant to context changes between conditioning and the retention test. We combined this visual discrimination approach with selective pharmacological injections to evaluate the effect of dopamine and octopamine in appetitive visual learning. Both octopaminergic and dopaminergic antagonists impaired visual discrimination performance, suggesting that both these biogenic amines modulate appetitive visual learning in honeybees. Our study brings new insight into cognitive and neurobiological mechanisms underlying visual learning in honeybees. © 2018. Published by The Company of Biologists Ltd.
Supervised linear dimensionality reduction with robust margins for object recognition
NASA Astrophysics Data System (ADS)
Dornaika, F.; Assoum, A.
2013-01-01
Linear Dimensionality Reduction (LDR) techniques have become increasingly important in computer vision and pattern recognition since they permit a relatively simple mapping of data onto a lower-dimensional subspace, leading to simple and computationally efficient classification strategies. Recently, many linear discriminant methods have been developed in order to reduce the dimensionality of visual data and to enhance the discrimination between different groups or classes. Many existing linear embedding techniques have relied on local margins in order to achieve good discrimination performance. However, dealing with outliers and within-class diversity has not been addressed by margin-based embedding methods. In this paper, we explored the use of different margin-based linear embedding methods. More precisely, we propose to use the concepts of Median miss and Median hit for building robust margin-based criteria. Based on such margins, we seek the projection directions (linear embedding) such that the sum of local margins is maximized. Our proposed approach has been applied to the problem of appearance-based face recognition. Experiments performed on four public face databases show that the proposed approach can give better generalization performance than the classic Average Neighborhood Margin Maximization (ANMM). Moreover, thanks to the use of robust margins, the proposed method degrades gracefully when label outliers contaminate the training data set. In particular, we show that the concept of Median hit was crucial for obtaining robust performance in the presence of outliers.
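One plausible reading of the median-based margins described above is an ANMM-style construction in which, for each sample, the usual neighborhood averages are replaced by a single 'median hit' (the same-class point at the median within-class distance) and a single 'median miss' (the different-class point at the median between-class distance). The sketch below follows that reading and should not be taken as the authors' exact criterion:

```python
import numpy as np

def robust_margin_embedding(X, y, n_components=2):
    """Projection directions maximizing summed robust margins, built from
    median-hit and median-miss scatter matrices (an assumed, ANMM-like
    formulation, not necessarily the paper's)."""
    n, d = X.shape
    S = np.zeros((d, d))  # scatter toward median misses (to be maximized)
    C = np.zeros((d, d))  # scatter toward median hits (to be minimized)
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        same = np.where((y == y[i]) & (np.arange(n) != i))[0]
        diff = np.where(y != y[i])[0]
        hit = same[np.argsort(dists[same])[len(same) // 2]]
        miss = diff[np.argsort(dists[diff])[len(diff) // 2]]
        S += np.outer(X[miss] - X[i], X[miss] - X[i])
        C += np.outer(X[hit] - X[i], X[hit] - X[i])
    eigvals, eigvecs = np.linalg.eigh(S - C)
    return eigvecs[:, np.argsort(eigvals)[::-1][:n_components]]

# Toy usage: project 10-dimensional samples with 3 class labels onto 2 dimensions
rng = np.random.default_rng(3)
X, y = rng.normal(size=(60, 10)), rng.integers(0, 3, size=60)
X_embedded = X @ robust_margin_embedding(X, y)
```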
Thomson, Eric E.; Zea, Ivan; França, Wendy
2017-01-01
Adult rats equipped with a sensory prosthesis, which transduced infrared (IR) signals into electrical signals delivered to somatosensory cortex (S1), took approximately 4 d to learn a four-choice IR discrimination task. Here, we show that when such IR signals are projected to the primary visual cortex (V1), rats that are pretrained in a visual-discrimination task typically learn the same IR discrimination task on their first day of training. However, without prior training on a visual discrimination task, the learning rates for S1- and V1-implanted animals converged, suggesting there is no intrinsic difference in learning rate between the two areas. We also discovered that animals were able to integrate IR information into the ongoing visual processing stream in V1, performing a visual-IR integration task in which they had to combine IR and visual information. Furthermore, when the IR prosthesis was implanted in S1, rats showed no impairment in their ability to use their whiskers to perform a tactile discrimination task. Instead, in some rats, this ability was actually enhanced. Cumulatively, these findings suggest that cortical sensory neuroprostheses can rapidly augment the representational scope of primary sensory areas, integrating novel sources of information into ongoing processing while incurring minimal loss of native function. PMID:29279860
Jolij, Jacob; Scholte, H Steven; van Gaal, Simon; Hodgson, Timothy L; Lamme, Victor A F
2011-12-01
Humans largely guide their behavior by their visual representation of the world. Recent studies have shown that visual information can trigger behavior within 150 msec, suggesting that visually guided responses to external events, in fact, precede conscious awareness of those events. However, is such a view correct? By using a texture discrimination task, we show that the brain relies on long-latency visual processing in order to guide perceptual decisions. Decreasing stimulus saliency leads to selective changes in long-latency visually evoked potential components reflecting scene segmentation. These latency changes are accompanied by almost equal changes in simple RTs and points of subjective simultaneity. Furthermore, we find a strong correlation between individual RTs and the latencies of scene segmentation related components in the visually evoked potentials, showing that the processes underlying these late brain potentials are critical in triggering a response. However, using the same texture stimuli in an antisaccade task, we found that reflexive, but erroneous, prosaccades, but not antisaccades, can be triggered by earlier visual processes. In other words: The brain can act quickly, but decides late. Differences between our study and earlier findings suggesting that action precedes conscious awareness can be explained by assuming that task demands determine whether a fast and unconscious, or a slower and conscious, representation is used to initiate a visually guided response.
Perceptual learning as improved probabilistic inference in early sensory areas.
Bejjanki, Vikranth R; Beck, Jeffrey M; Lu, Zhong-Lin; Pouget, Alexandre
2011-05-01
Extensive training on simple tasks such as fine orientation discrimination results in large improvements in performance, a form of learning known as perceptual learning. Previous models have argued that perceptual learning is due to either sharpening and amplification of tuning curves in early visual areas or to improved probabilistic inference in later visual areas (at the decision stage). However, early theories are inconsistent with the conclusions of psychophysical experiments manipulating external noise, whereas late theories cannot explain the changes in neural responses that have been reported in cortical areas V1 and V4. Here we show that we can capture both the neurophysiological and behavioral aspects of perceptual learning by altering only the feedforward connectivity in a recurrent network of spiking neurons so as to improve probabilistic inference in early visual areas. The resulting network shows modest changes in tuning curves, in line with neurophysiological reports, along with a marked reduction in the amplitude of pairwise noise correlations.
Meng, Xiangzhi; Lin, Ou; Wang, Fang; Jiang, Yuzheng; Song, Yan
2014-01-01
Background High order cognitive processing and learning, such as reading, interact with lower-level sensory processing and learning. Previous studies have reported that visual perceptual training enlarges visual span and, consequently, improves reading speed in young and old people with amblyopia. Recently, a visual perceptual training study in Chinese-speaking children with dyslexia found that the visual texture discrimination thresholds of these children in visual perceptual training significantly correlated with their performance in Chinese character recognition, suggesting that deficits in visual perceptual processing/learning might partly underpin the difficulty in reading Chinese. Methodology/Principal Findings To further clarify whether visual perceptual training improves the measures of reading performance, eighteen children with dyslexia and eighteen typically developed readers that were age- and IQ-matched completed a series of reading measures before and after visual texture discrimination task (TDT) training. Prior to the TDT training, each group of children was split into two equivalent training and non-training groups in terms of all reading measures, IQ, and TDT. The results revealed that the discrimination threshold SOAs of TDT were significantly higher for the children with dyslexia than for the control children before training. Interestingly, training significantly decreased the discrimination threshold SOAs of TDT for both the typically developed readers and the children with dyslexia. More importantly, the training group with dyslexia exhibited significant enhancement in reading fluency, while the non-training group with dyslexia did not show this improvement. Additional follow-up tests showed that the improvement in reading fluency is a long-lasting effect and could be maintained for up to two months in the training group with dyslexia. Conclusion/Significance These results suggest that basic visual perceptual processing/learning and reading ability in Chinese might at least partially rely on overlapping mechanisms. PMID:25247602
Visual Discrimination and Motor Reproduction of Movement by Individuals with Mental Retardation.
ERIC Educational Resources Information Center
Shinkfield, Alison J.; Sparrow, W. A.; Day, R. H.
1997-01-01
Visual discrimination and motor reproduction tasks involving computer-simulated arm movements were administered to 12 adults with mental retardation and a gender-matched control group. The purpose was to examine whether inadequacies in visual perception account for the poorer motor performance of this population. Results indicate both perceptual…
Perceptual and academic patterns of learning-disabled/gifted students.
Waldron, K A; Saphire, D G
1992-04-01
This research explored ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. 24 learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. Conclusions are that these underlying perceptual and memory deficits may be related to students' academic problems.
Götz, Theresa; Hanke, David; Huonker, Ralph; Weiss, Thomas; Klingner, Carsten; Brodoehl, Stefan; Baumbach, Philipp; Witte, Otto W
2017-06-01
We often close our eyes to improve perception. Recent results have shown a decrease in perception thresholds, accompanied by an increase in somatosensory activity, after eye closure. However, does somatosensory spatial discrimination also benefit from eye closure? We previously showed that spatial discrimination is accompanied by a reduction of somatosensory activity. Using magnetoencephalography, we analyzed the magnitude of primary somatosensory (somatosensory P50m) and primary auditory activity (auditory P50m) during a one-back discrimination task in 21 healthy volunteers. In complete darkness, participants were requested to pay attention to either the somatosensory or the auditory stimulation and were asked to open or close their eyes every 6.5 min. The somatosensory P50m was reduced during a task that required distinguishing changes in stimulus location at the distal phalanges of different fingers. The somatosensory P50m was further reduced, and detection performance was higher, with the eyes open. A similar reduction was found for the auditory P50m during a task that required distinguishing changing tones. The function of eye closure is more than controlling visual input. It might be advantageous for perception because it is an effective way to reduce interference from other modalities, but disadvantageous for spatial discrimination because spatial discrimination requires at least one top-down processing stage. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina
2014-01-01
Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the Intersensory Redundancy Hypothesis (IRH), that face discrimination, which relies on detection of visual featural information, would be impaired in the context of intersensory redundancy provided by audiovisual speech, and enhanced in the absence of intersensory redundancy (unimodal visual and asynchronous audiovisual speech) in early development. Later in development, following improvements in attention, faces should be discriminated in both redundant audiovisual and nonredundant stimulation. Results supported these predictions. Two-month-old infants discriminated a novel face in unimodal visual and asynchronous audiovisual speech but not in synchronous audiovisual speech. By 3 months, face discrimination was evident even during synchronous audiovisual speech. These findings indicate that infant face perception is enhanced and emerges developmentally earlier following unimodal visual than synchronous audiovisual exposure and that intersensory redundancy generated by naturalistic audiovisual speech can interfere with face processing. PMID:23244407
Olfactory discrimination: when vision matters?
Demattè, M Luisa; Sanabria, Daniel; Spence, Charles
2009-02-01
Many previous studies have investigated the effect of visual cues on olfactory perception in humans. The majority of this research has only looked at the modulatory effect of color, which has typically been explained in terms of multisensory perceptual interactions. However, such crossmodal effects may equally well reflect interactions taking place at higher levels of information processing. In fact, it is well-known that semantic knowledge can have a substantial effect on people's olfactory perception. In the present study, we therefore investigated the influence of visual cues, consisting of color patches and/or shapes, on people's olfactory discrimination performance. Participants had to make speeded odor discrimination responses (lemon vs. strawberry) while viewing a red or yellow color patch, an outline drawing of a strawberry or lemon, or a combination of these color and shape cues. Even though participants were instructed to ignore the visual stimuli, our results demonstrate that the accuracy of their odor discrimination responses was influenced by visual distractors. This result shows that both color and shape information are taken into account during speeded olfactory discrimination, even when such information is completely task irrelevant, hinting at the automaticity of such higher level visual-olfactory crossmodal interactions.
Kibby, Michelle Y.; Dyer, Sarah M.; Vadnais, Sarah A.; Jagger, Audreyana C.; Casher, Gabriel A.; Stacy, Maria
2015-01-01
Whether visual processing deficits are common in reading disorders (RD), and related to reading ability in general, has been debated for decades. The type of visual processing affected also is debated, although visual discrimination and short-term memory (STM) may be more commonly related to reading ability. Reading disorders are frequently comorbid with ADHD, and children with ADHD often have subclinical reading problems. Hence, children with ADHD were used as a comparison group in this study. ADHD and RD may be dissociated in terms of visual processing. Whereas RD may be associated with deficits in visual discrimination and STM for order, ADHD is associated with deficits in visual-spatial processing. Thus, we hypothesized that children with RD would perform worse than controls and children with ADHD only on a measure of visual discrimination and a measure of visual STM that requires memory for order. We expected all groups would perform comparably on the measure of visual STM that does not require sequential processing. We found children with RD or ADHD were commensurate to controls on measures of visual discrimination and visual STM that do not require sequential processing. In contrast, both RD groups (RD, RD/ADHD) performed worse than controls on the measure of visual STM that requires memory for order, and children with comorbid RD/ADHD performed worse than those with ADHD. In addition, of the three visual measures, only sequential visual STM predicted reading ability. Hence, our findings suggest there is a deficit in visual sequential STM that is specific to RD and is related to basic reading ability. The source of this deficit is worthy of further research, but it may include both reduced memory for order and poorer verbal mediation. PMID:26579020
Size Constancy in Bat Biosonar? Perceptual Interaction of Object Aperture and Distance
Heinrich, Melina; Wiegrebe, Lutz
2013-01-01
Perception and encoding of object size is an important feature of sensory systems. In the visual system object size is encoded by the visual angle (visual aperture) on the retina, but the aperture depends on the distance of the object. As object distance is not unambiguously encoded in the visual system, higher computational mechanisms are needed. This phenomenon is termed “size constancy”. It is assumed to reflect an automatic re-scaling of visual aperture with perceived object distance. Recently, it was found that in echolocating bats, the ‘sonar aperture’, i.e., the range of angles from which sound is reflected from an object back to the bat, is unambiguously perceived and neurally encoded. Moreover, it is well known that object distance is accurately perceived and explicitly encoded in bat sonar. Here, we addressed size constancy in bat biosonar, recruiting virtual-object techniques. Bats of the species Phyllostomus discolor learned to discriminate two simple virtual objects that only differed in sonar aperture. Upon successful discrimination, test trials were randomly interspersed using virtual objects that differed in both aperture and distance. It was tested whether the bats spontaneously assigned absolute width information to these objects by combining distance and aperture. The results showed that while the isolated perceptual cues encoding object width, aperture, and distance were all perceptually well resolved by the bats, the animals did not assign absolute width information to the test objects. This lack of sonar size constancy may result from the bats relying on different modalities to extract size information at different distances. Alternatively, it is conceivable that familiarity with a behaviorally relevant, conspicuous object is required for sonar size constancy, as it has been argued for visual size constancy. Based on the current data, it appears that size constancy is not necessarily an essential feature of sonar perception in bats. PMID:23630598
Robust Visual Tracking via Online Discriminative and Low-Rank Dictionary Learning.
Zhou, Tao; Liu, Fanghui; Bhaskar, Harish; Yang, Jie
2017-09-12
In this paper, we propose a novel and robust tracking framework based on online discriminative and low-rank dictionary learning. The primary aim of this paper is to obtain compact and low-rank dictionaries that can provide good discriminative representations of both target and background. We accomplish this by exploiting the recovery ability of low-rank matrices: if we assume that the data from the same class are linearly correlated, then the corresponding basis vectors learned from the training set of each class render the dictionary approximately low-rank. The proposed dictionary learning technique incorporates a reconstruction error that improves the reliability of classification. Also, a multiconstraint objective function is designed to enable active learning of a discriminative and robust dictionary. Further, an optimal solution is obtained by iteratively computing the dictionary and coefficients while simultaneously learning the classifier parameters. Finally, a simple yet effective likelihood function is implemented to estimate the optimal state of the target during tracking. Moreover, to make the dictionary adaptive to variations of the target and background during tracking, an online update criterion is employed while learning the new dictionary. Experimental results on a publicly available benchmark dataset demonstrate that the proposed tracking algorithm performs better than other state-of-the-art trackers.
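The abstract describes the method only at a high level. As a rough illustration of the central idea (scoring a candidate by its reconstruction error under class-specific dictionaries), the sketch below uses scikit-learn's DictionaryLearning; the paper's low-rank constraint, multiconstraint objective, classifier coupling, and online update are all omitted, and names such as score_candidate are invented for this example.

```python
# Minimal sketch: classify an image patch as target vs. background by
# reconstruction error under two sparse dictionaries (assumption: this
# omits the paper's low-rank constraint, classifier term, and online update).
import numpy as np
from sklearn.decomposition import DictionaryLearning

def learn_dictionary(patches, n_atoms=16, alpha=1.0, seed=0):
    """Learn a sparse dictionary from row-vector patches (n_samples x n_features)."""
    dl = DictionaryLearning(n_components=n_atoms, alpha=alpha,
                            transform_algorithm='lasso_lars', random_state=seed)
    dl.fit(patches)
    return dl

def reconstruction_error(dl, patch):
    """Sparse-code a patch and return its squared reconstruction error."""
    code = dl.transform(patch.reshape(1, -1))   # sparse coefficients
    recon = code @ dl.components_               # back to feature space
    return float(np.sum((patch - recon.ravel()) ** 2))

def score_candidate(dl_target, dl_background, patch):
    """Positive score -> patch is better explained by the target dictionary."""
    return reconstruction_error(dl_background, patch) - reconstruction_error(dl_target, patch)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target_patches = rng.normal(1.0, 0.2, size=(200, 64))      # toy 8x8 patches
    background_patches = rng.normal(0.0, 0.2, size=(200, 64))
    dl_t = learn_dictionary(target_patches)
    dl_b = learn_dictionary(background_patches)
    candidate = rng.normal(1.0, 0.2, size=64)
    print("target-likeness score:", score_candidate(dl_t, dl_b, candidate))
```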
Visual modifications on the P300 speller BCI paradigm
NASA Astrophysics Data System (ADS)
Salvaris, M.; Sepulveda, F.
2009-08-01
The best known P300 speller brain-computer interface (BCI) paradigm is the Farwell and Donchin paradigm. In this paper, various changes to the visual aspects of this protocol are explored, as well as their effects on classification. Changes to the dimensions of the symbols, the distance between the symbols, and the colours used were tested. The purpose of the present work was not to achieve the highest possible accuracy, but to ascertain whether these simple modifications to the visual protocol would produce classification differences and, if so, what those differences would be. Eight subjects were used, with each subject carrying out a total of six different experiments. In each experiment, the user spelt a total of 39 characters. Two types of classifiers were trained and tested to determine whether the results were classifier dependent: a support vector machine (SVM) with a radial basis function (RBF) kernel and Fisher's linear discriminant (FLD). The single-trial and multiple-trial classification results were recorded and compared. Although no visual protocol was the best for all subjects, the best performances, across both classifiers, were obtained with the white background (WB) visual protocol. The worst performance was obtained with the small symbol size (SSS) visual protocol.
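For readers unfamiliar with the two classifiers compared above, here is a minimal sketch, on made-up feature vectors, of how single-trial epochs might be scored by an RBF-kernel SVM and by a Fisher-style linear discriminant (approximated with scikit-learn's LinearDiscriminantAnalysis); the paper's preprocessing, channel selection, and evaluation protocol are not reproduced.

```python
# Minimal sketch: compare an RBF-kernel SVM with a linear discriminant on
# toy single-trial "ERP" feature vectors (target vs. non-target flashes).
# Assumption: a real pipeline would use filtered, downsampled epochs per channel.
import numpy as np
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_features = 400, 60                 # e.g., 10 channels x 6 time samples
X_nontarget = rng.normal(0.0, 1.0, (n_trials, n_features))
X_target = rng.normal(0.0, 1.0, (n_trials, n_features))
X_target[:, 20:30] += 0.8                      # crude stand-in for a P300 deflection
X = np.vstack([X_nontarget, X_target])
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

svm = SVC(kernel='rbf', C=1.0, gamma='scale')
fld = LinearDiscriminantAnalysis()             # Fisher-style linear discriminant

for name, clf in [("SVM-RBF", svm), ("FLD", fld)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean 5-fold accuracy = {acc:.3f}")
```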
Visual discrimination predicts naming and semantic association accuracy in Alzheimer disease.
Harnish, Stacy M; Neils-Strunjas, Jean; Eliassen, James; Reilly, Jamie; Meinzer, Marcus; Clark, John Greer; Joseph, Jane
2010-12-01
Language impairment is a common symptom of Alzheimer disease (AD), and is thought to be related to semantic processing. This study examines the contribution of another process, namely visual perception, on measures of confrontation naming and semantic association abilities in persons with probable AD. Twenty individuals with probable mild-moderate Alzheimer disease and 20 age-matched controls completed a battery of neuropsychologic measures assessing visual perception, naming, and semantic association ability. Visual discrimination tasks that varied in the degree to which they likely accessed stored structural representations were used to gauge whether structural processing deficits could account for deficits in naming and in semantic association in AD. Visual discrimination abilities of nameable objects in AD strongly predicted performance on both picture naming and semantic association ability, but lacked the same predictive value for controls. Although impaired, performance on visual discrimination tests of abstract shapes and novel faces showed no significant relationship with picture naming and semantic association. These results provide additional evidence to support that structural processing deficits exist in AD, and may contribute to object recognition and naming deficits. Our findings suggest that there is a common deficit in discrimination of pictures using nameable objects, picture naming, and semantic association of pictures in AD. Disturbances in structural processing of pictured items may be associated with lexical-semantic impairment in AD, owing to degraded internal storage of structural knowledge.
The visual discrimination of negative facial expressions by younger and older adults.
Mienaltowski, Andrew; Johnson, Ellen R; Wittman, Rebecca; Wilson, Anne-Taylor; Sturycz, Cassandra; Norman, J Farley
2013-04-05
Previous research has demonstrated that older adults are not as accurate as younger adults at perceiving negative emotions in facial expressions. These studies rely on emotion recognition tasks that involve choosing between many alternatives, creating the possibility that age differences emerge for cognitive rather than perceptual reasons. In the present study, an emotion discrimination task was used to investigate younger and older adults' ability to visually discriminate between negative emotional facial expressions (anger, sadness, fear, and disgust) at low (40%) and high (80%) expressive intensity. Participants completed trials blocked by pairs of emotions. Discrimination ability was quantified from the participants' responses using signal detection measures. In general, the results indicated that older adults had more difficulty discriminating between low intensity expressions of negative emotions than did younger adults. However, younger and older adults did not differ when discriminating between anger and sadness. These findings demonstrate that age differences in visual emotion discrimination emerge when signal detection measures are used but that these differences are not uniform and occur only in specific contexts.
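The abstract does not spell out which signal detection measures were computed; a common choice, assumed here purely for illustration, is d' (sensitivity) and c (criterion) derived from hit and false-alarm rates under the equal-variance Gaussian model, with a log-linear correction for extreme proportions.

```python
# Minimal sketch: sensitivity (d') and criterion (c) from hit / false-alarm counts.
# Assumption: the paper's exact signal detection measures are not specified here;
# this is the standard equal-variance Gaussian model with a log-linear correction.
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d = z_hit - z_fa                      # sensitivity
    c = -0.5 * (z_hit + z_fa)             # response criterion
    return d, c

# Example: one participant, one emotion pair at low expressive intensity.
d, c = dprime(hits=38, misses=12, false_alarms=15, correct_rejections=35)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```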
Object localization, discrimination, and grasping with the optic nerve visual prosthesis.
Duret, Florence; Brelén, Måten E; Lambert, Valerie; Gérard, Benoît; Delbeke, Jean; Veraart, Claude
2006-01-01
This study involved a volunteer who was completely blind from retinitis pigmentosa and had previously been implanted with an optic nerve visual prosthesis. The aim of this two-year study was to train the volunteer to localize a given object in nine different positions, to discriminate the object within a choice of six, and then to grasp it. In a closed-loop protocol including a head-worn video camera, the nerve was stimulated whenever a part of the processed image of the object being scrutinized matched the center of an elicitable phosphene. The accessible visual field included 109 phosphenes in a 14 degrees x 41 degrees area. Results showed that training was required to succeed in the localization and discrimination tasks, but practically no training was required for grasping the object. The volunteer was able to successfully complete all tasks after training. The volunteer systematically performed several left-right and bottom-up scanning movements during the discrimination task. Discrimination strategies included stimulation phases and no-stimulation phases of roughly similar duration. This study provides a step towards the practical use of the optic nerve visual prosthesis in daily life.
Effective 3-D shape discrimination survives retinal blur.
Norman, J Farley; Beers, Amanda M; Holmin, Jessica S; Boswell, Alexandria M
2010-08-01
A single experiment evaluated observers' ability to visually discriminate 3-D object shape, where the 3-D structure was defined by motion, texture, Lambertian shading, and occluding contours. The observers' vision was degraded to varying degrees by blurring the experimental stimuli, using 2.0-, 2.5-, and 3.0-diopter convex lenses. The lenses reduced the observers' acuity from -0.091 LogMAR (in the no-blur conditions) to 0.924 LogMAR (in the conditions with the most blur; 3.0-diopter lenses). This visual degradation, although producing severe reductions in visual acuity, had only small (but significant) effects on the observers' ability to discriminate 3-D shape. The observers' shape discrimination performance was facilitated by the objects' rotation in depth, regardless of the presence or absence of blur. Our results indicate that accurate global shape discrimination survives a considerable amount of retinal blur.
Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh
2004-11-01
Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that the recognition deficit in patients was similar for verbal and visual material. In both schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women similarly benefited from color to reduce spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regard to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.
Both hand position and movement direction modulate visual attention
Festman, Yariv; Adam, Jos J.; Pratt, Jay; Fischer, Martin H.
2013-01-01
The current study explored effects of continuous hand motion on the allocation of visual attention. A concurrent paradigm was used to combine visually concealed continuous hand movements with an attentionally demanding letter discrimination task. The letter probe appeared contingent upon the moving right hand passing through one of six positions. Discrimination responses were then collected via a keyboard press with the static left hand. Both the right hand's position and its movement direction systematically contributed to participants' visual sensitivity. Discrimination performance increased substantially when the right hand was distant from, but moving toward the visual probe location (replicating the far-hand effect, Festman et al., 2013). However, this effect disappeared when the probe appeared close to the static left hand, supporting the view that static and dynamic features of both hands combine in modulating pragmatic maps of attention. PMID:24098288
Effects of Hand Proximity and Movement Direction in Spatial and Temporal Gap Discrimination.
Wiemers, Michael; Fischer, Martin H
2016-01-01
Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward compared to away from the stimulus (direction effect). The "modulated visual pathways" hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate and not near hand proximity. In spatial gap discrimination, a direction effect without hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.
Visual adaptation enhances action sound discrimination.
Barraclough, Nick E; Page, Steve A; Keefe, Bruce D
2017-01-01
Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.
Adaptation in human visual cortex as a mechanism for rapid discrimination of aversive stimuli.
Keil, Andreas; Stolarova, Margarita; Moratti, Stephan; Ray, William J
2007-06-01
The ability to react rapidly and efficiently to adverse stimuli is crucial for survival. Neuroscience and behavioral studies have converged to show that visual information associated with aversive content is processed quickly and accurately and is associated with rapid amplification of the neural responses. In particular, unpleasant visual information has repeatedly been shown to evoke increased cortical activity during early visual processing between 60 and 120 ms following the onset of a stimulus. However, the nature of these early responses is not well understood. Using neutral versus unpleasant colored pictures, the current report examines the time course of short-term changes in the human visual cortex when a subject is repeatedly exposed to simple grating stimuli in a classical conditioning paradigm. We analyzed changes in amplitude and synchrony of large-scale oscillatory activity across 2 days of testing, which included baseline measurements, 2 conditioning sessions, and a final extinction session. We found a gradual increase in amplitude and synchrony of very early cortical oscillations in the 20-35 Hz range across conditioning sessions, specifically for conditioned stimuli predicting aversive visual events. This increase for conditioned stimuli affected stimulus-locked cortical oscillations at a latency of around 60-90 ms and disappeared during extinction. Our findings suggest that reorganization of neural connectivity on the level of the visual cortex acts to optimize early perception of specific features indicative of emotional relevance.
Attentional limits on the perception and memory of visual information.
Palmer, J
1990-05-01
Attentional limits on perception and memory were measured by the decline in performance with increasing numbers of objects in a display. Multiple objects were presented to Ss who discriminated visual attributes. In a representative condition, 4 lines were briefly presented followed by a single line in 1 of the same locations. Ss were required to judge if the single line in the 2nd display was longer or shorter than the line in the corresponding location of the 1st display. The length difference threshold was calculated as a function of the number of objects. The difference thresholds doubled when the number of objects was increased from 1 to 4. This effect was generalized in several ways, and nonattentional explanations were ruled out. Further analyses showed that the attentional processes must share information from at least 4 objects and can be described by a simple model.
Sounds activate visual cortex and improve visual discrimination.
Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A
2014-07-16
A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. Copyright © 2014 the authors 0270-6474/14/349817-08$15.00/0.
Performance, physiological, and oculometer evaluation of VTOL landing displays
NASA Technical Reports Server (NTRS)
North, R. A.; Stackhouse, S. P.; Graffunder, K.
1979-01-01
A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Physiological, visual response, and conventional flight performance measures were recorded for landing approaches performed in the NASA Visual Motion Simulator (VMS). Three displays (two computer graphic and a conventional flight director), three crosswind amplitudes, and two motion base conditions (fixed vs. moving base) were tested in a factorial design. Multivariate discriminant functions were formed from flight performance and/or visual response variables. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed vs. motion base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represent higher workload levels.
Figure-ground discrimination in the avian brain: the nucleus rotundus and its inhibitory complex.
Acerbo, Martin J; Lazareva, Olga F; McInnerney, John; Leiker, Emily; Wasserman, Edward A; Poremba, Amy
2012-10-01
In primates, neurons sensitive to figure-ground status are located in striate cortex (area V1) and extrastriate cortex (area V2). Although much is known about the anatomical structure and connectivity of the avian visual pathway, the functional organization of the avian brain remains largely unexplored. To pinpoint the areas associated with figure-ground segregation in the avian brain, we used a radioactively labeled glucose analog to compare differences in glucose uptake after figure-ground, color, and shape discriminations. We also included a control group that received food on a variable-interval schedule, but was not required to learn a visual discrimination. Although the discrimination task depended on group assignment, the stimulus displays were identical for all three experimental groups, ensuring that all animals were exposed to the same visual input. Our analysis concentrated on the primary thalamic nucleus associated with visual processing, the nucleus rotundus (Rt), and two nuclei providing regulatory feedback, the pretectum (PT) and the nucleus subpretectalis/interstitio-pretecto-subpretectalis complex (SP/IPS). We found that figure-ground discrimination was associated with strong and nonlateralized activity of Rt and SP/IPS, whereas color discrimination produced strong and lateralized activation in Rt alone. Shape discrimination was associated with lower activity of Rt than in the control group. Taken together, our results suggest that figure-ground discrimination is associated with Rt and that SP/IPS may be a main source of inhibitory control. Thus, figure-ground segregation in the avian brain may occur earlier than in the primate brain. Copyright © 2012 Elsevier Ltd. All rights reserved.
A PDP model of the simultaneous perception of multiple objects
NASA Astrophysics Data System (ADS)
Henderson, Cynthia M.; McClelland, James L.
2011-06-01
Illusory conjunctions in normal and simultanagnosic subjects are two instances where the visual features of multiple objects are incorrectly 'bound' together. A connectionist model explores how multiple objects could be perceived correctly in normal subjects given sufficient time, but could give rise to illusory conjunctions with damage or time pressure. In this model, perception of two objects benefits from lateral connections between hidden layers modelling aspects of the ventral and dorsal visual pathways. As with simultanagnosia, simulations of dorsal lesions impair multi-object recognition. In contrast, a large ventral lesion has minimal effect on dorsal functioning, akin to dissociations between simple object manipulation (retained in visual form agnosia and semantic dementia) and object discrimination (impaired in these disorders) [Hodges, J.R., Bozeat, S., Lambon Ralph, M.A., Patterson, K., and Spatt, J. (2000), 'The Role of Conceptual Knowledge: Evidence from Semantic Dementia', Brain, 123, 1913-1925; Milner, A.D., and Goodale, M.A. (2006), The Visual Brain in Action (2nd ed.), New York: Oxford]. It is hoped that the functioning of this model might suggest potential processes underlying dorsal and ventral contributions to the correct perception of multiple objects.
The marmoset monkey as a model for visual neuroscience
Mitchell, Jude F.; Leopold, David A.
2015-01-01
The common marmoset (Callithrix jacchus) has been valuable as a primate model in biomedical research. Interest in this species has grown recently, in part due to the successful demonstration of transgenic marmosets. Here we examine the prospects of the marmoset model for visual neuroscience research, adopting a comparative framework to place the marmoset within a broader evolutionary context. The marmoset’s small brain bears most of the organizational features of other primates, and its smooth surface offers practical advantages over the macaque for areal mapping, laminar electrode penetration, and two-photon and optical imaging. Behaviorally, marmosets are more limited at performing regimented psychophysical tasks, but do readily accept the head restraint that is necessary for accurate eye tracking and neurophysiology, and can perform simple discriminations. Their natural gaze behavior closely resembles that of other primates, with a tendency to focus on objects of social interest including faces. Their immaturity at birth and routine twinning also makes them ideal for the study of postnatal visual development. These experimental factors, together with the theoretical advantages inherent in comparing anatomy, physiology, and behavior across related species, make the marmoset an excellent model for visual neuroscience. PMID:25683292
Discrimination of human and dog faces and inversion responses in domestic dogs (Canis familiaris).
Racca, Anaïs; Amadei, Eleonora; Ligout, Séverine; Guo, Kun; Meints, Kerstin; Mills, Daniel
2010-05-01
Although domestic dogs can respond to many facial cues displayed by other dogs and humans, it remains unclear whether they can differentiate individual dogs or humans based on facial cues alone and, if so, whether they would demonstrate the face inversion effect, a behavioural hallmark commonly used in primates to differentiate face processing from object processing. In this study, we first established the applicability of the visual paired comparison (VPC or preferential looking) procedure for dogs using a simple object discrimination task with 2D pictures. The animals demonstrated a clear looking preference for novel objects when simultaneously presented with prior-exposed familiar objects. We then adopted this VPC procedure to assess their face discrimination and inversion responses. Dogs showed a deviation from random behaviour, indicating discrimination capability when inspecting upright dog faces, human faces and object images; but the pattern of viewing preference was dependent upon image category. They directed longer viewing time at novel (vs. familiar) human faces and objects, but not at dog faces, instead, a longer viewing time at familiar (vs. novel) dog faces was observed. No significant looking preference was detected for inverted images regardless of image category. Our results indicate that domestic dogs can use facial cues alone to differentiate individual dogs and humans and that they exhibit a non-specific inversion response. In addition, the discrimination response by dogs of human and dog faces appears to differ with the type of face involved.
Neuronal pattern separation of motion-relevant input in LIP activity
Berberian, Nareg; MacPherson, Amanda; Giraud, Eloïse; Richardson, Lydia
2016-01-01
In various regions of the brain, neurons discriminate sensory stimuli by decreasing the similarity between ambiguous input patterns. Here, we examine whether this process of pattern separation may drive the rapid discrimination of visual motion stimuli in the lateral intraparietal area (LIP). Starting with a simple mean-rate population model that captures neuronal activity in LIP, we show that overlapping input patterns can be reformatted dynamically to give rise to separated patterns of neuronal activity. The population model predicts that a key ingredient of pattern separation is the presence of heterogeneity in the response of individual units. Furthermore, the model proposes that pattern separation relies on heterogeneity in the temporal dynamics of neural activity and not merely in the mean firing rates of individual neurons over time. We confirm these predictions in recordings of macaque LIP neurons and show that the accuracy of pattern separation is a strong predictor of behavioral performance. Overall, results propose that LIP relies on neuronal pattern separation to facilitate decision-relevant discrimination of sensory stimuli. NEW & NOTEWORTHY A new hypothesis is proposed on the role of the lateral intraparietal (LIP) region of cortex during rapid decision making. This hypothesis suggests that LIP alters the representation of ambiguous inputs to reduce their overlap, thus improving sensory discrimination. A combination of computational modeling, theoretical analysis, and electrophysiological data shows that the pattern separation hypothesis links neural activity to behavior and offers novel predictions on the role of LIP during sensory discrimination. PMID:27881719
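The authors' mean-rate LIP model is not reproduced here; as a generic toy demonstration of pattern separation itself, the sketch below expands two overlapping input patterns through random (heterogeneous) weights, sparsifies the result, and checks that the output patterns overlap less than the inputs. The heterogeneity in temporal dynamics emphasized in the abstract is deliberately omitted.

```python
# Minimal sketch: expansion through heterogeneous random weights plus a
# sparsifying nonlinearity reduces the overlap between two similar patterns.
# Assumption: this is a generic illustration, not the study's LIP model.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, sparsity = 100, 1000, 0.10

# Two highly overlapping input patterns (shared component + small differences).
shared = rng.normal(size=n_in)
a = shared + 0.3 * rng.normal(size=n_in)
b = shared + 0.3 * rng.normal(size=n_in)

# Expansion through random weights, then keep only the top 10% most active
# units for each pattern (a crude sparsifying nonlinearity).
W = rng.normal(size=(n_out, n_in)) / np.sqrt(n_in)

def sparse_code(x):
    h = W @ x
    cut = np.quantile(h, 1.0 - sparsity)      # per-pattern activity threshold
    return np.where(h >= cut, h - cut, 0.0)   # rectified, sparse output

r_in = np.corrcoef(a, b)[0, 1]
r_out = np.corrcoef(sparse_code(a), sparse_code(b))[0, 1]
print(f"input overlap (corr)  = {r_in:.2f}")
print(f"output overlap (corr) = {r_out:.2f}   # lower -> patterns were separated")
```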
ERIC Educational Resources Information Center
Giersch, Anne; Glaser, Bronwyn; Pasca, Catherine; Chabloz, Mélanie; Debbané, Martin; Eliez, Stephan
2014-01-01
Individuals with 22q11.2 deletion syndrome (22q11.2DS) are impaired at exploring visual information in space; however, not much is known about visual form discrimination in the syndrome. Thirty-five individuals with 22q11.2DS and 41 controls completed a form discrimination task with global forms made up of local elements. Affected individuals…
Visual Aversive Learning Compromises Sensory Discrimination.
Shalev, Lee; Paz, Rony; Avidan, Galia
2018-03-14
Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. We showed increased neural activations in the anterior cingulate cortex, insula, and amygdala during aversive learning, compared with neutral learning. Importantly, similar findings were also evident in the early visual cortex during trials with aversive/neutral context, but with identical visual information. The demonstration of this phenomenon in the visual modality is important, as it provides support to the notion that aversive learning can influence perception via a central mechanism, independent of input modality. Given the dominance of the visual system in human perception, our findings hold relevance to daily life, as well as imply a potential etiology for anxiety disorders. Copyright © 2018 the authors 0270-6474/18/382766-14$15.00/0.
Multi-class ERP-based BCI data analysis using a discriminant space self-organizing map.
Onishi, Akinari; Natsume, Kiyohisa
2014-01-01
Emotional and non-emotional image stimuli have recently been applied to event-related potential (ERP) based brain-computer interfaces (BCI). Although single-trial classification performance exceeds 80%, discrimination between the ERPs elicited by these stimuli has not been examined. In this research we tried to clarify the discriminability of four-class ERP-based BCI target data elicited by desk, seal, and spider images and by letter intensifications. A conventional self-organizing map (SOM) and a newly proposed discriminant space SOM (ds-SOM) were applied, and the discriminabilities were visualized. We also classified all pairs of those ERPs by stepwise linear discriminant analysis (SWLDA) to verify the visualization of discriminabilities. As a result, the ds-SOM showed understandable visualization of the data with a shorter computational time than the traditional SOM. We also confirmed a clear boundary between the letter cluster and the other clusters. The result was consistent with the classification performances obtained by SWLDA. The method might be helpful not only for developing new BCI paradigms, but also for big data analysis.
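Classical SWLDA adds and removes features according to regression p-values; the sketch below is only a rough stand-in that performs greedy forward feature selection for a linear discriminant using cross-validated accuracy on made-up data, and it does not implement the proposed ds-SOM.

```python
# Minimal sketch: greedy forward feature selection for a linear discriminant,
# a rough stand-in for stepwise LDA (SWLDA). Assumption: classical SWLDA adds/
# removes features by regression p-values; here cross-validated accuracy is used.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def forward_stepwise_lda(X, y, max_features=10, cv=5):
    selected, remaining = [], list(range(X.shape[1]))
    best_acc = 0.0
    while remaining and len(selected) < max_features:
        scores = []
        for f in remaining:
            cols = selected + [f]
            acc = cross_val_score(LinearDiscriminantAnalysis(),
                                  X[:, cols], y, cv=cv).mean()
            scores.append((acc, f))
        acc, f = max(scores)
        if acc <= best_acc:            # stop when no candidate improves accuracy
            break
        best_acc = acc
        selected.append(f)
        remaining.remove(f)
    return selected, best_acc

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 40))              # toy ERP feature vectors
    y = rng.integers(0, 2, size=300)
    X[y == 1, :5] += 0.7                        # a few informative features
    feats, acc = forward_stepwise_lda(X, y)
    print("selected features:", feats, "cv accuracy:", round(acc, 3))
```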
Kyllingsbæk, Søren; Sy, Jocelyn L; Giesbrecht, Barry
2011-05-01
The allocation of visual processing capacity is a key topic in studies and theories of visual attention. The load theory of Lavie (1995) proposes that allocation happens in two steps: processing resources are first allocated to task-relevant stimuli, and remaining capacity then 'spills over' to task-irrelevant distractors. In contrast, the Theory of Visual Attention (TVA) proposed by Bundesen (1990) assumes that allocation happens in a single step in which processing capacity is allocated to all stimuli, both task-relevant and task-irrelevant, in proportion to their relative attentional weights. Here we present data from two partial report experiments in which we varied the number and discriminability of the task-irrelevant stimuli (Experiment 1) and perceptual load (Experiment 2). TVA fitted the data of both experiments well, thus favoring the simpler explanation with a single step of capacity allocation. We also show that the effects of varying perceptual load can only be explained by a combined effect of the allocation of processing capacity and limits in visual working memory. Finally, we link the results to processing capacity understood at the neural level, based on the neural theory of visual attention of Bundesen et al. (2005). Copyright © 2010 Elsevier Ltd. All rights reserved.
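For readers unfamiliar with TVA's single-step allocation, the sketch below evaluates Bundesen's (1990) rate equation, v(x, i) = η(x, i)·β_i·w_x / Σ_z w_z with w_x = Σ_j η(x, j)·π_j, for a toy display; all η, β, and π values are illustrative assumptions, not parameter estimates from these experiments.

```python
# Minimal sketch of TVA's rate equation (Bundesen, 1990):
#   v(x, i) = eta(x, i) * beta_i * w_x / sum_z w_z,  with  w_x = sum_j eta(x, j) * pi_j
# All numbers below are illustrative assumptions, not estimates from the paper.
import numpy as np

eta = np.array([          # sensory evidence eta(x, i): rows = objects, cols = categories
    [0.9, 0.1],           # target item: strong evidence for category 0
    [0.2, 0.8],           # distractor 1
    [0.3, 0.7],           # distractor 2
])
beta = np.array([1.0, 0.2])   # perceptual decision bias for each category
pi = np.array([1.0, 0.1])     # pertinence (task relevance) of each category

w = eta @ pi                               # attentional weight of each object
v = eta * beta * (w / w.sum())[:, None]    # processing rate v(x, i)

capacity_share = v.sum(axis=1) / v.sum()
for obj, share in enumerate(capacity_share):
    print(f"object {obj}: share of processing capacity = {share:.2f}")
```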
Koen, Joshua D; Borders, Alyssa A; Petzold, Michael T; Yonelinas, Andrew P
2017-02-01
The medial temporal lobe (MTL) plays a critical role in episodic long-term memory, but whether the MTL is necessary for visual short-term memory is controversial. Some studies have indicated that MTL damage disrupts visual short-term memory performance whereas other studies have failed to find such evidence. To account for these mixed results, it has been proposed that the hippocampus is critical in supporting short-term memory for high resolution complex bindings, while the cortex is sufficient to support simple, low resolution bindings. This hypothesis was tested in the current study by assessing visual short-term memory in patients with damage to the MTL and controls for high resolution and low resolution object-location and object-color associations. In the location tests, participants encoded sets of two or four objects in different locations on the screen. After each set, participants performed a two-alternative forced-choice task in which they were required to discriminate the object in the target location from the object in a high or low resolution lure location (i.e., the object locations were very close or far away from the target location, respectively). Similarly, in the color tests, participants were presented with sets of two or four objects in a different color and, after each set, were required to discriminate the object in the target color from the object in a high or low resolution lure color (i.e., the lure color was very similar or very different, respectively, to the studied color). The patients were significantly impaired in visual short-term memory, but importantly, they were more impaired for high resolution object-location and object-color bindings. The results are consistent with the proposal that the hippocampus plays a critical role in forming and maintaining complex, high resolution bindings. © 2016 Wiley Periodicals, Inc.
Lawton, Teri
2016-01-01
There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue three interventions were compared in 58 dyslexics in second grade (7 years on average), two targeting the temporal dynamics (timing) of either the auditory or visual pathways with a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency, both speed and comprehension, phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used to not only detect dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning. PMID:27551263
Do Visually Impaired People Develop Superior Smell Ability?
Majchrzak, Dorota; Eberhard, Julia; Kalaus, Barbara; Wagner, Karl-Heinz
2017-10-01
It is well known that visually impaired people perform better in orientation by sound than sighted individuals, but it is not clear whether this enhanced awareness also extends to other senses. Therefore, the aim of this study was to observe whether visually impaired subjects develop superior abilities in olfactory perception to compensate for their lack of vision. We investigated the odor perception of visually impaired individuals aged 7 to 89 (n = 99; 52 women, 47 men) and compared them with subjects of a control group aged 8 to 82 years (n = 100; 45 women, 55 men) without any visual impairment. The participants were evaluated by Sniffin' Sticks odor identification and discrimination test. Identification ability was assessed for 16 common odors presented in felt-tip pens. In the odor discrimination task, subjects had to determine which of three pens in 16 triplets had a different odor. The median number of correctly identified odorant pens in both groups was the same, 13 of the offered 16. In the discrimination test, there was also no significant difference observed. Gender did not influence results. Age-related changes were observed in both groups with olfactory perception decreasing after the age of 51. We could not confirm that visually impaired people were better in smell identification and discrimination ability than sighted individuals.
Rocker: Open source, easy-to-use tool for AUC and enrichment calculations and ROC visualization.
Lätti, Sakari; Niinivehmas, Sanna; Pentikäinen, Olli T
2016-01-01
The receiver operating characteristic (ROC) curve, together with the area under the curve (AUC), is a useful tool for evaluating the performance of methods applied to biomedical and chemoinformatics data. For example, in virtual drug screening ROC curves are very often used to visualize how efficiently an application separates active ligands from inactive molecules. Unfortunately, most of the available tools for ROC analysis are implemented in commercial software packages, or as plugins in statistical software, which are not always the easiest to use. Here, we present Rocker, a simple ROC curve visualization tool that can be used to generate publication-quality images. Rocker also includes automatic calculation of the AUC for the ROC curve and of the Boltzmann-enhanced discrimination of ROC (BEDROC). Furthermore, in virtual screening campaigns it is often important to understand the early enrichment of active ligand identification; for this, Rocker offers an automated calculation routine. To enable further development of Rocker, it is freely available (MIT-GPL license) for use and modification from our web-site (http://www.jyu.fi/rocker).
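The quantities Rocker reports are straightforward to compute from a ranked score list. The following is a minimal, generic Python sketch (not Rocker's own code) of ROC points, the trapezoidal AUC, and an early enrichment factor; BEDROC is omitted for brevity, and the toy scores and labels are assumptions for illustration only.

```python
# Minimal sketch (not Rocker's implementation): ROC points, trapezoidal AUC,
# and an early enrichment factor from docking-style scores and activity labels.
import numpy as np

def roc_auc(scores, labels):
    """Return ROC curve points (fpr, tpr) and trapezoidal AUC.
    scores: higher = predicted more active; labels: 1 = active, 0 = inactive."""
    order = np.argsort(-np.asarray(scores))          # rank best-scoring first
    y = np.asarray(labels)[order]
    tpr = np.cumsum(y) / y.sum()                     # true-positive rate at each rank
    fpr = np.cumsum(1 - y) / (1 - y).sum()           # false-positive rate at each rank
    tpr = np.concatenate(([0.0], tpr))
    fpr = np.concatenate(([0.0], fpr))
    return fpr, tpr, np.trapz(tpr, fpr)

def enrichment_factor(scores, labels, fraction=0.01):
    """Enrichment factor at a given fraction of the ranked list (e.g., top 1%)."""
    order = np.argsort(-np.asarray(scores))
    y = np.asarray(labels)[order]
    n_top = max(1, int(round(fraction * len(y))))
    return y[:n_top].mean() / y.mean()

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)
scores = labels * 1.0 + rng.normal(0, 1.5, 500)      # toy scores loosely tied to labels
fpr, tpr, auc = roc_auc(scores, labels)
print(f"AUC = {auc:.3f}, EF1% = {enrichment_factor(scores, labels):.2f}")
```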
Colour processing in complex environments: insights from the visual system of bees
Dyer, Adrian G.; Paulk, Angelique C.; Reser, David H.
2011-01-01
Colour vision enables animals to detect and discriminate differences in chromatic cues independent of brightness. How the bee visual system manages this task is of interest for understanding information processing in miniaturized systems, as well as the relationship between bee pollinators and flowering plants. Bees can quickly discriminate dissimilar colours, but can also slowly learn to discriminate very similar colours, raising the question as to how the visual system can support this, or whether it is simply a learning and memory operation. We discuss the detailed neuroanatomical layout of the brain, identify probable brain areas for colour processing, and suggest that there may be multiple systems in the bee brain that mediate either coarse or fine colour discrimination ability in a manner dependent upon individual experience. These multiple colour pathways have been identified along both functional and anatomical lines in the bee brain, providing us with some insights into how the brain may operate to support complex colour discrimination behaviours. PMID:21147796
Some distinguishing characteristics of contour and texture phenomena in images
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.
1992-01-01
The development of generalized contour/texture discrimination techniques is a central element necessary for machine vision recognition and interpretation of arbitrary images. Here, the visual perception of texture, selected studies of texture analysis in machine vision, and diverse small samples of contour and texture are all used to provide insights into the fundamental characteristics of contour and texture. From these, an experimental discrimination scheme is developed and tested on a battery of natural images. Studies of the visual perception of texture define fine texture as a subclass that is interpreted as shading and is distinct from coarse figural-similarity textures; they also place the smallest scale for contour/texture discrimination at eight to nine visual acuity units. Three contour/texture discrimination parameters were found to be moderately successful for this scale discrimination: (1) lightness change in a blurred version of the image, (2) change in lightness change in the original image, and (3) percent change in edge counts relative to local maximum.
Berditchevskaia, A.; Cazé, R. D.; Schultz, S. R.
2016-01-01
In recent years, simple GO/NOGO behavioural tasks have become popular due to the relative ease with which they can be combined with technologies such as in vivo multiphoton imaging. To date, it has been assumed that behavioural performance can be captured by the average performance across a session; however, this neglects the effect of motivation on behaviour within individual sessions. We investigated the effect of motivation on mice performing a GO/NOGO visual discrimination task. Performance within a session tended to follow a stereotypical trajectory on a Receiver Operating Characteristic (ROC) chart, beginning with an over-motivated state with many false positives, and transitioning through a more or less optimal regime to end with a low hit rate after satiation. Our observations are reproduced by a new model, the Motivated Actor-Critic, introduced here. Our results suggest that standard measures of discriminability, obtained by averaging across a session, may significantly underestimate behavioural performance. PMID:27272438
Enhanced attentional gain as a mechanism for generalized perceptual learning in human visual cortex.
Byers, Anna; Serences, John T
2014-09-01
Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars. Copyright © 2014 the American Physiological Society.
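A common way to reconstruct orientation-selective response profiles from voxel activation patterns is an inverted encoding model; the sketch below is a generic illustration of that idea under assumed basis functions and simulated data, not necessarily the authors' exact analysis.

```python
# Generic inverted-encoding-model sketch (illustrative only): estimate voxel
# weights for hypothetical orientation channels from training data, then invert
# them to reconstruct the channel response profile for a test pattern.
import numpy as np

n_chan, n_vox = 9, 100                       # 9 orientation channels, 100 voxels (toy sizes)
oris = np.linspace(0, 180, n_chan, endpoint=False)

def channel_responses(stim_ori):
    """Idealized tuning of each channel to a stimulus orientation (assumed basis)."""
    d = np.deg2rad(stim_ori - oris)
    return np.abs(np.cos(d)) ** 5            # exponent chosen arbitrarily for illustration

rng = np.random.default_rng(1)
W_true = rng.normal(size=(n_vox, n_chan))    # unknown voxel-by-channel weights

# training set: voxel patterns for many trials with known orientations
train_oris = rng.uniform(0, 180, 200)
C_train = np.stack([channel_responses(o) for o in train_oris], axis=1)  # chan x trial
B_train = W_true @ C_train + rng.normal(0, 0.5, (n_vox, 200))           # vox x trial

# step 1: estimate weights by least squares, B = W C
W_hat = B_train @ C_train.T @ np.linalg.inv(C_train @ C_train.T)

# step 2: invert on a test pattern to recover its channel response profile
b_test = W_true @ channel_responses(45.0) + rng.normal(0, 0.5, n_vox)
profile = np.linalg.pinv(W_hat) @ b_test
print("reconstructed profile peaks near channel:", oris[np.argmax(profile)])
```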
The Learning of Difficult Visual Discriminations by the Moderately and Severely Retarded
ERIC Educational Resources Information Center
Gold, Marc W.; Barclay, Craig R.
2015-01-01
A procedure to effectively and efficiently train moderately and severely retarded individuals to make fine visual discriminations is described. Results suggest that expectancies for such individuals are in need of examination. Implications for sheltered workshops, work activity centers and classrooms are discussed. [This article appeared…
1989-08-14
Networks That Learn to Discriminate Similar Kanji Characters. Yoshihiro Mori, Kazuhiko Yokosawa, ATR Auditory and Visual Perception Research Laboratories. Further Explorations in the Learning of Visually-Guided Reaching: Making Murphy…
ERIC Educational Resources Information Center
Bahrick, Lorraine E.; Krogh-Jespersen, Sheila; Argumosa, Melissa A.; Lopez, Hassel
2014-01-01
Although infants and children show impressive face-processing skills, little research has focused on the conditions that facilitate versus impair face perception. According to the intersensory redundancy hypothesis (IRH), face discrimination, which relies on detection of visual featural information, should be impaired in the context of…
Object detection in natural backgrounds predicted by discrimination performance and models
NASA Technical Reports Server (NTRS)
Rohaly, A. M.; Ahumada, A. J. Jr; Watson, A. B.
1997-01-01
Many models of visual performance predict image discriminability, the visibility of the difference between a pair of images. We compared the ability of three image discrimination models to predict the detectability of objects embedded in natural backgrounds. The three models were: a multiple channel Cortex transform model with within-channel masking; a single channel contrast sensitivity filter model; and a digital image difference metric. Each model used a Minkowski distance metric (generalized vector magnitude) to summate absolute differences between the background and object plus background images. For each model, this summation was implemented with three different exponents: 2, 4 and infinity. In addition, each combination of model and summation exponent was implemented with and without a simple contrast gain factor. The model outputs were compared to measures of object detectability obtained from 19 observers. Among the models without the contrast gain factor, the multiple channel model with a summation exponent of 4 performed best, predicting the pattern of observer d's with an RMS error of 2.3 dB. The contrast gain factor improved the predictions of all three models for all three exponents. With the factor, the best exponent was 4 for all three models, and their prediction errors were near 1 dB. These results demonstrate that image discrimination models can predict the relative detectability of objects in natural scenes.
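The Minkowski summation (generalized vector magnitude) described above is easy to state concretely. The sketch below illustrates only this pooling step on a toy image pair, with exponents 2, 4, and infinity (a max operation) and an optional simple contrast-gain scaling; it is not the full multiple-channel Cortex transform model.

```python
# Sketch of Minkowski pooling of per-pixel differences between a background image
# and an object+background image into a single discriminability score.
import numpy as np

def minkowski_pool(background, object_plus_background, beta=4, contrast_gain=None):
    diff = np.abs(object_plus_background - background).ravel()
    if contrast_gain is not None:
        diff = diff / contrast_gain          # optional simplified contrast gain factor
    if np.isinf(beta):
        return diff.max()                    # beta -> infinity: peak (max) detector
    return (diff ** beta).sum() ** (1.0 / beta)

rng = np.random.default_rng(0)
bg = rng.uniform(0, 1, (64, 64))
obj = bg.copy()
obj[28:36, 28:36] += 0.1                     # small embedded "object"
for beta in (2, 4, np.inf):
    print(beta, round(minkowski_pool(bg, obj, beta), 4))
```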
Real-time detection and discrimination of visual perception using electrocorticographic signals
NASA Astrophysics Data System (ADS)
Kapeller, C.; Ogawa, H.; Schalk, G.; Kunii, N.; Coon, W. G.; Scharinger, J.; Guger, C.; Kamada, K.
2018-06-01
Objective. Several neuroimaging studies have demonstrated that the ventral temporal cortex contains specialized regions that process visual stimuli. This study investigated the spatial and temporal dynamics of electrocorticographic (ECoG) responses to different types and colors of visual stimulation that were presented to four human participants, and demonstrated a real-time decoder that detects and discriminates responses to untrained natural images. Approach. ECoG signals from the participants were recorded while they were shown colored and greyscale versions of seven types of visual stimuli (images of faces, objects, bodies, line drawings, digits, and kanji and hiragana characters), resulting in 14 classes for discrimination (experiment I). Additionally, a real-time system asynchronously classified ECoG responses to faces, kanji and black screens presented via a monitor (experiment II), or to natural scenes (i.e. the face of an experimenter, natural images of faces and kanji, and a mirror) (experiment III). Outcome measures in all experiments included the discrimination performance across types based on broadband γ activity. Main results. Experiment I demonstrated an offline classification accuracy of 72.9% when discriminating among the seven types (without color separation). Further discrimination of grey versus colored images reached an accuracy of 67.1%. Discriminating all colors and types (14 classes) yielded an accuracy of 52.1%. In experiments II and III, the real-time decoder correctly detected 73.7% of responses to face, kanji and black computer stimuli and 74.8% of responses to presented natural scenes. Significance. Seven different types and their color information (either grey or color) could be detected and discriminated using broadband γ activity. Discrimination performance was maximized when spatial and temporal information were combined. The discrimination of stimulus color information provided the first ECoG-based evidence for color-related population-level cortical broadband γ responses in humans. Stimulus categories can be detected from their ECoG responses in real time within 500 ms of stimulus onset.
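As an illustration of the kind of decoding pipeline described (the sampling rate, band edges, and use of a linear classifier are assumptions for illustration, not the authors' settings), the sketch below extracts broadband gamma power per channel and classifies two stimulus types.

```python
# Illustrative sketch: broadband gamma (70-170 Hz) power per ECoG channel,
# followed by linear discriminant classification of two stimulus classes.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 1000.0                                   # assumed sampling rate (Hz)
b, a = butter(4, [70 / (fs / 2), 170 / (fs / 2)], btype="bandpass")

def gamma_power(trials):
    """trials: (n_trials, n_channels, n_samples) -> mean gamma power per channel."""
    filtered = filtfilt(b, a, trials, axis=-1)
    envelope = np.abs(hilbert(filtered, axis=-1))
    return envelope.mean(axis=-1)             # (n_trials, n_channels) feature matrix

rng = np.random.default_rng(0)
X_raw = rng.normal(size=(200, 32, 500))       # toy ECoG: 200 trials, 32 channels, 0.5 s
y = rng.integers(0, 2, 200)                   # two stimulus classes (e.g., face vs. kanji)
X = gamma_power(X_raw)
clf = LinearDiscriminantAnalysis().fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```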
Kavcic, Voyko; Triplett, Regina L.; Das, Anasuya; Martin, Tim; Huxlin, Krystel R.
2015-01-01
Partial cortical blindness is a visual deficit caused by unilateral damage to the primary visual cortex, a condition previously considered beyond hopes of rehabilitation. However, recent data demonstrate that patients may recover both simple and global motion discrimination following intensive training in their blind field. The present experiments characterized motion-induced neural activity of cortically blind (CB) subjects prior to the onset of visual rehabilitation. This was done to provide information about visual processing capabilities available to mediate training-induced visual improvements. Visual Evoked Potentials (VEPs) were recorded from two experimental groups consisting of 9 CB subjects and 9 age-matched, visually-intact controls. VEPs were collected following lateralized stimulus presentation to each of the 4 visual field quadrants. VEP waveforms were examined for both stimulus-onset (SO) and motion-onset (MO) related components in postero-lateral electrodes. While stimulus presentation to intact regions of the visual field elicited normal SO-P1, SO-N1, SO-P2 and MO-N2 amplitudes and latencies in contralateral brain regions of CB subjects, these components were not observed contralateral to stimulus presentation in blind quadrants of the visual field. In damaged brain hemispheres, SO-VEPs were only recorded following stimulus presentation to intact visual field quadrants, via inter-hemispheric transfer. MO-VEPs were only recorded from damaged left brain hemispheres, possibly reflecting a native left/right asymmetry in inter-hemispheric connections. The present findings suggest that damaged brain hemispheres contain areas capable of responding to visual stimulation. However, in the absence of training or rehabilitation, these areas only generate detectable VEPs in response to stimulation of the intact hemifield of vision. PMID:25575450
Lawton, Teri; Shelley-Tremblay, John
2017-01-01
The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background, figure-ground discrimination, improves vision and cognitive functioning in dyslexics, as well as typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination (PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills following neurotraining exercises, more than following computer-based guided reading (Raz-Kids (RK)). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training the entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12-weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency and working memory for both dyslexic and normal students. Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement discrimination training in improving high-level cognitive functions such as attention, reading acquisition and working memory. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways in the dorsal stream is a fundamental cause of dyslexia and being at-risk for reading problems in normal students, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological or language deficits, requiring a paradigm shift from phonologically-based treatment of dyslexia to a visually-based treatment. This study shows that visual movement-discrimination can be used not only to diagnose dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning. PMID:28555097
Strength and coherence of binocular rivalry depends on shared stimulus complexity.
Alais, David; Melcher, David
2007-01-01
Presenting incompatible images to the eyes results in alternations of conscious perception, a phenomenon known as binocular rivalry. We examined rivalry using either simple stimuli (oriented gratings) or coherent visual objects (faces, houses, etc.). Two rivalry characteristics were measured: depth of rivalry suppression and coherence of alternations. Rivalry between coherent visual objects exhibits deep suppression and coherent rivalry, whereas rivalry between gratings exhibits shallow suppression and piecemeal rivalry. Interestingly, rivalry between a simple and a complex stimulus displays the same characteristics (shallow and piecemeal) as rivalry between two simple stimuli. Thus, complex stimuli fail to rival globally unless the fellow stimulus is also global. We also conducted a face adaptation experiment. Adaptation to rivaling faces improved subsequent face discrimination (as expected), but adaptation to a rivaling face/grating pair did not. To explain this, we suggest rivalry must be an early and local process (at least initially), instigated by the failure of binocular fusion, which can then become globally organized by feedback from higher-level areas when both rivalry stimuli are global, so that rivalry tends to oscillate coherently. These globally assembled images then flow through object processing areas, with the dominant image gaining in relative strength in a form of 'biased competition', thereby accounting for the deeper suppression of global images. In contrast, when only one eye receives a global image, local piecemeal suppression from the fellow eye overrides the organizing effects of global feedback to prevent coherent image formation. This indicates the primacy of local over global processes in rivalry.
Similarity, not complexity, determines visual working memory performance.
Jackson, Margaret C; Linden, David E J; Roberts, Mark V; Kriegeskorte, Nikolaus; Haenschel, Corinna
2015-11-01
A number of studies have shown that visual working memory (WM) is poorer for complex versus simple items, traditionally accounted for by higher information load placing greater demands on encoding and storage capacity limits. Other research suggests that it may not be complexity that determines WM performance per se, but rather increased perceptual similarity between complex items as a result of a large amount of overlapping information. Increased similarity is thought to lead to greater comparison errors between items encoded into WM and the test item(s) presented at retrieval. However, previous studies have used different object categories to manipulate complexity and similarity, raising questions as to whether these effects are simply due to cross-category differences. Here, for the first time, the relationship between complexity and similarity in WM is investigated using the same stimulus category (abstract polygons). The authors used a delayed discrimination task to measure WM for 1-4 complex versus simple simultaneously presented items and manipulated the similarity between the single test item at retrieval and the sample items at encoding. WM was poorer for complex than simple items only when the test item was similar to 1 of the encoding items, and not when it was dissimilar or identical. The results provide clear support for reinterpretation of the complexity effect in WM as a similarity effect and highlight the importance of the retrieval stage in governing WM performance. The authors discuss how these findings can be reconciled with current models of WM capacity limits. (c) 2015 APA, all rights reserved.
Trick, G L; Burde, R M; Gordon, M O; Santiago, J V; Kilo, C
1988-05-01
In an attempt to elucidate more fully the pathophysiologic basis of early visual dysfunction in patients with diabetes mellitus, color vision (hue discrimination) and spatial resolution (contrast sensitivity) were tested in diabetic patients with little or no retinopathy (n = 57) and age-matched visual normals (n = 35). Some evidence of visual dysfunction was observed in 37.8% of the diabetics with no retinopathy and 60.0% of the diabetics with background retinopathy. Although significant hue discrimination and contrast sensitivity deficits were observed in both groups of diabetic patients, contrast sensitivity was abnormal more frequently than hue discrimination. However, only 5.4% of the diabetics with no retinopathy and 10.0% of the diabetics with background retinopathy exhibited both abnormal hue discrimination and abnormal contrast sensitivity. Contrary to previous reports, blue-yellow (B-Y) and red-green (R-G) hue discrimination deficits were observed with approximately equal frequency. In the diabetic group, contrast sensitivity was reduced at all spatial frequencies tested, but for individual diabetic patients, significant deficits were only evident for the mid-range spatial frequencies. Among diabetic patients, the hue discrimination deficits, but not the contrast sensitivity abnormalities, were correlated with the patients' hemoglobin A1 level. A negative correlation between contrast sensitivity at 6.0 cpd and the duration of diabetes also was observed.
Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai
2012-10-01
The category of images containing visual objects can be successfully recognized from single-trial electroencephalography (EEG) recorded while subjects view the images. Previous studies have shown that task-related information contained in event-related potential (ERP) components could discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. Here, the EEG waveforms acquired while subjects were viewing four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and then Fisher linear discriminant analysis (Fisher-LDA) was used to classify EEG features extracted from ERP components. Firstly, we compared the classification results using features from single ERP components, and identified that the N1 component achieved the highest classification accuracies. Secondly, we discriminated four categories of objects using combined features from multiple ERP components, and showed that combination of ERP components improved four-category classification accuracies by utilizing the complementarity of discriminative information in ERP components. These findings confirmed that four categories of object images could be discriminated with single-trial EEG and can guide the selection of effective EEG features for classifying visual objects.
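A minimal sketch of the analysis style described, with assumed component latency windows and simulated epochs: the mean amplitude per channel is taken in each ERP window, single-component features are classified with Fisher LDA, and the component features are then concatenated to test a combined classifier.

```python
# Sketch: per-component ERP features -> Fisher LDA, single vs. combined components.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250                                            # assumed sampling rate (Hz)
windows = {"P1": (0.08, 0.13), "N1": (0.14, 0.20),  # assumed latency windows (s)
           "P2a": (0.21, 0.26), "P2b": (0.27, 0.32)}

def component_features(epochs, window):
    """epochs: (n_trials, n_channels, n_samples); mean amplitude in the window."""
    start, stop = (int(t * fs) for t in window)
    return epochs[:, :, start:stop].mean(axis=-1)   # (n_trials, n_channels)

rng = np.random.default_rng(0)
epochs = rng.normal(size=(400, 16, 100))            # toy data: 400 trials, 16 channels
y = rng.integers(0, 4, 400)                         # 4 categories (faces, buildings, cats, cars)

for name, win in windows.items():
    acc = cross_val_score(LinearDiscriminantAnalysis(),
                          component_features(epochs, win), y, cv=5).mean()
    print(f"{name}: {acc:.2f}")

combined = np.hstack([component_features(epochs, w) for w in windows.values()])
print("combined:", cross_val_score(LinearDiscriminantAnalysis(), combined, y, cv=5).mean())
```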
Color discrimination performance in patients with Alzheimer's disease.
Salamone, Giovanna; Di Lorenzo, Concetta; Mosti, Serena; Lupo, Federica; Cravello, Luca; Palmer, Katie; Musicco, Massimo; Caltagirone, Carlo
2009-01-01
Visual deficits are frequent in Alzheimer's disease (AD), yet little is known about the nature of these disturbances. The aim of the present study was to investigate color discrimination in patients with AD to determine whether impairment of this visual function is a cognitive or perceptive/sensory disturbance. A cross-sectional clinical study was conducted in a specialized dementia unit on 20 patients with mild/moderate AD and 21 age-matched normal controls. Color discrimination was measured by the Farnsworth-Munsell 100 hue test. Cognitive functioning was measured with the Mini-Mental State Examination (MMSE) and a comprehensive battery of neuropsychological tests. The scores obtained on the color discrimination test were compared between AD patients and controls adjusting for global and domain-specific cognitive performance. Color discrimination performance was inversely related to MMSE score. AD patients had a higher number of errors in color discrimination than controls (mean +/- SD total error score: 442.4 +/- 84.5 vs. 304.1 +/- 45.9). This trend persisted even after adjustment for MMSE score and cognitive performance on specific cognitive domains. A specific reduction of color discrimination capacity is present in AD patients. This deficit does not solely depend upon cognitive impairment, and involvement of the primary visual cortex and/or retinal ganglion cells may be contributory.
[Visual perception and its disorders].
Ruf-Bächtiger, L
1989-11-21
It's the brain and not the eye that decides what is perceived. In spite of this fact, quite a lot is known about the functioning of the eye and the first sections of the optic tract, but little about the actual process of perception. Examination of visual perception and its malfunctions therefore relies on certain hypotheses. Proceeding from the model of functional brain systems, various functional domains of visual perception can be distinguished. Among the more important of these domains are: digit span, visual discrimination and figure-ground discrimination. Evaluation of these functional domains allows us to better understand children with disorders of visual perception and to develop more effective treatment methods.
HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.
Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye
2017-02-09
In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.
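The first stage of the pipeline, extracting multi-level deep features from several CNN layers, can be illustrated with forward hooks on a pretrained torchvision backbone; the sketch below uses ResNet-18 purely for illustration and does not reproduce the paper's networks, visual tree, or multi-task training.

```python
# Sketch: multi-level deep features pulled from several layers of a CNN via hooks.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()     # pretrained weights could be used instead
features = {}

def save_output(name):
    def hook(module, inputs, output):
        # global-average-pool each feature map into a fixed-length vector
        features[name] = output.mean(dim=(2, 3)).detach()
    return hook

for name in ("layer2", "layer3", "layer4"):      # several depths = multi-level features
    getattr(model, name).register_forward_hook(save_output(name))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))           # dummy image batch

for name, feat in features.items():
    print(name, tuple(feat.shape))               # e.g. layer2 (1, 128), layer4 (1, 512)
```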
Ono, T; Tamura, R; Nishijo, H; Nakamura, K; Tabuchi, E
1989-02-01
Visual information processing was investigated in the inferotemporal cortical (ITCx)-amygdalar (AM)-lateral hypothalamic (LHA) axis which contributes to food-nonfood discrimination. Neuronal activity was recorded from monkey AM and LHA during discrimination of sensory stimuli including sight of food or nonfood. The task had four phases: control, visual, bar press, and ingestion. Of 710 AM neurons tested, 220 (31.0%) responded during the visual phase: 48 to only visual stimulation, 13 (1.9%) to visual plus oral sensory stimulation, 142 (20.0%) to multimodal stimulation and 17 (2.4%) to one affectively significant item. Of 669 LHA neurons tested, 106 (15.8%) responded in the visual phase. Of 80 visual-related neurons tested systematically, 33 (41.2%) responded selectively to the sight of any object predicting the availability of reward, and 47 (58.8%) responded nondifferentially to both food and nonfood. Many AM neuron responses were graded according to the degree of affective significance of sensory stimuli (sensory-affective association), but responses of LHA food responsive neurons did not depend on the kind of reward indicated by the sensory stimuli (stimulus-reinforcement association). Some AM and LHA food responses were modulated by extinction or reversal. Dynamic information processing in the ITCx-AM-LHA axis was investigated by reversibly deactivating bilateral ITCx or AM with cooling. ITCx cooling suppressed discrimination by vision-responsive AM neurons (8/17). AM cooling suppressed LHA responses to food (9/22). We suggest deep AM-LHA involvement in food-nonfood discrimination based on AM sensory-affective association and LHA stimulus-reinforcement association.
Visuoperceptual impairment in dementia with Lewy bodies.
Mori, E; Shimomura, T; Fujimori, M; Hirono, N; Imamura, T; Hashimoto, M; Tanimukai, S; Kazui, H; Hanihara, T
2000-04-01
In dementia with Lewy bodies (DLB), vision-related cognitive and behavioral symptoms are common, and involvement of the occipital visual cortices has been demonstrated in functional neuroimaging studies. To delineate visuoperceptual disturbance in patients with DLB in comparison with that in patients with Alzheimer disease and to explore the relationship between visuoperceptual disturbance and the vision-related cognitive and behavioral symptoms. Case-control study. Research-oriented hospital. Twenty-four patients with probable DLB (based on criteria of the Consortium on DLB International Workshop) and 48 patients with probable Alzheimer disease (based on criteria of the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association) who were matched to those with DLB 2:1 by age, sex, education, and Mini-Mental State Examination score. Four test items to examine visuoperceptual functions, including the object size discrimination, form discrimination, overlapping figure identification, and visual counting tasks. Compared with patients with probable Alzheimer disease, patients with probable DLB scored significantly lower on all the visuoperceptive tasks (P<.04 to P<.001). In the DLB group, patients with visual hallucinations (n = 18) scored significantly lower on the overlapping figure identification (P = .01) than those without them (n = 6), and patients with television misidentifications (n = 5) scored significantly lower on the size discrimination (P<.001), form discrimination (P = .01), and visual counting (P = .007) than those without them (n = 19). Visual perception is defective in probable DLB. The defective visual perception plays a role in development of visual hallucinations, delusional misidentifications, visual agnosias, and visuoconstructive disability characteristic of DLB.
Hippocampus, Perirhinal Cortex, and Complex Visual Discriminations in Rats and Humans
ERIC Educational Resources Information Center
Hales, Jena B.; Broadbent, Nicola J.; Velu, Priya D.; Squire, Larry R.; Clark, Robert E.
2015-01-01
Structures in the medial temporal lobe, including the hippocampus and perirhinal cortex, are known to be essential for the formation of long-term memory. Recent animal and human studies have investigated whether perirhinal cortex might also be important for visual perception. In our study, using a simultaneous oddity discrimination task, rats with…
ERIC Educational Resources Information Center
Turchi, Janita; Buffalari, Deanne; Mishkin, Mortimer
2008-01-01
Monkeys trained in either one-trial recognition at 8- to 10-min delays or multi-trial discrimination habits with 24-h intertrial intervals received systemic cholinergic and dopaminergic antagonists, scopolamine and haloperidol, respectively, in separate sessions. Recognition memory was impaired markedly by scopolamine but not at all by…
ERIC Educational Resources Information Center
Kodak, Tiffany; Clements, Andrea; LeBlanc, Brittany
2013-01-01
The purpose of the present investigation was to evaluate a rapid assessment procedure to identify effective instructional strategies to teach auditory-visual conditional discriminations to children diagnosed with autism. We replicated and extended previous rapid skills assessments (Lerman, Vorndran, Addison, & Kuhn, 2004) by evaluating the effects…
Speaker Identity Supports Phonetic Category Learning
ERIC Educational Resources Information Center
Mani, Nivedita; Schneider, Signe
2013-01-01
Visual cues from the speaker's face, such as the discriminable mouth movements used to produce speech sounds, improve discrimination of these sounds by adults. The speaker's face, however, provides more information than just the mouth movements used to produce speech--it also provides a visual indexical cue of the identity of the speaker. The…
Scopolamine effects on visual discrimination: modifications related to stimulus control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, H.L.
1975-01-01
Stumptail monkeys (Macaca arctoides) performed a discrete trial, three-choice visual discrimination. The discrimination behavior was controlled by the shape of the visual stimuli. Strength of the stimuli in controlling behavior was systematically related to a physical property of the stimuli, luminance. Low luminance provided weak control, resulting in a low accuracy of discrimination, a low response probability and maximal sensitivity to scopolamine (7.5-60 μg/kg). In contrast, high luminance provided strong control of behavior and attenuated the effects of scopolamine. Methylscopolamine had no effect in doses of 30 to 90 μg/kg. Scopolamine effects resembled the effects of reducing stimulus control in undrugged monkeys. Since behavior under weak control seems to be especially sensitive to drugs, manipulations of stimulus control may be particularly useful whenever determination of the minimally-effective dose is important, as in behavioral toxicology. Present results are interpreted as specific visual effects of the drug, since nonsensory factors such as base-line response rate, reinforcement schedule, training history, motor performance and motivation were controlled. Implications for state-dependent effects of drugs are discussed.
Discrimination of numerical proportions: A comparison of binomial and Gaussian models.
Raidvee, Aire; Lember, Jüri; Allik, Jüri
2017-01-01
Observers discriminated the numerical proportion of two sets of elements (N = 9, 13, 33, and 65) that differed either by color or orientation. According to the standard Thurstonian approach, the accuracy of proportion discrimination is determined by irreducible noise in the nervous system that stochastically transforms the number of presented visual elements onto a continuum of psychological states representing numerosity. As an alternative to this customary approach, we propose a Thurstonian-binomial model, which assumes discrete perceptual states, each of which is associated with a certain visual element. It is shown that the probability β with which each visual element can be noticed and registered by the perceptual system can explain data of numerical proportion discrimination at least as well as the continuous Thurstonian-Gaussian model, and better, if the greater parsimony of the Thurstonian-binomial model is taken into account using AIC model selection. We conclude that Gaussian and binomial models represent two different fundamental principles-internal noise vs. using only a fraction of available information-which are both plausible descriptions of visual perception.
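The model comparison can be made concrete with a simplified parameterization (an assumption for illustration, not necessarily the authors' exact formulation): a one-parameter Gaussian model of a noisy numerosity difference versus a one-parameter binomial model in which each element is registered with probability beta, each fit by maximum likelihood and compared with AIC.

```python
# Hedged sketch: Gaussian (Thurstonian) vs. binomial "registration" models for
# proportion-discrimination trials, compared by AIC after maximum-likelihood fits.
import numpy as np
from scipy.stats import norm, binom
from scipy.optimize import minimize_scalar

def p_correct_gaussian(n1, n2, sigma):
    return norm.cdf(abs(n1 - n2) / sigma)             # noisy difference of numerosities

def p_correct_binomial(n1, n2, beta):
    # each element noticed with probability beta; observer picks the larger count
    hi, lo = max(n1, n2), min(n1, n2)
    x = np.arange(hi + 1)
    p_hi = binom.pmf(x, hi, beta)
    return np.sum(p_hi * (binom.cdf(x - 1, lo, beta)   # count_lo strictly smaller
                          + 0.5 * binom.pmf(x, lo, beta)))  # ties guessed at 50%

def neg_loglik(p_fn, param, trials):
    ll = 0.0
    for n1, n2, correct in trials:
        p = np.clip(p_fn(n1, n2, param), 1e-9, 1 - 1e-9)
        ll += np.log(p if correct else 1 - p)
    return -ll

# toy data: (n1, n2, correct) triplets generated from the binomial model
rng = np.random.default_rng(0)
trials = [(n1, 33 - n1, rng.random() < p_correct_binomial(n1, 33 - n1, 0.4))
          for n1 in rng.integers(12, 22, 200)]

for name, p_fn, bounds in [("Gaussian", p_correct_gaussian, (0.5, 20.0)),
                           ("binomial", p_correct_binomial, (0.05, 0.95))]:
    fit = minimize_scalar(lambda prm: neg_loglik(p_fn, prm, trials),
                          bounds=bounds, method="bounded")
    print(f"{name}: AIC = {2 * 1 + 2 * fit.fun:.1f}")  # k = 1 parameter per model
```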
Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun
2016-01-01
Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer.
A practical workshop for generating simple DNA fingerprints of plants.
Rouzière, A-S; Redman, J E
2011-01-01
Gel electrophoresis DNA fingerprints offer a graphical and visually appealing illumination of the similarities and differences between DNA sequences of different species and individuals. A polymerase chain reaction (PCR) and restriction digest protocol was designed to give high-school students the opportunity to generate simple fingerprints of plants thereby illustrating concepts and techniques in genetics and molecular biology. Three combinations of primers/restriction enzyme targeting chloroplast DNA were sufficient to generate patterns that enabled visual discrimination of plant species. The protocol was tested with a range of common fruit, vegetable, and herb plants that could be easily cultivated and handled in the laboratory. Toxic or hazardous materials such as ethidium bromide and liquid nitrogen were avoided. The protocol was validated as a university outreach workshop targeted at a group of up to 10 high-school students. In a teaching laboratory, students sampled plants, setup the PCR reaction and restriction digest using microliter pipettes, and loaded the digested samples on an agarose gel. The workshop was structured as 2 × 2.5-hour sessions on separate days. The main challenges stemmed from the speed and accuracy of pipetting, especially at the gel loading stage. Feedback from students was largely positive, with the majority reporting that they had both enjoyed and learnt from the experience. Copyright © 2010 Wiley Periodicals, Inc.
Mastering algebra retrains the visual system to perceive hierarchical structure in equations.
Marghetis, Tyler; Landy, David; Goldstone, Robert L
2016-01-01
Formal mathematics is a paragon of abstractness. It thus seems natural to assume that the mathematical expert should rely more on symbolic or conceptual processes, and less on perception and action. We argue instead that mathematical proficiency relies on perceptual systems that have been retrained to implement mathematical skills. Specifically, we investigated whether the visual system-in particular, object-based attention-is retrained so that parsing algebraic expressions and evaluating algebraic validity are accomplished by visual processing. Object-based attention occurs when the visual system organizes the world into discrete objects, which then guide the deployment of attention. One classic signature of object-based attention is better perceptual discrimination within, rather than between, visual objects. The current study reports that object-based attention occurs not only for simple shapes but also for symbolic mathematical elements within algebraic expressions-but only among individuals who have mastered the hierarchical syntax of algebra. Moreover, among these individuals, increased object-based attention within algebraic expressions is associated with a better ability to evaluate algebraic validity. These results suggest that, in mastering the rules of algebra, people retrain their visual system to represent and evaluate abstract mathematical structure. We thus argue that algebraic expertise involves the regimentation and reuse of evolutionarily ancient perceptual processes. Our findings implicate the visual system as central to learning and reasoning in mathematics, leading us to favor educational approaches to mathematics and related STEM fields that encourage students to adapt, not abandon, their use of perception.
Treviño, Mario
2014-01-01
Animal choices depend on direct sensory information, but also on the dynamic changes in the magnitude of reward. In visual discrimination tasks, the emergence of lateral biases in the choice record from animals is often described as a behavioral artifact, because these are highly correlated with error rates affecting psychophysical measurements. Here, we hypothesized that biased choices could constitute a robust behavioral strategy to solve discrimination tasks of graded difficulty. We trained mice to swim in a two-alternative visual discrimination task with escape from water as the reward. Their prevalence of making lateral choices increased with stimulus similarity and was present in conditions of high discriminability. While lateralization occurred at the individual level, it was absent, on average, at the population level. Biased choice sequences obeyed the generalized matching law and increased task efficiency when stimulus similarity was high. A mathematical analysis revealed that strongly-biased mice used information from past rewards but not past choices to make their current choices. We also found that the amount of lateralized choices made during the first day of training predicted individual differences in the average learning behavior. This framework provides useful analysis tools to study individualized visual-learning trajectories in mice. PMID:25524257
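The generalized matching law referred to above is usually fit as a linear regression in log coordinates; the sketch below shows that standard fit on toy session counts (assumed data, not the study's).

```python
# Sketch of a generalized-matching-law fit: regress log choice ratios on
# log reinforcer ratios to estimate sensitivity (slope) and bias (intercept).
import numpy as np

# toy session data: choices made and rewards earned on the left vs. right option
left_choices  = np.array([55, 60, 42, 70, 35], dtype=float)
right_choices = np.array([45, 40, 58, 30, 65], dtype=float)
left_rewards  = np.array([30, 35, 20, 45, 15], dtype=float)
right_rewards = np.array([25, 20, 35, 12, 40], dtype=float)

x = np.log(left_rewards / right_rewards)      # log reinforcer ratio
y = np.log(left_choices / right_choices)      # log choice ratio
slope, intercept = np.polyfit(x, y, 1)        # sensitivity a and log-bias log(b)
print(f"sensitivity a = {slope:.2f}, bias b = {np.exp(intercept):.2f}")
```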
Determinants of Global Color-Based Selection in Human Visual Cortex.
Bartsch, Mandy V; Boehler, Carsten N; Stoppel, Christian M; Merkel, Christian; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Hopf, Jens-Max
2015-09-01
Feature attention operates in a spatially global way, with attended feature values being prioritized for selection outside the focus of attention. Accounts of global feature attention have emphasized feature competition as a determining factor. Here, we use magnetoencephalographic recordings in humans to test whether competition is critical for global feature selection to arise. Subjects performed a color/shape discrimination task in one visual field (VF), while irrelevant color probes were presented in the other unattended VF. Global effects of color attention were assessed by analyzing the response to the probe as a function of whether or not the probe's color was a target-defining color. We find that global color selection involves a sequence of modulations in extrastriate cortex, with an initial phase in higher tier areas (lateral occipital complex) followed by a later phase in lower tier retinotopic areas (V3/V4). Importantly, these modulations appeared with and without color competition in the focus of attention. Moreover, early parts of the modulation emerged for a task-relevant color not even present in the focus of attention. All modulations, however, were eliminated during simple onset-detection of the colored target. These results indicate that global color-based attention depends on target discrimination independent of feature competition in the focus of attention. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Jemel, Boutheina; Achenbach, Christiane; Müller, Bernhard W; Röpcke, Bernd; Oades, Robert D
2002-01-01
The event-related potential (ERP) reflecting auditory change detection (mismatch negativity, MMN) registers automatic selective processing of a deviant sound with respect to a working memory template resulting from a series of standard sounds. Controversy remains whether MMN can be generated in the frontal as well as the temporal cortex. Our aim was to see if frontal as well as temporal lobe dipoles could explain MMN recorded after pitch-deviants (Pd-MMN) and duration deviants (Dd-MMN). EEG recordings were taken from 32 sites in 14 healthy subjects during a passive 3-tone oddball presented during a simple visual discrimination and an active auditory discrimination condition. Both conditions were repeated after one month. The Pd-MMN was larger, peaked earlier and correlated better between sessions than the Dd-MMN. Two dipoles in the auditory cortex and two in the frontal lobe (left cingulate and right inferior frontal cortex) were found to be similarly placed for Pd- and Dd-MMN, and were well replicated on retest. This study confirms interactions between activity generated in the frontal and auditory temporal cortices in automatic attention-like processes that resemble initial brain imaging reports of unconscious visual change detection. The lack of interference between sessions shows that the situation is likely to be sensitive to treatment or illness effects on fronto-temporal interactions involving repeated measures.
Bett, David; Allison, Elizabeth; Murdoch, Lauren H.; Kaefer, Karola; Wood, Emma R.; Dudchenko, Paul A.
2012-01-01
Vicarious trial-and-errors (VTEs) are back-and-forth movements of the head exhibited by rodents and other animals when faced with a decision. These behaviors have recently been associated with prospective sweeps of hippocampal place cell firing, and thus may reflect a rodent model of deliberative decision-making. The aim of the current study was to test whether the hippocampus is essential for VTEs in a spatial memory task and in a simple visual discrimination (VD) task. We found that lesions of the hippocampus with ibotenic acid produced a significant impairment in the accuracy of choices in a serial spatial reversal (SR) task. In terms of VTEs, whereas sham-lesioned animals engaged in more VTE behavior prior to identifying the location of the reward as opposed to repeated trials after it had been located, the lesioned animals failed to show this difference. In contrast, damage to the hippocampus had no effect on acquisition of a VD or on the VTEs seen in this task. For both lesion and sham-lesion animals, adding an additional choice to the VD increased the number of VTEs and decreased the accuracy of choices. Together, these results suggest that the hippocampus may be specifically involved in VTE behavior during spatial decision making. PMID:23115549
Zhong, Xianhua; Li, Dan; Du, Wei; Yan, Mengqiu; Wang, You; Huo, Danqun; Hou, Changjun
2018-06-01
Volatile organic compounds (VOCs) in breath can be used as biomarkers to identify early stages of lung cancer. Herein, we report a disposable colorimetric array that has been constructed from diverse chemo-responsive colorants. Distinguishable difference maps were plotted within 4 min for specifically targeted VOCs. Through the consideration of various chemical interactions with VOCs, the arrays successfully discriminate between 20 different volatile organic compounds in breath that are related to lung cancer. VOCs were identified either with the visualized difference maps or through pattern recognition with an accuracy of at least 90%. No uncertainties or errors were observed in the hierarchical cluster analysis (HCA). Finally, good reproducibility and stability of the array was achieved against changes in humidity. Generally, this work provides fundamental support for construction of simple and rapid VOC sensors. More importantly, this approach provides a hypothesis-free array method for breath testing via VOC profiling. Therefore, this small, rapid, non-invasive, inexpensive, and visualized sensor array is a powerful and promising tool for early screening of lung cancer. Graphical abstract A disposable colorimetric array has been developed with broadly chemo-responsive dyes to incorporate various chemical interactions, through which the arrays successfully discriminate 20 VOCs that are related to lung cancer via difference maps alone or chemometrics within 4 min. The hydrophobic porous matrix provides good stability against changes in humidity.
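Hierarchical cluster analysis of colorimetric difference maps can be sketched as follows, with each VOC summarized by the colour change of every spot; the data here are simulated and the linkage choices (Ward, Euclidean) are assumptions rather than the authors' settings.

```python
# Sketch: hierarchical cluster analysis (HCA) of colorimetric-array difference maps.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_vocs, n_spots = 20, 36
# toy "difference maps": delta-R, delta-G, delta-B for each spot, one row per VOC;
# rows are built from 4 underlying response types plus measurement noise
profiles = np.repeat(rng.normal(size=(4, n_spots * 3)), 5, axis=0)
profiles += rng.normal(scale=0.3, size=(n_vocs, n_spots * 3))

Z = linkage(profiles, method="ward")          # agglomerative clustering (Ward linkage)
clusters = fcluster(Z, t=4, criterion="maxclust")
print("cluster assignment per VOC:", clusters)
```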
Nawroth, Christian; Prentice, Pamela M; McElligott, Alan G
2017-01-01
Variation in common personality traits, such as boldness or exploration, is often associated with risk-reward trade-offs and behavioural flexibility. To date, only a few studies have examined the effects of consistent behavioural traits on both learning and cognition. We investigated whether certain personality traits ('exploration' and 'sociability') of individuals were related to cognitive performance, learning flexibility and learning style in a social ungulate species, the goat (Capra hircus). We also investigated whether a preference for feature cues rather than impaired learning abilities can explain performance variation in a visual discrimination task. We found that personality scores were consistent across time and context. Less explorative goats performed better in a non-associative cognitive task, in which subjects had to follow the trajectory of a hidden object (i.e. testing their ability for object permanence). We also found that less sociable subjects performed better compared to more sociable goats in a visual discrimination task. Good visual learning performance was associated with a preference for feature cues, indicating personality-dependent learning strategies in goats. Our results suggest that personality traits predict the outcome in visual discrimination and non-associative cognitive tasks in goats and that impaired performance in a visual discrimination task does not necessarily imply impaired learning capacities, but rather can be explained by a varying preference for feature cues. Copyright © 2016 Elsevier B.V. All rights reserved.
Roth, Zvi N.
2016-01-01
Neural responses in visual cortex are governed by a topographic mapping from retinal locations to cortical responses. Moreover, at the voxel population level early visual cortex (EVC) activity enables accurate decoding of stimuli locations. However, in many cases information enabling one to discriminate between locations (i.e., discriminative information) may be less relevant than information regarding the relative location of two objects (i.e., relative information). For example, when planning to grab a cup, determining whether the cup is located at the same retinal location as the hand is hardly relevant, whereas the location of the cup relative to the hand is crucial for performing the action. We have previously used multivariate pattern analysis techniques to measure discriminative location information, and found the highest levels in EVC, in line with other studies. Here we show, using representational similarity analysis, that availability of discriminative information in fMRI activation patterns does not entail availability of relative information. Specifically, we find that relative location information can be reliably extracted from activity patterns in posterior intraparietal sulcus (pIPS), but not from EVC, where we find the spatial representation to be warped. We further show that this variability in relative information levels between regions can be explained by a computational model based on an array of receptive fields. Moreover, when the model's receptive fields are extended to include inhibitory surround regions, the model can account for the spatial warping in EVC. These results demonstrate how size and shape properties of receptive fields in human visual cortex contribute to the transformation of discriminative spatial representations into relative spatial representations along the visual stream. PMID:27242455
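Representational similarity analysis as used above compares a region's representational dissimilarity matrix (RDM) with a model RDM; the sketch below shows the generic computation on toy voxel patterns, with a hypothetical relative-location model RDM, and is not the authors' pipeline.

```python
# Sketch of representational similarity analysis (RSA): region RDMs from voxel
# patterns, compared against a model RDM with a rank correlation.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_voxels = 12, 200
patterns_evc  = rng.normal(size=(n_conditions, n_voxels))   # toy EVC voxel patterns
patterns_pips = rng.normal(size=(n_conditions, n_voxels))   # toy pIPS voxel patterns

# hypothetical model RDM: dissimilarity grows with the difference in relative location
relative_location = np.arange(n_conditions, dtype=float)[:, None]
model_rdm = pdist(relative_location, metric="euclidean")

def region_rdm(patterns):
    return pdist(patterns, metric="correlation")             # 1 - Pearson r between patterns

for name, patterns in [("EVC", patterns_evc), ("pIPS", patterns_pips)]:
    rho, _ = spearmanr(region_rdm(patterns), model_rdm)
    print(f"{name} vs. relative-location model RDM: Spearman rho = {rho:.2f}")
```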
Hogarth, Lee; Dickinson, Anthony; Duka, Theodora
2003-08-01
Incentive salience theory states that acquired bias in selective attention for stimuli associated with tobacco-smoke reinforcement controls the selective performance of tobacco-seeking and tobacco-taking behaviour. To support this theory, we assessed whether a stimulus that had acquired control of a tobacco-seeking response in a discrimination procedure would command the focus of visual attention in a subsequent test phase. Smokers received discrimination training in which an instrumental key-press response was followed by tobacco-smoke reinforcement when one visual discriminative stimulus (S+) was present, but not when another stimulus (S-) was present. The skin conductance response to the S+ and S- assessed whether Pavlovian conditioning to the S+ had taken place. In a subsequent test phase, the S+ and S- were presented in the dot-probe task and the allocation of the focus of visual attention to these stimuli was measured. Participants learned to perform the instrumental tobacco-seeking response selectively in the presence of the S+ relative to the S-, and showed a greater skin conductance response to the S+ than the S-. In the subsequent test phase, participants allocated the focus of visual attention to the S+ in preference to the S-. Correlation analysis revealed that the visual attentional bias for the S+ was positively associated with the number of times the S+ had been paired with tobacco-smoke in training, with the skin conductance response to the S+, and with subjective craving to smoke. Furthermore, increased exposure to tobacco-smoke in the natural environment was associated with reduced discrimination learning. These data demonstrate that discriminative stimuli that signal that tobacco-smoke reinforcement is available acquire the capacity to command selective attention and to elicit instrumental tobacco-seeking behaviour.
Prestimulus oscillatory activity in the alpha band predicts visual discrimination ability.
van Dijk, Hanneke; Schoffelen, Jan-Mathijs; Oostenveld, Robert; Jensen, Ole
2008-02-20
Although the resting and baseline states of the human electroencephalogram and magnetoencephalogram (MEG) are dominated by oscillations in the alpha band (approximately 10 Hz), the functional role of these oscillations remains unclear. In this study we used MEG to investigate how spontaneous oscillations in humans, present before visual stimuli, modulate visual perception. Subjects had to report whether there was a subtle difference in gray levels between two superimposed discs. We then compared the prestimulus brain activity for correctly (hits) versus incorrectly (misses) identified stimuli. We found that visual discrimination ability decreased with an increase in prestimulus alpha power. Given that reaction times did not vary systematically with prestimulus alpha power, changes in vigilance are unlikely to explain the change in discrimination ability. Source reconstruction using spatial filters allowed us to identify the brain areas accounting for this effect. The dominant sources modulating visual perception were localized around the parieto-occipital sulcus. We suggest that parieto-occipital alpha power reflects functional inhibition imposed by higher-level areas, which serves to modulate the gain of the visual stream.
The informativity of sound modulates crossmodal facilitation of visual discrimination: an fMRI study.
Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin
2017-01-18
Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced uncertainty about the timing of the visual display and improved perceptual responses (informative sound). However, the neural mechanism by which the informativity of sound affects crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of sound informativity in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed sound informativity-induced activation enhancement in regions including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that activity in the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within audiovisual stimuli that was learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.
Fornix and medial temporal lobe lesions lead to comparable deficits in complex visual perception.
Lech, Robert K; Koch, Benno; Schwarz, Michael; Suchan, Boris
2016-05-04
Recent research dealing with the structures of the medial temporal lobe (MTL) has shifted away from exclusively investigating memory-related processes and has repeatedly incorporated the investigation of complex visual perception. Several studies have demonstrated that higher level visual tasks can recruit structures like the hippocampus and perirhinal cortex in order to successfully perform complex visual discriminations, leading to a perceptual-mnemonic or representational view of the medial temporal lobe. The current study employed a complex visual discrimination paradigm in two patients suffering from brain lesions with differing locations and origin. Both patients, one with extensive medial temporal lobe lesions (VG) and one with a small lesion of the anterior fornix (HJK), were impaired in complex discriminations while showing otherwise mostly intact cognitive functions. The current data confirmed previous results while also extending the perceptual-mnemonic theory of the MTL to the main output structure of the hippocampus, the fornix. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Timing of target discrimination in human frontal eye fields.
O'Shea, Jacinta; Muggleton, Neil G; Cowey, Alan; Walsh, Vincent
2004-01-01
Frontal eye field (FEF) neurons discharge in response to behaviorally relevant stimuli that are potential targets for saccades. Distinct visual and motor processes have been dissociated in the FEF of macaque monkeys, but little is known about the visual processing capacity of FEF in humans. We used double-pulse transcranial magnetic stimulation [(d)TMS] to investigate the timing of target discrimination during visual conjunction search. We applied dual TMS pulses separated by 40 msec over the right FEF and vertex. These were applied in five timing conditions to sample separate time windows within the first 200 msec of visual processing. (d)TMS impaired search performance, reflected in reduced d' scores. This effect was limited to a time window between 40 and 80 msec after search array onset. These parameters correspond with single-cell activity in FEF that predicts monkeys' behavioral reports on hit, miss, false alarm, and correct rejection trials. Our findings demonstrate a crucial early role for human FEF in visual target discrimination that is independent of saccade programming.
Pre-cooling moderately enhances visual discrimination during exercise in the heat.
Clarke, Neil D; Duncan, Michael J; Smith, Mike; Hankey, Joanne
2017-02-01
Pre-cooling has been reported to attenuate the increase in core temperature, although information regarding the effects of pre-cooling on cognitive function is limited. The present study investigated the effects of pre-cooling on visual discrimination during exercise in the heat. Eight male recreational runners completed 90 min of treadmill running at 65% VO2max in the heat [32.4 ± 0.9°C and 46.8 ± 6.4% relative humidity (r.h.)] on two occasions in a randomised, counterbalanced crossover design. Participants underwent pre-cooling by means of water immersion (20.3 ± 0.3°C) for 60 min or remained seated for 60 min in a laboratory (20.2 ± 1.7°C and 60.2 ± 2.5% r.h.). Rectal temperature (Trec) and mean skin temperature (Tskin) were monitored throughout the protocol. At 30-min intervals participants performed a visual discrimination task. Following pre-cooling, Trec (P = 0.040; [Formula: see text] = 0.48) was moderately lower at 0 and 30 min and Tskin (P = 0.003; [Formula: see text] = 0.75) lower to a large extent at 0 min of exercise. Visual discrimination was moderately more accurate at 60 and 90 min of exercise following pre-cooling (P = 0.067; [Formula: see text] = 0.40). Pre-cooling resulted in small improvements in visual discrimination sensitivity (F(1,7) = 2.188; P = 0.183; [Formula: see text] = 0.24), criterion (F(1,7) = 1.298; P = 0.292; [Formula: see text] = 0.16) and bias (F(1,7) = 2.202; P = 0.181; [Formula: see text] = 0.24). Pre-cooling moderately improves visual discrimination accuracy during exercise in the heat.
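Since the abstract reports separate sensitivity, criterion, and bias measures, the following sketch shows one common way such signal-detection indices are computed from yes/no discrimination data; the trial counts, the log-linear correction, and the function name sdt_measures are illustrative assumptions, not values or code from the study.

```python
# A minimal signal-detection sketch (hypothetical counts, not data from the study)
# showing how sensitivity (d'), criterion (c), and likelihood-ratio bias (beta)
# are typically computed for a yes/no visual discrimination task.
import math
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d', criterion c, bias beta), with a log-linear correction for 0/1 rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    beta = math.exp(d_prime * criterion)
    return d_prime, criterion, beta

# Hypothetical trial counts for one participant at one time point
print(sdt_measures(hits=38, misses=10, false_alarms=12, correct_rejections=36))
```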
Computing local edge probability in natural scenes from a population of oriented simple cells
Ramachandra, Chaithanya A.; Mel, Bartlett W.
2013-01-01
A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell—an oriented linear filter followed by a divisive normalization—fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼ 100 filters, we culled out a subset that were maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295
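As a rough illustration of the decoding idea, the toy sketch below computes an edge probability from a population of filter responses under a naive Bayes (conditional independence) assumption with Gaussian per-filter likelihoods; the parameters, prior, and simulated responses are stand-ins for the customized parametric likelihood model and ground-truth-labeled data described in the abstract.

```python
# Toy sketch of computing local edge probability from a population of oriented
# filter responses under a conditional-independence (naive Bayes) assumption.
# Gaussian likelihoods and their parameters are illustrative stand-ins.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_filters = 20
# Assumed per-filter likelihood parameters given "edge" vs. "no edge"
mu_edge, sd_edge = rng.uniform(0.4, 0.9, n_filters), 0.25
mu_noedge, sd_noedge = rng.uniform(0.0, 0.2, n_filters), 0.25
prior_edge = 0.05  # edges are rare at any given location/orientation

def edge_probability(responses):
    log_like_edge = norm.logpdf(responses, mu_edge, sd_edge).sum()
    log_like_noedge = norm.logpdf(responses, mu_noedge, sd_noedge).sum()
    log_odds = (log_like_edge - log_like_noedge
                + np.log(prior_edge) - np.log(1 - prior_edge))
    return 1.0 / (1.0 + np.exp(-log_odds))

responses = rng.uniform(0.3, 0.8, n_filters)   # simulated filter outputs
print(f"P(edge | filter population) = {edge_probability(responses):.3f}")
```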
Kahn, Julia B; Ward, Ryan D; Kahn, Lora W; Rudy, Nicole M; Kandel, Eric R; Balsam, Peter D; Simpson, Eleanor H
2012-10-16
Working memory and attention are complex cognitive functions that are disrupted in several neuropsychiatric disorders. Mouse models of such human diseases are commonly subjected to maze-based tests that can neither distinguish between these cognitive functions nor isolate specific aspects of either function. Here, we have adapted a simple visual discrimination task, and by varying only the timing of events within the same task construct, we are able to measure independently the behavioral response to increasing attentional demand and increasing length of time that information must be maintained in working memory. We determined that mPFC lesions in mice impair attention but not working memory maintenance.
Surguladze, Simon A; Chkonia, Eka D; Kezeli, Archil R; Roinishvili, Maya O; Stahl, Daniel; David, Anthony S
2012-05-01
Abnormalities in visual processing have been found consistently in schizophrenia patients, including deficits in early visual processing, perceptual organization, and facial emotion recognition. There is however no consensus as to whether these abnormalities represent heritable illness traits and what their contribution is to psychopathology. Fifty patients with schizophrenia, 61 of their first-degree healthy relatives, and 50 psychiatrically healthy volunteers were tested with regard to facial affect (FA) discrimination and susceptibility to develop the color-contingent illusion [the McCollough Effect (ME)]. Both patients and relatives demonstrated significantly lower accuracy in FA discrimination compared with controls. There was also a significant effect of familiality: Participants from the same families had more similar accuracy scores than those who belonged to different families. Experiments with the ME showed that schizophrenia patients required longer time to develop the illusion than relatives and controls, which indicated poor visual adaptation in schizophrenia. Relatives were marginally slower than controls. There was no significant association between the measures of FA discrimination accuracy and ME in any of the participant groups. Facial emotion discrimination was associated with the degree of interpersonal problems, as measured by the Schizotypal Personality Questionnaire in relatives and healthy volunteers, whereas the ME was associated with the perceptual-cognitive symptoms of schizotypy and positive symptoms of schizophrenia. Our results support the heritability of FA discrimination deficits as a trait and indicate visual adaptation abnormalities in schizophrenia, which are symptom related.
Stereoscopic processing of crossed and uncrossed disparities in the human visual cortex.
Li, Yuan; Zhang, Chuncheng; Hou, Chunping; Yao, Li; Zhang, Jiacai; Long, Zhiying
2017-12-21
Binocular disparity provides a powerful cue for depth perception in a stereoscopic environment. Despite increasing knowledge of the cortical areas that process disparity from neuroimaging studies, the neural mechanism underlying disparity sign processing [crossed disparity (CD)/uncrossed disparity (UD)] is still poorly understood. In the present study, functional magnetic resonance imaging (fMRI) was used to explore different neural features that are relevant to disparity-sign processing. We performed an fMRI experiment on 27 right-handed healthy human volunteers by using both general linear model (GLM) and multi-voxel pattern analysis (MVPA) methods. First, GLM was used to determine the cortical areas that displayed different responses to different disparity signs. Second, MVPA was used to determine how the cortical areas discriminate different disparity signs. The GLM analysis results indicated that shapes with UD induced significantly stronger activity in the sub-region (LO) of the lateral occipital cortex (LOC) than those with CD. The results of MVPA based on region of interest indicated that areas V3d and V3A displayed higher accuracy in the discrimination of crossed and uncrossed disparities than LOC. The results of searchlight-based MVPA indicated that the dorsal visual cortex showed significantly higher prediction accuracy than the ventral visual cortex and that the sub-region LO of LOC showed high accuracy in the discrimination of crossed and uncrossed disparities. These results suggest that the dorsal visual areas discriminate disparity sign better than the ventral visual areas, even though their overall responses do not differ between disparity signs, whereas LO in the ventral visual cortex is involved in recognizing shapes with different disparity signs and also discriminates disparity sign.
Kamitani, Toshiaki; Kuroiwa, Yoshiyuki
2009-01-01
Recent studies demonstrated an altered P3 component and prolonged reaction time during the visual discrimination tasks in multiple system atrophy (MSA). In MSA, however, little is known about the N2 component which is known to be closely related to the visual discrimination process. We therefore compared the N2 component as well as the N1 and P3 components in 17 MSA patients with these components in 10 normal controls, by using a visual selective attention task to color or to shape. While the P3 in MSA was significantly delayed in selective attention to shape, the N2 in MSA was significantly delayed in selective attention to color. N1 was normally preserved both in attention to color and in attention to shape. Our electrophysiological results indicate that the color discrimination process during selective attention is impaired in MSA.
Behavioral evaluation of visual function of rats using a visual discrimination apparatus.
Thomas, Biju B; Samant, Deedar M; Seiler, Magdalene J; Aramant, Robert B; Sheikholeslami, Sharzad; Zhang, Kevin; Chen, Zhenhai; Sadda, SriniVas R
2007-05-15
A visual discrimination apparatus was developed to evaluate the visual sensitivity of normal pigmented rats (n=13) and S334ter-line-3 retinal degenerate (RD) rats (n=15). The apparatus is a modified Y maze consisting of two chambers leading to the rats' home cage. Rats were trained to find a one-way exit door leading into their home cage, based on distinguishing between two different visual alternatives (either a dark background or black and white stripes at varying luminance levels) which were randomly displayed on the back of each chamber. Within 2 weeks of training, all rats were able to distinguish between these two visual patterns. The discrimination threshold of normal pigmented rats was a luminance level of -5.37+/-0.05 log cd/m(2); whereas the threshold level of 100-day-old RD rats was -1.14+/-0.09 log cd/m(2) with considerable variability in performance. When tested at a later age (about 150 days), the threshold level of RD rats was significantly increased (-0.82+/-0.09 log cd/m(2), p<0.03, paired t-test). This apparatus could be useful to train rats at a very early age to distinguish between two different visual stimuli and may be effective for visual functional evaluations following therapeutic interventions.
ERIC Educational Resources Information Center
Kemner, Chantal; van Ewijk, Lizet; van Engeland, Herman; Hooge, Ignace
2008-01-01
Subjects with PDD excel on certain visuo-spatial tasks, amongst which visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses, by measuring eye movements…
A Further Evaluation of Picture Prompts during Auditory-Visual Conditional Discrimination Training
ERIC Educational Resources Information Center
Carp, Charlotte L.; Peterson, Sean P.; Arkel, Amber J.; Petursdottir, Anna I.; Ingvarsson, Einar T.
2012-01-01
This study was a systematic replication and extension of Fisher, Kodak, and Moore (2007), in which a picture prompt embedded into a least-to-most prompting sequence facilitated acquisition of auditory-visual conditional discriminations. Participants were 4 children who had been diagnosed with autism; 2 had limited prior receptive skills, and 2 had…
ERIC Educational Resources Information Center
Janssen, David Rainsford
This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…
NASA Technical Reports Server (NTRS)
Laverghetta, A. V.; Shimizu, T.
1999-01-01
The nucleus rotundus is a large thalamic nucleus in birds and plays a critical role in many visual discrimination tasks. In order to test the hypothesis that there are functionally distinct subdivisions in the nucleus rotundus, effects of selective lesions of the nucleus were studied in pigeons. The birds were trained to discriminate between different types of stationary objects and between different directions of moving objects. Multiple regression analyses revealed that lesions in the anterior, but not posterior, division caused deficits in discrimination of small stationary stimuli. Lesions in neither the anterior nor the posterior division predicted deficits in discrimination of moving stimuli. These results are consistent with a prediction derived from the hypothesis that the nucleus is composed of functional subdivisions.
Face adaptation improves gender discrimination.
Yang, Hua; Shen, Jianhong; Chen, Juan; Fang, Fang
2011-01-01
Adaptation to a visual pattern can alter the sensitivities of neuronal populations encoding the pattern. However, the functional roles of adaptation, especially in high-level vision, are still equivocal. In the present study, we performed three experiments to investigate if face gender adaptation could affect gender discrimination. Experiments 1 and 2 revealed that adapting to a male/female face could selectively enhance discrimination for male/female faces. Experiment 3 showed that the discrimination enhancement induced by face adaptation could transfer across a substantial change in three-dimensional face viewpoint. These results provide further evidence suggesting that, similar to low-level vision, adaptation in high-level vision could calibrate the visual system to current inputs of complex shapes (i.e. face) and improve discrimination at the adapted characteristic. Copyright © 2010 Elsevier Ltd. All rights reserved.
Third-Degree Price Discrimination Revisited
ERIC Educational Resources Information Center
Kwon, Youngsun
2006-01-01
The author derives the probability that price discrimination improves social welfare, using a simple model of third-degree price discrimination assuming two independent linear demands. The probability that price discrimination raises social welfare increases as the preferences or incomes of consumer groups become more heterogeneous. He derives the…
Stobbe, Nina; Westphal-Fitch, Gesche; Aust, Ulrike; Fitch, W. Tecumseh
2012-01-01
Artificial grammar learning (AGL) provides a useful tool for exploring rule learning strategies linked to general purpose pattern perception. To be able to directly compare performance of humans with other species with different memory capacities, we developed an AGL task in the visual domain. Presenting entire visual patterns simultaneously instead of sequentially minimizes the amount of required working memory. This approach allowed us to evaluate performance levels of two bird species, kea (Nestor notabilis) and pigeons (Columba livia), in direct comparison to human participants. After being trained to discriminate between two types of visual patterns generated by rules at different levels of computational complexity and presented on a computer screen, birds and humans received further training with a series of novel stimuli that followed the same rules, but differed in various visual features from the training stimuli. Most avian and all human subjects continued to perform well above chance during this initial generalization phase, suggesting that they were able to generalize learned rules to novel stimuli. However, detailed testing with stimuli that violated the intended rules regarding the exact number of stimulus elements indicates that neither bird species was able to successfully acquire the intended pattern rule. Our data suggest that, in contrast to humans, these birds were unable to master a simple rule above the finite-state level, even with simultaneous item presentation and despite intensive training. PMID:22688635
Microcontroller based fibre-optic visual presentation system for multisensory neuroimaging.
Kurniawan, Veldri; Klemen, Jane; Chambers, Christopher D
2011-10-30
Presenting visual stimuli in physical 3D space during fMRI experiments carries significant technical challenges. Certain types of multisensory visuotactile experiments and visuomotor tasks require presentation of visual stimuli in peripersonal space, which cannot be accommodated by ordinary projection screens or binocular goggles. However, light points produced by a group of LEDs can be transmitted through fibre-optic cables and positioned anywhere inside the MRI scanner. Here we describe the design and implementation of a microcontroller-based programmable digital device for controlling fibre-optically transmitted LED lights from a PC. The main feature of this device is the ability to independently control the colour, brightness, and timing of each LED. Moreover, the device was designed in a modular and extensible way, which enables easy adaptation for various experimental paradigms. The device was tested and validated in three fMRI experiments involving basic visual perception, a simple colour discrimination task, and a blocked multisensory visuo-tactile task. The results revealed significant lateralized activation in occipital cortex of all participants, a reliable response in ventral occipital areas to colour stimuli elicited by the device, and strong activations in multisensory brain regions in the multisensory task. Overall, these findings confirm the suitability of this device for presenting complex fibre-optic visual and cross-modal stimuli inside the scanner. Copyright © 2011 Elsevier B.V. All rights reserved.
Tunnel vision: sharper gradient of spatial attention in autism.
Robertson, Caroline E; Kravitz, Dwight J; Freyberg, Jan; Baron-Cohen, Simon; Baker, Chris I
2013-04-17
Enhanced perception of detail has long been regarded a hallmark of autism spectrum conditions (ASC), but its origins are unknown. Normal sensitivity on all fundamental perceptual measures-visual acuity, contrast discrimination, and flicker detection-is strongly established in the literature. If individuals with ASC do not have superior low-level vision, how is perception of detail enhanced? We argue that this apparent paradox can be resolved by considering visual attention, which is known to enhance basic visual sensitivity, resulting in greater acuity and lower contrast thresholds. Here, we demonstrate that the focus of attention and concomitant enhancement of perception are sharper in human individuals with ASC than in matched controls. Using a simple visual acuity task embedded in a standard cueing paradigm, we mapped the spatial and temporal gradients of attentional enhancement by varying the distance and onset time of visual targets relative to an exogenous cue, which obligatorily captures attention. Individuals with ASC demonstrated a greater fall-off in performance with distance from the cue than controls, indicating a sharper spatial gradient of attention. Further, this sharpness was highly correlated with the severity of autistic symptoms in ASC, as well as autistic traits across both ASC and control groups. These findings establish the presence of a form of "tunnel vision" in ASC, with far-reaching implications for our understanding of the social and neurobiological aspects of autism.
Visual Network Asymmetry and Default Mode Network Function in ADHD: An fMRI Study
Hale, T. Sigi; Kane, Andrea M.; Kaminsky, Olivia; Tung, Kelly L.; Wiley, Joshua F.; McGough, James J.; Loo, Sandra K.; Kaplan, Jonas T.
2014-01-01
Background: A growing body of research has identified abnormal visual information processing in attention-deficit hyperactivity disorder (ADHD). In particular, slow processing speed and increased reliance on visuo-perceptual strategies have become evident. Objective: The current study used recently developed fMRI methods to replicate and further examine abnormal rightward biased visual information processing in ADHD and to further characterize the nature of this effect; we tested its association with several large-scale distributed network systems. Method: We examined fMRI BOLD response during letter and location judgment tasks, and directly assessed visual network asymmetry and its association with large-scale networks using both a voxelwise and an averaged signal approach. Results: Initial within-group analyses revealed a pattern of left-lateralized visual cortical activity in controls but right-lateralized visual cortical activity in ADHD children. Direct analyses of visual network asymmetry confirmed atypical rightward bias in ADHD children compared to controls. This ADHD characteristic was atypically associated with reduced activation across several extra-visual networks, including the default mode network (DMN). We also found atypical associations between DMN activation and ADHD subjects’ inattentive symptoms and task performance. Conclusion: The current study demonstrated rightward VNA in ADHD during a simple letter discrimination task. This result adds an important novel consideration to the growing literature identifying abnormal visual processing in ADHD. We postulate that this characteristic reflects greater perceptual engagement of task-extraneous content, and that it may be a basic feature of less efficient top-down task-directed control over visual processing. We additionally argue that abnormal DMN function may contribute to this characteristic. PMID:25076915
Xu, Yang; D'Lauro, Christopher; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.
2013-01-01
Humans are remarkably proficient at categorizing visually-similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned–with feedback–to discriminate two highly-similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex with the only significant post-learning effect being a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning. PMID:24146656
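A schematic of the time-resolved decoding logic referred to above is sketched below on simulated sensor data: a linear classifier is cross-validated within successive time windows (for example, an "M200"-like 150-250 ms window) to track when category information becomes decodable. The data shapes, window boundaries, and injected signal are illustrative assumptions, not the study's MEG pipeline.

```python
# Schematic time-resolved decoding sketch (simulated data, illustrative windows):
# classify stimulus category from MEG-like sensor patterns in successive time
# windows to track when category information becomes decodable.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_sensors, n_samples = 120, 50, 60   # 60 samples = 0-600 ms at 100 Hz
labels = rng.integers(0, 2, n_trials)
data = rng.standard_normal((n_trials, n_sensors, n_samples))
# Inject a weak category signal between 150 and 250 ms (an "M200"-like window)
data[labels == 1, :5, 15:25] += 0.5

for start, stop in [(5, 15), (15, 25), (25, 35)]:     # 50-150, 150-250, 250-350 ms
    window = data[:, :, start:stop].mean(axis=2)      # average within the window
    acc = cross_val_score(LogisticRegression(max_iter=1000), window, labels, cv=5).mean()
    print(f"{start * 10}-{stop * 10} ms: decoding accuracy = {acc:.2f}")
```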
ERIC Educational Resources Information Center
Hendrickson, Homer
1988-01-01
Spelling problems arise due to problems with form discrimination and inadequate visualization. A child's sequence of visual development involves learning motor control and coordination, with vision directing and monitoring the movements; learning visual comparison of size, shape, directionality, and solidity; developing visual memory or recall;…
Reliability, validity and sensitivity of a computerized visual analog scale measuring state anxiety.
Abend, Rany; Dan, Orrie; Maoz, Keren; Raz, Sivan; Bar-Haim, Yair
2014-12-01
Assessment of state anxiety is frequently required in clinical and research settings, but its measurement using standard multi-item inventories entails practical challenges. Such inventories are increasingly complemented by paper-and-pencil, single-item visual analog scales measuring state anxiety (VAS-A), which allow rapid assessment of current anxiety states. Computerized versions of the VAS-A offer additional advantages, including facilitated and accurate data collection and analysis, and applicability to computer-based protocols. Here, we establish the psychometric properties of a computerized VAS-A. Experiment 1 assessed the reliability, convergent validity, and discriminant validity of the computerized VAS-A in a non-selected sample. Experiment 2 assessed its sensitivity to increases in state anxiety following social stress induction, in participants with high levels of social anxiety. Experiment 1 demonstrated the computerized VAS-A's test-retest reliability (r = .44, p < .001); convergent validity with the State-Trait Anxiety Inventory's state subscale (STAI-State; r = .60, p < .001); and discriminant validity as indicated by significantly lower correlations between the VAS-A and different psychological measures relative to the correlation between the VAS-A and the STAI-State. Experiment 2 demonstrated the VAS-A's sensitivity to changes in state anxiety via a significant pre- to during-stressor rise in VAS-A scores (F(1,48) = 25.13, p < .001). Limitations include the set-order administration of measures, the absence of a clinically anxious population, and gender-unbalanced samples. The adequate psychometric characteristics, combined with simple and rapid administration, make the computerized VAS-A a valuable self-rating tool for state anxiety. It may prove particularly useful for clinical and research settings where multi-item inventories are less applicable, including computer-based treatment and assessment protocols. The VAS-A is freely available: http://people.socsci.tau.ac.il/mu/anxietytrauma/visual-analog-scale/. Copyright © 2014 Elsevier Ltd. All rights reserved.
McKean, Danielle L.; Tsao, Jack W.; Chan, Annie W.-Y.
2017-01-01
The Body Inversion Effect (BIE; reduced visual discrimination performance for inverted compared to upright bodies) suggests that bodies are visually processed configurally; however, the specific importance of head posture information in the BIE has been indicated by reports of BIE reduction for whole bodies with fixed head position and for headless bodies. Through measurement of gaze patterns and investigation of the causal relation of fixation location to visual body discrimination performance, the present study reveals joint contributions of feature and configuration processing to visual body discrimination. Participants predominantly gazed at the (body-centric) upper body for upright bodies and the lower body for inverted bodies in the context of an experimental paradigm directly comparable to that of prior studies of the BIE. Subsequent manipulation of fixation location indicates that these preferential gaze locations causally contributed to the BIE for whole bodies, largely due to the informative nature of gazing at or near the head. Also, a BIE was detected for both whole and headless bodies even when fixation location on the body was held constant, indicating a role of configural processing in body discrimination, though inclusion of the head posture information was still highly discriminative in the context of such processing. Interestingly, the impact of configuration (upright and inverted) on the BIE appears greater than that of differential preferred gaze locations. PMID:28085894
Comparative psychophysics of bumblebee and honeybee colour discrimination and object detection.
Dyer, Adrian G; Spaethe, Johannes; Prack, Sabina
2008-07-01
Bumblebee (Bombus terrestris) discrimination of targets with broadband reflectance spectra was tested using simultaneous viewing conditions, enabling an accurate determination of the perceptual limit of colour discrimination excluding confounds from memory coding (experiment 1). The level of colour discrimination in bumblebees, and honeybees (Apis mellifera) (based upon previous observations), exceeds predictions of models considering receptor noise in the honeybee. Bumblebee and honeybee photoreceptors are similar in spectral shape and spacing, but bumblebees exhibit significantly poorer colour discrimination in behavioural tests, suggesting possible differences in spatial or temporal signal processing. Detection of stimuli in a Y-maze was evaluated for bumblebees (experiment 2) and honeybees (experiment 3). Honeybees detected stimuli containing both green-receptor-contrast and colour contrast at a visual angle of approximately 5 degrees , whilst stimuli that contained only colour contrast were only detected at a visual angle of 15 degrees . Bumblebees were able to detect these stimuli at a visual angle of 2.3 degrees and 2.7 degrees , respectively. A comparison of the experiments suggests a tradeoff between colour discrimination and colour detection in these two species, limited by the need to pool colour signals to overcome receptor noise. We discuss the colour processing differences and possible adaptations to specific ecological habitats.
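For readers unfamiliar with how receptor noise enters such predictions, below is a sketch of a receptor-noise-limited colour distance for a trichromat, in the spirit of the Vorobyev-Osorio model often applied to bees; the quantum catches and noise values are illustrative assumptions, not measurements from this study.

```python
# Sketch of a receptor-noise-limited colour distance for a trichromat
# (in the spirit of the Vorobyev-Osorio model commonly applied to bees).
# Receptor noise values and quantum catches below are illustrative only.
import math

def rnl_distance(q_a, q_b, noise):
    """Colour distance (in JND-like units) between stimuli A and B.
    q_a, q_b: quantum catches in the (UV, blue, green) receptors; noise: per-receptor noise."""
    df = [math.log(a / b) for a, b in zip(q_a, q_b)]      # receptor contrasts
    e1, e2, e3 = noise
    num = ((e1 * (df[2] - df[1])) ** 2
           + (e2 * (df[2] - df[0])) ** 2
           + (e3 * (df[1] - df[0])) ** 2)
    den = (e1 * e2) ** 2 + (e1 * e3) ** 2 + (e2 * e3) ** 2
    return math.sqrt(num / den)

print(rnl_distance(q_a=(0.20, 0.55, 0.80), q_b=(0.22, 0.50, 0.85), noise=(0.13, 0.06, 0.12)))
```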
Support for Lateralization of the Whorf Effect beyond the Realm of Color Discrimination
ERIC Educational Resources Information Center
Gilbert, Aubrey L.; Regier, Terry; Kay, Paul; Ivry, Richard B.
2008-01-01
Recent work has shown that Whorf effects of language on color discrimination are stronger in the right visual field than in the left. Here we show that this phenomenon is not limited to color: The perception of animal figures (cats and dogs) was more strongly affected by linguistic categories for stimuli presented to the right visual field than…
ERIC Educational Resources Information Center
Patching, Geoffrey R.; Englund, Mats P.; Hellstrom, Ake
2012-01-01
Despite the importance of both response probability and response time for testing models of choice, there is a dearth of chronometric studies examining systematic asymmetries that occur over time- and space-orders in the method of paired comparisons. In this study, systematic asymmetries in discriminating the magnitude of paired visual stimuli are…
Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun
2016-01-01
Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer. PMID:26873777
Melanopsin-based brightness discrimination in mice and humans.
Brown, Timothy M; Tsujimura, Sei-Ichi; Allen, Annette E; Wynne, Jonathan; Bedford, Robert; Vickery, Graham; Vugler, Anthony; Lucas, Robert J
2012-06-19
Photoreception in the mammalian retina is not restricted to rods and cones but extends to a small number of intrinsically photoreceptive retinal ganglion cells (ipRGCs), expressing the photopigment melanopsin. ipRGCs are known to support various accessory visual functions including circadian photoentrainment and pupillary reflexes. However, despite anatomical and physiological evidence that they contribute to the thalamocortical visual projection, no aspect of visual discrimination has been shown to rely upon ipRGCs. Based on their currently known roles, we hypothesized that ipRGCs may contribute to distinguishing brightness. This percept is related to an object's luminance-a photometric measure of light intensity relevant for cone photoreceptors. However, the perceived brightness of different sources is not always predicted by their respective luminance. Here, we used parallel behavioral and electrophysiological experiments to first show that melanopsin contributes to brightness discrimination in both retinally degenerate and fully sighted mice. We continued to use comparable paradigms in psychophysical experiments to provide evidence for a similar role in healthy human subjects. These data represent the first direct evidence that an aspect of visual discrimination in normally sighted subjects can be supported by inner retinal photoreceptors. Copyright © 2012 Elsevier Ltd. All rights reserved.
Vitu, Françoise; Engbert, Ralf; Kliegl, Reinhold
2016-01-01
Saccades to single targets in peripheral vision are typically characterized by an undershoot bias. Putting this bias to a test, Kapoula [1] used a paradigm in which observers were presented with two different sets of target eccentricities that partially overlapped each other. Her data were suggestive of a saccadic range effect (SRE): There was a tendency for saccades to overshoot close targets and undershoot far targets in a block, suggesting that there was a response bias towards the center of eccentricities in a given block. Our Experiment 1 was a close replication of the original study by Kapoula [1]. In addition, we tested whether the SRE is sensitive to top-down requirements associated with the task, and we also varied the target presentation duration. In Experiments 1 and 2, we expected to replicate the SRE for a visual discrimination task. The simple visual saccade-targeting task in Experiment 3, entailing minimal top-down influence, was expected to elicit a weaker SRE. Voluntary saccades to remembered target locations in Experiment 3 were expected to elicit the strongest SRE. Contrary to these predictions, we did not observe a SRE in any of the tasks. Our findings complement the results reported by Gillen et al. [2] who failed to find the effect in a saccade-targeting task with a very brief target presentation. Together, these results suggest that unlike arm movements, saccadic eye movements are not biased towards making saccades of a constant, optimal amplitude for the task. PMID:27658191
Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.
2016-01-01
Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948
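The decoding idea can be illustrated with a small simulation: a mixed population of "congruent" and "opposite" model cells is generated, and a linear readout is fit to report heading while object motion varies as a nuisance variable. The tuning curves, noise level, and cell counts are illustrative assumptions, and plain least-squares regression stands in for the optimal linear decoding scheme derived in the paper.

```python
# Illustrative sketch: decode heading from a mixed population of "congruent"
# and "opposite" visual-vestibular model cells with a linear readout, while
# object motion varies as a nuisance variable. All tuning and noise are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_trials, n_cells = 2000, 60
pref = rng.uniform(-90, 90, n_cells)            # preferred headings (deg)
congruent = rng.random(n_cells) < 0.5           # half congruent, half opposite

heading = rng.uniform(-45, 45, n_trials)
object_motion = rng.uniform(-30, 30, n_trials)  # nuisance variable

def responses(heading, object_motion):
    # Vestibular drive depends on heading only; visual drive is corrupted by
    # object motion. Opposite cells have a sign-flipped visual preference.
    vestib = np.exp(-(heading[:, None] - pref) ** 2 / (2 * 30 ** 2))
    vis_pref = np.where(congruent, pref, -pref)
    visual = np.exp(-((heading + object_motion)[:, None] - vis_pref) ** 2 / (2 * 30 ** 2))
    return vestib + visual + 0.1 * rng.standard_normal((len(heading), n_cells))

R = responses(heading, object_motion)
decoder = LinearRegression().fit(R[:1500], heading[:1500])
err = np.abs(decoder.predict(R[1500:]) - heading[1500:]).mean()
print(f"mean absolute heading error with object motion present: {err:.1f} deg")
```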
Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.
Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil
2017-01-19
Visualizing data by dimensionality reduction is an important strategy in Bioinformatics, which could help to discover hidden data properties and detect data quality issues, e.g. data noise, inappropriately labeled data, etc. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods could not be directly applied to biobricks datasets. Hereby, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Besides, Laplacian Eigenmaps is 5 times faster than Isomap, and its visualization graph is more concentrated to discriminate biobricks. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks can be determined, which could help to assess the quality of crowdsourcing-based synthetic biology databases and to inform biobrick selection.
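A compact sketch of this pipeline, under stated assumptions, is given below: a normalized Levenshtein distance between toy sequences stands in for the distance between real biobricks, the precomputed distances are fed to Isomap and to Laplacian Eigenmaps (spectral embedding, with distances converted to affinities by a Gaussian kernel), and K-means quantifies the two-dimensional embedding. Neighbourhood size, kernel width, and the sequences themselves are illustrative choices.

```python
# Sketch of the visualization pipeline described above (toy sequences, not real
# biobricks): normalized edit distance -> Isomap / Laplacian Eigenmaps -> K-means.
import numpy as np
from sklearn.manifold import Isomap, SpectralEmbedding
from sklearn.cluster import KMeans

def norm_edit_distance(a, b):
    """Levenshtein distance divided by the length of the longer sequence."""
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1), dtype=int)
    d[:, 0], d[0, :] = np.arange(m + 1), np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return d[m, n] / max(m, n)

seqs = ["ATGCATGCAT", "ATGCATGGAT", "ATGCTTGCAT", "GGGTTTAAAC", "GGGTTTAAAG", "GGCTTTAAAC"]
dist = np.array([[norm_edit_distance(a, b) for b in seqs] for a in seqs])

iso = Isomap(n_components=2, n_neighbors=5, metric="precomputed").fit_transform(dist)
affinity = np.exp(-dist ** 2 / 0.1)            # Gaussian kernel turns distances into affinities
lap = SpectralEmbedding(n_components=2, affinity="precomputed").fit_transform(affinity)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lap)
print("Laplacian Eigenmaps coordinates:\n", np.round(lap, 3), "\ncluster labels:", labels)
```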
Akiyama, Yoshihiro B; Iseri, Erina; Kataoka, Tomoya; Tanaka, Makiko; Katsukoshi, Kiyonori; Moki, Hirotada; Naito, Ryoji; Hem, Ramrav; Okada, Tomonari
2017-02-15
In the present study, we determined the common morphological characteristics of the feces of Mytilus galloprovincialis to develop a method for visually discriminating the feces of this mussel in deposited materials. This method can be used to assess the effect of mussel feces on benthic environments. The accuracy of visual morphology-based discrimination of mussel feces in deposited materials was confirmed by DNA analysis. Eighty-nine percent of mussel feces shared five common morphological characteristics. Of the 372 animal species investigated, only four species shared all five of these characteristics. More than 96% of the samples were visually identified as M. galloprovincialis feces on the basis of morphology of the particles containing the appropriate mitochondrial DNA. These results suggest that mussel feces can be discriminated with high accuracy on the basis of their morphological characteristics. Thus, our method can be used to quantitatively assess the effect of mussel feces on local benthic environments. Copyright © 2016 Elsevier Ltd. All rights reserved.
Humans do not have direct access to retinal flow during walking
Souman, Jan L.; Freeman, Tom C.A.; Eikmeier, Verena; Ernst, Marc O.
2013-01-01
Perceived visual speed has been reported to be reduced during walking. This reduction has been attributed to a partial subtraction of walking speed from visual speed (Durgin & Gigone, 2007; Durgin, Gigone, & Scott, 2005). We tested whether observers still have access to the retinal flow before subtraction takes place. Observers performed a 2IFC visual speed discrimination task while walking on a treadmill. In one condition, walking speed was identical in the two intervals, while in a second condition walking speed differed between intervals. If observers have access to the retinal flow before subtraction, any changes in walking speed across intervals should not affect their ability to discriminate retinal flow speed. Contrary to this “direct-access hypothesis”, we found that observers were worse at discrimination when walking speed differed between intervals. The results therefore suggest that observers do not have access to retinal flow before subtraction. We also found that the amount of subtraction depended on the visual speed presented, suggesting that the interaction between the processing of visual input and of self-motion is more complex than previously proposed. PMID:20884509
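To illustrate how such speed-discrimination data are typically summarized, the sketch below fits a cumulative-Gaussian psychometric function to simulated two-interval responses and reads off a point of subjective equality (PSE) and threshold; a shift of the PSE between walking conditions would be the signature of subtraction. The speeds, trial counts, and simulated bias are illustrative assumptions, not data from the study.

```python
# Sketch (simulated data): estimate a PSE and discrimination threshold from
# 2IFC speed-comparison responses by fitting a cumulative Gaussian.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(3)
comparison = np.linspace(4, 12, 9)              # comparison speeds (deg/s); standard = 8
true_pse, true_sigma = 8.8, 1.2                 # simulated bias and internal noise
p_choose_comparison = norm.cdf(comparison, true_pse, true_sigma)
n_trials = 40
responses = rng.binomial(n_trials, p_choose_comparison) / n_trials

def psychometric(x, pse, sigma):
    return norm.cdf(x, pse, sigma)

(pse, sigma), _ = curve_fit(psychometric, comparison, responses, p0=[8, 1])
print(f"estimated PSE = {pse:.2f} deg/s, threshold (sigma) = {sigma:.2f} deg/s")
```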
Sneve, Markus H; Magnussen, Svein; Alnæs, Dag; Endestad, Tor; D'Esposito, Mark
2013-11-01
Visual STM of simple features is achieved through interactions between retinotopic visual cortex and a set of frontal and parietal regions. In the present fMRI study, we investigated effective connectivity between central nodes in this network during the different task epochs of a modified delayed orientation discrimination task. Our univariate analyses demonstrate that the inferior frontal junction (IFJ) is preferentially involved in memory encoding, whereas activity in the putative FEFs and anterior intraparietal sulcus (aIPS) remains elevated throughout periods of memory maintenance. We have earlier reported, using the same task, that areas in visual cortex sustain information about task-relevant stimulus properties during delay intervals [Sneve, M. H., Alnæs, D., Endestad, T., Greenlee, M. W., & Magnussen, S. Visual short-term memory: Activity supporting encoding and maintenance in retinotopic visual cortex. Neuroimage, 63, 166-178, 2012]. To elucidate the temporal dynamics of the IFJ-FEF-aIPS-visual cortex network during memory operations, we estimated Granger causality effects between these regions with fMRI data representing memory encoding/maintenance as well as during memory retrieval. We also investigated a set of control conditions involving active processing of stimuli not associated with a memory task and passive viewing. In line with the developing understanding of IFJ as a region critical for control processes with a possible initiating role in visual STM operations, we observed influence from IFJ to FEF and aIPS during memory encoding. Furthermore, FEF predicted activity in a set of higher-order visual areas during memory retrieval, a finding consistent with its suggested role in top-down biasing of sensory cortex.
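As a schematic of the connectivity analysis mentioned above, the sketch below runs a pairwise Granger-causality test on two simulated region time series (a putative IFJ-to-FEF influence) using statsmodels; the autoregressive model, lag order, and data are illustrative, and real fMRI applications involve haemodynamic and preprocessing issues not modelled here.

```python
# Schematic pairwise Granger-causality sketch on two simulated ROI time series
# (e.g., a putative IFJ -> FEF influence). Lag order and data are illustrative.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(4)
n = 300
x = np.zeros(n)   # putative "source" region (e.g., IFJ)
y = np.zeros(n)   # putative "target" region (e.g., FEF)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

# Test whether the second column (x) Granger-causes the first column (y)
res = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
p_value = res[1][0]["ssr_ftest"][1]
print(f"lag-1 SSR F-test p-value (x -> y): {p_value:.4f}")
```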
Can responses to basic non-numerical visual features explain neural numerosity responses?
Harvey, Ben M; Dumoulin, Serge O
2017-04-01
Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs. Copyright © 2017 Elsevier Inc. All rights reserved.
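The model-comparison logic can be sketched as follows: predict a recording site's response across numerosities with a log-Gaussian numerosity-tuning model and with a monotonic model driven by a co-varying non-numerical feature, then ask which prediction better matches the observed responses. The observed responses, tuning parameters, and feature below are simulated, illustrative assumptions rather than the paper's pRF fitting procedure.

```python
# Sketch of the model-comparison idea: numerosity-tuned (log-Gaussian) vs.
# monotonic feature-driven predictions of a site's responses to numerosities 1-7.
# Observed responses and parameters are simulated, not real data.
import numpy as np

numerosity = np.arange(1, 8)
feature = numerosity * 2.5                      # a feature that co-varies with numerosity

def log_gaussian_tuning(n, preferred, width):
    return np.exp(-(np.log(n) - np.log(preferred)) ** 2 / (2 * width ** 2))

rng = np.random.default_rng(5)
observed = log_gaussian_tuning(numerosity, preferred=3, width=0.6) + 0.05 * rng.standard_normal(7)

pred_numerosity = log_gaussian_tuning(numerosity, preferred=3, width=0.6)
pred_feature = feature / feature.max()          # monotonic feature-driven prediction

for name, pred in [("numerosity tuning", pred_numerosity), ("monotonic feature", pred_feature)]:
    r = np.corrcoef(observed, pred)[0, 1]
    print(f"{name}: variance explained R^2 = {r ** 2:.2f}")
```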
Smell or vision? The use of different sensory modalities in predator discrimination.
Fischer, Stefan; Oberhummer, Evelyne; Cunha-Saraiva, Filipa; Gerber, Nina; Taborsky, Barbara
2017-01-01
Theory predicts that animals should adjust their escape responses to the perceived predation risk. The information animals obtain about potential predation risk may differ qualitatively depending on the sensory modality by which a cue is perceived. For instance, olfactory cues may reveal better information about the presence or absence of threats, whereas visual information can reliably transmit the position and potential attack distance of a predator. While this suggests a differential use of information perceived through the two sensory channels, the relative importance of visual vs. olfactory cues when distinguishing between different predation threats is still poorly understood. Therefore, we exposed individuals of the cooperatively breeding cichlid Neolamprologus pulcher to a standardized threat stimulus combined with either predator or non-predator cues presented either visually or chemically. We predicted that flight responses towards a threat stimulus are more pronounced if cues of dangerous rather than harmless heterospecifics are presented and that N. pulcher, being an aquatic species, relies more on olfaction when discriminating between dangerous and harmless heterospecifics. N. pulcher responded faster to the threat stimulus, reached a refuge faster and were more likely to enter a refuge when predator cues were perceived. Unexpectedly, the sensory modality used to perceive the cues did not affect the escape response or the duration of the recovery phase. This suggests that N. pulcher are able to discriminate heterospecific cues with similar acuity when using vision or olfaction. We discuss that this ability may be advantageous in aquatic environments where visibility conditions vary strongly over time. The ability to rapidly discriminate between dangerous predators and harmless heterospecifics is crucial for the survival of prey animals. In seasonally fluctuating environments, sensory conditions may change over the year, which may make the use of multiple sensory modalities for heterospecific discrimination highly beneficial. Here we compared the efficacy of the visual and olfactory senses in the discrimination ability of the cooperatively breeding cichlid Neolamprologus pulcher. We presented individual fish with visual or olfactory cues of predators or harmless heterospecifics and recorded their flight response. When exposed to predator cues, individuals responded faster, reached a refuge faster and were more likely to enter the refuge. Unexpectedly, the olfactory and visual senses seemed to be equally efficient in this discrimination task, suggesting that the seasonal variation in water conditions experienced by N. pulcher may necessitate the use of multiple sensory channels for the same task.
Morgan, Erin E.; Woods, Steven Paul; Poquette, Amelia J.; Vigil, Ofilio; Heaton, Robert K.; Grant, Igor
2012-01-01
Objective Chronic use of methamphetamine (MA) has moderate effects on neurocognitive functions associated with frontal systems, including the executive aspects of verbal episodic memory. Extending this literature, the current study examined the effects of MA on visual episodic memory with the hypothesis that a profile of deficient strategic encoding and retrieval processes would be revealed for visuospatial information (i.e., simple geometric designs), including possible differential effects on source versus item recall. Method The sample comprised 114 MA-dependent (MA+) and 110 demographically-matched MA-nondependent comparison participants (MA−) who completed the Brief Visuospatial Memory Test – Revised (BVMT-R), which was scored for standard learning and memory indices, as well as novel item (i.e., figure) and source (i.e., location) memory indices. Results Results revealed a profile of impaired immediate and delayed free recall (p < .05) in the context of preserved learning slope, retention, and recognition discriminability in the MA+ group. The MA+ group also performed more poorly than MA− participants on Item visual memory (p < .05) but not Source visual memory (p > .05), and no group by task-type interaction was observed (p > .05). Item visual memory demonstrated significant associations with executive dysfunction, deficits in working memory, and shorter length of abstinence from MA use (p < 0.05). Conclusions These visual memory findings are commensurate with studies reporting deficient strategic verbal encoding and retrieval in MA users that are posited to reflect the vulnerability of frontostriatal circuits to the neurotoxic effects of MA. Potential clinical implications of these visual memory deficits are discussed. PMID:22311530
Semantic congruency and the (reversed) Colavita effect in children and adults.
Wille, Claudia; Ebersbach, Mirjam
2016-01-01
When presented with auditory, visual, or bimodal audiovisual stimuli in a discrimination task, adults tend to ignore the auditory component in bimodal stimuli and respond to the visual component only (i.e., Colavita visual dominance effect). The same is true for older children, whereas young children are dominated by the auditory component of bimodal audiovisual stimuli. This suggests a change of sensory dominance during childhood. The aim of the current study was to investigate, in three experimental conditions, whether children and adults show sensory dominance when presented with complex semantic stimuli and whether this dominance can be modulated by stimulus characteristics such as semantic (in)congruency, frequency of bimodal trials, and color information. Semantic (in)congruency did not affect the magnitude of the auditory dominance effect in 6-year-olds or the visual dominance effect in adults, but it was a modulating factor of the visual dominance in 9-year-olds (Conditions 1 and 2). Furthermore, the absence of color information (Condition 3) did not affect auditory dominance in 6-year-olds and hardly affected visual dominance in adults, whereas the visual dominance in 9-year-olds disappeared. Our results suggest that (a) sensory dominance in children and adults is not restricted to simple lights and sounds, as used in previous research, but can be extended to semantically meaningful stimuli and that (b) sensory dominance is more robust in 6-year-olds and adults than in 9-year-olds, implying a transitional stage around this age. Copyright © 2015 Elsevier Inc. All rights reserved.
A neurocomputational model of figure-ground discrimination and target tracking.
Sun, H; Liu, L; Guo, A
1999-01-01
A neurocomputational model is presented for figure-ground discrimination and target tracking. In the model, elementary motion detectors of the correlation type, computational modules for saccadic and smooth pursuit eye movements, an oscillatory neural-network motion perception module, and a selective attention module are involved. It is shown that, through oscillatory amplitude and frequency encoding and selective synchronization of phase oscillators, the figure and the ground can be successfully discriminated from each other. The receptive fields developed by hidden units of the networks were surprisingly similar to the actual receptive fields and columnar organization found in the primate visual cortex. It is suggested that equivalent mechanisms may exist in the primate visual cortex to discriminate figure from ground in both the temporal and spatial domains.
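For readers unfamiliar with correlation-type elementary motion detectors, the sketch below shows a generic Hassenstein-Reichardt correlator in Python. It is a minimal textbook version, not the specific module used in this model; the signals, delay, and function name are illustrative assumptions.

```python
import numpy as np

def reichardt_emd(a, b, delay=1):
    """Minimal Hassenstein-Reichardt elementary motion detector (generic sketch).

    a, b  : 1-D arrays of luminance signals from two neighbouring photoreceptors.
    delay : temporal delay in samples applied to each arm before correlation.

    Returns the opponent output: positive values indicate motion in the
    a -> b direction, negative values the reverse.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    a_del = np.roll(a, delay)          # delayed copy of receptor a
    b_del = np.roll(b, delay)          # delayed copy of receptor b
    a_del[:delay] = 0.0                # remove wrap-around samples
    b_del[:delay] = 0.0
    return a_del * b - b_del * a       # subtraction of the two mirror subunits

# Example: a grating drifting from a to b yields a net positive response.
t = np.arange(200)
a_sig = np.sin(2 * np.pi * 0.05 * t)
b_sig = np.sin(2 * np.pi * 0.05 * (t - 2))   # same signal arriving 2 samples later
print(reichardt_emd(a_sig, b_sig, delay=2).mean() > 0)   # True
```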
Evaluation of a pilot workload metric for simulated VTOL landing tasks
NASA Technical Reports Server (NTRS)
North, R. A.; Graffunder, K.
1979-01-01
A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Multivariate discriminant functions were formed from conventional flight performance and/or visual response variables to maximize detection of experimental differences. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed vs. motion base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition/trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus, represented higher workload levels.
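As an illustration of how such a multivariate discriminant function can be formed, here is a minimal sketch using scikit-learn's linear discriminant analysis on hypothetical flight performance and visual response variables. The feature set, condition labels, and data are assumptions for illustration, not the study's actual variables or method.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Hypothetical trial-level predictors, e.g. glide-slope error, lateral
# deviation, dwell time, and fixation count (one row per trial).
n_trials = 120
X = rng.normal(size=(n_trials, 4))
# Hypothetical condition labels, e.g. 0 = low crosswind, 1 = high crosswind.
y = rng.integers(0, 2, size=n_trials)

# Fit a linear discriminant function that maximally separates the conditions.
lda = LinearDiscriminantAnalysis()
scores = lda.fit(X, y).transform(X)   # one discriminant score per trial
print(lda.coef_)                      # weight of each variable in the function
print(scores[:5].ravel())             # projected values, analogous to a workload scale
```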
Visual perceptual load induces inattentional deafness.
Macdonald, James S P; Lavie, Nilli
2011-08-01
In this article, we establish a new phenomenon of "inattentional deafness" and highlight the level of load on visual attention as a critical determinant of this phenomenon. In three experiments, we modified an inattentional blindness paradigm to assess inattentional deafness. Participants made either a low- or high-load visual discrimination concerning a cross shape (respectively, a discrimination of line color or of line length with a subtle length difference). A brief pure tone was presented simultaneously with the visual task display on a final trial. Failures to notice the presence of this tone (i.e., inattentional deafness) reached a rate of 79% in the high-visual-load condition, significantly more than in the low-load condition. These findings establish the phenomenon of inattentional deafness under visual load, thereby extending the load theory of attention (e.g., Lavie, Journal of Experimental Psychology. Human Perception and Performance, 25, 596-616, 1995) to address the cross-modal effects of visual perceptual load.
Characteristic and intermingled neocortical circuits encode different visual object discriminations.
Zhang, Guo-Rong; Zhao, Hua; Cook, Nathan; Svestka, Michael; Choi, Eui M; Jan, Mary; Cook, Robert G; Geller, Alfred I
2017-07-28
Synaptic plasticity and neural network theories hypothesize that the essential information for advanced cognitive tasks is encoded in specific circuits and neurons within distributed neocortical networks. However, these circuits are incompletely characterized, and we do not know if a specific discrimination is encoded in characteristic circuits among multiple animals. Here, we determined the spatial distribution of active neurons for a circuit that encodes some of the essential information for a cognitive task. We genetically activated protein kinase C pathways in several hundred spatially-grouped glutamatergic and GABAergic neurons in rat postrhinal cortex, a multimodal associative area that is part of a distributed circuit that encodes visual object discriminations. We previously established that this intervention enhances accuracy for specific discriminations. Moreover, the genetically-modified, local circuit in POR cortex encodes some of the essential information, and this local circuit is preferentially activated during performance, as shown by activity-dependent gene imaging. Here, we mapped the positions of the active neurons, which revealed that two image sets are encoded in characteristic and different circuits. While characteristic circuits are known to process sensory information, in sensory areas, this is the first demonstration that characteristic circuits encode specific discriminations, in a multimodal associative area. Further, the circuits encoding the two image sets are intermingled, and likely overlapping, enabling efficient encoding. Consistent with reconsolidation theories, intermingled and overlapping encoding could facilitate formation of associations between related discriminations, including visually similar discriminations or discriminations learned at the same time or place. Copyright © 2017 Elsevier B.V. All rights reserved.
The visual discrimination of bending.
Norman, J Farley; Wiesemann, Elizabeth Y; Norman, Hideko F; Taylor, M Jett; Craft, Warren D
2007-01-01
The sensitivity of observers to nonrigid bending was evaluated in two experiments. In both experiments, observers were required to discriminate on any given trial which of two bending rods was more elastic. In experiment 1, both rods bent within the same oriented plane, which was either a frontoparallel plane or a plane oriented in depth. In experiment 2, the two rods within any given trial bent in different, randomly chosen orientations in depth. The results of both experiments revealed that human observers are sensitive to, and can reliably detect, relatively small differences in bending (the average Weber fraction across experiments 1 and 2 was 9.0%). The performance of the human observers was compared to that of models that based their elasticity judgments upon either static projected curvature or mean and maximal projected speed. Although all of the observers reported compelling 3-D perceptions of bending in depth, their judgments were both qualitatively and quantitatively consistent with the performance of the models. This similarity suggests that relatively straightforward information about the elasticity of simple bending objects is available in projected retinal images.
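For readers less familiar with the measure, a Weber fraction expresses the just-discriminable change as a proportion of the reference value. The worked example below simply restates the reported 9.0% figure; the symbols are illustrative, not the authors' notation.

```latex
% Weber fraction for elasticity discrimination (illustrative numbers only):
\[
W \;=\; \frac{\Delta E}{E} \;=\; 0.09
\qquad\Longrightarrow\qquad
E_{\text{just discriminable}} \;\approx\; 1.09\,E_{\text{reference}} .
\]
% A comparison rod must therefore differ in elasticity by roughly 9% of the
% reference value before observers reliably judge it as the more elastic one.
```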
ERIC Educational Resources Information Center
Lewkowicz, David J.
2003-01-01
Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…
LaRoche, Ronee B; Morgan, Russell E
2007-01-01
Over the past two decades the use of selective serotonin reuptake inhibitors (SSRIs) to treat behavioral disorders in children has grown rapidly, despite little evidence regarding the safety and efficacy of these drugs for use in children. Utilizing a rat model, this study investigated whether post-weaning exposure to a prototype SSRI, fluoxetine (FLX), influenced performance on visual tasks designed to measure discrimination learning, sustained attention, inhibitory control, and reaction time. Additionally, sex differences in response to varying doses of fluoxetine were examined. In Experiment 1, female rats were administered (P.O.) fluoxetine (10 mg/kg) or vehicle (apple juice) from PND 25 through PND 49. After a 14-day washout period, subjects were trained to perform a simultaneous visual discrimination task. Subjects were then tested for 20 sessions on a visual attention task that consisted of varied stimulus delays (0, 3, 6, or 9 s) and cue durations (200, 400, or 700 ms). In Experiment 2, both male and female Long-Evans rats (24 F, 24 M) were administered fluoxetine (0, 5, 10, or 15 mg/kg) and then tested in the same visual tasks used in Experiment 1, with the addition of open-field and elevated plus-maze testing. Few FLX-related differences were seen in the visual discrimination, open field, or plus-maze tasks. However, results from the visual attention task indicated a dose-dependent reduction in the performance of fluoxetine-treated males, whereas fluoxetine-treated females tended to improve over baseline. These findings indicate that enduring, behaviorally-relevant alterations of the CNS can occur following pharmacological manipulation of the serotonin system during postnatal development.
Enhancement of equivalence class formation by pretraining discriminative functions.
Nartey, Richard K; Arntzen, Erik; Fields, Lanny
2015-03-01
The present experiment showed that a simple discriminative function acquired by an abstract stimulus through simultaneous and/or successive discrimination training enhanced the formation of an equivalence class of which that stimulus was a member. College students attempted to form three equivalence classes composed of three nodes and five members (A→B→C→D→E), using the simultaneous protocol. In the PIC group, the C stimuli were pictures and the A, B, D, and E stimuli were abstract shapes. In the ABS group, all of the stimuli were abstract shapes. In the SIM + SUCC (simultaneous and successive) group, simple discriminations were formed with the C stimuli through both simultaneous and successive discrimination training before class formation. Finally, in the SIM-only and SUCC-only groups, prior to class formation, simple discriminations were established for the C stimuli with a simultaneous procedure and a successive procedure, respectively. Equivalence classes were formed by 80% and 70% of the participants in the PIC and SIM + SUCC groups respectively, by 30% in the SUCC-only group, and by 10% apiece in the ABS and SIM-only groups. Thus, pretraining of combined simultaneous and successive discriminations enhanced class formation, as did the inclusion of a meaningful stimulus in a class. The isolated effect of forming successive discriminations was more influential than that of forming simultaneous discriminations. The establishment of both discriminations together produced an enhancement greater than the sum of the two procedures alone. Finally, a sorting test documented the maintenance of the classes formed during the simultaneous protocol. These results also provide a stimulus control-function account of the class-enhancing effects of meaningful stimuli.
Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.
de Jong, Ritske; Toffanin, Paolo; Harbers, Marten
2010-01-01
Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations, where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: an attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
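As background, a steady-state response at a tagging frequency is typically read out from the amplitude spectrum of an EEG epoch at that frequency. The sketch below shows one common way to do this; the sampling rate, tag frequency, and data are hypothetical, and the study's exact analysis pipeline may differ.

```python
import numpy as np

def ssr_amplitude(epoch, fs, tag_hz):
    """Single-sided amplitude of the steady-state response at the tag frequency.

    epoch  : 1-D array, one EEG epoch from a single electrode
    fs     : sampling rate in Hz
    tag_hz : stimulation (tag) frequency in Hz
    """
    epoch = np.asarray(epoch, float)
    spec = np.fft.rfft(epoch)
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - tag_hz))      # spectral bin closest to the tag
    return 2.0 * np.abs(spec[idx]) / epoch.size  # amplitude in the epoch's units

# Hypothetical 4-s epoch containing a 6 Hz visual tag buried in noise.
fs = 500
t = np.arange(4 * fs) / fs
epoch = 1.5 * np.sin(2 * np.pi * 6 * t) + np.random.randn(t.size)
print(ssr_amplitude(epoch, fs, 6))   # close to the true amplitude of 1.5
```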
Scully, Erin N; Acerbo, Martin J; Lazareva, Olga F
2014-01-01
Earlier, we reported that nucleus rotundus (Rt) together with its inhibitory complex, nucleus subpretectalis/interstitio-pretecto-subpretectalis (SP/IPS), had significantly higher activity in pigeons performing figure-ground discrimination than in the control group that did not perform any visual discriminations. In contrast, color discrimination produced significantly higher activity than control in the Rt but not in the SP/IPS. Finally, shape discrimination produced significantly lower activity than control in both the Rt and the SP/IPS. In this study, we trained pigeons to simultaneously perform three visual discriminations (figure-ground, color, and shape) using the same stimulus displays. When birds learned to perform all three tasks concurrently at high levels of accuracy, we conducted bilateral chemical lesions of the SP/IPS. After a period of recovery, the birds were retrained on the same tasks to evaluate the effect of lesions on maintenance of these discriminations. We found that the lesions of the SP/IPS had no effect on color or shape discrimination and that they significantly impaired figure-ground discrimination. Together with our earlier data, these results suggest that the nucleus Rt and the SP/IPS are the key structures involved in figure-ground discrimination. These results also imply that thalamic processing is critical for figure-ground segregation in avian brain.
Metabolic Pathways Visualization Skills Development by Undergraduate Students
ERIC Educational Resources Information Center
dos Santos, Vanessa J. S. V.; Galembeck, Eduardo
2015-01-01
We have developed a metabolic pathways visualization skill test (MPVST) to gain greater insight into our students' abilities to comprehend the visual information presented in metabolic pathways diagrams. The test is able to discriminate students' visualization ability with respect to six specific visualization skills that we identified as key to…
Comparison of Automated and Human Instruction for Developmentally Retarded Preschool Children.
ERIC Educational Resources Information Center
Richmond, Glenn
1983-01-01
Twenty developmentally retarded preschool children were trained on two visual discriminations with automated instruction and two discriminations with human instruction. Results showed human instruction significantly better than automated instruction. Nine Ss reached criterion for both discriminations with automated instruction, therefore showing…
A horse's eye view: size and shape discrimination compared with other mammals.
Tomonaga, Masaki; Kumazaki, Kiyonori; Camus, Florine; Nicod, Sophie; Pereira, Carlos; Matsuzawa, Tetsuro
2015-11-01
Mammals have adapted to a variety of natural environments from underwater to aerial and these different adaptations have affected their specific perceptive and cognitive abilities. This study used a computer-controlled touchscreen system to examine the visual discrimination abilities of horses, particularly regarding size and shape, and compared the results with those from chimpanzee, human and dolphin studies. Horses were able to discriminate a difference of 14% in circle size but showed worse discrimination thresholds than chimpanzees and humans; these differences cannot be explained by visual acuity. Furthermore, the present findings indicate that all species use length cues rather than area cues to discriminate size. In terms of shape discrimination, horses exhibited perceptual similarities among shapes with curvatures, vertical/horizontal lines and diagonal lines, and the relative contributions of each feature to perceptual similarity in horses differed from those for chimpanzees, humans and dolphins. Horses pay more attention to local components than to global shapes. © 2015 The Author(s).
Fast transfer of crossmodal time interval training.
Chen, Lihan; Zhou, Xiaolin
2014-06-01
Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.
Graeber, R C; Schroeder, D M; Jane, J A; Ebbesson, S O
1978-07-15
An instrumental conditioning task was used to examine the role of the nurse shark telencephalon in black-white (BW) and horizontal-vertical stripes (HV) discrimination performance. In the first experiment, subjects initially received either bilateral anterior telencephalic control lesions or bilateral posterior telencephalic lesions aimed at destroying the central telencephalic nuclei (CN), which are known to receive direct input from the thalamic visual area. Postoperatively, the sharks were trained first on BW and then on HV. Those with anterior lesions learned both tasks as rapidly as unoperated subjects. Those with posterior lesions exhibited visual discrimination deficits related to the amount of damage to the CN and its connecting pathways. Severe damage resulted in an inability to learn either task but caused no impairments in motivation or general learning ability. In the second experiment, the sharks were first trained on BW and HV and then operated. Suction ablations were used to remove various portions of the CN. Sharks with 10% or less damage to the CN retained the preoperatively acquired discriminations almost perfectly. Those with 11-50% damage had to be retrained on both tasks. Almost total removal of the CN produced behavioral indications of blindness along with an inability to perform above the chance level on BW despite excellent retention of both discriminations over a 28-day period before surgery. It appears, however, that such sharks can still detect light. These results implicate the central telencephalic nuclei in the control of visually guided behavior in sharks.
Mental workload while driving: effects on visual search, discrimination, and decision making.
Recarte, Miguel A; Nunes, Luis M
2003-06-01
The effects of mental workload on visual search and decision making were studied in real traffic conditions with 12 participants who drove an instrumented car. Mental workload was manipulated by having participants perform several mental tasks while driving. A simultaneous visual-detection and discrimination test was used as the performance criterion. Mental tasks produced spatial gaze concentration and visual-detection impairment, although no tunnel vision occurred. According to ocular behavior analysis, this impairment was due to late detection and poor identification more than to response selection. Verbal acquisition tasks were innocuous compared with production tasks, and complex conversations, whether by phone or with a passenger, are dangerous for road safety.
How category learning affects object representations: Not all morphspaces stretch alike
Folstein, Jonathan R.; Gauthier, Isabel; Palmeri, Thomas J.
2012-01-01
How does learning to categorize objects affect how we visually perceive them? Behavioral, neurophysiological, and neuroimaging studies have tested the degree to which category learning influences object representations, with conflicting results. Some studies find that objects become more visually discriminable along dimensions relevant to previously learned categories, while others find no such effect. One critical factor we explore here lies in the structure of the morphspaces used in different studies. Studies finding no increase in discriminability often use “blended” morphspaces, with morphparents lying at corners of the space. By contrast, studies finding increases in discriminability use “factorial” morphspaces, defined by separate morphlines forming axes of the space. Using the same four morphparents, we created both factorial and blended morphspaces matched in pairwise discriminability. Category learning caused a selective increase in discriminability along the relevant dimension of the factorial space, but not in the blended space, and led to the creation of functional dimensions in the factorial space, but not in the blended space. These findings demonstrate that not all morphspaces stretch alike: Only some morphspaces support enhanced discriminability to relevant object dimensions following category learning. Our results have important implications for interpreting neuroimaging studies reporting little or no effect of category learning on object representations in the visual system: Those studies may have been limited by their use of blended morphspaces. PMID:22746950
Optimal visuotactile integration for velocity discrimination of self-hand movements
Chancel, M.; Blanchard, C.; Guerraz, M.; Montagnini, A.
2016-01-01
Illusory hand movements can be elicited by a textured disk or a visual pattern rotating under one's hand, while proprioceptive inputs convey immobility information (Blanchard C, Roll R, Roll JP, Kavounoudias A. PLoS One 8: e62475, 2013). Here, we investigated whether visuotactile integration can optimize velocity discrimination of illusory hand movements in line with Bayesian predictions. We induced illusory movements in 15 volunteers by visual and/or tactile stimulation delivered at six angular velocities. Participants had to compare hand illusion velocities with a 5°/s hand reference movement in an alternative forced choice paradigm. Results showed that the discrimination threshold decreased in the visuotactile condition compared with unimodal (visual or tactile) conditions, reflecting better bimodal discrimination. The perceptual strength (gain) of the illusions also increased: the stimulation required to give rise to a 5°/s illusory movement was slower in the visuotactile condition compared with each of the two unimodal conditions. The maximum likelihood estimation model satisfactorily predicted the improved discrimination threshold but not the increase in gain. When we added a zero-centered prior, reflecting immobility information, the Bayesian model did actually predict the gain increase but systematically overestimated it. Interestingly, the predicted gains better fit the visuotactile performances when a proprioceptive noise was generated by covibrating antagonist wrist muscles. These findings show that kinesthetic information of visual and tactile origins is optimally integrated to improve velocity discrimination of self-hand movements. However, a Bayesian model alone could not fully describe the illusory phenomenon pointing to the crucial importance of the omnipresent muscle proprioceptive cues with respect to other sensory cues for kinesthesia. PMID:27385802
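For context, the maximum likelihood estimation model referred to here is, in its standard form, a reliability-weighted cue-combination scheme; the equations below give that standard form. The symbols are generic, with V and T indexing the visual and tactile estimates, and are not necessarily the paper's exact notation.

```latex
% Standard maximum-likelihood (reliability-weighted) cue combination:
\[
\hat{v}_{VT} \;=\; w_V\,\hat{v}_V + w_T\,\hat{v}_T,
\qquad
w_V = \frac{\sigma_T^{2}}{\sigma_V^{2}+\sigma_T^{2}},\qquad
w_T = \frac{\sigma_V^{2}}{\sigma_V^{2}+\sigma_T^{2}},
\]
\[
\sigma_{VT}^{2} \;=\; \frac{\sigma_V^{2}\,\sigma_T^{2}}{\sigma_V^{2}+\sigma_T^{2}}
\;\le\; \min\bigl(\sigma_V^{2},\,\sigma_T^{2}\bigr).
\]
% Because discrimination thresholds scale with sigma, the bimodal visuotactile
% threshold is predicted to be lower than either unimodal threshold, which is
% the improvement reported in the study.
```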
Giovenzana, Valentina; Civelli, Raffaele; Beghi, Roberto; Oberti, Roberto; Guidetti, Riccardo
2015-11-01
The aim of this work was to test a simplified optical prototype for a rapid estimation of the ripening parameters of white grape for Franciacorta wine directly in the field. Spectral acquisition based on reflectance at four wavelengths (630, 690, 750 and 850 nm) was proposed. The integration of a simple processing algorithm in the microcontroller software would allow real-time values of spectral reflectance to be visualized. Non-destructive analyses were carried out on 95 grape bunches for a total of 475 berries. Samplings were performed weekly during the last ripening stages. Optical measurements were carried out both using the simplified system and a portable commercial vis/NIR spectrophotometer, as reference instrument for performance comparison. Chemometric analyses were performed in order to extract the maximum useful information from the optical data. Principal component analysis (PCA) was performed for a preliminary evaluation of the data. Correlations between the optical data matrix and the ripening parameters (total soluble solids content, SSC; titratable acidity, TA) were assessed using partial least squares (PLS) regression for the spectra and multiple linear regression (MLR) for data from the simplified device. Classification analyses were also performed with the aim of discriminating ripe from unripe samples. PCA, MLR and classification analyses show the effectiveness of the simplified system in separating samples among different sampling dates and in discriminating ripe from unripe samples. Finally, simple equations for SSC and TA prediction were calculated. Copyright © 2015 Elsevier B.V. All rights reserved.
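To make the simplified-device analysis concrete, the sketch below fits a multiple linear regression of SSC on reflectance at the four wavelengths. The data, coefficients, and noise level are entirely hypothetical and stand in for the prototype's real calibration set.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical reflectance readings of grape berries at the four prototype
# wavelengths (630, 690, 750, 850 nm), one row per berry.
R = rng.uniform(0.05, 0.60, size=(60, 4))
# Hypothetical reference measurements of total soluble solids content (°Brix).
ssc = 14 + 18 * R[:, 3] - 10 * R[:, 1] + rng.normal(0, 0.4, size=60)

# Multiple linear regression of SSC on the four reflectance bands, mirroring
# the kind of calibration used for the simplified four-wavelength device.
mlr = LinearRegression().fit(R, ssc)
print(mlr.coef_, mlr.intercept_)
print(mlr.predict(R[:3]))   # predicted SSC for the first three berries
```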
Takemoto, Atsushi; Miwa, Miki; Koba, Reiko; Yamaguchi, Chieko; Suzuki, Hiromi; Nakamura, Katsuki
2015-04-01
Detailed information about the characteristics of learning behavior in marmosets is useful for future marmoset research. We trained 42 marmosets in visual discrimination and reversal learning. All marmosets could learn visual discrimination, and all but one could complete reversal learning, though some marmosets failed to touch the visual stimuli and were screened out. In 87% of measurements, the final percentage of correct responses was over 95%. We quantified performance with two measures: onset trial and dynamic interval. Onset trial represents the number of trials that elapsed before the marmoset started to learn. Dynamic interval represents the number of trials from the start until the final percentage of correct responses was reached. Both measures decreased drastically as a result of the formation of discrimination learning sets. In reversal learning, both measures worsened, but the effect on onset trial was far greater. The effects of age and sex were not significant, at least among the adolescent and young adult marmosets used here. Unexpectedly, the experimental setting (colony or isolator) had only a subtle effect on performance. However, we found that marmosets from different families exhibited different learning process characteristics, suggesting some family effect on learning. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Montare, Alberto
2013-06-01
The three classical Donders' reaction time (RT) tasks (simple, choice, and discriminative RTs) were employed to compare reaction time scores from college students obtained by use of Montare's simplest chronoscope (meterstick) methodology to scores obtained by use of a digital-readout multi-choice reaction timer (machine). Five hypotheses were tested. Simple RT, choice RT, and discriminative RT were faster when obtained by meterstick than by machine. The meterstick method showed higher reliability than the machine method and was less variable. The meterstick method of the simplest chronoscope may help to alleviate the longstanding problems of low reliability and high variability of reaction time performances, while at the same time producing faster performance on Donders' simple, choice, and discriminative RT tasks than the machine method.
Surround-Masking Affects Visual Estimation Ability
Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.
2017-01-01
Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contributes toward number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround masking results may help understanding of the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845
USDA-ARS?s Scientific Manuscript database
Two simple fingerprinting methods, flow-injection UV spectroscopy (FIUV) and 1H nuclear magnetic resonance (NMR), for discrimination of Aurantii Fructus Immaturus and Fructus Poniciri Trifoliatae Immaturus were described. Both methods were combined with partial least-squares discriminant analysis...
Neurons forming optic glomeruli compute figure-ground discriminations in Drosophila.
Aptekar, Jacob W; Keleş, Mehmet F; Lu, Patrick M; Zolotova, Nadezhda M; Frye, Mark A
2015-05-13
Many animals rely on visual figure-ground discrimination to aid in navigation, and to draw attention to salient features like conspecifics or predators. Even figures that are similar in pattern and luminance to the visual surroundings can be distinguished by the optical disparity generated by their relative motion against the ground, and yet the neural mechanisms underlying these visual discriminations are not well understood. We show in flies that a diverse array of figure-ground stimuli containing a motion-defined edge elicit statistically similar behavioral responses to one another, and statistically distinct behavioral responses from ground motion alone. From studies in larger flies and other insect species, we hypothesized that the circuitry of the lobula--one of the four, primary neuropiles of the fly optic lobe--performs this visual discrimination. Using calcium imaging of input dendrites, we then show that information encoded in cells projecting from the lobula to discrete optic glomeruli in the central brain group these sets of figure-ground stimuli in a homologous manner to the behavior; "figure-like" stimuli are coded similar to one another and "ground-like" stimuli are encoded differently. One cell class responds to the leading edge of a figure and is suppressed by ground motion. Two other classes cluster any figure-like stimuli, including a figure moving opposite the ground, distinctly from ground alone. This evidence demonstrates that lobula outputs provide a diverse basis set encoding visual features necessary for figure detection. Copyright © 2015 the authors 0270-6474/15/357587-13$15.00/0.
ERIC Educational Resources Information Center
Friar, John T.
Two factors of predicted learning disorders were investigated: (1) inability to maintain appropriate classroom behavior (BEH), (2) perceptual discrimination deficit (PERC). Three groups of first-graders (BEH, PERC, normal control) were administered measures of impulse control, distractibility, auditory discrimination, and visual discrimination.…
Prestimulus alpha-band power biases visual discrimination confidence, but not accuracy.
Samaha, Jason; Iemi, Luca; Postle, Bradley R
2017-09-01
The magnitude of power in the alpha-band (8-13Hz) of the electroencephalogram (EEG) prior to the onset of a near threshold visual stimulus predicts performance. Together with other findings, this has been interpreted as evidence that alpha-band dynamics reflect cortical excitability. We reasoned, however, that non-specific changes in excitability would be expected to influence signal and noise in the same way, leaving actual discriminability unchanged. Indeed, using a two-choice orientation discrimination task, we found that discrimination accuracy was unaffected by fluctuations in prestimulus alpha power. Decision confidence, on the other hand, was strongly negatively correlated with prestimulus alpha power. This finding constitutes a clear dissociation between objective and subjective measures of visual perception as a function of prestimulus cortical excitability. This dissociation is predicted by a model where the balance of evidence supporting each choice drives objective performance but only the magnitude of evidence supporting the selected choice drives subjective reports, suggesting that human perceptual confidence can be suboptimal with respect to tracking objective accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
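For context, prestimulus alpha power of this kind is commonly computed as the 8-13 Hz power of the EEG in a window preceding stimulus onset. The sketch below shows one standard way to do so with Welch's method; the parameters and data are hypothetical and need not match the study's exact pipeline.

```python
import numpy as np
from scipy.signal import welch

def prestim_alpha_power(epoch, fs, band=(8.0, 13.0)):
    """Mean power in the alpha band for one prestimulus EEG epoch.

    epoch : 1-D array covering the prestimulus window of a single trial
    fs    : sampling rate in Hz
    band  : frequency band of interest in Hz (default 8-13 Hz, alpha)
    """
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 256))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Hypothetical single trial: 1-s prestimulus window with a 10 Hz rhythm plus noise.
fs = 250
t = np.arange(fs) / fs
trial = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(fs)
print(prestim_alpha_power(trial, fs))
```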
Pérez-Garín, Daniel; Recio, Patricia; Magallares, Alejandro; Molero, Fernando; García-Ael, Cristina
2018-05-15
The purpose of this study is to assess the discourse of people with disabilities regarding their perception of discrimination and stigma. Semi-structured interviews were conducted with ten adults with physical disabilities, ten with hearing impairments and seven with visual impairments. The agreement between the coders showed an excellent reliability for all three groups, with kappa coefficients between .82 and .96. Differences were assessed between the three groups regarding the types of discrimination they experienced and their most frequent emotional responses. People with physical disabilities mainly reported being stared at, undervalued, and subtly discriminated at work, whereas people with hearing impairments mainly reported encountering barriers in leisure activities, and people with visual impairments spoke of a lack of equal opportunities, mockery and/or bullying, and overprotection. Regarding their emotional reactions, people with physical disabilities mainly reported feeling anxious and depressed, whereas people with hearing impairments reported feeling helpless, and people with visual impairments reported feeling anger and self-pity. Findings are relevant to guide future research and interventions on the stigma of disability.
Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.
Bigelow, James; Poremba, Amy
2014-01-01
Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required, however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.
Hecker, Elizabeth A.; Serences, John T.; Srinivasan, Ramesh
2013-01-01
Interacting with the environment requires the ability to flexibly direct attention to relevant features. We examined the degree to which individuals attend to visual features within and across Detection, Fine Discrimination, and Coarse Discrimination tasks. Electroencephalographic (EEG) responses were measured to an unattended peripheral flickering (4 or 6 Hz) grating while individuals (n = 33) attended to orientations that were offset by 0°, 10°, 20°, 30°, 40°, and 90° from the orientation of the unattended flicker. These unattended responses may be sensitive to attentional gain at the attended spatial location, since attention to features enhances early visual responses throughout the visual field. We found no significant differences in tuning curves across the three tasks in part due to individual differences in strategies. We sought to characterize individual attention strategies using hierarchical Bayesian modeling, which grouped individuals into families of curves that reflect attention to the physical target orientation (“on-channel”) or away from the target orientation (“off-channel”) or a uniform distribution of attention. The different curves were related to behavioral performance; individuals with “on-channel” curves had lower thresholds than individuals with uniform curves. Individuals with “off-channel” curves during Fine Discrimination additionally had lower thresholds than those assigned to uniform curves, highlighting the perceptual benefits of attending away from the physical target orientation during fine discriminations. Finally, we showed that a subset of individuals with optimal curves (“on-channel”) during Detection also demonstrated optimal curves (“off-channel”) during Fine Discrimination, indicating that a subset of individuals can modulate tuning optimally for detection and discrimination. PMID:23678013
Techniques for Programming Visual Demonstrations.
ERIC Educational Resources Information Center
Gropper, George L.
Visual demonstrations may be used as part of programs to deliver both content objectives and process objectives. Research has shown that learning of concepts is easier, more accurate, and more broadly applied when it is accompanied by visual examples. The visual examples supporting content learning should emphasize both discrimination and…
Clark, R. C.; Brebner, J. S.
2017-01-01
Researchers must assess similarities and differences in colour from an animal's eye view when investigating hypotheses in ecology, evolution and behaviour. Nervous systems generate colour perceptions by comparing the responses of different spectral classes of photoreceptor through colour opponent mechanisms, and the performance of these mechanisms is limited by photoreceptor noise. Accordingly, the receptor noise limited (RNL) colour distance model of Vorobyev and Osorio (Vorobyev & Osorio 1998 Proc. R. Soc. Lond. B 265, 351–358 (doi:10.1098/rspb.1998.0302)) generates predictions about the discriminability of colours that agree with behavioural data, and consequently it has found wide application in studies of animal colour vision. Vorobyev and Osorio (1998) provide equations to calculate RNL colour distances for animals with di-, tri- and tetrachromatic vision, which is adequate for many species. However, researchers may sometimes wish to compute RNL colour distances for potentially more complex colour visual systems. Thus, we derive a simple, single formula for the computation of RNL distance between two measurements of colour, equivalent to the published di-, tri- and tetrachromatic equations of Vorobyev and Osorio (1998), and valid for colour visual systems with any number of types of noisy photoreceptors. This formula will allow the easy application of this important colour visual model across the fields of ecology, evolution and behaviour. PMID:28989773
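One algebraically equivalent way to express such a single n-receptor formula is as a noise-weighted variance of the receptor signal differences about their noise-weighted mean; the Python sketch below uses this form. The function name and test values are illustrative, not taken from the paper, and the final two lines check that the sketch reduces to the familiar dichromatic expression.

```python
import numpy as np

def rnl_distance(f1, f2, e):
    """Receptor-noise-limited colour distance for any number of receptor classes.

    f1, f2 : arrays of receptor signals (e.g. log quantum catches) for the two
             colours, one value per receptor class.
    e      : array of noise standard deviations, one per receptor class.

    Written as the noise-weighted variance of the signal differences about their
    noise-weighted mean, which reduces algebraically to the published
    dichromatic and trichromatic equations of Vorobyev & Osorio (1998).
    """
    df = np.asarray(f1, float) - np.asarray(f2, float)
    w = 1.0 / np.asarray(e, float) ** 2           # reliability of each channel
    mean = np.sum(w * df) / np.sum(w)             # noise-weighted mean difference
    return np.sqrt(np.sum(w * (df - mean) ** 2))  # delta S in just-noticeable differences

# Dichromatic check against |df1 - df2| / sqrt(e1^2 + e2^2):
print(rnl_distance([1.0, 0.2], [0.7, 0.4], [0.1, 0.05]))
print(abs((1.0 - 0.7) - (0.2 - 0.4)) / np.hypot(0.1, 0.05))   # same value
```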
Eyetracking Metrics in Young Onset Alzheimer’s Disease: A Window into Cognitive Visual Functions
Pavisic, Ivanna M.; Firth, Nicholas C.; Parsons, Samuel; Rego, David Martinez; Shakespeare, Timothy J.; Yong, Keir X. X.; Slattery, Catherine F.; Paterson, Ross W.; Foulkes, Alexander J. M.; Macpherson, Kirsty; Carton, Amelia M.; Alexander, Daniel C.; Shawe-Taylor, John; Fox, Nick C.; Schott, Jonathan M.; Crutch, Sebastian J.; Primativo, Silvia
2017-01-01
Young onset Alzheimer’s disease (YOAD) is defined as symptom onset before the age of 65 years and is particularly associated with phenotypic heterogeneity. Atypical presentations, such as the clinic-radiological visual syndrome posterior cortical atrophy (PCA), often lead to delays in accurate diagnosis. Eyetracking has been used to demonstrate basic oculomotor impairments in individuals with dementia. In the present study, we aim to explore the relationship between eyetracking metrics and standard tests of visual cognition in individuals with YOAD. Fifty-seven participants were included: 36 individuals with YOAD (n = 26 typical AD; n = 10 PCA) and 21 age-matched healthy controls. Participants completed three eyetracking experiments: fixation, pro-saccade, and smooth pursuit tasks. Summary metrics were used as outcome measures and their predictive value explored looking at correlations with visuoperceptual and visuospatial metrics. Significant correlations between eyetracking metrics and standard visual cognitive estimates are reported. A machine-learning approach using a classification method based on the smooth pursuit raw eyetracking data discriminates with approximately 95% accuracy patients and controls in cross-validation tests. Results suggest that the eyetracking paradigms of a relatively simple and specific nature provide measures not only reflecting basic oculomotor characteristics but also predicting higher order visuospatial and visuoperceptual impairments. Eyetracking measures can represent extremely useful markers during the diagnostic phase and may be exploited as potential outcome measures for clinical trials. PMID:28824534
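As an illustration of the kind of cross-validated classification referred to here, the sketch below runs a generic pipeline on hypothetical participant-level features. The classifier, feature matrix, and labels are assumptions for illustration (the study classified from the raw smooth pursuit traces), so only the overall workflow is representative.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Hypothetical feature matrix: one row per participant, columns are features
# derived from the smooth pursuit eyetracking traces.
X = rng.normal(size=(57, 20))
# Hypothetical labels: 1 = YOAD patient (36), 0 = healthy control (21).
y = np.r_[np.ones(36, int), np.zeros(21, int)]

# Generic cross-validated classification pipeline (the specific classifier used
# in the study is not specified here; a linear SVM is shown for illustration).
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=5)
print(acc.mean())   # with real features, the reported accuracy was about 95%
```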
Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian
2017-01-01
We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469
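The core legibility idea, splitting a magnitude into a scientific-notation pair, can be sketched as follows. This reproduces only the numeric split, not SplitVectors' actual glyph design; the helper name and test vectors are illustrative.

```python
import math

def split_magnitude(v):
    """Split a vector magnitude into a scientific-notation pair (digit, exponent).

    Encodes |v| as d * 10**e with 1 <= d < 10, so that very large and very small
    magnitudes can be shown as two visually separable quantities.
    """
    mag = math.sqrt(sum(c * c for c in v))
    if mag == 0.0:
        return 0.0, 0
    exp = math.floor(math.log10(mag))
    return mag / 10 ** exp, exp

print(split_magnitude((3.0, 4.0)))          # about (5.0, 0)  -> magnitude 5
print(split_magnitude((30000.0, 40000.0)))  # about (5.0, 4)  -> 5 x 10^4
print(split_magnitude((3e-7, 4e-7)))        # about (5.0, -7) -> 5 x 10^-7
```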
Aging and the interaction of sensory cortical function and structure.
Peiffer, Ann M; Hugenschmidt, Christina E; Maldjian, Joseph A; Casanova, Ramon; Srikanth, Ryali; Hayasaka, Satoru; Burdette, Jonathan H; Kraft, Robert A; Laurienti, Paul J
2009-01-01
Even the healthiest older adults experience changes in cognitive and sensory function. Studies show that older adults have reduced neural responses to sensory information. However, it is well known that sensory systems do not act in isolation but function cooperatively to either enhance or suppress neural responses to individual environmental stimuli. Very little research has been dedicated to understanding how aging affects the interactions between sensory systems, especially cross-modal deactivations or the ability of one sensory system (e.g., audition) to suppress the neural responses in another sensory system cortex (e.g., vision). Such cross-modal interactions have been implicated in attentional shifts between sensory modalities and could account for increased distractibility in older adults. To assess age-related changes in cross-modal deactivations, functional MRI studies were performed in 61 adults between 18 and 80 years old during simple auditory and visual discrimination tasks. Results within visual cortex confirmed previous findings of decreased responses to visual stimuli for older adults. Age-related changes in the visual cortical response to auditory stimuli were, however, much more complex and suggested an alteration with age in the functional interactions between the senses. Ventral visual cortical regions exhibited cross-modal deactivations in younger but not older adults, whereas more dorsal aspects of visual cortex were suppressed in older but not younger adults. These differences in deactivation also remained after adjusting for age-related reductions in brain volume of sensory cortex. Thus, functional differences in cortical activity between older and younger adults cannot solely be accounted for by differences in gray matter volume. (c) 2007 Wiley-Liss, Inc.
Image jitter enhances visual performance when spatial resolution is impaired.
Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko
2012-09-06
Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with age-related macular degeneration (AMD) under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with durations of 100 or 166 ms and amplitudes within the range of 0.5 to 2.6° of visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.
Nimodipine alters acquisition of a visual discrimination task in chicks.
Deyo, R; Panksepp, J; Conner, R L
1990-03-01
Chicks 5 days old received intraperitoneal injections of nimodipine 30 min before training on either a visual discrimination task (0, 0.5, 1.0, or 5.0 mg/kg) or a test of separation-induced distress vocalizations (0, 0.5, or 2.5 mg/kg). Chicks receiving 1.0 mg/kg nimodipine made significantly fewer visual discrimination errors than vehicle controls by trials 41-60, but did not differ from controls 24 h later. Chicks in the 5 mg/kg group made significantly more errors when compared to controls both during acquisition of the task and during retention. Nimodipine did not alter separation-induced distress vocalizations at any of the doses tested, suggesting that nimodipine's effects on learning cannot be attributed to a reduction in separation distress. These data indicate that nimodipine's facilitation of learning in young subjects is dose dependent, but nimodipine failed to enhance retention.
Category learning increases discriminability of relevant object dimensions in visual cortex.
Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel
2013-04-01
Learning to categorize objects can transform how they are perceived, causing relevant perceptual dimensions predictive of object category to become enhanced. For example, an expert mycologist might become attuned to species-specific patterns of spacing between mushroom gills but learn to ignore cap textures attributable to varying environmental conditions. These selective changes in perception can persist beyond the act of categorizing objects and influence our ability to discriminate between them. Using functional magnetic resonance imaging adaptation, we demonstrate that such category-specific perceptual enhancements are associated with changes in the neural discriminability of object representations in visual cortex. Regions within the anterior fusiform gyrus became more sensitive to small variations in shape that were relevant during prior category learning. In addition, extrastriate occipital areas showed heightened sensitivity to small variations in shape that spanned the category boundary. Visual representations in cortex, just like our perception, are sensitive to an object's history of categorization.
Visual function changes after subchronic toluene inhalation in Long-Evans rats.
Chronic exposure to volatile organic compounds, including toluene, has been associated with visual deficits such as reduced visual contrast sensitivity or impaired color discrimination in studies of occupational or residential exposure. These reports remain controversial, howeve...
ERIC Educational Resources Information Center
Washington County Public Schools, Washington, PA.
Symptoms displayed by primary age children with learning disabilities are listed; perceptual handicaps are explained. Activities are suggested for developing visual perception and perception involving motor activities. Also suggested are activities to develop body concept, visual discrimination and attentiveness, visual memory, and figure ground…
Heinen, Klaartje; Jolij, Jacob; Lamme, Victor A F
2005-09-08
Discriminating objects from their surroundings by the visual system is known as figure-ground segregation. This process entails two different subprocesses: boundary detection and subsequent surface segregation or 'filling in'. In this study, we used transcranial magnetic stimulation to test the hypothesis that temporally distinct processes in V1 and related early visual areas such as V2 or V3 are causally related to the process of figure-ground segregation. Our results indicate that correct discrimination between two visual stimuli, which relies on figure-ground segregation, requires two separate periods of information processing in the early visual cortex: one around 130-160 ms and the other around 250-280 ms.
ERIC Educational Resources Information Center
Wessel, Dorothy
A 10-week classroom intervention program was implemented to facilitate the fine-motor development of eight first-grade children assessed as being deficient in motor skills. The program was divided according to five deficits to be remediated: visual motor, visual discrimination, visual sequencing, visual figure-ground, and visual memory. Each area…
Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.
Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling
2015-11-01
In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification: a visual tree is constructed to organize large numbers of plant species in a coarse-to-fine fashion and to determine the inter-related learning tasks automatically. Each parent node on the visual tree contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm trains their inter-related classifiers jointly to enhance their discrimination power. The inter-level relationship constraint (a plant image must first be assigned correctly to a parent node, a high-level non-leaf node, before it can be assigned to the most relevant child node, a low-level non-leaf node or leaf node) is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Experimental results demonstrate the effectiveness of the hierarchical multi-task structural learning algorithm in training more discriminative tree classifiers for large-scale plant species identification.
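A minimal sketch of the coarse-to-fine routing idea described above is given below. It is not the authors' multi-task structural learning algorithm: the tree, node names, and random features are hypothetical, and an off-the-shelf logistic regression stands in for the jointly trained node classifiers.

```python
# Sketch of coarse-to-fine classification over a "visual tree": an image
# reaches a leaf (species) only by first being routed through its parent
# (coarse group). Placeholder data; not the paper's learning algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

class TreeNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []   # child TreeNodes (empty for species leaves)
        self.clf = None                  # router over this node's children

    def fit(self, X, child_labels):
        """Train this node's router to send samples to the right child index."""
        if self.children:
            self.clf = LogisticRegression(max_iter=1000).fit(X, child_labels)

def predict_species(root, x):
    """Coarse-to-fine descent: pick the most probable child at each level."""
    node = root
    while node.children:
        child_idx = int(node.clf.predict(x.reshape(1, -1))[0])
        node = node.children[child_idx]
    return node.name

# toy usage: random features stand in for plant-image descriptors
rng = np.random.default_rng(0)
leaves = [TreeNode("species_A"), TreeNode("species_B"),
          TreeNode("species_C"), TreeNode("species_D")]
root = TreeNode("root", [TreeNode("coarse_group_1", leaves[:2]),
                         TreeNode("coarse_group_2", leaves[2:])])

X = rng.normal(size=(200, 16))
coarse_y = rng.integers(0, 2, size=200)      # which coarse group each sample belongs to
fine_y = rng.integers(0, 2, size=200)        # which leaf within that group
root.fit(X, coarse_y)
root.children[0].fit(X[coarse_y == 0], fine_y[coarse_y == 0])
root.children[1].fit(X[coarse_y == 1], fine_y[coarse_y == 1])
print(predict_species(root, X[0]))
```

The inter-level constraint shows up here only implicitly (a sample can never skip its parent); the paper additionally couples the sibling classifiers during training.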
Beneficial effects of verbalization and visual distinctiveness on remembering and knowing faces.
Brown, Charity; Lloyd-Jones, Toby J
2006-03-01
We examined the effect of verbally describing faces upon visual memory. In particular, we examined the locus of the facilitative effects of verbalization by manipulating the visual distinctiveness of the to-be-remembered faces and using the remember/know procedure as a measure of recognition performance (i.e., remember vs. know judgments). Participants were exposed to distinctive faces intermixed with typical faces and described (or not, in the control condition) each face following its presentation. Subsequently, the participants discriminated the original faces from distinctive and typical distractors in a yes/no recognition decision and made remember/know judgments. Distinctive faces elicited better discrimination performance than did typical faces. Furthermore, for both typical and distinctive faces, better discrimination performance was obtained in the description than in the control condition. Finally, these effects were evident for both recollection- and familiarity-based recognition decisions. We argue that verbalization and visual distinctiveness independently benefit face recognition, and we discuss these findings in terms of the nature of verbalization and the role of recollective and familiarity-based processes in recognition.
de Borst, Aline W; de Gelder, Beatrice
2017-08-01
Previous studies have shown that the early visual cortex contains content-specific representations of stimuli during visual imagery, and that these representational patterns of imagery content have a perceptual basis. To date, there is little evidence for the presence of a similar organization in the auditory and tactile domains. Using fMRI-based multivariate pattern analyses we showed that primary somatosensory, auditory, motor, and visual cortices are discriminative for imagery of touch versus sound. In the somatosensory, motor and visual cortices the imagery modality discriminative patterns were similar to perception modality discriminative patterns, suggesting that top-down modulations in these regions rely on similar neural representations as bottom-up perceptual processes. Moreover, we found evidence for content-specific representations of the stimuli during auditory imagery in the primary somatosensory and primary motor cortices. Both the imagined emotions and the imagined identities of the auditory stimuli could be successfully classified in these regions. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Do Rats Use Shape to Solve "Shape Discriminations"?
ERIC Educational Resources Information Center
Minini, Loredana; Jeffery, Kathryn J.
2006-01-01
Visual discrimination tasks are increasingly used to explore the neurobiology of vision in rodents, but it remains unclear how the animals solve these tasks: Do they process shapes holistically, or by using low-level features such as luminance and angle acuity? In the present study we found that when discriminating triangles from squares, rats did…
Face and Object Discrimination in Autism, and Relationship to IQ and Age
ERIC Educational Resources Information Center
Pallett, Pamela M.; Cohen, Shereen J.; Dobkins, Karen R.
2014-01-01
The current study tested fine discrimination of upright and inverted faces and objects in adolescents with Autism Spectrum Disorder (ASD) as compared to age- and IQ-matched controls. Discrimination sensitivity was tested using morphed faces and morphed objects, and all stimuli were equated in low-level visual characteristics (luminance, contrast,…
[Discrimination of varieties of brake fluid using visual-near infrared spectra].
Jiang, Lu-lu; Tan, Li-hong; Qiu, Zheng-jun; Lu, Jiang-feng; He, Yong
2008-06-01
A new method was developed to rapidly discriminate brands of brake fluid by means of visual-near infrared spectroscopy. Five different brands of brake fluid were analyzed using a handheld near infrared spectrograph manufactured by ASD, and 60 samples were obtained from each brand. The sample data were pretreated using average smoothing and the standard normal variate method, and then analyzed using principal component analysis (PCA). A 2-dimensional plot based on the first and second principal components indicated that the clustering of the different brake fluids is distinct. The first 6 principal components were taken as input variables, and the brand of brake fluid as the output variable, to build a discrimination model by stepwise discriminant analysis. Two hundred twenty-five randomly selected samples were used to build the model, and the remaining 75 samples to verify it. The result showed a correct discrimination rate of 94.67%, indicating that the proposed method performs well in classification and discrimination. It provides a new way to rapidly discriminate different brands of brake fluid.
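The pipeline described above (SNV pretreatment, PCA, a discriminant model on the leading components, 225/75 split) can be sketched roughly as follows. scikit-learn's LinearDiscriminantAnalysis stands in for the paper's stepwise discriminant analysis, and the spectra below are random placeholders, not real brake-fluid measurements.

```python
# Minimal sketch: SNV pretreatment -> PCA -> discriminant model on the first
# 6 components, evaluated on a 225/75 split. Synthetic data throughout.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
spectra = rng.normal(size=(300, 512))         # 5 brands x 60 samples, 512 bands
brands = np.repeat(np.arange(5), 60)

# SNV: centre and scale each spectrum individually
snv = (spectra - spectra.mean(axis=1, keepdims=True)) / spectra.std(axis=1, keepdims=True)

pcs = PCA(n_components=6).fit_transform(snv)  # first 6 principal components

X_train, X_test, y_train, y_test = train_test_split(
    pcs, brands, test_size=75, random_state=0)
model = LinearDiscriminantAnalysis().fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))
```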
Basic visual function and cortical thickness patterns in posterior cortical atrophy.
Lehmann, Manja; Barnes, Josephine; Ridgway, Gerard R; Wattam-Bell, John; Warrington, Elizabeth K; Fox, Nick C; Crutch, Sebastian J
2011-09-01
Posterior cortical atrophy (PCA) is characterized by a progressive decline in higher-visual object and space processing, but the extent to which these deficits are underpinned by basic visual impairments is unknown. This study aimed to assess basic and higher-order visual deficits in 21 PCA patients. Basic visual skills including form detection and discrimination, color discrimination, motion coherence, and point localization were measured, and associations and dissociations between specific basic visual functions and measures of higher-order object and space perception were identified. All participants showed impairment in at least one aspect of basic visual processing. However, a number of dissociations between basic visual skills indicated a heterogeneous pattern of visual impairment among the PCA patients. Furthermore, basic visual impairments were associated with particular higher-order object and space perception deficits, but not with nonvisual parietal tasks, suggesting the specific involvement of visual networks in PCA. Cortical thickness analysis revealed trends toward lower cortical thickness in occipitotemporal (ventral) and occipitoparietal (dorsal) regions in patients with visuoperceptual and visuospatial deficits, respectively. However, there was also considerable overlap in their patterns of cortical thinning. These findings suggest that different presentations of PCA represent points in a continuum of phenotypical variation.
Understanding the Implications of Neural Population Activity on Behavior
NASA Astrophysics Data System (ADS)
Briguglio, John
Learning how neural activity in the brain leads to the behavior we exhibit is one of the fundamental questions in neuroscience. In this dissertation, several lines of work are presented that use principles of neural coding to understand behavior. In one line of work, we formulate the efficient coding hypothesis in a non-traditional manner in order to test human perceptual sensitivity to complex visual textures. We find a striking agreement between how variable a particular texture signal is and how sensitive humans are to its presence. This reveals that the efficient coding hypothesis remains a guiding principle for neural organization beyond the sensory periphery, and that the nature of cortical constraints differs from the peripheral counterpart. In another line of work, we relate frequency discrimination acuity to neural responses from auditory cortex in mice. It has been previously observed that optogenetic manipulation of auditory cortex, in addition to changing neural responses, evokes changes in behavioral frequency discrimination. We are able to account for changes in frequency discrimination acuity on an individual basis by examining the Fisher information of the neural population with and without optogenetic manipulation. In the third line of work, we address the question of what a neural population should encode given that its inputs are responses from another group of neurons. Drawing inspiration from techniques in machine learning, we train Deep Belief Networks on simulated retinal data and show the emergence of Gabor-like filters, reminiscent of responses in primary visual cortex. In the last line of work, we model the state of a cortical excitatory-inhibitory network during complex adaptive stimuli. Using a rate model with Wilson-Cowan dynamics, we demonstrate that simple non-linearities in the signal transferred from inhibitory to excitatory neurons can account for real neural recordings taken from auditory cortex. This work establishes and tests a variety of hypotheses that will be useful in understanding the relationship between neural activity and behavior as recorded neural populations continue to grow.
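As a worked illustration of the Fisher-information argument mentioned in the second line of work, the sketch below computes population Fisher information for a bank of hypothetical Gaussian-tuned, independent Poisson neurons and the discrimination threshold it implies. This is the textbook formula, not the dissertation's actual analysis pipeline, and every parameter is invented.

```python
# Textbook link between population Fisher information and discrimination
# thresholds, assuming independent Poisson neurons with Gaussian tuning.
import numpy as np

def gaussian_tuning(s, centers, peak=20.0, width=0.5):
    """Firing rates (Hz) of a bank of Gaussian-tuned neurons at stimulus s."""
    return peak * np.exp(-0.5 * ((s - centers) / width) ** 2)

def fisher_information(s, centers, peak=20.0, width=0.5, ds=1e-4):
    """J(s) = sum_i f_i'(s)^2 / f_i(s) for independent Poisson neurons."""
    f = gaussian_tuning(s, centers, peak, width)
    fprime = (gaussian_tuning(s + ds, centers, peak, width) - f) / ds
    return np.sum(fprime ** 2 / np.maximum(f, 1e-9))

centers = np.linspace(0.0, 4.0, 40)      # hypothetical preferred log-frequencies
s0 = 2.0
J = fisher_information(s0, centers)
print("discrimination threshold ~ 1/sqrt(J):", 1.0 / np.sqrt(J))
```

A manipulation that flattens the tuning curves lowers J and therefore raises the predicted threshold, which is the direction of reasoning the dissertation applies to the optogenetic data.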
ERIC Educational Resources Information Center
Smyth, Sinead; Barnes-Holmes, Dermot; Forsyth, John P.
2006-01-01
Two experiments investigated the derived transfer of functions through equivalence relations established using a stimulus pairing observation procedure. In Experiment 1, participants were trained on a simple discrimination (A1+/A2-) and then a stimulus pairing observation procedure was used to establish 4 stimulus pairings (A1-B1, A2-B2, B1-C1,…
The performance of ravens on simple discrimination tasks: a preliminary study
Range, Friederike; Bugnyar, Thomas; Kotrschal, Kurt
2015-01-01
Recent studies suggest the existence of primate-like cognitive abilities in corvids. Although the learning abilities of corvids in comparison to other species have been investigated before, little is known on how corvids perform on simple discrimination tasks if tested in experimental settings comparable to those that have been used for studying complex cognitive abilities. In this study, we tested a captive group of 12 ravens (Corvus corax) on four discrimination problems and their reversals. In contrast to other studies investigating learning abilities, our ravens were not food deprived and participation in experiments was voluntary. This preliminary study showed that all ravens successfully solved feature and position discriminations and several of the ravens could solve new tasks in a few trials, making very few mistakes. PMID:25948877
Melara, Robert D.; Singh, Shalini; Hien, Denise A.
2018-01-01
Two groups of healthy young adults were exposed to 3 weeks of cognitive training in a modified version of the visual flanker task, one group trained to discriminate the target (discrimination training) and the other group to ignore the flankers (inhibition training). Inhibition training, but not discrimination training, led to significant reductions in both Garner interference, indicating improved selective attention, and in Stroop interference, indicating more efficient resolution of stimulus conflict. The behavioral gains from training were greatest in participants who showed the poorest selective attention at pretest. Electrophysiological recordings revealed that inhibition training increased the magnitude of Rejection Positivity (RP) to incongruent distractors, an event-related potential (ERP) component associated with inhibitory control. Source modeling of RP uncovered a dipole in the medial frontal gyrus for those participants receiving inhibition training, but in the cingulate gyrus for those participants receiving discrimination training. Results suggest that inhibitory control is plastic; inhibition training improves conflict resolution, particularly in individuals with poor attention skills. PMID:29875644
Wu, Lin; Wang, Yang; Pan, Shirui
2017-12-01
It is now well established that sparse representation models work effectively for many visual recognition tasks, and they have pushed forward the success of dictionary learning therein. Recent studies of dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversities (i.e., data objects within the same category that exhibit large visual dissimilarities) and interclass similarities (i.e., data objects from distinct classes that share much visual similarity) makes it challenging to learn effective recognition models. To this end, a large number of labeled data objects are required to learn models which can effectively characterize these subtle differences. However, access to labeled data objects is often limited, making it difficult to learn a monolithic dictionary that is discriminative enough. To address the above limitations, in this paper, we propose a weakly-supervised dictionary learning method to automatically learn a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and then a set of subdictionaries are jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition are conducted to show the effectiveness of our approach.
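For orientation, a generic baseline for class-wise sub-dictionaries with sparse-coding classification is sketched below. It omits the paper's weakly supervised, attribute-guided learning entirely; the data, dictionary sizes, and sparsity level are all made up.

```python
# Generic baseline: learn one small sub-dictionary per category, then assign
# a sample to the category whose sub-dictionary reconstructs it best.
# Random placeholder data; not the paper's attribute-guided method.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 64))               # feature vectors
y = rng.integers(0, 4, size=200)             # 4 object categories

dicts = []
for c in range(4):
    dl = MiniBatchDictionaryLearning(n_components=8, random_state=0)
    dicts.append(dl.fit(X[y == c]).components_)

def classify(x):
    """Category whose sub-dictionary gives the lowest reconstruction error."""
    errors = []
    for D in dicts:
        code = sparse_encode(x.reshape(1, -1), D, algorithm="omp",
                             n_nonzero_coefs=3)
        errors.append(np.linalg.norm(x - code @ D))
    return int(np.argmin(errors))

print("predicted category:", classify(X[0]))
```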
Monkey pulvinar neurons fire differentially to snake postures.
Le, Quan Van; Isbell, Lynne A; Matsumoto, Jumpei; Le, Van Quang; Hori, Etsuro; Tran, Anh Hai; Maior, Rafael S; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao
2014-01-01
There is growing evidence from both behavioral and neurophysiological approaches that primates are able to rapidly discriminate visually between snakes and innocuous stimuli. Recent behavioral evidence suggests that primates are also able to discriminate the level of threat posed by snakes, by responding more intensely to a snake model poised to strike than to snake models in coiled or sinusoidal postures (Etting and Isbell 2014). In the present study, we examine the potential for an underlying neurological basis for this ability. Previous research indicated that the pulvinar is highly sensitive to snake images. We thus recorded pulvinar neurons in Japanese macaques (Macaca fuscata) while they viewed photos of snakes in striking and non-striking postures in a delayed non-matching to sample (DNMS) task. Of 821 neurons recorded, 78 visually responsive neurons were tested with all the snake images. We found that pulvinar neurons in the medial and dorsolateral pulvinar responded more strongly to snakes in threat displays poised to strike than to snakes in non-threat-displaying postures, with no significant difference in response latencies. A multidimensional scaling analysis of the 78 visually responsive neurons indicated that threat-displaying and non-threat-displaying snakes were separated into two different clusters in the first epoch of 50 ms after stimulus onset, suggesting bottom-up visual information processing. These results indicate that pulvinar neurons in primates discriminate snakes poised to strike from those in non-threat-displaying postures. This neuronal ability likely facilitates behavioral discrimination and has clear adaptive value. Our results are thus consistent with the Snake Detection Theory, which posits that snakes were instrumental in the evolution of primate visual systems.
Visual cues for woodpeckers: light reflectance of decayed wood varies by decay fungus
O'Daniels, Sean T.; Kesler, Dylan C.; Mihail, Jeanne D.; Webb, Elisabeth B.; Werner, Scott J.
2018-01-01
The appearance of wood substrates is likely relevant to bird species with life histories that require regular interactions with wood for food and shelter. Woodpeckers detect decayed wood for cavity placement or foraging, and some species may be capable of detecting trees decayed by specific fungi; however, a mechanism allowing for such specificity remains unidentified. We hypothesized that decay fungi associated with woodpecker cavity sites alter the substrate reflectance in a species-specific manner that is visually discriminable by woodpeckers. We grew 10 species of wood decay fungi from pure cultures on sterile wood substrates of 3 tree species. We then measured the relative reflectance spectra of decayed and control wood wafers and compared them using the receptor noise-limited (RNL) color discrimination model. The RNL model has been used in studies of feather coloration, egg shells, flowers, and fruit to model how the colors of objects appear to birds. Our analyses indicated 6 of 10 decayed substrate/control comparisons were above the threshold of discrimination (i.e., indicating differences discriminable by avian viewers), and 12 of 13 decayed substrate comparisons were also above threshold for a hypothetical woodpecker. We conclude that woodpeckers should be capable of visually detecting decayed wood on trees where bark is absent, and they should also be able to detect visually species-specific differences in wood substrates decayed by fungi used in this study. Our results provide evidence for a visual mechanism by which woodpeckers could identify and select substrates decayed by specific fungi, which has implications for understanding ecologically important woodpecker–fungus interactions.
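A rough sketch of the kind of receptor-noise-limited computation behind the "above threshold of discrimination" statements is given below, using the trichromatic form of the model for brevity (the study modeled avian, tetrachromatic vision). The quantum catches and noise values are invented for illustration and are not the study's measurements.

```python
# Trichromatic receptor-noise-limited (RNL) chromatic distance, after
# Vorobyev & Osorio. Hypothetical quantum catches and noise values only.
import numpy as np

def rnl_delta_s(qA, qB, noise):
    """Chromatic distance (in JND units) between stimuli A and B, 3 receptors."""
    df = np.log(np.asarray(qA) / np.asarray(qB))    # receptor contrasts
    e1, e2, e3 = noise
    num = (e1 * (df[2] - df[1])) ** 2 + \
          (e2 * (df[2] - df[0])) ** 2 + \
          (e3 * (df[0] - df[1])) ** 2
    den = (e1 * e2) ** 2 + (e1 * e3) ** 2 + (e2 * e3) ** 2
    return np.sqrt(num / den)

# hypothetical quantum catches for decayed vs. control wood in 3 receptor classes
decayed = [0.22, 0.35, 0.48]
control = [0.20, 0.30, 0.50]
print("deltaS:", rnl_delta_s(decayed, control, noise=(0.1, 0.07, 0.05)))
# distances above ~1 JND are conventionally taken as discriminable by the viewer
```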
Discrimination of holograms and real objects by pigeons (Columba livia) and humans (Homo sapiens).
Stephan, Claudia; Steurer, Michael M; Aust, Ulrike
2014-08-01
The type of stimulus material employed in visual tasks is crucial to all comparative cognition research that involves object recognition. There is considerable controversy about the use of 2-dimensional stimuli and the impact that the lack of the 3rd dimension (i.e., depth) may have on animals' performance in tests for their visual and cognitive abilities. We report evidence of discrimination learning using a completely novel type of stimuli, namely, holograms. Like real objects, holograms provide full 3-dimensional shape information but they also offer many possibilities for systematically modifying the appearance of a stimulus. Hence, they provide a promising means for investigating visual perception and cognition of different species in a comparative way. We trained pigeons and humans to discriminate either between 2 real objects or between holograms of the same 2 objects, and we subsequently tested both species for the transfer of discrimination to the other presentation mode. The lack of any decrements in accuracy suggests that real objects and holograms were perceived as equivalent in both species and shows the general appropriateness of holograms as stimuli in visual tasks. A follow-up experiment involving the presentation of novel views of the training objects and holograms revealed some interspecies differences in rotational invariance, thereby confirming and extending the results of previous studies. Taken together, these results suggest that holograms may not only provide a promising tool for investigating yet unexplored issues, but their use may also lead to novel insights into some crucial aspects of comparative visual perception and categorization.
Prestimulus EEG Power Predicts Conscious Awareness But Not Objective Visual Performance
Veniero, Domenica
2017-01-01
Prestimulus oscillatory neural activity has been linked to perceptual outcomes during performance of psychophysical detection and discrimination tasks. Specifically, the power and phase of low frequency oscillations have been found to predict whether an upcoming weak visual target will be detected or not. However, the mechanisms by which baseline oscillatory activity influences perception remain unclear. Recent studies suggest that the frequently reported negative relationship between α power and stimulus detection may be explained by changes in detection criterion (i.e., increased target present responses regardless of whether the target was present/absent) driven by the state of neural excitability, rather than changes in visual sensitivity (i.e., more veridical percepts). Here, we recorded EEG while human participants performed a luminance discrimination task on perithreshold stimuli in combination with single-trial ratings of perceptual awareness. Our aim was to investigate whether the power and/or phase of prestimulus oscillatory activity predict discrimination accuracy and/or perceptual awareness on a trial-by-trial basis. Prestimulus power (3–28 Hz) was inversely related to perceptual awareness ratings (i.e., higher ratings in states of low prestimulus power/high excitability) but did not predict discrimination accuracy. In contrast, prestimulus oscillatory phase did not predict awareness ratings or accuracy in any frequency band. These results provide evidence that prestimulus α power influences the level of subjective awareness of threshold visual stimuli but does not influence visual sensitivity when a decision has to be made regarding stimulus features. Hence, we find a clear dissociation between the influence of ongoing neural activity on conscious awareness and objective performance. PMID:29255794
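For readers unfamiliar with the measure, the sketch below shows one conventional way to obtain single-trial prestimulus band power (Welch PSD over the pre-stimulus window, followed by a median split). The sampling rate, window length, and frequency band are arbitrary choices for the example, not the study's settings.

```python
# Single-trial prestimulus band power on synthetic EEG: Welch PSD in a
# low-frequency band over the 1 s before stimulus onset, then a median split.
import numpy as np
from scipy.signal import welch

fs = 500                                    # sampling rate (Hz), assumed
rng = np.random.default_rng(3)
trials = rng.normal(size=(100, fs))         # 100 trials x 1 s prestimulus EEG

def band_power(trial, fs, lo=8.0, hi=14.0):
    """Mean Welch PSD of one trial within [lo, hi] Hz."""
    freqs, psd = welch(trial, fs=fs, nperseg=fs // 2)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

power = np.array([band_power(t, fs) for t in trials])
low, high = power < np.median(power), power >= np.median(power)
# awareness ratings would then be compared between low- and high-power trials
print("trials per split:", low.sum(), high.sum())
```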
Neural mechanisms of coarse-to-fine discrimination in the visual cortex.
Purushothaman, Gopathy; Chen, Xin; Yampolsky, Dmitry; Casagrande, Vivien A
2014-12-01
Vision is a dynamic process that refines the spatial scale of analysis over time, as evidenced by a progressive improvement in the ability to detect and discriminate finer details. To understand coarse-to-fine discrimination, we studied the dynamics of spatial frequency (SF) response using reverse correlation in the primary visual cortex (V1) of the primate. In a majority of V1 cells studied, preferred SF either increased monotonically with time (group 1) or changed nonmonotonically, with an initial increase followed by a decrease (group 2). Monotonic shift in preferred SF occurred with or without an early suppression at low SFs. Late suppression at high SFs always accompanied nonmonotonic SF dynamics. Bayesian analysis showed that SF discrimination performance and best discriminable SF frequencies changed with time in different ways in the two groups of neurons. In group 1 neurons, SF discrimination performance peaked on both left and right flanks of the SF tuning curve at about the same time. In group 2 neurons, peak discrimination occurred on the right flank (high SFs) later than on the left flank (low SFs). Group 2 neurons were also better discriminators of high SFs. We examined the relationship between the time at which SF discrimination performance peaked on either flank of the SF tuning curve and the corresponding best discriminable SFs in both neuronal groups. This analysis showed that the population best discriminable SF increased with time in V1. These results suggest neural mechanisms for coarse-to-fine discrimination behavior and that this process originates in V1 or earlier. Copyright © 2014 the American Physiological Society.
The use of visual cues in gravity judgements on parabolic motion.
Jörges, Björn; Hagenfeld, Lena; López-Moliner, Joan
2018-06-21
Evidence suggests that humans rely on an earth gravity prior for sensory-motor tasks like catching or reaching. Even under earth-discrepant conditions, this prior biases perception and action towards assuming a gravitational downwards acceleration of 9.81 m/s². This can be particularly detrimental in interactions with virtual environments employing earth-discrepant gravity conditions for their visual presentation. The present study thus investigates how well humans discriminate visually presented gravities and which cues they use to extract gravity from the visual scene. To this end, we employed a Two-Interval Forced-Choice design. In Experiment 1, participants had to judge which of two presented parabolas had the higher underlying gravity. We used two initial vertical velocities, two horizontal velocities and a constant target size. Experiment 2 added a manipulation of the reliability of the target size. Experiment 1 shows that participants have generally high discrimination thresholds for visually presented gravities, with Weber fractions of 13% to beyond 30%. We identified the rate of change of the elevation angle (ẏ) and the visual angle (θ) as major cues. Experiment 2 suggests furthermore that size variability has a small influence on discrimination thresholds, while at the same time larger size variability increases reliance on ẏ and decreases reliance on θ. All in all, even though we use all available information, humans display low precision when extracting the governing gravity from a visual scene, which might further impact our capabilities of adapting to earth-discrepant gravity conditions with visual information alone. Copyright © 2018. Published by Elsevier Ltd.
ERIC Educational Resources Information Center
Cattaneo, Zaira; Mattavelli, Giulia; Papagno, Costanza; Herbert, Andrew; Silvanto, Juha
2011-01-01
The human visual system is able to efficiently extract symmetry information from the visual environment. Prior neuroimaging evidence has revealed symmetry-preferring neuronal representations in the dorsolateral extrastriate visual cortex; the objective of the present study was to investigate the necessity of these representations in symmetry…
ERIC Educational Resources Information Center
Chen, Y.; Norton, D. J.; McBain, R.; Gold, J.; Frazier, J. A.; Coyle, J. T.
2012-01-01
An important issue for understanding visual perception in autism concerns whether individuals with this neurodevelopmental disorder possess an advantage in processing local visual information, and if so, what is the nature of this advantage. Perception of movement speed is a visual process that relies on computation of local spatiotemporal signals…
Evaluation of a visual layering methodology for colour coding control room displays.
Van Laar, Darren; Deshe, Ofer
2002-07-01
Eighteen people participated in an experiment in which they were asked to search for targets on control-room-like displays produced using three different coding methods. The monochrome coding method displayed the information in black and white only; the maximally discriminable method contained colours chosen for their high perceptual discriminability; and the visual layers method contained colours developed from psychological and cartographic principles that grouped information into a perceptual hierarchy. The visual layers method produced significantly faster search times than the other two coding methods, which did not differ significantly from each other. Search time also differed significantly with presentation order and with the method × order interaction. There was no significant difference between the methods in the number of errors made. Participants clearly preferred the visual layers coding method. Proposals are made for the design of experiments to further test and develop the visual layers colour coding methodology.
Left hemispheric advantage for numerical abilities in the bottlenose dolphin.
Kilian, Annette; von Fersen, Lorenzo; Güntürkün, Onur
2005-02-28
In a two-choice discrimination paradigm, a bottlenose dolphin discriminated relational dimensions between visual numerosity stimuli under monocular viewing conditions. After prior binocular acquisition of the task, two monocular test series with different number stimuli were conducted. In accordance with recent studies on visual lateralization in the bottlenose dolphin, our results revealed an overall advantage of the right visual field. Due to the complete decussation of the optic nerve fibers, this suggests a specialization of the left hemisphere for analysing relational features between stimuli as required in tests for numerical abilities. These processes are typically right hemisphere-based in other mammals (including humans) and birds. The present data provide further evidence for a general right visual field advantage in bottlenose dolphins for visual information processing. It is thus assumed that dolphins possess a unique functional architecture of their cerebral asymmetries. (c) 2004 Elsevier B.V. All rights reserved.
Perceived visual speed constrained by image segmentation
NASA Technical Reports Server (NTRS)
Verghese, P.; Stone, L. S.
1996-01-01
Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.
ERIC Educational Resources Information Center
Cabrera, Laurianne; Lorenzi, Christian; Bertoncini, Josiane
2015-01-01
Purpose: This study assessed the role of spectro-temporal modulation cues in the discrimination of 2 phonetic contrasts (voicing and place) for young infants. Method: A visual-habituation procedure was used to assess the ability of French-learning 6-month-old infants with normal hearing to discriminate voiced versus unvoiced (/aba/-/apa/) and…
Impaired Discrimination Learning in Mice Lacking the NMDA Receptor NR2A Subunit
ERIC Educational Resources Information Center
Brigman, Jonathan L.; Feyder, Michael; Saksida, Lisa M.; Bussey, Timothy J.; Mishina, Masayoshi; Holmes, Andrew
2008-01-01
N-Methyl-D-aspartate receptors (NMDARs) mediate certain forms of synaptic plasticity and learning. We used a touchscreen system to assess NR2A subunit knockout mice (KO) for (1) pairwise visual discrimination and reversal learning and (2) acquisition and extinction of an instrumental response requiring no pairwise discrimination. NR2A KO mice…
Matsumoto, Narihisa; Eldridge, Mark A G; Saunders, Richard C; Reoli, Rachel; Richmond, Barry J
2016-01-06
In primates, visual recognition of complex objects depends on the inferior temporal lobe. By extension, categorizing visual stimuli based on similarity ought to depend on the integrity of the same area. We tested three monkeys before and after bilateral anterior inferior temporal cortex (area TE) removal. Although mildly impaired after the removals, they retained the ability to assign stimuli to previously learned categories, e.g., cats versus dogs, and human versus monkey faces, even with trial-unique exemplars. After the TE removals, they learned in one session to classify members from a new pair of categories, cars versus trucks, as quickly as they had learned the cats versus dogs before the removals. As with the dogs and cats, they generalized across trial-unique exemplars of cars and trucks. However, as seen in earlier studies, these monkeys with TE removals had difficulty learning to discriminate between two simple black and white stimuli. These results raise the possibility that TE is needed for memory of simple conjunctions of basic features, but that it plays only a small role in generalizing overall configural similarity across a large set of stimuli, such as would be needed for perceptual categorical assignment. The process of seeing and recognizing objects is attributed to a set of sequentially connected brain regions stretching forward from the primary visual cortex through the temporal lobe to the anterior inferior temporal cortex, a region designated area TE. Area TE is considered the final stage for recognizing complex visual objects, e.g., faces. It has been assumed, but not tested directly, that this area would be critical for visual generalization, i.e., the ability to place objects such as cats and dogs into their correct categories. Here, we demonstrate that monkeys rapidly and seemingly effortlessly categorize large sets of complex images (cats vs dogs, cars vs trucks), surprisingly, even after removal of area TE, leaving a puzzle about how this generalization is done. Copyright © 2016 the authors 0270-6474/16/360043-11$15.00/0.
Zhou, Zhe Charles; Yu, Chunxiu; Sellers, Kristin K.; Fröhlich, Flavio
2016-01-01
Visual discrimination requires sensory processing followed by a perceptual decision. Despite a growing understanding of visual areas in this behavior, it is unclear what role top-down signals from prefrontal cortex play, in particular as a function of perceptual difficulty. To address this gap, we investigated how neurons in dorso-lateral frontal cortex (dl-FC) of freely-moving ferrets encode task variables in a two-alternative forced choice visual discrimination task with high- and low-contrast visual input. About two-thirds of all recorded neurons in dl-FC were modulated by at least one of the two task variables, task difficulty and target location. More neurons in dl-FC preferred the hard trials; no such preference bias was found for target location. In individual neurons, this preference for specific task types was limited to brief epochs. Finally, optogenetic stimulation confirmed the functional role of the activity in dl-FC before target touch; suppression of activity in pyramidal neurons with the ArchT silencing opsin resulted in a decrease in reaction time to touch the target but not to retrieve reward. In conclusion, dl-FC activity is differentially recruited for high perceptual difficulty in the freely-moving ferret and the resulting signal may provide top-down behavioral inhibition. PMID:27025995
Electrophysiological Evidence for Ventral Stream Deficits in Schizophrenia Patients
Plomp, Gijs; Roinishvili, Maya; Chkonia, Eka; Kapanadze, George; Kereselidze, Maia; Brand, Andreas; Herzog, Michael H.
2013-01-01
Schizophrenic patients suffer from many deficits including visual, attentional, and cognitive ones. Visual deficits are of particular interest because they are at the fore-end of information processing and can provide clear examples of interactions between sensory, perceptual, and higher cognitive functions. Visual deficits in schizophrenic patients are often attributed to impairments in the dorsal (where) rather than the ventral (what) stream of visual processing. We used a visual-masking paradigm in which patients and matched controls discriminated small vernier offsets. We analyzed the evoked electroencephalography (EEG) responses and applied distributed electrical source imaging techniques to estimate activity differences between conditions and groups throughout the brain. Compared with controls, patients showed strongly reduced discrimination accuracy, confirming previous work. The behavioral deficits corresponded to pronounced decreases in the evoked EEG response at around 200 ms after stimulus onset. At this latency, patients showed decreased activity for targets in left parietal cortex (dorsal stream), but the decrease was most pronounced in lateral occipital cortex (in the ventral stream). These deficiencies occurred at latencies that reflect object processing and fine shape discriminations. We relate the reduced ventral stream activity to deficient top-down processing of target stimuli and provide a framework for relating the commonly observed dorsal stream deficiencies with the currently observed ventral stream deficiencies. PMID:22258884
Hu, Weiming; Gao, Jin; Xing, Junliang; Zhang, Chao; Maybank, Stephen
2017-01-01
An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a second-order tensor, which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices which are used to map the original tensor samples to the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning-based semi-supervised strategy to iteratively adjust the embedding space into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.
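The state-estimation step that such a tracker builds on can be sketched as a generic bootstrap particle filter, shown below. The appearance likelihood is a placeholder where the paper's tensor-based graph-embedding score would go; nothing here reproduces the authors' algorithm.

```python
# Generic bootstrap particle filter for a 2D object position: predict with a
# random-walk motion model, weight by an appearance likelihood, resample.
# The likelihood below is a stand-in for the paper's embedding-based score.
import numpy as np

rng = np.random.default_rng(4)

def appearance_likelihood(states, target=np.array([50.0, 50.0])):
    """Placeholder likelihood: higher when a particle lies near the target."""
    d2 = np.sum((states - target) ** 2, axis=1)
    return np.exp(-d2 / (2 * 10.0 ** 2))

def particle_filter_step(states, weights, motion_std=3.0):
    states = states + rng.normal(scale=motion_std, size=states.shape)  # predict
    weights = weights * appearance_likelihood(states)                  # update
    weights /= weights.sum()
    idx = rng.choice(len(states), size=len(states), p=weights)         # resample
    return states[idx], np.full(len(states), 1.0 / len(states))

states = rng.uniform(0, 100, size=(500, 2))       # particle (x, y) positions
weights = np.full(500, 1.0 / 500)
for _ in range(10):
    states, weights = particle_filter_step(states, weights)
print("estimated object position:", states.mean(axis=0))
```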
Do rhesus monkeys (Macaca mulatta) perceive illusory motion?
Agrillo, Christian; Gori, Simone; Beran, Michael J
2015-07-01
During the last decade, visual illusions have been used repeatedly to understand similarities and differences in visual perception of human and non-human animals. However, nearly all studies have focused only on illusions not related to motion perception, and to date, it is unknown whether non-human primates perceive any kind of motion illusion. In the present study, we investigated whether rhesus monkeys (Macaca mulatta) perceived one of the most popular motion illusions in humans, the Rotating Snake illusion (RSI). To this purpose, we set up four experiments. In Experiment 1, subjects initially were trained to discriminate static versus dynamic arrays. Once reaching the learning criterion, they underwent probe trials in which we presented the RSI and a control stimulus identical in overall configuration with the exception that the order of the luminance sequence was changed in a way that no apparent motion is perceived by humans. The overall performance of monkeys indicated that they spontaneously classified RSI as a dynamic array. Subsequently, we tested adult humans in the same task with the aim of directly comparing the performance of human and non-human primates (Experiment 2). In Experiment 3, we found that monkeys can be successfully trained to discriminate between the RSI and a control stimulus. Experiment 4 showed that a simple change in luminance sequence in the two arrays could not explain the performance reported in Experiment 3. These results suggest that some rhesus monkeys display a human-like perception of this motion illusion, raising the possibility that the neurocognitive systems underlying motion perception may be similar between human and non-human primates.
Examining the relationship between skilled music training and attention.
Wang, Xiao; Ossher, Lynn; Reuter-Lorenz, Patricia A
2015-11-01
While many aspects of cognition have been investigated in relation to skilled music training, surprisingly little work has examined the connection between music training and attentional abilities. The present study investigated the performance of skilled musicians on cognitively demanding sustained attention tasks, measuring both temporal and visual discrimination over a prolonged duration. Participants with extensive formal music training were found to have superior performance on a temporal discrimination task, but not a visual discrimination task, compared to participants with no music training. In addition, no differences were found between groups in vigilance decrement in either type of task. Although no differences were evident in vigilance per se, the results indicate that performance in an attention-demanding temporal discrimination task was superior in individuals with extensive music training. We speculate that this basic cognitive ability may contribute to advantages that musicians show in other cognitive measures. Copyright © 2015 Elsevier Inc. All rights reserved.
Radić, Josipa; Ljutić, Dragan; Radić, Mislav; Kovačić, Vedran; Sain, Milenka; Dodig-Ćurković, Katarina
2011-01-01
Change in cognitive function is one of the well-known consequences of end-stage renal disease (ESRD). The aim of this study was to determine the effect of hemodialysis (HD) and continuous ambulatory peritoneal dialysis (CAPD) on cognitive and motor functions. In this cross-sectional study, cognitive and motor functions were investigated in a selected population of 42 patients with ESRD (22 patients on chronic HD and 20 patients on CAPD, aged 50.31 ± 11.07 years). Assessment of cognitive and motor functions was performed with the Symbol Digit Modalities Test (SDMT) and the Complex Reactiometer Drenovac (CRD-series), a battery of computer-generated psychological tests measuring simple visual discrimination of signal location, short-term memory, simple convergent visual orientation, and convergent thinking. No statistically significant difference in cognitive-motor functions between HD and CAPD patients was found for any of the time-related parameters of the CRD-series tests or for the SDMT score. Higher serum levels of albumin, creatinine, and calcium were correlated with better cognitive-motor performance among all patients regardless of dialysis modality. A significant correlation between ultrafiltration rate per HD session and the short-term memory actualization test score (CRD-324 MT) was found among HD patients (r = 0.434, p = 0.025). This study demonstrates that well-nourished and medically stable HD and CAPD patients without clinical signs of dementia or cognitive impairment, and without significant differences in age and level of education, performed all tests of cognitive-motor ability without statistically significant differences.
Tcheng, David K.; Nayak, Ashwin K.; Fowlkes, Charless C.; Punyasena, Surangi W.
2016-01-01
Discriminating between black and white spruce (Picea mariana and Picea glauca) is a difficult palynological classification problem that, if solved, would provide valuable data for paleoclimate reconstructions. We developed an open-source visual recognition software (ARLO, Automated Recognition with Layered Optimization) capable of differentiating between these two species at an accuracy on par with human experts. The system applies pattern recognition and machine learning to the analysis of pollen images and discovers general-purpose image features, defined by simple features of lines and grids of pixels taken at different dimensions, size, spacing, and resolution. It adapts to a given problem by searching for the most effective combination of both feature representation and learning strategy. This results in a powerful and flexible framework for image classification. We worked with images acquired using an automated slide scanner. We first applied a hash-based “pollen spotting” model to segment pollen grains from the slide background. We next tested ARLO’s ability to reconstruct black to white spruce pollen ratios using artificially constructed slides of known ratios. We then developed a more scalable hash-based method of image analysis that was able to distinguish between the pollen of black and white spruce with an estimated accuracy of 83.61%, comparable to human expert performance. Our results demonstrate the capability of machine learning systems to automate challenging taxonomic classifications in pollen analysis, and our success with simple image representations suggests that our approach is generalizable to many other object recognition problems. PMID:26867017
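To give a feel for the kind of feature ARLO searches over, the sketch below extracts coarse grid-of-pixel averages at several resolutions and feeds them to an ordinary classifier. It is an illustration of the idea only, not the ARLO software or its layered feature/learner search, and the images are random arrays rather than pollen micrographs.

```python
# Multi-resolution grid-average features from images, fed to a stock
# classifier. Synthetic stand-ins for pollen images; not the ARLO pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def grid_features(img, sizes=(2, 4, 8)):
    """Mean intensity over g x g grids of the image, for several grid sizes."""
    feats = []
    h, w = img.shape
    for g in sizes:
        cells = img[: h - h % g, : w - w % g].reshape(g, h // g, g, w // g)
        feats.append(cells.mean(axis=(1, 3)).ravel())
    return np.concatenate(feats)

rng = np.random.default_rng(5)
images = rng.random(size=(120, 64, 64))          # stand-ins for pollen images
labels = rng.integers(0, 2, size=120)            # black vs. white spruce
X = np.array([grid_features(im) for im in images])
clf = RandomForestClassifier(random_state=0).fit(X[:90], labels[:90])
print("hold-out accuracy:", clf.score(X[90:], labels[90:]))
```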
NASA Astrophysics Data System (ADS)
Pratavieira, S.; Santos, P. L. A.; Bagnato, V. S.; Kurachi, C.
2009-06-01
Oral and skin cancers constitute a major global health problem with great impact on patients. The most common screening method for oral cancer is visual inspection and palpation of the mouth. Visual examination relies heavily on the experience and skill of the physician to identify and delineate early premalignant and cancerous changes, which is not simple given the similar appearance of early-stage cancers and benign lesions. Optical imaging has the potential to address these clinical challenges. Contrast between normal and neoplastic areas may be increased, relative to conventional white-light examination, by appropriate choice of illumination and detection conditions. Reflectance imaging can detect local changes in tissue scattering and absorption, and fluorescence imaging can probe changes in biochemical composition. These changes have been shown to be indicative of malignant progression. Widefield optical imaging systems are of interest because they can enhance screening over large regions, allowing the discrimination and delineation of neoplastic, and potentially of occult, lesions. Digital image processing allows the combination of autofluorescence and reflectance images in order to objectively identify and delineate the peripheral extent of neoplastic lesions in the skin and oral cavity. Combining information from different imaging modalities has the potential to increase diagnostic performance, because each provides distinct information. A simple widefield imaging device based on fluorescence and reflectance modes, together with digital image processing, was assembled and its performance tested in an animal study.
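A toy illustration of combining the two modalities is sketched below: a fluorescence-to-reflectance ratio map thresholded to outline a region of reduced autofluorescence. The device's actual processing pipeline is not described in enough detail to reproduce, so everything here, including the synthetic frames and the threshold rule, is assumed.

```python
# Toy combination of reflectance and autofluorescence frames into a ratio
# map, thresholded to flag low-autofluorescence pixels. Synthetic data only;
# not the assembled device's algorithm.
import numpy as np

rng = np.random.default_rng(6)
reflectance = rng.random((256, 256)) + 0.5        # synthetic reflectance frame
fluorescence = rng.random((256, 256))             # synthetic fluorescence frame
fluorescence[100:150, 100:150] *= 0.3             # mock lesion: reduced autofluorescence

ratio = fluorescence / np.maximum(reflectance, 1e-6)
threshold = ratio.mean() - 2 * ratio.std()        # assumed, arbitrary rule
lesion_mask = ratio < threshold
print("flagged pixels:", int(lesion_mask.sum()))
```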
Perception of animacy in dogs and humans.
Abdai, Judit; Ferdinandy, Bence; Terencio, Cristina Baño; Pogány, Ákos; Miklósi, Ádám
2017-06-01
Humans have a tendency to perceive inanimate objects as animate based on simple motion cues. Although animacy is considered as a complex cognitive property, this recognition seems to be spontaneous. Researchers have found that young human infants discriminate between dependent and independent movement patterns. However, quick visual perception of animate entities may be crucial to non-human species as well. Based on general mammalian homology, dogs may possess similar skills to humans. Here, we investigated whether dogs and humans discriminate similarly between dependent and independent motion patterns performed by geometric shapes. We projected a side-by-side video display of the two patterns and measured looking times towards each side, in two trials. We found that in Trial 1, both dogs and humans were equally interested in the two patterns, but in Trial 2 of both species, looking times towards the dependent pattern decreased, whereas they increased towards the independent pattern. We argue that dogs and humans spontaneously recognized the specific pattern and habituated to it rapidly, but continued to show interest in the 'puzzling' pattern. This suggests that both species tend to recognize inanimate agents as animate relying solely on their motions. © 2017 The Author(s).
The visual discrimination of intensity and the Weber-Fechner law
Hecht, Selig
1924-01-01
1. A study of the historical development of the Weber-Fechner law shows that it fails to describe intensity perception; first, because it is based on observations which do not record intensity discrimination accurately, and second, because it omits the essentially discontinuous nature of the recognition of intensity differences. 2. There is presented a series of data, assembled from various sources, which proves that in the visual discrimination of intensity the threshold difference ΔI bears no constant relation to the intensity I. The evidence shows unequivocally that as the intensity rises, the ratio ΔI/I first decreases and then increases. 3. The data are then subjected to analysis in terms of a photochemical system already proposed for the visual activity of the rods and cones. It is found that for the retinal elements to discriminate between one intensity and the next perceptible one, the transition from one to the other must involve the decomposition of a constant amount of photosensitive material. 4. The magnitude of this unitary increment in the quantity of photochemical action is greater for the rods than for the cones. Therefore, below a certain critical illumination—the cone threshold—intensity discrimination is controlled by the rods alone, but above this point it is determined by the cones alone. 5. The unitary increments in retinal photochemical action may be interpreted as being recorded by each rod and cone; or as conditioning the variability of the retinal cells so that each increment involves a constant increase in the number of active elements; or as a combination of the two interpretations. 6. Comparison with critical data of such diverse nature as dark adaptation, absolute thresholds, and visual acuity shows that the analysis is consistent with well established facts of vision. PMID:19872133
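For reference, the relation the paper argues against can be stated compactly: a constant Weber fraction, and the logarithmic sensation scale it implies under Fechner's integration.

```latex
% Constant Weber fraction and the Fechner logarithmic scale it implies:
\[
  \frac{\Delta I}{I} = k
  \qquad\Longrightarrow\qquad
  S = c \,\ln\!\left(\frac{I}{I_0}\right).
\]
% Hecht's compiled data instead show \(\Delta I / I\) first decreasing and
% then increasing with I, with separate rod and cone branches.
```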
Visual Deficit in Albino Rats Following Fetal X Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
VAN DER ELST, DIRK H.; PORTER, PAUL B.; SHARP, JOSEPH C.
1963-02-01
To investigate the effect of radiation on visual ability, five groups of rats on the 15th day of gestation received x irradiation in doses of 0, 50, 75, 100, or 150 r at 50 r/min. Two-thirds of the newborn rats died or were killed and eaten during the first postnatal week. The 75- and 50-r groups were lost entirely. The cannibalism occurred in all groups, so that its cause was uncertain. The remaining rats, which as fetuses had received 0, 100, and 150 r, were tested for visual discrimination in a water-flooded T-maze. All 3 groups discriminated a lighted escape ladder from the unlighted arm of the T with near-equal facility. Thereafter, as the light was dimmed progressively, performance declined in relation to dose. With the light turned off, but the bulb and ladder visible in ambient illumination, the 150-r group performed at chance, the 100-r group reliably better, and the control group better still. Thus, in the more precise task the irradiated animals failed. Since irradiation on the 15th day primarily damages the cortex, central blindness seems the most likely explanation. All animals had previously demonstrated their ability to solve the problem conceptually; hence a conclusion of visual deficiency seems justified. The similar performances of all groups during the easiest light discrimination test showed that the heavily irradiated and severely injured animals of the 150-r group were nonetheless able to learn readily. Finally, contrary to earlier studies in which irradiated rats were retarded in discriminating a light in a Skinner box, present tests reveal impairment neither in learning rate nor light discrimination.
A Neural Marker of Medical Visual Expertise: Implications for Training
ERIC Educational Resources Information Center
Rourke, Liam; Cruikshank, Leanna C.; Shapke, Larissa; Singhal, Anthony
2016-01-01
Researchers have identified a component of the EEG that discriminates visual experts from novices. The marker indexes a comprehensive model of visual processing, and if it is apparent in physicians, it could be used to investigate the development and training of their visual expertise. The purpose of this study was to determine whether a neural…
Impaired Filtering of Behaviourally Irrelevant Visual Information in Dyslexia
ERIC Educational Resources Information Center
Roach, Neil W.; Hogben, John H.
2007-01-01
A recent proposal suggests that dyslexic individuals suffer from attentional deficiencies, which impair the ability to selectively process incoming visual information. To investigate this possibility, we employed a spatial cueing procedure in conjunction with a single fixation visual search task measuring thresholds for discriminating the…
Colour discrimination and categorisation in Williams syndrome.
Farran, Emily K; Cranwell, Matthew B; Alvarez, James; Franklin, Anna
2013-10-01
Individuals with Williams syndrome (WS) present with impaired functioning of the dorsal visual stream relative to the ventral visual stream. As such, little attention has been given to ventral stream functions in WS. We investigated colour processing, a predominantly ventral stream function, for the first time in nineteen individuals with Williams syndrome. Colour discrimination was assessed using the Farnsworth-Munsell 100 hue test. Colour categorisation was assessed using a match-to-sample test and a colour naming task. A visual search task was also included as a measure of sensitivity to the size of perceptual colour difference. Results showed that individuals with WS have reduced colour discrimination relative to typically developing participants matched for chronological age; performance was commensurate with a typically developing group matched for non-verbal ability. In contrast, categorisation was typical in WS, although there was some evidence that sensitivity to the size of perceptual colour differences was reduced in this group. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hippocampus, perirhinal cortex, and complex visual discriminations in rats and humans
Hales, Jena B.; Broadbent, Nicola J.; Velu, Priya D.
2015-01-01
Structures in the medial temporal lobe, including the hippocampus and perirhinal cortex, are known to be essential for the formation of long-term memory. Recent animal and human studies have investigated whether perirhinal cortex might also be important for visual perception. In our study, using a simultaneous oddity discrimination task, rats with perirhinal lesions were impaired and did not exhibit the normal preference for exploring the odd object. Notably, rats with hippocampal lesions exhibited the same impairment. Thus, the deficit is unlikely to illuminate functions attributed specifically to perirhinal cortex. Both lesion groups were able to acquire visual discriminations involving the same objects used in the oddity task. Patients with hippocampal damage or larger medial temporal lobe lesions were intact in a similar oddity task that allowed participants to explore objects quickly using eye movements. We suggest that humans were able to rely on an intact working memory capacity to perform this task, whereas rats (who moved slowly among the objects) needed to rely on long-term memory. PMID:25593294
NASA Astrophysics Data System (ADS)
Clawson, Wesley Patrick
Previous studies, both theoretical and experimental, of network-level dynamics in the cerebral cortex show evidence for a statistical phenomenon called criticality, a phenomenon originally studied in the context of phase transitions in physical systems and associated with favorable information processing in the brain. The focus of this thesis is to expand upon past results with new experimentation and modeling to show a relationship between criticality and the ability to detect and discriminate sensory input. A line of theoretical work predicts maximal sensory discrimination as a functional benefit of criticality, which can be characterized using the mutual information between the sensory input (the visual stimulus) and the neural response. The primary finding of our experiments in turtle visual cortex and of our neuronal network modeling confirms this theoretical prediction: sensory discrimination is maximized when visual cortex operates near criticality. In addition to presenting this primary finding in detail, this thesis also addresses our preliminary results on change-point detection in experimentally measured cortical dynamics.
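A hedged sketch of the kind of quantity referred to above: a plug-in estimate of the mutual information between a discrete stimulus label and a discretized neural response, computed from a joint histogram. The variable names, bin counts, and toy data are illustrative assumptions, not taken from the thesis.

```python
# Plug-in mutual information between a discrete stimulus and a binned response.
import numpy as np

def mutual_information(stimulus, response, n_bins=8):
    """Estimate I(stimulus; response) in bits from a joint histogram."""
    # Discretize the continuous response into roughly equally populated bins.
    edges = np.quantile(response, np.linspace(0, 1, n_bins + 1))
    r_binned = np.clip(np.digitize(response, edges[1:-1]), 0, n_bins - 1)
    joint, _, _ = np.histogram2d(stimulus, r_binned,
                                 bins=[len(np.unique(stimulus)), n_bins])
    p_sr = joint / joint.sum()
    p_s = p_sr.sum(axis=1, keepdims=True)
    p_r = p_sr.sum(axis=0, keepdims=True)
    nz = p_sr > 0
    return float(np.sum(p_sr[nz] * np.log2(p_sr[nz] / (p_s @ p_r)[nz])))

# Toy usage: two stimulus classes, responses that weakly track the stimulus.
rng = np.random.default_rng(1)
stim = rng.integers(0, 2, 1000)
resp = stim + rng.normal(0, 1.0, 1000)
print(mutual_information(stim, resp))
```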
Color discrimination errors associate with axial motor impairments in Parkinson's disease.
Bohnen, Nicolaas I; Haugen, Jacob; Ridder, Andrew; Kotagal, Vikas; Albin, Roger L; Frey, Kirk A; Müller, Martijn L T M
2017-01-01
Visual function deficits are more common in imbalance-predominant than in tremor-predominant PD, suggesting a pathophysiological role of impaired visual function in axial motor impairments. The aim was to investigate the relationship between changes in color discrimination and motor impairments in PD while accounting for cognitive and other confounding factors. PD subjects (n = 49, age 66.7±8.3 years; Hoehn & Yahr stage 2.6±0.6) completed color discrimination assessment using the Farnsworth-Munsell 100 Hue Color Vision Test, neuropsychological and motor assessments, and [11C]dihydrotetrabenazine vesicular monoamine transporter type 2 PET imaging. MDS-UPDRS sub-scores for cardinal motor features were computed. Timed Up and Go mobility and walking tests were assessed in 48 subjects. Bivariate correlation coefficients between color discrimination and motor variables were significant only for the Timed Up and Go (R_S = 0.44, P = 0.0018) and the MDS-UPDRS axial motor scores (R_S = 0.38, P = 0.0068). Multiple regression confounder analysis using the Timed Up and Go as outcome parameter showed a significant total model (F(5,43) = 7.3, P < 0.0001) with significant regressor effects for color discrimination (standardized β = 0.32, t = 2.6, P = 0.012), global cognitive Z-score (β = -0.33, t = -2.5, P = 0.018), and duration of disease (β = 0.26, t = 1.8, P = 0.038), but not for age or striatal dopaminergic binding. The color discrimination test was also a significant independent regressor in the MDS-UPDRS axial motor model (standardized β = 0.29, t = 2.4, P = 0.022; total model F(5,43) = 6.4, P = 0.0002). Color discrimination errors associate with axial motor features in PD independently of cognitive deficits, nigrostriatal dopaminergic denervation, and other confounding variables. These findings may reflect shared pathophysiology between color discrimination impairments and axial motor burden in PD.
Fumagalli, Giorgio G; Basilico, Paola; Arighi, Andrea; Bocchetta, Martina; Dick, Katrina M; Cash, David M; Harding, Sophie; Mercurio, Matteo; Fenoglio, Chiara; Pietroboni, Anna M; Ghezzi, Laura; van Swieten, John; Borroni, Barbara; de Mendonça, Alexandre; Masellis, Mario; Tartaglia, Maria C; Rowe, James B; Graff, Caroline; Tagliavini, Fabrizio; Frisoni, Giovanni B; Laforce, Robert; Finger, Elizabeth; Sorbi, Sandro; Scarpini, Elio; Rohrer, Jonathan D; Galimberti, Daniela
2018-05-24
In patients with frontotemporal dementia, it has been shown that brain atrophy occurs earliest in the anterior cingulate, insula and frontal lobes. We used visual rating scales to investigate whether identifying atrophy in these areas may be helpful in distinguishing symptomatic patients carrying different causal mutations in the microtubule-associated protein tau (MAPT), progranulin (GRN) and chromosome 9 open reading frame (C9ORF72) genes. We also analysed asymptomatic carriers to see whether it was possible to visually identify brain atrophy before the appearance of symptoms. Magnetic resonance imaging of 343 subjects (63 symptomatic mutation carriers, 132 presymptomatic mutation carriers and 148 control subjects) from the Genetic Frontotemporal Dementia Initiative study was analysed by two trained raters using a protocol of six visual rating scales that identified atrophy in key regions of the brain (orbitofrontal, anterior cingulate, frontoinsula, anterior and medial temporal lobes and posterior cortical areas). Intra- and interrater agreement were greater than 0.73 for all the scales. Voxel-based morphometric analysis demonstrated a strong correlation between the visual rating scale scores and grey matter atrophy in the same region for each of the scales. Typical patterns of atrophy were identified: symmetric anterior and medial temporal lobe involvement for MAPT, asymmetric frontal and parietal loss for GRN, and a more widespread pattern for C9ORF72. Presymptomatic MAPT carriers showed greater atrophy in the medial temporal region than control subjects, but the visual rating scales could not identify presymptomatic atrophy in GRN or C9ORF72 carriers. These simple-to-use and reproducible scales may be useful tools in the clinical setting for the discrimination of different mutations of frontotemporal dementia, and they may even help to identify atrophy prior to onset in those with MAPT mutations.
Visual feature discrimination versus compression ratio for polygonal shape descriptors
NASA Astrophysics Data System (ADS)
Heuer, Joerg; Sanahuja, Francesc; Kaup, Andre
2000-10-01
In the last decade, several methods for low-level indexing of visual features have appeared. Most often these were evaluated with respect to their discrimination power using measures such as precision and recall; accordingly, the targeted application was indexing of visual data within databases. During the standardization process of MPEG-7, the view on indexing of visual data changed to also take communication aspects into account, where coding efficiency is important. Even though the descriptors used for indexing are small compared to the size of images, several descriptors may be linked to a single image, characterizing different features and regions. Besides reducing the memory footprint for transmitting the descriptor and for storing it in a database, reducing the dimensionality of the descriptor can also speed up search and filtering, provided the matching metric is adjusted accordingly. Based on a polygon shape descriptor presented for MPEG-7, this paper compares the discrimination power of the descriptor against its memory consumption. Different quantization-based methods are presented and their effect on retrieval performance is measured. Finally, an optimized computation of the descriptor is presented.
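As an illustration of the trade-off discussed above, the sketch below uniformly quantizes a descriptor vector at several bit depths and reports the resulting memory footprint and worst-case reconstruction error. The descriptor values and bit depths are placeholder choices, not the MPEG-7 polygon descriptor itself.

```python
# Uniform quantization of a feature vector: fewer bits per coefficient means a
# smaller descriptor but a larger matching distortion.
import numpy as np

def quantize(descriptor, n_bits):
    """Uniformly quantize values in [0, 1] to n_bits and reconstruct them."""
    levels = 2 ** n_bits
    codes = np.clip(np.round(descriptor * (levels - 1)), 0, levels - 1)
    return codes / (levels - 1)

rng = np.random.default_rng(4)
desc = rng.random(32)                      # a 32-dimensional descriptor in [0, 1]
for bits in (8, 4, 2):
    approx = quantize(desc, bits)
    err = np.abs(desc - approx).max()      # worst-case per-coefficient distortion
    print(f"{bits} bits/coefficient -> {bits * desc.size} bits total, max error {err:.3f}")
```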
Bastien, Maude; Moffet, Hélène; Bouyer, Laurent; Perron, Marc; Hébert, Luc J; Leblond, Jean
2014-02-01
The Star Excursion Balance Test (SEBT) has frequently been used to measure motor control and residual functional deficits at different stages of recovery from lateral ankle sprain (LAS) in various populations. However, the validity of the measure used to characterize performance, the maximal reach distance (MRD) measured by visual estimation, is still unknown. The aims were to evaluate the concurrent validity of the visually estimated MRD in the SEBT against the MRD measured with a 3D motion-capture system, and to evaluate and compare the discriminant validity of two MRD-normalization methods (by height or by lower-limb length) in participants with or without LAS (n = 10 per group). There was high concurrent validity and a good degree of accuracy between the visual estimation measurement and the gold-standard MRD measurement for both groups and under all conditions. The Cohen d ratios between groups and MANOVA products were higher when computed from MRD data normalized by height. The results support the concurrent validity of visual estimation of the MRD and the use of the SEBT to evaluate motor control. Moreover, normalization of MRD data by height appears to increase the discriminant validity of this test.
Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J
2017-01-01
Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.
Castillo-Padilla, Diana V; Funke, Klaus
2016-01-01
The early cortical critical period is a state of enhanced neuronal plasticity that enables the establishment of specific neuronal connections during first sensory experience. Visual performance with regard to pattern discrimination is impaired if the cortex is deprived of visual input during the critical period. We wondered how unspecific activation of the visual cortex before closure of the critical period using repetitive transcranial magnetic stimulation (rTMS) could affect the critical period and the visual performance of the experimental animals. Would it cause premature closure of the plastic state and thus worsen experience-dependent visual performance, or would it be able to preserve plasticity? Effects of intermittent theta-burst stimulation (iTBS) were compared with those of an enriched environment (EE) during dark-rearing (DR) from birth. Rats dark-reared in a standard cage showed poor improvement in a visual pattern discrimination task, while rats housed in EE or treated with iTBS showed a performance indistinguishable from that of rats reared in a normal light/dark cycle. The behavioral effects were accompanied by correlated changes in the expression of brain-derived neurotrophic factor (BDNF) and atypical PKC (PKCζ/PKMζ), two factors controlling stabilization of synaptic potentiation. It appears that not only nonvisual sensory activity and exercise but also cortical activation induced by rTMS has the potential to alleviate the effects of DR on cortical development, most likely due to stimulation of BDNF synthesis and release. As we showed previously, iTBS reduced the expression of parvalbumin in inhibitory cortical interneurons, indicating that modulation of the activity of fast-spiking interneurons contributes to the observed effects of iTBS. © 2015 Wiley Periodicals, Inc.
Monkey Pulvinar Neurons Fire Differentially to Snake Postures
Le, Quan Van; Isbell, Lynne A.; Matsumoto, Jumpei; Le, Van Quang; Hori, Etsuro; Tran, Anh Hai; Maior, Rafael S.; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao
2014-01-01
There is growing evidence from both behavioral and neurophysiological approaches that primates are able to rapidly discriminate visually between snakes and innocuous stimuli. Recent behavioral evidence suggests that primates are also able to discriminate the level of threat posed by snakes, by responding more intensely to a snake model poised to strike than to snake models in coiled or sinusoidal postures (Etting and Isbell 2014). In the present study, we examine the potential for an underlying neurological basis for this ability. Previous research indicated that the pulvinar is highly sensitive to snake images. We thus recorded pulvinar neurons in Japanese macaques (Macaca fuscata) while they viewed photos of snakes in striking and non-striking postures in a delayed non-matching to sample (DNMS) task. Of 821 neurons recorded, 78 visually responsive neurons were tested with all of the snake images. We found that pulvinar neurons in the medial and dorsolateral pulvinar responded more strongly to snakes in threat displays poised to strike than to snakes in non-threat-displaying postures, with no significant difference in response latencies. A multidimensional scaling analysis of the 78 visually responsive neurons indicated that threat-displaying and non-threat-displaying snakes were separated into two different clusters in the first epoch of 50 ms after stimulus onset, suggesting bottom-up visual information processing. These results indicate that pulvinar neurons in primates discriminate snakes poised to strike from snakes in non-threat-displaying postures. This neuronal ability likely facilitates behavioral discrimination and has clear adaptive value. Our results are thus consistent with the Snake Detection Theory, which posits that snakes were instrumental in the evolution of primate visual systems. PMID:25479158
The role of Broca's area in speech perception: evidence from aphasia revisited.
Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele
2011-12-01
Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. 2011 Elsevier Inc. All rights reserved.
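Since discrimination performance above is reported in d' units, a small illustration of that measure follows: the standard signal-detection computation of d' from hit and false-alarm rates. The numbers are made up for illustration and are not data from the study.

```python
# d' (d-prime) sensitivity from hit and false-alarm rates.
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Example: 95% hits and 10% false alarms give a d' of about 2.93.
print(d_prime(0.95, 0.10))
```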
2018-01-01
Objective: To study the performance of multifocal-visual-evoked-potential (mfVEP) signals filtered using empirical mode decomposition (EMD) in discriminating, based on amplitude, between control and multiple sclerosis (MS) patient groups, and to reduce variability in interocular latency in control subjects. Methods: MfVEP signals were obtained from controls, clinically definitive MS and MS-risk progression patients (radiologically isolated syndrome (RIS) and clinically isolated syndrome (CIS)). The conventional method of processing mfVEPs consists of using a 1–35 Hz bandpass frequency filter (XDFT). The EMD algorithm was used to decompose the XDFT signals into several intrinsic mode functions (IMFs). This signal processing was assessed by computing the amplitudes and latencies of the XDFT and IMF signals (XEMD). The amplitudes from the full visual field and from ring 5 (9.8–15° eccentricity) were studied. The discrimination index was calculated between controls and patients. Interocular latency values were computed from the XDFT and XEMD signals in a control database to study variability. Results: Using the amplitude of the mfVEP signals filtered with EMD (XEMD) obtains higher discrimination index values than the conventional method when control, MS-risk progression (RIS and CIS) and MS subjects are studied. The lowest variability in interocular latency computations from the control patient database was obtained by comparing the XEMD signals with the XDFT signals. Even better results (amplitude discrimination and latency variability) were obtained in ring 5 (9.8–15° eccentricity of the visual field). Conclusions: Filtering mfVEP signals using the EMD algorithm will result in better identification of subjects at risk of developing MS and better accuracy in latency studies. This could be applied to assess visual cortex activity in MS diagnosis and evolution studies. PMID:29677200
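For orientation, the sketch below reproduces only the "conventional" pre-processing step named in the abstract: a 1–35 Hz bandpass filter applied to an mfVEP trace before its amplitude is measured. The EMD-based alternative studied in the paper would replace this filter with a decomposition into intrinsic mode functions and a selection of the relevant IMFs; that step is not reproduced here. The sampling rate and the trace itself are placeholders.

```python
# Conventional mfVEP pre-processing: zero-phase 1-35 Hz bandpass filtering.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1200.0                      # assumed sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)    # 500 ms epoch
raw = np.random.default_rng(2).standard_normal(t.size)   # stand-in mfVEP trace

b, a = butter(4, [1.0, 35.0], btype="bandpass", fs=fs)    # 1-35 Hz, 4th order
xdft = filtfilt(b, a, raw)       # zero-phase filtering preserves latency

peak_to_peak_amplitude = xdft.max() - xdft.min()
print(peak_to_peak_amplitude)
```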
Invariant recognition drives neural representations of action sequences
Poggio, Tomaso
2017-01-01
Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying the representations of action sequences constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864
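A minimal sketch of a spatiotemporal CNN of the general kind discussed above (3D convolutions over video clips followed by a linear action classifier). This is an illustrative PyTorch toy, not the authors' model; the layer sizes, clip length, and number of action classes are arbitrary assumptions.

```python
# Tiny spatiotemporal CNN: 3D convolutions over (time, height, width).
import torch
import torch.nn as nn

class TinySpatiotemporalCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),   # convolves jointly over time and space
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                      # global pooling over the whole clip
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, clip):                              # clip: (batch, 3, T, H, W)
        return self.classifier(self.features(clip).flatten(1))

# One forward pass on a random 16-frame, 64x64 clip.
model = TinySpatiotemporalCNN()
logits = model(torch.randn(2, 3, 16, 64, 64))
print(logits.shape)                                       # torch.Size([2, 10])
```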
Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa
2017-02-01
The purpose of this study was to examine executive functions in team sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes including, working memory, attention and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking, however, visual-auditory tasks have been examined rarely. Twenty-nine intercollegiate varsity ice hockey athletes (age = 19.13, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description and self-report questionnaires were used to assign participants to each group (concussion, no-concussion). Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured, Corsi block memory span and auditory tone discrimination accuracy. No differences were shown when each task was performed alone; however, athletes with a history of concussion had a significantly worse performance on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating participants with and without a history concussion.
Wang, Zhengke; Cheng-Lai, Alice; Song, Yan; Cutting, Laurie; Jiang, Yuzheng; Lin, Ou; Meng, Xiangzhi; Zhou, Xiaolin
2014-08-01
Learning to read involves discriminating between different written forms and establishing connections with phonology and semantics. This process may be partially built upon visual perceptual learning, during which the ability to process the attributes of visual stimuli progressively improves with practice. The present study investigated to what extent Chinese children with developmental dyslexia have deficits in perceptual learning by using a texture discrimination task, in which participants were asked to discriminate the orientation of target bars. Experiment 1 demonstrated that, when all of the participants started with the same initial stimulus-to-mask onset asynchrony (SOA) at 300 ms, the threshold SOA, adjusted according to response accuracy for reaching 80% accuracy, did not show a decrement over 5 days of training for children with dyslexia, whereas this threshold SOA steadily decreased over the training for the control group. Experiment 2 used an adaptive procedure to determine the threshold SOA for each participant during training. Results showed that both the dyslexia group and the control group attained perceptual learning over the sessions in 5 days, although the threshold SOAs were significantly higher for the dyslexia group than for the control group; moreover, across individual participants, the threshold SOA correlated negatively with performance in Chinese character recognition. These findings suggest that deficits in visual perceptual processing and learning might, in part, underpin difficulty in reading Chinese. Copyright © 2014 John Wiley & Sons, Ltd.
Li, Li; MaBouDi, HaDi; Egertová, Michaela; Elphick, Maurice R; Chittka, Lars; Perry, Clint J
2017-10-11
Synaptic plasticity is considered to be a basis for learning and memory. However, the relationship between synaptic arrangements and individual differences in learning and memory is poorly understood. Here, we explored how the density of microglomeruli (synaptic complexes) within specific regions of the bumblebee (Bombus terrestris) brain relates to both visual learning and inter-individual differences in learning and memory performance on a visual discrimination task. Using whole-brain immunolabelling, we measured the density of microglomeruli in the collar region (visual association areas) of the mushroom bodies of the bumblebee brain. We found that bumblebees which made fewer errors during training in a visual discrimination task had higher microglomerular density. Similarly, bumblebees that had better retention of the learned colour-reward associations two days after training had higher microglomerular density. Further experiments indicated experience-dependent changes in neural circuitry: learning a colour-reward contingency with 10 colours (but not two colours) does result, and exposure to many different colours may result, in changes to microglomerular density in the collar region of the mushroom bodies. These results reveal the varying roles that visual experience, visual learning and foraging activity have on neural structure. Although our study does not provide a causal link between microglomerular density and performance, the observed positive correlations provide new insights for future studies into how neural structure may relate to inter-individual differences in learning and memory. © 2017 The Authors.
Visual recovery in cortical blindness is limited by high internal noise
Cavanaugh, Matthew R.; Zhang, Ruyuan; Melnick, Michael D.; Das, Anasuya; Roberts, Mariel; Tadin, Duje; Carrasco, Marisa; Huxlin, Krystel R.
2015-01-01
Damage to the primary visual cortex typically causes cortical blindness (CB) in the hemifield contralateral to the damaged hemisphere. Recent evidence indicates that visual training can partially reverse CB at trained locations. Whereas training induces near-complete recovery of coarse direction and orientation discriminations, deficits in fine motion processing remain. Here, we systematically disentangle components of the perceptual inefficiencies present in CB fields before and after coarse direction discrimination training. In seven human CB subjects, we measured threshold versus noise functions before and after coarse direction discrimination training in the blind field and at corresponding intact field locations. Threshold versus noise functions were analyzed within the framework of the linear amplifier model and the perceptual template model. Linear amplifier model analysis identified internal noise as a key factor differentiating motion processing across the tested areas, with visual training reducing internal noise in the blind field. Differences in internal noise also explained residual perceptual deficits at retrained locations. These findings were confirmed with perceptual template model analysis, which further revealed that the major residual deficits between retrained and intact field locations could be explained by differences in internal additive noise. There were no significant differences in multiplicative noise or the ability to process external noise. Together, these results highlight the critical role of altered internal noise processing in mediating training-induced visual recovery in CB fields, and may explain residual perceptual deficits relative to intact regions of the visual field. PMID:26389544
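For readers unfamiliar with the framework, the linear amplifier model referred to above is commonly written as follows. This is the generic equivalent-noise formulation; the authors' exact parameterization may differ.

```latex
% Linear amplifier model (equivalent-noise form): the squared contrast threshold
% grows linearly with external noise power. N_eq is the equivalent internal
% (additive) noise and k is inversely related to calculation efficiency.
c_{\tau}^{2} \;=\; k \left( N_{\mathrm{ext}} + N_{\mathrm{eq}} \right)
```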
Visual training improves perceptual grouping based on basic stimulus features.
Kurylo, Daniel D; Waxman, Richard; Kidron, Rachel; Silverstein, Steven M
2017-10-01
Training on visual tasks improves performance on basic and higher order visual capacities. Such improvement has been linked to changes in connectivity among mediating neurons. We investigated whether training effects occur for perceptual grouping. It was hypothesized that repeated engagement of integration mechanisms would enhance grouping processes. Thirty-six participants underwent 15 sessions of training on a visual discrimination task that required perceptual grouping. Participants viewed 20 × 20 arrays of dots or Gabor patches and indicated whether the array appeared grouped as vertical or horizontal lines. Across trials stimuli became progressively disorganized, contingent upon successful discrimination. Four visual dimensions were examined, in which grouping was based on similarity in luminance, color, orientation, and motion. Psychophysical thresholds of grouping were assessed before and after training. Results indicate that performance in all four dimensions improved with training. Training on a control condition, which paralleled the discrimination task but without a grouping component, produced no improvement. In addition, training on only the luminance and orientation dimensions improved performance for those conditions as well as for grouping by color, on which training had not occurred. However, improvement from partial training did not generalize to motion. Results demonstrate that a training protocol emphasizing stimulus integration enhanced perceptual grouping. Results suggest that neural mechanisms mediating grouping by common luminance and/or orientation contribute to those mediating grouping by color but do not share resources for grouping by common motion. Results are consistent with theories of perceptual learning emphasizing plasticity in early visual processing regions.
Experience, Context, and the Visual Perception of Human Movement
ERIC Educational Resources Information Center
Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie
2004-01-01
Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…
Wynn, Gareth J; Todd, Derick M; Webber, Matthew; Bonnett, Laura; McShane, James; Kirchhof, Paulus; Gupta, Dhiraj
2014-07-01
The aim was to validate the European Heart Rhythm Association (EHRA) symptom classification in atrial fibrillation (AF) and to test whether its discriminative ability could be improved by a simple modification. We compared the EHRA classification with three quality of life (QoL) measures: the AF-specific Atrial Fibrillation Effect on QualiTy-of-life (AFEQT) questionnaire and two components of the EQ-5D instrument, namely a health-related utility, which can be used to calculate cost-effectiveness, and the visual analogue scale (VAS), which reflects patients' own assessment of health status. We then proposed a simple modification [modified EHRA (mEHRA)] to improve discrimination at the point where major treatment decisions are made. Quality of life data and clinician-allocated EHRA class were prospectively collected on 362 patients with AF. A step-wise, negative association was seen between the EHRA class and both the AFEQT and the VAS scores. Health-related utility was only significantly different between Classes 2 and 3 (P < 0.001). We developed and validated the mEHRA score, splitting Class 2 (symptomatic AF not limiting daily activities) according to whether the patients were 'troubled by their AF' (Class 2b) or not (Class 2a). This produced two distinct groups with lower AFEQT and VAS scores and, importantly, clinically and statistically significantly lower health utility (Δutility 0.9, P = 0.01) in Class 2b than in Class 2a. Based on patients' own assessment of their health status and the disease-specific AFEQT, the EHRA score can be considered a useful semi-quantitative classification. The mEHRA score gives a clearer separation in health utility for assessing the cost efficacy of interventions such as ablation, where Class 2b symptoms appear to be the appropriate treatment threshold. © The Author 2014. Published by Oxford University Press on behalf of the European Society of Cardiology.
Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination.
Palanica, Adam; Itier, Roxane J
2014-01-01
Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in fovea, irrespective of head orientation, however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity.
Saliency affects feedforward more than feedback processing in early visual cortex.
Emmanouil, Tatiana Aloi; Avigan, Philip; Persuh, Marjan; Ro, Tony
2013-07-01
Early visual cortex activity is influenced by both bottom-up and top-down factors. To investigate the influences of bottom-up (saliency) and top-down (task) factors on different stages of visual processing, we used transcranial magnetic stimulation (TMS) of areas V1/V2 to induce visual suppression at varying temporal intervals. Subjects were asked to detect and discriminate the color or the orientation of briefly-presented small lines that varied on color saliency based on color contrast with the surround. Regardless of task, color saliency modulated the magnitude of TMS-induced visual suppression, especially at earlier temporal processing intervals that reflect the feedforward stage of visual processing in V1/V2. In a second experiment we found that our color saliency effects were also influenced by an inherent advantage of the color red relative to other hues and that color discrimination difficulty did not affect visual suppression. These results support the notion that early visual processing is stimulus driven and that feedforward and feedback processing encode different types of information about visual scenes. They further suggest that certain hues can be prioritized over others within our visual systems by being more robustly represented during early temporal processing intervals. Copyright © 2013 Elsevier Ltd. All rights reserved.
Poltavski, Dmitri; Biberdorf, David
2015-01-01
In the growing field of sports vision little is still known about unique attributes of visual processing in ice hockey and what role visual processing plays in the overall athlete's performance. In the present study we evaluated whether visual, perceptual and cognitive/motor variables collected using the Nike SPARQ Sensory Training Station have significant relevance to the real game statistics of 38 Division I collegiate male and female hockey players. The results demonstrated that 69% of variance in the goals made by forwards in 2011-2013 could be predicted by their faster reaction time to a visual stimulus, better visual memory, better visual discrimination and a faster ability to shift focus between near and far objects. Approximately 33% of variance in game points was significantly related to better discrimination among competing visual stimuli. In addition, reaction time to a visual stimulus as well as stereoptic quickness significantly accounted for 24% of variance in the mean duration of the player's penalty time. This is one of the first studies to show that some of the visual skills that state-of-the-art generalised sports vision programmes are purported to target may indeed be important for hockey players' actual performance on the ice.
The sonar aperture and its neural representation in bats.
Heinrich, Melina; Warmbold, Alexander; Hoffmann, Susanne; Firzlaff, Uwe; Wiegrebe, Lutz
2011-10-26
As opposed to visual imaging, biosonar imaging of spatial object properties represents a challenge for the auditory system because its sensory epithelium is not arranged along space axes. For echolocating bats, object width is encoded not only by the amplitude of its echo (echo intensity) but also by the naturally covarying spread of angles of incidence from which the echoes impinge on the bat's ears (sonar aperture). It is unclear whether bats use the echo intensity and/or the sonar aperture to estimate an object's width. We addressed this question in a combined psychophysical and electrophysiological approach. In three virtual-object playback experiments, bats of the species Phyllostomus discolor had to discriminate simple reflections of their own echolocation calls differing in echo intensity, sonar aperture, or both. Discrimination performance for objects with physically correct covariation of sonar aperture and echo intensity ("object width") did not differ from discrimination performances when only the sonar aperture was varied. Thus, the bats were able to detect changes in object width in the absence of intensity cues. The psychophysical results are reflected in the responses of a population of units in the auditory midbrain and cortex that responded most strongly to echoes from objects with a specific sonar aperture, regardless of variations in echo intensity. Neurometric functions obtained from cortical units encoding the sonar aperture are sufficient to explain the behavioral performance of the bats. The present data show that the sonar aperture is a behaviorally relevant and reliably encoded cue for object size in bat sonar.
McLean, Rachael; Hoek, Janet; Hedderley, Duncan
2012-05-01
Dietary sodium reduction is a cost-effective public health intervention to reduce chronic disease. In response to calls for further research into front-of-pack labelling systems, we examined how alternative sodium nutrition label formats and nutrition claims influenced consumers' choice behaviour and whether consumers with or without a diagnosis of hypertension differed in their choice patterns. The study was an anonymous online experiment in which participants viewed ten choice sets featuring three fictitious brands of baked beans with varied label formats and nutritional profiles (high and low sodium) and indicated which brand in each set they would purchase if shopping for this product. Participants, recruited from New Zealand's largest online nationwide research panel, were 500 people with self-reported hypertension and 191 people without hypertension, aged 18 to 79 years. The addition of a front-of-pack label increased both groups' ability to discriminate between products with high and low sodium, while the Traffic Light label enabled better identification of the high-sodium product. Both front-of-pack formats enhanced discrimination in the presence of a reduced salt claim, but the Traffic Light label also performed better than the Percentage Daily Intake label in moderating the effect of the claim for the high-sodium product. Front-of-pack labels, particularly those with simple visual cues, enhance consumers' ability to discriminate between high- and low-sodium products, even when those products feature nutrition claims.
Kopp, Bruno; Tabeling, Sandra; Moschner, Carsten; Wessel, Karl
2007-08-17
Decision-making is a fundamental capacity which is crucial to many higher-order psychological functions. We recorded event-related potentials (ERPs) during a visual target-identification task that required go-nogo choices. Targets were identified on the basis of cross-dimensional conjunctions of particular colors and forms. Color discriminability was manipulated in three conditions to determine the effects of color distinctiveness on component processes of decision-making. Target identification was accompanied by the emergence of prefrontal P2a and P3b. Selection negativity (SN) revealed that target-compatible features captured attention more than target-incompatible features, suggesting that intra-dimensional attentional capture was goal-contingent. No changes of cross-dimensional selection priorities were measurable when color discriminability was altered. Peak latencies of the color-related SN provided a chronometric measure of the duration of attention-related neural processing. ERPs recorded over the frontocentral scalp (N2c, P3a) revealed that color-overlap distractors, more than form-overlap distractors, required additional late selection. The need for additional response selection induced by color-overlap distractors was severely reduced when color discriminability decreased. We propose a simple model of cross-dimensional perceptual decision-making. The temporal synchrony of separate color-related and form-related choices determines whether or not distractor processing includes post-perceptual stages. ERP measures contribute to a comprehensive explanation of the temporal dynamics of component processes of perceptual decision-making.
Cappelletti, Marinella; Gessaroli, Erica; Hithersay, Rosalyn; Mitolo, Micaela; Didino, Daniele; Kanai, Ryota; Cohen Kadosh, Roi; Walsh, Vincent
2013-09-11
Improvement in performance following cognitive training is known to be further enhanced when coupled with brain stimulation. Here we ask whether training-induced changes can be maintained long term and, crucially, whether they can extend to other related but untrained skills. We trained overall 40 human participants on a simple and well established paradigm assessing the ability to discriminate numerosity--or the number of items in a set--which is thought to rely on an "approximate number sense" (ANS) associated with parietal lobes. We coupled training with parietal stimulation in the form of transcranial random noise stimulation (tRNS), a noninvasive technique that modulates neural activity. This yielded significantly better and longer lasting improvement (up to 16 weeks post-training) of the precision of the ANS compared with cognitive training in absence of stimulation, stimulation in absence of cognitive training, and cognitive training coupled to stimulation to a control site (motor areas). Critically, only ANS improvement induced by parietal tRNS + Training transferred to proficiency in other parietal lobe-based quantity judgment, i.e., time and space discrimination, but not to quantity-unrelated tasks measuring attention, executive functions, and visual pattern recognition. These results indicate that coupling intensive cognitive training with tRNS to critical brain regions resulted not only in the greatest and longer lasting improvement of numerosity discrimination, but importantly in this enhancement being transferable when trained and untrained abilities are carefully chosen to share common cognitive and neuronal components.
Physical performance measures that predict faller status in community-dwelling older adults.
Macrae, P G; Lacourse, M; Moldavon, R
1992-01-01
Falls are a leading cause of fatal and nonfatal injuries among the elderly. Accurate determination of risk factors associated with falls in older adults is necessary, not only for individual patient management, but also for the development of fall prevention programs. The purpose of this study was to evaluate the effectiveness of clinical measures, such as the one-legged stance test (OLST), sit-to-stand test (STST), manual muscle tests (MMT), and response speed in predicting faller status in community-dwelling older adults (N = 94, age 60-89 years). The variables assessed were single-leg standing (as measured by OLST), STST, and MMT of 12 different muscle groups (hip flexors, hip abductors, hip adductors, knee flexors, knee extensors, ankle dorsiflexors, ankle plantarflexors, shoulder flexors, shoulder abductors, elbow flexors, elbow extensors, and finger flexors), and speed of response (as measured by a visual hand reaction and movement time task). Of the 94 older adults assessed, 28 (29.7%) reported at least one fall within the previous year. The discriminant analysis revealed that there were six variables that significantly discriminated between fallers and nonfallers. These variables included MMT of the ankle dorsiflexors, knee flexors, hip abductors, and knee extensors, as well as time on the OLST and the STST. The results indicate that simple clinical measures of musculoskeletal function can discriminate fallers from nonfallers in community-dwelling older adults. J Orthop Sports Phys Ther 1992;16(3):123-128.
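A hedged sketch of the kind of discriminant analysis reported above: fitting a linear discriminant to a handful of clinical measures to classify older adults as fallers or non-fallers. The feature matrix here is random placeholder data; only the analysis pattern (fit the discriminant, inspect accuracy and weights) mirrors the study.

```python
# Linear discriminant analysis on placeholder clinical measures.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
# Columns stand in for, e.g., OLST time, STST time, and four manual muscle tests.
X = rng.normal(size=(94, 6))
y = rng.integers(0, 2, size=94)          # 1 = faller, 0 = non-faller (placeholder labels)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.score(X, y))                   # in-sample classification accuracy
print(lda.coef_)                         # weights show which measures discriminate most
```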
Attentional Capture of Objects Referred to by Spoken Language
ERIC Educational Resources Information Center
Salverda, Anne Pier; Altmann, Gerry T. M.
2011-01-01
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…
PCANet: A Simple Deep Learning Baseline for Image Classification?
Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi
2015-12-01
In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be extremely easily and efficiently designed and learned. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with the state-of-the-art features either prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
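The sketch below illustrates the pipeline the abstract describes in a single-stage toy form: learn convolution filters as the leading principal components of image patches, binarize the filter responses into integer codes, and pool them into blockwise histograms. It is an illustrative approximation, not the authors' reference implementation; the patch size, filter count, and block size are arbitrary choices.

```python
# Single-stage PCANet-style feature extraction: PCA filters -> binary hashing
# -> blockwise histograms.
import numpy as np
from scipy.signal import convolve2d

def learn_pca_filters(images, patch=7, n_filters=8):
    """Learn patch-level PCA filters from a stack of grayscale images."""
    patches = []
    for img in images:
        H, W = img.shape
        for i in range(0, H - patch + 1, patch):        # non-overlapping, for brevity
            for j in range(0, W - patch + 1, patch):
                p = img[i:i + patch, j:j + patch].ravel()
                patches.append(p - p.mean())            # remove the patch mean
    X = np.stack(patches)                               # (num_patches, patch*patch)
    # Principal components of the patch matrix become the filter bank.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_filters].reshape(n_filters, patch, patch)

def pcanet_features(img, filters, block=16):
    """Filter, binarize, and pool one image into blockwise histograms."""
    n_filters = len(filters)
    # Binary hashing: each filter response contributes one bit per pixel.
    code = np.zeros(img.shape, dtype=np.int64)
    for k, f in enumerate(filters):
        response = convolve2d(img, f, mode="same")
        code += (response > 0).astype(np.int64) << k
    # Blockwise histograms of the integer codes form the final descriptor.
    hists = []
    H, W = img.shape
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            h, _ = np.histogram(code[i:i + block, j:j + block],
                                bins=2 ** n_filters, range=(0, 2 ** n_filters))
            hists.append(h)
    return np.concatenate(hists)

# Usage with random stand-in images.
rng = np.random.default_rng(0)
imgs = rng.standard_normal((5, 32, 32))
filters = learn_pca_filters(imgs)
print(pcanet_features(imgs[0], filters).shape)
```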
Face-gender discrimination is possible in the near-absence of attention.
Reddy, Leila; Wilken, Patrick; Koch, Christof
2004-03-02
The attentional cost associated with the visual discrimination of the gender of a face was investigated. Participants performed a face-gender discrimination task either alone (single-task) or concurrently (dual-task) with a known attentionally demanding task (5-letter T/L discrimination). Overall performance on face-gender discrimination suffered remarkably little under the dual-task condition compared to the single-task condition. Similar results were obtained in experiments that controlled for potential training effects or the use of low-level cues in this discrimination task. Our results provide further evidence against the notion that only low-level representations can be accessed outside the focus of attention.
Dopson, Jemma C; Williams, Natalie A; Esber, Guillem R; Pearce, John M
2010-11-01
According to established theories of attention (e.g., Mackintosh, 1975; Sutherland & Mackintosh, 1971), simple discriminations of the form AX+ BX- result in an increase in attention to stimuli A and B, which are relevant to the outcome that follows them, at the expense of X, which is irrelevant. Experiments that have apparently shown such changes in attention have failed to determine whether attention is enhanced to both A and B, which signal reinforcement and nonreinforcement, respectively, or just to A. In Experiments 1 and 2, pigeons were trained with a number of discriminations of the kind AX+ BX-, before compounds that had been consistently nonreinforced were involved in a subsequent discrimination. Both experiments provided support for theories that propose that more attention is paid to stimuli that consistently signal nonreinforcement than to irrelevant stimuli in simple discriminations.
Lind, O; Delhey, K
2015-03-01
Birds have sophisticated colour vision mediated by four cone types that cover a wide visual spectrum including ultraviolet (UV) wavelengths. Many birds have modest UV sensitivity provided by violet-sensitive (VS) cones with sensitivity maxima between 400 and 425 nm. However, some birds have evolved higher UV sensitivity and a larger visual spectrum given by UV-sensitive (UVS) cones maximally sensitive at 360-370 nm. The reasons for VS-UVS transitions and their relationship to visual ecology remain unclear. It has been hypothesized that the evolution of UVS-cone vision is linked to plumage colours so that visual sensitivity and feather coloration are 'matched'. This leads to the specific prediction that UVS-cone vision enhances the discrimination of plumage colours of UVS birds while such an advantage is absent or less pronounced for VS-bird coloration. We test this hypothesis using knowledge of the complex distribution of UVS cones among birds combined with mathematical modelling of colour discrimination during different viewing conditions. We find no support for the hypothesis, which, combined with previous studies, suggests only a weak relationship between UVS-cone vision and plumage colour evolution. Instead, we suggest that UVS-cone vision generally favours colour discrimination, which creates a nonspecific selection pressure for the evolution of UVS cones. © 2015 European Society For Evolutionary Biology.
A Discussion of Assessment Needs in Manual Communication for Pre-College Students.
ERIC Educational Resources Information Center
Cokely, Dennis R.
The paper reviews issues in evaluating the manual communications skills of pre-college hearing impaired students, including testing of visual discrimination and visual memory, simultaneous communication, and attention span. (CL)
A simple randomisation procedure for validating discriminant analysis: a methodological note.
Wastell, D G
1987-04-01
Because the goal of discriminant analysis (DA) is to optimise classification, it designedly exaggerates between-group differences. This bias complicates validation of DA. Jack-knifing has been used for validation but is inappropriate when stepwise selection (SWDA) is employed. A simple randomisation test is presented which is shown to give correct decisions for SWDA. The general superiority of randomisation tests over orthodox significance tests is discussed. Current work on non-parametric methods of estimating the error rates of prediction rules is briefly reviewed.
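A minimal sketch of the kind of randomisation test described, assuming a toy forward-selection plus linear discriminant pipeline; the key point it illustrates is that the entire selection procedure is re-run on each label permutation, so the selection bias is built into the null distribution.

```python
# Permutation-test sketch for stepwise discriminant analysis (toy data, not the author's code).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def stepwise_lda_score(X, y, n_keep=3):
    chosen = []
    for _ in range(n_keep):                      # crude forward selection
        best, best_acc = None, -1.0
        for j in range(X.shape[1]):
            if j in chosen:
                continue
            cols = chosen + [j]
            acc = LinearDiscriminantAnalysis().fit(X[:, cols], y).score(X[:, cols], y)
            if acc > best_acc:
                best, best_acc = j, acc
        chosen.append(best)
    return best_acc                              # apparent (optimistically biased) accuracy

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 10))
y = rng.integers(0, 2, size=60)                  # null data: labels unrelated to X

observed = stepwise_lda_score(X, y)
null = [stepwise_lda_score(X, rng.permutation(y)) for _ in range(200)]
p = (1 + sum(n >= observed for n in null)) / (1 + len(null))
print(f"apparent accuracy {observed:.2f}, permutation p = {p:.3f}")
```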
Turchi, Janita; Devan, Bryan; Yin, Pingbo; Sigrist, Emmalynn; Mishkin, Mortimer
2010-01-01
The monkey's ability to learn a set of visual discriminations presented concurrently just once a day on successive days (24-hr ITI task) is based on habit formation, which is known to rely on a visuo-striatal circuit and to be independent of visuo-rhinal circuits that support one-trial memory. Consistent with this dissociation, we recently reported that performance on the 24-hr ITI task is impaired by a striatal-function blocking agent, the dopaminergic antagonist haloperidol, and not by a rhinal-function blocking agent, the muscarinic cholinergic antagonist scopolamine. In the present study, monkeys were trained on a short-ITI form of concurrent visual discrimination learning, one in which a set of stimulus pairs is repeated not only across daily sessions but also several times within each session (in this case, at about 4-min ITIs). Asymptotic discrimination learning rates in the non-drug condition were reduced by half, from ~11 trials/pair on the 24-hr ITI task to ~5 trials/pair on the 4-min ITI task, and this faster learning was impaired by systemic injections of either haloperidol or scopolamine. The results suggest that in the version of concurrent discrimination learning used here, the short ITIs within a session recruit both visuo-rhinal and visuo-striatal circuits, and that the final performance level is driven by both cognitive memory and habit formation working in concert. PMID:20144631
Neural networks for Braille reading by the blind.
Sadato, N; Pascual-Leone, A; Grafman, J; Deiber, M P; Ibañez, V; Hallett, M
1998-07-01
To explore the neural networks used for Braille reading, we measured regional cerebral blood flow with PET during tactile tasks performed both by Braille readers blinded early in life and by sighted subjects. Eight proficient Braille readers were studied during Braille reading with both right and left index fingers. Eight-character, non-contracted Braille-letter strings were used, and subjects were asked to discriminate between words and non-words. To compare the behaviour of the brain of the blind and the sighted directly, non-Braille tactile tasks were performed by six different blind subjects and 10 sighted control subjects using the right index finger. The tasks included a non-discrimination task and three discrimination tasks (angle, width and character). Irrespective of reading finger (right or left), Braille reading by the blind activated the inferior parietal lobule, primary visual cortex, superior occipital gyri, fusiform gyri, ventral premotor area, superior parietal lobule, cerebellum and primary sensorimotor area bilaterally, also the right dorsal premotor cortex, right middle occipital gyrus and right prefrontal area. During non-Braille discrimination tasks, in blind subjects, the ventral occipital regions, including the primary visual cortex and fusiform gyri bilaterally were activated while the secondary somatosensory area was deactivated. The reverse pattern was found in sighted subjects where the secondary somatosensory area was activated while the ventral occipital regions were suppressed. These findings suggest that the tactile processing pathways usually linked in the secondary somatosensory area are rerouted in blind subjects to the ventral occipital cortical regions originally reserved for visual shape discrimination.
Similarities in neural activations of face and Chinese character discrimination.
Liu, Jiangang; Tian, Jie; Li, Jun; Gong, Qiyong; Lee, Kang
2009-02-18
This study compared Chinese participants' visual discrimination of Chinese faces with that of Chinese characters, which are highly similar to faces on a variety of dimensions. Both Chinese faces and characters activated the bilateral middle fusiform gyri, with highly correlated activation patterns. These findings suggest that although the expertise systems for faces and written symbols are known to be anatomically differentiated at the later stages of processing to serve face processing or written-symbol-specific processing purposes, they may share similar neural structures in the ventral occipitotemporal cortex at the stages of visual processing.
Discrimination among Panax species using spectral fingerprinting
USDA-ARS?s Scientific Manuscript database
Spectral fingerprints of samples of three Panax species (P. quinquefolius L., P. ginseng, and P. notoginseng) were acquired using UV, NIR, and MS spectrometry. With principal components analysis (PCA), all three methods allowed visual discrimination between all three species. All three methods wer...
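A hedged sketch of the PCA step described, using simulated spectra only (the wavelength grid, band shapes, and sample counts are made up), showing how per-species separation can be inspected from the leading principal-component scores.

```python
# PCA of toy spectral fingerprints for three Panax species (illustrative, not USDA data).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavelengths = np.linspace(200, 400, 120)

def spectrum(center):
    # toy Gaussian absorption band plus noise, standing in for a UV fingerprint
    return np.exp(-((wavelengths - center) / 25.0) ** 2) + rng.normal(0, 0.02, wavelengths.size)

species_centers = {"P. quinquefolius": 270, "P. ginseng": 285, "P. notoginseng": 300}
X, labels = [], []
for name, c in species_centers.items():
    for _ in range(10):
        X.append(spectrum(c))
        labels.append(name)

scores = PCA(n_components=2).fit_transform(np.array(X))
for name in species_centers:
    idx = [i for i, l in enumerate(labels) if l == name]
    print(name, "PC1 mean:", round(scores[idx, 0].mean(), 2))
```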
Colour thresholds in a coral reef fish
Vorobyev, M.; Marshall, N. J.
2016-01-01
Coral reef fishes are among the most colourful animals in the world. Given the diversity of lifestyles and habitats on the reef, it is probable that in many instances coloration is a compromise between crypsis and communication. However, human observation of this coloration is biased by our primate visual system. Most animals have visual systems that are ‘tuned’ differently to humans; optimized for different parts of the visible spectrum. To understand reef fish colours, we need to reconstruct the appearance of colourful patterns and backgrounds as they are seen through the eyes of fish. Here, the coral reef associated triggerfish, Rhinecanthus aculeatus, was tested behaviourally to determine the limits of its colour vision. This is the first demonstration of behavioural colour discrimination thresholds in a coral reef species and is a critical step in our understanding of communication and speciation in this vibrant colourful habitat. Fish were trained to discriminate between a reward colour stimulus and a series of non-reward colour stimuli, and the discrimination thresholds were found to correspond well with predictions based on the receptor noise limited visual model and anatomy of the eye. Colour discrimination abilities of both reef fish and a variety of animals can therefore now be predicted using the parameters described here. PMID:27703704
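For reference, a standard statement of the receptor-noise-limited model invoked here (the conventional Vorobyev–Osorio formulation, assuming the trichromatic case; not reproduced from the paper) is

\[
\Delta S^{2} \;=\; \frac{e_1^{2}\,(\Delta f_2-\Delta f_3)^{2} \;+\; e_2^{2}\,(\Delta f_1-\Delta f_3)^{2} \;+\; e_3^{2}\,(\Delta f_1-\Delta f_2)^{2}}{(e_1 e_2)^{2}+(e_1 e_3)^{2}+(e_2 e_3)^{2}},
\]

where \(\Delta f_i\) is the difference in log quantum catch of receptor class \(i\) between the two stimuli and \(e_i\) is the noise in that channel; a colour pair is near threshold when \(\Delta S \approx 1\) just-noticeable difference.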
Zhang, Mengliang; Zhao, Yang; Harrington, Peter de B; Chen, Pei
2016-03-01
Two simple fingerprinting methods, flow-injection coupled to ultraviolet spectroscopy and proton nuclear magnetic resonance, were used for discriminating between Aurantii fructus immaturus and Fructus poniciri trifoliatae immaturus . Both methods were combined with partial least-squares discriminant analysis. In the flow-injection method, four data representations were evaluated: total ultraviolet absorbance chromatograms, averaged ultraviolet spectra, absorbance at 193, 205, 225, and 283 nm, and absorbance at 225 and 283 nm. Prediction rates of 100% were achieved for all data representations by partial least-squares discriminant analysis using leave-one-sample-out cross-validation. The prediction rate for the proton nuclear magnetic resonance data by partial least-squares discriminant analysis with leave-one-sample-out cross-validation was also 100%. A new validation set of data was collected by flow-injection with ultraviolet spectroscopic detection two weeks later and predicted by partial least-squares discriminant analysis models constructed by the initial data representations with no parameter changes. The classification rates were 95% with the total ultraviolet absorbance chromatograms datasets and 100% with the other three datasets. Flow-injection with ultraviolet detection and proton nuclear magnetic resonance are simple, high throughput, and low-cost methods for discrimination studies.
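The partial least-squares discriminant analysis with leave-one-sample-out cross-validation described above can be sketched roughly as follows (toy data; the 0/1 class coding, number of latent variables, and 0.5 decision threshold are assumptions, not the paper's settings).

```python
# PLS-DA with leave-one-sample-out cross-validation on simulated two-class spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))          # 40 samples x 200 spectral variables (toy)
X[:20] += 0.5                           # crude class separation
y = np.array([0] * 20 + [1] * 20)       # class membership coded 0 / 1

correct = 0
for train, test in LeaveOneOut().split(X):
    pls = PLSRegression(n_components=3).fit(X[train], y[train])
    pred = (pls.predict(X[test]).ravel() >= 0.5).astype(int)   # threshold the PLS score
    correct += int(pred[0] == y[test][0])
print("leave-one-out prediction rate:", correct / len(y))
```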
Ritterbusch, Amy E
2016-01-01
This paper presents the participatory visual research design and findings from a qualitative assessment of the social impact of bazuco and inhalant/glue consumption among street youth in Bogotá, Colombia. The paper presents the visual methodologies our participatory action research (PAR) team employed in order to identify and overcome the stigmas and discrimination that street youth experience in society and within state-sponsored drug rehabilitation programmes. I call for critical reflection regarding the broad application of the terms 'participation' and 'participatory' in visual research and urge scholars and public health practitioners to consider the transformative potential of PAR for both the research and practice of global public health in general and rehabilitation programmes for street-based substance abuse in Colombia in particular. The paper concludes with recommendations as to how participatory visual methods can be used to promote social inclusion practices and to work against stigma and discrimination in health-related research and within health institutions.
Time course influences transfer of visual perceptual learning across spatial location.
Larcombe, S J; Kennard, C; Bridge, H
2017-06-01
Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sigurdardottir, Heida Maria; Fridriksdottir, Liv Elisabet; Gudjonsdottir, Sigridur; Kristjánsson, Árni
2018-06-01
Evidence of interdependencies of face and word processing mechanisms suggests possible links between reading problems and abnormal face processing. In two experiments we assessed such high-level visual deficits in people with a history of reading problems. Experiment 1 showed that people who were worse at face matching had greater reading problems. In experiment 2, matched dyslexic and typical readers were tested, and difficulties with face matching were consistently found to predict dyslexia over and above both novel-object matching and matching of noise patterns that shared low-level visual properties with faces. Furthermore, ADHD measures could not account for face matching problems. We speculate that reading difficulties in dyslexia are partially caused by specific deficits in high-level visual processing, in particular for visual object categories such as faces and words with which people have extensive experience. Copyright © 2018 Elsevier B.V. All rights reserved.
Local connected fractal dimension analysis in gill of fish experimentally exposed to toxicants.
Manera, Maurizio; Giari, Luisa; De Pasquale, Joseph A; Sayyaf Dezfuli, Bahram
2016-06-01
An operator-neutral method was implemented to objectively assess European seabass, Dicentrarchus labrax (Linnaeus, 1758) gill pathology after experimental exposure to cadmium (Cd) and terbuthylazine (TBA) for 24 and 48 h. An algorithm-derived local connected fractal dimension (LCFD) frequency measure was used in this comparative analysis. Canonical variates (CVA) and linear discriminant analysis (LDA) were used to evaluate the discrimination power of the method among exposure classes (unexposed, Cd exposed, TBA exposed). Misclassification, sensitivity and specificity, both with original and cross-validated cases, were determined. LCFD frequencies enhanced the differences among classes; frequencies were visually selected after their means, their respective variances, and the differences of the Cd- and TBA-exposed means from the unexposed mean had been examined in scatter plots. Selected frequencies were then scanned by means of LDA, stepwise analysis, and Mahalanobis distance to detect the most discriminative frequencies out of ten originally selected. Discrimination resulted in 91.7% of cross-validated cases correctly classified (22 out of 24 total cases), with sensitivity and specificity, respectively, of 95.5% (1 false negative with respect to 21 truly positive cases) and 75% (1 false positive with respect to 3 truly negative cases). CVA with convex hull polygons ensured prompt, visually intuitive discrimination among exposure classes and graphically supported the false positive case. The combined use of semithin sections, which enhanced the visual evaluation of the overall lamellar structure; of LCFD analysis, which objectively detected local variation in complexity, without the possible bias connected to human personnel; and of CVA/LDA, could be an objective, sensitive and specific approach to study fish gill lamellar pathology. Furthermore, this approach enabled discrimination with sufficient confidence between exposure classes or pathological states and avoided misdiagnosis. Copyright © 2016 Elsevier B.V. All rights reserved.
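Purely as an illustration of the reported cross-validated LDA and its sensitivity/specificity summary (simulated features; the 21/3 class split only echoes the case counts quoted above, nothing else is taken from the study):

```python
# Leave-one-out LDA on toy frequency-band features, summarised by sensitivity and specificity.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(24, 10))            # 24 cases x 10 selected LCFD frequencies (simulated)
y = np.array([1] * 21 + [0] * 3)         # 1 = exposed, 0 = unexposed (toy split)
X[y == 1] += 0.8                         # induce some class separation

pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
tp = np.sum((pred == 1) & (y == 1)); fn = np.sum((pred == 0) & (y == 1))
tn = np.sum((pred == 0) & (y == 0)); fp = np.sum((pred == 1) & (y == 0))
print("accuracy   :", (tp + tn) / len(y))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```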
Perceptual asymmetry in texture perception.
Williams, D; Julesz, B
1992-07-15
A fundamental property of human visual perception is our ability to distinguish between textures. A concerted effort has been made to account for texture segregation in terms of linear spatial filter models and their nonlinear extensions. However, for certain texture pairs the ease of discrimination changes when the role of figure and ground are reversed. This asymmetry poses a problem for both linear and nonlinear models. We have isolated a property of texture perception that can account for this asymmetry in discrimination: subjective closure. This property, which is also responsible for visual illusions, appears to be explainable by early visual processes alone. Our results force a reexamination of the process of human texture segregation and of some recent models that were introduced to explain it.
Handwriting Error Patterns of Children with Mild Motor Difficulties.
ERIC Educational Resources Information Center
Malloy-Miller, Theresa; And Others
1995-01-01
A test of handwriting legibility and 6 perceptual-motor tests were completed by 66 children ages 7-12. Among handwriting error patterns, execution was associated with visual-motor skill and sensory discrimination, aiming with visual-motor and fine-motor skills. The visual-spatial factor had no significant association with perceptual-motor…
An empirical investigation of the visual rightness theory of picture perception.
Locher, Paul J
2003-10-01
This research subjected the visual rightness theory of picture perception to experimental scrutiny. It investigated the ability of adults untrained in the visual arts to discriminate reproductions of original abstract and representational paintings by renowned artists from two experimentally manipulated, less well-organized versions of each art stimulus. Perturbed stimuli contained either minor or major disruptions in the originals' principal structural networks. It was found that participants were significantly more successful than expected by chance in discriminating between originals and their highly altered, but not slightly altered, perturbations. Accuracy of detection was found to be a function of style of painting and a viewer's way of thinking about a work as determined from their verbal reactions to it. Specifically, hit rates for originals were highest for abstract works when participants focused on their compositional style and form and highest for representational works when their content and realism were the focus of attention. Findings support the view that visually right (i.e., "good") compositions have efficient structural organizations that are visually salient to viewers who lack formal training in the visual arts.
Visual Equivalence and Amodal Completion in Cuttlefish
Lin, I-Rong; Chiao, Chuan-Chin
2017-01-01
Modern cephalopods are notably the most intelligent invertebrates and this is accompanied by keen vision. Despite extensive studies investigating the visual systems of cephalopods, little is known about their visual perception and object recognition. In the present study, we investigated the visual processing of the cuttlefish Sepia pharaonis, including visual equivalence and amodal completion. Cuttlefish were trained to discriminate images of shrimp and fish using the operant conditioning paradigm. After cuttlefish reached the learning criteria, a series of discrimination tasks were conducted. In the visual equivalence experiment, several transformed versions of the training images, such as images reduced in size, images reduced in contrast, sketches of the images, the contours of the images, and silhouettes of the images, were used. In the amodal completion experiment, partially occluded views of the original images were used. The results showed that cuttlefish were able to treat the training images of reduced size and sketches as the visual equivalence. Cuttlefish were also capable of recognizing partially occluded versions of the training image. Furthermore, individual differences in performance suggest that some cuttlefish may be able to recognize objects when visual information was partly removed. These findings support the hypothesis that the visual perception of cuttlefish involves both visual equivalence and amodal completion. The results from this research also provide insights into the visual processing mechanisms used by cephalopods. PMID:28220075
A simple and fast representation space for classifying complex time series
NASA Astrophysics Data System (ADS)
Zunino, Luciano; Olivares, Felipe; Bariviera, Aurelio F.; Rosso, Osvaldo A.
2017-03-01
In the context of time series analysis, considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series from stationary and non-stationary processes with long-range dependences. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems has also been proposed. This multiscale analysis is essential to reach a higher discriminative power between physiological time series in health and disease.
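Both coordinates of this representation space are easy to compute; below is a small sketch using one common normalisation of the Abbe value (mean squared successive difference over twice the variance), under which white noise sits near 1 with roughly two-thirds of its points being turning points, while a random walk has a much smaller Abbe value and fewer turning points.

```python
# Turning points vs. Abbe value for two toy series (one common normalisation; illustrative only).
import numpy as np

def turning_points(x):
    # a point is a turning point when the series changes direction there
    d = np.diff(np.asarray(x, dtype=float))
    return int(np.sum(d[:-1] * d[1:] < 0))

def abbe_value(x):
    # mean squared successive difference divided by twice the variance
    x = np.asarray(x, dtype=float)
    return 0.5 * np.mean(np.diff(x) ** 2) / np.var(x)

rng = np.random.default_rng(0)
white = rng.normal(size=1000)                    # stationary, uncorrelated
walk = np.cumsum(rng.normal(size=1000))          # non-stationary random walk
for name, s in [("white noise", white), ("random walk", walk)]:
    print(name, "turning points:", turning_points(s), "Abbe:", round(abbe_value(s), 3))
```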
NASA Astrophysics Data System (ADS)
Carelli, P.; Chiarello, F.; Torrioli, G.; Castellano, M. G.
2017-03-01
We present an apparatus for terahertz discrimination of materials designed to be fast, simple, compact, and economical in order to be suitable for preliminary on-field analysis. The system's working principles, bio-inspired by human color vision, are based on the use of an incoherent source, a room-temperature detector, a series of microfabricated metamaterial selective filters, very compact optics based on metallic ellipsoidal mirrors in air, and a treatment of the mirrors' surfaces that selects the frequency band of interest. We experimentally demonstrate the operation of the apparatus in discriminating simple substances such as salt, staple foods, and grease. We present the system and the obtained results and discuss issues and possible developments.
Systematic distortions of perceptual stability investigated using immersive virtual reality
Tcheang, Lili; Gilson, Stuart J.; Glennerster, Andrew
2010-01-01
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers under-estimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an under-estimation of distance walked. We discuss implications for theories of a task-independent representation of visual space. PMID:15845248
Lacerda, Eliza Maria da Costa Brito; Lima, Monica Gomes; Rodrigues, Anderson Raiol; Teixeira, Cláudio Eduardo Correa; de Lima, Lauro José Barata; Ventura, Dora Fix; Silveira, Luiz Carlos de Lima
2012-01-01
The purpose of this paper was to evaluate achromatic and chromatic vision of workers chronically exposed to organic solvents through psychophysical methods. Thirty-one gas station workers (31.5 ± 8.4 years old) were evaluated. Psychophysical tests were achromatic tests (Snellen chart, spatial and temporal contrast sensitivity, and visual perimetry) and chromatic tests (Ishihara's test, color discrimination ellipses, and Farnsworth-Munsell 100 hue test—FM100). Spatial contrast sensitivities of exposed workers were lower than the control at spatial frequencies of 20 and 30 cpd whilst the temporal contrast sensitivity was preserved. Visual field losses were found in 10–30 degrees of eccentricity in the solvent exposed workers. The exposed workers group had higher error values of FM100 and wider color discrimination ellipses area compared to the controls. Workers occupationally exposed to organic solvents had abnormal visual functions, mainly color vision losses and visual field constriction. PMID:22220188
Pavan, Andrea; Boyce, Matthew; Ghin, Filippo
2016-10-01
Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than speed response. © The Author(s) 2016.
The effect of age upon the perception of 3-D shape from motion.
Norman, J Farley; Cheeseman, Jacob R; Pyles, Jessica; Baxter, Michael W; Thomason, Kelsey E; Calloway, Autum B
2013-12-18
Two experiments evaluated the ability of 50 older, middle-aged, and younger adults to discriminate the 3-dimensional (3-D) shape of curved surfaces defined by optical motion. In Experiment 1, temporal correspondence was disrupted by limiting the lifetimes of the moving surface points. In order to discriminate 3-D surface shape reliably, the younger and middle-aged adults needed a surface point lifetime of approximately 4 views (in the apparent motion sequences). In contrast, the older adults needed a much longer surface point lifetime of approximately 9 views in order to reliably perform the same task. In Experiment 2, the negative effect of age upon 3-D shape discrimination from motion was replicated. In this experiment, however, the participants' abilities to discriminate grating orientation and speed were also assessed. Edden et al. (2009) have recently demonstrated that behavioral grating orientation discrimination correlates with GABA (gamma aminobutyric acid) concentration in human visual cortex. Our results demonstrate that the negative effect of age upon 3-D shape perception from motion is not caused by impairments in the ability to perceive motion per se, but does correlate significantly with grating orientation discrimination. This result suggests that the age-related decline in 3-D shape discrimination from motion is related to decline in GABA concentration in visual cortex. Copyright © 2013 Elsevier B.V. All rights reserved.
Processing of unattended, simple negative pictures resists perceptual load.
Sand, Anders; Wiens, Stefan
2011-05-11
As researchers debate whether emotional pictures can be processed irrespective of spatial attention and perceptual load, negative and neutral pictures of simple figure-ground composition were shown at fixation and were surrounded by one, two, or three letters. When participants performed a picture discrimination task, there was evidence for motivated attention; that is, an early posterior negativity (EPN) and late positive potential (LPP) to negative versus neutral pictures. When participants performed a letter discrimination task, the EPN was unaffected whereas the LPP was reduced. Although performance decreased substantially with the number of letters (one to three), the LPP did not decrease further. Therefore, attention to simple, negative pictures at fixation seems to resist manipulations of perceptual load.
Neuron analysis of visual perception
NASA Technical Reports Server (NTRS)
Chow, K. L.
1980-01-01
The receptive fields of single cells in the visual system of the cat and the squirrel monkey were studied, investigating the vestibular input affecting the cells and the cells' responses during the visual discrimination learning process. Also studied were the receptive field characteristics of the rabbit visual system, its normal development, its abnormal development following visual deprivation, and the structural and functional reorganization of the visual system following neonatal and prenatal surgery. The results of each individual part of each investigation are detailed.
Kim, Jahae; Cho, Sang-Geon; Song, Minchul; Kang, Sae-Ryung; Kwon, Seong Young; Choi, Kang-Ho; Choi, Seong-Min; Kim, Byeong-Chae; Song, Ho-Chun
2016-01-01
To compare the diagnostic performance and confidence of a standard visual reading with those of combined 3-dimensional stereotactic surface projection (3D-SSP) results in discriminating between Alzheimer disease (AD)/mild cognitive impairment (MCI), dementia with Lewy bodies (DLB), and frontotemporal dementia (FTD). [18F]fluorodeoxyglucose (FDG) PET brain images were obtained from 120 patients (64 AD/MCI, 38 DLB, and 18 FTD) whose diagnoses were clinically confirmed over 2 years of follow-up. Three nuclear medicine physicians performed the diagnosis and rated diagnostic confidence twice; once with standard visual methods, and once with the addition of 3D-SSP. Diagnostic performance and confidence were compared between the 2 methods. 3D-SSP showed higher sensitivity, specificity, accuracy, and positive and negative predictive values for discriminating the different types of dementia compared with the visual method alone, except for AD/MCI specificity and FTD sensitivity. Correction of misdiagnosis after adding 3D-SSP images was greatest for AD/MCI (56%), followed by DLB (13%) and FTD (11%). Diagnostic confidence also increased in DLB (visual: 3.2; 3D-SSP: 4.1; P < 0.001), followed by AD/MCI (visual: 3.1; 3D-SSP: 3.8; P = 0.002) and FTD (visual: 3.5; 3D-SSP: 4.2; P = 0.022). Overall, 154/360 (43%) cases had a corrected misdiagnosis or improved diagnostic confidence for the correct diagnosis. The addition of 3D-SSP images to visual analysis helped to discriminate different types of dementia in FDG PET scans, by correcting misdiagnoses and enhancing diagnostic confidence in the correct diagnosis. Improvement of diagnostic accuracy and confidence by 3D-SSP images might help to determine the cause of dementia and appropriate treatment. PMID:27930593
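For clarity about the performance measures quoted above (sensitivity, specificity, accuracy, positive and negative predictive values), a small helper computing them from a 2x2 confusion matrix; the counts in the example are hypothetical, not taken from the study.

```python
# Diagnostic metrics from a 2x2 confusion matrix (one dementia subtype treated as "positive").
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# hypothetical counts, e.g. DLB vs. all other diagnoses
print(diagnostic_metrics(tp=30, fp=6, fn=8, tn=76))
```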
Acquisition of a visual discrimination and reversal learning task by Labrador retrievers.
Lazarowski, Lucia; Foster, Melanie L; Gruen, Margaret E; Sherman, Barbara L; Case, Beth C; Fish, Richard E; Milgram, Norton W; Dorman, David C
2014-05-01
Optimal cognitive ability is likely important for military working dogs (MWD) trained to detect explosives. An assessment of a dog's ability to rapidly learn discriminations might be useful in the MWD selection process. In this study, visual discrimination and reversal tasks were used to assess cognitive performance in Labrador retrievers selected for an explosives detection program using a modified version of the Toronto General Testing Apparatus (TGTA), a system developed for assessing performance in a battery of neuropsychological tests in canines. The results of the current study revealed that, as previously found with beagles tested using the TGTA, Labrador retrievers (N = 16) readily acquired both tasks and learned the discrimination task significantly faster than the reversal task. The present study confirmed that the modified TGTA system is suitable for cognitive evaluations in Labrador retriever MWDs and can be used to further explore effects of sex, phenotype, age, and other factors in relation to canine cognition and learning, and may provide an additional screening tool for MWD selection.
Visual selective attention biases contribute to the other-race effect among 9-month-old infants.
Markant, Julie; Oakes, Lisa M; Amso, Dima
2016-04-01
During the first year of life, infants maintain their ability to discriminate faces from their own race but become less able to differentiate other-race faces. Though this is likely due to daily experience with own-race faces, the mechanisms linking repeated exposure to optimal face processing remain unclear. One possibility is that frequent experience with own-race faces generates a selective attention bias to these faces. Selective attention elicits enhancement of attended information and suppression of distraction to improve visual processing of attended objects. Thus attention biases to own-race faces may boost processing and discrimination of these faces relative to other-race faces. We used a spatial cueing task to bias attention to own- or other-race faces among Caucasian 9-month-old infants. Infants discriminated faces in the focus of the attention bias, regardless of race, indicating that infants remained sensitive to differences among other-race faces. Instead, efficacy of face discrimination reflected the extent of attention engagement. © 2015 Wiley Periodicals, Inc.
Gonzalez-Neira, Eliana Maria; Jimenez-Mendoza, Claudia Patricia; Suarez, Daniel R; Rugeles-Quintero, Saul
2016-03-30
This study aims to determine whether a collection of 16 motor tests on a physical simulator can objectively discriminate and evaluate practitioners' competency level, i.e. novice, resident, and expert. An experimental design with three study groups (novice, resident, and expert) was developed to test the evaluation power of each of the 16 simple tests. An ANOVA and a Student Newman-Keuls (SNK) test were used to analyze the results of each test to determine which of them could discriminate participants' competency level. Four of the 16 tests discriminated all three competency levels and 15 discriminated at least two of the three groups (α = 0.05). Moreover, two other tests differentiated the beginner level from the intermediate level, and seven other tests differentiated the intermediate level from the expert level. The competency level of a practitioner of minimally invasive surgery can be evaluated by a specific collection of basic tests in a physical surgical simulator. Reduction of the number of tests needed to discriminate the competency level of surgeons can be the aim of future research.
Roberts, Daniel J; Woollams, Anna M; Kim, Esther; Beeson, Pelagie M; Rapcsak, Steven Z; Lambon Ralph, Matthew A
2013-11-01
Recent visual neuroscience investigations suggest that ventral occipito-temporal cortex is retinotopically organized, with high acuity foveal input projecting primarily to the posterior fusiform gyrus (pFG), making this region crucial for coding high spatial frequency information. Because high spatial frequencies are critical for fine-grained visual discrimination, we hypothesized that damage to the left pFG should have an adverse effect not only on efficient reading, as observed in pure alexia, but also on the processing of complex non-orthographic visual stimuli. Consistent with this hypothesis, we obtained evidence that a large case series (n = 20) of patients with lesions centered on left pFG: 1) Exhibited reduced sensitivity to high spatial frequencies; 2) demonstrated prolonged response latencies both in reading (pure alexia) and object naming; and 3) were especially sensitive to visual complexity and similarity when discriminating between novel visual patterns. These results suggest that the patients' dual reading and non-orthographic recognition impairments have a common underlying mechanism and reflect the loss of high spatial frequency visual information normally coded in the left pFG.
Tian, Yunfei; Wu, Peng; Wu, Xi; Jiang, Xiaoming; Xu, Kailai; Hou, Xiandeng
2013-04-21
A simple and economical multi-channel optical sensor using corona discharge radical emission spectroscopy is developed and explored as an optical nose for discrimination analysis of volatile organic compounds, wines, and even isomers.
Visual processing affects the neural basis of auditory discrimination.
Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko
2008-12-01
The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.
Bertone, Armando; Mottron, Laurent; Jelenic, Patricia; Faubert, Jocelyn
2005-10-01
Visuo-perceptual processing in autism is characterized by intact or enhanced performance on static spatial tasks and inferior performance on dynamic tasks, suggesting a deficit of dorsal visual stream processing in autism. However, previous findings by Bertone et al. indicate that neuro-integrative mechanisms used to detect complex motion, rather than motion perception per se, may be impaired in autism. We present here the first demonstration of concurrent enhanced and decreased performance in autism on the same visuo-spatial static task, wherein the only factor dichotomizing performance was the neural complexity required to discriminate grating orientation. The ability of persons with autism was found to be superior for identifying the orientation of simple, luminance-defined (or first-order) gratings but inferior for complex, texture-defined (or second-order) gratings. Using a flicker contrast sensitivity task, we demonstrated that this finding is probably not due to abnormal information processing at a sub-cortical level (magnocellular and parvocellular functioning). Together, these findings are interpreted as a clear indication of altered low-level perceptual information processing in autism, and confirm that the deficits and assets observed in autistic visual perception are contingent on the complexity of the neural network required to process a given type of visual stimulus. We suggest that atypical neural connectivity, resulting in enhanced lateral inhibition, may account for both enhanced and decreased low-level information processing in autism.
A Role for Mouse Primary Visual Cortex in Motion Perception.
Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo
2018-06-04
Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.
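A rough sketch of how a random dot kinematogram of a given coherence can be generated is shown below; the dot count, speed, field size, and per-frame reassignment of signal dots are illustrative choices, not the authors' stimulus parameters.

```python
# Toy RDK generator: a "coherence" fraction of dots steps in the signal direction each frame,
# the remainder step in random directions; dots wrap around a square field.
import numpy as np

def rdk_frames(n_dots=100, coherence=0.16, direction_deg=0.0,
               speed=0.02, n_frames=60, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, size=(n_dots, 2))          # dot positions in a unit field
    frames = []
    for _ in range(n_frames):
        coherent = rng.random(n_dots) < coherence        # which dots carry the signal this frame
        theta = np.where(coherent,
                         np.deg2rad(direction_deg),
                         rng.uniform(0, 2 * np.pi, n_dots))
        pos = pos + speed * np.column_stack((np.cos(theta), np.sin(theta)))
        pos = (pos + 1) % 2 - 1                          # wrap dots around the field
        frames.append(pos.copy())
    return frames

frames = rdk_frames()
print(len(frames), frames[0].shape)
```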
Visual perception of fatigued lifting actions.
Fischer, Steven L; Albert, Wayne J; McGarry, Tim
2012-12-01
Fatigue-related changes in lifting kinematics may expose workers to undue injury risks. Early detection of accumulating fatigue offers the prospect of intervention strategies to mitigate such fatigue-related risks. In a first step towards this objective, this study investigated whether fatigue detection was accessible to visual perception and, if so, what was the key visual information required for successful fatigue discrimination. Eighteen participants were tasked with identifying fatigued lifts when viewing 24 trials presented using both video and point-light representations. Each trial comprised a pair of lifting actions containing a fresh and a fatigued lift from the same individual presented in counter-balanced sequence. Confidence intervals demonstrated that the frequency of correct responses for both sexes exceeded chance expectations (50%) for both video (68%±12%) and point-light representations (67%±10%), demonstrating that fatigued lifting kinematics are open to visual perception. There were no significant differences between sexes or viewing condition, the latter result indicating kinematic dynamics as providing sufficient information for successful fatigue discrimination. Moreover, results from single viewer investigation reported fatigue detection (75%) from point-light information describing only the kinematics of the box lifted. These preliminary findings may have important workplace applications if fatigue discrimination rates can be improved upon through future research. Copyright © 2012 Elsevier B.V. All rights reserved.
Visual variability affects early verb learning.
Twomey, Katherine E; Lush, Lauren; Pearce, Ruth; Horst, Jessica S
2014-09-01
Research demonstrates that within-category visual variability facilitates noun learning; however, the effect of visual variability on verb learning is unknown. We habituated 24-month-old children to a novel verb paired with an animated star-shaped actor. Across multiple trials, children saw either a single action from an action category (identical actions condition, for example, travelling while repeatedly changing into a circle shape) or multiple actions from that action category (variable actions condition, for example, travelling while changing into a circle shape, then a square shape, then a triangle shape). Four test trials followed habituation. One paired the habituated verb with a new action from the habituated category (e.g., 'dacking' + pentagon shape) and one with a completely novel action (e.g., 'dacking' + leg movement). The others paired a new verb with a new same-category action (e.g., 'keefing' + pentagon shape), or a completely novel category action (e.g., 'keefing' + leg movement). Although all children discriminated novel verb/action pairs, children in the identical actions condition discriminated trials that included the completely novel verb, while children in the variable actions condition discriminated the out-of-category action. These data suggest that - as in noun learning - visual variability affects verb learning and children's ability to form action categories. © 2014 The British Psychological Society.
Burgansky-Eliash, Zvia; Wollstein, Gadi; Chu, Tianjiao; Ramsey, Joseph D.; Glymour, Clark; Noecker, Robert J.; Ishikawa, Hiroshi; Schuman, Joel S.
2007-01-01
Purpose: Machine-learning classifiers are trained computerized systems with the ability to detect the relationship between multiple input parameters and a diagnosis. The present study investigated whether the use of machine-learning classifiers improves optical coherence tomography (OCT) glaucoma detection. Methods: Forty-seven patients with glaucoma (47 eyes) and 42 healthy subjects (42 eyes) were included in this cross-sectional study. Of the glaucoma patients, 27 had early disease (visual field mean deviation [MD] ≥ −6 dB) and 20 had advanced glaucoma (MD < −6 dB). Machine-learning classifiers were trained to discriminate between glaucomatous and healthy eyes using parameters derived from OCT output. The classifiers were trained with all 38 parameters as well as with only 8 parameters that correlated best with the visual field MD. Five classifiers were tested: linear discriminant analysis, support vector machine, recursive partitioning and regression tree, generalized linear model, and generalized additive model. For the last two classifiers, a backward feature selection was used to find the minimal number of parameters that resulted in the best and most simple prediction. The cross-validated receiver operating characteristic (ROC) curve and accuracies were calculated. Results: The largest area under the ROC curve (AROC) for glaucoma detection was achieved with the support vector machine using eight parameters (0.981). The sensitivity at 80% and 95% specificity was 97.9% and 92.5%, respectively. This classifier also performed best when judged by cross-validated accuracy (0.966). The best classification between early glaucoma and advanced glaucoma was obtained with the generalized additive model using only three parameters (AROC = 0.854). Conclusions: Automated machine classifiers of OCT data might be useful for enhancing the utility of this technology for detecting glaucomatous abnormality. PMID:16249492
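As a hedged sketch of the machine-learning step described (not the authors' code), a support vector machine on a handful of OCT-like parameters can be summarised by a cross-validated area under the ROC curve as follows; the simulated data and the 47/42 split only echo the group sizes above.

```python
# Cross-validated ROC AUC for an SVM on simulated OCT-style parameters.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(89, 8))             # 89 eyes x 8 OCT parameters (toy values)
y = np.array([1] * 47 + [0] * 42)        # 1 = glaucoma, 0 = healthy
X[y == 1, :3] -= 0.9                     # crude "thinner RNFL"-like shift in glaucoma

clf = make_pipeline(StandardScaler(), SVC(probability=True))
scores = cross_val_predict(clf, X, y, cv=StratifiedKFold(5), method="predict_proba")[:, 1]
print("cross-validated AROC:", round(roc_auc_score(y, scores), 3))
```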
Using digital colour to increase the realistic appearance of SEM micrographs of bloodstains.
Hortolà, Policarp
2010-10-01
Although micrographs from scanning electron microscopes (SEMs) are usually displayed in greyscale in the scientific-research literature, the colour facilities provided by SEM-coupled image-acquiring systems and, subsidiarily, by free image-manipulation software deserve to be explored as tools for colouring SEM micrographs of bloodstains. Greyscale SEM micrographs of a human blood smear (dark red to the naked eye) on grey chert were acquired and then manually rendered in a red tone using both the SEM-coupled image-acquiring system and free image-manipulation software, and automatically rendered in a thermal tone using the SEM-coupled system. Red images produced by the SEM-coupled system showed lower visual-discrimination capability than the other coloured images, whereas the red images generated by the free software conveyed more scopic information than those generated by the SEM-coupled system. Thermal-tone images, although further from the real sample colour than the red ones, not only looked more realistic than the greyscale images but also yielded the best visual-discrimination capability among all the coloured SEM micrographs and clearly enhanced the relief effect over both the greyscale and the red images. Applying digital colour with the facilities of an SEM-coupled image-acquiring system or, when required, with free image-manipulation software is a user-friendly, quick and inexpensive way of obtaining coloured SEM micrographs of bloodstains, avoiding sophisticated, time-consuming colouring procedures. Although this work focused on bloodstains, other monochromatic or quasi-monochromatic samples could quite probably also be given a more realistic appearance by colouring them with the simple methods used in this study.
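For readers without access to an SEM-coupled colouring facility, a generic false-colouring of a greyscale micrograph can be done with free software along these lines (a sketch using Pillow; the file names and colour choices are arbitrary, and this is not the workflow used in the study).

```python
# False-colouring a greyscale image: a red tone and a crude thermal-style tone.
import numpy as np
from PIL import Image, ImageOps

# stand-in for a greyscale SEM micrograph loaded from disk
grey = Image.fromarray((np.random.rand(256, 256) * 255).astype(np.uint8))

# Red tone: map low intensities to black and high intensities to pure red.
red = ImageOps.colorize(grey, black="black", white="red")

# Thermal-style tone: dark red through orange to near-white.
thermal = ImageOps.colorize(grey, black="darkred", white="white", mid="orange")

red.save("bloodstain_red.png")
thermal.save("bloodstain_thermal.png")
```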
Zhang, Yi; Chen, Lihan
2016-01-01
Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. Here we examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, composed of three sessions each: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading visual Ternus frame and one lagging visual Ternus frame (VAAV), or dominantly (80%) had two visual Ternus frames inserted between them (AVVA). Participants were required to report which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair but with temporal configurations similar to those in Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support the idea that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910
Acute effects of aerobic exercise promote learning
Perini, Renza; Bortoletto, Marta; Capogrosso, Michela; Fertonani, Anna; Miniussi, Carlo
2016-01-01
The benefits that physical exercise confers on cardiovascular health are well known, whereas the notion that physical exercise can also improve cognitive performance has only recently begun to be explored and has thus far yielded only controversial results. In the present study, we used a sample of young male subjects to test the effects that a single bout of aerobic exercise has on learning. Two tasks were run: the first was an orientation discrimination task involving the primary visual cortex, and the second was a simple thumb abduction motor task that relies on the primary motor cortex. Forty-four and forty volunteers participated in the first and second experiments, respectively. We found that a single bout of aerobic exercise can significantly facilitate learning mechanisms within visual and motor domains and that these positive effects can persist for at least 30 minutes following exercise. This finding suggests that physical activity, at least of moderate intensity, might promote brain plasticity. By combining physical activity–induced plasticity with specific cognitive training–induced plasticity, we favour a gradual up-regulation of a functional network due to a steady increase in synaptic strength, promoting associative Hebbian-like plasticity. PMID:27146330
Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search
Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.
2012-01-01
Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766
Effects of Attention and Laterality on Motion and Orientation Discrimination in Deaf Signers
ERIC Educational Resources Information Center
Bosworth, Rain G.; Petrich, Jennifer A. F.; Dobkins, Karen R.
2013-01-01
Previous studies have asked whether visual sensitivity and attentional processing in deaf signers are enhanced or altered as a result of their different sensory experiences during development, i.e., auditory deprivation and exposure to a visual language. In particular, deaf and hearing signers have been shown to exhibit a right visual field/left…
Visual Sensitivities and Discriminations and Their Roles in Aviation.
1986-03-01
D. Low contrast letter charts in early diabetic retinopathy, ocular hypertension, glaucoma and Parkinson’s disease. Br J Ophthalmol, 1984, 68, 885...to detect a camouflaged object that was visible only when moving, and compared these data with similar measurements for conventional objects that were...3) Compare visual detection (i.e. visual acquisition) of camouflaged objects whose edges are defined by velocity differences with visual detection
Efficiencies for the statistics of size discrimination.
Solomon, Joshua A; Morgan, Michael; Chubb, Charles
2011-10-19
Different laboratories have achieved a consensus regarding how well human observers can estimate the average orientation in a set of N objects. Such estimates are not only limited by visual noise, which perturbs the visual signal of each object's orientation, they are also inefficient: Observers effectively use only √N objects in their estimates (e.g., S. C. Dakin, 2001; J. A. Solomon, 2010). More controversial is the efficiency with which observers can estimate the average size in an array of circles (e.g., D. Ariely, 2001, 2008; S. C. Chong, S. J. Joo, T.-A. Emmanouil, & A. Treisman, 2008; K. Myczek & D. J. Simons, 2008). Of course, there are some important differences between orientation and size; nonetheless, it seemed sensible to compare the two types of estimate against the same ideal observer. Indeed, quantitative evaluation of statistical efficiency requires this sort of comparison (R. A. Fisher, 1925). Our first step was to measure the noise that limits size estimates when only two circles are compared. Our results (Weber fractions between 0.07 and 0.14 were necessary for 84% correct 2AFC performance) are consistent with the visual system adding the same amount of Gaussian noise to all logarithmically transduced circle diameters. We exaggerated this visual noise by randomly varying the diameters in (uncrowded) arrays of 1, 2, 4, and 8 circles and measured its effect on discrimination between mean sizes. Efficiencies inferred from all four observers significantly exceed 25% and, in two cases, approach 100%. More consistent are our measurements of just-noticeable differences in size variance. These latter results suggest between 62 and 75% efficiency for variance discriminations. Although our observers were no more efficient comparing size variances than they were at comparing mean sizes, they were significantly more precise. In other words, our results contain evidence for a non-negligible source of late noise that limits mean discriminations but not variance discriminations.
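The efficiency logic described above can be made concrete with a short numerical sketch. The Python snippet below (with illustrative numbers, not the study's data) assumes each log-transduced diameter carries independent Gaussian noise, so an ideal averager's threshold falls as 1/sqrt(N); efficiency is then the squared ratio of the ideal threshold to the observed one, which equals the effective fraction of items used.

```python
import numpy as np

def averaging_efficiency(human_threshold, single_item_noise, n_items):
    """Statistical efficiency of a mean-size judgement.

    An ideal averager of n_items samples, each perturbed by Gaussian noise
    with standard deviation single_item_noise (in log-diameter units),
    has a threshold of single_item_noise / sqrt(n_items). Efficiency is
    the squared ratio of ideal to observed threshold, i.e. the effective
    fraction of items used by the observer.
    """
    ideal_threshold = single_item_noise / np.sqrt(n_items)
    return (ideal_threshold / human_threshold) ** 2

# Illustrative numbers only: a single-pair Weber fraction of 0.10 and a
# human mean-size threshold of 0.06 with 8 circles in the array.
print(averaging_efficiency(human_threshold=0.06,
                           single_item_noise=0.10,
                           n_items=8))
# -> ~0.35, i.e. the observer behaves as if averaging about a third of the items
```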
Dry-eye screening by using a functional visual acuity measurement system: the Osaka Study.
Kaido, Minako; Uchino, Miki; Yokoi, Norihiko; Uchino, Yuichi; Dogru, Murat; Kawashima, Motoko; Komuro, Aoi; Sonomura, Yukiko; Kato, Hiroaki; Kinoshita, Shigeru; Tsubota, Kazuo
2014-05-06
We determined whether functional visual acuity (VA) parameters and a dry eye (DE) symptoms questionnaire could predict DE in a population of visual display terminal (VDT) users. This prospective study included 491 VDT users from the Osaka Study. Subjects with definite DE, diagnosed by the presence of DE symptoms, tear abnormality (Schirmer test ≤ 5 mm or tear breakup time [TBUT] ≤ 5 seconds), and conjunctivocorneal epithelial damage (total staining score ≥ 3 points), or probable DE, diagnosed by the presence of two of these criteria, were assigned to a DE group, and the remainder to a non-DE group. Functional VA was assessed, and DE questionnaires were administered. We assessed whether univariate and discriminant analyses could determine to which group a subject belonged, and evaluated sensitivity and specificity. Of the 491 subjects, 320 and 171 were assigned to the DE and non-DE groups, respectively. No significant differences were observed between the DE and non-DE groups in Schirmer test value or epithelial damage, but TBUT was significantly shorter in the DE group (3.1 ± 1.5 vs. 5.9 ± 3.0 seconds). The sensitivity and specificity of single tests using functional VA parameters were 59% and 49% for functional VA, 60% and 50% for the visual maintenance ratio, and 83% and 30% for blink frequency, respectively. In a discriminant analysis combining functional VA parameters and a DE questionnaire, six variables were selected for the discriminant equation, for which the area under the curve (AUC) was 0.735. The sensitivity and specificity of diagnoses predicted by the discriminant equation were 85.9% and 45.6%, respectively. The discriminant equation obtained from functional VA measurement combined with a symptoms questionnaire may be useful as a first-step screen for DE with unstable tear film. Because the questionnaire has overall poor sensitivity and specificity, further refinement may be necessary before this screening tool can be used in practice. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
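As an illustration of the kind of discriminant screening analysis reported above, the sketch below fits a linear discriminant to synthetic stand-in features and reports sensitivity, specificity, and AUC. The feature set and data are hypothetical placeholders, not the Osaka Study measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in features (e.g. functional VA, visual maintenance ratio,
# blink frequency, questionnaire score); 1 = dry eye, 0 = non-dry eye.
n = 400
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 4)) + 0.6 * y[:, None]   # DE group shifted slightly

lda = LinearDiscriminantAnalysis().fit(X, y)
pred = lda.predict(X)
score = lda.decision_function(X)

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("AUC:", roc_auc_score(y, score))
```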
A Simple Visualization of Double Bond Properties: Chemical Reactivity and UV Fluorescence
ERIC Educational Resources Information Center
Grayson, Scott M.
2012-01-01
A simple, easily visualized thin-layer chromatography (TLC) staining experiment is presented that highlights the difference in reactivity between aromatic double bonds and nonaromatic double bonds. Although the stability of aromatic systems is a major theme in organic chemistry, the concept is rarely reinforced "visually" in the undergraduate…
Mele, Sonia; Ghirardi, Valentina; Craighero, Laila
2017-12-01
A long-standing debate concerns whether the sensorimotor coding carried out during observation of transitive actions reflects low-level movement implementation details or movement goals. In contrast, phonemes and emotional facial expressions are intransitive actions that do not fall into this debate. The investigation of phoneme discrimination has proven to be a good model for demonstrating that the sensorimotor system plays a role in understanding actions presented acoustically. In the present study, we adapted the experimental paradigms already used in phoneme discrimination during face-posture manipulation to the discrimination of emotional facial expressions. We submitted participants to a lower- or an upper-face posture manipulation during a four-alternative labelling task on pictures randomly taken from four morphed continua between two emotional facial expressions. The results showed that the implementation of low-level movement details influences the discrimination of ambiguous facial expressions that differ in the specific involvement of those movement details. These findings indicate that facial expression discrimination is a good model for testing the role of the sensorimotor system in the perception of actions presented visually.
Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination
Palanica, Adam; Itier, Roxane J.
2017-01-01
Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest that the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants fixated a central point while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity. PMID:28344501
Discriminative power of visual attributes in dermatology.
Giotis, Ioannis; Visser, Margaretha; Jonkman, Marcel; Petkov, Nicolai
2013-02-01
Visual characteristics such as color and shape of skin lesions play an important role in the diagnostic process. In this contribution, we quantify the discriminative power of such attributes using an information-theoretical approach. We estimate the probability of occurrence of each attribute as a function of the skin diseases. We use the distribution of this probability across the studied diseases and its entropy to define the discriminative power of the attribute. The discriminative power has a maximum value for attributes that occur (or do not occur) for only one disease and a minimum value for those which are equally likely to be observed among all diseases. Verrucous surface, red and brown colors, and the presence of more than 10 lesions are among the most informative attributes. A ranking of attributes is also carried out and used together with a naive Bayesian classifier, yielding results that confirm the soundness of the proposed method. The proposed measure is shown to be a reliable way of assessing the discriminative power of dermatological attributes, and it also helps generate a condensed dermatological lexicon. Therefore, it can be of added value to the manual or computer-aided diagnostic process. © 2012 John Wiley & Sons A/S.
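One plausible reading of the entropy-based measure described above is sketched below: an attribute's occurrence probabilities across diseases are normalised, their entropy is compared with the uniform (maximum) entropy, and the attribute scores high when it concentrates on few diseases. This is an illustrative reconstruction, not the paper's exact formula.

```python
import numpy as np

def discriminative_power(p_attr_given_disease):
    """Entropy-based discriminative power of one attribute.

    p_attr_given_disease: probability that the attribute occurs in each
    disease. The distribution across diseases is normalised and its entropy
    compared with the maximum (uniform) entropy: attributes concentrated in
    one disease score near 1, attributes equally likely everywhere score 0.
    """
    p = np.asarray(p_attr_given_disease, dtype=float)
    p = p / p.sum()
    # Shannon entropy in bits, ignoring zero-probability terms.
    h = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    h_max = np.log2(len(p))
    return 1.0 - h / h_max

print(discriminative_power([0.9, 0.05, 0.05, 0.0]))    # informative -> ~0.72
print(discriminative_power([0.25, 0.25, 0.25, 0.25]))  # uninformative -> 0.0
```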
Wei, Meifen; Yeh, Christine Jean; Chao, Ruth Chu-Lien; Carrera, Stephanie; Su, Jenny C
2013-07-01
This study was conducted to examine under what situation (i.e., when individuals used more or less family support) and for whom (i.e., those with high or low self-esteem) perceived racial discrimination would or would not have a significant positive association with psychological distress. A total of 95 Asian American male college students completed an online survey. A hierarchical regression analysis indicated a significant 3-way interaction of family support, self-esteem, and perceived racial discrimination in predicting psychological distress after controlling for perceived general stress. A simple effect analysis was used to explore the nature of the interaction. When Asian American male college students used more family support to cope with racial discrimination, the association between perceived racial discrimination and psychological distress was not significant for those with high or low self-esteem. The result from the simple interaction indicated that, when more family support was used, the 2 slopes for high and low self-esteem were not significantly different from each other. Conversely, when they used less family support, the association between perceived racial discrimination and psychological distress was not significant for those with high self-esteem, but was significantly positive for those with low self-esteem. The result from the simple interaction indicated that, when less family support was used, the slopes for high and low self-esteem were significantly different. The result suggested that low use of family support may put these male students with low self-esteem at risk for psychological distress. Limitations, future research directions, and clinical implications were discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.
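A minimal sketch of the kind of hierarchical regression with a three-way interaction described above, using statsmodels on synthetic data; all variable names are placeholders, and the simple-slope follow-up analysis is not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 95  # matches the reported sample size; the data here are synthetic

df = pd.DataFrame({
    "distress":       rng.normal(size=n),
    "general_stress": rng.normal(size=n),
    "discrimination": rng.normal(size=n),
    "family_support": rng.normal(size=n),
    "self_esteem":    rng.normal(size=n),
})

# Final hierarchical step: the covariate plus all main effects, two-way terms,
# and the family_support x self_esteem x discrimination three-way interaction.
model = smf.ols(
    "distress ~ general_stress + family_support * self_esteem * discrimination",
    data=df,
).fit()
print(model.summary().tables[1])
```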
Transfer of perceptual learning between different visual tasks
McGovern, David P.; Webb, Ben S.; Peirce, Jonathan W.
2012-01-01
Practice in most sensory tasks substantially improves perceptual performance. A hallmark of this ‘perceptual learning' is its specificity for the basic attributes of the trained stimulus and task. Recent studies have challenged the specificity of learned improvements, although transfer between substantially different tasks has yet to be demonstrated. Here, we measure the degree of transfer between three distinct perceptual tasks. Participants trained on an orientation discrimination, a curvature discrimination, or a ‘global form' task, all using stimuli comprised of multiple oriented elements. Before and after training they were tested on all three and a contrast discrimination control task. A clear transfer of learning was observed, in a pattern predicted by the relative complexity of the stimuli in the training and test tasks. Our results suggest that sensory improvements derived from perceptual learning can transfer between very different visual tasks. PMID:23048211
Preserved Discrimination Performance and Neural Processing during Crossmodal Attention in Aging
Mishra, Jyoti; Gazzaley, Adam
2013-01-01
In a recent study in younger adults (19-29 year olds) we showed evidence that distributed audiovisual attention resulted in improved discrimination performance for audiovisual stimuli compared to focused visual attention. Here, we extend our findings to healthy older adults (60-90 year olds), showing that performance benefits of distributed audiovisual attention in this population match those of younger adults. Specifically, improved performance was revealed in faster response times for semantically congruent audiovisual stimuli during distributed relative to focused visual attention, without any differences in accuracy. For semantically incongruent stimuli, discrimination accuracy was significantly improved during distributed relative to focused attention. Furthermore, event-related neural processing showed intact crossmodal integration in higher performing older adults similar to younger adults. Thus, there was insufficient evidence to support an age-related deficit in crossmodal attention. PMID:24278464
Braille character discrimination in blindfolded human subjects.
Kauffman, Thomas; Théoret, Hugo; Pascual-Leone, Alvaro
2002-04-16
Visual deprivation may lead to enhanced performance in other sensory modalities. Whether this is the case in the tactile modality is controversial and may depend upon specific training and experience. We compared the performance of sighted subjects on a Braille character discrimination task to that of normal individuals blindfolded for a period of five days. Some participants in each group (blindfolded and sighted) received intensive Braille training to offset the effects of experience. Blindfolded subjects performed better than sighted subjects in the Braille discrimination task, irrespective of tactile training. For the left index finger, which had not been used in the formal Braille classes, blindfolding had no effect on performance while subjects who underwent tactile training outperformed non-stimulated participants. These results suggest that visual deprivation speeds up Braille learning and may be associated with behaviorally relevant neuroplastic changes.
Exploring What’s Missing: What Do Target Absent Trials Reveal About Autism Search Superiority?
Keehn, Brandon; Joseph, Robert M.
2016-01-01
We used eye-tracking to investigate the roles of enhanced discrimination and peripheral selection in superior visual search in autism spectrum disorder (ASD). Children with ASD were faster at visual search than their typically developing peers. However, group differences in performance and eye-movements did not vary with the level of difficulty of discrimination or selection. Rather, consistent with prior ASD research, group differences were mainly the effect of faster performance on target-absent trials. Eye-tracking revealed a lack of left-visual-field search asymmetry in ASD, which may confer an additional advantage when the target is absent. Lastly, ASD symptomatology was positively associated with search superiority, the mechanisms of which may shed light on the atypical brain organization that underlies social-communicative impairment in ASD. PMID:26762114
ERIC Educational Resources Information Center
Langstaff, Nancy
This book, intended for use by inservice teachers, preservice teachers, and parents interested in open classrooms, contains three chapters. "Beginning Reading in an Open Classroom" discusses language development, sight vocabulary, visual discrimination, auditory discrimination, directional concepts, small muscle control, and measurement of…
Role of Gamma-Band Synchronization in Priming of Form Discrimination for Multiobject Displays
ERIC Educational Resources Information Center
Lu, Hongjing; Morrison, Robert G.; Hummel, John E.; Holyoak, Keith J.
2006-01-01
Previous research has shown that synchronized flicker can facilitate detection of a single Kanizsa square. The present study investigated the role of temporally structured priming in discrimination tasks involving perceptual relations between multiple Kanizsa-type figures. Results indicate that visual information presented as temporally structured…
Life Span Changes in Visual Enumeration: The Number Discrimination Task.
ERIC Educational Resources Information Center
Trick, Lana M.; And Others
1996-01-01
Ninety-eight participants from 5 age groups with mean ages of 6, 8, 10, 22, and 72 years were tested in a series of speeded number discriminations. Found that response time slope as a function of number size decreased with age for numbers in the 1-4 range. (MDM)
Kansas Center for Research in Early Childhood Education Annual Report, FY 1973.
ERIC Educational Resources Information Center
Horowitz, Frances D.
This monograph is a collection of papers describing a series of loosely related studies of visual attention, auditory stimulation, and language discrimination in young infants. Titles include: (1) Infant Attention and Discrimination: Methodological and Substantive Issues; (2) The Addition of Auditory Stimulation (Music) and an Interspersed…
Minimum Colour Differences Required To Recognise Small Objects On A Colour CRT
NASA Astrophysics Data System (ADS)
Phillips, Peter L.
1985-05-01
Data are required to assist in the assessment, evaluation and optimisation of colour and other displays for both military and general use. A general aim is to develop a mathematical technique to aid optimisation and reduce the amount of expensive hardware development and trials necessary when introducing new displays. The present standards and methods available for evaluating colour differences are known not to apply to the perception of typical objects on a display. Data are required for irregular objects viewed at small angular subtense (<1°) and relating to the recognition of form rather than colour matching. Therefore laboratory experiments have been carried out using a computer-controlled CRT to measure the threshold colour difference that an observer requires between object and background so that he can discriminate a variety of similar objects. Measurements are included for a variety of background and object colourings. The results are presented in the CIE colorimetric system, similar to current standards used by the display engineer. Apart from the characteristic small-field tritanopia, the results show that larger colour differences are required for object recognition than those assumed from conventional colour discrimination data. A simple relationship to account for object size and background colour is suggested to aid visual performance assessments and modelling.
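To make the notion of a threshold colour difference between object and background concrete, the sketch below computes a generic CIE 1976 (CIELAB) colour difference for hypothetical object and background colours. The paper's own thresholds are reported in CIE coordinates for small fields and account for object size, so this is only an illustration of the quantity being thresholded.

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE 1976 colour difference (Delta E*ab) between two CIELAB colours,
    i.e. the Euclidean distance between their (L*, a*, b*) coordinates."""
    return math.dist(lab1, lab2)

# Hypothetical object and background colours (L*, a*, b*):
background = (50.0, 0.0, 0.0)    # mid grey
target     = (50.0, 6.0, -3.0)   # slightly reddish-blue object
print(delta_e_ab(target, background))   # ~6.7 Delta E units
```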
Cognitive Abilities on Transitive Inference Using a Novel Touchscreen Technology for Mice
Silverman, J.L.; Gastrell, P.T.; Karras, M.N.; Solomon, M.; Crawley, J.N.
2015-01-01
Cognitive abilities are impaired in neurodevelopmental disorders, including autism spectrum disorder (ASD) and schizophrenia. Preclinical models with strong endophenotypes relevant to cognitive dysfunctions offer a valuable resource for therapeutic development. However, improved assays to test higher order cognition are needed. We employed touchscreen technology to design a complex transitive inference (TI) assay that requires cognitive flexibility and relational learning. C57BL/6J (B6) mice with good cognitive skills and BTBR T+tf/J (BTBR), a model of ASD with cognitive deficits, were evaluated in simple and complex touchscreen assays. Both B6 and BTBR acquired visual discrimination and reversal. BTBR displayed deficits on components of TI, when 4 stimulus pairs were interspersed, which required flexible integrated knowledge. BTBR displayed impairment on the A > E inference, analogous to the A > E deficit in ASD. B6 and BTBR mice both reached criterion on the B > D comparison, unlike the B > D impairment in schizophrenia. These results demonstrate that mice are capable of complex discriminations and higher order tasks using methods and equipment paralleling those used in humans. Our discovery that a mouse model of ASD displays a TI deficit similar to humans with ASD supports the use of touchscreen technology for complex cognitive tasks in mouse models of neurodevelopmental disorders. PMID:24293564
Grow, Laura L; Kodak, Tiffany; Carr, James E
2014-01-01
Previous research has demonstrated that the conditional-only method (starting with a multiple-stimulus array) is more efficient than the simple-conditional method (progressive incorporation of more stimuli into the array) for teaching receptive labeling to children with autism spectrum disorders (Grow, Carr, Kodak, Jostad, & Kisamore). The current study systematically replicated the earlier study by comparing the 2 approaches using progressive prompting with 2 boys with autism. The results showed that the conditional-only method was a more efficient and reliable teaching procedure than the simple-conditional method. The results further call into question the practice of teaching simple discriminations to facilitate acquisition of conditional discriminations. © Society for the Experimental Analysis of Behavior.
Colour vision in ADHD: part 1--testing the retinal dopaminergic hypothesis.
Kim, Soyeon; Al-Haj, Mohamed; Chen, Samantha; Fuller, Stuart; Jain, Umesh; Carrasco, Marisa; Tannock, Rosemary
2014-10-24
We tested the retinal dopaminergic hypothesis, which posits deficient blue color perception in ADHD resulting from hypofunctioning CNS and retinal dopamine, to which blue cones are exquisitely sensitive. Purported sex differences in red color perception were also explored. Thirty young adults diagnosed with ADHD and 30 healthy young adults, matched on age and gender, performed a psychophysical task measuring blue and red color saturation and contrast discrimination ability. Visual function measures, such as the Visual Activities Questionnaire (VAQ) and the Farnsworth-Munsell 100 hue test (FMT), were also administered. Females with ADHD were less accurate in discriminating blue and red color saturation relative to controls but did not differ in contrast sensitivity. Female control participants were better at discriminating red saturation than males, but no sex difference was present within the ADHD group. Poorer discrimination of red as well as blue color saturation in the female ADHD group may be partly attributable to a hypo-dopaminergic state in the retina, given that color perception (blue-yellow and red-green) is based on input from S-cones (short-wavelength cone system) early in the visual pathway. The origin of female superiority in red perception may be rooted in sex-specific functional specialization in hunter-gatherer societies. The absence of this sexual dimorphism for red colour perception in ADHD females warrants further investigation.
Visuomotor sensitivity to visual information about surface orientation.
Knill, David C; Kersten, Daniel
2004-03-01
We measured human visuomotor sensitivity to visual information about three-dimensional surface orientation by analyzing movements made to place an object on a slanted surface. We applied linear discriminant analysis to the kinematics of subjects' movements to surfaces with differing slants (angle away from the fronto-parallel) to derive visuomotor d's for discriminating surfaces differing in slant by 5 degrees. Subjects' visuomotor sensitivity to information about surface orientation was very high, with discrimination "thresholds" ranging from 2 to 3 degrees. In a first experiment, we found that subjects performed only slightly better using binocular cues alone than monocular texture cues and that they showed only weak evidence for combining the cues when both were available, suggesting that monocular cues can be just as effective in guiding motor behavior in depth as binocular cues. In a second experiment, we measured subjects' perceptual discrimination and visuomotor thresholds in equivalent stimulus conditions to decompose visuomotor sensitivity into perceptual and motor components. Subjects' visuomotor thresholds were found to be slightly greater than their perceptual thresholds for a range of memory delays, from 1 to 3 s. The data were consistent with a model in which perceptual noise increases with increasing delay between stimulus presentation and movement initiation, but motor noise remains constant. This result suggests that visuomotor and perceptual systems rely on the same visual estimates of surface slant for memory delays ranging from 1 to 3 s.
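The general approach of deriving a visuomotor d' from movement kinematics via linear discriminant analysis can be sketched as below, using synthetic kinematic features; the paper's feature extraction and cross-validation details are not reproduced.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def visuomotor_dprime(kinematics_a, kinematics_b):
    """d' between two slant conditions from movement kinematics.

    Fits a linear discriminant to the two sets of kinematic feature vectors,
    projects every trial onto the discriminant axis, and computes d' as the
    separation of the projected means in pooled-standard-deviation units.
    """
    X = np.vstack([kinematics_a, kinematics_b])
    y = np.r_[np.zeros(len(kinematics_a)), np.ones(len(kinematics_b))]
    axis = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
    za = axis.transform(kinematics_a).ravel()
    zb = axis.transform(kinematics_b).ravel()
    pooled_sd = np.sqrt(0.5 * (za.var(ddof=1) + zb.var(ddof=1)))
    return abs(za.mean() - zb.mean()) / pooled_sd

# Synthetic kinematic features (e.g. wrist angles sampled along the reach)
# for two surfaces differing in slant.
rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, size=(60, 5))
b = rng.normal(0.4, 1.0, size=(60, 5))
print(visuomotor_dprime(a, b))
```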
Braun, J
1994-02-01
In more than one respect, visual searches for the most salient and for the least salient item in a display are different kinds of visual tasks. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained the target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.
Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima
2014-01-01
We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least square method, and the spatial extent and spatial frequency entropies were estimated from the standard deviation of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of the space extent entropy times the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies once there was a substantial decrease in joint entropy for these stimulus conditions when contrast was raised. PMID:24466158
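A rough sketch of the analysis pipeline described above: psychometric data points are fitted with a Gaussian by least squares, the fitted standard deviations serve as the spatial-extent and spatial-frequency uncertainty (entropy) estimates, and their product is compared with the Gabor bound of 1/(4π) ≈ 0.0796. The data are synthetic, and the paper's exact rule for combining the two estimates may differ slightly from the plain product used here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fitted_sd(levels, responses, reference):
    """Least-squares Gaussian fit to psychometric data points; the fitted
    standard deviation is taken as the uncertainty (entropy) estimate."""
    p0 = (responses.max(), reference, 0.25 * (levels.max() - levels.min()))
    popt, _ = curve_fit(gaussian, levels, responses, p0=p0)
    return abs(popt[2])

# Synthetic data around a 2 cpd, 1 deg reference Gabor (not the study's data).
rng = np.random.default_rng(3)
freqs = np.linspace(1.0, 3.0, 19)
sizes = np.linspace(0.4, 1.6, 21)
p_freq = gaussian(freqs, 1.0, 2.0, 0.25) + rng.normal(0, 0.02, freqs.size)
p_size = gaussian(sizes, 1.0, 1.0, 0.35) + rng.normal(0, 0.02, sizes.size)

joint = fitted_sd(sizes, p_size, 1.0) * fitted_sd(freqs, p_freq, 2.0)
print(joint, "vs Gabor bound", 1.0 / (4.0 * np.pi))   # ~0.09 vs ~0.0796
```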
A simple method for quantitating the propensity for calcium oxalate crystallization in urine
NASA Technical Reports Server (NTRS)
Wabner, C. L.; Pak, C. Y.
1991-01-01
To assess the propensity for spontaneous crystallization of calcium oxalate in urine, the permissible increment in oxalate is calculated. The previous method required visual observation of crystallization upon the addition of oxalate; this demanded a large volume of urine and sacrificed accuracy in defining differences between small incremental changes of added oxalate. Therefore, the method has been miniaturized, and spontaneous crystallization is detected from the depletion of radioactive oxalate. The new "micro" method demonstrated a marked decrease (p < 0.001) in the permissible increment in oxalate in the urine of stone formers versus normal subjects. Moreover, crystallization inhibitors added to urine, in vitro (heparin or diphosphonate) or in vivo (potassium citrate administration), substantially increased the permissible increment in oxalate. Thus, the "micro" method has proven reliable and accurate in discriminating stone-forming urine from control urine and in distinguishing changes in inhibitory activity.
Comparative effect of lens care solutions on blink rate, ocular discomfort and visual performance.
Yang, Shun-nan; Tai, Yu-chi; Sheedy, James E; Kinoshita, Beth; Lampa, Matthew; Kern, Jami R
2012-09-01
To help maintain clear vision and ocular surface health, eye blinks distribute natural tears over the ocular surface, especially the corneal surface. Contact lens wearers may suffer from poor vision and dry eye symptoms due to difficulty in lens surface wetting and reduced tear production. Sustained viewing of a computer screen reduces eye blinks and exacerbates these difficulties. The present study evaluated the wetting effect of lens care solutions (LCSs) on blink rate, dry eye symptoms, and visual performance. Sixty-five adult habitual soft contact lens wearers were recruited to adapt to different LCSs (Opti-Free, ReNu, and ClearCare) in a cross-over design. Blink rate during pictorial viewing and reading (measured with an eyetracker), dry eye symptoms (measured with the Ocular Surface Disease Index questionnaire), and visual discrimination (identifying a tumbling E) immediately before and after eye blinks were measured after 2 weeks of adaptation to each LCS. Repeated-measures ANOVA and mixed-model ANCOVA were conducted to evaluate the effects of LCS on blink rate, symptom score, and discrimination accuracy. Opti-Free resulted in lower dry eye symptoms (p = 0.018) than ClearCare, and a lower spontaneous blink rate (measured during picture viewing) than ClearCare (p = 0.014) and ReNu (p = 0.041). In reading, blink rate was higher for ClearCare than for ReNu (p = 0.026) and the control (p = 0.024). Visual discrimination time was longer for the control (daily disposable lens) than for Opti-Free (p = 0.007), ReNu (p = 0.009), and ClearCare (p = 0.013) immediately before the blink. LCSs differently affected blink rate, subjective dry eye symptoms, and visual discrimination speed. Solutions with wetting agents led to significantly fewer eye blinks while affording better ocular comfort for contact lens wearers than the solution without. LCSs with wetting agents also resulted in better visual performance than wearing daily disposable contact lenses. These effects are presumably due to improved tear-film quality. © 2012 The College of Optometrists.
Stimulus discriminability in visual search.
Verghese, P; Nakayama, K
1994-09-01
We measured the probability of detecting the target in a visual search task, as a function of the following parameters: the discriminability of the target from the distractors, the duration of the display, and the number of elements in the display. We examined the relation between these parameters at criterion performance (80% correct) to determine if the parameters traded off according to the predictions of a limited capacity model. For the three dimensions that we studied, orientation, color, and spatial frequency, the observed relationship between the parameters deviates significantly from a limited capacity model. The data relating discriminability to display duration are better than predicted over the entire range of orientation and color differences that we examined, and are consistent with the prediction for only a limited range of spatial frequency differences--from 12 to 23%. The relation between discriminability and number varies considerably across the three dimensions and is better than the limited capacity prediction for two of the three dimensions that we studied. Orientation discrimination shows a strong number effect, color discrimination shows almost no effect, and spatial frequency discrimination shows an intermediate effect. The different trading relationships in each dimension are more consistent with early filtering in that dimension, than with a common limited capacity stage. Our results indicate that higher-level processes that group elements together also play a strong role. Our experiments provide little support for limited capacity mechanisms over the range of stimulus differences that we examined in three different dimensions.
The Subjective Visual Vertical: Validation of a Simple Test
ERIC Educational Resources Information Center
Tesio, Luigi; Longo, Stefano; Rota, Viviana
2011-01-01
The study sought to provide norms for a simple test of visual perception of verticality (subjective visual vertical). The study was designed as a cohort study with a balanced design. The setting was the Rehabilitation Department of a University Hospital. Twenty-two healthy adults aged 23-58 years, 11 men (three left-handed) and 11 women (three left…
Kim, Sung-Min
2018-01-01
Cessation of dewatering following underground mine closure typically results in groundwater rebound, because mine voids and surrounding strata undergo flooding up to the levels of the decant points, such as shafts and drifts. SIMPL (Simplified groundwater program In Mine workings using the Pipe equation and Lumped parameter model), a simplified lumped parameter model-based program for predicting groundwater levels in abandoned mines, is presented herein. The program comprises a simulation engine module, 3D visualization module, and graphical user interface, which aids data processing, analysis, and visualization of results. The 3D viewer facilitates effective visualization of the predicted groundwater level rebound phenomenon together with a topographic map, mine drift, goaf, and geological properties from borehole data. SIMPL is applied to data from the Dongwon coal mine and Dalsung copper mine in Korea, with strong similarities in simulated and observed results. By considering mine workings and interpond connections, SIMPL can thus be used to effectively analyze and visualize groundwater rebound. In addition, the predictions by SIMPL can be utilized to prevent the surrounding environment (water and soil) from being polluted by acid mine drainage. PMID:29747480
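The sketch below is not SIMPL code; it only illustrates the underlying idea of a lumped-parameter water balance: a single flooded void whose level rises with recharge until it reaches the decant elevation. All numbers are invented, and interpond pipe flow, goaf geometry, and the program's other features are omitted.

```python
import numpy as np

def rebound_curve(recharge_m3_per_day, storage_area_m2, start_level,
                  decant_level, days):
    """Minimal single-pond lumped water balance (illustrative only).

    The flooded mine is treated as one storage element: the water level rises
    by recharge / storage area per day until it reaches the decant elevation,
    after which inflow leaves via the decant and the level stays constant.
    """
    levels = np.empty(days)
    h = start_level
    for t in range(days):
        h = min(h + recharge_m3_per_day / storage_area_m2, decant_level)
        levels[t] = h
    return levels

# Illustrative numbers only: 500 m3/day of recharge into a pond with an
# effective storage area of 2.0e5 m2, rebounding from -120 m to a -40 m adit.
curve = rebound_curve(500.0, 2.0e5, start_level=-120.0, decant_level=-40.0,
                      days=365 * 100)
print(curve[[0, 3650, -1]])   # level after 1 day, 10 years, 100 years
```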
Vision/Visual Perception: An Annotated Bibliography.
ERIC Educational Resources Information Center
Weintraub, Sam, Comp.; Cowan, Robert J., Comp.
An update and modification of "Vision-Visual Discrimination" published in 1973, this annotated bibliography contains entries from the annual summaries of research in reading published by the International Reading Association (IRA) since then. The first large section, "Vision," is divided into two subgroups: (1) "Visually…
Detection of visual signals by rats: A computational model
We applied a neural network model of classical conditioning proposed by Schmajuk, Lam, and Gray (1996) to visual signal detection and discrimination tasks designed to assess sustained attention in rats (Bushnell, 1999). The model describes the animals’ expectation of receiving fo...
Visual Short-Term Memory for Complex Objects in 6- and 8-Month-Old Infants
ERIC Educational Resources Information Center
Kwon, Mee-Kyoung; Luck, Steven J.; Oakes, Lisa M.
2014-01-01
Infants' visual short-term memory (VSTM) for simple objects undergoes dramatic development: Six-month-old infants can store in VSTM information about only a single simple object presented in isolation, whereas 8-month-old infants can store information about simple objects presented in multiple-item arrays. This study extended this work to examine…
Understanding How to Build Long-Lived Learning Collaborators
2016-03-16
discrimination in learning, and dynamic encoding strategies to improve visual encoding for learning via analogical generalization. We showed that spatial concepts can be learned via analogical generalization...a 20,000 sketch corpus to examine the tradeoffs involved in visual representation and analogical generalization.
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Comparing visual search and eye movements in bilinguals and monolinguals
Hout, Michael C.; Walenchok, Stephen C.; Azuma, Tamiko; Goldinger, Stephen D.
2017-01-01
Recent research has suggested that bilinguals show advantages over monolinguals in visual search tasks, although these findings have been derived from global behavioral measures of accuracy and response times. In the present study we sought to explore the bilingual advantage by using more sensitive eyetracking techniques across three visual search experiments. These spatially and temporally fine-grained measures allowed us to carefully investigate any nuanced attentional differences between bilinguals and monolinguals. Bilingual and monolingual participants completed visual search tasks that varied in difficulty. The experiments required participants to make careful discriminations in order to detect target Landolt Cs among similar distractors. In Experiment 1, participants performed both feature and conjunction search. In Experiments 2 and 3, participants performed visual search while making different types of speeded discriminations, after either locating the target or mentally updating a constantly changing target. The results across all experiments revealed that bilinguals and monolinguals were equally efficient at guiding attention and generating responses. These findings suggest that the bilingual advantage does not reflect a general benefit in attentional guidance, but could reflect more efficient guidance only under specific task demands. PMID:28508116
Perceptual learning in visual search: fast, enduring, but non-specific.
Sireteanu, R; Rettenbach, R
1995-07-01
Visual search has been suggested as a tool for isolating visual primitives. Elementary "features" were proposed to involve parallel search, while serial search is necessary for items without a "feature" status, or, in some cases, for conjunctions of "features". In this study, we investigated the role of practice in visual search tasks. We found that, under some circumstances, initially serial tasks can become parallel after a few hundred trials. Learning in visual search is far less specific than learning of visual discriminations and hyperacuity, suggesting that it takes place at another level in the central visual pathway, involving different neural circuits.
Stojcev, Maja; Radtke, Nils; D'Amaro, Daniele; Dyer, Adrian G; Neumeyer, Christa
2011-07-01
Visual systems can undergo striking adaptations to specific visual environments during evolution, but they can also be very "conservative." This seems to be the case in motion vision, which is surprisingly similar in species as distant as honeybee and goldfish. In both visual systems, motion vision measured with the optomotor response is color blind and mediated by one photoreceptor type only. Here, we ask whether this is also the case if the moving stimulus is restricted to a small part of the visual field, and test what influence velocity may have on chromatic motion perception. Honeybees were trained to discriminate between clockwise- and counterclockwise-rotating sector disks. Six types of disk stimuli differing in green receptor contrast were tested using three different rotational velocities. When green receptor contrast was at a minimum, bees were able to discriminate rotation directions with all colored disks at slow velocities of 6 and 12 Hz contrast frequency but not at a relatively high velocity of 24 Hz. In the goldfish experiment, the animals were trained to detect a moving red or blue disk presented in a green surround. Discrimination between this stimulus and a homogeneous green background was poor when the M-cone type was not modulated or only slightly modulated at a high stimulus velocity (7 cm/s). However, discrimination improved at slower stimulus velocities (4 and 2 cm/s). These behavioral results indicate that there is potentially an object motion system in both honeybee and goldfish that is able to incorporate color information at relatively low velocities but is color blind at higher speeds. We thus propose that both honeybees and goldfish have multiple subsystems of object motion, which include achromatic as well as chromatic processing.
Information-Processing Correlates of Computer-Assisted Word Learning by Mentally Retarded Students.
ERIC Educational Resources Information Center
Conners, Frances A.; Detterman, Douglas K.
1987-01-01
Nineteen moderately/severely retarded students (ages 9-22) completed ten 15-minute computer-assisted instruction sessions and seven basic cognitive tasks measuring simple learning, choice reaction time, relearning, probed recall, stimulus discrimination, tachistoscopic threshold, and recognition memory. Stimulus discrimination, probed recall, and…
Unsupervised visual discrimination learning of complex stimuli: Accuracy, bias and generalization.
Montefusco-Siegmund, Rodrigo; Toro, Mauricio; Maldonado, Pedro E; Aylwin, María de la L
2018-07-01
Through same-different judgements, we can discriminate an immense variety of stimuli; consequently, such judgements are critical in our everyday interaction with the environment. The quality of the judgements depends on familiarity with the stimuli. One way to improve discrimination is through learning, but to this day we lack direct evidence of how learning shapes same-different judgments with complex stimuli. We studied unsupervised visual discrimination learning in 42 participants as they performed same-different judgments with two types of unfamiliar complex stimuli in the absence of labeling or individuation. Across nine daily training sessions with equiprobable same and different stimulus pairs, participants increased their sensitivity and criterion by reducing errors with both same and different pairs. With practice, performance was superior for different pairs and there was a bias toward "different" responses. To evaluate the process underlying this bias, we manipulated the proportion of same and different pairs, which resulted in an additional proportion-induced bias, suggesting that the bias observed with equal proportions was a stimulus-processing bias. Overall, these results suggest that unsupervised discrimination learning occurs through changes in stimulus processing that increase the sensory evidence and/or the precision of working memory. Finally, the acquired discrimination ability transferred fully to novel exemplars of the practiced stimulus category, in agreement with the acquisition of category-specific perceptual expertise. Copyright © 2018 Elsevier Ltd. All rights reserved.
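The sensitivity and criterion measures referred to above are standard signal detection quantities. The sketch below computes them for same-different judgments by treating "different" pairs as signal trials, a common simplification of models built specifically for same-different designs; the trial counts are hypothetical.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) for same-different judgements.

    'Different' pairs are treated as signal trials and 'same' pairs as noise
    trials; a log-linear correction (+0.5) keeps rates away from 0 and 1.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(h) - norm.ppf(f)
    criterion = -0.5 * (norm.ppf(h) + norm.ppf(f))
    return d_prime, criterion

# Hypothetical counts for one session (100 'different' and 100 'same' pairs).
print(sdt_measures(hits=78, misses=22, false_alarms=30, correct_rejections=70))
# -> d' ~ 1.3 and a slightly negative criterion, i.e. a bias toward 'different'
```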
Kasties, Nils; Starosta, Sarah; Güntürkün, Onur; Stüttgen, Maik C.
2016-01-01
Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects’ strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells’ firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters which differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations e.g. requiring value-based choices. PMID:27762287
Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E
2016-01-01
Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.
Effects of attention and laterality on motion and orientation discrimination in deaf signers.
Bosworth, Rain G; Petrich, Jennifer A F; Dobkins, Karen R
2013-06-01
Previous studies have asked whether visual sensitivity and attentional processing in deaf signers are enhanced or altered as a result of their different sensory experiences during development, i.e., auditory deprivation and exposure to a visual language. In particular, deaf and hearing signers have been shown to exhibit a right visual field/left hemisphere advantage for motion processing, while hearing nonsigners do not. To examine whether this finding extends to other aspects of visual processing, we compared deaf signers and hearing nonsigners on motion, form, and brightness discrimination tasks. Secondly, to examine whether hemispheric lateralities are affected by attention, we employed a dual-task paradigm to measure form and motion thresholds under "full" vs. "poor" attention conditions. Deaf signers, but not hearing nonsigners, exhibited a right visual field advantage for motion processing. This effect was also seen for form processing and not for the brightness task. Moreover, no group differences were observed in attentional effects, and the motion and form visual field asymmetries were not modulated by attention, suggesting they occur at early levels of sensory processing. In sum, the results show that processing of motion and form, believed to be mediated by dorsal and ventral visual pathways, respectively, are left-hemisphere dominant in deaf signers. Published by Elsevier Inc.
A new metaphor for projection-based visual analysis and data exploration
NASA Astrophysics Data System (ADS)
Schreck, Tobias; Panse, Christian
2007-01-01
In many important application domains such as Business and Finance, Process Monitoring, and Security, huge and quickly increasing volumes of complex data are collected. Strong efforts are underway to develop automatic and interactive analysis tools for mining useful information from these data repositories. Many data analysis algorithms require an appropriate definition of similarity (or distance) between data instances to allow meaningful clustering, classification, and retrieval, among other analysis tasks. Projection-based data visualization is highly interesting (a) for visual discrimination analysis of a data set within a given similarity definition, and (b) for comparative analysis of similarity characteristics of a given data set represented by different similarity definitions. We introduce an intuitive and effective novel approach for projection-based similarity visualization for interactive discrimination analysis, data exploration, and visual evaluation of metric space effectiveness. The approach is based on the convex hull metaphor for visually aggregating sets of points in projected space, and it can be used with a variety of different projection techniques. The effectiveness of the approach is demonstrated by application on two well-known data sets. Statistical evidence supporting the validity of the hull metaphor is presented. We advocate the hull-based approach over the standard symbol-based approach to projection visualization, as it allows a more effective perception of similarity relationships and class distribution characteristics.
Empiric determination of corrected visual acuity standards for train crews.
Schwartz, Steven H; Swanson, William H
2005-08-01
Probably the most common visual standard for employment in the transportation industry is best-corrected, high-contrast visual acuity. Because such standards were often established absent empiric linkage to job performance, it is possible that a job applicant or employee who has visual acuity less than the standard may be able to satisfactorily perform the required job activities. For the transportation system that we examined, the train crew is required to inspect visually the length of the train before and during the time it leaves the station. The purpose of the inspection is to determine if an individual is in a hazardous position with respect to the train. In this article, we determine the extent to which high-contrast visual acuity can predict performance on a simulated task. Performance at discriminating hazardous from safe conditions, as depicted in projected photographic slides, was determined as a function of visual acuity. For different levels of visual acuity, which was varied through the use of optical defocus, a subject was required to label scenes as hazardous or safe. Task performance was highly correlated with visual acuity as measured under conditions normally used for vision screenings (high-illumination and high-contrast): as the acuity decreases, performance at discriminating hazardous from safe scenes worsens. This empirically based methodology can be used to establish a corrected high-contrast visual acuity standard for safety-sensitive work in transportation that is linked to the performance of a job-critical task.
Visual awareness suppression by pre-stimulus brain stimulation; a neural effect.
Jacobs, Christianne; Goebel, Rainer; Sack, Alexander T
2012-01-02
Transcranial magnetic stimulation (TMS) has established the functional relevance of early visual cortex (EVC) for visual awareness with great temporal specificity non-invasively in conscious human volunteers. Many studies have found a suppressive effect when TMS was applied over EVC 80-100 ms after the onset of the visual stimulus (post-stimulus TMS time window). Yet, a few studies have found task performance to also suffer when TMS was applied even before visual stimulus presentation (pre-stimulus TMS time window). This pre-stimulus TMS effect, however, remains a matter of debate, and its origin has mainly been ascribed to TMS-induced eye-blinking artifacts. Here, we applied chronometric TMS over EVC during the execution of a visual discrimination task, covering an exhaustive range of visual stimulus-locked TMS time windows ranging from -80 ms pre-stimulus to 300 ms post-stimulus onset. Electrooculographic (EOG) recordings, sham TMS stimulation, and vertex TMS stimulation controlled for different types of non-neural TMS effects. Our findings clearly reveal TMS-induced masking effects for both pre- and post-stimulus time windows, and for both objective visual discrimination performance and subjective visibility. Importantly, all effects proved to be still present after post hoc removal of eye blink trials, suggesting a neural origin for the pre-stimulus TMS suppression effect on visual awareness. We speculate based on our data that TMS exerts its pre-stimulus effect via generation of a neural state which interacts with subsequent visual input. Copyright © 2011 Elsevier Inc. All rights reserved.
The Symmetry of Visual Fields in Chromatic Discrimination
ERIC Educational Resources Information Center
Danilova, M. V.; Mollon, J. D.
2009-01-01
Both classical and recent reports suggest a right-hemisphere superiority for color discrimination. Testing highly-trained normal subjects and taking care to eliminate asymmetries from the testing situation, we found no significant differences between left and right hemifields or between upper and lower hemifields. This was the case for both of the…
Long-term memory of color stimuli in the jungle crow (Corvus macrorhynchos).
Bogale, Bezawork Afework; Sugawara, Satoshi; Sakano, Katsuhisa; Tsuda, Sonoko; Sugita, Shoei
2012-03-01
Wild-caught jungle crows (n = 20) were trained to discriminate between color stimuli in a two-alternative discrimination task. Next, crows were tested for long-term memory after 1-, 2-, 3-, 6-, and 10-month retention intervals. This preliminary study showed that jungle crows learn the task and reach a discrimination criterion (80% or more correct choices in two consecutive sessions of ten trials) in a few trials, and some even in a single session. Most, if not all, crows successfully remembered the constantly reinforced visual stimulus from training after all retention intervals. These results suggest that jungle crows have a high retention capacity for learned information, making no or very few errors at retention intervals of at least 10 months. This study is the first to show long-term memory for color stimuli in corvids following brief training, a result in which memory rather than rehearsal was apparent. Memory of visual color information is vital for exploitation of biological resources in crows. We suspect that jungle crows could remember the learned color discrimination task even after a much longer retention interval.
Errorless discrimination and picture fading as techniques for teaching sight words to TMR students.
Walsh, B F; Lamberts, F
1979-03-01
The effectiveness of two approaches for teaching beginning sight words to 30 TMR students was compared. In Dorry and Zeaman's picture-fading technique, words are taught through association with pictures that are faded out over a series of trials, while in the Edmark program errorless-discrimination technique, words are taught through shaped sequences of visual and auditory-visual matching-to-sample, with the target word first appearing alone and eventually appearing with orthographically similar words. Students were instructed on two lists of 10 words each, one list in the picture-fading and one in the discrimination method, in a double counter-balanced, repeated-measures design. Covariance analysis on three measures (word identification, word recognition, and picture-word matching) showed highly significant differences between the two methods. Students' performance was better after instruction with the errorless-discrimination method than after instruction with the picture-fading method. The findings on picture fading were interpreted as indicating a possible failure of the shifting of control from picture to printed word that earlier researchers have hypothesized as occurring.
Neural dynamics of motion processing and speed discrimination.
Chey, J; Grossberg, S; Mingolla, E
1998-09-01
A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning, which realizes a size-speed correlation, can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.
A Perceptuo-Cognitive-Motor Approach to the Special Child.
ERIC Educational Resources Information Center
Kornblum, Rena Beth
A movement therapist reviews ways in which a perceptuo-cognitive approach can help handicapped children in learning and in social adjustment. She identifies specific auditory problems (hearing loss, sound-ground confusion, auditory discrimination, auditory localization, auditory memory, auditory sequencing), visual problems (visual acuity,…
The pieces fit: Constituent structure and global coherence of visual narrative in RSVP.
Hagmann, Carl Erick; Cohn, Neil
2016-02-01
Recent research has shown that comprehension of visual narrative relies on the ordering and timing of sequential images. Here we tested if rapidly presented 6-image long visual sequences could be understood as coherent narratives. Half of the sequences were correctly ordered and half had two of the four internal panels switched. Participants reported whether the sequence was correctly ordered and rated its coherence. Accuracy in detecting a switch increased when panels were presented for 1 s rather than 0.5 s. Doubling the duration of the first panel did not affect results. When two switched panels were further apart, order was discriminated more accurately and coherence ratings were low, revealing that a strong local adjacency effect influenced order and coherence judgments. Switched panels at constituent boundaries or within constituents were most disruptive to order discrimination, indicating that the preservation of constituent structure is critical to visual narrative grammar. Copyright © 2016 Elsevier B.V. All rights reserved.
The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns
Duarte, Fabiola; Lemus, Luis
2017-01-01
The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought for correlations with the accuracy and reaction times (RTs) of the subjects. We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, the subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406
Seeing without Seeing? Degraded Conscious Vision in a Blindsight Patient.
Overgaard, Morten; Fehl, Katrin; Mouridsen, Kim; Bergholt, Bo; Cleeremans, Axel
2008-08-21
Blindsight patients, whose primary visual cortex is lesioned, exhibit a preserved ability to discriminate visual stimuli presented in their "blind" field, yet report no visual awareness of them. Blindsight is generally studied in experimental investigations of single patients, as very few patients have been given this "diagnosis". In our single case study of patient GR, we ask whether blindsight is best described as unconscious vision, or rather as conscious, yet severely degraded vision. In experiments 1 and 2, we successfully replicate the typical findings of previous studies on blindsight. The third experiment, however, suggests that GR's ability to discriminate amongst visual stimuli does not reflect unconscious vision, but rather degraded, yet conscious vision. As our finding results from using a method for obtaining subjective reports that has not previously been used in blindsight studies (but has been validated in studies of healthy subjects and other patients with brain injury), our results call for a reconsideration of blindsight and, arguably, of many previous studies of unconscious perception in healthy subjects.
Visual body perception in anorexia nervosa.
Urgesi, Cosimo; Fornasari, Livia; Perini, Laura; Canalaz, Francesca; Cremaschi, Silvana; Faleschini, Laura; Balestrieri, Matteo; Fabbro, Franco; Aglioti, Salvatore Maria; Brambilla, Paolo
2012-05-01
Disturbance of body perception is a central aspect of anorexia nervosa (AN) and several neuroimaging studies have documented structural and functional alterations of occipito-temporal cortices involved in visual body processing. However, it is unclear whether these perceptual deficits involve more basic aspects of others' body perception. A consecutive sample of 15 adolescent patients with AN was compared with a group of 15 age- and gender-matched controls in delayed matching to sample tasks requiring the visual discrimination of the form or of the action of others' body. Patients showed better visual discrimination performance than controls in detail-based processing of body forms but not of body actions, which positively correlated with their increased tendency to convert a signal of punishment into a signal of reinforcement (higher persistence scores). The paradoxical advantage of patients with AN in detail-based body processing may be associated with their tendency to routinely explore body parts as a consequence of their obsessive worries about body appearance. Copyright © 2012 Wiley Periodicals, Inc.
Oetjen, Sophie; Ziefle, Martina
2009-01-01
An increasing demand to work with electronic displays and to use mobile computers emphasises the need to compare visual performance while working with different screen types. In the present study, a cathode ray tube (CRT) was compared to an external liquid crystal display (LCD) and a Notebook-LCD. The influence of screen type and viewing angle on discrimination performance was studied. Physical measurements revealed that luminance and contrast values change with varying viewing angles (anisotropy). This is most pronounced in Notebook-LCDs, followed by external LCDs and CRTs. Performance data showed that LCD anisotropy has negative impacts on completing time-critical visual tasks. The best results were achieved when a CRT was used. The largest deterioration of performance resulted when participants worked with a Notebook-LCD. When it is necessary to react quickly and accurately, LCD screens have disadvantages. The anisotropy of LCD-TFTs is therefore considered to be a limiting factor that degrades visual performance.
A hierarchical word-merging algorithm with class separability measure.
Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan
2014-03-01
In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
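The abstract above does not spell out the exact separability measure, merging rule, or indexing structure, so the following is only a minimal, assumption-laden sketch of the greedy hierarchical idea: a chi-square statistic on the word-by-class contingency table stands in for the class separability measure, and candidate pairs are searched naively rather than with the paper's fast indexing.

```python
# Hedged sketch of greedy hierarchical word merging; NOT the authors' algorithm.
# The chi-square "separability" proxy and the naive pair search are assumptions.
import numpy as np

def chi2_separability(counts):
    """Chi-square statistic of a (n_words x n_classes) contingency table,
    used here as a crude proxy for how well the words separate the classes."""
    counts = counts.astype(float)
    expected = np.outer(counts.sum(axis=1), counts.sum(axis=0)) / counts.sum()
    mask = expected > 0
    return (((counts - expected) ** 2)[mask] / expected[mask]).sum()

def greedy_word_merging(counts, target_size):
    """Repeatedly merge the pair of visual words whose fusion preserves the
    most separability, until only target_size merged words remain."""
    counts = counts.astype(float).copy()
    merges = []
    while counts.shape[0] > target_size:
        best = None
        for i in range(counts.shape[0]):
            for j in range(i + 1, counts.shape[0]):
                merged = np.delete(counts, j, axis=0)
                merged[i] = counts[i] + counts[j]   # fuse words i and j
                score = chi2_separability(merged)
                if best is None or score > best[0]:
                    best = (score, i, j)
        _, i, j = best
        row_j = counts[j].copy()
        counts = np.delete(counts, j, axis=0)
        counts[i] += row_j
        merges.append((i, j))
    return counts, merges

# Example: shrink a 6-word codebook observed over 3 classes down to 3 words.
word_class_counts = np.array([[30, 2, 1], [28, 3, 2], [1, 25, 4],
                              [2, 24, 3], [3, 2, 27], [1, 4, 26]])
merged_counts, merge_history = greedy_word_merging(word_class_counts, 3)
print(merge_history)
```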
Pigeons' discrimination of paintings by Monet and Picasso
Watanabe, Shigeru; Sakamoto, Junko; Wakita, Masumi
1995-01-01
Pigeons successfully learned to discriminate color slides of paintings by Monet and Picasso. Following this training, they discriminated novel paintings by Monet and Picasso that had never been presented during the discrimination training. Furthermore, they showed generalization from Monet's to Cezanne's and Renoir's paintings or from Picasso's to Braque's and Matisse's paintings. These results suggest that pigeons' behavior can be controlled by complex visual stimuli in ways that suggest categorization. Upside-down images of Monet's paintings disrupted the discrimination, whereas inverted images of Picasso's did not. This result may indicate that the pigeons' behavior was controlled by objects depicted in impressionists' paintings but was not controlled by objects in cubists' paintings. PMID:16812755
de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier
2016-11-21
Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.
Evidence for unlimited capacity processing of simple features in visual cortex
White, Alex L.; Runeson, Erik; Palmer, John; Ernst, Zachary R.; Boynton, Geoffrey M.
2017-01-01
Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level–dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity. PMID:28654964
Advanced signal processing analysis of laser-induced breakdown spectroscopy data for the discrimination of obsidian sources
2012-02-09
different sources [12,13], but the analytical techniques needed for such analysis (XRD, INAA, & ICP-MS) are time consuming and require expensive... partial least-squares discriminant analysis (PLSDA) that used the SIMPLS solving method [33]. In the experiment design, a leave-one-sample-out (LOSO) para...
Heisenberg scaling with weak measurement: a quantum state discrimination point of view
2015-03-18
The Heisenberg scaling of the photon number for the precision of the interaction parameter between...coherent light and a spin one-half particle (or pseudo-spin) has a simple interpretation in terms of the interaction rotating the quantum state to an...
Kuhlmann, Levin; Vidyasagar, Trichur R.
2011-01-01
Controversy remains about how orientation selectivity emerges in simple cells of the mammalian primary visual cortex. In this paper, we present a computational model of how the orientation-biased responses of cells in lateral geniculate nucleus (LGN) can contribute to the orientation selectivity in simple cells in cats. We propose that simple cells are excited by lateral geniculate fields with an orientation-bias and disynaptically inhibited by unoriented lateral geniculate fields (or biased fields pooled across orientations), both at approximately the same retinotopic co-ordinates. This interaction, combined with recurrent cortical excitation and inhibition, helps to create the sharp orientation tuning seen in simple cell responses. Along with describing orientation selectivity, the model also accounts for the spatial frequency and length–response functions in simple cells, in normal conditions as well as under the influence of the GABAA antagonist, bicuculline. In addition, the model captures the response properties of LGN and simple cells to simultaneous visual stimulation and electrical stimulation of the LGN. We show that the sharp selectivity for stimulus orientation seen in primary visual cortical cells can be achieved without the excitatory convergence of the LGN input cells with receptive fields along a line in visual space, which has been a core assumption in classical models of visual cortex. We have also simulated how the full range of orientations seen in the cortex can emerge from the activity among broadly tuned channels tuned to a limited number of optimum orientations, just as in the classical case of coding for color in trichromatic primates. PMID:22013414
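As a rough illustration of only the core feedforward mechanism described above (weakly orientation-biased geniculate excitation opposed by unoriented inhibition), the toy sketch below shows how subtractive untuned inhibition plus rectification sharpens a broad bias into narrow tuning; the numbers are arbitrary assumptions, and the model's recurrent cortical interactions are omitted entirely.

```python
# Toy sketch: orientation-biased excitation minus untuned inhibition,
# rectified. Parameter values are illustrative assumptions.
import numpy as np

theta = np.linspace(-90, 90, 181)                            # orientation (deg)
lgn_excitation = 1.0 + 0.3 * np.cos(np.deg2rad(2 * theta))   # ~30% bias
untuned_inhibition = 1.1                                      # pooled, unoriented
simple_cell = np.maximum(lgn_excitation - untuned_inhibition, 0.0)

# After subtraction and rectification, the response is confined to a
# narrow band around the preferred orientation (0 degrees).
half_width = np.ptp(theta[simple_cell > 0]) / 2
print("response half-width (deg):", half_width)
```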
Synthesis of Survey Questions That Accurately Discriminate the Elements of the TPACK Framework
ERIC Educational Resources Information Center
Jaikaran-Doe, Seeta; Doe, Peter Edward
2015-01-01
A number of validated survey instruments for assessing technological pedagogical content knowledge (TPACK) do not accurately discriminate between the seven elements of the TPACK framework particularly technological content knowledge (TCK) and technological pedagogical knowledge (TPK). By posing simple questions that assess technological,…
Experience-Based Discrimination: Classroom Games
ERIC Educational Resources Information Center
Fryer, Roland G., Jr.; Goeree, Jacob K.; Holt, Charles A.
2005-01-01
The authors present a simple classroom game in which students are randomly designated as employers, purple workers, or green workers. This environment may generate "statistical" discrimination if workers of one color tend not to invest because they anticipate lower opportunities in the labor market, and these beliefs are self-confirming as…
Teaching Third-Degree Price Discrimination
ERIC Educational Resources Information Center
Round, David K.; McIver, Ron P.
2006-01-01
Third-degree price discrimination is taught in almost every intermediate microeconomics class. The theory, geometry, and the algebra behind the concept are simple, and the phenomenon is commonly associated with the sale of many of the goods and services used frequently by students. Classroom discussion is usually vibrant as students can relate…
Moehler, Tobias; Fiehler, Katja
2014-12-01
The present study investigated the coupling of selection-for-perception and selection-for-action during saccadic eye movement planning in three dual-task experiments. We focused on the effects of spatial congruency of saccade target (ST) location and discrimination target (DT) location and the time between ST-cue and Go-signal (SOA) on saccadic eye movement performance. In two experiments, participants performed a visual discrimination task at a cued location while programming a saccadic eye movement to a cued location. In the third experiment, the discrimination task was not cued and appeared at a random location. Spatial congruency of ST-location and DT-location resulted in enhanced perceptual performance irrespective of SOA. Perceptual performance in spatially incongruent trials was above chance, but only when the DT-location was cued. Saccade accuracy and precision were also affected by spatial congruency showing superior performance when the ST- and DT-location coincided. Saccade latency was only affected by spatial congruency when the DT-cue was predictive of the ST-location. Moreover, saccades consistently curved away from the incongruent DT-locations. Importantly, the effects of spatial congruency on saccade parameters only occurred when the DT-location was cued; therefore, results from experiments 1 and 2 are due to the endogenous allocation of attention to the DT-location and not caused by the salience of the probe. The SOA affected saccade latency showing decreasing latencies with increasing SOA. In conclusion, our results demonstrate that visuospatial attention can be voluntarily distributed upon spatially distinct perceptual and motor goals in dual-task situations, resulting in a decline of visual discrimination and saccade performance.
Turchi, Janita; Buffalari, Deanne; Mishkin, Mortimer
2008-01-01
Monkeys trained in either one-trial recognition at 8- to 10-min delays or multi-trial discrimination habits with 24-h intertrial intervals received systemic cholinergic and dopaminergic antagonists, scopolamine and haloperidol, respectively, in separate sessions. Recognition memory was impaired markedly by scopolamine but not at all by haloperidol, whereas habit formation was impaired markedly by haloperidol but only minimally by scopolamine. These differential drug effects point to differences in synaptic modification induced by the two neuromodulators that parallel the contrasting properties of the two types of learning, namely, fast acquisition but weak retention of memories versus slow acquisition but durable retention of habits. PMID:18685146
Solomon, Joshua A.
2007-01-01
To explain the relationship between first- and second-response accuracies in a detection experiment, Swets, Tanner, and Birdsall [Swets, J., Tanner, W. P., Jr., & Birdsall, T. G. (1961). Decision processes in perception. Psychological Review, 68, 301–340] proposed that the variance of visual signals increased with their means. However, both a low threshold and intrinsic uncertainty produce similar relationships. I measured the relationship between first- and second-response accuracies for suprathreshold contrast discrimination, which is thought to be unaffected by sensory thresholds and intrinsic uncertainty. The results are consistent with a slowly increasing variance. PMID:17961625
NASA Astrophysics Data System (ADS)
Wihardi, Y.; Setiawan, W.; Nugraha, E.
2018-01-01
In this research, we build a content-based image retrieval system (CBIRS) based on a learned distance/similarity function using Linear Discriminant Analysis (LDA) and Histogram of Oriented Gradients (HoG) features. Our method is invariant to the depiction of an image, handling image-to-image, sketch-to-image, and painting-to-image similarity. LDA decreases execution time compared with the state-of-the-art method, but its accuracy still needs improvement. The inaccuracy in our experiment arises because we did not perform a sliding-window search and because of the low number of negative samples of natural-world images.
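The following is a minimal sketch of the HoG-plus-learned-LDA-distance retrieval idea summarized above, not the authors' pipeline: the image size, HoG parameters, grayscale-input assumption, and the use of scikit-image and scikit-learn are all choices made here for illustration.

```python
# Hedged sketch of HoG features + LDA projection for depiction-invariant retrieval.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def hog_features(images, size=(128, 128)):
    """Extract a HoG descriptor from each grayscale image."""
    return np.array([
        hog(resize(img, size), orientations=9,
            pixels_per_cell=(16, 16), cells_per_block=(2, 2))
        for img in images
    ])

def build_index(train_images, train_labels):
    """Fit an LDA projection so that distances in the projected space reflect
    object identity across depictions (photo, sketch, painting)."""
    features = hog_features(train_images)
    lda = LinearDiscriminantAnalysis()
    projected = lda.fit_transform(features, train_labels)
    return lda, projected

def retrieve(query_image, lda, projected, k=5):
    """Return the indices of the k database images closest to the query."""
    q = lda.transform(hog_features([query_image]))
    distances = np.linalg.norm(projected - q, axis=1)
    return np.argsort(distances)[:k]
```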
Color-dependent learning in restrained Africanized honey bees.
Jernigan, C M; Roubik, D W; Wcislo, W T; Riveros, A J
2014-02-01
Associative color learning has been demonstrated to be very poor using restrained European honey bees unless the antennae are amputated. Consequently, our understanding of proximate mechanisms in visual information processing is handicapped. Here we test learning performance of Africanized honey bees under restrained conditions with visual and olfactory stimulation using the proboscis extension response (PER) protocol. Restrained individuals were trained to learn an association between a color stimulus and a sugar-water reward. We evaluated performance for 'absolute' learning (learned association between a stimulus and a reward) and 'discriminant' learning (discrimination between two stimuli). Restrained Africanized honey bees (AHBs) readily learned the association of color stimulus for both blue and green LED stimuli in absolute and discriminatory learning tasks within seven presentations, but not with violet as the rewarded color. Additionally, 24-h memory improved considerably during the discrimination task, compared with absolute association (15-55%). We found that antennal amputation was unnecessary and reduced performance in AHBs. Thus color learning can now be studied using the PER protocol with intact AHBs. This finding opens the way towards investigating visual and multimodal learning with application of neural techniques commonly used in restrained honey bees.
Tanaka, Tomohiro; Nishida, Satoshi
2015-01-01
The neuronal processes that underlie visual searches can be divided into two stages: target discrimination and saccade preparation/generation. This predicts that the length of time of the prediscrimination stage varies according to the search difficulty across different stimulus conditions, whereas the length of the latter postdiscrimination stage is stimulus invariant. However, recent studies have suggested that the length of the postdiscrimination interval changes with different stimulus conditions. To address whether and how the visual stimulus affects determination of the postdiscrimination interval, we recorded single-neuron activity in the lateral intraparietal area (LIP) when monkeys (Macaca fuscata) performed a color-singleton search involving four stimulus conditions that differed regarding luminance (Bright vs. Dim) and target-distractor color similarity (Easy vs. Difficult). We specifically focused on comparing activities between the Bright-Difficult and Dim-Easy conditions, in which the visual stimuli were considerably different, but the mean reaction times were indistinguishable. This allowed us to examine the neuronal activity when the difference in the degree of search speed between different stimulus conditions was minimal. We found that not only prediscrimination but also postdiscrimination intervals varied across stimulus conditions: the postdiscrimination interval was longer in the Dim-Easy condition than in the Bright-Difficult condition. Further analysis revealed that the postdiscrimination interval might vary with stimulus luminance. A computer simulation using an accumulation-to-threshold model suggested that the luminance-related difference in visual response strength at discrimination time could be the cause of different postdiscrimination intervals. PMID:25995344
Havermans, Anne; van Schayck, Onno C P; Vuurman, Eric F P M; Riedel, Wim J; van den Hurk, Job
2017-08-01
In the current study, we use functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis (MVPA) to investigate whether tobacco addiction biases basic visual processing in favour of smoking-related images. We hypothesize that the neural representation of smoking-related stimuli in the lateral occipital complex (LOC) is elevated after a period of nicotine deprivation compared to a satiated state, but that this is not the case for object categories unrelated to smoking. Current smokers (≥10 cigarettes a day) underwent two fMRI scanning sessions: one after 10 h of nicotine abstinence and the other one after smoking ad libitum. Regional blood oxygenated level-dependent (BOLD) response was measured while participants were presented with 24 blocks of 8 colour-matched pictures of cigarettes, pencils or chairs. The functional data of 10 participants were analysed through a pattern classification approach. In bilateral LOC clusters, the classifier was able to discriminate between patterns of activity elicited by visually similar smoking-related (cigarettes) and neutral objects (pencils) above empirically estimated chance levels only during deprivation (mean = 61.0%, chance (permutations) = 50.0%, p = .01) but not during satiation (mean = 53.5%, chance (permutations) = 49.9%, ns.). For all other stimulus contrasts, there was no difference in discriminability between the deprived and satiated conditions. The discriminability between smoking and non-smoking visual objects was elevated in object-selective brain region LOC after a period of nicotine abstinence. This indicates that attention bias likely affects basic visual object processing.
Khansari, Maziyar M; O’Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz
2016-01-01
The conjunctiva is a densely vascularized mucous membrane covering the sclera of the eye with a unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least squares regression and Fisher linear discriminant analysis. Conjunctival images were discriminated between groups of non-diabetic subjects and diabetic subjects at different stages of DR. The automated method's discrimination rates were higher than those of human observers. The method allowed sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring. PMID:27446692
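As a hedged sketch of the discrimination step only (not the authors' image-analysis pipeline), the snippet below runs Fisher linear discriminant analysis with leave-one-out cross-validation on per-subject microvasculature features; the feature names and the random placeholder data are assumptions for illustration.

```python
# Fisher LDA discrimination of DR stage from (placeholder) vessel features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Rows: subjects; columns might be, e.g., mean vessel diameter, tortuosity,
# fractal dimension. Labels: 0 = non-diabetic, 1-3 = increasing DR stage.
X = rng.normal(size=(40, 3))          # random stand-in for measured features
y = np.repeat([0, 1, 2, 3], 10)

rates = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print("leave-one-out discrimination rate:", rates.mean())
```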
Facial patterns in a tropical social wasp correlate with colony membership
NASA Astrophysics Data System (ADS)
Baracchi, David; Turillazzi, Stefano; Chittka, Lars
2016-10-01
Social insects excel in discriminating nestmates from intruders, typically relying on colony odours. Remarkably, some wasp species achieve such discrimination using visual information. However, while it is universally accepted that odours mediate a group level recognition, the ability to recognise colony members visually has been considered possible only via individual recognition by which wasps discriminate `friends' and `foes'. Using geometric morphometric analysis, which is a technique based on a rigorous statistical theory of shape allowing quantitative multivariate analyses on structure shapes, we first quantified facial marking variation of Liostenogaster flavolineata wasps. We then compared this facial variation with that of chemical profiles (generated by cuticular hydrocarbons) within and between colonies. Principal component analysis and discriminant analysis applied to sets of variables containing pure shape information showed that despite appreciable intra-colony variation, the faces of females belonging to the same colony resemble one another more than those of outsiders. This colony-specific variation in facial patterns was on a par with that observed for odours. While the occurrence of face discrimination at the colony level remains to be tested by behavioural experiments, overall our results suggest that, in this species, wasp faces display adequate information that might be potentially perceived and used by wasps for colony level recognition.
Fam, Justine; Holmes, Nathan; Delaney, Andrew; Crane, James; Westbrook, R Frederick
2018-06-14
Oxytocin (OT) is a neuropeptide which influences the expression of social behavior and regulates its distribution according to the social context - OT is associated with increased pro-social effects in the absence of social threat and defensive aggression when threats are present. The present experiments investigated the effects of OT beyond that of social behavior by using a discriminative Pavlovian fear conditioning protocol with rats. In Experiment 1, an OT receptor agonist (TGOT) microinjected into the basolateral amygdala facilitated the discrimination between an auditory cue that signaled shock and another auditory cue that signaled the absence of shock. This TGOT-facilitated discrimination was replicated in a second experiment where the shocked and non-shocked auditory cues were accompanied by a common visual cue. Conditioned responding on probe trials of the auditory and visual elements indicated that TGOT administration produced a qualitative shift in the learning mechanisms underlying the discrimination between the two compounds. This was confirmed by comparisons between the present results and simulated predictions of elemental and configural associative learning models. Overall, the present findings demonstrate that the neuromodulatory effects of OT influence behavior outside of the social domain. Copyright © 2018 Elsevier Ltd. All rights reserved.
Direction discriminating hearing aid system
NASA Technical Reports Server (NTRS)
Jhabvala, M.; Lin, H. C.; Ward, G.
1991-01-01
A visual display was developed for people with substantial hearing loss in either one or both ears. The system consists of three discrete units: an eyeglass assembly for the visual display of the origin or direction of sounds; a stationary general-purpose noise alarm; and a noise-seeker wand.
Nagai, Takehiro; Matsushima, Toshiki; Koida, Kowa; Tani, Yusuke; Kitazaki, Michiteru; Nakauchi, Shigeki
2015-10-01
Humans can visually recognize material categories of objects, such as glass, stone, and plastic, easily. However, little is known about the kinds of surface quality features that contribute to such material class recognition. In this paper, we examine the relationship between perceptual surface features and material category discrimination performance for pictures of materials, focusing on temporal aspects, including reaction time and effects of stimulus duration. The stimuli were pictures of objects with an identical shape but made of different materials that could be categorized into seven classes (glass, plastic, metal, stone, wood, leather, and fabric). In a pre-experiment, observers rated the pictures on nine surface features, including visual (e.g., glossiness and transparency) and non-visual features (e.g., heaviness and warmness), on a 7-point scale. In the main experiments, observers judged whether two simultaneously presented pictures were classified as the same or different material category. Reaction times and effects of stimulus duration were measured. The results showed that visual feature ratings were correlated with material discrimination performance for short reaction times or short stimulus durations, while non-visual feature ratings were correlated only with performance for long reaction times or long stimulus durations. These results suggest that the mechanisms underlying visual and non-visual feature processing may differ in terms of processing time, although the cause is unclear. Visual surface features may mainly contribute to material recognition in daily life, while non-visual features may contribute only weakly, if at all. Copyright © 2014 Elsevier Ltd. All rights reserved.
Seeing visual word forms: spatial summation, eccentricity and spatial configuration.
Kao, Chien-Hui; Chen, Chien-Chung
2012-06-01
We investigated observers' performance in detecting and discriminating visual word forms as a function of target size and retinal eccentricity. The contrast threshold of visual words was measured with a spatial two-alternative forced-choice paradigm and a PSI adaptive method. The observers were to indicate which of two sides contained a stimulus in the detection task, and which contained a real character (as opposed to a pseudo- or non-character) in the discrimination task. When the target size was sufficiently small, the detection threshold of a character decreased as its size increased, with a slope of -1/2 on log-log coordinates, up to a critical size at all eccentricities and for all stimulus types. The discrimination threshold decreased with target size with a slope of -1 up to a critical size that was dependent on stimulus type and eccentricity. Beyond that size, the threshold decreased with a slope of -1/2 on log-log coordinates before leveling out. The data was well fit by a spatial summation model that contains local receptive fields (RFs) and a summation across these filters within an attention window. Our result implies that detection is mediated by local RFs smaller than any tested stimuli and thus detection performance is dominated by summation across receptive fields. On the other hand, discrimination is dominated by a summation within a local RF in the fovea but a cross RF summation in the periphery. Copyright © 2012 Elsevier Ltd. All rights reserved.
What visual information is used for stereoscopic depth displacement discrimination?
Nefs, Harold T; Harris, Julie M
2010-01-01
There are two ways to detect a displacement in stereoscopic depth, namely by monitoring the change in disparity over time (CDOT) or by monitoring the interocular velocity difference (IOVD). Though previous studies have attempted to understand which cue is most significant for the visual system, none has designed stimuli that provide a comparison in terms of relative efficiency between them. Here we used two-frame motion and random-dot noise to deliver equivalent strengths of CDOT and IOVD information to the visual system. Using three kinds of random-dot stimuli, we were able to isolate CDOT or IOVD or deliver both simultaneously. The proportion of dots delivering CDOT or IOVD signals could be varied, and we defined the discrimination threshold as the proportion needed to detect the direction of displacement (towards or away). Thresholds were similar for stimuli containing CDOT only, and containing both CDOT and IOVD, but only one participant was able to consistently perceive the displacement for stimuli containing only IOVD. We also investigated the effect of disparity pedestals on discrimination. Performance was best when the displacement crossed the reference plane, but was not significantly different for stimuli containing CDOT only and those containing both CDOT and IOVD. When stimuli are specifically designed to provide equivalent two-frame motion or disparity-change, few participants can reliably detect displacement when IOVD is the only cue. This challenges the notion that IOVD is involved in the discrimination of direction of displacement in two-frame motion displays.
Global and local processing near the left and right hands
Langerak, Robin M.; La Mantia, Carina L.; Brown, Liana E.
2013-01-01
Visual targets can be processed more quickly and reliably when a hand is placed near the target. Both unimodal and bimodal representations of hands are largely lateralized to the contralateral hemisphere, and since each hemisphere demonstrates specialized cognitive processing, it is possible that targets appearing near the left hand may be processed differently than targets appearing near the right hand. The purpose of this study was to determine whether visual processing near the left and right hands interacts with hemispheric specialization. We presented hierarchical-letter stimuli (e.g., small characters used as local elements to compose large characters at the global level) near the left or right hands separately and instructed participants to discriminate the presence of target letters (X and O) from non-target letters (T and U) at either the global or local levels as quickly as possible. Targets appeared at either the global or local level of the display, at both levels, or were absent from the display; participants made foot-press responses. When discriminating target presence at the global level, participants responded more quickly to stimuli presented near the left hand than near either the right hand or in the no-hand condition. Hand presence did not influence target discrimination at the local level. Our interpretation is that left-hand presence may help participants discriminate global information, a right hemisphere (RH) process, and that the left hand may influence visual processing in a way that is distinct from the right hand. PMID:24194725
Natural concepts in a juvenile gorilla (Gorilla gorilla gorilla) at three levels of abstraction.
Vonk, Jennifer; MacDonald, Suzanne E
2002-01-01
The extent to which nonhumans are able to form conceptual versus perceptual discriminations remains a matter of debate. Among the great apes, only chimpanzees have been tested for conceptual understanding, defined as the ability to form discriminations not based solely on simple perceptual features of stimuli, and to transfer this learning to novel stimuli. In the present investigation, a young captive female gorilla was trained at three levels of abstraction (concrete, intermediate, and abstract) involving sets of photographs representing natural categories (e.g., orangutans vs. humans, primates vs. nonprimate animals, animals vs. foods). Within each level of abstraction, when the gorilla had learned to discriminate positive from negative exemplars in one set of photographs, a novel set was introduced. Transfer was defined in terms of high accuracy during the first two sessions with the new stimuli. The gorilla acquired discriminations at all three levels of abstraction but showed unambiguous transfer only with the concrete and abstract stimulus sets. Detailed analyses of response patterns revealed little evidence of control by simple stimulus features. Acquisition and transfer involving abstract stimulus sets suggest a conceptual basis for gorilla categorization. The gorilla's relatively poor performance with intermediate-level discriminations parallels findings with pigeons, and suggests a need to reconsider the role of perceptual information in discriminations thought to indicate conceptual behavior in nonhumans. PMID:12507006
What are the underlying units of perceived animacy? Chasing detection is intrinsically object-based.
van Buren, Benjamin; Gao, Tao; Scholl, Brian J
2017-10-01
One of the most foundational questions that can be asked about any visual process is the nature of the underlying 'units' over which it operates (e.g., features, objects, or spatial regions). Here we address this question-for the first time, to our knowledge-in the context of the perception of animacy. Even simple geometric shapes appear animate when they move in certain ways. Do such percepts arise whenever any visual feature moves appropriately, or do they require that the relevant features first be individuated as discrete objects? Observers viewed displays in which one disc (the "wolf") chased another (the "sheep") among several moving distractor discs. Critically, two pairs of discs were also connected by visible lines. In the Unconnected condition, both lines connected pairs of distractors; but in the Connected condition, one connected the wolf to a distractor, and the other connected the sheep to a different distractor. Observers in the Connected condition were much less likely to describe such displays using mental state terms. Furthermore, signal detection analyses were used to explore the objective ability to discriminate chasing displays from inanimate control displays in which the wolf moved toward the sheep's mirror-image. Chasing detection was severely impaired on Connected trials: observers could readily detect an object chasing another object, but not a line-end chasing another line-end, a line-end chasing an object, or an object chasing a line-end. We conclude that the underlying units of perceived animacy are discrete visual objects.
Montijn, Jorrit Steven; Klink, P. Christaan; van Wezel, Richard J. A.
2012-01-01
Divisive normalization models of covert attention commonly use spike rate modulations as indicators of the effect of top-down attention. In addition, an increasing number of studies have shown that top-down attention increases the synchronization of neuronal oscillations as well, particularly in gamma-band frequencies (25–100 Hz). Although modulations of spike rate and synchronous oscillations are not mutually exclusive as mechanisms of attention, there has thus far been little effort to integrate these concepts into a single framework of attention. Here, we aim to provide such a unified framework by expanding the normalization model of attention with a multi-level hierarchical structure and a time dimension; allowing the simulation of a recently reported backward progression of attentional effects along the visual cortical hierarchy. A simple cascade of normalization models simulating different cortical areas is shown to cause signal degradation and a loss of stimulus discriminability over time. To negate this degradation and ensure stable neuronal stimulus representations, we incorporate a kind of oscillatory phase entrainment into our model that has previously been proposed as the “communication-through-coherence” (CTC) hypothesis. Our analysis shows that divisive normalization and oscillation models can complement each other in a unified account of the neural mechanisms of selective visual attention. The resulting hierarchical normalization and oscillation (HNO) model reproduces several additional spatial and temporal aspects of attentional modulation and predicts a latency effect on neuronal responses as a result of cued attention. PMID:22586372
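The abstract above builds on the divisive-normalization building block; as a toy illustration of that block only (in the spirit of R = A·E / (σ + ΣA·E)), the sketch below shows how an attentional gain field reweights normalized responses. The parameter values and the single-layer setup are assumptions; the authors' hierarchical HNO model and its oscillatory coupling are not reproduced here.

```python
# Toy divisive normalization with an attentional gain field (illustrative only).
import numpy as np

def normalized_response(excitatory_drive, attention_gain, sigma=1.0):
    """Attention-weighted drive divided by the pooled drive of the population."""
    drive = attention_gain * excitatory_drive
    return drive / (sigma + drive.sum())

# Two stimuli drive a small population; attending the first stimulus
# (higher gain on its neurons) boosts its normalized response.
drive = np.array([10.0, 10.0, 2.0, 2.0])       # neurons tuned to stim 1, stim 2
attend_first = np.array([2.0, 2.0, 1.0, 1.0])  # attentional gain field
attend_none = np.ones(4)
print(normalized_response(drive, attend_first))
print(normalized_response(drive, attend_none))
```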
Recalibration of the Multisensory Temporal Window of Integration Results from Changing Task Demands
Mégevand, Pierre; Molholm, Sophie; Nayak, Ashabari; Foxe, John J.
2013-01-01
The notion of the temporal window of integration, when applied in a multisensory context, refers to the breadth of the interval across which the brain perceives two stimuli from different sensory modalities as synchronous. It maintains a unitary perception of multisensory events despite physical and biophysical timing differences between the senses. The boundaries of the window can be influenced by attention and past sensory experience. Here we examined whether task demands could also influence the multisensory temporal window of integration. We varied the stimulus onset asynchrony between simple, short-lasting auditory and visual stimuli while participants performed two tasks in separate blocks: a temporal order judgment task that required the discrimination of subtle auditory-visual asynchronies, and a reaction time task to the first incoming stimulus irrespective of its sensory modality. We defined the temporal window of integration as the range of stimulus onset asynchronies where performance was below 75% in the temporal order judgment task, as well as the range of stimulus onset asynchronies where responses showed multisensory facilitation (race model violation) in the reaction time task. In 5 of 11 participants, we observed audio-visual stimulus onset asynchronies where reaction time was significantly accelerated (indicating successful integration in this task) while performance was accurate in the temporal order judgment task (indicating successful segregation in that task). This dissociation suggests that in some participants, the boundaries of the temporal window of integration can adaptively recalibrate in order to optimize performance according to specific task demands. PMID:23951203
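The facilitation criterion used above rests on Miller's race-model inequality: the redundant (audiovisual) reaction-time distribution should not exceed the sum of the unimodal distributions unless the signals are integrated. The snippet below is a minimal sketch of that check, not the authors' analysis code; the grid of test times and the simple empirical CDF are assumptions.

```python
# Hedged sketch of a race-model-inequality check on reaction times.
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_model_violations(rt_a, rt_v, rt_av, n_points=50):
    """Return the time points at which P(RT_AV <= t) exceeds the race-model
    bound P(RT_A <= t) + P(RT_V <= t), i.e., where facilitation is inferred."""
    lo = min(np.min(rt_a), np.min(rt_v), np.min(rt_av))
    hi = max(np.max(rt_a), np.max(rt_v), np.max(rt_av))
    t = np.linspace(lo, hi, n_points)
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return t[ecdf(rt_av, t) > bound]
```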
ERIC Educational Resources Information Center
Hayward, Carol M.; Gromko, Joyce Eastlund
2009-01-01
The purpose of this study was to examine predictors of music sight-reading ability. The authors hypothesized that speed and accuracy of music sight-reading would be predicted by a combination of aural pattern discrimination, spatial-temporal reasoning, and technical proficiency. Participants (N = 70) were wind players in concert bands at a…
Aphasic Patients Exhibit a Reversal of Hemispheric Asymmetries in Categorical Color Discrimination
ERIC Educational Resources Information Center
Paluy, Yulia; Gilbert, Aubrey L.; Baldo, Juliana V.; Dronkers, Nina F.; Ivry, Richard B.
2011-01-01
Patients with left hemisphere (LH) or right hemisphere (RH) brain injury due to stroke were tested on a speeded, color discrimination task in which two factors were manipulated: (1) the categorical relationship between the target and the distracters and (2) the visual field in which the target was presented. Similar to controls, the RH patients…
Bootstrap Methods: A Very Leisurely Look.
ERIC Educational Resources Information Center
Hinkle, Dennis E.; Winstead, Wayland H.
The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for estimating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
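The resampling idea is language-agnostic; since the cited routine is written in SAS, the following is only a minimal Python sketch of the same bootstrap standard-error computation, with hypothetical function and variable names.

```python
import numpy as np

def bootstrap_se(data, statistic, n_boot=2000, seed=0):
    """Bootstrap standard error of an arbitrary statistic (illustrative sketch).

    data      : 1-D array of observations
    statistic : function mapping a sample to a scalar, e.g. np.mean or np.median
    n_boot    : number of bootstrap resamples (choice is illustrative)
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    estimates = [statistic(rng.choice(data, size=n, replace=True))
                 for _ in range(n_boot)]
    return np.std(estimates, ddof=1)          # spread of resampled estimates

# Example: bootstrap standard error of the median of a small sample.
sample = np.array([2.1, 3.4, 2.9, 4.0, 3.3, 2.8, 3.7])
print(bootstrap_se(sample, np.median))
```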
Tijsma, Mylou; Vister, Eva; Hoang, Phu; Lord, Stephen R
2017-03-01
Purpose: To determine (a) the discriminant validity for established fall risk factors and (b) the predictive validity for falls of a simple test of choice stepping reaction time (CSRT) in people with multiple sclerosis (MS). Method: People with MS (n = 210, aged 21-74 years) performed the CSRT, sensorimotor, balance and neuropsychological tests in a single session. They were then followed up for falls using monthly fall diaries for 6 months. Results: The CSRT test had excellent discriminant validity with respect to established fall risk factors. Frequent fallers (≥3 falls) performed significantly worse on the CSRT test than non-frequent fallers (0-2 falls), with the odds of frequent falls increasing by 69% with each SD increase in CSRT (OR = 1.69, 95% CI: 1.27-2.26, p < 0.001). In regression analysis, CSRT was best explained by sway, time to complete the 9-Hole Peg test, knee extension strength of the weaker leg, proprioception and time to complete the Trails B test (multiple R² = 0.449, p < 0.001). Conclusions: A simple, low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions. Implications for rehabilitation: Good choice stepping reaction time (CSRT) is required for maintaining balance. A simple, low-tech CSRT test has excellent discriminative and predictive validity in relation to falls in people with MS. This test may prove useful in documenting longitudinal changes in fall risk in relation to MS disease progression and effects of interventions.
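An odds ratio per SD increase of this kind is typically obtained from a logistic regression on the standardized predictor. The sketch below illustrates that computation under assumed variable names; it is not the study's analysis code.

```python
import numpy as np
import statsmodels.api as sm

def odds_ratio_per_sd(csrt_ms, frequent_faller):
    """Odds ratio of frequent falling per SD increase in CSRT (illustrative sketch;
    variable names are assumptions).

    csrt_ms         : array of choice stepping reaction times
    frequent_faller : binary array, 1 = three or more falls during follow-up
    """
    z = (csrt_ms - csrt_ms.mean()) / csrt_ms.std(ddof=1)   # standardize the predictor
    X = sm.add_constant(z)                                  # intercept + z-scored CSRT
    fit = sm.Logit(frequent_faller, X).fit(disp=0)          # unregularized logistic regression
    beta = fit.params[1]                                    # slope on the standardized predictor
    return float(np.exp(beta))                              # exp(beta) = OR per SD increase
```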
The Influence of Visual Ability on Learning and Memory Performance in 13 Strains of Mice
ERIC Educational Resources Information Center
Brown, Richard E.; Wong, Aimee A.
2007-01-01
We assessed visual ability in 13 strains of mice (129SI/Sv1mJ, A/J, AKR/J, BALB/cByJ, C3H/HeJ, C57BL/6J, CAST/EiJ, DBA/2J, FVB/NJ, MOLF/EiJ, SJL/J, SM/J, and SPRET/EiJ) using tests of visual detection, pattern discrimination, and visual acuity, and tested these and other mice of the same strains in a behavioral test battery that evaluated visuo-spatial…
Visual versus Phonological Abilities in Spanish Dyslexic Boys and Girls
ERIC Educational Resources Information Center
Bednarek, Dorota; Saldana, David; Garcia, Isabel
2009-01-01
Phonological and visual theories propose different primary deficits as part of the explanation for dyslexia. Both theories were put to the test in a sample of Spanish dyslexic readers. Twenty-one dyslexic and 22 typically developing children matched on chronological age were administered phonological discrimination and awareness tasks and coherent…
Can Blindsight Be Superior to "Sighted-Sight"?
ERIC Educational Resources Information Center
Trevethan, Ceri T.; Sahraie, Arash; Weiskrantz, Larry
2007-01-01
DB, the first blindsight case to be tested extensively (Weiskrantz, 1986), has demonstrated the ability to detect and discriminate a range of visual stimuli presented within his perimetrically blind visual field defect. In a temporal two-alternative forced-choice (2AFC) detection experiment we have investigated the limits of DB's detection ability…
Effects of Age and Reading Ability on Visual Discrimination.
ERIC Educational Resources Information Center
Musatti, Tullia; And Others
1981-01-01
Sixty children, prereaders and readers aged 4-6 years, matched color, shape, and letter features in pairs of cartoons. Older children and those able to read performed better, confirming the hypothesis that the development of some visual skills is a by-product of learning to read. (Author/SJL)
Enhanced Perceptual Functioning in Autism: An Update, and Eight Principles of Autistic Perception
ERIC Educational Resources Information Center
Mottron, Laurent; Dawson, Michelle; Soulieres, Isabelle; Hubert, Benedicte; Burack, Jake
2006-01-01
We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception…
NASA Astrophysics Data System (ADS)
Choi, Myoung-Hwan; Ahn, Jungryul; Park, Dae Jin; Lee, Sang Min; Kim, Kwangsoo; Cho, Dong-il Dan; Senok, Solomon S.; Koo, Kyo-in; Goo, Yong Sook
2017-02-01
Objective. Direct stimulation of retinal ganglion cells in degenerate retinas by implanting epi-retinal prostheses is a recognized strategy for restoration of visual perception in patients with retinitis pigmentosa or age-related macular degeneration. Elucidating the best stimulus-response paradigms in the laboratory using multielectrode arrays (MEA) is complicated by the fact that the short-latency spikes (within 10 ms) elicited by direct retinal ganglion cell (RGC) stimulation are obscured by the stimulus artifact generated by the electrical stimulator. Approach. We developed an artifact subtraction algorithm based on topographic prominence discrimination, in which the duration of prominences within the stimulus artifact is used to identify the artifact for subtraction and to clarify the obscured spikes, which are then quantified using standard thresholding. Main results. We found that the prominence-discrimination-based filters perform creditably in simulation conditions, successfully isolating randomly inserted spikes in the presence of simple and even complex residual artifacts. We also show that the algorithm successfully isolated short-latency spikes in an MEA-based recording from degenerate mouse retinas, where the amplitude and frequency characteristics of the stimulus artifact vary according to the distance of the recording electrode from the stimulating electrode. By ROC analysis of false positive and false negative first-spike detection rates in a dataset of 108 RGCs from four retinal patches, we found that the performance of our algorithm is comparable to that of a widely used artifact subtraction filter algorithm that uses a strategy of local polynomial approximation (SALPA). Significance. We conclude that the application of topographic prominence discrimination is a valid and useful method for subtraction of stimulation artifacts with variable amplitudes and shapes. We propose that our algorithm may be used as a stand-alone method or as a supplement to other artifact subtraction algorithms such as SALPA.
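The core idea of separating artifact from spikes by the duration of topographic prominences can be approximated with standard peak utilities. The sketch below is an illustrative simplification with assumed thresholds and variable names; it is not the published algorithm or the SALPA comparison.

```python
import numpy as np
from scipy.signal import find_peaks, peak_widths

def split_artifact_and_spikes(trace, min_prominence=50.0, max_spike_width=30):
    """Separate long-duration artifact deflections from brief spike-like peaks
    using topographic prominence (illustrative sketch; thresholds are assumptions).

    trace           : 1-D recorded voltage trace (arbitrary units)
    min_prominence  : minimum prominence for a deflection to be considered at all
    max_spike_width : peaks wider than this (in samples) are treated as artifact
    """
    rectified = np.abs(trace)
    peaks, _ = find_peaks(rectified, prominence=min_prominence)
    widths = peak_widths(rectified, peaks, rel_height=0.5)[0]   # duration of each prominence
    artifact_peaks = peaks[widths > max_spike_width]    # slow, broad deflections -> artifact
    spike_peaks = peaks[widths <= max_spike_width]      # brief deflections -> candidate spikes
    return artifact_peaks, spike_peaks
```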
Common and Distinctive Patterns of Cognitive Dysfunction in Children With Benign Epilepsy Syndromes.
Cheng, Dazhi; Yan, Xiuxian; Gao, Zhijie; Xu, Keming; Zhou, Xinlin; Chen, Qian
2017-07-01
Childhood absence epilepsy and benign childhood epilepsy with centrotemporal spikes are the most common forms of benign epilepsy syndromes. Although cognitive dysfunctions occur in children with both childhood absence epilepsy and benign childhood epilepsy with centrotemporal spikes, the similarity between their patterns of underlying cognitive impairments is not well understood. To describe these patterns, we examined multiple cognitive functions in children with childhood absence epilepsy and benign childhood epilepsy with centrotemporal spikes. In this study, 43 children with childhood absence epilepsy, 47 children with benign childhood epilepsy with centrotemporal spikes, and 64 control subjects were recruited; all completed a standardized computerized test battery assessing processing speed, spatial skills, calculation, language ability, intelligence, visual attention, and executive function. Groups were compared in these cognitive domains. Simple regression analysis was used to analyze the effects of epilepsy-related clinical variables on cognitive test scores. Compared with control subjects, children with childhood absence epilepsy and benign childhood epilepsy with centrotemporal spikes showed cognitive deficits in intelligence and executive function but performed normally in language processing. Impairment in visual attention was specific to patients with childhood absence epilepsy, whereas impaired spatial ability was specific to the children with benign childhood epilepsy with centrotemporal spikes. Simple regression analysis showed that syndrome-related clinical variables did not affect cognitive test scores. This study provides evidence of both common and distinctive cognitive features underlying the relative cognitive difficulties in children with childhood absence epilepsy and benign childhood epilepsy with centrotemporal spikes. Our data suggest that clinicians should pay particular attention to the specific cognitive deficits in children with childhood absence epilepsy and benign childhood epilepsy with centrotemporal spikes, to allow for more discriminative and potentially more effective interventions. Copyright © 2017 Elsevier Inc. All rights reserved.
No Child Left Behind? Sociology Ignored!
ERIC Educational Resources Information Center
Karen, David
2005-01-01
Too many American children are segregated into schools without standards, shuffled from grade to grade because of their age, regardless of their knowledge. This is discrimination, pure and simple--the soft bigotry of low expectations. And our nation should treat it like other forms of discrimination. We should end it. One size does not fit all…
Kawada, Y; Yamada, T; Unno, Y; Yunoki, A; Sato, Y; Hino, Y
2012-09-01
A simple but versatile data acquisition system for software coincidence experiments is described, in which no time-stamping or live-time controller is provided. Signals from the β- and γ-channels are fed separately to two fast ADCs (16 bits, 25 MHz clock maximum) via variable delay circuits and pulse-height stretchers, and also to pulse-height discriminators. The discrimination level was set just above the electronic noise. The two ADCs were controlled with a common clock signal and triggered simultaneously by the logic OR of the pulses from both discriminators. The paired digital signals for each sampling were sent to buffer memories connected to the main PC through a FIFO (First-In, First-Out) pipe via USB. After data acquisition in list mode, various processing steps, including pulse-height analyses, were performed using MS-Excel (version 2007 and later). The usefulness of this system was demonstrated for 4πβ(PS)-4πγ coincidence measurements of (60)Co, (134)Cs and (152)Eu. Possibilities of other extended applications will be touched upon. Copyright © 2012 Elsevier Ltd. All rights reserved.
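Because both ADCs are triggered by the common logic OR pulse, every list-mode record carries a paired β/γ pulse height from the same sampling, and coincidences can be counted offline by requiring both members of a pair to exceed their discrimination levels. The original processing was done in MS-Excel; the sketch below expresses the same logic in Python with assumed array names and thresholds.

```python
import numpy as np

def software_coincidence(beta_ph, gamma_ph, beta_threshold, gamma_threshold):
    """Offline coincidence counting on paired list-mode data (illustrative sketch;
    array names and thresholds are assumptions).

    beta_ph, gamma_ph : pulse-height arrays, one entry per common trigger, so
                        element i of each array belongs to the same sampling
    Returns (beta singles, gamma singles, coincidences).
    """
    beta_fired = beta_ph > beta_threshold             # β channel above its discrimination level
    gamma_fired = gamma_ph > gamma_threshold          # γ channel above its discrimination level
    n_beta = int(beta_fired.sum())
    n_gamma = int(gamma_fired.sum())
    n_coincidence = int((beta_fired & gamma_fired).sum())   # both channels fired on the same trigger
    return n_beta, n_gamma, n_coincidence
```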