Sample records for visual discrimination model

  1. Discrimination of tenants with a visual impairment on the housing market: Empirical evidence from correspondence tests.

    PubMed

    Verhaeghe, Pieter-Paul; Van der Bracht, Koen; Van de Putte, Bart

    2016-04-01

    According to the social model of disability, physical 'impairments' become disabilities through exclusion in social relations. An obvious form of social exclusion might be discrimination, for instance on the rental housing market. Although discrimination has detrimental health effects, very few studies have examined discrimination against people with a visual impairment. We aim to study (1) the extent of discrimination against individuals with a visual impairment on the rental housing market and (2) differences in rates of discrimination between landowners and real estate agents. We conducted correspondence tests among 268 properties on the Belgian rental housing market. Using matched tests, we compared reactions by realtors and landowners to tenants with and without a visual impairment. The results show that individuals with a visual impairment face substantial discrimination in the rental housing market: at least one in three lessors discriminate against individuals with a visual impairment. We further discern differences in the propensity toward discrimination according to the type of lessor. Private landlords are at least twice as likely as real estate agents to discriminate against tenants with a visual impairment. At the same time, realtors still discriminate against one in five tenants with a visual impairment. This study shows the substantial discrimination against people with a visual impairment. Given the important consequences discrimination might have for physical and mental health, further research into this topic is needed.

  2. Discrimination of numerical proportions: A comparison of binomial and Gaussian models.

    PubMed

    Raidvee, Aire; Lember, Jüri; Allik, Jüri

    2017-01-01

    Observers discriminated the numerical proportion of two sets of elements (N = 9, 13, 33, and 65) that differed either by color or orientation. According to the standard Thurstonian approach, the accuracy of proportion discrimination is determined by irreducible noise in the nervous system that stochastically transforms the number of presented visual elements onto a continuum of psychological states representing numerosity. As an alternative to this customary approach, we propose a Thurstonian-binomial model, which assumes discrete perceptual states, each of which is associated with a certain visual element. It is shown that the probability β with which each visual element can be noticed and registered by the perceptual system can explain data of numerical proportion discrimination at least as well as the continuous Thurstonian-Gaussian model, and better, if the greater parsimony of the Thurstonian-binomial model is taken into account using AIC model selection. We conclude that Gaussian and binomial models represent two different fundamental principles-internal noise vs. using only a fraction of available information-which are both plausible descriptions of visual perception.
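The model comparison described in this record can be sketched numerically. Below is a toy Python illustration, not the authors' code: two one-parameter psychometric models (a Thurstonian-Gaussian model with internal noise sigma, and a Thurstonian-binomial model with registration probability beta) are fit by maximum likelihood to made-up choice counts and compared by AIC. The stimulus parameters, response counts, and the exact decision rule (count comparison with guessing on ties) are illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt, comb, log

def gaussian_model(p, n, sigma):
    # Thurstonian-Gaussian: the internal difference signal between the two
    # subsets is normally distributed around n*(2p - 1) with s.d. sigma
    return 0.5 * (1 + erf(n * (2 * p - 1) / (sigma * sqrt(2))))

def binomial_model(p, n, beta):
    # Thurstonian-binomial: each element is independently noticed with
    # probability beta; respond "A" if more A- than B-elements register
    # (ties resolved by a fair guess)
    na = round(p * n); nb = n - na
    pa = [comb(na, i) * beta**i * (1 - beta)**(na - i) for i in range(na + 1)]
    pb = [comb(nb, j) * beta**j * (1 - beta)**(nb - j) for j in range(nb + 1)]
    win = sum(pa[i] * pb[j] for i in range(na + 1) for j in range(nb + 1) if i > j)
    tie = sum(pa[i] * pb[i] for i in range(min(na, nb) + 1))
    return win + 0.5 * tie

def loglik(model, theta, props, n, chose_a, n_trials):
    # binomial log-likelihood of the observed "A more numerous" counts
    ll = 0.0
    for p, c in zip(props, chose_a):
        q = min(max(model(p, n, theta), 1e-9), 1 - 1e-9)
        ll += c * log(q) + (n_trials - c) * log(1 - q)
    return ll

# made-up data: proportion of A-elements among n = 12, 40 trials per level
props   = [0.25, 0.33, 0.42, 0.58, 0.67, 0.75]
chose_a = [3, 9, 15, 26, 32, 37]
n, trials = 12, 40

aic = {}
for name, model, grid in [("gaussian", gaussian_model, np.linspace(0.5, 12, 200)),
                          ("binomial", binomial_model, np.linspace(0.05, 0.95, 181))]:
    theta = max(grid, key=lambda t: loglik(model, t, props, n, chose_a, trials))
    aic[name] = 2 * 1 - 2 * loglik(model, theta, props, n, chose_a, trials)  # k = 1

print(aic)  # lower AIC -> preferred model for these synthetic data
```

With real data the two models can fit comparably well, which is why the abstract's parsimony-corrected comparison matters.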

  3. Evidence that primary visual cortex is required for image, orientation, and motion discrimination by rats.

    PubMed

    Petruno, Sarah K; Clark, Robert E; Reinagel, Pamela

    2013-01-01

    The pigmented Long-Evans rat has proven to be an excellent subject for studying visually guided behavior including quantitative visual psychophysics. This observation, together with its experimental accessibility and its close homology to the mouse, has made it an attractive model system in which to dissect the thalamic and cortical circuits underlying visual perception. Given that visually guided behavior in the absence of primary visual cortex has been described in the literature, however, it is an empirical question whether specific visual behaviors will depend on primary visual cortex in the rat. Here we tested the effects of cortical lesions on performance of two-alternative forced-choice visual discriminations by Long-Evans rats. We present data from one highly informative subject that learned several visual tasks and then received a bilateral lesion ablating >90% of primary visual cortex. After the lesion, this subject had a profound and persistent deficit in complex image discrimination, orientation discrimination, and full-field optic flow motion discrimination, compared with both pre-lesion performance and sham-lesion controls. Performance was intact, however, on another visual two-alternative forced-choice task that required approaching a salient visual target. A second highly informative subject learned several visual tasks prior to receiving a lesion ablating >90% of medial extrastriate cortex. This subject showed no impairment on any of the four task categories. Taken together, our data provide evidence that these image, orientation, and motion discrimination tasks require primary visual cortex in the Long-Evans rat, whereas approaching a salient visual target does not.

  4. A neurocomputational model of figure-ground discrimination and target tracking.

    PubMed

    Sun, H; Liu, L; Guo, A

    1999-01-01

    A neurocomputational model is presented for figure-ground discrimination and target tracking. The model incorporates elementary motion detectors of the correlation type, computational modules for saccadic and smooth-pursuit eye movements, an oscillatory neural-network motion perception module, and a selective attention module. It is shown that, through oscillatory amplitude and frequency encoding and selective synchronization of phase oscillators, the figure and the ground can be successfully discriminated from each other. The receptive fields developed by hidden units of the networks were surprisingly similar to the actual receptive fields and columnar organization found in the primate visual cortex. It is suggested that equivalent mechanisms may exist in the primate visual cortex to discriminate figure from ground in both the temporal and spatial domains.

  5. Functional MRI Representational Similarity Analysis Reveals a Dissociation between Discriminative and Relative Location Information in the Human Visual System.

    PubMed

    Roth, Zvi N

    2016-01-01

    Neural responses in visual cortex are governed by a topographic mapping from retinal locations to cortical responses. Moreover, at the voxel population level early visual cortex (EVC) activity enables accurate decoding of stimuli locations. However, in many cases information enabling one to discriminate between locations (i.e., discriminative information) may be less relevant than information regarding the relative location of two objects (i.e., relative information). For example, when planning to grab a cup, determining whether the cup is located at the same retinal location as the hand is hardly relevant, whereas the location of the cup relative to the hand is crucial for performing the action. We have previously used multivariate pattern analysis techniques to measure discriminative location information, and found the highest levels in EVC, in line with other studies. Here we show, using representational similarity analysis, that availability of discriminative information in fMRI activation patterns does not entail availability of relative information. Specifically, we find that relative location information can be reliably extracted from activity patterns in posterior intraparietal sulcus (pIPS), but not from EVC, where we find the spatial representation to be warped. We further show that this variability in relative information levels between regions can be explained by a computational model based on an array of receptive fields. Moreover, when the model's receptive fields are extended to include inhibitory surround regions, the model can account for the spatial warping in EVC. These results demonstrate how size and shape properties of receptive fields in human visual cortex contribute to the transformation of discriminative spatial representations into relative spatial representations along the visual stream.

  7. Improved Discrimination of Visual Stimuli Following Repetitive Transcranial Magnetic Stimulation

    PubMed Central

    Waterston, Michael L.; Pack, Christopher C.

    2010-01-01

    Background: Repetitive transcranial magnetic stimulation (rTMS) at certain frequencies increases thresholds for motor-evoked potentials and phosphenes following stimulation of cortex. Consequently, rTMS is often assumed to introduce a “virtual lesion” in stimulated brain regions, with correspondingly diminished behavioral performance. Methodology/Principal Findings: Here we investigated the effects of rTMS to visual cortex on subjects' ability to perform visual psychophysical tasks. Contrary to expectations of a visual deficit, we find that rTMS often improves the discrimination of visual features. For coarse orientation tasks, discrimination of a static stimulus improved consistently following theta-burst stimulation of the occipital lobe. Using a reaction-time task, we found that these improvements occurred throughout the visual field and lasted beyond one hour post-rTMS. Low-frequency (1 Hz) stimulation yielded similar improvements. In contrast, we did not find consistent effects of rTMS on performance in a fine orientation discrimination task. Conclusions/Significance: Overall our results suggest that rTMS generally improves or has no effect on visual acuity, with the nature of the effect depending on the type of stimulation and the task. We interpret our results in the context of an ideal-observer model of visual perception. PMID:20442776

  8. [Discrimination of varieties of brake fluid using visual-near infrared spectra].

    PubMed

    Jiang, Lu-lu; Tan, Li-hong; Qiu, Zheng-jun; Lu, Jiang-feng; He, Yong

    2008-06-01

    A new method was developed to rapidly discriminate brands of brake fluid by means of visual-near infrared spectroscopy. Five brands of brake fluid were analyzed using a handheld near infrared spectrograph manufactured by ASD Company, and 60 samples were obtained from each brand. The sample data were pretreated using average smoothing and the standard normal variate method, and then analyzed using principal component analysis (PCA). A 2-dimensional plot based on the first and second principal components indicated that the clustering of the different brake fluids was distinct. The first 6 principal components were taken as input variables, and the brand of brake fluid as the output variable, to build the discrimination model by the stepwise discriminant analysis method. Two hundred twenty-five randomly selected samples were used to create the model, and the remaining 75 samples were used to verify it. The correct discrimination rate was 94.67%, indicating that the proposed method performs well in classification and discrimination. It provides a new way to rapidly discriminate different brands of brake fluid.
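The pipeline described in this record (smoothing and SNV pretreatment, PCA, then a discriminant model built on 225 samples and verified on 75) can be sketched as follows. This is a hedged Python/numpy illustration on synthetic spectra, not the paper's data or code, and a nearest-centroid classifier in PC space stands in for stepwise discriminant analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-in for Vis-NIR spectra: 5 "brands" x 60 samples x 256 wavelengths
n_brands, n_samples, n_wl = 5, 60, 256
base = rng.normal(size=(n_brands, n_wl)).cumsum(axis=1)   # smooth brand signatures
X = np.repeat(base, n_samples, axis=0) + rng.normal(0, 0.5, (n_brands * n_samples, n_wl))
y = np.repeat(np.arange(n_brands), n_samples)

# 1) pretreatment: moving-average smoothing + standard normal variate (SNV)
kernel = np.ones(9) / 9
X = np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 1, X)
X = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# 2) PCA via SVD; keep the first 6 components as model inputs
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:6].T

# 3) simple discriminant model: nearest class centroid in PC space,
#    225 training samples and 75 hold-out samples, as in the record
idx = rng.permutation(len(y))
train, test = idx[:225], idx[225:]
centroids = np.stack([scores[train][y[train] == k].mean(axis=0) for k in range(n_brands)])
pred = np.argmin(((scores[test][:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
print(f"hold-out accuracy: {(pred == y[test]).mean():.2%}")
```

Because the synthetic brand signatures are strongly separated, the toy classifier scores well; the 94.67% figure in the record refers to the real stepwise discriminant model, not this sketch.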

  9. Sounds Activate Visual Cortex and Improve Visual Discrimination

    PubMed Central

    Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.

    2014-01-01

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419

  10. Detection and recognition of simple spatial forms

    NASA Technical Reports Server (NTRS)

    Watson, A. B.

    1983-01-01

    A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.
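A drastically reduced sketch of such a sensor-array model follows, assuming a small centered Gabor bank (one-octave frequency steps, four orientations, two phases) and an ideal observer limited by iid sensor noise; the model's eccentricity scaling, octave bandwidth tuning, and uncertainty terms are omitted here:

```python
import numpy as np

def gabor(size, freq, theta, phase):
    # oriented cosine carrier in a circular Gaussian envelope
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * (size / 6) ** 2))
    return envelope * np.cos(2 * np.pi * freq * xr + phase)

def sensor_responses(img):
    # linear responses of a small sensor array (4 freqs x 4 orients x 2 phases)
    size = img.shape[0]
    resp = [np.sum(img * gabor(size, f / size, th, ph))
            for f in (2, 4, 8, 16)
            for th in np.arange(4) * np.pi / 4
            for ph in (0.0, np.pi / 2)]
    return np.array(resp)

def dprime(img_a, img_b, noise_sd=50.0):
    # ideal-observer discriminability: Euclidean distance between the two
    # response vectors relative to iid Gaussian sensor noise
    return np.linalg.norm(sensor_responses(img_a) - sensor_responses(img_b)) / noise_sd

size = 32
yy, xx = np.mgrid[:size, :size]
vertical = np.cos(2 * np.pi * 4 / size * xx)
horizontal = np.cos(2 * np.pi * 4 / size * yy)
print(dprime(vertical, horizontal))   # positive: the two gratings are discriminable
print(dprime(vertical, vertical))     # zero: identical images are indiscriminable
```

The noise standard deviation here is an arbitrary assumption; in the full model it is calibrated so that predicted visibility matches human contrast thresholds.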

  11. Time- and Space-Order Effects in Timed Discrimination of Brightness and Size of Paired Visual Stimuli

    ERIC Educational Resources Information Center

    Patching, Geoffrey R.; Englund, Mats P.; Hellstrom, Ake

    2012-01-01

    Despite the importance of both response probability and response time for testing models of choice, there is a dearth of chronometric studies examining systematic asymmetries that occur over time- and space-orders in the method of paired comparisons. In this study, systematic asymmetries in discriminating the magnitude of paired visual stimuli are…

  12. Detection of visual signals by rats: A computational model

    EPA Science Inventory

    We applied a neural network model of classical conditioning proposed by Schmajuk, Lam, and Gray (1996) to visual signal detection and discrimination tasks designed to assess sustained attention in rats (Bushnell, 1999). The model describes the animals’ expectation of receiving fo...

  13. Optimal visuotactile integration for velocity discrimination of self-hand movements

    PubMed Central

    Chancel, M.; Blanchard, C.; Guerraz, M.; Montagnini, A.

    2016-01-01

    Illusory hand movements can be elicited by a textured disk or a visual pattern rotating under one's hand, while proprioceptive inputs convey immobility information (Blanchard C, Roll R, Roll JP, Kavounoudias A. PLoS One 8: e62475, 2013). Here, we investigated whether visuotactile integration can optimize velocity discrimination of illusory hand movements in line with Bayesian predictions. We induced illusory movements in 15 volunteers by visual and/or tactile stimulation delivered at six angular velocities. Participants had to compare hand illusion velocities with a 5°/s hand reference movement in an alternative forced choice paradigm. Results showed that the discrimination threshold decreased in the visuotactile condition compared with unimodal (visual or tactile) conditions, reflecting better bimodal discrimination. The perceptual strength (gain) of the illusions also increased: the stimulation required to give rise to a 5°/s illusory movement was slower in the visuotactile condition compared with each of the two unimodal conditions. The maximum likelihood estimation model satisfactorily predicted the improved discrimination threshold but not the increase in gain. When we added a zero-centered prior, reflecting immobility information, the Bayesian model did actually predict the gain increase but systematically overestimated it. Interestingly, the predicted gains better fit the visuotactile performances when a proprioceptive noise was generated by covibrating antagonist wrist muscles. These findings show that kinesthetic information of visual and tactile origins is optimally integrated to improve velocity discrimination of self-hand movements. However, a Bayesian model alone could not fully describe the illusory phenomenon pointing to the crucial importance of the omnipresent muscle proprioceptive cues with respect to other sensory cues for kinesthesia. PMID:27385802
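The maximum-likelihood (reliability-weighted) integration and the zero-centered prior described above can be written out in a few lines. The velocity and noise values below are made up for illustration and are not the study's estimates:

```python
import numpy as np

def mle_fusion(mu_v, sigma_v, mu_t, sigma_t):
    # maximum-likelihood cue combination: each cue is weighted by its
    # inverse variance; the fused variance is below both unimodal variances
    w_v, w_t = 1 / sigma_v**2, 1 / sigma_t**2
    mu = (w_v * mu_v + w_t * mu_t) / (w_v + w_t)
    sigma = np.sqrt(1 / (w_v + w_t))
    return mu, sigma

def with_immobility_prior(mu, sigma, sigma_prior):
    # combining the fused estimate with a zero-centered prior (reflecting
    # proprioceptive immobility information) shrinks the perceived velocity
    # toward zero, i.e., changes the gain of the illusion
    w, w_p = 1 / sigma**2, 1 / sigma_prior**2
    return w * mu / (w + w_p), np.sqrt(1 / (w + w_p))

# hypothetical unimodal estimates of illusory hand velocity (deg/s)
mu_vt, sigma_vt = mle_fusion(5.0, 1.2, 5.0, 0.9)
mu_post, sigma_post = with_immobility_prior(mu_vt, sigma_vt, sigma_prior=4.0)
print(sigma_vt, mu_post)  # bimodal sigma beats both unimodal sigmas; mu shrinks below 5
```

This is exactly the structure of the models tested in the record: plain MLE predicts the threshold improvement, and the added zero-centered prior is what produces a gain change.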

  14. [Visual perception and its disorders].

    PubMed

    Ruf-Bächtiger, L

    1989-11-21

    It is the brain, and not the eye, that decides what is perceived. In spite of this fact, quite a lot is known about the functioning of the eye and the first sections of the optic tract, but little about the actual process of perception. Examination of visual perception and its malfunctions therefore relies on certain hypotheses. Proceeding from the model of functional brain systems, various functional domains of visual perception can be distinguished. Among the more important of these domains are digit span, visual discrimination and figure-ground discrimination. Evaluation of these functional domains allows us to better understand children with disorders of visual perception and to develop more effective treatment methods.

  15. Perceptual asymmetry in texture perception.

    PubMed

    Williams, D; Julesz, B

    1992-07-15

    A fundamental property of human visual perception is our ability to distinguish between textures. A concerted effort has been made to account for texture segregation in terms of linear spatial filter models and their nonlinear extensions. However, for certain texture pairs the ease of discrimination changes when the role of figure and ground are reversed. This asymmetry poses a problem for both linear and nonlinear models. We have isolated a property of texture perception that can account for this asymmetry in discrimination: subjective closure. This property, which is also responsible for visual illusions, appears to be explainable by early visual processes alone. Our results force a reexamination of the process of human texture segregation and of some recent models that were introduced to explain it.

  16. Neural dynamics of motion processing and speed discrimination.

    PubMed

    Chey, J; Grossberg, S; Mingolla, E

    1998-09-01

    A neural network model of visual motion perception and speed discrimination is presented. The model shows how a distributed population code of speed tuning, one that realizes a size-speed correlation, can be derived from the simplest mechanisms whereby activations of multiple spatially short-range filters of different size are transformed into speed-tuned cell responses. These mechanisms use transient cell responses to moving stimuli, output thresholds that covary with filter size, and competition. These mechanisms are proposed to occur in the V1-->MT cortical processing stream. The model reproduces empirically derived speed discrimination curves and simulates data showing how visual speed perception and discrimination can be affected by stimulus contrast, duration, dot density and spatial frequency. Model motion mechanisms are analogous to mechanisms that have been used to model 3-D form and figure-ground perception. The model forms the front end of a larger motion processing system that has been used to simulate how global motion capture occurs, and how spatial attention is drawn to moving forms. It provides a computational foundation for an emerging neural theory of 3-D form and motion perception.

  17. Exploiting Attribute Correlations: A Novel Trace Lasso-Based Weakly Supervised Dictionary Learning Method.

    PubMed

    Wu, Lin; Wang, Yang; Pan, Shirui

    2017-12-01

    It is now well established that sparse representation models work effectively for many visual recognition tasks and have pushed forward the success of dictionary learning therein. Recent studies of dictionary learning focus on learning discriminative atoms instead of purely reconstructive ones. However, the existence of intraclass diversities (i.e., data objects within the same category that exhibit large visual dissimilarities) and interclass similarities (i.e., data objects from distinct classes that share many visual similarities) makes it challenging to learn effective recognition models. A large number of labeled data objects are required to learn models that can effectively characterize these subtle differences. However, labeled data objects are often difficult to obtain, making it hard to learn a monolithic dictionary that is discriminative enough. To address these limitations, in this paper we propose a weakly supervised dictionary learning method that automatically learns a discriminative dictionary by fully exploiting visual attribute correlations rather than label priors. In particular, the intrinsic attribute correlations are deployed as a critical cue to guide the process of object categorization, and then a set of subdictionaries is jointly learned with respect to each category. The resulting dictionary is highly discriminative and leads to intraclass-diversity-aware sparse representations. Extensive experiments on image classification and object recognition demonstrate the effectiveness of our approach.

  18. Visual recovery in cortical blindness is limited by high internal noise

    PubMed Central

    Cavanaugh, Matthew R.; Zhang, Ruyuan; Melnick, Michael D.; Das, Anasuya; Roberts, Mariel; Tadin, Duje; Carrasco, Marisa; Huxlin, Krystel R.

    2015-01-01

    Damage to the primary visual cortex typically causes cortical blindness (CB) in the hemifield contralateral to the damaged hemisphere. Recent evidence indicates that visual training can partially reverse CB at trained locations. Whereas training induces near-complete recovery of coarse direction and orientation discriminations, deficits in fine motion processing remain. Here, we systematically disentangle components of the perceptual inefficiencies present in CB fields before and after coarse direction discrimination training. In seven human CB subjects, we measured threshold versus noise functions before and after coarse direction discrimination training in the blind field and at corresponding intact field locations. Threshold versus noise functions were analyzed within the framework of the linear amplifier model and the perceptual template model. Linear amplifier model analysis identified internal noise as a key factor differentiating motion processing across the tested areas, with visual training reducing internal noise in the blind field. Differences in internal noise also explained residual perceptual deficits at retrained locations. These findings were confirmed with perceptual template model analysis, which further revealed that the major residual deficits between retrained and intact field locations could be explained by differences in internal additive noise. There were no significant differences in multiplicative noise or the ability to process external noise. Together, these results highlight the critical role of altered internal noise processing in mediating training-induced visual recovery in CB fields, and may explain residual perceptual deficits relative to intact regions of the visual field. PMID:26389544
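The linear-amplifier-model account of threshold-versus-noise (TvN) functions described in this record can be sketched directly. The noise and gain parameters below are illustrative, not the study's fitted values:

```python
import numpy as np

def lam_threshold(n_ext, n_eq, beta, d_prime=1.0):
    # linear amplifier model: the contrast threshold is limited by the sum of
    # external noise and the observer's equivalent internal (additive) noise
    return (d_prime / beta) * np.sqrt(n_ext**2 + n_eq**2)

n_ext = np.linspace(0.0, 0.3, 7)                       # external noise contrasts
blind = lam_threshold(n_ext, n_eq=0.12, beta=0.8)      # high internal noise
retrained = lam_threshold(n_ext, n_eq=0.06, beta=0.8)  # training lowers n_eq
# in the internal-noise-limited regime (low n_ext) thresholds differ by ~2x;
# at high external noise the two TvN curves converge
print(blind[0] / retrained[0], blind[-1] / retrained[-1])
```

This is the signature pattern the study reports: a difference in equivalent internal noise elevates thresholds at low external noise but leaves high-external-noise performance nearly unchanged.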

  19. Image Discrimination Models for Object Detection in Natural Backgrounds

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.

    2000-01-01

    This paper reviews work accomplished and in progress at NASA Ames relating to visual target detection. The focus is on image discrimination models, starting with Watson's pioneering development of a simple spatial model and progressing through this model's descendants and extensions. The application of image discrimination models to target detection is described, and results are reviewed for Rohaly's vehicle target data and the Search 2 data. The paper concludes with a description of our work modeling the process by which observers learn target templates, and of methods for elucidating those templates.

  20. Sounds activate visual cortex and improve visual discrimination.

    PubMed

    Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A

    2014-07-16

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound.

  1. Visual cues for woodpeckers: light reflectance of decayed wood varies by decay fungus

    USGS Publications Warehouse

    O'Daniels, Sean T.; Kesler, Dylan C.; Mihail, Jeanne D.; Webb, Elisabeth B.; Werner, Scott J.

    2018-01-01

    The appearance of wood substrates is likely relevant to bird species with life histories that require regular interactions with wood for food and shelter. Woodpeckers detect decayed wood for cavity placement or foraging, and some species may be capable of detecting trees decayed by specific fungi; however, a mechanism allowing for such specificity remains unidentified. We hypothesized that decay fungi associated with woodpecker cavity sites alter the substrate reflectance in a species-specific manner that is visually discriminable by woodpeckers. We grew 10 species of wood decay fungi from pure cultures on sterile wood substrates of 3 tree species. We then measured the relative reflectance spectra of decayed and control wood wafers and compared them using the receptor noise-limited (RNL) color discrimination model. The RNL model has been used in studies of feather coloration, egg shells, flowers, and fruit to model how the colors of objects appear to birds. Our analyses indicated that 6 of 10 decayed substrate/control comparisons were above the threshold of discrimination (i.e., discriminable by avian viewers), and 12 of 13 decayed substrate comparisons were also above threshold for a hypothetical woodpecker. We conclude that woodpeckers should be capable of visually detecting decayed wood on trees where bark is absent, and that they should also be able to visually detect species-specific differences in wood substrates decayed by the fungi used in this study. Our results provide evidence for a visual mechanism by which woodpeckers could identify and select substrates decayed by specific fungi, which has implications for understanding ecologically important woodpecker–fungus interactions.
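The receptor noise-limited model computes a chromatic distance dS between two stimuli from receptor quantum catches and per-channel noise. A minimal sketch of the standard tetrachromatic (avian) form of the model, with made-up quantum catches and noise values rather than the study's measurements:

```python
import numpy as np

def rnl_delta_s(qa, qb, e):
    # Receptor-noise-limited (Vorobyev-Osorio) chromatic distance for a
    # tetrachromatic eye. qa, qb: quantum catches of the four cone classes
    # for stimuli A and B; e: noise in each receptor channel. dS is in JND
    # units; dS > 1 is the conventional discrimination threshold.
    f = np.log(np.asarray(qa, float) / np.asarray(qb, float))  # receptor contrasts
    e = np.asarray(e, float)
    num = 0.0
    for i in range(4):
        for j in range(i + 1, 4):
            k, l = [c for c in range(4) if c not in (i, j)]
            num += (e[k] * e[l]) ** 2 * (f[j] - f[i]) ** 2
    den = sum(np.prod(np.delete(e, c)) ** 2 for c in range(4))
    return float(np.sqrt(num / den))

# hypothetical quantum catches (UVS, SWS, MWS, LWS) for decayed vs. control wood
decayed = [0.08, 0.22, 0.47, 0.55]
control = [0.07, 0.20, 0.41, 0.51]
e = [0.10, 0.07, 0.06, 0.05]   # assumed relative channel noise
print(rnl_delta_s(decayed, control, e))  # dS > 1 would indicate a discriminable pair
```

In practice the quantum catches are computed by integrating measured reflectance spectra against receptor sensitivities and an illuminant, and the noise terms from receptor abundances and a Weber fraction; those steps are omitted here.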

  2. Heterogeneity effects in visual search predicted from the group scanning model.

    PubMed

    Macquistan, A D

    1994-12-01

    The group scanning model of feature integration theory (Treisman & Gormican, 1988) suggests that subjects search visual displays serially by groups, but process items within each group in parallel. The size of these groups is determined by the discriminability of the targets in the background of distractors. When the target is poorly discriminable, the size of the scanned group will be small, and search will be slow. The model predicts that group size will be smallest when targets of an intermediate value on a perceptual dimension are presented in a heterogeneous background of distractors that have higher and lower values on the same dimension. Experiment 1 demonstrates this effect. Experiment 2 controls for a possible confound of decision complexity in Experiment 1. For simple feature targets, the group scanning model provides a good account of the visual search process.

  3. Deep neural networks for modeling visual perceptual learning.

    PubMed

    Wenliang, Li; Seitz, Aaron R

    2018-05-23

    Understanding visual perceptual learning (VPL) has become increasingly more challenging as new phenomena are discovered with novel stimuli and training paradigms. While existing models aid our knowledge of critical aspects of VPL, the connections shown by these models between behavioral learning and plasticity across different brain areas are typically superficial. Most models explain VPL as readout from simple perceptual representations to decision areas and are not easily adaptable to explain new findings. Here, we show that a well-known instance of deep neural network (DNN), while not designed specifically for VPL, provides a computational model of VPL with enough complexity to be studied at many levels of analyses. After learning a Gabor orientation discrimination task, the DNN model reproduced key behavioral results, including increasing specificity with higher task precision, and also suggested that learning precise discriminations could asymmetrically transfer to coarse discriminations when the stimulus conditions varied. In line with the behavioral findings, the distribution of plasticity moved towards lower layers when task precision increased, and this distribution was also modulated by tasks with different stimulus types. Furthermore, learning in the network units demonstrated close resemblance to extant electrophysiological recordings in monkey visual areas. Altogether, the DNN fulfilled predictions of existing theories regarding specificity and plasticity, and reproduced findings of tuning changes in neurons of the primate visual areas. Although the comparisons were mostly qualitative, the DNN provides a new method of studying VPL and can serve as a testbed for theories and assist in generating predictions for physiological investigations. SIGNIFICANCE STATEMENT Visual perceptual learning (VPL) has been found to cause changes at multiple stages of the visual hierarchy. 
We found that training a deep neural network (DNN) on an orientation discrimination task produced similar behavioral and physiological patterns found in human and monkey experiments. Unlike existing VPL models, the DNN was pre-trained on natural images to reach high performance in object recognition but was not designed specifically for VPL, and yet it fulfilled predictions of existing theories regarding specificity and plasticity, and reproduced findings of tuning changes in neurons of the primate visual areas. When used with care, this unbiased and deep-hierarchical model can provide new ways of studying VPL from behavior to physiology. Copyright © 2018 the authors.
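    The Gabor stimuli used in this kind of orientation discrimination training are straightforward to generate; a minimal sketch (all parameter values are illustrative, not taken from the study):

```python
import numpy as np

def gabor(size, sigma, wavelength, theta_deg, phase=0.0):
    """Oriented Gabor patch: a sinusoidal carrier at orientation
    `theta_deg` under a circular Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    theta = np.deg2rad(theta_deg)
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotate the carrier axis
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# e.g. a 65x65 patch oriented 10 degrees away from vertical
patch = gabor(64, sigma=8.0, wavelength=12.0, theta_deg=10.0)
```

    A precise discrimination task would train on small orientation differences between such patches, a coarse task on large ones.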

  4. Visual discrimination in an orangutan (Pongo pygmaeus): measuring visual preference.

    PubMed

    Hanazuka, Yuki; Kurotori, Hidetoshi; Shimizu, Mika; Midorikawa, Akira

    2012-04-01

    Although previous studies have confirmed that trained orangutans visually discriminate between mammals and artificial objects, whether orangutans without operant conditioning can discriminate remains unknown. The visual discrimination ability in an orangutan (Pongo pygmaeus) with no experience in operant learning was examined using measures of visual preference. Sixteen color photographs of inanimate objects and of mammals with four legs were randomly presented to an orangutan. The results showed that the mean looking time at photographs of mammals with four legs was longer than that for inanimate objects, suggesting that the orangutan discriminated mammals with four legs from inanimate objects. The results implied that orangutans who have not experienced operant conditioning may possess the ability to discriminate visually.

  5. Assessing Visuospatial Skills in Parkinson's: Comparison of Neuropsychological Assessment Battery Visual Discrimination to the Judgment of Line Orientation.

    PubMed

    Renfroe, Jenna B; Turner, Travis H; Hinson, Vanessa K

    2017-02-01

    The Judgment of Line Orientation (JOLO) test is widely used in assessing visuospatial deficits in Parkinson's disease (PD). The Neuropsychological Assessment Battery (NAB) offers the Visual Discrimination test, with age and education corrections, parallel forms, and a co-normed standardization sample for comparisons within and between domains. However, NAB Visual Discrimination has not been validated in PD, and may not measure the same construct as JOLO. A heterogeneous sample of 47 PD patients completed the JOLO and NAB Visual Discrimination within a broader neuropsychological evaluation. Pearson correlations assessed relationships between JOLO and NAB Visual Discrimination performances. Raw and demographically corrected scores from JOLO and Visual Discrimination were only weakly correlated. The NAB Visual Discrimination subtest was moderately correlated with overall cognitive functioning, whereas the JOLO was not. Despite apparent virtues, the results do not support NAB Visual Discrimination as an alternative to JOLO in assessing visuospatial functioning in PD. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Experimental and Computational Studies of Cortical Neural Network Properties Through Signal Processing

    NASA Astrophysics Data System (ADS)

    Clawson, Wesley Patrick

    Previous studies, both theoretical and experimental, of network-level dynamics in the cerebral cortex show evidence for a statistical phenomenon called criticality: a phenomenon originally studied in the context of phase transitions in physical systems and associated with favorable information processing in the context of the brain. The focus of this thesis is to expand upon past results with new experimentation and modeling to show a relationship between criticality and the ability to detect and discriminate sensory input. A line of theoretical work predicts maximal sensory discrimination as a functional benefit of criticality, which can be characterized using the mutual information between sensory input (the visual stimulus) and neural response. The primary finding of our experiments in the visual cortex of turtles and of our neuronal network modeling confirms this theoretical prediction: sensory discrimination is maximized when visual cortex operates near criticality. In addition to presenting this primary finding in detail, this thesis also addresses our preliminary results on change-point detection in experimentally measured cortical dynamics.
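    The stimulus-response mutual information used as the discrimination measure can be sketched with a simple plug-in (histogram) estimator over discrete stimulus and response labels; this is a generic estimator, not the thesis's actual analysis code:

```python
import numpy as np

def mutual_information(stimuli, responses):
    """Estimate I(S; R) in bits from paired discrete samples.

    `stimuli` and `responses` are 1-D integer arrays of equal length.
    Plug-in estimator: build the joint histogram, then sum
    p(s,r) * log2( p(s,r) / (p(s) p(r)) ) over non-empty cells.
    """
    s_vals, s_idx = np.unique(stimuli, return_inverse=True)
    r_vals, r_idx = np.unique(responses, return_inverse=True)
    joint = np.zeros((len(s_vals), len(r_vals)))
    for i, j in zip(s_idx, r_idx):
        joint[i, j] += 1
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)   # marginal over stimuli
    pr = joint.sum(axis=0, keepdims=True)   # marginal over responses
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))
```

    A perfectly reliable binary response carries 1 bit about a binary stimulus; an unrelated response carries 0 bits, and the prediction above is that this quantity peaks near the critical regime.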

  7. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    PubMed Central

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  8. Invariant recognition drives neural representations of action sequences

    PubMed Central

    Poggio, Tomaso

    2017-01-01

    Recognizing the actions of others from visual stimuli is a crucial aspect of human perception that allows individuals to respond to social cues. Humans are able to discriminate between similar actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding action recognition at the neural level have not always translated into precise accounts of the computational principles underlying what representations of action sequences are constructed by human visual cortex. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, Convolutional Neural Networks (CNNs), that achieve human level performance in complex discriminative tasks. Within this class, architectures that better support invariant object recognition also produce image representations that better match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations of actions remains unknown. Here we show that spatiotemporal CNNs accurately categorize video stimuli into action classes, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed in the brain. These results broaden the scope of the invariant recognition framework for understanding visual intelligence from perception of inanimate objects and faces in static images to the study of human perception of action sequences. PMID:29253864

  9. Spatial and Feature-Based Attention in a Layered Cortical Microcircuit Model

    PubMed Central

    Wagatsuma, Nobuhiko; Potjans, Tobias C.; Diesmann, Markus; Sakai, Ko; Fukai, Tomoki

    2013-01-01

    Directing attention to the spatial location or the distinguishing feature of a visual object modulates neuronal responses in the visual cortex and the stimulus discriminability of subjects. However, the spatial and feature-based modes of attention differently influence visual processing by changing the tuning properties of neurons. Intriguingly, neurons' tuning curves are modulated similarly across different visual areas under both these modes of attention. Here, we explored the mechanism underlying the effects of these two modes of visual attention on the orientation selectivity of visual cortical neurons. To do this, we developed a layered microcircuit model. This model describes multiple orientation-specific microcircuits sharing their receptive fields and consisting of layers 2/3, 4, 5, and 6. These microcircuits represent a functional grouping of cortical neurons and mutually interact via lateral inhibition and excitatory connections between groups with similar selectivity. The individual microcircuits receive bottom-up visual stimuli and top-down attention in different layers. A crucial assumption of the model is that feature-based attention activates orientation-specific microcircuits for the relevant feature selectively, whereas spatial attention activates all microcircuits homogeneously, irrespective of their orientation selectivity. Consequently, our model simultaneously accounts for the multiplicative scaling of neuronal responses in spatial attention and the additive modulations of orientation tuning curves in feature-based attention, which have been observed widely in various visual cortical areas. Simulations of the model predict contrasting differences between excitatory and inhibitory neurons in the two modes of attentional modulations. Furthermore, the model replicates the modulation of the psychophysical discriminability of visual stimuli in the presence of external noise. 
Our layered model with a biologically suggested laminar structure describes the basic circuit mechanism underlying the attention-mode specific modulations of neuronal responses and visual perception. PMID:24324628
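    The two attentional modulations the model accounts for can be written down directly; a toy illustration with an arbitrary Gaussian orientation tuning curve (the gain and offset values are made up):

```python
import numpy as np

theta = np.linspace(-90, 90, 181)              # orientation (degrees)
tuning = np.exp(-theta**2 / (2 * 20.0**2))     # baseline tuning curve

spatial = 1.3 * tuning   # spatial attention: multiplicative response scaling
feature = tuning + 0.2   # feature-based attention: additive tuning-curve shift
```

    Multiplicative gain preserves tuning width and raises the peak; the additive offset raises the whole curve, flanks included, which changes relative selectivity.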

  10. Facial expressions as a model to test the role of the sensorimotor system in the visual perception of the actions.

    PubMed

    Mele, Sonia; Ghirardi, Valentina; Craighero, Laila

    2017-12-01

    A long-term debate concerns whether the sensorimotor coding carried out during observation of transitive actions reflects the low-level movement implementation details or the movement goals. In contrast, phonemes and emotional facial expressions are intransitive actions that do not fall into this debate. The investigation of phoneme discrimination has proven to be a good model for demonstrating that the sensorimotor system plays a role in understanding acoustically presented actions. In the present study, we adapted the experimental paradigms already used in phoneme discrimination during face posture manipulation to the discrimination of emotional facial expressions. We submitted participants to a lower or an upper face posture manipulation during the execution of a four-alternative labelling task of pictures randomly taken from four morphed continua between two emotional facial expressions. The results showed that the implementation of low-level movement details influences the discrimination of ambiguous facial expressions that differ in the specific involvement of those movement details. These findings indicate that facial expression discrimination is a good model for testing the role of the sensorimotor system in the perception of visually presented actions.

  11. A Neural Marker of Medical Visual Expertise: Implications for Training

    ERIC Educational Resources Information Center

    Rourke, Liam; Cruikshank, Leanna C.; Shapke, Larissa; Singhal, Anthony

    2016-01-01

    Researchers have identified a component of the EEG that discriminates visual experts from novices. The marker indexes a comprehensive model of visual processing, and if it is apparent in physicians, it could be used to investigate the development and training of their visual expertise. The purpose of this study was to determine whether a neural…

  12. Color discrimination errors associate with axial motor impairments in Parkinson's disease.

    PubMed

    Bohnen, Nicolaas I; Haugen, Jacob; Ridder, Andrew; Kotagal, Vikas; Albin, Roger L; Frey, Kirk A; Müller, Martijn L T M

    2017-01-01

    Visual function deficits are more common in imbalance-predominant than in tremor-predominant PD, suggesting a pathophysiological role of impaired visual functions in axial motor impairments. Our aim was to investigate the relationship between changes in color discrimination and motor impairments in PD while accounting for cognitive and other confounding factors. PD subjects (n=49, age 66.7±8.3 years; Hoehn & Yahr stage 2.6±0.6) completed color discrimination assessment using the Farnsworth-Munsell 100 Hue Color Vision Test, neuropsychological and motor assessments, and [11C]dihydrotetrabenazine vesicular monoamine transporter type 2 PET imaging. MDS-UPDRS sub-scores for cardinal motor features were computed. Timed Up and Go mobility and walking tests were assessed in 48 subjects. Bivariate correlation coefficients between color discrimination and motor variables were significant only for the Timed Up and Go (RS=0.44, P=0.0018) and the MDS-UPDRS axial motor scores (RS=0.38, P=0.0068). Multiple regression confounder analysis using the Timed Up and Go as outcome parameter showed a significant total model (F(5,43)=7.3, P<0.0001) with significant regressor effects for color discrimination (standardized β=0.32, t=2.6, P=0.012), global cognitive Z-score (β=-0.33, t=-2.5, P=0.018), and duration of disease (β=0.26, t=1.8, P=0.038), but not for age or striatal dopaminergic binding. The color discrimination test was also a significant independent regressor in the MDS-UPDRS axial motor model (standardized β=0.29, t=2.4, P=0.022; total model F(5,43)=6.4, P=0.0002). Color discrimination errors associate with axial motor features in PD independently of cognitive deficits, nigrostriatal dopaminergic denervation, and other confounding variables. These findings may reflect shared pathophysiology between color discrimination impairments and axial motor burden in PD.
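    The standardized betas reported above come from ordinary least squares on z-scored variables; a generic sketch on synthetic data (the variable names and numbers are invented stand-ins, not the study's data):

```python
import numpy as np

def standardized_betas(X, y):
    """OLS on z-scored predictors and outcome; returns the standardized
    regression coefficients (beta weights), dropping the intercept."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    design = np.column_stack([np.ones(len(yz)), Xz])
    coef, *_ = np.linalg.lstsq(design, yz, rcond=None)
    return coef[1:]

rng = np.random.default_rng(0)
color_errors = rng.normal(size=200)   # hypothetical FM-100 error scores
cognition = rng.normal(size=200)      # hypothetical global cognitive Z
# synthetic Timed Up and Go times driven by both predictors plus noise
tug = 0.4 * color_errors - 0.3 * cognition + rng.normal(scale=0.5, size=200)
betas = standardized_betas(np.column_stack([color_errors, cognition]), tug)
```

    With this construction the recovered betas are positive for color errors and negative for cognition, mirroring the sign pattern in the abstract.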

  13. Feature and Region Selection for Visual Learning.

    PubMed

    Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando

    2016-03-01

    Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoWs) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ(2) and intersection kernel; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube illustrate the benefits of our approach.
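    The central idea of assigning latent weights to features and optimizing them jointly with the classifier can be caricatured in a few lines. The paper optimizes SVMs with a reduced gradient method; the sketch below substitutes plain gradient descent on a logistic loss, so it is a conceptual analogy only:

```python
import numpy as np

def train_weighted_bow(X, y, steps=500, lr=0.5):
    """Jointly learn a linear classifier `w` and a nonnegative latent
    weight `m` per bag-of-words feature, by gradient descent on a
    logistic loss over the reweighted features (X * m)."""
    n, d = X.shape
    w = np.zeros(d)      # classifier weights
    m = np.ones(d)       # latent feature-importance weights
    for _ in range(steps):
        z = (X * m) @ w
        p = 1.0 / (1.0 + np.exp(-z))
        g = p - y                                  # dLoss/dz
        w -= lr * ((X * m).T @ g) / n
        m = np.maximum(0.0, m - lr * (w * (X.T @ g)) / n)
    return w, m

# toy data: only feature 0 is informative about the class label
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 60)
X = rng.normal(0, 1, (60, 5))
X[:, 0] = 2.0 * y - 1.0 + 0.1 * rng.normal(size=60)
w, m = train_weighted_bow(X, y)
preds = ((X * m) @ w > 0).astype(int)
```

    Inspecting `m` after training then plays the role of the paper's visualization of which visual words matter.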

  14. Action Recognition and Movement Direction Discrimination Tasks Are Associated with Different Adaptation Patterns

    PubMed Central

    de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.

    2016-01-01

    The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly, previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target visual processes specific to action and direction discrimination. In separate conditions, participants visually adapted to forward- and backward-moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action, but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633

  15. Monkey pulvinar neurons fire differentially to snake postures.

    PubMed

    Le, Quan Van; Isbell, Lynne A; Matsumoto, Jumpei; Le, Van Quang; Hori, Etsuro; Tran, Anh Hai; Maior, Rafael S; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao

    2014-01-01

    There is growing evidence from both behavioral and neurophysiological approaches that primates are able to rapidly discriminate visually between snakes and innocuous stimuli. Recent behavioral evidence suggests that primates are also able to discriminate the level of threat posed by snakes, by responding more intensely to a snake model poised to strike than to snake models in coiled or sinusoidal postures (Etting and Isbell 2014). In the present study, we examine the potential for an underlying neurological basis for this ability. Previous research indicated that the pulvinar is highly sensitive to snake images. We thus recorded pulvinar neurons in Japanese macaques (Macaca fuscata) while they viewed photos of snakes in striking and non-striking postures in a delayed non-matching to sample (DNMS) task. Of 821 neurons recorded, 78 visually responsive neurons were tested with all of the snake images. We found that pulvinar neurons in the medial and dorsolateral pulvinar responded more strongly to snakes in threat displays poised to strike than to snakes in non-threat-displaying postures, with no significant difference in response latencies. A multidimensional scaling analysis of the 78 visually responsive neurons indicated that threat-displaying and non-threat-displaying snakes were separated into two different clusters in the first epoch of 50 ms after stimulus onset, suggesting bottom-up visual information processing. These results indicate that pulvinar neurons in primates discriminate snakes poised to strike from those in non-threat-displaying postures. This neuronal ability likely facilitates behavioral discrimination and has clear adaptive value. Our results are thus consistent with the Snake Detection Theory, which posits that snakes were instrumental in the evolution of primate visual systems.

  16. Association of impaired facial affect recognition with basic facial and visual processing deficits in schizophrenia.

    PubMed

    Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue

    2009-06-15

    Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.

  17. Convergent-Discriminant Validity of the Jewish Employment Vocational System (JEVS).

    ERIC Educational Resources Information Center

    Tryjankowski, Elaine M.

    This study investigated the construct validity of five perceptual traits (auditory discrimination, visual discrimination, visual memory, visual-motor coordination, and auditory to visual-motor coordination) with five simulated work samples (union assembly, resistor reading, budgette assembly, lock assembly, and nail and screw sort) from the Jewish…

  18. Comparative psychophysics of bumblebee and honeybee colour discrimination and object detection.

    PubMed

    Dyer, Adrian G; Spaethe, Johannes; Prack, Sabina

    2008-07-01

    Bumblebee (Bombus terrestris) discrimination of targets with broadband reflectance spectra was tested using simultaneous viewing conditions, enabling an accurate determination of the perceptual limit of colour discrimination excluding confounds from memory coding (experiment 1). The level of colour discrimination in bumblebees, and honeybees (Apis mellifera) (based upon previous observations), exceeds predictions of models considering receptor noise in the honeybee. Bumblebee and honeybee photoreceptors are similar in spectral shape and spacing, but bumblebees exhibit significantly poorer colour discrimination in behavioural tests, suggesting possible differences in spatial or temporal signal processing. Detection of stimuli in a Y-maze was evaluated for bumblebees (experiment 2) and honeybees (experiment 3). Honeybees detected stimuli containing both green-receptor-contrast and colour contrast at a visual angle of approximately 5 degrees, whilst stimuli that contained only colour contrast were only detected at a visual angle of 15 degrees. Bumblebees were able to detect these stimuli at a visual angle of 2.3 degrees and 2.7 degrees, respectively. A comparison of the experiments suggests a tradeoff between colour discrimination and colour detection in these two species, limited by the need to pool colour signals to overcome receptor noise. We discuss the colour processing differences and possible adaptations to specific ecological habitats.
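    The receptor-noise predictions referred to above are typically computed with the Vorobyev-Osorio receptor-noise-limited model, which for a trichromat such as a bee reduces to a closed-form colour distance. A sketch (the quantum catches and noise values in the example are illustrative only):

```python
import math

def delta_s(qa, qb, noise):
    """Receptor-noise-limited colour distance for a trichromat
    (Vorobyev & Osorio 1998). `qa`, `qb`: receptor quantum catches of
    the two stimuli; `noise`: per-receptor noise (e1, e2, e3).
    Result is in just-noticeable differences (JNDs); values near 1
    sit at the discrimination limit."""
    df = [math.log(a / b) for a, b in zip(qa, qb)]  # receptor contrasts
    e1, e2, e3 = noise
    num = ((e3 * (df[0] - df[1]))**2
           + (e2 * (df[0] - df[2]))**2
           + (e1 * (df[1] - df[2]))**2)
    den = (e1 * e2)**2 + (e1 * e3)**2 + (e2 * e3)**2
    return math.sqrt(num / den)

jnd = delta_s((1.2, 1.0, 0.9), (1.0, 1.0, 1.0), noise=(0.1, 0.07, 0.12))
```

    Behavioural thresholds below the ΔS predicted from honeybee noise estimates would correspond to the "exceeds model predictions" finding in the abstract.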

  19. Dynamic and predictive links between touch and vision.

    PubMed

    Gray, Rob; Tan, Hong Z

    2002-07-01

    We investigated crossmodal links between vision and touch for moving objects. In experiment 1, observers discriminated visual targets presented randomly at one of five locations on their forearm. Tactile pulses simulating motion along the forearm preceded visual targets. At short tactile-visual ISIs, discriminations were more rapid when the final tactile pulse and visual target were at the same location. At longer ISIs, discriminations were more rapid when the visual target was offset in the motion direction and were slower for offsets opposite to the motion direction. In experiment 2, speeded tactile discriminations at one of three random locations on the forearm were preceded by a visually simulated approaching object. Discriminations were more rapid when the object approached the location of the tactile stimulation and discrimination performance was dependent on the approaching object's time to contact. These results demonstrate dynamic links in the spatial mapping between vision and touch.

  20. Qualitative Contrast between Knowledge-Limited Mixed-State and Variable-Resources Models of Visual Change Detection

    ERIC Educational Resources Information Center

    Nosofsky, Robert M.; Donkin, Chris

    2016-01-01

    We report an experiment designed to provide a qualitative contrast between knowledge-limited versions of mixed-state and variable-resources (VR) models of visual change detection. The key data pattern is that observers often respond "same" on big-change trials, while simultaneously being able to discriminate between same and small-change…

  1. Visual modelling suggests a weak relationship between the evolution of ultraviolet vision and plumage coloration in birds.

    PubMed

    Lind, O; Delhey, K

    2015-03-01

    Birds have sophisticated colour vision mediated by four cone types that cover a wide visual spectrum including ultraviolet (UV) wavelengths. Many birds have modest UV sensitivity provided by violet-sensitive (VS) cones with sensitivity maxima between 400 and 425 nm. However, some birds have evolved higher UV sensitivity and a larger visual spectrum given by UV-sensitive (UVS) cones maximally sensitive at 360-370 nm. The reasons for VS-UVS transitions and their relationship to visual ecology remain unclear. It has been hypothesized that the evolution of UVS-cone vision is linked to plumage colours so that visual sensitivity and feather coloration are 'matched'. This leads to the specific prediction that UVS-cone vision enhances the discrimination of plumage colours of UVS birds while such an advantage is absent or less pronounced for VS-bird coloration. We test this hypothesis using knowledge of the complex distribution of UVS cones among birds combined with mathematical modelling of colour discrimination during different viewing conditions. We find no support for the hypothesis, which, combined with previous studies, suggests only a weak relationship between UVS-cone vision and plumage colour evolution. Instead, we suggest that UVS-cone vision generally favours colour discrimination, which creates a nonspecific selection pressure for the evolution of UVS cones. © 2015 European Society For Evolutionary Biology.

  2. Study of blur discrimination for 3D stereo viewing

    NASA Astrophysics Data System (ADS)

    Subedar, Mahesh; Karam, Lina J.

    2014-03-01

    Blur is an important attribute in the study and modeling of the human visual system. Blur discrimination has been studied extensively using 2D test patterns. In this study, we present the details of subjective tests performed to measure blur discrimination thresholds using stereoscopic 3D test patterns. Specifically, the effect of disparity on blur discrimination thresholds is studied on a passive stereoscopic 3D display. The blur discrimination thresholds are measured using stereoscopic 3D test patterns with positive, negative, and zero disparity values, at multiple reference blur levels. A disparity value of zero represents the 2D viewing case, where both eyes observe the same image. The subjective test results indicate that the blur discrimination thresholds remain constant as the disparity value is varied. This indicates that binocular disparity does not affect blur discrimination thresholds, and that models developed for 2D blur discrimination thresholds can be extended to stereoscopic 3D. We also present a fit of the Weber model to the 3D blur discrimination thresholds measured in the subjective experiments.
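    A common linear form of the Weber model for blur discrimination is threshold = k * (B + B0), where B is the reference blur and B0 an internal-blur term, so fitting it is a one-line least-squares problem. The threshold numbers below are invented stand-ins, not the paper's data:

```python
import numpy as np

# Reference blur levels (e.g. Gaussian sigma, arcmin) and measured
# discrimination thresholds -- illustrative values only.
ref_blur = np.array([0.5, 1.0, 2.0, 4.0])
thresholds = np.array([0.31, 0.42, 0.63, 1.05])

# Weber model as a line in reference blur: threshold = k*B + c,
# with the internal-blur term recovered as B0 = c / k.
k, c = np.polyfit(ref_blur, thresholds, 1)
B0 = c / k
```

    The paper's finding that thresholds are unchanged by disparity means a single (k, B0) pair fitted on 2D data should also describe the 3D conditions.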

  3. VFMA: Topographic Analysis of Sensitivity Data From Full-Field Static Perimetry

    PubMed Central

    Weleber, Richard G.; Smith, Travis B.; Peters, Dawn; Chegarnov, Elvira N.; Gillespie, Scott P.; Francis, Peter J.; Gardiner, Stuart K.; Paetzold, Jens; Dietzsch, Janko; Schiefer, Ulrich; Johnson, Chris A.

    2015-01-01

    Purpose: To analyze static visual field sensitivity with topographic models of the hill of vision (HOV), and to characterize several visual function indices derived from the HOV volume. Methods: A software application, Visual Field Modeling and Analysis (VFMA), was developed for static perimetry data visualization and analysis. Three-dimensional HOV models were generated for 16 healthy subjects and 82 retinitis pigmentosa patients. Volumetric visual function indices, which are measures of quantity and comparable regardless of perimeter test pattern, were investigated. Cross-validation, reliability, and cross-sectional analyses were performed to assess this methodology and compare the volumetric indices to conventional mean sensitivity and mean deviation. Floor effects were evaluated by computer simulation. Results: Cross-validation yielded an overall R2 of 0.68 and index of agreement of 0.89, which were consistent among subject groups, indicating good accuracy. Volumetric and conventional indices were comparable in terms of test–retest variability and discriminability among subject groups. Simulated floor effects did not negatively impact the repeatability of any index, but large floor changes altered the discriminability for regional volumetric indices. Conclusions: VFMA is an effective tool for clinical and research analyses of static perimetry data. Topographic models of the HOV aid the visualization of field defects, and topographically derived indices quantify the magnitude and extent of visual field sensitivity. Translational Relevance: VFMA assists with the interpretation of visual field data from any perimetric device and any test location pattern. Topographic models and volumetric indices are suitable for diagnosis, monitoring of field loss, patient counseling, and endpoints in therapeutic trials. PMID:25938002
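    A volumetric index of the kind described reduces to integrating sensitivity over visual-field area; a toy sketch with a synthetic Gaussian hill of vision sampled on a regular grid (all numbers are illustrative, and VFMA itself interpolates arbitrary perimetry test patterns):

```python
import numpy as np

# Sensitivities (dB) sampled on a 1-degree grid of field locations.
x = np.linspace(-30, 30, 61)    # horizontal eccentricity (degrees)
y = np.linspace(-30, 30, 61)    # vertical eccentricity (degrees)
xx, yy = np.meshgrid(x, y)
hill = 35.0 * np.exp(-(xx**2 + yy**2) / (2 * 15.0**2))  # toy hill of vision

# Volume under the hill (dB * deg^2): Riemann sum over the grid cells.
cell_area = (x[1] - x[0]) * (y[1] - y[0])
volume = hill.sum() * cell_area
```

    Because the volume is a quantity over area rather than a mean over test points, it is comparable across perimeter test patterns, which is the property the abstract emphasizes.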

  4. Stereoscopic processing of crossed and uncrossed disparities in the human visual cortex.

    PubMed

    Li, Yuan; Zhang, Chuncheng; Hou, Chunping; Yao, Li; Zhang, Jiacai; Long, Zhiying

    2017-12-21

    Binocular disparity provides a powerful cue for depth perception in a stereoscopic environment. Despite increasing knowledge of the cortical areas that process disparity from neuroimaging studies, the neural mechanism underlying disparity-sign processing [crossed disparity (CD)/uncrossed disparity (UD)] is still poorly understood. In the present study, functional magnetic resonance imaging (fMRI) was used to explore neural features relevant to disparity-sign processing. We performed an fMRI experiment on 27 right-handed healthy human volunteers using both general linear model (GLM) and multi-voxel pattern analysis (MVPA) methods. First, GLM was used to determine the cortical areas that displayed different responses to different disparity signs. Second, MVPA was used to determine how the cortical areas discriminate different disparity signs. The GLM analysis indicated that shapes with UD induced significantly stronger activity in the sub-region (LO) of the lateral occipital cortex (LOC) than those with CD. The region-of-interest MVPA indicated that areas V3d and V3A displayed higher accuracy in the discrimination of crossed and uncrossed disparities than LOC. The searchlight-based MVPA indicated that the dorsal visual cortex showed significantly higher prediction accuracy than the ventral visual cortex, and that the sub-region LO of LOC showed high accuracy in the discrimination of crossed and uncrossed disparities. These results may suggest that the dorsal visual areas are more discriminative of disparity sign than the ventral visual areas, even though they did not show overall response differences between disparity signs. Moreover, LO in the ventral visual cortex is relevant to the recognition of shapes with different disparity signs and is itself discriminative of disparity sign.
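    The ROI-based MVPA step can be approximated by any cross-validated decoder over voxel patterns; below is a minimal leave-one-out nearest-centroid stand-in for the classifier (run on synthetic data, not fMRI):

```python
import numpy as np

def loo_decoding_accuracy(patterns, labels):
    """Leave-one-out nearest-centroid decoding of a binary condition
    (e.g. crossed vs. uncrossed disparity) from multi-voxel patterns.
    `patterns`: (n_trials, n_voxels); `labels`: array of 0/1."""
    patterns = np.asarray(patterns, dtype=float)
    labels = np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        train = np.ones(len(labels), dtype=bool)
        train[i] = False                       # hold out trial i
        centroids = [patterns[train & (labels == c)].mean(axis=0)
                     for c in (0, 1)]
        dists = [np.linalg.norm(patterns[i] - c) for c in centroids]
        correct += int(np.argmin(dists) == labels[i])
    return correct / len(labels)

# synthetic "voxel" patterns: two well-separated conditions
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (10, 20)), rng.normal(3, 1, (10, 20))])
labels = np.array([0] * 10 + [1] * 10)
acc = loo_decoding_accuracy(X, labels)
```

    Above-chance cross-validated accuracy in an ROI is the evidence that the region's pattern, not just its mean response, carries disparity-sign information.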

  5. Visual perception and appraisal of persons with impairments: a randomised controlled field experiment using photo elicitation.

    PubMed

    Reinhardt, Jan Dietrich; Ballert, Carolina Saskia; Fellinghauer, Bernd; Lötscher, Alexander; Gradinger, Felix; Hilfiker, Roger; Graf, Sibylle; Stucki, Gerold

    2011-01-01

    Visual cues from persons with impairments may trigger stereotypical generalisations that lead to prejudice and discrimination. The main objective of this pilot study was to examine whether visual stimuli of impairment activate latent prejudice against disability and whether this connection can be counteracted with priming strategies. In a field experiment, participants were asked to rate photographs showing models with mental impairments, wheelchair users with paraplegia, and persons without any visible impairment. Participants appraised the models with regard to several features (e.g. communicativeness, intelligence). One hundred participants rated 12 photo models, yielding a total of 1183 observations. One group of participants was primed with a cover story introducing visual perception of impairment as the study's gist, while controls received neutral information. Photo models with mental impairments were rated lowest and models without visible impairment highest. In participants who did not have prior contact with persons with impairments, priming led to a levelling of scores between models with and without impairment. Prior contact with persons with impairments produced effects similar to those of the priming. Unexpectedly, a pattern of converse double discrimination to the disadvantage of men with mental impairments was revealed. Signs of stereotypical processing of visual cues of impairment were found in participants from the Swiss general population. Personal contact with persons with impairments, as well as priming, seems to reduce stereotyping.

  6. Monkey Pulvinar Neurons Fire Differentially to Snake Postures

    PubMed Central

    Le, Quan Van; Isbell, Lynne A.; Matsumoto, Jumpei; Le, Van Quang; Hori, Etsuro; Tran, Anh Hai; Maior, Rafael S.; Tomaz, Carlos; Ono, Taketoshi; Nishijo, Hisao

    2014-01-01

    There is growing evidence from both behavioral and neurophysiological approaches that primates are able to rapidly discriminate visually between snakes and innocuous stimuli. Recent behavioral evidence suggests that primates are also able to discriminate the level of threat posed by snakes, responding more intensely to a snake model poised to strike than to snake models in coiled or sinusoidal postures (Etting and Isbell 2014). In the present study, we examine the potential for an underlying neurological basis for this ability. Previous research indicated that the pulvinar is highly sensitive to snake images. We thus recorded pulvinar neurons in Japanese macaques (Macaca fuscata) while they viewed photos of snakes in striking and non-striking postures in a delayed non-matching-to-sample (DNMS) task. Of 821 neurons recorded, 78 visually responsive neurons were tested with all the snake images. We found that pulvinar neurons in the medial and dorsolateral pulvinar responded more strongly to snakes in threat displays poised to strike than to snakes in non-threat-displaying postures, with no significant difference in response latencies. A multidimensional scaling analysis of the 78 visually responsive neurons indicated that threat-displaying and non-threat-displaying snakes were separated into two different clusters in the first epoch of 50 ms after stimulus onset, suggesting bottom-up visual information processing. These results indicate that pulvinar neurons in primates discriminate snakes poised to strike from those in non-threat-displaying postures. This neuronal ability likely facilitates behavioral discrimination and has clear adaptive value. Our results are thus consistent with the Snake Detection Theory, which posits that snakes were instrumental in the evolution of primate visual systems. PMID:25479158

  7. Intact Visual Discrimination of Complex and Feature-Ambiguous Stimuli in the Absence of Perirhinal Cortex

    ERIC Educational Resources Information Center

    Squire, Larry R.; Levy, Daniel A.; Shrager, Yael

    2005-01-01

    The perirhinal cortex is known to be important for memory, but there has recently been interest in the possibility that it might also be involved in visual perceptual functions. In four experiments, we assessed visual discrimination ability and visual discrimination learning in severely amnesic patients with large medial temporal lobe lesions that…

  8. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    ERIC Educational Resources Information Center

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  9. Prestimulus alpha-band power biases visual discrimination confidence, but not accuracy.

    PubMed

    Samaha, Jason; Iemi, Luca; Postle, Bradley R

    2017-09-01

    The magnitude of power in the alpha band (8-13 Hz) of the electroencephalogram (EEG) prior to the onset of a near-threshold visual stimulus predicts performance. Together with other findings, this has been interpreted as evidence that alpha-band dynamics reflect cortical excitability. We reasoned, however, that non-specific changes in excitability would be expected to influence signal and noise in the same way, leaving actual discriminability unchanged. Indeed, using a two-choice orientation discrimination task, we found that discrimination accuracy was unaffected by fluctuations in prestimulus alpha power. Decision confidence, on the other hand, was strongly negatively correlated with prestimulus alpha power. This finding constitutes a clear dissociation between objective and subjective measures of visual perception as a function of prestimulus cortical excitability. This dissociation is predicted by a model in which the balance of evidence supporting each choice drives objective performance but only the magnitude of evidence supporting the selected choice drives subjective reports, suggesting that human perceptual confidence can be suboptimal with respect to tracking objective accuracy. Copyright © 2017 Elsevier Inc. All rights reserved.
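    The confidence model described in this abstract can be illustrated with a small simulation (our own sketch under the stated assumptions, not the authors' code): a non-specific excitability boost g adds equally to the evidence for both choices, so the balance of evidence, and hence accuracy, is unchanged, while the evidence for the selected choice, and hence confidence, grows with g.

```python
# Two-accumulator sketch: accuracy depends on the *balance* of evidence,
# confidence only on the evidence for the chosen option. A non-specific
# excitability boost g (our stand-in for low prestimulus alpha) adds
# equally to both accumulators.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
signal = 0.5                      # evidence favoring the correct choice

def run(g):
    e_correct = signal + rng.normal(0, 1, n) + g
    e_wrong = rng.normal(0, 1, n) + g
    accuracy = np.mean(e_correct > e_wrong)               # balance of evidence
    confidence = np.mean(np.maximum(e_correct, e_wrong))  # chosen evidence
    return accuracy, confidence

for g in (0.0, 1.0):              # low vs high excitability
    acc, conf = run(g)
    print(f"g={g}: accuracy={acc:.3f} confidence={conf:.2f}")
```

    Accuracy is essentially identical at both excitability levels, while mean confidence rises by about the size of the boost, reproducing the dissociation the abstract reports.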

  10. Examination of the Relation between an Assessment of Skills and Performance on Auditory-Visual Conditional Discriminations for Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Kodak, Tiffany; Clements, Andrea; Paden, Amber R.; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A.

    2015-01-01

    The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The…

  11. Visual discrimination training improves Humphrey perimetry in chronic cortically induced blindness.

    PubMed

    Cavanaugh, Matthew R; Huxlin, Krystel R

    2017-05-09

    To assess if visual discrimination training improves performance on visual perimetry tests in chronic stroke patients with visual cortex involvement. 24-2 and 10-2 Humphrey visual fields were analyzed for 17 chronic cortically blind stroke patients prior to and following visual discrimination training, as well as in 5 untrained, cortically blind controls. Trained patients practiced direction discrimination, orientation discrimination, or both, at nonoverlapping, blind field locations. All pretraining and posttraining discrimination performance and Humphrey fields were collected with online eye tracking, ensuring gaze-contingent stimulus presentation. Trained patients recovered ∼108 degrees² of vision on average, while untrained patients spontaneously improved over an area of ∼16 degrees². Improvement was not affected by patient age, time since lesion, size of initial deficit, or training type, but was proportional to the amount of training performed. Untrained patients counterbalanced their improvements with worsening of sensitivity over ∼9 degrees² of their visual field. Worsening was minimal in trained patients. Finally, although discrimination performance improved at all trained locations, changes in Humphrey sensitivity occurred both within trained regions and beyond, extending over a larger area along the blind field border. In adults with chronic cortical visual impairment, the blind field border appears to have enhanced plastic potential, which can be recruited by gaze-controlled visual discrimination training to expand the visible field. Our findings underscore a critical need for future studies to measure the effects of vision restoration approaches on perimetry in larger cohorts of patients. Copyright © 2017 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the American Academy of Neurology.

  12. Neural and Behavioral Correlates of Attentional Inhibition Training and Perceptual Discrimination Training in a Visual Flanker Task

    PubMed Central

    Melara, Robert D.; Singh, Shalini; Hien, Denise A.

    2018-01-01

    Two groups of healthy young adults were exposed to 3 weeks of cognitive training in a modified version of the visual flanker task, one group trained to discriminate the target (discrimination training) and the other group to ignore the flankers (inhibition training). Inhibition training, but not discrimination training, led to significant reductions in both Garner interference, indicating improved selective attention, and in Stroop interference, indicating more efficient resolution of stimulus conflict. The behavioral gains from training were greatest in participants who showed the poorest selective attention at pretest. Electrophysiological recordings revealed that inhibition training increased the magnitude of Rejection Positivity (RP) to incongruent distractors, an event-related potential (ERP) component associated with inhibitory control. Source modeling of RP uncovered a dipole in the medial frontal gyrus for those participants receiving inhibition training, but in the cingulate gyrus for those participants receiving discrimination training. Results suggest that inhibitory control is plastic; inhibition training improves conflict resolution, particularly in individuals with poor attention skills. PMID:29875644

  13. Neural and Behavioral Correlates of Attentional Inhibition Training and Perceptual Discrimination Training in a Visual Flanker Task.

    PubMed

    Melara, Robert D; Singh, Shalini; Hien, Denise A

    2018-01-01

    Two groups of healthy young adults were exposed to 3 weeks of cognitive training in a modified version of the visual flanker task, one group trained to discriminate the target (discrimination training) and the other group to ignore the flankers (inhibition training). Inhibition training, but not discrimination training, led to significant reductions in both Garner interference, indicating improved selective attention, and in Stroop interference, indicating more efficient resolution of stimulus conflict. The behavioral gains from training were greatest in participants who showed the poorest selective attention at pretest. Electrophysiological recordings revealed that inhibition training increased the magnitude of Rejection Positivity (RP) to incongruent distractors, an event-related potential (ERP) component associated with inhibitory control. Source modeling of RP uncovered a dipole in the medial frontal gyrus for those participants receiving inhibition training, but in the cingulate gyrus for those participants receiving discrimination training. Results suggest that inhibitory control is plastic; inhibition training improves conflict resolution, particularly in individuals with poor attention skills.

  14. Investigation of Neural Strategies of Visual Search

    NASA Technical Reports Server (NTRS)

    Krauzlis, Richard J.

    2003-01-01

    The goal of this project was to measure how neurons in the superior colliculus (SC) change their activity during a visual search task. Specifically, we proposed to measure how the activity of these neurons was altered by the discriminability of visual targets and to test how these changes might predict changes in the subjects' performance. The primary rationale for this study was that understanding how the information encoded by these neurons constrains overall search performance would foster the development of better models of human performance. Work performed during the period supported by this grant has achieved these aims. First, we recorded from neurons in the superior colliculus (SC) during a visual search task in which the difficulty of the task and the performance of the subject were systematically varied. The results from these single-neuron physiology experiments show that prior to eye movement onset, the difference in activity across the ensemble of neurons reaches a fixed threshold value, reflecting the operation of a winner-take-all mechanism. Second, we developed a model of eye movement decisions based on the winner-take-all principle. The model incorporates the idea that the overt saccade choice reflects only one of the multiple saccades prepared during visual discrimination, consistent with our physiological data. The value of the model is that, unlike previous models, it is able to account for both the latency and the percent correct of saccade choices.
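    The winner-take-all account in this abstract can be sketched as a race among noisy accumulators (a minimal hypothetical formalization of ours, not the authors' model): each movement option accumulates noisy evidence, and a saccade is triggered when the difference between the leading and trailing accumulators first reaches a fixed threshold, yielding both a choice and a latency.

```python
# Winner-take-all race sketch: the saccade fires when the activity
# difference across the ensemble reaches a fixed threshold.
import random

def race(drifts, threshold=5.0, noise=1.0, dt=1.0, max_steps=10_000, seed=0):
    """Return (chosen option index, latency in steps)."""
    rng = random.Random(seed)
    acts = [0.0] * len(drifts)
    for step in range(1, max_steps + 1):
        for i, mu in enumerate(drifts):
            acts[i] += mu * dt + rng.gauss(0, noise)
        if max(acts) - min(acts) >= threshold:   # winner-take-all criterion
            return acts.index(max(acts)), step
    return None, max_steps

choice, latency = race([0.3, 0.1])   # option 0 carries stronger evidence
print(choice, latency)
```

    Raising target discriminability (a larger drift difference) makes the correct option win more often and sooner, so the same mechanism accounts for both percent correct and latency, as the abstract claims for the full model.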

  15. Robust visual tracking via multiple discriminative models with object proposals

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanqiang; Bi, Duyan; Zha, Yufei; Li, Huanyu; Ku, Tao; Wu, Min; Ding, Wenshan; Fan, Zunlin

    2018-04-01

    Model drift is an important cause of tracking failure. In this paper, multiple discriminative models with object proposals are used to improve model discrimination and relieve this problem. First, changes in target location and scale are captured by many high-quality object proposals, which are represented by deep convolutional features to encode target semantics. Then, by sharing a feature map obtained from a pre-trained network, ROI pooling is used to warp object proposals of various sizes into vectors of the same length, from which a discriminative model can be learned conveniently. Lastly, these historical snapshot vectors are used to train models with different lifetimes. Based on an entropy decision mechanism, a model degraded by drift can be corrected by selecting the best discriminative model, which significantly improves the robustness of the tracker. We extensively evaluate our tracker on two popular benchmarks, the OTB 2013 benchmark and the UAV20L benchmark. On both benchmarks, our tracker achieves the best precision and success rate compared with state-of-the-art trackers.
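    The entropy decision mechanism mentioned above can be illustrated as follows (a sketch under our own assumptions; the function names and the softmax normalization are ours, not from the paper): each candidate model scores the same set of object proposals, and the model whose normalized score distribution has the lowest entropy, i.e. the most peaked and confident one, is selected.

```python
# Entropy-based selection among multiple discriminative models:
# softmax-normalize each model's proposal scores, then pick the model
# with the lowest entropy (most peaked score distribution).
import math

def entropy(scores):
    """Shannon entropy of the softmax-normalized score vector."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    return -sum(p * math.log(p) for p in probs)

def select_model(model_scores):
    """model_scores: one list of scores per model, over the same proposals."""
    return min(range(len(model_scores)), key=lambda i: entropy(model_scores[i]))

# A drifted model scores all proposals alike; a well-fit one is peaked.
print(select_model([[0.1, 0.1, 0.1], [3.0, 0.1, 0.1]]))  # → 1
```

    A model that has drifted tends to score many proposals similarly (high entropy), so this criterion naturally prefers snapshots trained before the drift occurred.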

  16. Intrinsic, stimulus-driven and task-dependent connectivity in human auditory cortex.

    PubMed

    Häkkinen, Suvi; Rinne, Teemu

    2018-06-01

    A hierarchical and modular organization is a central hypothesis in the current primate model of auditory cortex (AC) but lacks validation in humans. Here we investigated whether fMRI connectivity at rest and during active tasks is informative of the functional organization of human AC. Identical pitch-varying sounds were presented during a visual discrimination task (i.e. no directed auditory attention), a pitch discrimination task, and two versions of a pitch n-back memory task. Analysis based on fMRI connectivity at rest revealed a network structure consisting of six modules in the supratemporal plane (STP), temporal lobe, and inferior parietal lobule (IPL) in both hemispheres. In line with the primate model, in which higher-order regions have more longer-range connections than primary regions, areas encircling the STP module showed the highest inter-modular connectivity. Multivariate pattern analysis indicated significant connectivity differences between the visual task and rest (driven by the presentation of sounds during the visual task), between auditory and visual tasks, and between pitch discrimination and pitch n-back tasks. Further analyses showed that these differences were particularly due to connectivity modulations between the STP and IPL modules. While the results are generally in line with the primate model, they highlight the important role of human IPL during the processing of both task-irrelevant and task-relevant auditory information. Importantly, the present study shows that fMRI connectivity at rest, during presentation of sounds, and during active listening provides novel information about the functional organization of human AC.

  17. Enhanced Perceptual Functioning in Autism: An Update, and Eight Principles of Autistic Perception

    ERIC Educational Resources Information Center

    Mottron, Laurent; Dawson, Michelle; Soulieres, Isabelle; Hubert, Benedicte; Burack, Jake

    2006-01-01

    We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception…

  18. Adolescent fluoxetine exposure produces enduring, sex-specific alterations of visual discrimination and attention in rats.

    PubMed

    LaRoche, Ronee B; Morgan, Russell E

    2007-01-01

    Over the past two decades the use of selective serotonin reuptake inhibitors (SSRIs) to treat behavioral disorders in children has grown rapidly, despite little evidence regarding the safety and efficacy of these drugs in children. Utilizing a rat model, this study investigated whether post-weaning exposure to a prototype SSRI, fluoxetine (FLX), influenced performance on visual tasks designed to measure discrimination learning, sustained attention, inhibitory control, and reaction time. Additionally, sex differences in response to varying doses of fluoxetine were examined. In Experiment 1, female rats were administered fluoxetine (10 mg/kg, P.O.) or vehicle (apple juice) from PND 25 through PND 49. After a 14-day washout period, subjects were trained to perform a simultaneous visual discrimination task. Subjects were then tested for 20 sessions on a visual attention task that consisted of varied stimulus delays (0, 3, 6, or 9 s) and cue durations (200, 400, or 700 ms). In Experiment 2, both male and female Long-Evans rats (24 F, 24 M) were administered fluoxetine (0, 5, 10, or 15 mg/kg) and then tested in the same visual tasks used in Experiment 1, with the addition of open-field and elevated plus-maze testing. Few FLX-related differences were seen in the visual discrimination, open-field, or plus-maze tasks. However, results from the visual attention task indicated a dose-dependent reduction in the performance of fluoxetine-treated males, whereas fluoxetine-treated females tended to improve over baseline. These findings indicate that enduring, behaviorally relevant alterations of the CNS can occur following pharmacological manipulation of the serotonin system during postnatal development.

  19. Using Prosopagnosia to Test and Modify Visual Recognition Theory.

    PubMed

    O'Brien, Alexander M

    2018-02-01

    Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) that people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing the object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions, that is, categorical object differences, in visual discrimination tasks, whereas the control participants could additionally use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli across eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.

  20. Stimulus discriminability in visual search.

    PubMed

    Verghese, P; Nakayama, K

    1994-09-01

    We measured the probability of detecting the target in a visual search task, as a function of the following parameters: the discriminability of the target from the distractors, the duration of the display, and the number of elements in the display. We examined the relation between these parameters at criterion performance (80% correct) to determine if the parameters traded off according to the predictions of a limited capacity model. For the three dimensions that we studied, orientation, color, and spatial frequency, the observed relationship between the parameters deviates significantly from a limited capacity model. The data relating discriminability to display duration are better than predicted over the entire range of orientation and color differences that we examined, and are consistent with the prediction for only a limited range of spatial frequency differences--from 12 to 23%. The relation between discriminability and number varies considerably across the three dimensions and is better than the limited capacity prediction for two of the three dimensions that we studied. Orientation discrimination shows a strong number effect, color discrimination shows almost no effect, and spatial frequency discrimination shows an intermediate effect. The different trading relationships in each dimension are more consistent with early filtering in that dimension, than with a common limited capacity stage. Our results indicate that higher-level processes that group elements together also play a strong role. Our experiments provide little support for limited capacity mechanisms over the range of stimulus differences that we examined in three different dimensions.

  1. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    PubMed

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σᵢ d′ᵢ², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
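    The sample-size model's invariance prediction can be checked numerically (a minimal sketch assuming that an item's d′ grows with the square root of the samples allocated to it, which is the standard reading of the model): however the fixed pool of samples is divided among items, the sum of squared sensitivities stays constant.

```python
# Sample-size model sketch: K display items share N noisy samples, and
# per-item sensitivity scales as d' ∝ sqrt(samples), so Σ d'^2 is
# invariant across display sizes and across allocations.
import math

def sensitivities(total_samples, weights, unit_dprime=1.0):
    """d' for each item given its (possibly unequal) share of the pool."""
    shares = [w / sum(weights) for w in weights]
    return [unit_dprime * math.sqrt(total_samples * s) for s in shares]

N = 16
for k in (1, 2, 4, 8):                         # display sizes, equal shares
    d = sensitivities(N, [1] * k)
    print(k, round(sum(x * x for x in d), 6))  # sum of d'^2 stays at 16.0
```

    The attention-weighted version corresponds to unequal weights, e.g. `sensitivities(N, [3, 1, 1, 1])`: the attended item's d′ rises at the expense of the others, while the sum of squared sensitivities is unchanged.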

  2. Semi-Supervised Tensor-Based Graph Embedding Learning and Its Application to Visual Discriminant Tracking.

    PubMed

    Hu, Weiming; Gao, Jin; Xing, Junliang; Zhang, Chao; Maybank, Stephen

    2017-01-01

    An appearance model adaptable to changes in object appearance is critical in visual object tracking. In this paper, we treat an image patch as a second-order tensor, which preserves the original image structure. We design two graphs for characterizing the intrinsic local geometrical structure of the tensor samples of the object and the background. Graph embedding is used to reduce the dimensions of the tensors while preserving the structure of the graphs. Then, a discriminant embedding space is constructed. We prove two propositions for finding the transformation matrices which are used to map the original tensor samples to the tensor-based graph embedding space. In order to encode more discriminant information in the embedding space, we propose a transfer-learning-based semi-supervised strategy to iteratively adjust the embedding space, into which discriminative information obtained from earlier times is transferred. We apply the proposed semi-supervised tensor-based graph embedding learning algorithm to visual tracking. The new tracking algorithm captures an object's appearance characteristics during tracking and uses a particle filter to estimate the optimal object state. Experimental results on the CVPR 2013 benchmark dataset demonstrate the effectiveness of the proposed tracking algorithm.

  3. Additional Remarks on Designing Category-Level Attributes for Discriminative Visual Recognition

    DTIC Science & Technology

    2013-01-01

    This DTIC report (2013), by Felix X. Yu (Columbia University), Liangliang Cao, Rogerio S. Feris, John R. Smith (IBM), and Shih-Fu Chang (Columbia University), presents additional remarks on Designing Category-Level Attributes for Discriminative Visual Recognition [3], beginning with an overview of the proposed approach.

  4. Preschoolers Benefit From Visually Salient Speech Cues

    PubMed Central

    Holt, Rachael Frush

    2015-01-01

    Purpose This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. It also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336

  5. Simultaneous Visual Discrimination in Asian Elephants

    ERIC Educational Resources Information Center

    Nissani, Moti; Hoefler-Nissani, Donna; Lay, U. Tin; Htun, U. Wan

    2005-01-01

    Two experiments explored the behavior of 20 Asian elephants ("Elephas maximus") in simultaneous visual discrimination tasks. In Experiment 1, 7 Burmese logging elephants acquired a white+/black- discrimination, reaching criterion in a mean of 2.6 sessions and 117 discrete trials, whereas 4 elephants acquired a black+/white- discrimination in 5.3…

  6. Visual speech discrimination and identification of natural and synthetic consonant stimuli

    PubMed Central

    Files, Benjamin T.; Tjan, Bosco S.; Jiang, Jintao; Bernstein, Lynne E.

    2015-01-01

    From phonetic features to connected discourse, every level of psycholinguistic structure, including prosody, can be perceived through viewing the talking face. Yet a longstanding notion in the literature is that visual speech perceptual categories comprise groups of phonemes (referred to as visemes), such as /p, b, m/ and /f, v/, whose internal structure is not informative to the visual speech perceiver. This conclusion has not, to our knowledge, been evaluated using a psychophysical discrimination paradigm. We hypothesized that perceivers can discriminate the phonemes within typical viseme groups, and that discrimination measured with d-prime (d') and response latency is related to visual stimulus dissimilarities between consonant segments. In Experiment 1, participants performed speeded discrimination for pairs of consonant-vowel spoken nonsense syllables that were predicted to be same, near, or far in their perceptual distances, and that were presented as natural or synthesized video. Near pairs were within-viseme consonants. Natural within-viseme stimulus pairs were discriminated significantly above chance (except for /k/-/h/). Sensitivity (d') increased and response times decreased with distance. Discrimination and identification were superior with natural stimuli, which comprised more phonetic information. We suggest that the notion of the viseme as a unitary perceptual category is incorrect. Experiment 2 probed the perceptual basis for visual speech discrimination by inverting the stimuli. Overall reductions in d' with inverted stimuli, but a persistent pattern of larger d' for far than for near stimulus pairs, are interpreted as evidence that visual speech is represented by both its motion and configural attributes. The methods and results of this investigation open up avenues for understanding the neural and perceptual bases for visual and audiovisual speech perception and for development of practical applications such as visual lipreading/speechreading and speech synthesis. PMID:26217249
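    The sensitivity measure used in this kind of discrimination experiment can be computed from hit and false-alarm rates in the standard signal-detection way (a generic sketch, not the authors' exact analysis pipeline):

```python
# d' for a discrimination task: the difference of z-transformed hit and
# false-alarm rates. An optional 1/(2n) correction handles rates of 0 or 1.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n=None):
    """d' = z(H) - z(F); n = trials per condition, for the edge correction."""
    if n:
        lo, hi = 1 / (2 * n), 1 - 1 / (2 * n)
        hit_rate = min(max(hit_rate, lo), hi)
        fa_rate = min(max(fa_rate, lo), hi)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

print(round(d_prime(0.84, 0.16), 2))   # ≈ 1.99
```

    Chance discrimination (equal hit and false-alarm rates) gives d′ = 0, so "discriminated significantly above chance" corresponds to d′ reliably greater than zero.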

  7. Individual differences in attention strategies during detection, fine discrimination, and coarse discrimination

    PubMed Central

    Hecker, Elizabeth A.; Serences, John T.; Srinivasan, Ramesh

    2013-01-01

    Interacting with the environment requires the ability to flexibly direct attention to relevant features. We examined the degree to which individuals attend to visual features within and across Detection, Fine Discrimination, and Coarse Discrimination tasks. Electroencephalographic (EEG) responses were measured to an unattended peripheral flickering (4 or 6 Hz) grating while individuals (n = 33) attended to orientations that were offset by 0°, 10°, 20°, 30°, 40°, and 90° from the orientation of the unattended flicker. These unattended responses may be sensitive to attentional gain at the attended spatial location, since attention to features enhances early visual responses throughout the visual field. We found no significant differences in tuning curves across the three tasks in part due to individual differences in strategies. We sought to characterize individual attention strategies using hierarchical Bayesian modeling, which grouped individuals into families of curves that reflect attention to the physical target orientation (“on-channel”) or away from the target orientation (“off-channel”) or a uniform distribution of attention. The different curves were related to behavioral performance; individuals with “on-channel” curves had lower thresholds than individuals with uniform curves. Individuals with “off-channel” curves during Fine Discrimination additionally had lower thresholds than those assigned to uniform curves, highlighting the perceptual benefits of attending away from the physical target orientation during fine discriminations. Finally, we showed that a subset of individuals with optimal curves (“on-channel”) during Detection also demonstrated optimal curves (“off-channel”) during Fine Discrimination, indicating that a subset of individuals can modulate tuning optimally for detection and discrimination. PMID:23678013

  8. A strategy to optimize CT pediatric dose with a visual discrimination model

    NASA Astrophysics Data System (ADS)

    Gutierrez, Daniel; Gudinchet, François; Alamo-Maestre, Leonor T.; Bochud, François O.; Verdun, Francis R.

    2008-03-01

    Technological developments in computed tomography (CT) have led to a drastic increase in its clinical utilization, creating concerns about patient exposure. To better control dose to patients, we propose a methodology for finding an objective compromise between dose and image quality by means of a visual discrimination model. A GE LightSpeed-Ultra scanner was used to perform the acquisitions. A QRM 3D low contrast resolution phantom (QRM - Germany) was scanned using CTDI vol values in the range of 1.7 to 103 mGy. Raw data obtained with the highest CTDI vol were afterwards processed to simulate dose reductions by white noise addition. The realism of the simulated noise was verified by comparing the shape and amplitude of normalized noise power spectra (NNPS) and standard deviation measurements. Patient images were acquired using the Diagnostic Reference Levels (DRL) proposed in Switzerland. Dose reduction was then simulated by noise addition, as for the QRM phantom, to obtain five different CTDI vol levels, down to 3.0 mGy. Image quality of phantom images was assessed with the Sarnoff JNDmetrix visual discrimination model and compared to an assessment made by means of the ROC methodology, taken as a reference. For patient images a similar approach was taken, but using the Visual Grading Analysis (VGA) method as a reference. A relationship between Sarnoff JNDmetrix and ROC results was established for low contrast detection in phantom images, demonstrating that the Sarnoff JNDmetrix can be used for qualification of images with highly correlated noise. Patient image qualification showed a threshold of conspicuity loss only for children over 35 kg.
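The noise-addition step above rests on the idea that CT image noise variance scales roughly inversely with CTDIvol, so a lower-dose scan can be simulated by adding white noise to a high-dose image. A minimal sketch, assuming that idealized inverse scaling (the study worked on raw projection data, which this does not model):

```python
import math
import random

def added_noise_sigma(sigma0, ctdi0, ctdi_target):
    """Standard deviation of white noise to add to a high-dose image
    so its noise level matches a lower-dose acquisition, assuming
    noise variance scales as 1/CTDIvol (an idealized assumption)."""
    sigma_target = sigma0 * math.sqrt(ctdi0 / ctdi_target)
    return math.sqrt(sigma_target ** 2 - sigma0 ** 2)

def simulate_low_dose(pixels, sigma0, ctdi0, ctdi_target, seed=0):
    """Add zero-mean Gaussian noise to a flat list of pixel values."""
    rng = random.Random(seed)
    s_add = added_noise_sigma(sigma0, ctdi0, ctdi_target)
    return [p + rng.gauss(0.0, s_add) for p in pixels]

# e.g. simulating 3.0 mGy from the 103 mGy acquisition in the study,
# with a hypothetical original noise level of 5 HU
s_add = added_noise_sigma(5.0, 103.0, 3.0)
```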

  9. Object recognition with hierarchical discriminant saliency networks.

    PubMed

    Han, Sunhyoung; Vasconcelos, Nuno

    2014-01-01

    The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. As a model of neural computation, the HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a convolutional neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors, and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. The performance of the network in saliency and object recognition tasks is compared to those of models from the biological and computer vision literatures. This demonstrates benefits for all the functional enhancements of the HDSN, the class tuning inherent to discriminant saliency, and saliency layers based on templates of increasing target selectivity and invariance. Altogether, these experiments suggest that there are non-trivial benefits in integrating attention and recognition.

  10. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking

    PubMed Central

    Peel, Hayden J.; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A.

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features. PMID:29725292

  11. Perceptual Discrimination of Basic Object Features Is Not Facilitated When Priming Stimuli Are Prevented From Reaching Awareness by Means of Visual Masking.

    PubMed

    Peel, Hayden J; Sperandio, Irene; Laycock, Robin; Chouinard, Philippe A

    2018-01-01

    Our understanding of how form, orientation and size are processed within and outside of awareness is limited and requires further investigation. Therefore, we investigated whether or not the visual discrimination of basic object features can be influenced by subliminal processing of stimuli presented beforehand. Visual masking was used to render stimuli perceptually invisible. Three experiments examined if visible and invisible primes could facilitate the subsequent feature discrimination of visible targets. The experiments differed in the kind of perceptual discrimination that participants had to make. Namely, participants were asked to discriminate visual stimuli on the basis of their form, orientation, or size. In all three experiments, we demonstrated reliable priming effects when the primes were visible but not when the primes were made invisible. Our findings underscore the importance of conscious awareness in facilitating the perceptual discrimination of basic object features.

  12. Examination of the relation between an assessment of skills and performance on auditory-visual conditional discriminations for children with autism spectrum disorder.

    PubMed

    Kodak, Tiffany; Clements, Andrea; Paden, Amber R; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A

    2015-01-01

    The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The results of the skills assessment showed that 4 participants failed to demonstrate mastery of at least 1 of the skills. We compared the outcomes of the assessment to the results of auditory-visual conditional discrimination training and found that training outcomes were related to the assessment outcomes for 7 of the 9 participants. One participant who did not demonstrate mastery of all assessment skills subsequently learned several conditional discriminations when blocked training trials were conducted. Another participant who did not demonstrate mastery of the auditory discrimination skill subsequently acquired conditional discriminations in 1 of the training conditions. We discuss the implications of the assessment for practice and suggest additional areas of research on this topic. © Society for the Experimental Analysis of Behavior.

  13. Visually induced gains in pitch discrimination: Linking audio-visual processing with auditory abilities.

    PubMed

    Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter

    2018-05-01

    Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states how the multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli that are well established at behavioral as well as neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues to pitch discrimination follow the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
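The sensitivity index d' used above is the standard signal-detection measure: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch using only the standard library; the 1/(2N) correction for extreme rates is a common convention, not necessarily the one used in the study:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n_signal=None, n_noise=None):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    Rates of exactly 0 or 1 would make the inverse normal CDF
    diverge, so when trial counts are given they are nudged with
    the common 1/(2N) correction (an illustrative choice).
    """
    def clamp(p, n):
        if n is not None:
            lo, hi = 1 / (2 * n), 1 - 1 / (2 * n)
            p = min(max(p, lo), hi)
        return p

    z = NormalDist().inv_cdf
    return z(clamp(hit_rate, n_signal)) - z(clamp(fa_rate, n_noise))
```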

  14. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
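Of the distortion metrics compared above, PSNR is the simplest: a log-scaled ratio of the maximum pixel value to the mean squared error between reference and compressed images. A minimal sketch for flattened 8-bit images (the JND metrics from the Sarnoff VDM are proprietary and far more involved, so only PSNR is shown):

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    pixel sequences (e.g. flattened 8-bit grayscale images)."""
    if len(reference) != len(test):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# a uniform error of 10 gray levels gives a PSNR near 28 dB
quality = psnr([100] * 4, [110] * 4)
```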

  15. The time-course of activation in the dorsal and ventral visual streams during landmark cueing and perceptual discrimination tasks.

    PubMed

    Lambert, Anthony J; Wootton, Adrienne

    2017-08-01

    Different patterns of high-density EEG activity were elicited by the same peripheral stimuli in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parietal-occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task compared to the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171-270 ms after stimulus onset), increased temporal-occipital negativity and stronger recruitment of the ITG and FFG were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams, which support rapid shifts of attention in response to contextual landmarks and conscious perceptual discrimination, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Predictive Coding in Area V4: Dynamic Shape Discrimination under Partial Occlusion

    PubMed Central

    Choi, Hannah; Pasupathy, Anitha; Shea-Brown, Eric

    2018-01-01

    The primate visual system has an exquisite ability to discriminate partially occluded shapes. Recent electrophysiological recordings suggest that response dynamics in intermediate visual cortical area V4, shaped by feedback from prefrontal cortex (PFC), may play a key role. To probe the algorithms that may underlie these findings, we build and test a model of V4 and PFC interactions based on a hierarchical predictive coding framework. We propose that probabilistic inference occurs in two steps. Initially, V4 responses are driven solely by bottom-up sensory input and are thus strongly influenced by the level of occlusion. After a delay, V4 responses combine both feedforward input and feedback signals from the PFC; the latter reflect predictions made by PFC about the visual stimulus underlying V4 activity. We find that this model captures key features of V4 and PFC dynamics observed in experiments. Specifically, PFC responses are strongest for occluded stimuli and delayed responses in V4 are less sensitive to occlusion, supporting our hypothesis that the feedback signals from PFC underlie robust discrimination of occluded shapes. Thus, our study proposes that area V4 and PFC participate in hierarchical inference, with feedback signals encoding top-down predictions about occluded shapes. PMID:29566355
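The two-step inference proposed above can be caricatured as a response that is purely feedforward before some delay and then mixes feedforward drive with the PFC's top-down prediction. This is a toy sketch only; the delay, mixing weight, and scalar "responses" are illustrative stand-ins for the paper's hierarchical predictive coding model:

```python
def v4_response(feedforward, pfc_prediction, t, delay=3, w=0.7):
    """Toy two-phase V4 response.

    Before `delay` time steps, the response is driven solely by the
    bottom-up (feedforward) input, so occlusion weakens it. After
    the delay, the response is a weighted mix of feedforward drive
    and the PFC's prediction of the underlying shape (weights are
    illustrative, not fitted to data)."""
    if t < delay:
        return feedforward
    return (1 - w) * feedforward + w * pfc_prediction

# An occluded shape: weak feedforward drive (0.3), but a strong PFC
# prediction of the underlying shape (0.9). The delayed response is
# pulled toward the prediction, making it less sensitive to occlusion.
early = v4_response(feedforward=0.3, pfc_prediction=0.9, t=1)
late = v4_response(feedforward=0.3, pfc_prediction=0.9, t=5)
```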

  17. Spatial vision in older adults: perceptual changes and neural bases.

    PubMed

    McKendrick, Allison M; Chan, Yu Man; Nguyen, Bao N

    2018-05-17

    The number of older adults is rapidly increasing internationally, leading to a significant increase in research on how healthy ageing impacts vision. Most clinical assessments of spatial vision involve simple detection (letter acuity, grating contrast sensitivity, perimetry). However, most natural visual environments are more spatially complicated, requiring contrast discrimination, and the delineation of object boundaries and contours, which are typically present on non-uniform backgrounds. In this review we discuss recent research that reports on the effects of normal ageing on these more complex visual functions, specifically in the context of recent neurophysiological studies. Recent research has concentrated on understanding the effects of healthy ageing on neural responses within the visual pathway in animal models. Such neurophysiological research has led to numerous, subsequently tested, hypotheses regarding the likely impact of healthy human ageing on specific aspects of spatial vision. Healthy normal ageing impacts significantly on spatial visual information processing from the retina through to visual cortex. Some human data validates that obtained from studies of animal physiology, however some findings indicate that rethinking of presumed neural substrates is required. Notably, not all spatial visual processes are altered by age. Healthy normal ageing impacts significantly on some spatial visual processes (in particular centre-surround tasks), but leaves contrast discrimination, contrast adaptation, and orientation discrimination relatively intact. The study of older adult vision contributes to knowledge of the brain mechanisms altered by the ageing process, can provide practical information regarding visual environments that older adults may find challenging, and may lead to new methods of assessing visual performance in clinical environments. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.

  18. Training haptic stiffness discrimination: time course of learning with or without visual information and knowledge of results.

    PubMed

    Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria

    2013-08-01

    In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors: the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks the visual field is occluded, forcing stiffness perception to rely exclusively on haptic information. No studies to date had addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during the training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy improved slightly, but decision time performance deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinctive phases: a single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in a VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgical procedures, when the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.

  19. EXEL; Experience for Children in Learning. Parent-Directed Activities to Develop: Oral Expression, Visual Discrimination, Auditory Discrimination, Motor Coordination.

    ERIC Educational Resources Information Center

    Behrmann, Polly; Millman, Joan

    The activities collected in this handbook are planned for parents to use with their children in a learning experience. They can also be used in the classroom. Sections contain games designed to develop visual discrimination, auditory discrimination, motor coordination and oral expression. An objective is given for each game, and directions for…

  20. Colour thresholds in a coral reef fish

    PubMed Central

    Vorobyev, M.; Marshall, N. J.

    2016-01-01

    Coral reef fishes are among the most colourful animals in the world. Given the diversity of lifestyles and habitats on the reef, it is probable that in many instances coloration is a compromise between crypsis and communication. However, human observation of this coloration is biased by our primate visual system. Most animals have visual systems that are ‘tuned’ differently to humans, optimized for different parts of the visible spectrum. To understand reef fish colours, we need to reconstruct the appearance of colourful patterns and backgrounds as they are seen through the eyes of fish. Here, the coral reef associated triggerfish, Rhinecanthus aculeatus, was tested behaviourally to determine the limits of its colour vision. This is the first demonstration of behavioural colour discrimination thresholds in a coral reef species and is a critical step in our understanding of communication and speciation in this vibrant colourful habitat. Fish were trained to discriminate between a reward colour stimulus and a series of non-reward colour stimuli, and the discrimination thresholds were found to correspond well with predictions based on the receptor noise limited visual model and the anatomy of the eye. The colour discrimination abilities of both reef fish and a variety of other animals can therefore now be predicted using the parameters described here. PMID:27703704
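The receptor noise limited (RNL) model mentioned above predicts the chromatic distance between two stimuli, in just-noticeable-difference units, from the log ratios of receptor quantum catches and per-channel noise. A sketch of the trichromatic form of the Vorobyev-Osorio equation; the convention of scaling noise by the most abundant cone class is one common choice, and all parameter values in the example are hypothetical:

```python
import math

def rnl_distance(qa, qb, weber, abundance):
    """Chromatic distance (in JNDs) between stimuli A and B under the
    receptor-noise-limited model, trichromatic case.

    qa, qb: quantum catches of the three cone classes for each stimulus.
    weber: Weber fraction of the most abundant cone class.
    abundance: relative abundances of the three cone classes.
    """
    # receptor contrasts (log quantum-catch ratios)
    df = [math.log(a / b) for a, b in zip(qa, qb)]
    # per-channel noise, scaled so the most abundant class has e = weber
    n_ref = max(abundance)
    e = [weber * math.sqrt(n_ref / n) for n in abundance]
    num = ((e[0] * (df[2] - df[1])) ** 2
           + (e[1] * (df[2] - df[0])) ** 2
           + (e[2] * (df[1] - df[0])) ** 2)
    den = (e[0] * e[1]) ** 2 + (e[0] * e[2]) ** 2 + (e[1] * e[2]) ** 2
    return math.sqrt(num / den)
```

A distance of 1 JND is conventionally taken as the discrimination threshold, which is how model predictions are compared against behavioural thresholds like those measured here.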

  1. Short-term visual deprivation reduces interference effects of task-irrelevant facial expressions on affective prosody judgments

    PubMed Central

    Fengler, Ineke; Nava, Elena; Röder, Brigitte

    2015-01-01

    Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4 week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks unrelated to the later-tested abilities (i.e., tactile tasks). The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination or in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which may persist over longer durations. PMID:25954166

  2. Short-term visual deprivation, tactile acuity, and haptic solid shape discrimination.

    PubMed

    Crabtree, Charles E; Norman, J Farley

    2014-01-01

    Previous psychophysical studies have reported conflicting results concerning the effects of short-term visual deprivation upon tactile acuity. Some studies have found that 45 to 90 minutes of total light deprivation produce significant improvements in participants' tactile acuity as measured with a grating orientation discrimination task. In contrast, a single 2011 study found no such improvement while attempting to replicate these earlier findings. A primary goal of the current experiment was to resolve this discrepancy in the literature by evaluating the effects of a 90-minute period of total light deprivation upon tactile grating orientation discrimination. We also evaluated the potential effect of short-term deprivation upon haptic 3-D shape discrimination using a set of naturally-shaped solid objects. According to previous research, short-term deprivation enhances performance in a tactile 2-D shape discrimination task, so a similar improvement might also occur for haptic 3-D shape discrimination. The results of the current investigation demonstrate that short-term visual deprivation not only fails to enhance tactile acuity but also has no effect upon haptic 3-D shape discrimination. While visual deprivation had no effect in our study, there was a significant effect of experience and learning for the grating orientation task: the participants' tactile acuity improved over time, independent of whether they had, or had not, experienced visual deprivation.

  3. Touchscreen learning deficits in Ube3a, Ts65Dn and Mecp2 mouse models of neurodevelopmental disorders with intellectual disabilities.

    PubMed

    Leach, P T; Crawley, J N

    2017-12-20

    Mutant mouse models of neurodevelopmental disorders with intellectual disabilities provide useful translational research tools, especially in cases where robust cognitive deficits are reproducibly detected. However, motor, sensory and/or health issues consequent to the mutation may introduce artifacts that preclude testing in some standard cognitive assays. Touchscreen learning and memory tasks in small operant chambers have the potential to circumvent these confounds. Here we use touchscreen visual discrimination learning to evaluate performance in the maternally derived Ube3a mouse model of Angelman syndrome, the Ts65Dn trisomy mouse model of Down syndrome, and the Mecp2 Bird mouse model of Rett syndrome. Significant deficits in acquisition of a 2-choice visual discrimination task were detected in both Ube3a and Ts65Dn mice. Procedural control measures showed no genotype differences during pretraining phases or during acquisition. Mecp2 males did not survive long enough for touchscreen training, consistent with previous reports. Most Mecp2 females failed on pretraining criteria. Significant impairments on Morris water maze spatial learning were detected in both Ube3a and Ts65Dn, replicating previous findings. Abnormalities on rotarod in Ube3a, and on open field in Ts65Dn, replicating previous findings, may have contributed to the observed acquisition deficits and swim speed abnormalities during water maze performance. In contrast, these motor phenotypes do not appear to have affected touchscreen procedural abilities during pretraining or visual discrimination training. Our findings of slower touchscreen learning in 2 mouse models of neurodevelopmental disorders with intellectual disabilities indicate that operant tasks offer promising outcome measures for the preclinical discovery of effective pharmacological therapeutics. © 2017 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.

  4. Increased conspicuousness can explain the match between visual sensitivities and blue plumage colours in fairy-wrens.

    PubMed

    Delhey, Kaspar; Hall, Michelle; Kingma, Sjouke A; Peters, Anne

    2013-01-07

    Colour signals are expected to match visual sensitivities of intended receivers. In birds, evolutionary shifts from violet-sensitive (V-type) to ultraviolet-sensitive (U-type) vision have been linked to increased prevalence of colours rich in shortwave reflectance (ultraviolet/blue), presumably due to better perception of such colours by U-type vision. Here we provide the first test of this widespread idea using fairy-wrens and allies (Family Maluridae) as a model, a family where shifts in visual sensitivities from V- to U-type eyes are associated with male nuptial plumage rich in ultraviolet/blue colours. Using psychophysical visual models, we compared the performance of both types of visual systems at two tasks: (i) detecting contrast between male plumage colours and natural backgrounds, and (ii) perceiving intraspecific chromatic variation in male plumage. While U-type outperforms V-type vision at both tasks, the crucial test here is whether U-type vision performs better at detecting and discriminating ultraviolet/blue colours when compared with other colours. This was true for detecting contrast between plumage colours and natural backgrounds (i), but not for discriminating intraspecific variability (ii). Our data indicate that selection to maximize conspicuousness to conspecifics may have led to the correlation between ultraviolet/blue colours and U-type vision in this clade of birds.

  5. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos.

    PubMed

    Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach has reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method based on spatiotemporal saliency and discriminative online learning is proposed to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions and is extracted from the frame difference using the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features are used to compute the feature uniqueness and spatial compactness saliency measurements that yield spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize the saliency detection and distribute its processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods.
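The temporal-saliency step described above (frame differencing followed by Sauvola local adaptive thresholding) can be sketched as follows. This is a brute-force pure-Python illustration, not the paper's parallel implementation; the Sauvola parameters `k` and `R` are conventional defaults for 8-bit images, not values from the study:

```python
import math

def sauvola_threshold(mean, std, k=0.5, R=128.0):
    """Sauvola local threshold: T = m * (1 + k * (s/R - 1))."""
    return mean * (1.0 + k * (std / R - 1.0))

def temporal_saliency(prev, curr, win=1, k=0.5, R=128.0):
    """Binary temporal-saliency mask from two grayscale frames
    (lists of rows): absolute frame difference, then Sauvola local
    adaptive thresholding over a (2*win+1)^2 neighbourhood."""
    h, w = len(curr), len(curr[0])
    diff = [[abs(curr[y][x] - prev[y][x]) for x in range(w)] for y in range(h)]
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # local mean and standard deviation of the difference image
            vals = [diff[j][i]
                    for j in range(max(0, y - win), min(h, y + win + 1))
                    for i in range(max(0, x - win), min(w, x + win + 1))]
            m = sum(vals) / len(vals)
            s = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
            mask[y][x] = 1 if diff[y][x] > sauvola_threshold(m, s, k, R) else 0
    return mask

# A single bright pixel moving between frames is flagged as salient.
prev = [[0.0] * 4 for _ in range(4)]
curr = [row[:] for row in prev]
curr[1][1] = 200.0
mask = temporal_saliency(prev, curr)
```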

  6. A parallel spatiotemporal saliency and discriminative online learning method for visual target tracking in aerial videos

    PubMed Central

    2018-01-01

    Visual tracking in aerial videos is a challenging task in computer vision and remote sensing due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these methods, the spatiotemporal saliency detection approach has reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method based on spatiotemporal saliency and discriminative online learning is proposed to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions and is extracted from the frame difference using the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent the target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features are used to compute the feature uniqueness and spatial compactness saliency measurements that yield spatial saliency. This is a time-consuming process, which prompted the development of a parallel algorithm to optimize the saliency detection and distribute its processes across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrated that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods. PMID:29438421

  7. A dual-task investigation of automaticity in visual word processing

    NASA Technical Reports Server (NTRS)

    McCann, R. S.; Remington, R. W.; Van Selst, M.

    2000-01-01

    An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.

  8. Discriminating Power of Localized Three-Dimensional Facial Morphology

    PubMed Central

    Hammond, Peter; Hutton, Tim J.; Allanson, Judith E.; Buxton, Bernard; Campbell, Linda E.; Clayton-Smith, Jill; Donnai, Dian; Karmiloff-Smith, Annette; Metcalfe, Kay; Murphy, Kieran C.; Patton, Michael; Pober, Barbara; Prescott, Katrina; Scambler, Pete; Shaw, Adam; Smith, Ann C. M.; Stevens, Angela F.; Temple, I. Karen; Hennekam, Raoul; Tassabehji, May

    2005-01-01

    Many genetic syndromes involve a facial gestalt that suggests a preliminary diagnosis to an experienced clinical geneticist even before a clinical examination and genotyping are undertaken. Previously, using visualization and pattern recognition, we showed that dense surface models (DSMs) of full face shape characterize facial dysmorphology in Noonan and in 22q11 deletion syndromes. In this much larger study of 696 individuals, we extend the use of DSMs of the full face to establish accurate discrimination between controls and individuals with Williams, Smith-Magenis, 22q11 deletion, or Noonan syndromes and between individuals with different syndromes in these groups. However, the full power of the DSM approach is demonstrated by the comparable discriminating abilities of localized facial features, such as periorbital, perinasal, and perioral patches, and the correlation of DSM-based predictions and molecular findings. This study demonstrates the potential of face shape models to assist clinical training through visualization, to support clinical diagnosis of affected individuals through pattern recognition, and to enable the objective comparison of individuals sharing other phenotypic or genotypic properties. PMID:16380911

  9. A Comparison of the Effects of Depth Rotation on Visual and Haptic Three-Dimensional Object Recognition

    ERIC Educational Resources Information Center

    Lawson, Rebecca

    2009-01-01

    A sequential matching task was used to compare how the difficulty of shape discrimination influences the achievement of object constancy for depth rotations across haptic and visual object recognition. Stimuli were nameable, 3-dimensional plastic models of familiar objects (e.g., bed, chair) and morphs midway between these endpoint shapes (e.g., a…

  10. Associative visual learning by tethered bees in a controlled visual environment.

    PubMed

    Buatois, Alexis; Pichot, Cécile; Schultheiss, Patrick; Sandoz, Jean-Christophe; Lazzari, Claudio R; Chittka, Lars; Avarguès-Weber, Aurore; Giurfa, Martin

    2017-10-10

    Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed to establish a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Bees walking freely in a miniature Y-maze that displayed these stimuli in a dark environment learned the visual discrimination efficiently when one stimulus (CS+) was paired with sucrose and the other (CS-) with quinine solution. Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS- after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS- also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.

  11. Neural discriminability in rat lateral extrastriate cortex and deep but not superficial primary visual cortex correlates with shape discriminability.

    PubMed

    Vermaercke, Ben; Van den Bergh, Gert; Gerich, Florian; Op de Beeck, Hans

    2015-01-01

    Recent studies have revealed a surprising degree of functional specialization in rodent visual cortex. It is unknown to what degree this functional organization is related to the well-known hierarchical organization of the visual system in primates. We designed a study in rats that targets one of the hallmarks of the hierarchical object vision pathway in primates: selectivity for behaviorally relevant dimensions. We compared behavioral performance in a visual water maze with neural discriminability in five visual cortical areas. We tested behavioral discrimination in two independent batches of six rats using six pairs of shapes used previously to probe shape selectivity in monkey cortex (Lehky and Sereno, 2007). The relative difficulty (error rate) of shape pairs was strongly correlated between the two batches, indicating that some shape pairs were more difficult to discriminate than others. Then, we recorded in naive rats from five visual areas from primary visual cortex (V1) over areas LM, LI, LL, up to lateral occipito-temporal cortex (TO). Shape selectivity in the upper layers of V1, where the information enters cortex, correlated mostly with physical stimulus dissimilarity and not with behavioral performance. In contrast, neural discriminability in lower layers of all areas was strongly correlated with behavioral performance. These findings, in combination with the results from Vermaercke et al. (2014b), suggest that the functional specialization in rodent lateral visual cortex reflects a processing hierarchy resulting in the emergence of complex selectivity that is related to behaviorally relevant stimulus differences.

  12. Visual adaptation provides objective electrophysiological evidence of facial identity discrimination.

    PubMed

    Retter, Talia L; Rossion, Bruno

    2016-07-01

    Discrimination of facial identities is a fundamental function of the human brain that is challenging to examine with macroscopic measurements of neural activity, such as those obtained with functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). Although visual adaptation or repetition suppression (RS) stimulation paradigms have been successfully implemented to this end with such recording techniques, objective evidence of an identity-specific discrimination response due to adaptation at the level of the visual representation is lacking. Here, we addressed this issue with fast periodic visual stimulation (FPVS) and EEG recording combined with a symmetry/asymmetry adaptation paradigm. Adaptation to one facial identity is induced through repeated presentation of that identity at a rate of 6 images per second (6 Hz) over 10 sec. Subsequently, this identity is presented in alternation with another facial identity (i.e., its anti-face, both faces being equidistant from an average face), producing an identity repetition rate of 3 Hz over a 20 sec testing sequence. A clear EEG response at 3 Hz is observed over the right occipito-temporal (ROT) cortex, indexing discrimination between the two facial identities in the absence of an explicit behavioral discrimination measure. This face identity discrimination occurs immediately after adaptation and disappears rapidly within 20 sec. Importantly, this 3 Hz response is not observed in a control condition without the single-identity 10 sec adaptation period. These results indicate that visual adaptation to a given facial identity produces an objective (i.e., at a pre-defined stimulation frequency) electrophysiological index of visual discrimination between that identity and another, and provides a unique behavior-free quantification of the effect of visual adaptation. Copyright © 2016 Elsevier Ltd. All rights reserved.
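The frequency-tagged readout described in this record (a discrimination response appearing at a pre-defined 3 Hz stimulation frequency) can be illustrated with a toy frequency-domain analysis. This is a generic sketch of frequency tagging, not the authors' EEG pipeline; the sampling rate, the duration, and the signal-to-noise definition (target-bin amplitude divided by the mean of surrounding bins) are illustrative assumptions.

```python
import numpy as np

def tagged_response(signal, fs, target_hz, n_neighbors=10):
    """Amplitude at the tagged frequency and its SNR, defined here as the
    target-bin amplitude divided by the mean amplitude of surrounding bins
    (skipping the bins immediately adjacent to the target)."""
    n = len(signal)
    amplitude = np.abs(np.fft.rfft(signal)) / n   # A/2 for a sine of amplitude A at an exact bin
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = int(np.argmin(np.abs(freqs - target_hz)))
    neighbors = np.r_[idx - n_neighbors:idx - 1, idx + 2:idx + n_neighbors + 1]
    snr = amplitude[idx] / amplitude[neighbors].mean()
    return amplitude[idx], snr
```

With a 20 s recording at 250 Hz, a 3 Hz component falls exactly on FFT bin 60, so there is no spectral leakage and the tagged response stands out sharply against the neighboring noise bins.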

  13. Educational Materials Development in Primary Science: Insect Identification Kit

    ERIC Educational Resources Information Center

    Franks, Frank L.; Huff, Roger

    1976-01-01

    A study was conducted to evaluate the effectiveness of three-dimensional plastic models in teaching 71 visually handicapped students (in grades 1-3) to discriminate major body parts of insects, spiders, and earthworms. (SBH)

  14. Pyramid algorithms as models of human cognition

    NASA Astrophysics Data System (ADS)

    Pizlo, Zygmunt; Li, Zheng

    2003-06-01

    There is a growing body of experimental evidence showing that human perception and cognition involve mechanisms that can be adequately modeled by pyramid algorithms. The main aspect of these mechanisms is hierarchical clustering of information: visual images, spatial relations, and states as well as transformations of a problem. In this paper we review prior psychophysical and simulation results on visual size transformation, size discrimination, speed-accuracy tradeoff, figure-ground segregation, and the traveling salesman problem. We also present our new results on graph search and on the 15-puzzle.

  15. Dynamic functional brain networks involved in simple visual discrimination learning.

    PubMed

    Fidalgo, Camino; Conejo, Nélida María; González-Pardo, Héctor; Arias, Jorge Luis

    2014-10-01

    Visual discrimination tasks have been widely used to evaluate many types of learning and memory processes. However, little is known about the brain regions involved at different stages of visual discrimination learning. We used cytochrome c oxidase histochemistry to evaluate changes in regional brain oxidative metabolism during visual discrimination learning in a water-T maze at different time points during training. As compared with control groups, the results of the present study reveal the gradual activation of cortical (prefrontal and temporal cortices) and subcortical brain regions (including the striatum and the hippocampus) associated with the mastery of a simple visual discrimination task. On the other hand, the brain regions involved and their functional interactions changed progressively over days of training. Regions associated with novelty, emotion, visuo-spatial orientation and motor aspects of the behavioral task seem to be relevant during the earlier phase of training, whereas a brain network comprising the prefrontal cortex was found along the whole learning process. This study highlights the relevance of functional interactions among brain regions to investigate learning and memory processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. The disassociation of visual and acoustic conspecific cues decreases discrimination by female zebra finches (Taeniopygia guttata).

    PubMed

    Campbell, Dana L M; Hauber, Mark E

    2009-08-01

    Female zebra finches (Taeniopygia guttata) use visual and acoustic traits for accurate recognition of male conspecifics. Evidence from video playbacks confirms that both sensory modalities are important for conspecific and species discrimination, but experimental evidence of the individual roles of these cue types affecting live conspecific recognition is limited. In a spatial paradigm to test discrimination, the authors used live male zebra finch stimuli of 2 color morphs, wild-type (conspecific) and white with a painted black beak (foreign), producing 1 of 2 vocalization types: songs and calls learned from zebra finch parents (conspecific) or cross-fostered songs and calls learned from Bengalese finch (Lonchura striata vars. domestica) foster parents (foreign). The authors found that female zebra finches consistently preferred males with conspecific visual and acoustic cues over males with foreign cues, but did not discriminate when the conspecific and foreign visual and acoustic cues were mismatched. These results indicate the importance of both visual and acoustic features for female zebra finches when discriminating between live conspecific males. Copyright 2009 APA, all rights reserved.

  17. Short-Term Visual Deprivation, Tactile Acuity, and Haptic Solid Shape Discrimination

    PubMed Central

    Crabtree, Charles E.; Norman, J. Farley

    2014-01-01

    Previous psychophysical studies have reported conflicting results concerning the effects of short-term visual deprivation upon tactile acuity. Some studies have found that 45 to 90 minutes of total light deprivation produce significant improvements in participants' tactile acuity as measured with a grating orientation discrimination task. In contrast, a single 2011 study found no such improvement while attempting to replicate these earlier findings. A primary goal of the current experiment was to resolve this discrepancy in the literature by evaluating the effects of a 90-minute period of total light deprivation upon tactile grating orientation discrimination. We also evaluated the potential effect of short-term deprivation upon haptic 3-D shape discrimination using a set of naturally-shaped solid objects. According to previous research, short-term deprivation enhances performance in a tactile 2-D shape discrimination task – perhaps a similar improvement also occurs for haptic 3-D shape discrimination. The results of the current investigation demonstrate that not only does short-term visual deprivation not enhance tactile acuity, it additionally has no effect upon haptic 3-D shape discrimination. While visual deprivation had no effect in our study, there was a significant effect of experience and learning for the grating orientation task – the participants' tactile acuity improved over time, independent of whether they had, or had not, experienced visual deprivation. PMID:25397327

  18. Honeybees can discriminate between Monet and Picasso paintings.

    PubMed

    Wu, Wen; Moreno, Antonio M; Tangen, Jason M; Reinhard, Judith

    2013-01-01

    Honeybees (Apis mellifera) have remarkable visual learning and discrimination abilities that extend beyond learning simple colours, shapes or patterns. They can discriminate landscape scenes, types of flowers, and even human faces. This suggests that in spite of their small brain, honeybees have a highly developed capacity for processing complex visual information, comparable in many respects to vertebrates. Here, we investigated whether this capacity extends to complex images that humans distinguish on the basis of artistic style: Impressionist paintings by Monet and Cubist paintings by Picasso. We show that honeybees learned to simultaneously discriminate between five different Monet and Picasso paintings, and that they do not rely on luminance, colour, or spatial frequency information for discrimination. When presented with novel paintings of the same style, the bees even demonstrated some ability to generalize. This suggests that honeybees are able to discriminate Monet paintings from Picasso ones by extracting and learning the characteristic visual information inherent in each painting style. Our study further suggests that discrimination of artistic styles is not a higher cognitive function that is unique to humans, but simply due to the capacity of animals, from insects to humans, to extract and categorize the visual characteristics of complex images.

  19. Coupled binary embedding for large-scale image retrieval.

    PubMed

    Zheng, Liang; Wang, Shengjin; Tian, Qi

    2014-08-01

    Visual matching is a crucial step in image retrieval based on the bag-of-words (BoW) model. In the baseline method, two keypoints are considered a matching pair if their SIFT descriptors are quantized to the same visual word. However, the SIFT visual word has two limitations. First, it loses most of its discriminative power during quantization. Second, SIFT only describes the local texture feature. Both drawbacks impair the discriminative power of the BoW model and lead to false positive matches. To tackle this problem, this paper proposes to embed multiple binary features at the indexing level. To model correlation between features, a multi-IDF scheme is introduced, through which different binary features are coupled into the inverted file. We show that matching verification methods based on binary features, such as Hamming embedding, can be effectively incorporated into our framework. As an extension, we explore the fusion of a binary color feature into image retrieval. The joint integration of the SIFT visual word and binary features greatly enhances the precision of visual matching, reducing the impact of false positive matches. Our method is evaluated through extensive experiments on four benchmark datasets (Ukbench, Holidays, DupImage, and MIR Flickr 1M). We show that our method significantly improves on the baseline approach. In addition, large-scale experiments indicate that the proposed method requires acceptable memory usage and query time compared with other approaches. Further, when a global color feature is integrated, our method yields performance competitive with the state of the art.
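The verification idea in this record can be illustrated in a few lines: a candidate pair is accepted only if the two keypoints share a visual word (the baseline BoW test) and their binary signatures lie within a Hamming radius. This is a minimal sketch of the general Hamming-embedding idea, not the paper's coupled multi-IDF indexing; the integer signatures and the radius of 8 bits are illustrative choices.

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two binary signatures stored as ints."""
    return bin(a ^ b).count("1")

def verified_match(word_a, sig_a, word_b, sig_b, radius=8):
    """Baseline BoW match (same visual word) plus binary verification:
    reject pairs whose binary signatures differ in more than `radius` bits."""
    return word_a == word_b and hamming_distance(sig_a, sig_b) <= radius
```

The second test prunes false positives that coarse quantization alone lets through: two keypoints can fall into the same visual word yet have very different signatures, and such pairs are rejected.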

  20. Visual speech alters the discrimination and identification of non-intact auditory speech in children with hearing loss.

    PubMed

    Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé

    2017-03-01

    Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes, yet we have little evidence about the role of early auditory experience and visual speech in the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets): for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to a non-intact onset/rhyme in the auditory track (/-B/aa or /-B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/-B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same (as opposed to different) responses in the audiovisual than in the auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /-B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz (as opposed to az) responses in the audiovisual than in the auditory mode. 
Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/-B/aa) and more intact onset responses for nonword repetition (Baz for /-B/az). Thus visual speech altered both discrimination and identification in the CHL, to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children's discrimination skills (i.e., d' analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in CHL significantly predicted their identification of the onsets, even after variation due to the other variables was controlled. These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
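The bias-free d' measure mentioned in this record can be computed from hit and false-alarm counts with the standard-library normal distribution. This is the textbook signal-detection formula, not the authors' exact analysis; the log-linear correction (add 0.5 to each numerator, 1 to each denominator) that keeps rates away from 0 and 1 is a common convention assumed here.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate): a discriminability index
    unaffected by response bias. Rates are corrected so the inverse-normal
    transform stays finite even with perfect (0 or 1) observed rates."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)
```

For example, 45 hits out of 50 signal trials with 5 false alarms out of 50 noise trials gives d' of roughly 2.5, while chance performance (equal hit and false-alarm rates) gives d' of 0.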

  1. Visual Speech Alters the Discrimination and Identification of Non-Intact Auditory Speech in Children with Hearing Loss

    PubMed Central

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Hervé

    2017-01-01

    Objectives Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Methods Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/–B/aa or /–B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/–B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same—as opposed to different—responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /–B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz—as opposed to az— responses in the audiovisual than auditory mode. 
Results Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/–B/aa) and more intact onset responses for nonword repetition (Baz for/–B/az). Thus visual speech altered both discrimination and identification in the CHL—to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children’s discrimination skills (i.e., d’ analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in CHL significantly predicted their identification of the onsets—even after variation due to the other variables was controlled. Conclusions These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. PMID:28167003

  2. A conflict-based model of color categorical perception: evidence from a priming study.

    PubMed

    Hu, Zhonghua; Hanley, J Richard; Zhang, Ruiling; Liu, Qiang; Roberson, Debi

    2014-10-01

    Categorical perception (CP) of color manifests as faster or more accurate discrimination of two shades of color that straddle a category boundary (e.g., one blue and one green) than of two shades from within the same category (e.g., two different shades of green), even when the differences between the pairs of colors are equated according to some objective metric. The results of two experiments provide new evidence for a conflict-based account of this effect, in which CP is caused by competition between visual and verbal/categorical codes on within-category trials. According to this view, conflict arises because the verbal code indicates that the two colors are the same, whereas the visual code indicates that they are different. In Experiment 1, two shades from the same color category were discriminated significantly faster when the previous trial also comprised a pair of within-category colors than when the previous trial comprised a pair from two different color categories. Under the former circumstances, the CP effect disappeared. According to the conflict-based model, response conflict between visual and categorical codes during discrimination of within-category pairs produced an adjustment of cognitive control that reduced the weight given to the categorical code relative to the visual code on the subsequent trial. Consequently, responses on within-category trials were facilitated, and CP effects were reduced. The effectiveness of this conflict-based account was evaluated in comparison with an alternative view that CP reflects temporary warping of perceptual space at the boundaries between color categories.

  3. Teaching Equivalence Relations to Individuals with Minimal Verbal Repertoires: Are Visual and Auditory-Visual Discriminations Predictive of Stimulus Equivalence?

    ERIC Educational Resources Information Center

    Vause, Tricia; Martin, Garry L.; Yu, C.T.; Marion, Carole; Sakko, Gina

    2005-01-01

    The relationship between language, performance on the Assessment of Basic Learning Abilities (ABLA) test, and stimulus equivalence was examined. Five participants with minimal verbal repertoires were studied; 3 who passed up to ABLA Level 4, a visual quasi-identity discrimination and 2 who passed ABLA Level 6, an auditory-visual nonidentity…

  4. Visual Discrimination of Color Normals and Color Deficients. Final Report.

    ERIC Educational Resources Information Center

    Chen, Yih-Wen

    Since visual discrimination is one of the factors involved in learning from instructional media, the present study was designed (1) to investigate the effects of hue contrast, illuminant intensity, brightness contrast, and viewing distance on the discrimination accuracy of those who see color normally and those who do not, and (2) to investigate…

  5. Visual body recognition in a prosopagnosic patient.

    PubMed

    Moro, V; Pernigo, S; Avesani, R; Bulgarelli, C; Urgesi, C; Candidi, M; Aglioti, S M

    2012-01-01

    Conspicuous deficits in face recognition characterize prosopagnosia. Information on whether agnosic deficits may extend to non-facial body parts is lacking. Here we report the neuropsychological description of FM, a patient affected by a complete deficit in face recognition in the presence of mild clinical signs of visual object agnosia. His deficit involves both overt and covert recognition of faces (i.e. recognition of familiar faces, but also categorization of faces for gender or age) as well as the visual mental imagery of faces. By means of a series of matching-to-sample tasks we investigated: (i) a possible association between prosopagnosia and disorders in visual body perception; (ii) the effect of the emotional content of stimuli on the visual discrimination of faces, bodies and objects; (iii) the existence of a dissociation between identity recognition and the emotional discrimination of faces and bodies. Our results document, for the first time, the co-occurrence of body agnosia, i.e. the visual inability to discriminate body forms and body actions, and prosopagnosia. Moreover, the results show better performance in the discrimination of emotional face and body expressions with respect to body identity and neutral actions. Since FM's lesions involve bilateral fusiform areas, it is unlikely that the amygdala-temporal projections explain the relative sparing of emotion discrimination performance. Indeed, the emotional content of the stimuli did not improve the discrimination of their identity. The results hint at the existence of two segregated brain networks involved in identity and emotional discrimination that are at least partially shared by face and body processing. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Sequential Ideal-Observer Analysis of Visual Discriminations.

    ERIC Educational Resources Information Center

    Geisler, Wilson S.

    1989-01-01

    A new analysis, based on the concept of the ideal observer in signal detection theory, is described. It allows: tracing of the flow of discrimination information through the initial physiological stages of visual processing for arbitrary spatio-chromatic stimuli, and measurement of the information content of said visual stimuli. (TJH)

  7. Tensor discriminant color space for face recognition.

    PubMed

    Wang, Su-Jing; Yang, Jian; Zhang, Na; Zhou, Chun-Guang

    2011-09-01

    Recent research efforts reveal that color may provide useful information for face recognition. For different visual tasks, the choice of a color space is generally different. How can a color space be sought for the specific face recognition problem? To address this problem, this paper represents a color image as a third-order tensor and presents the tensor discriminant color space (TDCS) model. The model can keep the underlying spatial structure of color images. With the definition of n-mode between-class scatter matrices and within-class scatter matrices, TDCS constructs an iterative procedure to obtain one color space transformation matrix and two discriminant projection matrices by maximizing the ratio of these two scatter matrices. Experiments conducted on two color face databases (AR and Georgia Tech) show that both the performance and the efficiency of the proposed method are better than those of the state-of-the-art color image discriminant model, which involves one color space transformation matrix and one discriminant projection matrix, especially on a complicated face database with various pose variations.

  8. The Temporal Dynamics of Visual Search: Evidence for Parallel Processing in Feature and Conjunction Searches

    PubMed Central

    McElree, Brian; Carrasco, Marisa

    2012-01-01

    Feature and conjunction searches have been argued to delineate parallel and serial operations in visual processing. The authors evaluated this claim by examining the temporal dynamics of the detection of features and conjunctions. The 1st experiment used a reaction time (RT) task to replicate standard mean RT patterns and to examine the shapes of the RT distributions. The 2nd experiment used the response-signal speed–accuracy trade-off (SAT) procedure to measure discrimination (asymptotic detection accuracy) and detection speed (processing dynamics). Set size affected discrimination in both feature and conjunction searches but affected detection speed only in the latter. Fits of models to the SAT data that included a serial component overpredicted the magnitude of the observed dynamics differences. The authors concluded that both features and conjunctions are detected in parallel. Implications for the role of attention in visual processing are discussed. PMID:10641310
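In the response-signal SAT literature, accuracy as a function of processing time is commonly fit with a shifted exponential approach to an asymptote: set-size effects on discrimination show up in the asymptote, while effects on detection speed show up in the rate or intercept. A minimal sketch of that standard function (parameter names are ours, not the paper's):

```python
import math

def sat_dprime(t, lam, beta, delta):
    """Shifted-exponential SAT function for response-signal data:
    lam is asymptotic accuracy (d'), beta the rate of rise, and
    delta the intercept; accuracy is at chance (d' = 0) before delta."""
    if t <= delta:
        return 0.0
    return lam * (1.0 - math.exp(-beta * (t - delta)))

# Set-size effects on discrimination alone change lam; effects on
# detection speed change beta and/or delta as well.
```

Fitting separate lam values but shared beta/delta (or vice versa) across set sizes is how the two kinds of effects are teased apart.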

  9. Investigating the role of the superior colliculus in active vision with the visual search paradigm.

    PubMed

    Shen, Kelly; Valero, Jerome; Day, Gregory S; Paré, Martin

    2011-06-01

    We review here both the evidence that the functional visuomotor organization of the optic tectum is conserved in the primate superior colliculus (SC) and the evidence for the linking proposition that SC discriminating activity instantiates saccade target selection. We also present new data in response to questions that arose from recent SC visual search studies. First, we observed that SC discriminating activity predicts saccade initiation when monkeys perform an unconstrained search for a target defined by either a single visual feature or a conjunction of two features. Quantitative differences between the results in these two search tasks suggest, however, that SC discriminating activity does not only reflect saccade programming. This finding concurs with visual search studies conducted in posterior parietal cortex and the idea that, during natural active vision, visual attention is shifted concomitantly with saccade programming. Second, the analysis of a large neuronal sample recorded during feature search revealed that visual neurons in the superficial layers do possess discriminating activity. In addition, the hypotheses that there are distinct types of SC neurons in the deeper layers and that they are differently involved in saccade target selection were not substantiated. Third, we found that the discriminating quality of single-neuron activity substantially surpasses the ability of the monkeys to discriminate the target from distracters, raising the possibility that saccade target selection is a noisy process. We discuss these new findings in light of the visual search literature and the view that the SC is a visual salience map for orienting eye movements. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  10. Masking by Gratings Predicted by an Image Sequence Discriminating Model: Testing Models for Perceptual Discrimination Using Repeatable Noise

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    Adding noise to stimuli to be discriminated allows estimation of observer classification functions based on the correlation between observer responses and relevant features of the noisy stimuli. Examples will be presented of stimulus features that are found in auditory tone detection and visual vernier acuity. Using the standard signal detection model (Thurstone scaling), we derive formulas to estimate the proportion of the observer's decision variable variance that is controlled by the added noise. One is based on the probability of agreement of the observer with him/herself on trials with the same noise sample. Another is based on the relative performance of the observer and the model. When these do not agree, the model can be rejected. A second derivation gives the probability of agreement of observer and model when the observer follows the model except for internal noise. Agreement significantly less than this amount allows rejection of the model.
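The self-agreement logic can be illustrated with a toy two-pass simulation. This is a sketch of the standard Gaussian sign-agreement result, not necessarily the paper's exact derivation: if the decision variable is evidence driven by the repeated noise sample plus independent internal noise, the proportion of decision-variable variance controlled by the added noise, rho, predicts an agreement rate of 1/2 + arcsin(rho)/pi for an unbiased binary decision.

```python
import math
import random

random.seed(1)

def two_pass_agreement(sigma_ext=1.0, sigma_int=1.0, n_trials=100_000):
    """Simulate a two-pass experiment: the same external-noise-driven
    evidence e is shown on both passes, while internal noise is drawn
    independently on each pass. Returns (observed agreement rate,
    agreement rate predicted from rho)."""
    agree = 0
    for _ in range(n_trials):
        e = random.gauss(0.0, sigma_ext)       # evidence set by the noise sample
        d1 = e + random.gauss(0.0, sigma_int)  # pass-1 decision variable
        d2 = e + random.gauss(0.0, sigma_int)  # pass-2 decision variable
        if (d1 >= 0) == (d2 >= 0):             # same binary response on both passes
            agree += 1
    # rho: proportion of decision-variable variance controlled by the added noise
    rho = sigma_ext ** 2 / (sigma_ext ** 2 + sigma_int ** 2)
    predicted = 0.5 + math.asin(rho) / math.pi  # Gaussian sign-agreement probability
    return agree / n_trials, predicted

observed, predicted = two_pass_agreement()
```

Inverting the formula turns a measured self-agreement rate into an estimate of rho, which is the quantity the abstract describes.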

  11. Is improved contrast sensitivity a natural consequence of visual training?

    PubMed Central

    Levi, Aaron; Shaked, Danielle; Tadin, Duje; Huxlin, Krystel R.

    2015-01-01

    Many studies have shown that training and testing conditions modulate specificity of visual learning to trained stimuli and tasks. In visually impaired populations, generalizability of visual learning to untrained stimuli/tasks is almost always reported, with contrast sensitivity (CS) featuring prominently among these collaterally-improved functions. To understand factors underlying this difference, we measured CS for direction and orientation discrimination in the visual periphery of three groups of visually-intact subjects. Group 1 trained on an orientation discrimination task with static Gabors whose luminance contrast was decreased as performance improved. Group 2 trained on a global direction discrimination task using high-contrast random dot stimuli previously used to recover motion perception in cortically blind patients. Group 3 underwent no training. Both forms of training improved CS with some degree of specificity for basic attributes of the trained stimulus/task. Group 1's largest enhancement was in CS around the trained spatial/temporal frequencies; similarly, Group 2's largest improvements occurred in CS for discriminating moving and flickering stimuli. Group 3 saw no significant CS changes. These results indicate that CS improvements may be a natural consequence of multiple forms of visual training in visually intact humans, albeit with some specificity to the trained visual domain(s). PMID:26305736

  12. Solid shape discrimination from vision and haptics: natural objects (Capsicum annuum) and Gibson's "feelies".

    PubMed

    Norman, J Farley; Phillips, Flip; Holmin, Jessica S; Norman, Hideko F; Beers, Amanda M; Boswell, Alexandria M; Cheeseman, Jacob R; Stethen, Angela G; Ronning, Cecilia

    2012-10-01

    A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.

  13. Time course of discrimination between emotional facial expressions: the role of visual saliency.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2011-08-01

    Saccadic and manual responses were used to investigate the speed of discrimination between happy and non-happy facial expressions in two-alternative forced-choice tasks. The minimum latencies of correct saccadic responses indicated that the earliest time point at which discrimination occurred ranged between 200 and 280 ms, depending on the type of expression. Corresponding minimum latencies for manual responses ranged between 440 and 500 ms. For both response modalities, visual saliency of the mouth region was a critical factor in facilitating discrimination: the more salient the mouth was in happy face targets in comparison with non-happy distracters, the faster discrimination was. Global image characteristics (e.g., luminance) and semantic factors (i.e., categorical similarity and affective valence of expression) made minor or no contributions to discrimination efficiency. This suggests that visual saliency of distinctive facial features, rather than the significance of the expression, is used to make both early and later expression discrimination decisions. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Tactile discrimination activates the visual cortex of the recently blind naive to Braille: a functional magnetic resonance imaging study in humans.

    PubMed

    Sadato, Norihiro; Okada, Tomohisa; Kubota, Kiyokazu; Yonekura, Yoshiharu

    2004-04-08

    The occipital cortex of blind subjects is known to be activated during tactile discrimination tasks such as Braille reading. To investigate whether this is due to long-term learning of Braille or to sensory deafferentation, we used fMRI to study tactile discrimination tasks in subjects who had recently lost their sight and never learned Braille. The occipital cortex of the blind subjects without Braille training was activated during the tactile discrimination task, whereas that of control sighted subjects was not. This finding suggests that the activation of the visual cortex of the blind during performance of a tactile discrimination task may be due to sensory deafferentation, wherein a competitive imbalance favors the tactile over the visual modality.

  15. Visual discrimination transfer and modulation by biogenic amines in honeybees.

    PubMed

    Vieira, Amanda Rodrigues; Salles, Nayara; Borges, Marco; Mota, Theo

    2018-05-10

    For more than a century, visual learning and memory have been studied in the honeybee Apis mellifera using operant appetitive conditioning. Although honeybees show impressive visual learning capacities in this well-established protocol, operant training of free-flying animals cannot be combined with invasive protocols for studying the neurobiological basis of visual learning. In view of this, different attempts have been made to develop new classical conditioning protocols for studying visual learning in harnessed honeybees, though learning performance remains considerably poorer than that for free-flying animals. Here, we investigated the ability of honeybees to use visual information acquired during classical conditioning in a new operant context. We performed differential visual conditioning of the proboscis extension reflex (PER) followed by visual orientation tests in a Y-maze. Classical conditioning and Y-maze retention tests were performed using the same pair of perceptually isoluminant chromatic stimuli, to avoid the influence of phototaxis during free-flying orientation. Visual discrimination transfer was clearly observed, with pre-trained honeybees significantly orienting their flights towards the former positive conditioned stimulus (CS+), thus showing that visual memories acquired by honeybees are resistant to context changes between conditioning and the retention test. We combined this visual discrimination approach with selective pharmacological injections to evaluate the effect of dopamine and octopamine in appetitive visual learning. Both octopaminergic and dopaminergic antagonists impaired visual discrimination performance, suggesting that both these biogenic amines modulate appetitive visual learning in honeybees. Our study brings new insight into cognitive and neurobiological mechanisms underlying visual learning in honeybees. © 2018. Published by The Company of Biologists Ltd.

  16. Visuomotor sensitivity to visual information about surface orientation.

    PubMed

    Knill, David C; Kersten, Daniel

    2004-03-01

    We measured human visuomotor sensitivity to visual information about three-dimensional surface orientation by analyzing movements made to place an object on a slanted surface. We applied linear discriminant analysis to the kinematics of subjects' movements to surfaces with differing slants (angle away from the fronto-parallel) to derive visuomotor d's for discriminating surfaces differing in slant by 5 degrees. Subjects' visuomotor sensitivity to information about surface orientation was very high, with discrimination "thresholds" ranging from 2 to 3 degrees. In a first experiment, we found that subjects performed only slightly better using binocular cues alone than monocular texture cues and that they showed only weak evidence for combining the cues when both were available, suggesting that monocular cues can be just as effective in guiding motor behavior in depth as binocular cues. In a second experiment, we measured subjects' perceptual discrimination and visuomotor thresholds in equivalent stimulus conditions to decompose visuomotor sensitivity into perceptual and motor components. Subjects' visuomotor thresholds were found to be slightly greater than their perceptual thresholds for a range of memory delays, from 1 to 3 s. The data were consistent with a model in which perceptual noise increases with increasing delay between stimulus presentation and movement initiation, but motor noise remains constant. This result suggests that visuomotor and perceptual systems rely on the same visual estimates of surface slant for memory delays ranging from 1 to 3 s.
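Deriving a d' from movement kinematics via linear discriminant analysis can be sketched as follows. This is a toy illustration, not the authors' analysis: the two-dimensional "kinematic" features and all numbers are invented, and d' is read off as the separation of the projected class means in pooled-standard-deviation units.

```python
import math
import random

random.seed(0)

def fisher_dprime(class_a, class_b):
    """Fisher linear discriminant for 2-D feature vectors: project onto
    w = S_w^-1 (mu_a - mu_b) and return d', the separation of the
    projected class means in pooled-standard-deviation units."""
    def mean(xs):
        n = len(xs)
        return [sum(v[0] for v in xs) / n, sum(v[1] for v in xs) / n]

    def scatter(xs, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for v in xs:
            d = [v[0] - m[0], v[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s

    ma, mb = mean(class_a), mean(class_b)
    sa, sb = scatter(class_a, ma), scatter(class_b, mb)
    sw = [[sa[i][j] + sb[i][j] for j in range(2)] for i in range(2)]  # within-class scatter
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],     # Fisher direction
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    pa = [w[0] * v[0] + w[1] * v[1] for v in class_a]
    pb = [w[0] * v[0] + w[1] * v[1] for v in class_b]
    mpa, mpb = sum(pa) / len(pa), sum(pb) / len(pb)
    pooled_var = (sum((x - mpa) ** 2 for x in pa) +
                  sum((x - mpb) ** 2 for x in pb)) / (len(pa) + len(pb) - 2)
    return abs(mpa - mpb) / math.sqrt(pooled_var)

# Invented stand-in "kinematics" (e.g. approach angle, peak speed) for
# movements to surfaces slanted 35 vs. 40 degrees; only the first feature
# carries slant information here.
slant35 = [(random.gauss(35.0, 2.0), random.gauss(1.0, 0.1)) for _ in range(500)]
slant40 = [(random.gauss(40.0, 2.0), random.gauss(1.0, 0.1)) for _ in range(500)]
dprime = fisher_dprime(slant35, slant40)
```

With a 5-degree mean difference and a 2-degree spread on the informative feature, the recovered d' lands near 2.5, i.e. a "threshold" of roughly 2 degrees per unit d', in the spirit of the numbers the abstract reports.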

  17. Cortical Neuroprosthesis Merges Visible and Invisible Light Without Impairing Native Sensory Function

    PubMed Central

    Thomson, Eric E.; Zea, Ivan; França, Wendy

    2017-01-01

    Abstract Adult rats equipped with a sensory prosthesis, which transduced infrared (IR) signals into electrical signals delivered to somatosensory cortex (S1), took approximately 4 d to learn a four-choice IR discrimination task. Here, we show that when such IR signals are projected to the primary visual cortex (V1), rats that are pretrained in a visual-discrimination task typically learn the same IR discrimination task on their first day of training. However, without prior training on a visual discrimination task, the learning rates for S1- and V1-implanted animals converged, suggesting there is no intrinsic difference in learning rate between the two areas. We also discovered that animals were able to integrate IR information into the ongoing visual processing stream in V1, performing a visual-IR integration task in which they had to combine IR and visual information. Furthermore, when the IR prosthesis was implanted in S1, rats showed no impairment in their ability to use their whiskers to perform a tactile discrimination task. Instead, in some rats, this ability was actually enhanced. Cumulatively, these findings suggest that cortical sensory neuroprostheses can rapidly augment the representational scope of primary sensory areas, integrating novel sources of information into ongoing processing while incurring minimal loss of native function. PMID:29279860

  18. A Mouse Model of Visual Perceptual Learning Reveals Alterations in Neuronal Coding and Dendritic Spine Density in the Visual Cortex.

    PubMed

    Wang, Yan; Wu, Wei; Zhang, Xian; Hu, Xu; Li, Yue; Lou, Shihao; Ma, Xiao; An, Xu; Liu, Hui; Peng, Jing; Ma, Danyi; Zhou, Yifeng; Yang, Yupeng

    2016-01-01

    Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and properties of VPL on spatial vision in C57BL/6J mice using a two-alternative forced-choice visual water task. Briefly, the mice underwent prolonged training near their individual threshold of contrast or spatial frequency (SF) for pattern discrimination or visual detection for 35 consecutive days. Following training, the contrast-threshold-trained mice showed an 87% improvement in contrast sensitivity (CS) and a 55% gain in visual acuity (VA). Similarly, the SF-threshold-trained mice exhibited comparable and long-lasting improvements in VA and significant gains in CS over a wide range of SFs. Furthermore, learning largely transferred across eyes and stimulus orientations. Interestingly, learning could transfer from a pattern discrimination task to a visual detection task, but not vice versa. We validated that this VPL fully restored VA in adult amblyopic mice and old mice. Taken together, these data indicate that mice, as a species, exhibit reliable VPL. Intrinsic signal optical imaging revealed that mice with perceptual training had higher cut-off SFs in primary visual cortex (V1) than those without perceptual training. Moreover, perceptual training induced an increase in dendritic spine density in layer 2/3 pyramidal neurons of V1. These results indicate functional and structural alterations in V1 during VPL. Overall, our VPL mouse model will provide a platform for investigating the neurobiological basis of VPL.

  19. Global Image Dissimilarity in Macaque Inferotemporal Cortex Predicts Human Visual Search Efficiency

    PubMed Central

    Sripati, Arun P.; Olson, Carl R.

    2010-01-01

    Finding a target in a visual scene can be easy or difficult depending on the nature of the distractors. Research in humans has suggested that search is more difficult the more similar the target and distractors are to each other. However, it has not yielded an objective definition of similarity. We hypothesized that visual search performance depends on similarity as determined by the degree to which two images elicit overlapping patterns of neuronal activity in visual cortex. To test this idea, we recorded from neurons in monkey inferotemporal cortex (IT) and assessed visual search performance in humans using pairs of images formed from the same local features in different global arrangements. The ability of IT neurons to discriminate between two images was strongly predictive of the ability of humans to discriminate between them during visual search, accounting overall for 90% of the variance in human performance. A simple physical measure of global similarity – the degree of overlap between the coarse footprints of a pair of images – largely explains both the neuronal and the behavioral results. To explain the relation between population activity and search behavior, we propose a model in which the efficiency of global oddball search depends on contrast-enhancing lateral interactions in high-order visual cortex. PMID:20107054
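One simple way to operationalize a coarse-footprint overlap of the kind described above is to OR-pool each binary image over large blocks and take the Jaccard overlap of the results. This is an illustrative sketch only; the block size, pooling rule, and overlap statistic are our assumptions, not the paper's exact measure.

```python
def coarse_footprint(img, block=4):
    """OR-pool a binary image over block x block cells to obtain its
    coarse spatial footprint."""
    h, w = len(img), len(img[0])
    return [[int(any(img[r][c]
                     for r in range(i, min(i + block, h))
                     for c in range(j, min(j + block, w))))
             for j in range(0, w, block)]
            for i in range(0, h, block)]

def footprint_overlap(a, b, block=4):
    """Jaccard overlap between the coarse footprints of two images."""
    fa, fb = coarse_footprint(a, block), coarse_footprint(b, block)
    inter = sum(x & y for ra, rb in zip(fa, fb) for x, y in zip(ra, rb))
    union = sum(x | y for ra, rb in zip(fa, fb) for x, y in zip(ra, rb))
    return inter / union if union else 1.0

def blob(top, left, size=3, dim=8):
    """Binary dim x dim image containing a size x size blob of ones."""
    return [[int(top <= r < top + size and left <= c < left + size)
             for c in range(dim)] for r in range(dim)]

# Same local feature, similar vs. different global arrangement
same_arrangement = footprint_overlap(blob(0, 0), blob(1, 1))
diff_arrangement = footprint_overlap(blob(0, 0), blob(5, 5))
```

Under this measure, a small local displacement leaves the coarse footprint intact (high overlap), while moving the same feature to a different global position destroys it (low overlap), which is the intuition behind predicting harder search for globally similar image pairs.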

  20. An Amorphous Model for Morphological Processing in Visual Comprehension Based on Naive Discriminative Learning

    ERIC Educational Resources Information Center

    Baayen, R. Harald; Milin, Petar; Durdevic, Dusica Filipovic; Hendrix, Peter; Marelli, Marco

    2011-01-01

    A 2-layer symbolic network model based on the equilibrium equations of the Rescorla-Wagner model (Danks, 2003) is proposed. The study first presents 2 experiments in Serbian, which reveal for sentential reading the inflectional paradigmatic effects previously observed by Milin, Filipovic Durdevic, and Moscoso del Prado Martin (2009) for unprimed…

  1. Increased conspicuousness can explain the match between visual sensitivities and blue plumage colours in fairy-wrens

    PubMed Central

    Delhey, Kaspar; Hall, Michelle; Kingma, Sjouke A.; Peters, Anne

    2013-01-01

    Colour signals are expected to match visual sensitivities of intended receivers. In birds, evolutionary shifts from violet-sensitive (V-type) to ultraviolet-sensitive (U-type) vision have been linked to increased prevalence of colours rich in shortwave reflectance (ultraviolet/blue), presumably due to better perception of such colours by U-type vision. Here we provide the first test of this widespread idea using fairy-wrens and allies (Family Maluridae) as a model, a family where shifts in visual sensitivities from V- to U-type eyes are associated with male nuptial plumage rich in ultraviolet/blue colours. Using psychophysical visual models, we compared the performance of both types of visual systems at two tasks: (i) detecting contrast between male plumage colours and natural backgrounds, and (ii) perceiving intraspecific chromatic variation in male plumage. While U-type outperforms V-type vision at both tasks, the crucial test here is whether U-type vision performs better at detecting and discriminating ultraviolet/blue colours when compared with other colours. This was true for detecting contrast between plumage colours and natural backgrounds (i), but not for discriminating intraspecific variability (ii). Our data indicate that selection to maximize conspicuousness to conspecifics may have led to the correlation between ultraviolet/blue colours and U-type vision in this clade of birds. PMID:23118438

  2. Reading Performance Is Enhanced by Visual Texture Discrimination Training in Chinese-Speaking Children with Developmental Dyslexia

    PubMed Central

    Meng, Xiangzhi; Lin, Ou; Wang, Fang; Jiang, Yuzheng; Song, Yan

    2014-01-01

    Background: High order cognitive processing and learning, such as reading, interact with lower-level sensory processing and learning. Previous studies have reported that visual perceptual training enlarges the visual span and, consequently, improves reading speed in young and old people with amblyopia. Recently, a visual perceptual training study in Chinese-speaking children with dyslexia found that these children's visual texture discrimination thresholds significantly correlated with their performance in Chinese character recognition, suggesting that deficits in visual perceptual processing/learning might partly underpin the difficulty in reading Chinese.
    Methodology/Principal Findings: To further clarify whether visual perceptual training improves measures of reading performance, eighteen children with dyslexia and eighteen age- and IQ-matched typically developing readers completed a series of reading measures before and after visual texture discrimination task (TDT) training. Prior to the TDT training, each group of children was split into training and non-training subgroups that were equivalent on all reading measures, IQ, and TDT. The results revealed that the discrimination threshold SOAs of the TDT were significantly higher for the children with dyslexia than for the control children before training. Interestingly, training significantly decreased the discrimination threshold SOAs of the TDT for both the typically developing readers and the children with dyslexia. More importantly, the training group with dyslexia exhibited significant enhancement in reading fluency, while the non-training group with dyslexia did not show this improvement. Additional follow-up tests showed that the improvement in reading fluency is a long-lasting effect that could be maintained for up to two months in the training group with dyslexia.
    Conclusion/Significance: These results suggest that basic visual perceptual processing/learning and reading ability in Chinese might at least partially rely on overlapping mechanisms. PMID:25247602

  3. Visual Discrimination and Motor Reproduction of Movement by Individuals with Mental Retardation.

    ERIC Educational Resources Information Center

    Shinkfield, Alison J.; Sparrow, W. A.; Day, R. H.

    1997-01-01

    Visual discrimination and motor reproduction tasks involving computer-simulated arm movements were administered to 12 adults with mental retardation and a gender-matched control group. The purpose was to examine whether inadequacies in visual perception account for the poorer motor performance of this population. Results indicate both perceptual…

  4. Perceptual and academic patterns of learning-disabled/gifted students.

    PubMed

    Waldron, K A; Saphire, D G

    1992-04-01

    This research explored ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. 24 learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. Conclusions are that these underlying perceptual and memory deficits may be related to students' academic problems.

  5. Sex differences in audiovisual discrimination learning by Bengalese finches (Lonchura striata var. domestica).

    PubMed

    Seki, Yoshimasa; Okanoya, Kazuo

    2008-02-01

    Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.

  6. The Development of Face Perception in Infancy: Intersensory Interference and Unimodal Visual Facilitation

    PubMed Central

    Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina

    2014-01-01

    Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the Intersensory Redundancy Hypothesis (IRH), that face discrimination, which relies on detection of visual featural information, would be impaired in the context of intersensory redundancy provided by audiovisual speech, and enhanced in the absence of intersensory redundancy (unimodal visual and asynchronous audiovisual speech) in early development. Later in development, following improvements in attention, faces should be discriminated in both redundant audiovisual and nonredundant stimulation. Results supported these predictions. Two-month-old infants discriminated a novel face in unimodal visual and asynchronous audiovisual speech but not in synchronous audiovisual speech. By 3 months, face discrimination was evident even during synchronous audiovisual speech. These findings indicate that infant face perception is enhanced and emerges developmentally earlier following unimodal visual than synchronous audiovisual exposure and that intersensory redundancy generated by naturalistic audiovisual speech can interfere with face processing. PMID:23244407

  7. Olfactory discrimination: when vision matters?

    PubMed

    Demattè, M Luisa; Sanabria, Daniel; Spence, Charles

    2009-02-01

    Many previous studies have attempted to investigate the effect of visual cues on olfactory perception in humans. The majority of this research has only looked at the modulatory effect of color, which has typically been explained in terms of multisensory perceptual interactions. However, such crossmodal effects may equally well relate to interactions taking place at a higher level of information processing as well. In fact, it is well-known that semantic knowledge can have a substantial effect on people's olfactory perception. In the present study, we therefore investigated the influence of visual cues, consisting of color patches and/or shapes, on people's olfactory discrimination performance. Participants had to make speeded odor discrimination responses (lemon vs. strawberry) while viewing a red or yellow color patch, an outline drawing of a strawberry or lemon, or a combination of these color and shape cues. Even though participants were instructed to ignore the visual stimuli, our results demonstrate that the accuracy of their odor discrimination responses was influenced by visual distractors. This result shows that both color and shape information are taken into account during speeded olfactory discrimination, even when such information is completely task irrelevant, hinting at the automaticity of such higher level visual-olfactory crossmodal interactions.

  8. Visual processing in reading disorders and attention-deficit/hyperactivity disorder and its contribution to basic reading ability

    PubMed Central

    Kibby, Michelle Y.; Dyer, Sarah M.; Vadnais, Sarah A.; Jagger, Audreyana C.; Casher, Gabriel A.; Stacy, Maria

    2015-01-01

    Whether visual processing deficits are common in reading disorders (RD), and related to reading ability in general, has been debated for decades. The type of visual processing affected also is debated, although visual discrimination and short-term memory (STM) may be more commonly related to reading ability. Reading disorders are frequently comorbid with ADHD, and children with ADHD often have subclinical reading problems. Hence, children with ADHD were used as a comparison group in this study. ADHD and RD may be dissociated in terms of visual processing. Whereas RD may be associated with deficits in visual discrimination and STM for order, ADHD is associated with deficits in visual-spatial processing. Thus, we hypothesized that children with RD would perform worse than controls and children with ADHD only on a measure of visual discrimination and a measure of visual STM that requires memory for order. We expected all groups would perform comparably on the measure of visual STM that does not require sequential processing. We found children with RD or ADHD were commensurate to controls on measures of visual discrimination and visual STM that do not require sequential processing. In contrast, both RD groups (RD, RD/ADHD) performed worse than controls on the measure of visual STM that requires memory for order, and children with comorbid RD/ADHD performed worse than those with ADHD. In addition, of the three visual measures, only sequential visual STM predicted reading ability. Hence, our findings suggest there is a deficit in visual sequential STM that is specific to RD and is related to basic reading ability. The source of this deficit is worthy of further research, but it may include both reduced memory for order and poorer verbal mediation. PMID:26579020

  9. Visual processing in reading disorders and attention-deficit/hyperactivity disorder and its contribution to basic reading ability.

    PubMed

    Kibby, Michelle Y; Dyer, Sarah M; Vadnais, Sarah A; Jagger, Audreyana C; Casher, Gabriel A; Stacy, Maria

    2015-01-01

    Whether visual processing deficits are common in reading disorders (RD), and related to reading ability in general, has been debated for decades. The type of visual processing affected also is debated, although visual discrimination and short-term memory (STM) may be more commonly related to reading ability. Reading disorders are frequently comorbid with ADHD, and children with ADHD often have subclinical reading problems. Hence, children with ADHD were used as a comparison group in this study. ADHD and RD may be dissociated in terms of visual processing. Whereas RD may be associated with deficits in visual discrimination and STM for order, ADHD is associated with deficits in visual-spatial processing. Thus, we hypothesized that children with RD would perform worse than controls and children with ADHD only on a measure of visual discrimination and a measure of visual STM that requires memory for order. We expected all groups would perform comparably on the measure of visual STM that does not require sequential processing. We found that children with RD or ADHD performed comparably to controls on measures of visual discrimination and visual STM that do not require sequential processing. In contrast, both RD groups (RD, RD/ADHD) performed worse than controls on the measure of visual STM that requires memory for order, and children with comorbid RD/ADHD performed worse than those with ADHD. In addition, of the three visual measures, only sequential visual STM predicted reading ability. Hence, our findings suggest there is a deficit in visual sequential STM that is specific to RD and is related to basic reading ability. The source of this deficit is worthy of further research, but it may include both reduced memory for order and poorer verbal mediation.

  10. A hierarchical word-merging algorithm with class separability measure.

    PubMed

    Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan

    2014-03-01

    In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct a detailed theoretical analysis comparing it with another hierarchical word-merging algorithm, one that maximally preserves mutual information. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
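
    The greedy, level-by-level merging scheme described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the frequency-weighted purity score below stands in for their class separability measure, and the naive O(W^2) pair search omits the indexing structure that makes their algorithm fast.

```python
import numpy as np

def purity(counts):
    # Separability proxy (an assumption, not the paper's measure):
    # frequency-weighted purity of the word-class co-occurrence matrix.
    return counts.max(axis=1).sum() / counts.sum()

def greedy_merge(counts, target_size):
    # counts: (n_words, n_classes) word-class co-occurrence matrix.
    # At each level, merge the pair of words whose union best preserves
    # the separability score, until target_size words remain.
    words = [row.astype(float) for row in counts]
    merge_log = []
    while len(words) > target_size:
        best = None
        for i in range(len(words)):
            for j in range(i + 1, len(words)):
                trial = words[:i] + words[i+1:j] + words[j+1:] + [words[i] + words[j]]
                score = purity(np.vstack(trial))
                if best is None or score > best[0]:
                    best = (score, i, j)
        score, i, j = best
        merged = words[i] + words[j]
        words = [w for k, w in enumerate(words) if k not in (i, j)] + [merged]
        merge_log.append((i, j, score))
    return np.vstack(words), merge_log
```

    Merging four words drawn from two pure classes down to two, for example, pairs the words that share a dominant class and keeps the purity score high.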

  11. Oxytocin receptor activation in the basolateral complex of the amygdala enhances discrimination between discrete cues and promotes configural processing of cues.

    PubMed

    Fam, Justine; Holmes, Nathan; Delaney, Andrew; Crane, James; Westbrook, R Frederick

    2018-06-14

    Oxytocin (OT) is a neuropeptide that influences the expression of social behavior and regulates its distribution according to the social context: OT is associated with increased pro-social effects in the absence of social threat and with defensive aggression when threats are present. The present experiments investigated the effects of OT beyond social behavior by using a discriminative Pavlovian fear conditioning protocol with rats. In Experiment 1, an OT receptor agonist (TGOT) microinjected into the basolateral amygdala facilitated the discrimination between an auditory cue that signaled shock and another auditory cue that signaled the absence of shock. This TGOT-facilitated discrimination was replicated in a second experiment in which the shocked and non-shocked auditory cues were accompanied by a common visual cue. Conditioned responding on probe trials of the auditory and visual elements indicated that TGOT administration produced a qualitative shift in the learning mechanisms underlying the discrimination between the two compounds. This was confirmed by comparisons between the present results and simulated predictions of elemental and configural associative learning models. Overall, the present findings demonstrate that the neuromodulatory effects of OT influence behavior outside of the social domain. Copyright © 2018 Elsevier Ltd. All rights reserved.
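
    The elemental side of the model comparison mentioned above can be illustrated with a minimal Rescorla-Wagner simulation. This is an illustrative sketch, not the authors' simulation: cue names, the learning rate, and the trial schedule are arbitrary. With a reinforced compound AX+ alternating with a non-reinforced BX-, discrimination shows up as the difference between the strengths of the distinctive elements A and B, while the shared element X sits between them.

```python
def rescorla_wagner(trials, alpha=0.3):
    # Elemental associative learning: every cue present on a trial is
    # updated by the shared prediction error (Rescorla-Wagner rule).
    # trials: sequence of (tuple_of_present_cues, outcome) pairs.
    V = {}
    for cues, outcome in trials:
        error = outcome - sum(V.get(c, 0.0) for c in cues)
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * error
    return V
```

    After alternating AX+ and BX- training the model predicts stronger responding to AX than to BX, with the common element X acquiring intermediate strength.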

  12. Emergence of transformation-tolerant representations of visual objects in rat lateral extrastriate cortex

    PubMed Central

    Tafazoli, Sina; Safaai, Houman; De Franceschi, Gioia; Rosselli, Federica Bianca; Vanzella, Walter; Riggi, Margherita; Buffolo, Federica; Panzeri, Stefano; Zoccolan, Davide

    2017-01-01

    Rodents are emerging as increasingly popular models of visual functions. Yet, evidence that rodent visual cortex is capable of advanced visual processing, such as object recognition, is limited. Here we investigate how neurons located along the progression of extrastriate areas that, in the rat brain, run laterally to primary visual cortex, encode object information. We found a progressive functional specialization of neural responses along these areas, with: (1) a sharp reduction of the amount of low-level, energy-related visual information encoded by neuronal firing; and (2) a substantial increase in the ability of both single neurons and neuronal populations to support discrimination of visual objects under identity-preserving transformations (e.g., position and size changes). These findings strongly argue for the existence of a rat object-processing pathway, and point to rodents as promising models to dissect the neuronal circuitry underlying transformation-tolerant recognition of visual objects. DOI: http://dx.doi.org/10.7554/eLife.22794.001 PMID:28395730

  13. Visual discrimination predicts naming and semantic association accuracy in Alzheimer disease.

    PubMed

    Harnish, Stacy M; Neils-Strunjas, Jean; Eliassen, James; Reilly, Jamie; Meinzer, Marcus; Clark, John Greer; Joseph, Jane

    2010-12-01

    Language impairment is a common symptom of Alzheimer disease (AD), and is thought to be related to semantic processing. This study examines the contribution of another process, namely visual perception, to measures of confrontation naming and semantic association abilities in persons with probable AD. Twenty individuals with probable mild-moderate Alzheimer disease and 20 age-matched controls completed a battery of neuropsychologic measures assessing visual perception, naming, and semantic association ability. Visual discrimination tasks that varied in the degree to which they likely accessed stored structural representations were used to gauge whether structural processing deficits could account for deficits in naming and in semantic association in AD. Visual discrimination abilities of nameable objects in AD strongly predicted performance on both picture naming and semantic association ability, but lacked the same predictive value for controls. Although impaired, performance on visual discrimination tests of abstract shapes and novel faces showed no significant relationship with picture naming and semantic association. These results provide additional evidence that structural processing deficits exist in AD, and may contribute to object recognition and naming deficits. Our findings suggest that there is a common deficit in discrimination of pictures using nameable objects, picture naming, and semantic association of pictures in AD. Disturbances in structural processing of pictured items may be associated with lexical-semantic impairment in AD, owing to degraded internal storage of structural knowledge.

  14. Seeing the mean: ensemble coding for sets of faces.

    PubMed

    Haberman, Jason; Whitney, David

    2009-06-01

    We frequently encounter groups of similar objects in our visual environment: a bed of flowers, a basket of oranges, a crowd of people. How does the visual system process such redundancy? Research shows that rather than code every element in a texture, the visual system favors a summary statistical representation of all the elements. The authors demonstrate that although it may facilitate texture perception, ensemble coding also occurs for faces, a level of processing well beyond that of textures. Observers viewed sets of faces varying in emotionality (e.g., happy to sad) and assessed the mean emotion of each set. Although observers retained little information about the individual set members, they had a remarkably precise representation of the mean emotion. Observers continued to discriminate the mean emotion accurately even when they viewed sets of 16 faces for 500 ms or less. Modeling revealed that perceiving the average facial expression in groups of faces was not due to noisy representation or noisy discrimination. These findings support the hypothesis that ensemble coding occurs extremely fast at multiple levels of visual analysis. (c) 2009 APA, all rights reserved.

  15. THE ROLE OF THE HIPPOCAMPUS IN OBJECT DISCRIMINATION BASED ON VISUAL FEATURES.

    PubMed

    Levcik, David; Nekovarova, Tereza; Antosova, Eliska; Stuchlik, Ales; Klement, Daniel

    2018-06-07

    The role of the rodent hippocampus has been intensively studied in different cognitive tasks. However, its role in discrimination of objects remains controversial due to conflicting findings. We tested whether the number and type of features available for the identification of objects might affect the strategy (hippocampal-independent vs. hippocampal-dependent) that rats adopt to solve object discrimination tasks. We trained rats to discriminate 2D visual objects presented on a computer screen. The objects were defined either by their shape only or by multiple features (a combination of filling pattern and brightness in addition to the shape). Our data showed that objects displayed as simple geometric shapes are not discriminated by trained rats after their hippocampi had been bilaterally inactivated by the GABA-A agonist muscimol. On the other hand, objects containing a specific combination of non-geometric features in addition to the shape are discriminated even without the hippocampus. Our results suggest that the involvement of the hippocampus in visual object discrimination depends on the abundance of the object's features. Copyright © 2018. Published by Elsevier Inc.

  16. The visual discrimination of negative facial expressions by younger and older adults.

    PubMed

    Mienaltowski, Andrew; Johnson, Ellen R; Wittman, Rebecca; Wilson, Anne-Taylor; Sturycz, Cassandra; Norman, J Farley

    2013-04-05

    Previous research has demonstrated that older adults are not as accurate as younger adults at perceiving negative emotions in facial expressions. These studies rely on emotion recognition tasks that involve choosing between many alternatives, creating the possibility that age differences emerge for cognitive rather than perceptual reasons. In the present study, an emotion discrimination task was used to investigate younger and older adults' ability to visually discriminate between negative emotional facial expressions (anger, sadness, fear, and disgust) at low (40%) and high (80%) expressive intensity. Participants completed trials blocked by pairs of emotions. Discrimination ability was quantified from the participants' responses using signal detection measures. In general, the results indicated that older adults had more difficulty discriminating between low intensity expressions of negative emotions than did younger adults. However, younger and older adults did not differ when discriminating between anger and sadness. These findings demonstrate that age differences in visual emotion discrimination emerge when signal detection measures are used but that these differences are not uniform and occur only in specific contexts.
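
    Discrimination ability in designs like the one above is typically summarized as d′, the difference between the z-transformed hit and false-alarm rates. A minimal sketch follows; the log-linear 0.5 correction is one common convention for avoiding infinite z-scores at rates of 0 or 1, not necessarily the one the authors used.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).
    # Adding 0.5 to each cell (log-linear correction) keeps the rates
    # strictly inside (0, 1) so the inverse normal CDF stays finite.
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)
```

    An observer at chance (equal hit and false-alarm rates) gets d′ = 0; more accurate discrimination pushes d′ upward.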

  17. Serial and parallel attentive visual searches: evidence from cumulative distribution functions of response times.

    PubMed

    Sung, Kyongje

    2008-12-01

    Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the results suggested parallel rather than serial processing, even though the tasks produced significant set-size effects. Serial processing was produced only in a condition with a difficult discrimination and a very large set-size effect. The results support C. Bundesen's (1990) claim that an extreme set-size effect leads to serial processing. Implications for parallel models of visual selection are discussed.

  18. Object localization, discrimination, and grasping with the optic nerve visual prosthesis.

    PubMed

    Duret, Florence; Brelén, Måten E; Lambert, Valerie; Gérard, Benoît; Delbeke, Jean; Veraart, Claude

    2006-01-01

    This study involved a volunteer completely blind from retinitis pigmentosa who had previously been implanted with an optic nerve visual prosthesis. The aim of this two-year study was to train the volunteer to localize a given object in nine different positions, to discriminate the object within a choice of six, and then to grasp it. In a closed-loop protocol including a head-worn video camera, the nerve was stimulated whenever a part of the processed image of the object being scrutinized matched the center of an elicitable phosphene. The accessible visual field included 109 phosphenes in a 14 degrees x 41 degrees area. Results showed that training was required to succeed in the localization and discrimination tasks, but practically no training was required for grasping the object. The volunteer was able to successfully complete all tasks after training. The volunteer systematically performed several left-right and bottom-up scanning movements during the discrimination task. Discrimination strategies included stimulation phases and no-stimulation phases of roughly similar duration. This study provides a step towards the practical use of the optic nerve visual prosthesis in current daily life.

  19. Display size effects in visual search: analyses of reaction time distributions as mixtures.

    PubMed

    Reynolds, Ann; Miller, Jeff

    2009-05-01

    In a reanalysis of data from Cousineau and Shiffrin (2004) and two new visual search experiments, we used a likelihood ratio test to examine the full distributions of reaction time (RT) for evidence that the display size effect is a mixture-type effect that occurs on only a proportion of trials, leaving RT in the remaining trials unaffected, as is predicted by serial self-terminating search models. Experiment 1 was a reanalysis of Cousineau and Shiffrin's data, for which a mixture effect had previously been established by a bimodal distribution of RTs, and the results confirmed that the likelihood ratio test could also detect this mixture. Experiment 2 applied the likelihood ratio test within a more standard visual search task with a relatively easy target/distractor discrimination, and Experiment 3 applied it within a target identification search task within the same types of stimuli. Neither of these experiments provided any evidence for the mixture-type display size effect predicted by serial self-terminating search models. Overall, these results suggest that serial self-terminating search models may generally be applicable only with relatively difficult target/distractor discriminations, and then only for some participants. In addition, they further illustrate the utility of analysing full RT distributions in addition to mean RT.

  20. Quality metrics for sensor images

    NASA Technical Reports Server (NTRS)

    Ahumada, AL

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftoning optimizing methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. 
Most image quality models are designed for static imagery. Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.
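
    The core operation described above, mapping an image pair (A, B) to a predicted probability that an observer reports a difference, can be sketched in a toy single-channel form. This is an illustrative stand-in, not the Ames model: it skips the front-end filtering and masking stages, and the Weibull psychometric parameters are arbitrary assumptions.

```python
import numpy as np

def discriminability(a, b, beta=4.0, alpha=1.0):
    # Toy single-channel discrimination metric: pool the absolute
    # difference between the two images with a Minkowski exponent beta,
    # then map the pooled distance d to a probability of reporting
    # "different" via a Weibull psychometric function. alpha is the
    # distance yielding ~63% detection (both values are assumptions).
    diff = np.abs(a.astype(float) - b.astype(float))
    d = (diff ** beta).mean() ** (1.0 / beta)
    return 1.0 - np.exp(-(d / alpha) ** beta)
```

    Identical images yield probability 0; as the pooled difference grows past alpha, the predicted probability saturates toward 1, which is how such a metric doubles as an image quality score.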

  1. Effective 3-D shape discrimination survives retinal blur.

    PubMed

    Norman, J Farley; Beers, Amanda M; Holmin, Jessica S; Boswell, Alexandria M

    2010-08-01

    A single experiment evaluated observers' ability to visually discriminate 3-D object shape, where the 3-D structure was defined by motion, texture, Lambertian shading, and occluding contours. The observers' vision was degraded to varying degrees by blurring the experimental stimuli, using 2.0-, 2.5-, and 3.0-diopter convex lenses. The lenses reduced the observers' acuity from -0.091 LogMAR (in the no-blur conditions) to 0.924 LogMAR (in the conditions with the most blur; 3.0-diopter lenses). This visual degradation, although producing severe reductions in visual acuity, had only small (but significant) effects on the observers' ability to discriminate 3-D shape. The observers' shape discrimination performance was facilitated by the objects' rotation in depth, regardless of the presence or absence of blur. Our results indicate that accurate global shape discrimination survives a considerable amount of retinal blur.

  2. High visual acuity revealed in dogs

    PubMed Central

    Lind, Olle; Milton, Ida; Andersson, Elin; Jensen, Per

    2017-01-01

    Humans have selectively bred and used dogs over a period of thousands of years, and more recently the dog has become an important model animal for studies in ethology, cognition and genetics. These broad interests warrant careful descriptions of the senses of dogs. Still, little is known about dog vision, especially what dogs can discriminate in different light conditions. We trained and tested whippets, pugs, and a Shetland sheepdog in a two-choice discrimination set-up and show that dogs can discriminate patterns with spatial frequencies between 5.5 and 19.5 cycles per degree (cpd) in the bright light condition (43 cd m⁻²). This is a higher spatial resolution than has been previously reported, although the individual variation in our tests was large. Humans tested in the same set-up reached acuities corresponding to earlier studies, ranging between 32.1 and 44.2 cpd. In the dim light condition (0.0087 cd m⁻²) the acuity of dogs ranged between 1.8 and 3.5 cpd while in humans, between 5.9 and 9.9 cpd. Thus, humans can visually discriminate objects from roughly three times the distance that dogs can, in both bright and dim light. PMID:29206864

  3. High visual acuity revealed in dogs.

    PubMed

    Lind, Olle; Milton, Ida; Andersson, Elin; Jensen, Per; Roth, Lina S V

    2017-01-01

    Humans have selectively bred and used dogs over a period of thousands of years, and more recently the dog has become an important model animal for studies in ethology, cognition and genetics. These broad interests warrant careful descriptions of the senses of dogs. Still, little is known about dog vision, especially what dogs can discriminate in different light conditions. We trained and tested whippets, pugs, and a Shetland sheepdog in a two-choice discrimination set-up and show that dogs can discriminate patterns with spatial frequencies between 5.5 and 19.5 cycles per degree (cpd) in the bright light condition (43 cd m⁻²). This is a higher spatial resolution than has been previously reported, although the individual variation in our tests was large. Humans tested in the same set-up reached acuities corresponding to earlier studies, ranging between 32.1 and 44.2 cpd. In the dim light condition (0.0087 cd m⁻²) the acuity of dogs ranged between 1.8 and 3.5 cpd while in humans, between 5.9 and 9.9 cpd. Thus, humans can visually discriminate objects from roughly three times the distance that dogs can, in both bright and dim light.

  4. Improving Dorsal Stream Function in Dyslexics by Training Figure/Ground Motion Discrimination Improves Attention, Reading Fluency, and Working Memory.

    PubMed

    Lawton, Teri

    2016-01-01

    There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue, three interventions were compared in 58 dyslexics in second grade (7 years old on average): two targeting the temporal dynamics (timing) of either the auditory or visual pathways, and a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency (both speed and comprehension), phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used not only to detect dyslexia early, but also to treat it successfully, so that reading problems do not prevent children from readily learning.

  5. Recognition of visual stimuli and memory for spatial context in schizophrenic patients and healthy volunteers.

    PubMed

    Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh

    2004-11-01

    Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that the recognition deficit in patients was similar for verbal and visual material. In both schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women similarly benefited from color to reduce spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regard to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.

  6. Object detection in natural backgrounds predicted by discrimination performance and models

    NASA Technical Reports Server (NTRS)

    Rohaly, A. M.; Ahumada, A. J. Jr; Watson, A. B.

    1997-01-01

    Many models of visual performance predict image discriminability, the visibility of the difference between a pair of images. We compared the ability of three image discrimination models to predict the detectability of objects embedded in natural backgrounds. The three models were: a multiple channel Cortex transform model with within-channel masking; a single channel contrast sensitivity filter model; and a digital image difference metric. Each model used a Minkowski distance metric (generalized vector magnitude) to summate absolute differences between the background and object-plus-background images. For each model, this summation was implemented with three different exponents: 2, 4 and infinity. In addition, each combination of model and summation exponent was implemented with and without a simple contrast gain factor. The model outputs were compared to measures of object detectability obtained from 19 observers. Among the models without the contrast gain factor, the multiple channel model with a summation exponent of 4 performed best, predicting the pattern of observer d′s with an RMS error of 2.3 dB. The contrast gain factor improved the predictions of all three models for all three exponents. With the factor, the best exponent was 4 for all three models, and their prediction errors were near 1 dB. These results demonstrate that image discrimination models can predict the relative detectability of objects in natural scenes.
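
    The summation stage shared by all three models above, a Minkowski pooling of the difference image with an optional contrast gain factor, can be sketched as follows. Only the pooling stage is shown; the filtering front ends (Cortex transform, CSF filter) are omitted, and implementing the contrast gain as division by mean background luminance is an assumption for illustration.

```python
import numpy as np

def minkowski_pool(diff, exponent):
    # Generalized vector magnitude over a difference image.
    # exponent = inf reduces to the maximum absolute difference.
    diff = np.abs(diff).ravel()
    if np.isinf(exponent):
        return diff.max()
    return (diff ** exponent).sum() ** (1.0 / exponent)

def object_detectability(background, scene, exponent=4.0, contrast_gain=True):
    # Pool the background vs. object-plus-background difference. With
    # contrast_gain, differences are scaled by mean background luminance,
    # i.e. converted to contrast before pooling.
    diff = scene.astype(float) - background.astype(float)
    if contrast_gain:
        diff = diff / background.mean()
    return minkowski_pool(diff, exponent)
```

    Exponent 2 gives the familiar Euclidean norm, 4 weights large local differences more heavily, and infinity reduces to peak deviation, which is why the choice of exponent changes which image regions dominate the prediction.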

  7. Different target-discrimination times can be followed by the same saccade-initiation timing in different stimulus conditions during visual searches

    PubMed Central

    Tanaka, Tomohiro; Nishida, Satoshi

    2015-01-01

    The neuronal processes that underlie visual searches can be divided into two stages: target discrimination and saccade preparation/generation. This predicts that the length of time of the prediscrimination stage varies according to the search difficulty across different stimulus conditions, whereas the length of the latter postdiscrimination stage is stimulus invariant. However, recent studies have suggested that the length of the postdiscrimination interval changes with different stimulus conditions. To address whether and how the visual stimulus affects determination of the postdiscrimination interval, we recorded single-neuron activity in the lateral intraparietal area (LIP) when monkeys (Macaca fuscata) performed a color-singleton search involving four stimulus conditions that differed regarding luminance (Bright vs. Dim) and target-distractor color similarity (Easy vs. Difficult). We specifically focused on comparing activities between the Bright-Difficult and Dim-Easy conditions, in which the visual stimuli were considerably different, but the mean reaction times were indistinguishable. This allowed us to examine the neuronal activity when the difference in the degree of search speed between different stimulus conditions was minimal. We found that not only prediscrimination but also postdiscrimination intervals varied across stimulus conditions: the postdiscrimination interval was longer in the Dim-Easy condition than in the Bright-Difficult condition. Further analysis revealed that the postdiscrimination interval might vary with stimulus luminance. A computer simulation using an accumulation-to-threshold model suggested that the luminance-related difference in visual response strength at discrimination time could be the cause of different postdiscrimination intervals. PMID:25995344
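
    The accumulation-to-threshold account in the last sentence can be sketched with a minimal simulator. This is an illustrative assumption in the spirit of the model, not the authors' simulation; the drift, threshold, and noise values are arbitrary. A dimmer stimulus yields a weaker visual response at discrimination time, hence a lower drift rate and a longer postdiscrimination interval before the saccade threshold is crossed.

```python
import random

def saccade_latency(drift, threshold=1.0, noise=0.1, dt=0.001, rng=None):
    # Accumulate noisy evidence at mean rate `drift` (per second) until
    # it crosses `threshold`; return the crossing time in seconds.
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return t
```

    With a higher drift rate for a bright stimulus than a dim one, the simulated threshold-crossing time is shorter, reproducing the luminance-related difference in postdiscrimination intervals described above.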

  8. Seeing visual word forms: spatial summation, eccentricity and spatial configuration.

    PubMed

    Kao, Chien-Hui; Chen, Chien-Chung

    2012-06-01

    We investigated observers' performance in detecting and discriminating visual word forms as a function of target size and retinal eccentricity. The contrast threshold of visual words was measured with a spatial two-alternative forced-choice paradigm and a PSI adaptive method. The observers were to indicate which of two sides contained a stimulus in the detection task, and which contained a real character (as opposed to a pseudo- or non-character) in the discrimination task. When the target size was sufficiently small, the detection threshold of a character decreased as its size increased, with a slope of -1/2 on log-log coordinates, up to a critical size at all eccentricities and for all stimulus types. The discrimination threshold decreased with target size with a slope of -1 up to a critical size that was dependent on stimulus type and eccentricity. Beyond that size, the threshold decreased with a slope of -1/2 on log-log coordinates before leveling out. The data was well fit by a spatial summation model that contains local receptive fields (RFs) and a summation across these filters within an attention window. Our result implies that detection is mediated by local RFs smaller than any tested stimuli and thus detection performance is dominated by summation across receptive fields. On the other hand, discrimination is dominated by a summation within a local RF in the fovea but a cross RF summation in the periphery. Copyright © 2012 Elsevier Ltd. All rights reserved.
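
    The diagnostic slopes above (-1 for summation within a local receptive field, -1/2 for summation across fields) are estimated by regressing log threshold on log target size. A minimal sketch of that fit, under the assumption of a simple unweighted least-squares line:

```python
import math

def loglog_slope(sizes, thresholds):
    # Least-squares slope of log10(threshold) vs. log10(size), used to
    # diagnose the summation regime: about -1 indicates full summation
    # within a receptive field, about -1/2 summation across fields.
    xs = [math.log10(s) for s in sizes]
    ys = [math.log10(t) for t in thresholds]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

    Threshold data falling as 1/size yields a slope of -1, while data falling as 1/sqrt(size) yields -1/2, matching the two regimes reported above.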

  9. Comparative effect of lens care solutions on blink rate, ocular discomfort and visual performance.

    PubMed

    Yang, Shun-nan; Tai, Yu-chi; Sheedy, James E; Kinoshita, Beth; Lampa, Matthew; Kern, Jami R

    2012-09-01

    To help maintain clear vision and ocular surface health, eye blinks occur to distribute natural tears over the ocular surface, especially the corneal surface. Contact lens wearers may suffer from poor vision and dry eye symptoms due to difficulty in lens surface wetting and reduced tear production. Sustained viewing of a computer screen reduces eye blinks and exacerbates such difficulties. The present study evaluated the wetting effect of lens care solutions (LCSs) on blink rate, dry eye symptoms, and vision performance. Sixty-five adult habitual soft contact lens wearers were recruited to adapt to different LCSs (Opti-Free, ReNu, and ClearCare) in a cross-over design. Blink rate in pictorial viewing and reading (measured with an eyetracker), dry eye symptoms (measured with the Ocular Surface Disease Index questionnaire), and visual discrimination (identifying tumbling E) immediately before and after eye blinks were measured after 2 weeks of adaptation to each LCS. Repeated-measures ANOVA and mixed-model ANCOVA were conducted to evaluate effects of LCS on blink rate, symptom score, and discrimination accuracy. Opti-Free resulted in lower dry eye symptoms (p = 0.018) than ClearCare, and lower spontaneous blink rate (measured in picture viewing) than ClearCare (p = 0.014) and ReNu (p = 0.041). In reading, blink rate was higher for ClearCare compared to ReNu (p = 0.026) and control (p = 0.024). Visual discrimination time was longer for the control (daily disposable lens) than for Opti-Free (p = 0.007), ReNu (p = 0.009), and ClearCare (p = 0.013) immediately before the blink. LCSs differently affected blink rate, subjective dry eye symptoms, and visual discrimination speed. Those with wetting agents led to significantly fewer eye blinks while affording better ocular comfort for contact lens wearers, compared to those without. LCSs with wetting agents also resulted in better visual performance compared to wearing daily disposable contact lenses. These benefits are presumably due to improved tear film quality. © 2012 The College of Optometrists.

  10. Both hand position and movement direction modulate visual attention

    PubMed Central

    Festman, Yariv; Adam, Jos J.; Pratt, Jay; Fischer, Martin H.

    2013-01-01

    The current study explored effects of continuous hand motion on the allocation of visual attention. A concurrent paradigm was used to combine visually concealed continuous hand movements with an attentionally demanding letter discrimination task. The letter probe appeared contingent upon the moving right hand passing through one of six positions. Discrimination responses were then collected via a keyboard press with the static left hand. Both the right hand's position and its movement direction systematically contributed to participants' visual sensitivity. Discrimination performance increased substantially when the right hand was distant from, but moving toward the visual probe location (replicating the far-hand effect, Festman et al., 2013). However, this effect disappeared when the probe appeared close to the static left hand, supporting the view that static and dynamic features of both hands combine in modulating pragmatic maps of attention. PMID:24098288

  11. Effects of Hand Proximity and Movement Direction in Spatial and Temporal Gap Discrimination.

    PubMed

    Wiemers, Michael; Fischer, Martin H

    2016-01-01

    Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward compared to away from the stimulus (direction effect). The "modulated visual pathways" hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate and not near hand proximity. In spatial gap discrimination, a direction effect without hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.

  12. Visual adaptation enhances action sound discrimination.

    PubMed

    Barraclough, Nick E; Page, Steve A; Keefe, Bruce D

    2017-01-01

    Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.

  13. Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning.

    PubMed

    Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro

    2014-01-01

Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1-5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non-invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat.

  14. Treatment of amblyopia in the adult: insights from a new rodent model of visual perceptual learning

    PubMed Central

    Bonaccorsi, Joyce; Berardi, Nicoletta; Sale, Alessandro

    2014-01-01

Amblyopia is the most common form of impairment of visual function affecting one eye, with a prevalence of about 1–5% of the total world population. Amblyopia usually derives from conditions of early functional imbalance between the two eyes, owing to anisometropia, strabismus, or congenital cataract, and results in a pronounced reduction of visual acuity and severe deficits in contrast sensitivity and stereopsis. It is widely accepted that, due to a lack of sufficient plasticity in the adult brain, amblyopia becomes untreatable after the closure of the critical period in the primary visual cortex. However, recent results obtained both in animal models and in clinical trials have challenged this view, unmasking a previously unsuspected potential for promoting recovery even in adulthood. In this context, non-invasive procedures based on visual perceptual learning, i.e., the improvement in visual performance on a variety of simple visual tasks following practice, emerge as particularly promising to rescue discrimination abilities in adult amblyopic subjects. This review will survey recent work regarding the impact of visual perceptual learning on amblyopia, with a special focus on a new experimental model of perceptual learning in the amblyopic rat. PMID:25076874

  15. Understanding Deep Representations Learned in Modeling Users Likes.

    PubMed

    Guntuku, Sharath Chandra; Zhou, Joey Tianyi; Roy, Sujoy; Lin, Weisi; Tsang, Ivor W

    2016-08-01

Automatically understanding and discriminating different users' liking for an image is a challenging problem. This is because the relationship between image features (even semantic ones extracted by existing tools, viz., faces, objects, and so on) and users' likes is non-linear, influenced by several subtle factors. This paper presents a deep bi-modal knowledge representation of images based on their visual content and associated tags (text). A mapping step between the different levels of visual and textual representations allows for the transfer of semantic knowledge between the two modalities. Feature selection is applied before learning deep representation to identify the important features for a user to like an image. The proposed representation is shown to be effective in discriminating users based on images they like and also in recommending images that a given user likes, outperforming the state-of-the-art feature representations by ∼15%-20%. Beyond this test-set performance, an attempt is made to qualitatively understand the representations learned by the deep architecture used to model user likes.

  16. Decontaminate feature for tracking: adaptive tracking via evolutionary feature subset

    NASA Astrophysics Data System (ADS)

    Liu, Qiaoyuan; Wang, Yuru; Yin, Minghao; Ren, Jinchang; Li, Ruizhi

    2017-11-01

Although various visual tracking algorithms have been proposed in the last 2-3 decades, effective tracking under fast motion, deformation, occlusion, etc. remains a challenging problem. Under complex tracking conditions, most tracking models are not discriminative and adaptive enough. When the combined feature vectors are inputted to the visual models, this may lead to redundancy causing low efficiency and ambiguity causing poor performance. An effective tracking algorithm is proposed to decontaminate features for each video sequence adaptively, where the visual modeling is treated as an optimization problem from the perspective of evolution. Each feature vector is treated as a biological individual and then decontaminated via classical evolutionary algorithms. With the optimized subsets of features, the "curse of dimensionality" has been avoided while the accuracy of the visual model has been improved. The proposed algorithm has been tested on several publicly available datasets with various tracking challenges and benchmarked with a number of state-of-the-art approaches. The comprehensive experiments have demonstrated the efficacy of the proposed methodology.
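The abstract's core idea, treating each candidate feature subset as an evolving individual, can be sketched with a toy genetic algorithm (the fitness function and feature roles below are hypothetical stand-ins, not the paper's actual tracking objective):

```python
import random

def evolve_feature_subset(n_features, fitness, pop_size=20, generations=30, seed=0):
    """Select a feature subset with a simple genetic algorithm.

    Each individual is a bit-mask over the features; `fitness` maps a
    mask (tuple of 0/1) to a score to maximize.
    """
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(n_features))
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]           # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)     # one-point crossover
            child = list(a[:cut] + b[cut:])
            i = rng.randrange(n_features)          # single point mutation
            child[i] ^= 1
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: features 0 and 2 are "discriminative"; the rest only add
# redundancy/ambiguity, so including them is penalized.
def toy_fitness(mask):
    informative = {0, 2}
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in informative)
    noise = sum(1 for i, bit in enumerate(mask) if bit and i not in informative)
    return 2 * hits - noise

best = evolve_feature_subset(6, toy_fitness)
```

In the actual tracker, the fitness would score how discriminative and compact a feature subset is for the current video sequence rather than this toy objective.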

  17. Performance, physiological, and oculometer evaluation of VTOL landing displays

    NASA Technical Reports Server (NTRS)

    North, R. A.; Stackhouse, S. P.; Graffunder, K.

    1979-01-01

A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Physiological, visual response, and conventional flight performance measures were recorded for landing approaches performed in the NASA Visual Motion Simulator (VMS). Three displays (two computer graphic and a conventional flight director), three crosswind amplitudes, and two motion base conditions (fixed vs. moving base) were tested in a factorial design. Multivariate discriminant functions were formed from flight performance and/or visual response variables. The flight performance discriminant showed maximum differentiation between crosswind conditions. The visual response discriminant maximized differences between fixed- and moving-base conditions and between experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus represent higher workload levels.

  18. Figure-ground discrimination in the avian brain: the nucleus rotundus and its inhibitory complex.

    PubMed

    Acerbo, Martin J; Lazareva, Olga F; McInnerney, John; Leiker, Emily; Wasserman, Edward A; Poremba, Amy

    2012-10-01

    In primates, neurons sensitive to figure-ground status are located in striate cortex (area V1) and extrastriate cortex (area V2). Although much is known about the anatomical structure and connectivity of the avian visual pathway, the functional organization of the avian brain remains largely unexplored. To pinpoint the areas associated with figure-ground segregation in the avian brain, we used a radioactively labeled glucose analog to compare differences in glucose uptake after figure-ground, color, and shape discriminations. We also included a control group that received food on a variable-interval schedule, but was not required to learn a visual discrimination. Although the discrimination task depended on group assignment, the stimulus displays were identical for all three experimental groups, ensuring that all animals were exposed to the same visual input. Our analysis concentrated on the primary thalamic nucleus associated with visual processing, the nucleus rotundus (Rt), and two nuclei providing regulatory feedback, the pretectum (PT) and the nucleus subpretectalis/interstitio-pretecto-subpretectalis complex (SP/IPS). We found that figure-ground discrimination was associated with strong and nonlateralized activity of Rt and SP/IPS, whereas color discrimination produced strong and lateralized activation in Rt alone. Shape discrimination was associated with lower activity of Rt than in the control group. Taken together, our results suggest that figure-ground discrimination is associated with Rt and that SP/IPS may be a main source of inhibitory control. Thus, figure-ground segregation in the avian brain may occur earlier than in the primate brain. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Figure-ground discrimination in the avian brain: The nucleus rotundus and its inhibitory complex

    PubMed Central

    Acerbo, Martin J.; Lazareva, Olga F.; McInnerney, John; Leiker, Emily; Wasserman, Edward A.; Poremba, Amy

    2012-01-01

    In primates, neurons sensitive to figure-ground status are located in striate cortex (area V1) and extrastriate cortex (area V2). Although much is known about the anatomical structure and connectivity of the avian visual pathway, the functional organization of the avian brain remains largely unexplored. To pinpoint the areas associated with figure-ground segregation in the avian brain, we used a radioactively labeled glucose analog to compare differences in glucose uptake after figure-ground, color, and shape discriminations. We also included a control group that received food on a variable-interval schedule, but was not required to learn a visual discrimination. Although the discrimination task depended on group assignment, the stimulus displays were identical for all three experimental groups, ensuring that all animals were exposed to the same visual input. Our analysis concentrated on the primary thalamic nucleus associated with visual processing, the nucleus rotundus (Rt), and two nuclei providing regulatory feedback, the pretectum (PT) and the nucleus subpretectalis/interstitio-pretecto-subpretectalis complex (SP/IPS). We found that figure-ground discrimination was associated with strong and nonlateralized activity of Rt and SP/IPS, whereas color discrimination produced strong and lateralized activation in Rt alone. Shape discrimination was associated with lower activity of Rt than in the control group. Taken together, our results suggest that figure-ground discrimination is associated with Rt and that SP/IPS may be a main source of inhibitory control. Thus, figure-ground segregation in the avian brain may occur earlier than in the primate brain. PMID:22917681

  20. Individuals with 22q11.2 Deletion Syndrome Are Impaired at Explicit, but Not Implicit, Discrimination of Local Forms Embedded in Global Structures

    ERIC Educational Resources Information Center

    Giersch, Anne; Glaser, Bronwyn; Pasca, Catherine; Chabloz, Mélanie; Debbané, Martin; Eliez, Stephan

    2014-01-01

    Individuals with 22q11.2 deletion syndrome (22q11.2DS) are impaired at exploring visual information in space; however, not much is known about visual form discrimination in the syndrome. Thirty-five individuals with 22q11.2DS and 41 controls completed a form discrimination task with global forms made up of local elements. Affected individuals…

  1. Visual Aversive Learning Compromises Sensory Discrimination.

    PubMed

    Shalev, Lee; Paz, Rony; Avidan, Galia

    2018-03-14

    Aversive learning is thought to modulate perceptual thresholds, which can lead to overgeneralization. However, it remains undetermined whether this modulation is domain specific or a general effect. Moreover, despite the unique role of the visual modality in human perception, it is unclear whether this aspect of aversive learning exists in this modality. The current study was designed to examine the effect of visual aversive outcomes on the perception of basic visual and auditory features. We tested the ability of healthy participants, both males and females, to discriminate between neutral stimuli, before and after visual learning. In each experiment, neutral stimuli were associated with aversive images in an experimental group and with neutral images in a control group. Participants demonstrated a deterioration in discrimination (higher discrimination thresholds) only after aversive learning. This deterioration was measured for both auditory (tone frequency) and visual (orientation and contrast) features. The effect was replicated in five different experiments and lasted for at least 24 h. fMRI neural responses and pupil size were also measured during learning. We showed an increase in neural activations in the anterior cingulate cortex, insula, and amygdala during aversive compared with neutral learning. Interestingly, the early visual cortex showed increased brain activity during aversive compared with neutral context trials, with identical visual information. Our findings imply the existence of a central multimodal mechanism, which modulates early perceptual properties, following exposure to negative situations. Such a mechanism could contribute to abnormal responses that underlie anxiety states, even in new and safe environments. SIGNIFICANCE STATEMENT Using a visual aversive-learning paradigm, we found deteriorated discrimination abilities for visual and auditory stimuli that were associated with visual aversive stimuli. 
We showed increased neural activations in the anterior cingulate cortex, insula, and amygdala during aversive learning, compared with neutral learning. Importantly, similar findings were also evident in the early visual cortex during trials with aversive/neutral context, but with identical visual information. The demonstration of this phenomenon in the visual modality is important, as it provides support to the notion that aversive learning can influence perception via a central mechanism, independent of input modality. Given the dominance of the visual system in human perception, our findings hold relevance to daily life, as well as imply a potential etiology for anxiety disorders. Copyright © 2018 the authors 0270-6474/18/382766-14$15.00/0.

  2. Recognition of tennis serve performed by a digital player: comparison among polygon, shadow, and stick-figure models.

    PubMed

    Ida, Hirofumi; Fukuhara, Kazunobu; Ishii, Motonobu

    2012-01-01

The objective of this study was to assess the cognitive effect of human character models on the observer's ability to extract relevant information from computer graphics animation of tennis serve motions. Three digital human models (polygon, shadow, and stick-figure) were used to display the computationally simulated serve motions, which were perturbed at the racket-arm by modulating the speed (slower or faster) of one of the joint rotations (wrist, elbow, or shoulder). Twenty-one experienced tennis players and 21 novices made discrimination responses about the modulated joint and also specified the perceived swing speeds on a visual analogue scale. The result showed that the discrimination accuracies of the experienced players were both above and below chance level depending on the modulated joint, whereas those of the novices mostly remained at chance or guessing levels. As far as the experienced players were concerned, the polygon model decreased the discrimination accuracy as compared with the stick-figure model. This suggests that the complicated pictorial information may have a distracting effect on the recognition of the observed action. On the other hand, the perceived swing speed of the perturbed motion relative to the control was lower for the stick-figure model than for the polygon model regardless of the skill level. This result suggests that the simplified visual information can bias the perception of motion speed toward slower values. It was also shown that increasing the joint rotation speed increased the perceived swing speed, although the resulting racket velocity had little correlation with this speed sensation. Collectively, the observer's recognition of the motion pattern and perception of the motion speed can be affected by the pictorial information of the human model as well as by the perturbation processing applied to the observed motion.

  3. Multi-class ERP-based BCI data analysis using a discriminant space self-organizing map.

    PubMed

    Onishi, Akinari; Natsume, Kiyohisa

    2014-01-01

Emotional or non-emotional image stimuli have recently been applied to event-related potential (ERP) based brain-computer interfaces (BCI). Though the classification performance is over 80% in a single trial, discrimination between those ERPs has not been considered. In this research we tried to clarify the discriminability of four-class ERP-based BCI target data elicited by desk, seal, and spider images and letter intensifications. A conventional self-organizing map (SOM) and a newly proposed discriminant space SOM (ds-SOM) were applied, and the discriminabilities were visualized. We also classified all pairs of those ERPs by stepwise linear discriminant analysis (SWLDA) and verified the visualization of discriminabilities. As a result, the ds-SOM showed understandable visualization of the data with a shorter computational time than the traditional SOM. We also confirmed the clear boundary between the letter cluster and the other clusters. The result was coherent with the classification performances by SWLDA. The method might be helpful not only for developing a new BCI paradigm, but also for big data analysis.
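For readers unfamiliar with SOMs, a minimal one-dimensional map illustrates the clustering step the abstract builds on; the ds-SOM's discriminant-space projection is not reproduced here, and the two "ERP" clusters below are synthetic toys:

```python
import math
import random

def train_som(data, n_nodes=4, epochs=50, lr0=0.5, sigma0=2.0, seed=1):
    """Train a tiny one-dimensional self-organizing map (SOM).

    `data` is a list of feature vectors; returns the node weight vectors.
    """
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)  # shrinking neighborhood
        for x in data:
            # best-matching unit = node nearest to the sample
            bmu = min(range(n_nodes),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(nodes[i], x)))
            for i in range(n_nodes):
                # neighborhood function pulls nearby nodes toward the sample
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                nodes[i] = [w + lr * h * (xv - w) for w, xv in zip(nodes[i], x)]
    return nodes

# Two well-separated toy "ERP" clusters in a 2-D feature space.
cluster_a = [[0.0 + 0.1 * k, 0.0] for k in range(5)]
cluster_b = [[1.0 - 0.1 * k, 1.0] for k in range(5)]
nodes = train_som(cluster_a + cluster_b)

def bmu_index(x):
    return min(range(len(nodes)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(nodes[i], x)))
```

After training, samples from the two clusters map to different nodes, which is the kind of cluster boundary the abstract visualizes between letter and image ERPs.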

  4. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-04-15

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
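The "good model" selection step described above can be sketched as a greedy filter that keeps detectors with high quality scores while rejecting ones strongly correlated with detectors already kept (the response vectors, quality scores, and threshold below are illustrative assumptions, not values from the paper):

```python
import math

def pearson(u, v):
    """Pearson correlation between two equal-length response vectors."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv) if su and sv else 0.0

def select_detectors(responses, quality, max_corr=0.8, k=2):
    """Greedily keep up to `k` detectors that are discriminative
    (high `quality`) and weakly correlated with those already kept.

    `responses[i]` is detector i's response vector on validation windows.
    """
    order = sorted(range(len(responses)), key=lambda i: quality[i], reverse=True)
    kept = []
    for i in order:
        if all(abs(pearson(responses[i], responses[j])) < max_corr for j in kept):
            kept.append(i)
        if len(kept) == k:
            break
    return kept

# Toy responses: detectors 0 and 1 are near-duplicates; 2 is complementary.
responses = [
    [0.90, 0.80, 0.10, 0.20],
    [0.85, 0.80, 0.15, 0.20],
    [0.50, 0.10, 0.90, 0.50],
]
quality = [0.95, 0.90, 0.70]
kept = select_detectors(responses, quality)
```

Here detector 1 is dropped despite its high quality because it is almost perfectly correlated with detector 0, matching the abstract's "discriminative and uncorrelated" criterion.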

  5. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  6. Sparse network-based models for patient classification using fMRI

    PubMed Central

    Rosa, Maria J.; Portugal, Liana; Hahn, Tim; Fallgatter, Andreas J.; Garrido, Marta I.; Shawe-Taylor, John; Mourao-Miranda, Janaina

    2015-01-01

Pattern recognition applied to whole-brain neuroimaging data, such as functional Magnetic Resonance Imaging (fMRI), has proved successful at discriminating psychiatric patients from healthy participants. However, predictive patterns obtained from whole-brain voxel-based features are difficult to interpret in terms of the underlying neurobiology. Many psychiatric disorders, such as depression and schizophrenia, are thought to be brain connectivity disorders. Therefore, pattern recognition based on network models might provide deeper insights and potentially more powerful predictions than whole-brain voxel-based approaches. Here, we build a novel sparse network-based discriminative modeling framework, based on Gaussian graphical models and L1-norm regularized linear Support Vector Machines (SVM). In addition, the proposed framework is optimized in terms of both predictive power and reproducibility/stability of the patterns. Our approach aims to provide better pattern interpretation than voxel-based whole-brain approaches by yielding stable brain connectivity patterns that underlie discriminative changes in brain function between the groups. We illustrate our technique by classifying patients with major depressive disorder (MDD) and healthy participants, in two (event- and block-related) fMRI datasets acquired while participants performed a gender discrimination task and an emotional task, respectively, during the visualization of emotionally valent faces. PMID:25463459
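A minimal sketch of the connectivity-feature step: partial correlations between regional time series, obtained from the precision (inverse covariance) matrix, which is the quantity a Gaussian graphical model sparsifies. The toy time series are fabricated, and the paper's regularization and L1-norm SVM classification stages are omitted:

```python
def invert(m):
    """Invert a small square matrix by Gauss-Jordan elimination."""
    n = len(m)
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col and a[r][col]:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [row[n:] for row in a]

def partial_correlations(series):
    """Partial correlations between regions from the precision matrix:
    pc_ij = -P_ij / sqrt(P_ii * P_jj), where P = inverse covariance."""
    n = len(series)          # regions
    t = len(series[0])       # time points
    means = [sum(s) / t for s in series]
    cov = [[sum((series[i][k] - means[i]) * (series[j][k] - means[j])
                for k in range(t)) / (t - 1)
            for j in range(n)] for i in range(n)]
    prec = invert(cov)
    return [[-prec[i][j] / (prec[i][i] * prec[j][j]) ** 0.5 if i != j else 1.0
             for j in range(n)] for i in range(n)]

# Toy time series for three "regions": 0 and 1 co-fluctuate, 2 is unrelated.
region0 = [1.0, 2.0, 3.0, 4.0, 5.0]
region1 = [2.0, 1.5, 3.5, 3.0, 5.0]
region2 = [2.8, 0.8, 0.8, 2.8, 1.8]
pc = partial_correlations([region0, region1, region2])
```

The off-diagonal entries of `pc` are the edge weights of the connectivity graph; a graphical-model fit would additionally shrink weak entries to exactly zero before they are fed to the sparse classifier.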

  7. Improving Dorsal Stream Function in Dyslexics by Training Figure/Ground Motion Discrimination Improves Attention, Reading Fluency, and Working Memory

    PubMed Central

    Lawton, Teri

    2016-01-01

There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue, three interventions were compared in 58 dyslexics in second grade (7 years on average): two targeting the temporal dynamics (timing) of either the auditory or visual pathways, and a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency (both speed and comprehension), phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used not only to detect dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning. PMID:27551263

  8. Do Visually Impaired People Develop Superior Smell Ability?

    PubMed

    Majchrzak, Dorota; Eberhard, Julia; Kalaus, Barbara; Wagner, Karl-Heinz

    2017-10-01

It is well known that visually impaired people perform better in orientation by sound than sighted individuals, but it is not clear whether this enhanced awareness also extends to other senses. Therefore, the aim of this study was to observe whether visually impaired subjects develop superior abilities in olfactory perception to compensate for their lack of vision. We investigated the odor perception of visually impaired individuals aged 7 to 89 years (n = 99; 52 women, 47 men) and compared them with subjects of a control group aged 8 to 82 years (n = 100; 45 women, 55 men) without any visual impairment. The participants were evaluated by the Sniffin' Sticks odor identification and discrimination test. Identification ability was assessed for 16 common odors presented in felt-tip pens. In the odor discrimination task, subjects had to determine which of three pens in 16 triplets had a different odor. The median number of correctly identified odorant pens in both groups was the same, 13 of the offered 16. In the discrimination test, there was also no significant difference observed. Gender did not influence results. Age-related changes were observed in both groups, with olfactory perception decreasing after the age of 51. We could not confirm that visually impaired people were better in smell identification and discrimination ability than sighted individuals.

  9. Colour processing in complex environments: insights from the visual system of bees

    PubMed Central

    Dyer, Adrian G.; Paulk, Angelique C.; Reser, David H.

    2011-01-01

    Colour vision enables animals to detect and discriminate differences in chromatic cues independent of brightness. How the bee visual system manages this task is of interest for understanding information processing in miniaturized systems, as well as the relationship between bee pollinators and flowering plants. Bees can quickly discriminate dissimilar colours, but can also slowly learn to discriminate very similar colours, raising the question as to how the visual system can support this, or whether it is simply a learning and memory operation. We discuss the detailed neuroanatomical layout of the brain, identify probable brain areas for colour processing, and suggest that there may be multiple systems in the bee brain that mediate either coarse or fine colour discrimination ability in a manner dependent upon individual experience. These multiple colour pathways have been identified along both functional and anatomical lines in the bee brain, providing us with some insights into how the brain may operate to support complex colour discrimination behaviours. PMID:21147796

  10. Some distinguishing characteristics of contour and texture phenomena in images

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1992-01-01

    The development of generalized contour/texture discrimination techniques is a central element necessary for machine vision recognition and interpretation of arbitrary images. Here, the visual perception of texture, selected studies of texture analysis in machine vision, and diverse small samples of contour and texture are all used to provide insights into the fundamental characteristics of contour and texture. From these, an experimental discrimination scheme is developed and tested on a battery of natural images. Studies of the visual perception of texture define fine texture as a subclass that is interpreted as shading and is distinct from coarse, figural-similarity textures. Perception also sets the smallest scale for contour/texture discrimination at eight to nine visual acuity units. Three contour/texture discrimination parameters were found to be moderately successful at this scale: (1) lightness change in a blurred version of the image, (2) change in lightness change in the original image, and (3) percent change in edge counts relative to the local maximum.

  11. Enhanced attentional gain as a mechanism for generalized perceptual learning in human visual cortex.

    PubMed

    Byers, Anna; Serences, John T

    2014-09-01

    Learning to better discriminate a specific visual feature (i.e., a specific orientation in a specific region of space) has been associated with plasticity in early visual areas (sensory modulation) and with improvements in the transmission of sensory information from early visual areas to downstream sensorimotor and decision regions (enhanced readout). However, in many real-world scenarios that require perceptual expertise, observers need to efficiently process numerous exemplars from a broad stimulus class as opposed to just a single stimulus feature. Some previous data suggest that perceptual learning leads to highly specific neural modulations that support the discrimination of specific trained features. However, the extent to which perceptual learning acts to improve the discriminability of a broad class of stimuli via the modulation of sensory responses in human visual cortex remains largely unknown. Here, we used functional MRI and a multivariate analysis method to reconstruct orientation-selective response profiles based on activation patterns in the early visual cortex before and after subjects learned to discriminate small offsets in a set of grating stimuli that were rendered in one of nine possible orientations. Behavioral performance improved across 10 training sessions, and there was a training-related increase in the amplitude of orientation-selective response profiles in V1, V2, and V3 when orientation was task relevant compared with when it was task irrelevant. These results suggest that generalized perceptual learning can lead to modified responses in the early visual cortex in a manner that is suitable for supporting improved discriminability of stimuli drawn from a large set of exemplars. Copyright © 2014 the American Physiological Society.

  12. High sensitivity to short wavelengths in a lizard and implications for understanding the evolution of visual systems in lizards

    PubMed Central

    Fleishman, Leo J.; Loew, Ellis R.; Whiting, Martin J.

    2011-01-01

    Progress in developing animal communication theory is frequently constrained by a poor understanding of sensory systems. For example, while lizards have been the focus of numerous studies in visual signalling, we only have data on the spectral sensitivities of a few species clustered in two major clades (Iguania and Gekkota). Using electroretinography and microspectrophotometry, we studied the visual system of the cordylid lizard Platysaurus broadleyi because it represents an unstudied clade (Scinciformata) with respect to visual systems and because UV signals feature prominently in its social behaviour. The retina possessed four classes of single and one class of double cones. Sensitivity in the ultraviolet region (UV) was approximately three times higher than previously reported for other lizards. We found more colourless oil droplets (associated with UV-sensitive (UVS) and short wavelength-sensitive (SWS) photoreceptors), suggesting that the increased sensitivity was owing to the presence of more UVS photoreceptors. Using the Vorobyev–Osorio colour discrimination model, we demonstrated that an increase in the number of UVS photoreceptors significantly enhances a lizard's ability to discriminate conspecific male throat colours. Visual systems in diurnal lizards appear to be broadly conserved, but data from additional clades are needed to confirm this. PMID:21389031
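The Vorobyev–Osorio model cited in this abstract computes a chromatic distance ΔS, in units of just-noticeable differences (JNDs), from receptor quantum catches and receptor-specific noise. As a hedged sketch only, here is the standard trichromatic form of that receptor-noise-limited calculation; the lizard retina described above has more receptor classes, the study's actual noise parameters are not given here, and the function and its inputs are illustrative:

```python
import math

def delta_s(q_a, q_b, e):
    """Receptor-noise-limited chromatic distance (in JNDs) between stimuli
    A and B for a hypothetical trichromatic viewer.
    q_a, q_b: quantum catches of the three receptor classes for each stimulus.
    e: noise (Weber fraction) of each receptor class."""
    # Receptor-specific contrasts: log ratio of quantum catches.
    f = [math.log(a / b) for a, b in zip(q_a, q_b)]
    numerator = (e[0] ** 2 * (f[2] - f[1]) ** 2
                 + e[1] ** 2 * (f[2] - f[0]) ** 2
                 + e[2] ** 2 * (f[0] - f[1]) ** 2)
    denominator = (e[0] * e[1]) ** 2 + (e[0] * e[2]) ** 2 + (e[1] * e[2]) ** 2
    return math.sqrt(numerator / denominator)
```

A distance above 1 JND is conventionally taken as discriminable; in this framework, adding UVS photoreceptors lowers the effective noise of that channel and so raises ΔS for colour pairs that differ mainly in the UV, which is the logic behind the throat-colour result above.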

  13. The Learning of Difficult Visual Discriminations by the Moderately and Severely Retarded

    ERIC Educational Resources Information Center

    Gold, Marc W.; Barclay, Craig R.

    2015-01-01

    A procedure to effectively and efficiently train moderately and severely retarded individuals to make fine visual discriminations is described. Results suggest that expectancies for such individuals are in need of examination. Implications for sheltered workshops, work activity centers and classrooms are discussed. [This article appeared…

  14. IEEE Conference on Neural Information Processing Systems - Natural and Synthetic Held in Denver, Colorado on 28 November-1 December 1988

    DTIC Science & Technology

    1989-08-14

    DISCRIMINATE SIMILAR KANJI CHARACTERS. Yoshihiro Mori, Kazuhiko Yokosawa. 12 FURTHER EXPLORATIONS IN THE LEARNING OF VISUALLY-GUIDED REACHING: MAKING MURPHY ... NETWORKS THAT LEARN TO DISCRIMINATE SIMILAR KANJI CHARACTERS. YOSHIHIRO MORI, KAZUHIKO YOKOSAWA, ATR Auditory and Visual Perception Research Laboratories

  15. Intersensory Redundancy Hinders Face Discrimination in Preschool Children: Evidence for Visual Facilitation

    ERIC Educational Resources Information Center

    Bahrick, Lorraine E.; Krogh-Jespersen, Sheila; Argumosa, Melissa A.; Lopez, Hassel

    2014-01-01

    Although infants and children show impressive face-processing skills, little research has focused on the conditions that facilitate versus impair face perception. According to the intersensory redundancy hypothesis (IRH), face discrimination, which relies on detection of visual featural information, should be impaired in the context of…

  16. Frontal–Occipital Connectivity During Visual Search

    PubMed Central

    Pantazatos, Spiro P.; Yanagihara, Ted K.; Zhang, Xian; Meitzler, Thomas

    2012-01-01

    Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task. PMID:22708993

  17. Object similarity affects the perceptual strategy underlying invariant visual object recognition in rats

    PubMed Central

    Rosselli, Federica B.; Alemi, Alireza; Ansuini, Alessio; Zoccolan, Davide

    2015-01-01

    In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning. PMID:25814936

  18. Visual Processing of Object Velocity and Acceleration

    DTIC Science & Technology

    1991-12-13

    more recently, Dr. Grzywacz’s applications of filtering models to the psychophysics of speed discrimination; 3) the McKee-Welch studies on the ... population of spatio-temporally oriented filters to encode velocity. Dr. Grzywacz has attempted to reconcile his model with a variety of psychophysical ... by many authors. In these models, the image is spatially and temporally filtered; detectors have different sizes and spatial positions, but they all ...

  19. Real-time detection and discrimination of visual perception using electrocorticographic signals

    NASA Astrophysics Data System (ADS)

    Kapeller, C.; Ogawa, H.; Schalk, G.; Kunii, N.; Coon, W. G.; Scharinger, J.; Guger, C.; Kamada, K.

    2018-06-01

    Objective. Several neuroimaging studies have demonstrated that the ventral temporal cortex contains specialized regions that process visual stimuli. This study investigated the spatial and temporal dynamics of electrocorticographic (ECoG) responses to different types and colors of visual stimulation that were presented to four human participants, and demonstrated a real-time decoder that detects and discriminates responses to untrained natural images. Approach. ECoG signals from the participants were recorded while they were shown colored and greyscale versions of seven types of visual stimuli (images of faces, objects, bodies, line drawings, digits, and kanji and hiragana characters), resulting in 14 classes for discrimination (experiment I). Additionally, a real-time system asynchronously classified ECoG responses to faces, kanji and black screens presented via a monitor (experiment II), or to natural scenes (i.e. the face of an experimenter, natural images of faces and kanji, and a mirror) (experiment III). Outcome measures in all experiments included the discrimination performance across types based on broadband γ activity. Main results. Experiment I demonstrated an offline classification accuracy of 72.9% when discriminating among the seven types (without color separation). Further discrimination of grey versus colored images reached an accuracy of 67.1%. Discriminating all colors and types (14 classes) yielded an accuracy of 52.1%. In experiment II and III, the real-time decoder correctly detected 73.7% responses to face, kanji and black computer stimuli and 74.8% responses to presented natural scenes. Significance. Seven different types and their color information (either grey or color) could be detected and discriminated using broadband γ activity. Discrimination performance maximized for combined spatial-temporal information. 
The discrimination of stimulus color information provided the first ECoG-based evidence for color-related population-level cortical broadband γ responses in humans. Stimulus categories can be detected by their ECoG responses in real time within 500 ms with respect to stimulus onset.
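The broadband γ activity used for classification in this record is band-limited high-frequency power. As an illustrative sketch only (the band edges and the crude FFT-based estimator below are assumptions, not the study's settings), a per-trial, per-channel feature could be computed as:

```python
import numpy as np

def band_power(x, fs, band=(70.0, 110.0)):
    """Crude estimate of power in a frequency band for one ECoG channel.
    x: 1-D signal; fs: sampling rate (Hz); band: (low, high) edges in Hz."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2        # unnormalized periodogram
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)   # frequency of each bin
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(spectrum[mask].sum() / x.size)
```

In practice ECoG pipelines more often band-pass filter and take the Hilbert envelope, then normalize each trial against a pre-stimulus baseline before passing the features to a classifier.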

  20. Selection-for-action in visual search.

    PubMed

    Hannus, Aave; Cornelissen, Frans W; Lindemann, Oliver; Bekkering, Harold

    2005-01-01

    Grasping an object rather than pointing to it enhances processing of its orientation but not its color. Apparently, visual discrimination is selectively enhanced for a behaviorally relevant feature. In two experiments we investigated the limitations and targets of this bias. In Experiment 1 we asked whether the effect is capacity demanding, and therefore manipulated the set size of the display. The results indicated a clear cognitive processing capacity requirement: the magnitude of the effect decreased for larger set sizes. Consequently, in Experiment 2 we investigated whether the enhancement occurs only at the level of the behaviorally relevant feature or at a level common to different features, by manipulating the discriminability of the behaviorally neutral feature (color). Again, this manipulation influenced the action enhancement of the behaviorally relevant feature. In particular, the effect of the color manipulation on the action enhancement suggests that the action effect is more likely to bias the competition between different visual features than to enhance the processing of the relevant feature alone. We offer a theoretical account that integrates the action-intention effect within the biased competition model of visual selective attention.

  1. Training on Movement Figure-Ground Discrimination Remediates Low-Level Visual Timing Deficits in the Dorsal Stream, Improving High-Level Cognitive Functioning, Including Attention, Reading Fluency, and Working Memory.

    PubMed

    Lawton, Teri; Shelley-Tremblay, John

    2017-01-01

    The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background, figure-ground discrimination, improves vision and cognitive functioning in dyslexics, as well as typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination (PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills following neurotraining exercises, more than following computer-based guided reading (Raz-Kids (RK)). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training the entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12 weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency and working memory for both dyslexic and normal students.
Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement discrimination training in improving high-level cognitive functions such as attention, reading acquisition and working memory. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways in the dorsal stream is a fundamental cause of dyslexia and being at-risk for reading problems in normal students, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological or language deficits, requiring a paradigm shift from phonologically-based treatment of dyslexia to a visually-based treatment. This study shows that visual movement-discrimination can be used not only to diagnose dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning.

  3. Role for the M1 Muscarinic Acetylcholine Receptor in Top-Down Cognitive Processing Using a Touchscreen Visual Discrimination Task in Mice.

    PubMed

    Gould, R W; Dencker, D; Grannan, M; Bubser, M; Zhan, X; Wess, J; Xiang, Z; Locuson, C; Lindsley, C W; Conn, P J; Jones, C K

    2015-10-21

    The M1 muscarinic acetylcholine receptor (mAChR) subtype has been implicated in the underlying mechanisms of learning and memory and represents an important potential pharmacotherapeutic target for the cognitive impairments observed in neuropsychiatric disorders such as schizophrenia. Patients with schizophrenia show impairments in top-down processing involving conflict between sensory-driven and goal-oriented processes that can be modeled in preclinical studies using touchscreen-based cognition tasks. The present studies used a touchscreen visual pairwise discrimination task in which mice discriminated between a less salient and a more salient stimulus to assess the influence of the M1 mAChR on top-down processing. M1 mAChR knockout (M1 KO) mice showed a slower rate of learning, evidenced by slower increases in accuracy over 12 consecutive days, and required more days to acquire (achieve 80% accuracy) this discrimination task compared to wild-type mice. In addition, the M1 positive allosteric modulator BQCA enhanced the rate of learning this discrimination in wild-type, but not in M1 KO, mice when BQCA was administered daily prior to testing over 12 consecutive days. Importantly, in discriminations between stimuli of equal salience, M1 KO mice did not show impaired acquisition and BQCA did not affect the rate of learning or acquisition in wild-type mice. These studies are the first to demonstrate performance deficits in M1 KO mice using touchscreen cognitive assessments and enhanced rate of learning and acquisition in wild-type mice through M1 mAChR potentiation when the touchscreen discrimination task involves top-down processing. Taken together, these findings provide further support for M1 potentiation as a potential treatment for the cognitive symptoms associated with schizophrenia.

  4. Testing Models for Perceptual Discrimination Using Repeatable Noise

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    Adding noise to stimuli to be discriminated allows estimation of observer classification functions based on the correlation between observer responses and relevant features of the noisy stimuli. Examples will be presented of stimulus features that are found in auditory tone detection and visual Vernier acuity. Using the standard signal detection model (Thurstone scaling), we derive formulas to estimate the proportion of the observer's decision variable variance that is controlled by the added noise. One is based on the probability of agreement of the observer with him/herself on trials with the same noise sample. Another is based on the relative performance of the observer and the model. When these do not agree, the model can be rejected. A second derivation gives the probability of agreement of observer and model when the observer follows the model except for internal noise. Agreement significantly less than this amount allows rejection of the model.
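The agreement-based variance estimate described in this abstract can be illustrated with a small Monte Carlo simulation. This is a sketch under a simple Gaussian model, not the paper's actual derivation: the decision variable is the sum of a component fixed by the repeated noise sample and internal noise drawn fresh on each pass, and the double-pass agreement rate then pins down the proportion of decision-variable variance the added noise controls.

```python
import numpy as np

rng = np.random.default_rng(0)

def double_pass_agreement(var_ext, var_int, n_trials=200_000):
    """Simulate a double-pass experiment: each external-noise sample is shown
    twice, with internal noise drawn fresh on each pass. Returns the
    proportion of trials on which the two binary (yes/no) responses agree."""
    ext = rng.normal(0.0, np.sqrt(var_ext), n_trials)   # shared across passes
    int1 = rng.normal(0.0, np.sqrt(var_int), n_trials)  # pass 1 internal noise
    int2 = rng.normal(0.0, np.sqrt(var_int), n_trials)  # pass 2 internal noise
    return float(np.mean(((ext + int1) > 0) == ((ext + int2) > 0)))

# For jointly Gaussian passes, P(agree) = 1/2 + arcsin(rho)/pi, where
# rho = var_ext / (var_ext + var_int) is the proportion of decision-variable
# variance controlled by the added noise.
rho = 0.5
predicted = 0.5 + np.arcsin(rho) / np.pi
simulated = double_pass_agreement(1.0, 1.0)  # rho = 0.5 in this simulation
```

Inverting the arcsine relation turns an observed self-agreement rate back into an estimate of rho, which is the quantity the abstract's model-rejection test compares against the observer-model agreement.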

  5. The relationship between hue discrimination and contrast sensitivity deficits in patients with diabetes mellitus.

    PubMed

    Trick, G L; Burde, R M; Gordon, M O; Santiago, J V; Kilo, C

    1988-05-01

    In an attempt to elucidate more fully the pathophysiologic basis of early visual dysfunction in patients with diabetes mellitus, color vision (hue discrimination) and spatial resolution (contrast sensitivity) were tested in diabetic patients with little or no retinopathy (n = 57) and age-matched visual normals (n = 35). Some evidence of visual dysfunction was observed in 37.8% of the diabetics with no retinopathy and 60.0% of the diabetics with background retinopathy. Although significant hue discrimination and contrast sensitivity deficits were observed in both groups of diabetic patients, contrast sensitivity was abnormal more frequently than hue discrimination. However, only 5.4% of the diabetics with no retinopathy and 10.0% of the diabetics with background retinopathy exhibited both abnormal hue discrimination and abnormal contrast sensitivity. Contrary to previous reports, blue-yellow (B-Y) and red-green (R-G) hue discrimination deficits were observed with approximately equal frequency. In the diabetic group, contrast sensitivity was reduced at all spatial frequencies tested, but for individual diabetic patients, significant deficits were only evident for the mid-range spatial frequencies. Among diabetic patients, the hue discrimination deficits, but not the contrast sensitivity abnormalities, were correlated with the patients' hemoglobin A1 level. A negative correlation between contrast sensitivity at 6.0 cpd and the duration of diabetes also was observed.

  6. Combining features from ERP components in single-trial EEG for discriminating four-category visual objects.

    PubMed

    Wang, Changming; Xiong, Shi; Hu, Xiaoping; Yao, Li; Zhang, Jiacai

    2012-10-01

    Categories of images containing visual objects can be successfully recognized from single-trial electroencephalographic (EEG) data measured while subjects view the images. Previous studies have shown that task-related information contained in event-related potential (ERP) components can discriminate two or three categories of object images. In this study, we investigated whether four categories of objects (human faces, buildings, cats and cars) could be mutually discriminated using single-trial EEG data. The EEG waveforms acquired while subjects viewed the four categories of object images were segmented into several ERP components (P1, N1, P2a and P2b), and Fisher linear discriminant analysis (Fisher-LDA) was then used to classify EEG features extracted from the ERP components. First, we compared classification results using features from single ERP components, and found that the N1 component achieved the highest classification accuracies. Second, we discriminated the four categories of objects using combined features from multiple ERP components, and showed that combining ERP components improved four-category classification accuracy by exploiting the complementary discriminative information in the components. These findings confirm that four categories of object images can be discriminated from single-trial EEG and can guide the selection of effective EEG features for classifying visual objects.
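As a hedged sketch of the pipeline this abstract describes (segment each epoch into component windows, extract features, classify with Fisher LDA), here is a toy version on synthetic data; the window boundaries, the Gaussian "component" shape, and the scikit-learn classifier are illustrative assumptions, not the authors' settings:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

def erp_features(epochs, fs,
                 windows=((0.08, 0.12), (0.14, 0.20), (0.20, 0.26), (0.26, 0.34))):
    """Mean amplitude in each post-stimulus window (stand-ins for the P1, N1,
    P2a and P2b intervals); epochs has shape (n_trials, n_samples)."""
    t = np.arange(epochs.shape[1]) / fs
    return np.column_stack([epochs[:, (t >= lo) & (t < hi)].mean(axis=1)
                            for lo, hi in windows])

# Toy data: four "categories" whose ERP component amplitudes differ.
fs, n_per_class = 250, 100
t = np.arange(int(0.4 * fs)) / fs
epochs, labels = [], []
for label in range(4):
    component = (label + 1) * np.exp(-((t - 0.17) ** 2) / (2 * 0.03 ** 2))
    epochs.append(component + rng.normal(0.0, 0.5, (n_per_class, t.size)))
    labels += [label] * n_per_class
X = erp_features(np.vstack(epochs), fs)
y = np.array(labels)
clf = LinearDiscriminantAnalysis().fit(X, y)
accuracy = clf.score(X, y)
```

The accuracy here is measured on the training data purely to show the mechanics; a real analysis would cross-validate across trials, and combining windows (as the abstract reports) would be compared against fitting on each window's feature alone.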

  7. Color discrimination performance in patients with Alzheimer's disease.

    PubMed

    Salamone, Giovanna; Di Lorenzo, Concetta; Mosti, Serena; Lupo, Federica; Cravello, Luca; Palmer, Katie; Musicco, Massimo; Caltagirone, Carlo

    2009-01-01

    Visual deficits are frequent in Alzheimer's disease (AD), yet little is known about the nature of these disturbances. The aim of the present study was to investigate color discrimination in patients with AD to determine whether impairment of this visual function is a cognitive or perceptive/sensory disturbance. A cross-sectional clinical study was conducted in a specialized dementia unit on 20 patients with mild/moderate AD and 21 age-matched normal controls. Color discrimination was measured by the Farnsworth-Munsell 100 hue test. Cognitive functioning was measured with the Mini-Mental State Examination (MMSE) and a comprehensive battery of neuropsychological tests. The scores obtained on the color discrimination test were compared between AD patients and controls adjusting for global and domain-specific cognitive performance. Color discrimination performance was inversely related to MMSE score. AD patients had a higher number of errors in color discrimination than controls (mean +/- SD total error score: 442.4 +/- 84.5 vs. 304.1 +/- 45.9). This trend persisted even after adjustment for MMSE score and cognitive performance on specific cognitive domains. A specific reduction of color discrimination capacity is present in AD patients. This deficit does not solely depend upon cognitive impairment, and involvement of the primary visual cortex and/or retinal ganglion cells may be contributory.

  8. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    PubMed

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to accomplish the coarse-to-fine tasks for hierarchical visual recognition more effectively. A visual tree is then learned by assigning visually-similar atomic object classes with similar learning complexities into the same group, which provides a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for effectively distinguishing visually-similar atomic object classes. Our HD-MTL algorithm integrates two discriminative regularization terms to control inter-level error propagation effectively, and it provides an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and a more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to new training images and new object classes. Our experimental results demonstrate that the HD-MTL algorithm achieves very competitive accuracy rates for large-scale visual recognition.

  9. Contribution of amygdalar and lateral hypothalamic neurons to visual information processing of food and nonfood in monkey.

    PubMed

    Ono, T; Tamura, R; Nishijo, H; Nakamura, K; Tabuchi, E

    1989-02-01

    Visual information processing was investigated in the inferotemporal cortical (ITCx)-amygdalar (AM)-lateral hypothalamic (LHA) axis, which contributes to food-nonfood discrimination. Neuronal activity was recorded from monkey AM and LHA during discrimination of sensory stimuli including the sight of food or nonfood. The task had four phases: control, visual, bar press, and ingestion. Of 710 AM neurons tested, 220 (31.0%) responded during the visual phase: 48 (6.8%) to visual stimulation only, 13 (1.9%) to visual plus oral sensory stimulation, 142 (20.0%) to multimodal stimulation, and 17 (2.4%) to one affectively significant item. Of 669 LHA neurons tested, 106 (15.8%) responded in the visual phase. Of 80 visual-related neurons tested systematically, 33 (41.2%) responded selectively to the sight of any object predicting the availability of reward, and 47 (58.8%) responded nondifferentially to both food and nonfood. Many AM neuron responses were graded according to the degree of affective significance of the sensory stimuli (sensory-affective association), whereas responses of LHA food-responsive neurons did not depend on the kind of reward indicated by the sensory stimuli (stimulus-reinforcement association). Some AM and LHA food responses were modulated by extinction or reversal. Dynamic information processing in the ITCx-AM-LHA axis was investigated by reversibly inactivating bilateral ITCx or AM by cooling. ITCx cooling suppressed discrimination by vision-responding AM neurons (8/17), and AM cooling suppressed LHA responses to food (9/22). We suggest deep AM-LHA involvement in food-nonfood discrimination based on AM sensory-affective association and LHA stimulus-reinforcement association.

  10. Visuoperceptual impairment in dementia with Lewy bodies.

    PubMed

    Mori, E; Shimomura, T; Fujimori, M; Hirono, N; Imamura, T; Hashimoto, M; Tanimukai, S; Kazui, H; Hanihara, T

    2000-04-01

    In dementia with Lewy bodies (DLB), vision-related cognitive and behavioral symptoms are common, and involvement of the occipital visual cortices has been demonstrated in functional neuroimaging studies. To delineate visuoperceptual disturbance in patients with DLB in comparison with that in patients with Alzheimer disease and to explore the relationship between visuoperceptual disturbance and the vision-related cognitive and behavioral symptoms. Case-control study. Research-oriented hospital. Twenty-four patients with probable DLB (based on criteria of the Consortium on DLB International Workshop) and 48 patients with probable Alzheimer disease (based on criteria of the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association) who were matched to those with DLB 2:1 by age, sex, education, and Mini-Mental State Examination score. Four test items to examine visuoperceptual functions, including the object size discrimination, form discrimination, overlapping figure identification, and visual counting tasks. Compared with patients with probable Alzheimer disease, patients with probable DLB scored significantly lower on all the visuoperceptual tasks (P<.04 to P<.001). In the DLB group, patients with visual hallucinations (n = 18) scored significantly lower on the overlapping figure identification (P = .01) than those without them (n = 6), and patients with television misidentifications (n = 5) scored significantly lower on the size discrimination (P<.001), form discrimination (P = .01), and visual counting (P = .007) than those without them (n = 19). Visual perception is defective in probable DLB. The defective visual perception plays a role in the development of visual hallucinations, delusional misidentifications, visual agnosias, and visuoconstructive disability characteristic of DLB.

  11. Hippocampus, Perirhinal Cortex, and Complex Visual Discriminations in Rats and Humans

    ERIC Educational Resources Information Center

    Hales, Jena B.; Broadbent, Nicola J.; Velu, Priya D.; Squire, Larry R.; Clark, Robert E.

    2015-01-01

    Structures in the medial temporal lobe, including the hippocampus and perirhinal cortex, are known to be essential for the formation of long-term memory. Recent animal and human studies have investigated whether perirhinal cortex might also be important for visual perception. In our study, using a simultaneous oddity discrimination task, rats with…

  12. Double Dissociation of Pharmacologically Induced Deficits in Visual Recognition and Visual Discrimination Learning

    ERIC Educational Resources Information Center

    Turchi, Janita; Buffalari, Deanne; Mishkin, Mortimer

    2008-01-01

    Monkeys trained in either one-trial recognition at 8- to 10-min delays or multi-trial discrimination habits with 24-h intertrial intervals received systemic cholinergic and dopaminergic antagonists, scopolamine and haloperidol, respectively, in separate sessions. Recognition memory was impaired markedly by scopolamine but not at all by…

  13. A Rapid Assessment of Instructional Strategies to Teach Auditory-Visual Conditional Discriminations to Children with Autism

    ERIC Educational Resources Information Center

    Kodak, Tiffany; Clements, Andrea; LeBlanc, Brittany

    2013-01-01

    The purpose of the present investigation was to evaluate a rapid assessment procedure to identify effective instructional strategies to teach auditory-visual conditional discriminations to children diagnosed with autism. We replicated and extended previous rapid skills assessments (Lerman, Vorndran, Addison, & Kuhn, 2004) by evaluating the effects…

  14. Speaker Identity Supports Phonetic Category Learning

    ERIC Educational Resources Information Center

    Mani, Nivedita; Schneider, Signe

    2013-01-01

    Visual cues from the speaker's face, such as the discriminable mouth movements used to produce speech sounds, improve discrimination of these sounds by adults. The speaker's face, however, provides more information than just the mouth movements used to produce speech--it also provides a visual indexical cue of the identity of the speaker. The…

  15. Scopolamine effects on visual discrimination: modifications related to stimulus control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, H.L.

    1975-01-01

    Stumptail monkeys (Macaca arctoides) performed a discrete trial, three-choice visual discrimination. The discrimination behavior was controlled by the shape of the visual stimuli. Strength of the stimuli in controlling behavior was systematically related to a physical property of the stimuli, luminance. Low luminance provided weak control, resulting in a low accuracy of discrimination, a low response probability and maximal sensitivity to scopolamine (7.5-60 µg/kg). In contrast, high luminance provided strong control of behavior and attenuated the effects of scopolamine. Methylscopolamine had no effect in doses of 30 to 90 µg/kg. Scopolamine effects resembled the effects of reducing stimulus control in undrugged monkeys. Since behavior under weak control seems to be especially sensitive to drugs, manipulations of stimulus control may be particularly useful whenever determination of the minimally-effective dose is important, as in behavioral toxicology. Present results are interpreted as specific visual effects of the drug, since nonsensory factors such as base-line response rate, reinforcement schedule, training history, motor performance and motivation were controlled. Implications for state-dependent effects of drugs are discussed.

  16. Perceptual learning of basic visual features remains task specific with Training-Plus-Exposure (TPE) training.

    PubMed

    Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun

    2016-01-01

    Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer.

  17. Videopanorama Frame Rate Requirements Derived from Visual Discrimination of Deceleration During Simulated Aircraft Landing

    NASA Technical Reports Server (NTRS)

    Furnstenau, Norbert; Ellis, Stephen R.

    2015-01-01

    In order to determine the required visual frame rate (FR) for minimizing prediction errors with out-the-window video displays at remote/virtual airport towers, thirteen active air traffic controllers viewed high dynamic fidelity simulations of landing aircraft and decided whether aircraft would stop as if to be able to make a turnoff or whether a runway excursion would be expected. The viewing conditions and simulation dynamics replicated visual rates and environments of transport aircraft landing at small commercial airports. The required frame rate was estimated using Bayes inference on prediction errors by linear FR extrapolation of event probabilities conditional on predictions (stop, no-stop). Furthermore, estimates were obtained from exponential model fits to the parametric and non-parametric perceptual discriminabilities d' and A (average area under ROC curves) as functions of FR. Decision errors are biased towards preference of overshoot and appear due to an illusory increase in speed at low frame rates. Both the Bayes and A extrapolations yield a frame rate requirement of 35 < FRmin < 40 Hz. When comparing with published results [12] on shooter game scores, the model-based d'(FR) extrapolation exhibits the best agreement and indicates an even higher FRmin > 40 Hz for minimizing decision errors. Definitive recommendations require further experiments with FR > 30 Hz.
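    The d' and A measures named in this abstract are standard signal detection quantities. As a rough illustration only (not the authors' analysis code), d' can be computed from hit and false-alarm rates with the inverse normal CDF, and A' (Pollack & Norman's nonparametric area estimate) is one common stand-in for the average area under the ROC curve; the function names here are hypothetical:

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate, fa_rate):
        """Parametric sensitivity: d' = z(H) - z(F)."""
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    def a_prime(hit_rate, fa_rate):
        """Nonparametric area estimate A' (Pollack & Norman, 1964), valid for H >= F."""
        h, f = hit_rate, fa_rate
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    ```

    For example, H = 0.84 and F = 0.16 give d' close to 2 and A' close to 0.9; both rise toward their ceilings as the stop/no-stop predictions become more reliable at higher frame rates.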

  18. Stimulus similarity determines the prevalence of behavioral laterality in a visual discrimination task for mice

    PubMed Central

    Treviño, Mario

    2014-01-01

    Animal choices depend on direct sensory information, but also on the dynamic changes in the magnitude of reward. In visual discrimination tasks, the emergence of lateral biases in the choice record from animals is often described as a behavioral artifact, because these are highly correlated with error rates affecting psychophysical measurements. Here, we hypothesized that biased choices could constitute a robust behavioral strategy to solve discrimination tasks of graded difficulty. We trained mice to swim in a two-alternative visual discrimination task with escape from water as the reward. Their prevalence of making lateral choices increased with stimulus similarity and was present in conditions of high discriminability. While lateralization occurred at the individual level, it was absent, on average, at the population level. Biased choice sequences obeyed the generalized matching law and increased task efficiency when stimulus similarity was high. A mathematical analysis revealed that strongly-biased mice used information from past rewards but not past choices to make their current choices. We also found that the amount of lateralized choices made during the first day of training predicted individual differences in the average learning behavior. This framework provides useful analysis tools to study individualized visual-learning trajectories in mice. PMID:25524257
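    The generalized matching law referenced here relates choice ratios to reward ratios as log(B_L/B_R) = s·log(R_L/R_R) + log b, where s is sensitivity and b is side bias. A minimal least-squares fit can be sketched as follows (illustrative only; the function name and data layout are hypothetical, not taken from the study):

    ```python
    import math

    def fit_matching_law(left_choices, right_choices, left_rewards, right_rewards):
        """Fit log(B_L/B_R) = s * log(R_L/R_R) + log(b) by ordinary least squares."""
        xs = [math.log(rl / rr) for rl, rr in zip(left_rewards, right_rewards)]
        ys = [math.log(bl / br) for bl, br in zip(left_choices, right_choices)]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        s = sxy / sxx              # sensitivity to the reward ratio (1 = strict matching)
        b = math.exp(my - s * mx)  # side bias (1 = unbiased)
        return s, b
    ```

    Empirically, s < 1 ("undermatching") is the typical result, and b away from 1 captures the kind of lateral bias the abstract describes.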

  19. Do rats use shape to solve “shape discriminations”?

    PubMed Central

    Minini, Loredana; Jeffery, Kathryn J.

    2006-01-01

    Visual discrimination tasks are increasingly used to explore the neurobiology of vision in rodents, but it remains unclear how the animals solve these tasks: Do they process shapes holistically, or by using low-level features such as luminance and angle acuity? In the present study we found that when discriminating triangles from squares, rats did not use shape but instead relied on local luminance differences in the lower hemifield. A second experiment prevented this strategy by using stimuli—squares and rectangles—that varied in size and location, and for which the only constant predictor of reward was aspect ratio (ratio of height to width: a simple descriptor of “shape”). Rats eventually learned to use aspect ratio but only when no other discriminand was available, and performance remained very poor even at asymptote. These results suggest that although rats can process both dimensions simultaneously, they do not naturally solve shape discrimination tasks this way. This may reflect either a failure to visually process global shape information or a failure to discover shape as the discriminative stimulus in a simultaneous discrimination. Either way, our results suggest that simultaneous shape discrimination is not a good task for studies of visual perception in rodents. PMID:16705141

  20. How to evade a coevolving brood parasite: egg discrimination versus egg variability as host defences.

    PubMed

    Spottiswoode, Claire N; Stevens, Martin

    2011-12-07

    Arms races between avian brood parasites and their hosts often result in parasitic mimicry of host eggs, to evade rejection. Once egg mimicry has evolved, host defences could escalate in two ways: (i) hosts could improve their level of egg discrimination; and (ii) negative frequency-dependent selection could generate increased variation in egg appearance (polymorphism) among individuals. Proficiency in one defence might reduce selection on the other, while a combination of the two should enable successful rejection of parasitic eggs. We compared three highly variable host species of the Afrotropical cuckoo finch Anomalospiza imberbis, using egg rejection experiments and modelling of avian colour and pattern vision. We show that each differed in their level of polymorphism, in the visual cues they used to reject foreign eggs, and in their degree of discrimination. The most polymorphic host had the crudest discrimination, whereas the least polymorphic was most discriminating. The third species, not currently parasitized, was intermediate for both defences. A model simulating parasitic laying and host rejection behaviour based on the field experiments showed that the two host strategies result in approximately the same fitness advantage to hosts. Thus, neither strategy is superior, but rather they reflect alternative potential evolutionary trajectories.

  1. Individual personality differences in goats predict their performance in visual learning and non-associative cognitive tasks.

    PubMed

    Nawroth, Christian; Prentice, Pamela M; McElligott, Alan G

    2017-01-01

    Variation in common personality traits, such as boldness or exploration, is often associated with risk-reward trade-offs and behavioural flexibility. To date, only a few studies have examined the effects of consistent behavioural traits on both learning and cognition. We investigated whether certain personality traits ('exploration' and 'sociability') of individuals were related to cognitive performance, learning flexibility and learning style in a social ungulate species, the goat (Capra hircus). We also investigated whether a preference for feature cues rather than impaired learning abilities can explain performance variation in a visual discrimination task. We found that personality scores were consistent across time and context. Less explorative goats performed better in a non-associative cognitive task, in which subjects had to follow the trajectory of a hidden object (i.e. testing their ability for object permanence). We also found that less sociable subjects performed better compared to more sociable goats in a visual discrimination task. Good visual learning performance was associated with a preference for feature cues, indicating personality-dependent learning strategies in goats. Our results suggest that personality traits predict the outcome in visual discrimination and non-associative cognitive tasks in goats and that impaired performance in a visual discrimination task does not necessarily imply impaired learning capacities, but rather can be explained by a varying preference for feature cues. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Discriminative stimuli that control instrumental tobacco-seeking by human smokers also command selective attention.

    PubMed

    Hogarth, Lee; Dickinson, Anthony; Duka, Theodora

    2003-08-01

    Incentive salience theory states that acquired bias in selective attention for stimuli associated with tobacco-smoke reinforcement controls the selective performance of tobacco-seeking and tobacco-taking behaviour. To support this theory, we assessed whether a stimulus that had acquired control of a tobacco-seeking response in a discrimination procedure would command the focus of visual attention in a subsequent test phase. Smokers received discrimination training in which an instrumental key-press response was followed by tobacco-smoke reinforcement when one visual discriminative stimulus (S+) was present, but not when another stimulus (S-) was present. The skin conductance response to the S+ and S- assessed whether Pavlovian conditioning to the S+ had taken place. In a subsequent test phase, the S+ and S- were presented in the dot-probe task and the allocation of the focus of visual attention to these stimuli was measured. Participants learned to perform the instrumental tobacco-seeking response selectively in the presence of the S+ relative to the S-, and showed a greater skin conductance response to the S+ than the S-. In the subsequent test phase, participants allocated the focus of visual attention to the S+ in preference to the S-. Correlation analysis revealed that the visual attentional bias for the S+ was positively associated with the number of times the S+ had been paired with tobacco-smoke in training, the skin conductance response to the S+ and with subjective craving to smoke. Furthermore, increased exposure to tobacco-smoke in the natural environment was associated with reduced discrimination learning. These data demonstrate that discriminative stimuli that signal that tobacco-smoke reinforcement is available acquire the capacity to command selective attention and to elicit instrumental tobacco-seeking behaviour.

  3. Prestimulus oscillatory activity in the alpha band predicts visual discrimination ability.

    PubMed

    van Dijk, Hanneke; Schoffelen, Jan-Mathijs; Oostenveld, Robert; Jensen, Ole

    2008-02-20

    Although the resting and baseline states of the human electroencephalogram and magnetoencephalogram (MEG) are dominated by oscillations in the alpha band (approximately 10 Hz), the functional role of these oscillations remains unclear. In this study we used MEG to investigate how spontaneous oscillations in humans presented before visual stimuli modulate visual perception. Subjects had to report if there was a subtle difference in gray levels between two superimposed discs. We then compared the prestimulus brain activity for correctly (hits) versus incorrectly (misses) identified stimuli. We found that visual discrimination ability decreased with an increase in prestimulus alpha power. Given that reaction times did not vary systematically with prestimulus alpha power, changes in vigilance are not likely to explain the change in discrimination ability. Source reconstruction using spatial filters allowed us to identify the brain areas accounting for this effect. The dominant sources modulating visual perception were localized around the parieto-occipital sulcus. We suggest that the parieto-occipital alpha power reflects functional inhibition imposed by higher level areas, which serves to modulate the gain of the visual stream.

  4. The informativity of sound modulates crossmodal facilitation of visual discrimination: an fMRI study.

    PubMed

    Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin

    2017-01-18

    Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced the uncertainty of the timing of the visual display and improved perceptual responses (informative sound). However, the neural mechanism by which the informativity of sound affected crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of informativity of sound in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed sound informativity-induced activation enhancement including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within audiovisual stimuli that was learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.

  5. Fornix and medial temporal lobe lesions lead to comparable deficits in complex visual perception.

    PubMed

    Lech, Robert K; Koch, Benno; Schwarz, Michael; Suchan, Boris

    2016-05-04

    Recent research dealing with the structures of the medial temporal lobe (MTL) has shifted away from exclusively investigating memory-related processes and has repeatedly incorporated the investigation of complex visual perception. Several studies have demonstrated that higher level visual tasks can recruit structures like the hippocampus and perirhinal cortex in order to successfully perform complex visual discriminations, leading to a perceptual-mnemonic or representational view of the medial temporal lobe. The current study employed a complex visual discrimination paradigm in two patients suffering from brain lesions with differing locations and origin. Both patients, one with extensive medial temporal lobe lesions (VG) and one with a small lesion of the anterior fornix (HJK), were impaired in complex discriminations while showing otherwise mostly intact cognitive functions. The current data confirmed previous results while also extending the perceptual-mnemonic theory of the MTL to the main output structure of the hippocampus, the fornix. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Timing of target discrimination in human frontal eye fields.

    PubMed

    O'Shea, Jacinta; Muggleton, Neil G; Cowey, Alan; Walsh, Vincent

    2004-01-01

    Frontal eye field (FEF) neurons discharge in response to behaviorally relevant stimuli that are potential targets for saccades. Distinct visual and motor processes have been dissociated in the FEF of macaque monkeys, but little is known about the visual processing capacity of FEF in humans. We used double-pulse transcranial magnetic stimulation [(d)TMS] to investigate the timing of target discrimination during visual conjunction search. We applied dual TMS pulses separated by 40 msec over the right FEF and vertex. These were applied in five timing conditions to sample separate time windows within the first 200 msec of visual processing. (d)TMS impaired search performance, reflected in reduced d' scores. This effect was limited to a time window between 40 and 80 msec after search array onset. These parameters correspond with single-cell activity in FEF that predicts monkeys' behavioral reports on hit, miss, false alarm, and correct rejection trials. Our findings demonstrate a crucial early role for human FEF in visual target discrimination that is independent of saccade programming.

  7. Pre-cooling moderately enhances visual discrimination during exercise in the heat.

    PubMed

    Clarke, Neil D; Duncan, Michael J; Smith, Mike; Hankey, Joanne

    2017-02-01

    Pre-cooling has been reported to attenuate the increase in core temperature, although information regarding the effects of pre-cooling on cognitive function is limited. The present study investigated the effects of pre-cooling on visual discrimination during exercise in the heat. Eight male recreational runners completed 90 min of treadmill running at 65% VO2max in the heat [32.4 ± 0.9°C and 46.8 ± 6.4% relative humidity (r.h.)] on two occasions in a randomised, counterbalanced crossover design. Participants underwent pre-cooling by means of water immersion (20.3 ± 0.3°C) for 60 min or remained seated for 60 min in a laboratory (20.2 ± 1.7°C and 60.2 ± 2.5% r.h.). Rectal temperature (Trec) and mean skin temperature (Tskin) were monitored throughout the protocol. At 30-min intervals participants performed a visual discrimination task. Following pre-cooling, Trec (P = 0.040; [Formula: see text] = 0.48) was moderately lower at 0 and 30 min and Tskin (P = 0.003; [Formula: see text] = 0.75) lower to a large extent at 0 min of exercise. Visual discrimination was moderately more accurate at 60 and 90 min of exercise following pre-cooling (P = 0.067; [Formula: see text] = 0.40). Pre-cooling resulted in small improvements in visual discrimination sensitivity (F(1,7) = 2.188; P = 0.183; [Formula: see text] = 0.24), criterion (F(1,7) = 1.298; P = 0.292; [Formula: see text] = 0.16) and bias (F(1,7) = 2.202; P = 0.181; [Formula: see text] = 0.24). Pre-cooling moderately improves visual discrimination accuracy during exercise in the heat.
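    The sensitivity, criterion, and bias measures analysed here are standard signal detection quantities. As a hedged sketch (not the authors' analysis code; the function name is hypothetical), all three can be derived from hit and false-alarm rates:

    ```python
    import math
    from statistics import NormalDist

    def sdt_measures(hit_rate, fa_rate):
        """Return (d', c, beta) from hit and false-alarm rates."""
        z = NormalDist().inv_cdf
        zh, zf = z(hit_rate), z(fa_rate)
        d = zh - zf              # sensitivity d'
        c = -(zh + zf) / 2       # criterion (0 = neutral placement)
        beta = math.exp(d * c)   # likelihood-ratio bias (1 = unbiased)
        return d, c, beta
    ```

    Symmetric rates such as H = 0.84, F = 0.16 give a neutral criterion (c = 0, beta = 1), while a liberal observer (e.g., H = 0.9, F = 0.3) shows c < 0 and beta < 1.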

  8. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors

    PubMed Central

    Gordon, Barry

    2018-01-01

    Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages. PMID:29558513

  9. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors.

    PubMed

    Sung, Kyongje; Gordon, Barry

    2018-01-01

    Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages.

  10. The McCollough effect and facial emotion discrimination in patients with schizophrenia and their unaffected relatives.

    PubMed

    Surguladze, Simon A; Chkonia, Eka D; Kezeli, Archil R; Roinishvili, Maya O; Stahl, Daniel; David, Anthony S

    2012-05-01

    Abnormalities in visual processing have been found consistently in schizophrenia patients, including deficits in early visual processing, perceptual organization, and facial emotion recognition. There is however no consensus as to whether these abnormalities represent heritable illness traits and what their contribution is to psychopathology. Fifty patients with schizophrenia, 61 of their first-degree healthy relatives, and 50 psychiatrically healthy volunteers were tested with regard to facial affect (FA) discrimination and susceptibility to develop the color-contingent illusion [the McCollough Effect (ME)]. Both patients and relatives demonstrated significantly lower accuracy in FA discrimination compared with controls. There was also a significant effect of familiality: Participants from the same families had more similar accuracy scores than those who belonged to different families. Experiments with the ME showed that schizophrenia patients required longer time to develop the illusion than relatives and controls, which indicated poor visual adaptation in schizophrenia. Relatives were marginally slower than controls. There was no significant association between the measures of FA discrimination accuracy and ME in any of the participant groups. Facial emotion discrimination was associated with the degree of interpersonal problems, as measured by the Schizotypal Personality Questionnaire in relatives and healthy volunteers, whereas the ME was associated with the perceptual-cognitive symptoms of schizotypy and positive symptoms of schizophrenia. Our results support the heritability of FA discrimination deficits as a trait and indicate visual adaptation abnormalities in schizophrenia, which are symptom related.

  11. A manual and an automatic TERS based virus discrimination

    NASA Astrophysics Data System (ADS)

    Olschewski, Konstanze; Kämmer, Evelyn; Stöckel, Stephan; Bocklitz, Thomas; Deckert-Gaudig, Tanja; Zell, Roland; Cialla-May, Dana; Weber, Karina; Deckert, Volker; Popp, Jürgen

    2015-02-01

    Rapid techniques for virus identification are more relevant today than ever. Conventional virus detection and identification strategies generally rest upon various microbiological methods and genomic approaches, which are not suited for the analysis of single virus particles. In contrast, the highly sensitive spectroscopic technique tip-enhanced Raman spectroscopy (TERS) allows the characterisation of biological nano-structures like virions on a single-particle level. In this study, the feasibility of TERS in combination with chemometrics to discriminate two pathogenic viruses, Varicella-zoster virus (VZV) and Porcine teschovirus (PTV), was investigated. In a first step, chemometric methods transformed the spectral data in such a way that a rapid visual discrimination of the two examined viruses was enabled. In a further step, these methods were utilised to perform an automatic quality rating of the measured spectra. Spectra that passed this test were eventually used to calculate a classification model, through which a successful discrimination of the two viral species based on TERS spectra of single virus particles was also realised with a classification accuracy of 91%. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr07033j
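    The chemometric pipeline above is described only in prose. As a rough, hypothetical sketch of how such a transformation can make two spectral classes visually separable, the following simulates two-class spectra (invented peak positions, intensities, and noise level, not the study's data) and projects them onto their first two principal components via SVD:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two classes of Raman-like spectra with peaks at different
# wavenumbers plus noise (toy data, not the measured TERS spectra).
wavenumbers = np.linspace(500, 1800, 200)

def make_spectra(peak_centers, n=30):
    peaks = sum(np.exp(-0.5 * ((wavenumbers - c) / 15.0) ** 2)
                for c in peak_centers)
    return peaks + 0.05 * rng.standard_normal((n, wavenumbers.size))

class_a = make_spectra([1004, 1450])   # hypothetical "VZV-like" peaks
class_b = make_spectra([1030, 1660])   # hypothetical "PTV-like" peaks
X = np.vstack([class_a, class_b])

# PCA via SVD on mean-centered spectra: the first two principal
# component scores give a low-dimensional "score plot" in which the
# two classes separate visually.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T

gap = np.linalg.norm(scores[:30].mean(axis=0) - scores[30:].mean(axis=0))
spread = scores.std()
print(gap > spread)  # between-class gap exceeds overall spread
```

    In a score plot of `scores`, the two classes form distinct clusters; the printed check confirms that the between-class separation exceeds the overall spread.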

  12. Origin and Function of Tuning Diversity in Macaque Visual Cortex

    PubMed Central

    Goris, Robbe L.T.; Simoncelli, Eero P.; Movshon, J. Anthony

    2016-01-01

    Neurons in visual cortex vary in their orientation selectivity. We measured responses of V1 and V2 cells to orientation mixtures and fit them with a model whose stimulus selectivity arises from the combined effects of filtering, suppression, and response nonlinearity. The model explains the diversity of orientation selectivity with neuron-to-neuron variability in all three mechanisms, of which variability in the orientation bandwidth of linear filtering is the most important. The model also accounts for the cells’ diversity of spatial frequency selectivity. Tuning diversity is matched to the needs of visual encoding. The orientation content found in natural scenes is diverse, and neurons with different selectivities are adapted to different stimulus configurations. Single orientations are better encoded by highly selective neurons, while orientation mixtures are better encoded by less selective neurons. A diverse population of neurons therefore provides better overall discrimination capabilities for natural images than any homogeneous population. PMID:26549331
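    The three mechanisms named above (linear filtering, suppression, and a response nonlinearity) can be illustrated with a toy tuning-curve model. This is a generic sketch with invented parameter values, not the paper's fitted model; it shows how the bandwidth of the linear filter controls orientation selectivity:

```python
import numpy as np

def tuning_curve(theta, pref=0.0, bandwidth=20.0, exponent=2.0,
                 suppression=0.1):
    # Linear filter: circular-Gaussian orientation tuning (180 deg period).
    d = np.mod(theta - pref + 90.0, 180.0) - 90.0
    drive = np.exp(-0.5 * (d / bandwidth) ** 2)
    # Power-law response nonlinearity with divisive suppression.
    return drive ** exponent / (suppression + drive ** exponent)

theta = np.linspace(-90, 90, 181)
narrow = tuning_curve(theta, bandwidth=10.0)   # highly selective cell
broad = tuning_curve(theta, bandwidth=40.0)    # weakly selective cell

def halfwidth(response):
    # Half-width at half-height of the tuning curve, in degrees.
    return np.ptp(theta[response >= 0.5 * response.max()]) / 2.0

print(halfwidth(narrow), halfwidth(broad))
```

    Varying `bandwidth` across model neurons reproduces the kind of selectivity diversity the abstract describes: the narrow-bandwidth cell has a much smaller tuning half-width than the broad one.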

  13. Visual event-related potential changes in multiple system atrophy: delayed N2 latency in selective attention to a color task.

    PubMed

    Kamitani, Toshiaki; Kuroiwa, Yoshiyuki

    2009-01-01

    Recent studies demonstrated an altered P3 component and prolonged reaction times during visual discrimination tasks in multiple system atrophy (MSA). In MSA, however, little is known about the N2 component, which is closely related to the visual discrimination process. We therefore compared the N2 component, as well as the N1 and P3 components, in 17 MSA patients and 10 normal controls, using a visual selective attention task to color or to shape. While the P3 in MSA was significantly delayed in selective attention to shape, the N2 in MSA was significantly delayed in selective attention to color. N1 was normally preserved both in attention to color and in attention to shape. Our electrophysiological results indicate that the color discrimination process during selective attention is impaired in MSA.

  14. Two Methods for Teaching Simple Visual Discriminations to Learners with Severe Disabilities

    ERIC Educational Resources Information Center

    Graff, Richard B.; Green, Gina

    2004-01-01

    Simple discriminations are involved in many functional skills; additionally, they are components of conditional discriminations (identity and arbitrary matching-to-sample), which are involved in a wide array of other important performances. Many individuals with severe disabilities have difficulty acquiring simple discriminations with standard…

  15. A biologically plausible computational model for auditory object recognition.

    PubMed

    Larson, Eric; Billimoria, Cyrus P; Sen, Kamal

    2009-01-01

    Object recognition is a task of fundamental importance for sensory systems. Although this problem has been intensively investigated in the visual system, relatively little is known about the recognition of complex auditory objects. Recent work has shown that spike trains from individual sensory neurons can be used to discriminate between and recognize stimuli. Multiple groups have developed spike similarity or dissimilarity metrics to quantify the differences between spike trains. Using a nearest-neighbor approach, these metrics can be used to classify spike trains according to the stimuli that evoked them; the nearest prototype spike train to the tested spike train then identifies the stimulus. However, how biological circuits might perform such computations remains unclear. Elucidating this question would facilitate the experimental search for such circuits in biological systems, as well as the design of artificial circuits that can perform such computations. Here we present a biologically plausible model for discrimination inspired by a spike distance metric, using a network of integrate-and-fire model neurons coupled to a decision network. We then apply this model to the birdsong system in the context of song discrimination and recognition. We show that the model circuit is effective at recognizing individual songs, based on experimental input data from field L, the avian primary auditory cortex analog. We also compare the performance and robustness of this model to two alternative models of song discrimination: a model based on coincidence detection and a model based on firing rate.
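    The nearest-prototype scheme described above can be sketched in a few lines. The kernel and the "song" spike times below are hypothetical; the smoothing follows the causal-exponential convolution used in van Rossum-style spike distances, which is one common choice and not necessarily the exact metric used in the paper:

```python
import math

def smooth(spikes, t_max=1.0, dt=0.01, tau=0.05):
    """Convolve a spike train (list of spike times) with a causal
    exponential kernel, as in van Rossum-style spike distances."""
    n = int(t_max / dt)
    trace = [0.0] * n
    for i in range(n):
        t = i * dt
        for s in spikes:
            if s <= t:
                trace[i] += math.exp(-(t - s) / tau)
    return trace

def distance(a, b, dt=0.01):
    # L2 distance between the smoothed traces.
    ta, tb = smooth(a), smooth(b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(ta, tb)) * dt)

def classify(test, prototypes):
    """Nearest-prototype rule: return the label of the closest
    stored response."""
    return min(prototypes, key=lambda label: distance(test, prototypes[label]))

# Hypothetical prototype responses to two songs, and a noisy test
# response that slightly jitters the song-A prototype.
prototypes = {"song_A": [0.10, 0.30, 0.55], "song_B": [0.20, 0.45, 0.80]}
test = [0.11, 0.29, 0.56]
print(classify(test, prototypes))  # nearest prototype is song_A
```

    Real applications would average several training responses per stimulus to form each prototype; a single train per song keeps the sketch minimal.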

  16. Discrimination of Complex Human Behavior by Pigeons (Columba livia) and Humans

    PubMed Central

    Qadri, Muhammad A. J.; Sayde, Justin M.; Cook, Robert G.

    2014-01-01

    The cognitive and neural mechanisms for recognizing and categorizing behavior are not well understood in non-human animals. In the current experiments, pigeons and humans learned to categorize two non-repeating, complex human behaviors (“martial arts” vs. “Indian dance”). Using multiple video exemplars of a digital human model, pigeons discriminated these behaviors in a go/no-go task and humans in a choice task. Experiment 1 found that pigeons already experienced with discriminating the locomotive actions of digital animals acquired the discrimination more rapidly when action information was available than when only pose information was available. Experiments 2 and 3 found this same dynamic superiority effect with naïve pigeons and human participants. Both species used the same combination of immediately available static pose information and more slowly perceived dynamic action cues to discriminate the behavioral categories. Theories based on generalized visual mechanisms, as opposed to embodied, species-specific action networks, offer a parsimonious account of how these different animals recognize behavior across and within species. PMID:25379777

  17. Hidden discriminative features extraction for supervised high-order time series modeling.

    PubMed

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) it reduces dimensionality while robustly mining the underlying discriminative features, ii) it results in effective, interpretable features that lead to improved classification and visualization, and iii) it reduces the processing time during the training stage and the filtering of the projection by solving a generalized eigenvalue problem at each alternation step. Two real third-order tensor structures of time series datasets (an epilepsy electroencephalogram (EEG) dataset modeled as channel×frequency bin×time frame and a microarray dataset modeled as gene×sample×time) were used for the evaluation of the TDFE. The experiment results corroborate the advantages of the proposed method with averages of 98.26% and 89.63% for the classification accuracies of the epilepsy dataset and the microarray dataset, respectively. These performance averages represent an improvement on those of the matrix-based algorithms and recent tensor-based, discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
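    Tucker-type methods such as TDFE operate on the mode-n unfoldings of the data tensor (e.g. channel×frequency bin×time frame for the EEG example). A minimal sketch of mode-n unfolding, the basic building block of such alternating decompositions:

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the
    remaining axes, giving a matrix of shape (I_mode, prod(other dims))."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# A toy channel x frequency x time tensor, as in the EEG example.
X = np.arange(2 * 3 * 4).reshape(2, 3, 4)
print(unfold(X, 0).shape)  # (2, 12)
print(unfold(X, 1).shape)  # (3, 8)
print(unfold(X, 2).shape)  # (4, 6)
```

    Each alternation step of a Tucker-based method then works on one unfolding at a time, e.g. solving an eigenvalue problem for that mode's projection matrix while holding the other modes fixed.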

  18. Behavioral evaluation of visual function of rats using a visual discrimination apparatus.

    PubMed

    Thomas, Biju B; Samant, Deedar M; Seiler, Magdalene J; Aramant, Robert B; Sheikholeslami, Sharzad; Zhang, Kevin; Chen, Zhenhai; Sadda, SriniVas R

    2007-05-15

    A visual discrimination apparatus was developed to evaluate the visual sensitivity of normal pigmented rats (n=13) and S334ter-line-3 retinal degenerate (RD) rats (n=15). The apparatus is a modified Y maze consisting of two chambers leading to the rats' home cage. Rats were trained to find a one-way exit door leading into their home cage, based on distinguishing between two different visual alternatives (either a dark background or black and white stripes at varying luminance levels) which were randomly displayed on the back of each chamber. Within 2 weeks of training, all rats were able to distinguish between these two visual patterns. The discrimination threshold of normal pigmented rats was a luminance level of -5.37±0.05 log cd/m², whereas the threshold level of 100-day-old RD rats was -1.14±0.09 log cd/m², with considerable variability in performance. When tested at a later age (about 150 days), the threshold level of RD rats was significantly increased (-0.82±0.09 log cd/m², p<0.03, paired t-test). This apparatus could be useful to train rats at a very early age to distinguish between two different visual stimuli and may be effective for visual functional evaluations following therapeutic interventions.

  19. Neural correlates of face gender discrimination learning.

    PubMed

    Su, Junzhu; Tan, Qingleng; Fang, Fang

    2013-04-01

    Using combined psychophysics and event-related potentials (ERPs), we investigated the effect of perceptual learning on face gender discrimination and probe the neural correlates of the learning effect. Human subjects were trained to perform a gender discrimination task with male or female faces. Before and after training, they were tested with the trained faces and other faces with the same and opposite genders. ERPs responding to these faces were recorded. Psychophysical results showed that training significantly improved subjects' discrimination performance and the improvement was specific to the trained gender, as well as to the trained identities. The training effect indicates that learning occurs at two levels-the category level (gender) and the exemplar level (identity). ERP analyses showed that the gender and identity learning was associated with the N170 latency reduction at the left occipital-temporal area and the N170 amplitude reduction at the right occipital-temporal area, respectively. These findings provide evidence for the facilitation model and the sharpening model on neuronal plasticity from visual experience, suggesting a faster processing speed and a sparser representation of face induced by perceptual learning.

  20. Brief Report: Eye Movements during Visual Search Tasks Indicate Enhanced Stimulus Discriminability in Subjects with PDD

    ERIC Educational Resources Information Center

    Kemner, Chantal; van Ewijk, Lizet; van Engeland, Herman; Hooge, Ignace

    2008-01-01

    Subjects with PDD excel on certain visuo-spatial tasks, among them visual search tasks, and this has been attributed to enhanced perceptual discrimination. However, an alternative explanation is that subjects with PDD show a different, more effective search strategy. The present study aimed to test both hypotheses, by measuring eye movements…

  1. A Further Evaluation of Picture Prompts during Auditory-Visual Conditional Discrimination Training

    ERIC Educational Resources Information Center

    Carp, Charlotte L.; Peterson, Sean P.; Arkel, Amber J.; Petursdottir, Anna I.; Ingvarsson, Einar T.

    2012-01-01

    This study was a systematic replication and extension of Fisher, Kodak, and Moore (2007), in which a picture prompt embedded into a least-to-most prompting sequence facilitated acquisition of auditory-visual conditional discriminations. Participants were 4 children who had been diagnosed with autism; 2 had limited prior receptive skills, and 2 had…

  2. Effects of Visual and Auditory Perceptual Aptitudes and Letter Discrimination Pretraining on Word Recognition.

    ERIC Educational Resources Information Center

    Janssen, David Rainsford

    This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…

  3. Visual discrimination in the pigeon (Columba livia): effects of selective lesions of the nucleus rotundus

    NASA Technical Reports Server (NTRS)

    Laverghetta, A. V.; Shimizu, T.

    1999-01-01

    The nucleus rotundus is a large thalamic nucleus in birds and plays a critical role in many visual discrimination tasks. To test the hypothesis that there are functionally distinct subdivisions in the nucleus rotundus, the effects of selective lesions of the nucleus were studied in pigeons. The birds were trained to discriminate between different types of stationary objects and between different directions of moving objects. Multiple regression analyses revealed that lesions in the anterior, but not posterior, division caused deficits in discrimination of small stationary stimuli. Lesions in neither the anterior nor the posterior division predicted effects on the discrimination of moving stimuli. These results are consistent with a prediction derived from the hypothesis that the nucleus is composed of functional subdivisions.

  4. Face adaptation improves gender discrimination.

    PubMed

    Yang, Hua; Shen, Jianhong; Chen, Juan; Fang, Fang

    2011-01-01

    Adaptation to a visual pattern can alter the sensitivities of neuronal populations encoding the pattern. However, the functional roles of adaptation, especially in high-level vision, are still equivocal. In the present study, we performed three experiments to investigate if face gender adaptation could affect gender discrimination. Experiments 1 and 2 revealed that adapting to a male/female face could selectively enhance discrimination for male/female faces. Experiment 3 showed that the discrimination enhancement induced by face adaptation could transfer across a substantial change in three-dimensional face viewpoint. These results provide further evidence suggesting that, similar to low-level vision, adaptation in high-level vision could calibrate the visual system to current inputs of complex shapes (i.e. face) and improve discrimination at the adapted characteristic. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. A crossmodal crossover: opposite effects of visual and auditory perceptual load on steady-state evoked potentials to irrelevant visual stimuli.

    PubMed

    Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B

    2012-07-16

    Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. Evidence of Blocking with Geometric Cues in a Virtual Watermaze

    ERIC Educational Resources Information Center

    Redhead, Edward S.; Hamilton, Derek A.

    2009-01-01

    Three computer based experiments, testing human participants in a non-immersive virtual watermaze task, used a blocking design to assess whether two sets of geometric cues would compete in a manner described by associative models of learning. In stage 1, participants were required to discriminate between visually distinct platforms. In stage 2,…

  7. Feature extraction with deep neural networks by a generalized discriminant analysis.

    PubMed

    Stuhlsatz, André; Lippel, Jens; Zielke, Thomas

    2012-04-01

    We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
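    GerDA generalizes classical linear discriminant analysis, which for two classes reduces to a closed-form projection. A minimal sketch of that classical baseline on synthetic data (not the GerDA network itself):

```python
import numpy as np

def lda_direction(X, y):
    """Fisher LDA for two classes: maximize between-class over
    within-class scatter; the solution is w ∝ Sw^{-1} (mu1 - mu0)."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

# Two synthetic Gaussian classes separated along the first axis.
rng = np.random.default_rng(1)
X0 = rng.standard_normal((100, 2)) + [0, 0]
X1 = rng.standard_normal((100, 2)) + [3, 0]
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

w = lda_direction(X, y)
# The classes differ along the first axis, so w should point mostly there.
print(abs(w[0]) > abs(w[1]))
```

    Roughly speaking, GerDA replaces this fixed linear map with a deep network whose features are trained to optimize the same kind of scatter-ratio criterion, so that the final linear discriminant operates on learned nonlinear features.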

  8. Figure–ground discrimination behavior in Drosophila. I. Spatial organization of wing-steering responses

    PubMed Central

    Fox, Jessica L.; Aptekar, Jacob W.; Zolotova, Nadezhda M.; Shoemaker, Patrick A.; Frye, Mark A.

    2014-01-01

    The behavioral algorithms and neural subsystems for visual figure–ground discrimination are not sufficiently described in any model system. The fly visual system shares structural and functional similarity with that of vertebrates and, like vertebrates, flies robustly track visual figures in the face of ground motion. This computation is crucial for animals that pursue salient objects under the high performance requirements imposed by flight behavior. Flies smoothly track small objects and use wide-field optic flow to maintain flight-stabilizing optomotor reflexes. The spatial and temporal properties of visual figure tracking and wide-field stabilization have been characterized in flies, but how the two systems interact spatially to allow flies to actively track figures against a moving ground has not. We took a systems identification approach in flying Drosophila and measured wing-steering responses to velocity impulses of figure and ground motion independently. We constructed a spatiotemporal action field (STAF) – the behavioral analog of a spatiotemporal receptive field – revealing how the behavioral impulse responses to figure tracking and concurrent ground stabilization vary for figure motion centered at each location across the visual azimuth. The figure tracking and ground stabilization STAFs show distinct spatial tuning and temporal dynamics, confirming the independence of the two systems. When the figure tracking system is activated by a narrow vertical bar moving within the frontal field of view, ground motion is essentially ignored despite comprising over 90% of the total visual input. PMID:24198267

  9. Post-traumatic stress symptoms are associated with better performance on a delayed match-to-position task

    PubMed Central

    2018-01-01

    Many individuals with posttraumatic stress disorder (PTSD) report experiencing frequent intrusive memories of the original traumatic event (e.g., flashbacks). These memories can be triggered by situations or stimuli that reflect aspects of the trauma and may reflect basic processes in learning and memory, such as generalization. It is possible that, through increased generalization, non-threatening stimuli that once evoked normal memories become associated with traumatic memories. Previous research has reported increased generalization in PTSD, but the role of visual discrimination processes has not been examined. To investigate visual discrimination in PTSD, 143 participants (Veterans and civilians), self-assessed for symptom severity, were grouped according to the presence of severe PTSD symptoms (PTSS) vs. few/no symptoms (noPTSS). Participants were given a visual match-to-sample pattern separation task that varied trials by spatial separation (Low, Medium, High) and temporal delays (5, 10, 20, 30 s). Unexpectedly, the PTSS group demonstrated better discrimination performance than the noPTSS group at the most difficult spatial trials (Low spatial separation). Further assessment of accuracy and reaction time using drift diffusion modeling indicated that the better performance by the PTSS group on the hardest trials was not explained by slower reaction times, but rather by a faster accumulation of evidence during decision making in conjunction with a reduced threshold, indicating a tendency in the PTSS group to decide quickly rather than waiting for additional evidence to support the decision. This result supports the need for future studies examining the precise role of discrimination and generalization in PTSD, and how these cognitive processes might contribute to the expression and maintenance of PTSD symptoms. PMID:29736339
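    The drift diffusion interpretation above (faster evidence accumulation together with a reduced decision threshold) can be illustrated with a minimal simulation; all parameter values here are invented for illustration:

```python
import random

def ddm_trial(drift, threshold, rng, noise=1.0, dt=0.001):
    """One drift-diffusion trial: evidence accumulates with the given
    drift rate until it hits +threshold (correct) or -threshold
    (error). Returns (correct, reaction_time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return x > 0, t

def mean_rt(drift, threshold, n=200, seed=0):
    rng = random.Random(seed)
    return sum(ddm_trial(drift, threshold, rng)[1] for _ in range(n)) / n

# A lower threshold yields faster decisions at the same drift rate,
# as the abstract describes for the PTSS group.
fast = mean_rt(drift=1.0, threshold=0.5)
slow = mean_rt(drift=1.0, threshold=1.5)
print(fast < slow)
```

    Lowering the threshold shortens reaction times but also increases the chance of crossing the wrong boundary, i.e. the familiar speed-accuracy trade-off.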

  10. Fine-grained temporal coding of visually-similar categories in the ventral visual pathway and prefrontal cortex

    PubMed Central

    Xu, Yang; D'Lauro, Christopher; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.

    2013-01-01

    Humans are remarkably proficient at categorizing visually-similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned, with feedback, to discriminate two highly-similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex with the only significant post-learning effect being a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning. PMID:24146656

  11. Cortical potentials evoked by confirming and disconfirming feedback following an auditory discrimination.

    NASA Technical Reports Server (NTRS)

    Squires, K. C.; Hillyard, S. A.; Lindsay, P. H.

    1973-01-01

    Vertex potentials elicited by visual feedback signals following an auditory intensity discrimination have been studied with eight subjects. Feedback signals which confirmed the prior sensory decision elicited small P3s, while disconfirming feedback elicited P3s that were larger. On the average, the latency of P3 was also found to increase with increasing disparity between the judgment and the feedback information. These effects were part of an overall dichotomy in wave shape following confirming vs disconfirming feedback. These findings are incorporated in a general model of the role of P3 in perceptual decision making.

  12. Spelling: A Visual Skill.

    ERIC Educational Resources Information Center

    Hendrickson, Homer

    1988-01-01

    Spelling problems arise due to problems with form discrimination and inadequate visualization. A child's sequence of visual development involves learning motor control and coordination, with vision directing and monitoring the movements; learning visual comparison of size, shape, directionality, and solidity; developing visual memory or recall;…

  13. Relationship between BOLD amplitude and pattern classification of orientation-selective activity in the human visual cortex.

    PubMed

    Tong, Frank; Harrison, Stephenie A; Dewey, John A; Kamitani, Yukiyasu

    2012-11-15

    Orientation-selective responses can be decoded from fMRI activity patterns in the human visual cortex, using multivariate pattern analysis (MVPA). To what extent do these feature-selective activity patterns depend on the strength and quality of the sensory input, and might the reliability of these activity patterns be predicted by the gross amplitude of the stimulus-driven BOLD response? Observers viewed oriented gratings that varied in luminance contrast (4, 20 or 100%) or spatial frequency (0.25, 1.0 or 4.0 cpd). As predicted, activity patterns in early visual areas led to better discrimination of orientations presented at high than low contrast, with greater effects of contrast found in area V1 than in V3. A second experiment revealed generally better decoding of orientations at low or moderate as compared to high spatial frequencies. Interestingly however, V1 exhibited a relative advantage at discriminating high spatial frequency orientations, consistent with the finer scale of representation in the primary visual cortex. In both experiments, the reliability of these orientation-selective activity patterns was well predicted by the average BOLD amplitude in each region of interest, as indicated by correlation analyses, as well as decoding applied to a simple model of voxel responses to simulated orientation columns. Moreover, individual differences in decoding accuracy could be predicted by the signal-to-noise ratio of an individual's BOLD response. Our results indicate that decoding accuracy can be well predicted by incorporating the amplitude of the BOLD response into simple simulation models of cortical selectivity; such models could prove useful in future applications of fMRI pattern classification. Copyright © 2012 Elsevier Inc. All rights reserved.
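    The finding that decoding accuracy tracks BOLD amplitude can be mimicked in a toy simulation: with a fixed noise level, a stronger stimulus-driven signal yields more reliable multivoxel patterns and better classification. The sketch below uses a nearest-centroid classifier as a simplified stand-in for the classifiers typically used in MVPA; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_patterns(n_trials, n_voxels, snr):
    """Toy voxel patterns for two 'orientations': each orientation has
    a fixed spatial pattern scaled by signal strength, plus noise."""
    patterns = rng.standard_normal((2, n_voxels))
    y = rng.integers(0, 2, n_trials)
    X = snr * patterns[y] + rng.standard_normal((n_trials, n_voxels))
    return X, y

def decode_accuracy(X, y):
    # Nearest-centroid 'decoder' with a simple train/test split.
    half = len(y) // 2
    Xtr, ytr, Xte, yte = X[:half], y[:half], X[half:], y[half:]
    c = np.stack([Xtr[ytr == k].mean(axis=0) for k in (0, 1)])
    pred = np.argmin(((Xte[:, None] - c[None]) ** 2).sum(-1), axis=1)
    return (pred == yte).mean()

# Decoding accuracy grows with signal strength, mirroring the better
# orientation decoding observed at high stimulus contrast.
low = decode_accuracy(*simulate_patterns(200, 50, snr=0.1))
high = decode_accuracy(*simulate_patterns(200, 50, snr=1.0))
print(low, high)
```

    The same scaffold can be reused to explore how trial counts or voxel counts trade off against signal strength in such simulations.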

  14. Relationship between BOLD amplitude and pattern classification of orientation-selective activity in the human visual cortex

    PubMed Central

    Tong, Frank; Harrison, Stephenie A.; Dewey, John A.; Kamitani, Yukiyasu

    2012-01-01

    Orientation-selective responses can be decoded from fMRI activity patterns in the human visual cortex, using multivariate pattern analysis (MVPA). To what extent do these feature-selective activity patterns depend on the strength and quality of the sensory input, and might the reliability of these activity patterns be predicted by the gross amplitude of the stimulus-driven BOLD response? Observers viewed oriented gratings that varied in luminance contrast (4, 20 or 100%) or spatial frequency (0.25, 1.0 or 4.0 cpd). As predicted, activity patterns in early visual areas led to better discrimination of orientations presented at high than low contrast, with greater effects of contrast found in area V1 than in V3. A second experiment revealed generally better decoding of orientations at low or moderate as compared to high spatial frequencies. Interestingly however, V1 exhibited a relative advantage at discriminating high spatial frequency orientations, consistent with the finer scale of representation in the primary visual cortex. In both experiments, the reliability of these orientation-selective activity patterns was well predicted by the average BOLD amplitude in each region of interest, as indicated by correlation analyses, as well as decoding applied to a simple model of voxel responses to simulated orientation columns. Moreover, individual differences in decoding accuracy could be predicted by the signal-to-noise ratio of an individual's BOLD response. Our results indicate that decoding accuracy can be well predicted by incorporating the amplitude of the BOLD response into simple simulation models of cortical selectivity; such models could prove useful in future applications of fMRI pattern classification. PMID:22917989

  15. Asymmetric top-down modulation of ascending visual pathways in pigeons.

    PubMed

    Freund, Nadja; Valencia-Alfonso, Carlos E; Kirsch, Janina; Brodmann, Katja; Manns, Martina; Güntürkün, Onur

    2016-03-01

    Cerebral asymmetries are a ubiquitous phenomenon evident in many species, including humans, and they display some similarities in their organization across vertebrates. In many species the left hemisphere is associated with the ability to categorize objects based on abstract or experience-based behaviors. Using the asymmetrically organized visual system of pigeons as an animal model, we show that descending forebrain pathways asymmetrically modulate visually evoked responses of single thalamic units. Activity patterns of neurons within the nucleus rotundus, the largest thalamic visual relay structure in birds, were differently modulated by left and right hemispheric descending systems. Thus, visual information ascending towards the left hemisphere was modulated by forebrain top-down systems at thalamic level, while right thalamic units were strikingly less modulated. This asymmetry of top-down control could promote experience-based processes within the left hemisphere, while biasing the right side towards stimulus-bound response patterns. In a subsequent behavioral task we tested the possible functional impact of this asymmetry. Under monocular conditions, pigeons learned to discriminate color pairs, so that each hemisphere was trained on one specific discrimination. Afterwards the animals were presented with stimuli that put the hemispheres in conflict. Response patterns on the conflicting stimuli revealed a clear dominance of the left hemisphere. Transient inactivation of left hemispheric top-down control reduced this dominance while inactivation of right hemispheric top-down control had no effect on response patterns. Functional asymmetries of descending systems that modify visual ascending pathways seem to play an important role in the superiority of the left hemisphere in experience-based visual tasks. Copyright © 2015. Published by Elsevier Ltd.

  16. Quantifying and visualizing variations in sets of images using continuous linear optimal transport

    NASA Astrophysics Data System (ADS)

    Kolouri, Soheil; Rohde, Gustavo K.

    2014-03-01

Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information contained in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques, such as principal component analysis and linear discriminant analysis, in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes, of variation in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which cannot be done with existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to normal cells.
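
    As a sketch of the analysis this record describes, the fragment below runs PCA (via SVD) in a linear embedding space and steps along the first mode of variation. The embedding vectors here are random placeholders standing in for actual linearized OT transport maps, which are beyond the scope of this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder embeddings: in the method, each image is first mapped to a
    # linearized optimal-transport representation (e.g., a flattened transport
    # map to a reference image); random vectors stand in for those here.
    embeddings = rng.normal(size=(40, 256))   # 40 images, 256-dim embedding

    # Principal modes of variation via SVD of the centered embeddings.
    mean = embeddings.mean(axis=0)
    u, s, vt = np.linalg.svd(embeddings - mean, full_matrices=False)

    # Because the embedding is linear, stepping along a mode yields points that
    # map back to visually meaningful images under the inverse embedding.
    std0 = s[0] / np.sqrt(len(embeddings) - 1)
    mode_images = [mean + t * std0 * vt[0] for t in (-2.0, 0.0, 2.0)]
    print(len(mode_images), mode_images[0].shape)
    ```

    In the actual pipeline each `mode_images` vector would be pushed back through the inverse OT embedding to produce a displayable image.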

  17. Neuronal pattern separation of motion-relevant input in LIP activity

    PubMed Central

    Berberian, Nareg; MacPherson, Amanda; Giraud, Eloïse; Richardson, Lydia

    2016-01-01

In various regions of the brain, neurons discriminate sensory stimuli by decreasing the similarity between ambiguous input patterns. Here, we examine whether this process of pattern separation may drive the rapid discrimination of visual motion stimuli in the lateral intraparietal area (LIP). Starting with a simple mean-rate population model that captures neuronal activity in LIP, we show that overlapping input patterns can be reformatted dynamically to give rise to separated patterns of neuronal activity. The population model predicts that a key ingredient of pattern separation is the presence of heterogeneity in the response of individual units. Furthermore, the model proposes that pattern separation relies on heterogeneity in the temporal dynamics of neural activity and not merely in the mean firing rates of individual neurons over time. We confirm these predictions in recordings of macaque LIP neurons and show that the accuracy of pattern separation is a strong predictor of behavioral performance. Overall, these results suggest that LIP relies on neuronal pattern separation to facilitate decision-relevant discrimination of sensory stimuli. NEW & NOTEWORTHY A new hypothesis is proposed on the role of the lateral intraparietal (LIP) region of cortex during rapid decision making. This hypothesis suggests that LIP alters the representation of ambiguous inputs to reduce their overlap, thus improving sensory discrimination. A combination of computational modeling, theoretical analysis, and electrophysiological data shows that the pattern separation hypothesis links neural activity to behavior and offers novel predictions on the role of LIP during sensory discrimination. PMID:27881719
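
    The pattern-separation idea can be illustrated with a toy mean-rate population (all parameters invented, not the paper's model): heterogeneous gains and thresholds plus rectification reduce the overlap between two nearly identical input patterns.

    ```python
    import numpy as np

    def cosine(a, b):
        """Cosine similarity between two rate vectors."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(1)

    # Two ambiguous (highly overlapping) input patterns.
    base = rng.uniform(0.5, 1.0, size=100)
    in_a = base + 0.05 * rng.normal(size=100)
    in_b = base + 0.05 * rng.normal(size=100)

    # Heterogeneous mean-rate units: a rectifying nonlinearity with unit-specific
    # gains and thresholds expands small differences between the inputs.
    gains = rng.uniform(1.0, 8.0, size=100)
    thresholds = rng.uniform(0.4, 0.9, size=100)

    def rate(x):
        return np.maximum(gains * (x - thresholds), 0.0)

    out_a, out_b = rate(in_a), rate(in_b)
    print(cosine(in_a, in_b), cosine(out_a, out_b))  # output overlap is lower
    ```

    Removing the heterogeneity (identical gains and thresholds for all units) largely eliminates the separation, mirroring the model's prediction that unit heterogeneity is the key ingredient.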

  18. Where You Look Matters for Body Perception: Preferred Gaze Location Contributes to the Body Inversion Effect

    PubMed Central

    McKean, Danielle L.; Tsao, Jack W.; Chan, Annie W.-Y.

    2017-01-01

The Body Inversion Effect (BIE; reduced visual discrimination performance for inverted compared to upright bodies) suggests that bodies are visually processed configurally; however, the specific importance of head posture information in the BIE has been indicated in reports of BIE reduction for whole bodies with fixed head position and for headless bodies. Through measurement of gaze patterns and investigation of the causal relation of fixation location to visual body discrimination performance, the present study reveals joint contributions of feature and configuration processing to visual body discrimination. Participants predominantly gazed at the (body-centric) upper body for upright bodies and the lower body for inverted bodies in the context of an experimental paradigm directly comparable to that of prior studies of the BIE. Subsequent manipulation of fixation location indicates that these preferential gaze locations causally contributed to the BIE for whole bodies largely due to the informative nature of gazing at or near the head. Also, a BIE was detected for both whole and headless bodies even when fixation location on the body was held constant, indicating a role of configural processing in body discrimination, though inclusion of the head posture information was still highly discriminative in the context of such processing. Interestingly, the impact of configuration (upright and inverted) on the BIE appears greater than that of differential preferred gaze locations. PMID:28085894

  19. Learning and recall of form discriminations during reversible cooling deactivation of ventral-posterior suprasylvian cortex in the cat.

    PubMed Central

    Lomber, S G; Payne, B R; Cornwell, P

    1996-01-01

Extrastriate visual cortex of the ventral-posterior suprasylvian gyrus (vPS cortex) of freely behaving cats was reversibly deactivated with cooling to determine its role in performance on a battery of simple or masked two-dimensional pattern discriminations, and three-dimensional object discriminations. Deactivation of vPS cortex by cooling profoundly impaired the ability of the cats to recall the difference between all previously learned pattern and object discriminations. However, the cats' ability to learn or relearn pattern and object discriminations while vPS was deactivated depended upon the nature of the pattern or object and the cats' prior level of exposure to them. During cooling of vPS cortex, the cats could neither learn the novel object discriminations nor relearn a highly familiar masked or partially occluded pattern discrimination, although they could relearn both the highly familiar object and simple pattern discriminations. These cooling-induced deficits resemble those induced by cooling of the topologically equivalent inferotemporal cortex of monkeys and provide evidence that the equivalent regions contribute to visual processing in similar ways. PMID:8643686

  20. Enhanced pure-tone pitch discrimination among persons with autism but not Asperger syndrome.

    PubMed

    Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A; Mottron, Laurent

    2010-07-01

Persons with Autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis. Based on these findings, Samson, Mottron, Jemel, Belin, and Ciocca (2006) proposed to extend the neural complexity hypothesis to the auditory modality. They hypothesized that persons with ASD should display enhanced performance for simple tones that are processed in primary auditory cortical regions, but diminished performance for complex tones that require additional processing in associative auditory regions, in comparison to typically developing individuals. To assess this hypothesis, we designed four auditory discrimination experiments targeting pitch, non-vocal and vocal timbre, and loudness. Stimuli consisted of spectro-temporally simple and complex tones. The participants were adolescents and young adults with autism, Asperger syndrome, and typical developmental histories, all with IQs in the normal range. Consistent with the neural complexity hypothesis and the enhanced perceptual functioning model of ASD (Mottron, Dawson, Soulières, Hubert, & Burack, 2006), the participants with autism, but not those with Asperger syndrome, displayed enhanced pitch discrimination for simple tones. However, no discrimination-threshold differences were found between the participants with ASD and the typically developing persons across spectrally and temporally complex conditions. These findings indicate that enhanced pure-tone pitch discrimination may be a cognitive correlate of speech delay among persons with ASD. However, auditory discrimination among this group does not appear to be directly contingent on the spectro-temporal complexity of the stimuli. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  1. Spectral discrimination in color blind animals via chromatic aberration and pupil shape.

    PubMed

    Stubbs, Alexander L; Stubbs, Christopher W

    2016-07-19

    We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide "color-blind" animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. We quantitatively show, through numerical simulations, how chromatic aberration can be exploited to obtain spectral information, especially through nonaxial pupils that are characteristic of coleoid cephalopods. We have also assessed the inherent ambiguity between range and color that is a consequence of the chromatic variation of best focus with wavelength. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins.
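
    The wavelength-dependent focus underlying this mechanism can be sketched with a generic Cauchy dispersion model. The coefficients below are illustrative placeholders, not measured cephalopod lens values; the point is only that shorter wavelengths come to focus closer to the lens.

    ```python
    # Illustrative Cauchy dispersion coefficients for a generic lens material
    # (A and B are assumptions, not measurements).
    A, B = 1.50, 5000.0           # B in nm^2
    f_ref, lam_ref = 10.0, 550.0  # reference focal length (mm) at 550 nm

    def refractive_index(lam_nm):
        """Cauchy's equation: n(lambda) = A + B / lambda^2."""
        return A + B / lam_nm**2

    def focal_length(lam_nm):
        """Lensmaker scaling: f is proportional to 1 / (n - 1)."""
        n_ref = refractive_index(lam_ref)
        return f_ref * (n_ref - 1.0) / (refractive_index(lam_nm) - 1.0)

    # Blue light focuses at a shorter distance than red light.
    for lam in (450.0, 550.0, 650.0):
        print(int(lam), round(focal_length(lam), 3))
    ```

    An off-axis pupil exaggerates this chromatic blur, which is what lets focus position serve as a proxy for wavelength in the proposed mechanism.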

  2. Support for Lateralization of the Whorf Effect beyond the Realm of Color Discrimination

    ERIC Educational Resources Information Center

    Gilbert, Aubrey L.; Regier, Terry; Kay, Paul; Ivry, Richard B.

    2008-01-01

    Recent work has shown that Whorf effects of language on color discrimination are stronger in the right visual field than in the left. Here we show that this phenomenon is not limited to color: The perception of animal figures (cats and dogs) was more strongly affected by linguistic categories for stimuli presented to the right visual field than…

  3. Perceptual learning of basic visual features remains task specific with Training-Plus-Exposure (TPE) training

    PubMed Central

    Cong, Lin-Juan; Wang, Ru-Jie; Yu, Cong; Zhang, Jun-Yun

    2016-01-01

Visual perceptual learning is known to be specific to the trained retinal location, feature, and task. However, location and feature specificity can be eliminated by double-training or TPE training protocols, in which observers receive additional exposure to the transfer location or feature dimension via an irrelevant task besides the primary learning task. Here we tested whether these new training protocols could even make learning transfer across different tasks involving discrimination of basic visual features (e.g., orientation and contrast). Observers practiced a near-threshold orientation (or contrast) discrimination task. Following a TPE training protocol, they also received exposure to the transfer task via performing suprathreshold contrast (or orientation) discrimination in alternating blocks of trials in the same sessions. The results showed no evidence for significant learning transfer to the untrained near-threshold contrast (or orientation) discrimination task after discounting the pretest effects and the suprathreshold practice effects. These results thus do not support a hypothetical task-independent component in perceptual learning of basic visual features. They also set the boundary of the new training protocols in their capability to enable learning transfer. PMID:26873777

  4. Melanopsin-based brightness discrimination in mice and humans.

    PubMed

    Brown, Timothy M; Tsujimura, Sei-Ichi; Allen, Annette E; Wynne, Jonathan; Bedford, Robert; Vickery, Graham; Vugler, Anthony; Lucas, Robert J

    2012-06-19

Photoreception in the mammalian retina is not restricted to rods and cones but extends to a small number of intrinsically photoreceptive retinal ganglion cells (ipRGCs), expressing the photopigment melanopsin. ipRGCs are known to support various accessory visual functions including circadian photoentrainment and pupillary reflexes. However, despite anatomical and physiological evidence that they contribute to the thalamocortical visual projection, no aspect of visual discrimination has been shown to rely upon ipRGCs. Based on their currently known roles, we hypothesized that ipRGCs may contribute to distinguishing brightness. This percept is related to an object's luminance, a photometric measure of light intensity relevant for cone photoreceptors. However, the perceived brightness of different sources is not always predicted by their respective luminance. Here, we used parallel behavioral and electrophysiological experiments to first show that melanopsin contributes to brightness discrimination in both retinally degenerate and fully sighted mice. We continued to use comparable paradigms in psychophysical experiments to provide evidence for a similar role in healthy human subjects. These data represent the first direct evidence that an aspect of visual discrimination in normally sighted subjects can be supported by inner retinal photoreceptors. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Effective real-time vehicle tracking using discriminative sparse coding on local patches

    NASA Astrophysics Data System (ADS)

    Chen, XiangJun; Ye, Feiyue; Ruan, Yaduan; Chen, Qimei

    2016-01-01

A visual tracking framework comprising an object detector and tracker, aimed at effective and efficient visual tracking in surveillance for real-world intelligent transport system applications, is proposed. The framework casts the tracking task as problems of object detection, feature representation, and classification, which differs from appearance model-matching approaches. Through a feature representation of discriminative sparse coding on local patches called DSCLP, which trains a dictionary on local clustered patches sampled from both positive and negative datasets, the discriminative power and robustness have been improved remarkably, making the method more robust in complex realistic settings with degraded image quality. Moreover, by catching objects through one-time background subtraction, along with offline dictionary training, computation time is dramatically reduced, which enables the framework to achieve real-time tracking performance even in a high-definition sequence with heavy traffic. Experimental results show that our work outperforms some state-of-the-art methods in terms of speed, accuracy, and robustness, and exhibits increased robustness in a complex real-world scenario with degraded image quality caused by vehicle occlusion, image blur from rain or fog, and changes in viewpoint or scale.
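
    To illustrate the general idea of classifying a patch by sparse reconstruction error against class-specific dictionaries (a toy stand-in, not the paper's DSCLP training procedure), here is a minimal matching-pursuit sketch with random dictionaries:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def sparse_code(x, D, k=3):
        """Greedy matching pursuit: approximate x with k atoms of D."""
        r, coef = x.copy(), np.zeros(D.shape[1])
        for _ in range(k):
            j = int(np.argmax(np.abs(D.T @ r)))  # most correlated atom
            a = D[:, j] @ r
            coef[j] += a
            r -= a * D[:, j]
        return coef, r

    # Toy "positive" (vehicle) and "negative" (background) dictionaries with
    # unit-norm columns; in the paper these are learned from local patches.
    D_pos = rng.normal(size=(64, 32)); D_pos /= np.linalg.norm(D_pos, axis=0)
    D_neg = rng.normal(size=(64, 32)); D_neg /= np.linalg.norm(D_neg, axis=0)

    # Classify a patch by which dictionary reconstructs it better.
    patch = D_pos[:, :3] @ np.array([1.0, 0.5, 0.2])  # built from positive atoms
    _, r_pos = sparse_code(patch, D_pos)
    _, r_neg = sparse_code(patch, D_neg)
    label = "vehicle" if np.linalg.norm(r_pos) < np.linalg.norm(r_neg) else "background"
    print(label)
    ```

    The discriminative twist in methods like DSCLP is that the dictionaries are trained jointly with the classifier, rather than fixed at random as in this sketch.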

  6. Nonlinear dimensionality reduction methods for synthetic biology biobricks' visualization.

    PubMed

    Yang, Jiaoyun; Wang, Haipeng; Ding, Huitong; An, Ning; Alterovitz, Gil

    2017-01-19

Visualizing data by dimensionality reduction is an important strategy in bioinformatics, which can help to discover hidden data properties and detect data quality issues, e.g., data noise or inappropriately labeled data. As crowdsourcing-based synthetic biology databases face similar data quality issues, we propose to visualize biobricks to tackle them. However, existing dimensionality reduction methods cannot be directly applied to biobrick datasets. Here, we use normalized edit distance to enhance dimensionality reduction methods, including Isomap and Laplacian Eigenmaps. By extracting biobricks from the synthetic biology database Registry of Standard Biological Parts, six combinations of various types of biobricks are tested. The visualization graphs illustrate discriminated biobricks and inappropriately labeled biobricks. The clustering algorithm K-means is adopted to quantify the reduction results. The average clustering accuracies for Isomap and Laplacian Eigenmaps are 0.857 and 0.844, respectively. Moreover, Laplacian Eigenmaps is five times faster than Isomap, and its visualization graph separates biobricks more distinctly. By combining normalized edit distance with Isomap and Laplacian Eigenmaps, synthetic biology biobricks are successfully visualized in two-dimensional space. Various types of biobricks can be discriminated and inappropriately labeled biobricks can be identified, which helps to assess the quality of crowdsourcing-based synthetic biology databases and to guide biobrick selection.
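
    The normalized edit distance used to adapt Isomap and Laplacian Eigenmaps to sequence data can be computed as below. Normalizing by the longer sequence length is one common choice and may differ from the paper's exact normalization.

    ```python
    def edit_distance(a, b):
        """Classic Levenshtein distance via dynamic programming (two rows)."""
        m, n = len(a), len(b)
        prev = list(range(n + 1))
        for i in range(1, m + 1):
            cur = [i] + [0] * n
            for j in range(1, n + 1):
                cur[j] = min(prev[j] + 1,                          # deletion
                             cur[j - 1] + 1,                       # insertion
                             prev[j - 1] + (a[i - 1] != b[j - 1])) # substitution
            prev = cur
        return prev[n]

    def normalized_edit_distance(a, b):
        """Edit distance scaled to [0, 1] by the longer sequence length."""
        if not a and not b:
            return 0.0
        return edit_distance(a, b) / max(len(a), len(b))

    print(normalized_edit_distance("ATGGCC", "ATGCC"))
    ```

    The resulting pairwise distance matrix can then be handed to manifold-learning methods that accept precomputed distances in place of Euclidean ones.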

  7. Discrimination of Mediterranean mussel (Mytilus galloprovincialis) feces in deposited materials by fecal morphology.

    PubMed

    Akiyama, Yoshihiro B; Iseri, Erina; Kataoka, Tomoya; Tanaka, Makiko; Katsukoshi, Kiyonori; Moki, Hirotada; Naito, Ryoji; Hem, Ramrav; Okada, Tomonari

    2017-02-15

    In the present study, we determined the common morphological characteristics of the feces of Mytilus galloprovincialis to develop a method for visually discriminating the feces of this mussel in deposited materials. This method can be used to assess the effect of mussel feces on benthic environments. The accuracy of visual morphology-based discrimination of mussel feces in deposited materials was confirmed by DNA analysis. Eighty-nine percent of mussel feces shared five common morphological characteristics. Of the 372 animal species investigated, only four species shared all five of these characteristics. More than 96% of the samples were visually identified as M. galloprovincialis feces on the basis of morphology of the particles containing the appropriate mitochondrial DNA. These results suggest that mussel feces can be discriminated with high accuracy on the basis of their morphological characteristics. Thus, our method can be used to quantitatively assess the effect of mussel feces on local benthic environments. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. A PDP model of the simultaneous perception of multiple objects

    NASA Astrophysics Data System (ADS)

    Henderson, Cynthia M.; McClelland, James L.

    2011-06-01

    Illusory conjunctions in normal and simultanagnosic subjects are two instances where the visual features of multiple objects are incorrectly 'bound' together. A connectionist model explores how multiple objects could be perceived correctly in normal subjects given sufficient time, but could give rise to illusory conjunctions with damage or time pressure. In this model, perception of two objects benefits from lateral connections between hidden layers modelling aspects of the ventral and dorsal visual pathways. As with simultanagnosia, simulations of dorsal lesions impair multi-object recognition. In contrast, a large ventral lesion has minimal effect on dorsal functioning, akin to dissociations between simple object manipulation (retained in visual form agnosia and semantic dementia) and object discrimination (impaired in these disorders) [Hodges, J.R., Bozeat, S., Lambon Ralph, M.A., Patterson, K., and Spatt, J. (2000), 'The Role of Conceptual Knowledge: Evidence from Semantic Dementia', Brain, 123, 1913-1925; Milner, A.D., and Goodale, M.A. (2006), The Visual Brain in Action (2nd ed.), New York: Oxford]. It is hoped that the functioning of this model might suggest potential processes underlying dorsal and ventral contributions to the correct perception of multiple objects.

  9. Humans do not have direct access to retinal flow during walking

    PubMed Central

    Souman, Jan L.; Freeman, Tom C.A.; Eikmeier, Verena; Ernst, Marc O.

    2013-01-01

    Perceived visual speed has been reported to be reduced during walking. This reduction has been attributed to a partial subtraction of walking speed from visual speed (Durgin & Gigone, 2007; Durgin, Gigone, & Scott, 2005). We tested whether observers still have access to the retinal flow before subtraction takes place. Observers performed a 2IFC visual speed discrimination task while walking on a treadmill. In one condition, walking speed was identical in the two intervals, while in a second condition walking speed differed between intervals. If observers have access to the retinal flow before subtraction, any changes in walking speed across intervals should not affect their ability to discriminate retinal flow speed. Contrary to this “direct-access hypothesis”, we found that observers were worse at discrimination when walking speed differed between intervals. The results therefore suggest that observers do not have access to retinal flow before subtraction. We also found that the amount of subtraction depended on the visual speed presented, suggesting that the interaction between the processing of visual input and of self-motion is more complex than previously proposed. PMID:20884509

  10. Quality metrics in high-dimensional data visualization: an overview and systematization.

    PubMed

    Bertini, Enrico; Tatu, Andrada; Keim, Daniel

    2011-12-01

    In this paper, we present a systematization of techniques that use quality metrics to help in the visual exploration of meaningful patterns in high-dimensional data. In a number of recent papers, different quality metrics are proposed to automate the demanding search through large spaces of alternative visualizations (e.g., alternative projections or ordering), allowing the user to concentrate on the most promising visualizations suggested by the quality metrics. Over the last decade, this approach has witnessed a remarkable development but few reflections exist on how these methods are related to each other and how the approach can be developed further. For this purpose, we provide an overview of approaches that use quality metrics in high-dimensional data visualization and propose a systematization based on a thorough literature review. We carefully analyze the papers and derive a set of factors for discriminating the quality metrics, visualization techniques, and the process itself. The process is described through a reworked version of the well-known information visualization pipeline. We demonstrate the usefulness of our model by applying it to several existing approaches that use quality metrics, and we provide reflections on implications of our model for future research. © 2010 IEEE

  11. Refining Stimulus Parameters in Assessing Infant Speech Perception Using Visual Reinforcement Infant Speech Discrimination: Sensation Level.

    PubMed

    Uhler, Kristin M; Baca, Rosalinda; Dudas, Emily; Fredrickson, Tammy

    2015-01-01

Speech perception measures have long been considered an integral piece of the audiological assessment battery. Currently, a prelinguistic, standardized measure of speech perception is missing in the clinical assessment battery for infants and young toddlers. Such a measure would allow systematic assessment of speech perception abilities of infants as well as the potential to investigate the impact early identification of hearing loss and early fitting of amplification have on the auditory pathways. To investigate the impact of sensation level (SL) on the ability of infants with normal hearing (NH) to discriminate /a-i/ and /ba-da/ and to determine if performance on the two contrasts is significantly different in predicting the discrimination criterion. The design was based on a survival analysis model for event occurrence and a repeated measures logistic model for binary outcomes. The outcome for survival analysis was the minimum SL for criterion and the outcome for the logistic regression model was the presence/absence of achieving the criterion. Criterion achievement was designated when an infant's proportion correct score was >0.75 on the discrimination performance task. Twenty-two infants with NH sensitivity participated in this study. There were 9 males and 13 females, aged 6-14 mo. Testing took place over two to three sessions. The first session consisted of a hearing test, threshold assessment of the two speech sounds (/a/ and /i/), and, if time and attention allowed, visual reinforcement infant speech discrimination (VRISD). The second session consisted of VRISD assessment for the two test contrasts (/a-i/ and /ba-da/). The presentation level started at 50 dBA. If the infant was unable to successfully achieve criterion (>0.75) at 50 dBA, the presentation level was increased to 70 dBA followed by 60 dBA. Data examination included an event analysis, which provided the probability of criterion distribution across SL. The second stage of the analysis was a repeated measures logistic regression where SL and contrast were used to predict the likelihood of speech discrimination criterion. Infants were able to reach criterion for the /a-i/ contrast at statistically lower SLs when compared to /ba-da/. There were six infants who never reached criterion for /ba-da/ and one who never reached criterion for /a-i/. The conditional probability of not reaching criterion by 70 dB SL was 0% for /a-i/ and 21% for /ba-da/. The predictive logistic regression model showed that children were more likely to discriminate the /a-i/ contrast even when controlling for SL. Nearly all normal-hearing infants can demonstrate discrimination criterion of a vowel contrast at 60 dB SL, while a level of ≥70 dB SL may be needed to allow all infants to demonstrate discrimination criterion of a difficult consonant contrast. American Academy of Audiology.
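
    A hedged sketch of the second-stage analysis: a logistic model predicting criterion achievement from SL and contrast, fit by gradient ascent on invented data (the effect sizes below are assumptions for illustration, not the study's estimates).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 400

    # Toy data: sensation level in dB SL (centered and scaled) and contrast
    # coded 0 = /a-i/ (easier) or 1 = /ba-da/ (harder).
    sl = rng.choice([30.0, 40.0, 50.0, 60.0, 70.0], size=n)
    sl_z = (sl - 50.0) / 10.0
    contrast = rng.integers(0, 2, size=n).astype(float)

    # Assumed "true" effects: criterion more likely at higher SL, less likely
    # for the harder consonant contrast.
    true_logit = 0.2 + 0.8 * sl_z - 1.2 * contrast
    reached = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))).astype(float)

    # Fit the logistic model by plain gradient ascent on the log-likelihood.
    X = np.column_stack([np.ones(n), sl_z, contrast])
    w = np.zeros(3)
    for _ in range(3000):
        p = 1 / (1 + np.exp(-X @ w))
        w += 0.1 * X.T @ (reached - p) / n
    print(w)  # SL coefficient positive, contrast coefficient negative
    ```

    A repeated-measures analysis as in the study would additionally model within-infant correlation (e.g., with a subject-level random effect), which this sketch omits.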

  12. Multiple Concurrent Visual-Motor Mappings: Implications for Models of Adaptation

    NASA Technical Reports Server (NTRS)

    Cunningham, H. A.; Welch, Robert B.

    1994-01-01

Previous research on adaptation to visual-motor rearrangement suggests that the central nervous system represents accurately only 1 visual-motor mapping at a time. This idea was examined in 3 experiments where subjects tracked a moving target under repeated alternations between 2 initially interfering mappings (the 'normal' mapping characteristic of computer input devices and a 108° rotation of the normal mapping). Alternation between the 2 mappings led to significant reduction in error under the rotated mapping and significant reduction in the adaptation aftereffect ordinarily caused by switching between mappings. Color as a discriminative cue, interference versus decay in adaptation aftereffect, and intermanual transfer were also examined. The results reveal a capacity for multiple concurrent visual-motor mappings, possibly controlled by a parametric process near the motor output stage of processing.
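
    The rotated visual-motor mapping used in such experiments amounts to a 2-D rotation of the input-device displacement before it drives the cursor; a minimal sketch:

    ```python
    import math

    def apply_mapping(dx, dy, degrees=0.0):
        """Map an input-device displacement to cursor motion; a nonzero angle
        gives a rotated mapping (108 degrees in the experiments above)."""
        th = math.radians(degrees)
        return (dx * math.cos(th) - dy * math.sin(th),
                dx * math.sin(th) + dy * math.cos(th))

    # A rightward hand movement under the normal vs. rotated mapping.
    normal = apply_mapping(1.0, 0.0)
    rotated = apply_mapping(1.0, 0.0, degrees=108.0)
    print(normal, rotated)
    ```

    Under the rotated mapping, a purely rightward hand movement drives the cursor up and to the left, which is what initially makes the two mappings interfere.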

  13. Smell or vision? The use of different sensory modalities in predator discrimination.

    PubMed

    Fischer, Stefan; Oberhummer, Evelyne; Cunha-Saraiva, Filipa; Gerber, Nina; Taborsky, Barbara

    2017-01-01

Theory predicts that animals should adjust their escape responses to the perceived predation risk. The information animals obtain about potential predation risk may differ qualitatively depending on the sensory modality by which a cue is perceived. For instance, olfactory cues may reveal better information about the presence or absence of threats, whereas visual information can reliably transmit the position and potential attack distance of a predator. While this suggests a differential use of information perceived through the two sensory channels, the relative importance of visual vs. olfactory cues when distinguishing between different predation threats is still poorly understood. Therefore, we exposed individuals of the cooperatively breeding cichlid Neolamprologus pulcher to a standardized threat stimulus combined with either predator or non-predator cues presented either visually or chemically. We predicted that flight responses towards a threat stimulus are more pronounced if cues of dangerous rather than harmless heterospecifics are presented and that N. pulcher, being an aquatic species, relies more on olfaction when discriminating between dangerous and harmless heterospecifics. N. pulcher responded faster to the threat stimulus, reached a refuge faster, and was more likely to enter a refuge when predator cues were perceived. Unexpectedly, the sensory modality used to perceive the cues did not affect the escape response or the duration of the recovery phase. This suggests that N. pulcher are able to discriminate heterospecific cues with similar acuity when using vision or olfaction. We discuss that this ability may be advantageous in aquatic environments where visibility conditions vary strongly over time. The ability to rapidly discriminate between dangerous predators and harmless heterospecifics is crucial for the survival of prey animals. In seasonally fluctuating environments, sensory conditions may change over the year, making the use of multiple sensory modalities for heterospecific discrimination highly beneficial. Here we compared the efficacy of visual and olfactory senses in the discrimination ability of the cooperatively breeding cichlid Neolamprologus pulcher. We presented individual fish with visual or olfactory cues of predators or harmless heterospecifics and recorded their flight response. When exposed to predator cues, individuals responded faster, reached a refuge faster, and were more likely to enter the refuge. Unexpectedly, the olfactory and visual senses seemed to be equally efficient in this discrimination task, suggesting that the seasonal variation of water conditions experienced by N. pulcher may necessitate the use of multiple sensory channels for the same task.

  14. Multi-level discriminative dictionary learning with application to large scale image classification.

    PubMed

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification tasks) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.

  15. Evaluation of a pilot workload metric for simulated VTOL landing tasks

    NASA Technical Reports Server (NTRS)

    North, R. A.; Graffunder, K.

    1979-01-01

    A methodological approach to measuring workload was investigated for evaluation of new concepts in VTOL aircraft displays. Multivariate discriminant functions were formed from conventional flight performance and/or visual response variables to maximize detection of experimental differences. The flight performance variable discriminant showed maximum differentiation between crosswind conditions. The visual response measure discriminant maximized differences between fixed vs. motion base conditions and experimental displays. Physiological variables were used to attempt to predict the discriminant function values for each subject/condition/trial. The weights of the physiological variables in these equations showed agreement with previous studies. High muscle tension, light but irregular breathing patterns, and higher heart rate with low amplitude all produced higher scores on this scale and thus, represented higher workload levels.

  16. Visual perceptual load induces inattentional deafness.

    PubMed

    Macdonald, James S P; Lavie, Nilli

    2011-08-01

    In this article, we establish a new phenomenon of "inattentional deafness" and highlight the level of load on visual attention as a critical determinant of this phenomenon. In three experiments, we modified an inattentional blindness paradigm to assess inattentional deafness. Participants made either a low- or high-load visual discrimination concerning a cross shape (respectively, a discrimination of line color or of line length with a subtle length difference). A brief pure tone was presented simultaneously with the visual task display on a final trial. Failures to notice the presence of this tone (i.e., inattentional deafness) reached a rate of 79% in the high-visual-load condition, significantly more than in the low-load condition. These findings establish the phenomenon of inattentional deafness under visual load, thereby extending the load theory of attention (e.g., Lavie, Journal of Experimental Psychology. Human Perception and Performance, 25, 596-616, 1995) to address the cross-modal effects of visual perceptual load.

  17. Characteristic and intermingled neocortical circuits encode different visual object discriminations.

    PubMed

    Zhang, Guo-Rong; Zhao, Hua; Cook, Nathan; Svestka, Michael; Choi, Eui M; Jan, Mary; Cook, Robert G; Geller, Alfred I

    2017-07-28

Synaptic plasticity and neural network theories hypothesize that the essential information for advanced cognitive tasks is encoded in specific circuits and neurons within distributed neocortical networks. However, these circuits are incompletely characterized, and we do not know if a specific discrimination is encoded in characteristic circuits among multiple animals. Here, we determined the spatial distribution of active neurons for a circuit that encodes some of the essential information for a cognitive task. We genetically activated protein kinase C pathways in several hundred spatially-grouped glutamatergic and GABAergic neurons in rat postrhinal cortex, a multimodal associative area that is part of a distributed circuit that encodes visual object discriminations. We previously established that this intervention enhances accuracy for specific discriminations. Moreover, the genetically-modified, local circuit in POR cortex encodes some of the essential information, and this local circuit is preferentially activated during performance, as shown by activity-dependent gene imaging. Here, we mapped the positions of the active neurons, which revealed that two image sets are encoded in characteristic and different circuits. While characteristic circuits are known to process sensory information in sensory areas, this is the first demonstration that characteristic circuits encode specific discriminations in a multimodal associative area. Further, the circuits encoding the two image sets are intermingled, and likely overlapping, enabling efficient encoding. Consistent with reconsolidation theories, intermingled and overlapping encoding could facilitate formation of associations between related discriminations, including visually similar discriminations or discriminations learned at the same time or place. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Learning and Discrimination of Audiovisual Events in Human Infants: The Hierarchical Relation between Intersensory Temporal Synchrony and Rhythmic Pattern Cues.

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2003-01-01

    Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…

  19. Explaining the Timing of Natural Scene Understanding with a Computational Model of Perceptual Categorization

    PubMed Central

    Sofer, Imri; Crouzet, Sébastien M.; Serre, Thomas

    2015-01-01

    Observers can rapidly perform a variety of visual tasks such as categorizing a scene as open, as outdoor, or as a beach. Although we know that different tasks are typically associated with systematic differences in behavioral responses, to date, little is known about the underlying mechanisms. Here, we implemented a single integrated paradigm that links perceptual processes with categorization processes. Using a large image database of natural scenes, we trained machine-learning classifiers to derive quantitative measures of task-specific perceptual discriminability based on the distance between individual images and different categorization boundaries. We showed that the resulting discriminability measure accurately predicts variations in behavioral responses across categorization tasks and stimulus sets. We further used the model to design an experiment, which challenged previous interpretations of the so-called “superordinate advantage.” Overall, our study suggests that observed differences in behavioral responses across rapid categorization tasks reflect natural variations in perceptual discriminability. PMID:26335683
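    The boundary-distance measure described above can be sketched in a few lines. The snippet below is an illustrative assumption (synthetic features, a plain logistic-regression classifier), not the study's actual classifiers or image database: each image is scored by its distance to the learned categorization boundary.

```python
# Hedged sketch of a boundary-distance discriminability measure: train a
# linear classifier, then score each image by its distance to the category
# boundary. Features here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "image features" for two scene categories (e.g. beach / not beach)
X = np.vstack([rng.normal(-1.0, 1.0, (200, 10)),
               rng.normal(+1.0, 1.0, (200, 10))])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

# Distance of each image to the boundary w.x + b = 0, normalized by ||w||;
# its magnitude serves as the task-specific discriminability measure.
w, b = clf.coef_.ravel(), clf.intercept_[0]
discriminability = np.abs(X @ w + b) / np.linalg.norm(w)

# Prediction in the paper's spirit: images far from the boundary are
# categorized faster and more accurately than images near it.
print(discriminability.shape)  # (400,)
```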

  20. Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.

    PubMed

    de Jong, Ritske; Toffanin, Paolo; Harbers, Marten

    2010-01-01

Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations, where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: an attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
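    As a minimal illustration of the frequency-tagging readout (a toy sketch, not the study's EEG pipeline), each source flickers at its own frequency and its steady-state response is recovered as the FFT amplitude at that tagging frequency:

```python
# Illustrative frequency-tagging sketch on a synthetic signal: two tagged
# sources plus noise; each SSR is the FFT amplitude at its tag frequency.
import numpy as np

fs, dur = 500.0, 4.0                       # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(4)
eeg = (1.0 * np.sin(2 * np.pi * 10 * t)    # "visual" tag at 10 Hz
       + 0.5 * np.sin(2 * np.pi * 40 * t)  # "auditory" tag at 40 Hz
       + rng.normal(0.0, 1.0, t.size))     # background noise

spec = np.abs(np.fft.rfft(eeg)) / t.size   # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

amp_10 = spec[np.argmin(np.abs(freqs - 10))]
amp_40 = spec[np.argmin(np.abs(freqs - 40))]
print(amp_10 > amp_40)  # the stronger 10 Hz tag yields the larger SSR
```

    Attention effects such as those reported above would then appear as changes in these per-frequency amplitudes across conditions.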

  1. Bilateral lesions of nucleus subpretectalis/interstitio-pretecto-subpretectalis (SP/IPS) selectively impair figure-ground discrimination in pigeons.

    PubMed

    Scully, Erin N; Acerbo, Martin J; Lazareva, Olga F

    2014-01-01

    Earlier, we reported that nucleus rotundus (Rt) together with its inhibitory complex, nucleus subpretectalis/interstitio-pretecto-subpretectalis (SP/IPS), had significantly higher activity in pigeons performing figure-ground discrimination than in the control group that did not perform any visual discriminations. In contrast, color discrimination produced significantly higher activity than control in the Rt but not in the SP/IPS. Finally, shape discrimination produced significantly lower activity than control in both the Rt and the SP/IPS. In this study, we trained pigeons to simultaneously perform three visual discriminations (figure-ground, color, and shape) using the same stimulus displays. When birds learned to perform all three tasks concurrently at high levels of accuracy, we conducted bilateral chemical lesions of the SP/IPS. After a period of recovery, the birds were retrained on the same tasks to evaluate the effect of lesions on maintenance of these discriminations. We found that the lesions of the SP/IPS had no effect on color or shape discrimination and that they significantly impaired figure-ground discrimination. Together with our earlier data, these results suggest that the nucleus Rt and the SP/IPS are the key structures involved in figure-ground discrimination. These results also imply that thalamic processing is critical for figure-ground segregation in avian brain.

  2. Metabolic Pathways Visualization Skills Development by Undergraduate Students

    ERIC Educational Resources Information Center

    dos Santos, Vanessa J. S. V.; Galembeck, Eduardo

    2015-01-01

    We have developed a metabolic pathways visualization skill test (MPVST) to gain greater insight into our students' abilities to comprehend the visual information presented in metabolic pathways diagrams. The test is able to discriminate students' visualization ability with respect to six specific visualization skills that we identified as key to…

  3. Learning alters theta amplitude, theta-gamma coupling and neuronal synchronization in inferotemporal cortex.

    PubMed

    Kendrick, Keith M; Zhan, Yang; Fischer, Hanno; Nicol, Alister U; Zhang, Xuejuan; Feng, Jianfeng

    2011-06-09

How oscillatory brain rhythms alone, or in combination, influence cortical information processing to support learning has yet to be fully established. Local field potential and multi-unit neuronal activity recordings were made from 64-electrode arrays in the inferotemporal cortex of conscious sheep during and after visual discrimination learning of face or object pairs. A neural network model has been developed to simulate and aid functional interpretation of learning-evoked changes. Following learning, the amplitude of theta (4-8 Hz), but not gamma (30-70 Hz), oscillations was increased, as was the ratio of theta to gamma. Over 75% of electrodes showed significant coupling between theta phase and gamma amplitude (theta-nested gamma). The strength of this coupling was also increased following learning, and this was not simply a consequence of increased theta amplitude. Actual discrimination performance was significantly correlated with theta and theta-gamma coupling changes. Neuronal activity was phase-locked with theta, but learning had no effect on firing rates or the magnitude or latencies of visual evoked potentials during stimuli. The neural network model showed that a combination of fast and slow inhibitory interneurons could generate theta-nested gamma. Increasing N-methyl-D-aspartate receptor sensitivity in the model produced changes similar to those seen in inferotemporal cortex after learning. The model showed that these changes could potentiate the firing of downstream neurons by a temporal desynchronization of excitatory neuron output without increasing the firing frequencies of the latter. This desynchronization effect was confirmed in IT neuronal activity following learning, and its magnitude was correlated with discrimination performance. Face discrimination learning produces significant increases in both theta amplitude and the strength of theta-gamma coupling in the inferotemporal cortex which are correlated with behavioral performance.
A network model which can reproduce these changes suggests that a key function of such learning-evoked alterations in theta and theta-nested gamma activity may be increased temporal desynchronization in neuronal firing leading to optimal timing of inputs to downstream neural networks potentiating their responses. In this way learning can produce potentiation in neural networks simply through altering the temporal pattern of their inputs.

  4. Learning alters theta amplitude, theta-gamma coupling and neuronal synchronization in inferotemporal cortex

    PubMed Central

    2011-01-01

Background How oscillatory brain rhythms alone, or in combination, influence cortical information processing to support learning has yet to be fully established. Local field potential and multi-unit neuronal activity recordings were made from 64-electrode arrays in the inferotemporal cortex of conscious sheep during and after visual discrimination learning of face or object pairs. A neural network model has been developed to simulate and aid functional interpretation of learning-evoked changes. Results Following learning, the amplitude of theta (4-8 Hz), but not gamma (30-70 Hz), oscillations was increased, as was the ratio of theta to gamma. Over 75% of electrodes showed significant coupling between theta phase and gamma amplitude (theta-nested gamma). The strength of this coupling was also increased following learning, and this was not simply a consequence of increased theta amplitude. Actual discrimination performance was significantly correlated with theta and theta-gamma coupling changes. Neuronal activity was phase-locked with theta, but learning had no effect on firing rates or the magnitude or latencies of visual evoked potentials during stimuli. The neural network model showed that a combination of fast and slow inhibitory interneurons could generate theta-nested gamma. Increasing N-methyl-D-aspartate receptor sensitivity in the model produced changes similar to those seen in inferotemporal cortex after learning. The model showed that these changes could potentiate the firing of downstream neurons by a temporal desynchronization of excitatory neuron output without increasing the firing frequencies of the latter. This desynchronization effect was confirmed in IT neuronal activity following learning, and its magnitude was correlated with discrimination performance. 
Conclusions Face discrimination learning produces significant increases in both theta amplitude and the strength of theta-gamma coupling in the inferotemporal cortex which are correlated with behavioral performance. A network model which can reproduce these changes suggests that a key function of such learning-evoked alterations in theta and theta-nested gamma activity may be increased temporal desynchronization in neuronal firing leading to optimal timing of inputs to downstream neural networks potentiating their responses. In this way learning can produce potentiation in neural networks simply through altering the temporal pattern of their inputs. PMID:21658251

  5. Comparison of Automated and Human Instruction for Developmentally Retarded Preschool Children.

    ERIC Educational Resources Information Center

    Richmond, Glenn

    1983-01-01

    Twenty developmentally retarded preschool children were trained on two visual discriminations with automated instruction and two discriminations with human instruction. Results showed human instruction significantly better than automated instruction. Nine Ss reached criterion for both discriminations with automated instruction, therefore showing…

  6. A horse's eye view: size and shape discrimination compared with other mammals.

    PubMed

    Tomonaga, Masaki; Kumazaki, Kiyonori; Camus, Florine; Nicod, Sophie; Pereira, Carlos; Matsuzawa, Tetsuro

    2015-11-01

    Mammals have adapted to a variety of natural environments from underwater to aerial and these different adaptations have affected their specific perceptive and cognitive abilities. This study used a computer-controlled touchscreen system to examine the visual discrimination abilities of horses, particularly regarding size and shape, and compared the results with those from chimpanzee, human and dolphin studies. Horses were able to discriminate a difference of 14% in circle size but showed worse discrimination thresholds than chimpanzees and humans; these differences cannot be explained by visual acuity. Furthermore, the present findings indicate that all species use length cues rather than area cues to discriminate size. In terms of shape discrimination, horses exhibited perceptual similarities among shapes with curvatures, vertical/horizontal lines and diagonal lines, and the relative contributions of each feature to perceptual similarity in horses differed from those for chimpanzees, humans and dolphins. Horses pay more attention to local components than to global shapes. © 2015 The Author(s).

  7. Fast transfer of crossmodal time interval training.

    PubMed

    Chen, Lihan; Zhou, Xiaolin

    2014-06-01

    Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.

  8. Visual discrimination following partial telencephalic ablations in nurse sharks (Ginglymostoma cirratum).

    PubMed

    Graeber, R C; Schroeder, D M; Jane, J A; Ebbesson, S O

    1978-07-15

    An instrumental conditioning task was used to examine the role of the nurse shark telencephalon in black-white (BW) and horizontal-vertical stripes (HV) discrimination performance. In the first experiment, subjects initially received either bilateral anterior telencephalic control lesions or bilateral posterior telencephalic lesions aimed at destroying the central telencephalic nuclei (CN), which are known to receive direct input from the thalamic visual area. Postoperatively, the sharks were trained first on BW and then on HV. Those with anterior lesions learned both tasks as rapidly as unoperated subjects. Those with posterior lesions exhibited visual discrimination deficits related to the amount of damage to the CN and its connecting pathways. Severe damage resulted in an inability to learn either task but caused no impairments in motivation or general learning ability. In the second experiment, the sharks were first trained on BW and HV and then operated. Suction ablations were used to remove various portions of the CN. Sharks with 10% or less damage to the CN retained the preoperatively acquired discriminations almost perfectly. Those with 11-50% damage had to be retrained on both tasks. Almost total removal of the CN produced behavioral indications of blindness along with an inability to perform above the chance level on BW despite excellent retention of both discriminations over a 28-day period before surgery. It appears, however, that such sharks can still detect light. These results implicate the central telencephalic nuclei in the control of visually guided behavior in sharks.

  9. Mental workload while driving: effects on visual search, discrimination, and decision making.

    PubMed

    Recarte, Miguel A; Nunes, Luis M

    2003-06-01

The effects of mental workload on visual search and decision making were studied in real traffic conditions with 12 participants who drove an instrumented car. Mental workload was manipulated by having participants perform several mental tasks while driving. A simultaneous visual-detection and discrimination test was used as the performance criterion. Mental tasks produced spatial gaze concentration and visual-detection impairment, although no tunnel vision occurred. According to the ocular behavior analysis, this impairment was due to late detection and poor identification more than to response selection. Verbal acquisition tasks were innocuous compared with production tasks, and complex conversations, whether by phone or with a passenger, are dangerous for road safety.

  10. The oblique effect is both allocentric and egocentric

    PubMed Central

    Mikellidou, Kyriaki; Cicchini, Guido Marco; Thompson, Peter G.; Burr, David C.

    2016-01-01

    Despite continuous movements of the head, humans maintain a stable representation of the visual world, which seems to remain always upright. The mechanisms behind this stability are largely unknown. To gain some insight on how head tilt affects visual perception, we investigate whether a well-known orientation-dependent visual phenomenon, the oblique effect—superior performance for stimuli at cardinal orientations (0° and 90°) compared with oblique orientations (45°)—is anchored in egocentric or allocentric coordinates. To this aim, we measured orientation discrimination thresholds at various orientations for different head positions both in body upright and in supine positions. We report that, in the body upright position, the oblique effect remains anchored in allocentric coordinates irrespective of head position. When lying supine, gravitational effects in the plane orthogonal to gravity are discounted. Under these conditions, the oblique effect was less marked than when upright, and anchored in egocentric coordinates. The results are well explained by a simple “compulsory fusion” model in which the head-based and the gravity-based signals are combined with different weightings (30% and 70%, respectively), even when this leads to reduced sensitivity in orientation discrimination. PMID:26129862
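    The weighted combination at the heart of the compulsory-fusion account can be written in one line. The function below is an illustrative toy using the reported 30%/70% weights, not the authors' model code:

```python
# Illustrative sketch of "compulsory fusion": a fixed-weight combination of
# a head-based (egocentric) and a gravity-based (allocentric) orientation
# signal, assuming the ~30%/70% weights reported above. Angles in degrees.
def fused_reference(head_tilt_deg, gravity_deg=0.0, w_head=0.3, w_gravity=0.7):
    """Fixed-weight combination of head-based and gravity-based signals."""
    assert abs(w_head + w_gravity - 1.0) < 1e-9  # weights must sum to 1
    return w_head * head_tilt_deg + w_gravity * gravity_deg

# Head tilted 90 deg with gravity upright: the fused reference lies much
# closer to gravitational vertical than to the head axis
# (0.3*90 + 0.7*0 = 27 deg), consistent with the allocentric anchoring of
# the oblique effect when the body is upright.
print(fused_reference(90.0))
```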

  11. How category learning affects object representations: Not all morphspaces stretch alike

    PubMed Central

    Folstein, Jonathan R.; Gauthier, Isabel; Palmeri, Thomas J.

    2012-01-01

    How does learning to categorize objects affect how we visually perceive them? Behavioral, neurophysiological, and neuroimaging studies have tested the degree to which category learning influences object representations, with conflicting results. Some studies find that objects become more visually discriminable along dimensions relevant to previously learned categories, while others find no such effect. One critical factor we explore here lies in the structure of the morphspaces used in different studies. Studies finding no increase in discriminability often use “blended” morphspaces, with morphparents lying at corners of the space. By contrast, studies finding increases in discriminability use “factorial” morphspaces, defined by separate morphlines forming axes of the space. Using the same four morphparents, we created both factorial and blended morphspaces matched in pairwise discriminability. Category learning caused a selective increase in discriminability along the relevant dimension of the factorial space, but not in the blended space, and led to the creation of functional dimensions in the factorial space, but not in the blended space. These findings demonstrate that not all morphspaces stretch alike: Only some morphspaces support enhanced discriminability to relevant object dimensions following category learning. Our results have important implications for interpreting neuroimaging studies reporting little or no effect of category learning on object representations in the visual system: Those studies may have been limited by their use of blended morphspaces. PMID:22746950

  12. The marmoset monkey as a model for visual neuroscience

    PubMed Central

    Mitchell, Jude F.; Leopold, David A.

    2015-01-01

    The common marmoset (Callithrix jacchus) has been valuable as a primate model in biomedical research. Interest in this species has grown recently, in part due to the successful demonstration of transgenic marmosets. Here we examine the prospects of the marmoset model for visual neuroscience research, adopting a comparative framework to place the marmoset within a broader evolutionary context. The marmoset’s small brain bears most of the organizational features of other primates, and its smooth surface offers practical advantages over the macaque for areal mapping, laminar electrode penetration, and two-photon and optical imaging. Behaviorally, marmosets are more limited at performing regimented psychophysical tasks, but do readily accept the head restraint that is necessary for accurate eye tracking and neurophysiology, and can perform simple discriminations. Their natural gaze behavior closely resembles that of other primates, with a tendency to focus on objects of social interest including faces. Their immaturity at birth and routine twinning also makes them ideal for the study of postnatal visual development. These experimental factors, together with the theoretical advantages inherent in comparing anatomy, physiology, and behavior across related species, make the marmoset an excellent model for visual neuroscience. PMID:25683292

  13. Conjoined constraints on modified gravity from the expansion history and cosmic growth

    NASA Astrophysics Data System (ADS)

    Basilakos, Spyros; Nesseris, Savvas

    2017-09-01

In this paper we present conjoined constraints on several cosmological models from the expansion history H(z) and cosmic growth fσ8. The models we study include the CPL w0wa parametrization, the holographic dark energy (HDE) model, the time-varying vacuum (ΛtCDM) model, the Dvali, Gabadadze and Porrati (DGP) and Finsler-Randers (FRDE) models, a power-law f(T) model, and finally the Hu-Sawicki f(R) model. In all cases we perform a simultaneous fit to the SnIa, CMB, BAO, H(z) and growth data, while also following the conjoined visualization of H(z) and fσ8 as in Linder (2017). Furthermore, we introduce the figure of merit (FoM) in the H(z)-fσ8 parameter space as a way to constrain models that jointly fit both probes well. We use both the latest H(z) and fσ8 data, but also LSST-like mocks with 1% measurements, and we find that the conjoined method of constraining the expansion history and cosmic growth simultaneously is able not only to place stringent constraints on these parameters, but also to provide an easy visual way to discriminate cosmological models. Finally, we confirm the existence of a tension between the growth-rate and Planck CMB data, and we find that the FoM in the conjoined parameter space of H(z)-fσ8(z) can be used to discriminate between the ΛCDM model and certain classes of modified gravity models, namely the DGP and f(T).

  14. Confocal laser feedback tomography for skin cancer detection

    PubMed Central

    Mowla, Alireza; Du, Benjamin Wensheng; Taimre, Thomas; Bertling, Karl; Wilson, Stephen; Soyer, H. Peter; Rakić, Aleksandar D.

    2017-01-01

    Tomographic imaging of soft tissue such as skin has a potential role in cancer detection. The penetration of infrared wavelengths makes a confocal approach based on laser feedback interferometry feasible. We present a compact system using a semiconductor laser as both transmitter and receiver. Numerical and physical models based on the known optical properties of keratinocyte cancers were developed. We validated the technique on three phantoms containing macro-structural changes in optical properties. Experimental results were in agreement with numerical simulations and structural changes were evident which would permit discrimination of healthy tissue and tumour. Furthermore, cancer type discrimination was also able to be visualized using this imaging technique. PMID:28966845

  15. Confocal laser feedback tomography for skin cancer detection.

    PubMed

    Mowla, Alireza; Du, Benjamin Wensheng; Taimre, Thomas; Bertling, Karl; Wilson, Stephen; Soyer, H Peter; Rakić, Aleksandar D

    2017-09-01

    Tomographic imaging of soft tissue such as skin has a potential role in cancer detection. The penetration of infrared wavelengths makes a confocal approach based on laser feedback interferometry feasible. We present a compact system using a semiconductor laser as both transmitter and receiver. Numerical and physical models based on the known optical properties of keratinocyte cancers were developed. We validated the technique on three phantoms containing macro-structural changes in optical properties. Experimental results were in agreement with numerical simulations and structural changes were evident which would permit discrimination of healthy tissue and tumour. Furthermore, cancer type discrimination was also able to be visualized using this imaging technique.

  16. Discriminative components of data.

    PubMed

    Peltonen, Jaakko; Kaski, Samuel

    2005-01-01

A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution, which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed both more classical methods and a Rényi entropy-based alternative, while having essentially equivalent computational cost.
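    For orientation, the classical LDA that this model generalizes can be run in a few lines with scikit-learn on synthetic data; this baseline sketch is not the authors' probabilistic method:

```python
# Baseline sketch: classical LDA on synthetic three-class data, projecting
# onto at most n_classes - 1 discriminative components for visualization.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
means = np.array([[0, 0, 0, 0], [3, 0, 1, 0], [0, 3, 0, 1]], dtype=float)
X = np.vstack([rng.normal(m, 1.0, (100, 4)) for m in means])
y = np.repeat([0, 1, 2], 100)

# Three classes -> at most two discriminative components
lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)
Z = lda.transform(X)
print(Z.shape)  # (300, 2): ready for a 2-D class-structure plot
```

    The paper's contribution is to replace LDA's Gaussian, equal-covariance assumptions with components that directly maximize class predictability (equivalently, principal components in the Fisher metric).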

  17. Individual variability in visual discrimination and reversal learning performance in common marmosets.

    PubMed

    Takemoto, Atsushi; Miwa, Miki; Koba, Reiko; Yamaguchi, Chieko; Suzuki, Hiromi; Nakamura, Katsuki

    2015-04-01

Detailed information about the characteristics of learning behavior in marmosets is useful for future marmoset research. We trained 42 marmosets in visual discrimination and reversal learning. All marmosets could learn visual discrimination, and all but one could complete reversal learning, though some marmosets failed to touch the visual stimuli and were screened out. In 87% of measurements, the final percentage of correct responses was over 95%. We quantified performance with two measures: onset trial and dynamic interval. Onset trial represents the number of trials that elapsed before the marmoset started to learn. Dynamic interval represents the number of trials from the start before reaching the final percentage of correct responses. Both measures decreased drastically as a result of the formation of discrimination learning sets. In reversal learning, both measures worsened, but the effect on onset trial was far greater. The effects of age and sex were not significant, at least for the adolescent and young adult marmosets used here. Unexpectedly, experimental circumstance (in the colony or isolator) had only a subtle effect on performance. However, we found that marmosets from different families exhibited different learning process characteristics, suggesting some family effect on learning. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  18. Visualization of the operational space of edge-localized modes through low-dimensional embedding of probability distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shabbir, A., E-mail: aqsa.shabbir@ugent.be; Noterdaeme, J. M.; Max-Planck-Institut für Plasmaphysik, Garching D-85748

    2014-11-15

    Information visualization aimed at facilitating human perception is an important tool for the interpretation of experiments on the basis of complex multidimensional data characterizing the operational space of fusion devices. This work describes a method for visualizing the operational space on a two-dimensional map and applies it to the discrimination of type I and type III edge-localized modes (ELMs) from a series of carbon-wall ELMy discharges at JET. The approach accounts for stochastic uncertainties that play an important role in fusion data sets, by modeling measurements with probability distributions in a metric space. The method is aimed at contributing to physical understanding of ELMs as well as their control. Furthermore, it is a general method that can be applied to the modeling of various other plasma phenomena as well.
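
    A generic version of this pipeline, a probabilistic distance between discretized measurement distributions followed by a low-dimensional embedding, can be sketched as follows. Hellinger distance and classical MDS are stand-ins here; the paper's specific metric and embedding method may differ.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def classical_mds(D, dim=2):
    """Embed a symmetric distance matrix D into `dim` dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n    # centering matrix
    B = -0.5 * J @ (D ** 2) @ J            # double-centered Gram matrix
    evals, evecs = np.linalg.eigh(B)
    idx = np.argsort(evals)[::-1][:dim]
    L = np.sqrt(np.clip(evals[idx], 0, None))
    return evecs[:, idx] * L
```

    Each discharge would be summarized as a histogram over its measurements; the pairwise distance matrix then feeds `classical_mds` to give 2D map coordinates on which ELM types can be visually discriminated.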

  19. Developmental trends in the facilitation of multisensory objects with distractors

    PubMed Central

    Downing, Harriet C.; Barutchu, Ayla; Crewther, Sheila G.

    2015-01-01

    Sensory integration and the ability to discriminate target objects from distractors are critical to survival, yet the developmental trajectories of these abilities are unknown. This study investigated developmental changes in 9- (n = 18) and 11-year-old (n = 20) children, adolescents (n = 19) and adults (n = 22) using an audiovisual object discrimination task with uni- and multisensory distractors. Reaction times (RTs) were slower with visual/audiovisual distractors, and although all groups demonstrated facilitation of multisensory RTs in these conditions, children's and adolescents' responses corresponded to fewer race model violations than adults', suggesting protracted maturation of multisensory processes. Multisensory facilitation could not be explained by changes in RT variability, suggesting that tests of race model violations may still have theoretical value at least for familiar multisensory stimuli. PMID:25653630
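
    The race model test referred to above is conventionally Miller's inequality: at every time point, the multisensory RT distribution must not exceed the sum of the two unisensory distributions, or the independent-race account is violated. A minimal sketch, assuming raw RT samples per condition:

```python
def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Check Miller's race model inequality at each time t:
    P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
    Returns, per t, the amount of violation (positive where violated).
    """
    def ecdf(sample, t):
        return sum(x <= t for x in sample) / len(sample)
    return [ecdf(rt_av, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
            for t in t_grid]
```

    Positive values indicate multisensory responses faster than any race between independent unisensory processes could produce; the developmental result above is that children show fewer such positive values than adults.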

  20. Enhanced alpha-oscillations in visual cortex during anticipation of self-generated visual stimulation.

    PubMed

    Stenner, Max-Philipp; Bauer, Markus; Haggard, Patrick; Heinze, Hans-Jochen; Dolan, Ray

    2014-11-01

    The perceived intensity of sensory stimuli is reduced when these stimuli are caused by the observer's actions. This phenomenon is traditionally explained by forward models of sensory action-outcome, which arise from motor processing. Although these forward models critically predict anticipatory modulation of sensory neural processing, neurophysiological evidence for anticipatory modulation is sparse and has not been linked to perceptual data showing sensory attenuation. By combining a psychophysical task involving contrast discrimination with source-level time-frequency analysis of MEG data, we demonstrate that the amplitude of alpha-oscillations in visual cortex is enhanced before the onset of a visual stimulus when the identity and onset of the stimulus are controlled by participants' motor actions. Critically, this prestimulus enhancement of alpha-amplitude is paralleled by psychophysical judgments of a reduced contrast for this stimulus. We suggest that alpha-oscillations in visual cortex preceding self-generated visual stimulation are a likely neurophysiological signature of motor-induced sensory anticipation and mediate sensory attenuation. We discuss our results in relation to proposals that attribute generic inhibitory functions to alpha-oscillations in prioritizing and gating sensory information via top-down control.

  1. Encoding color information for visual tracking: Algorithms and benchmark.

    PubMed

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.

  2. Surround-Masking Affects Visual Estimation Ability

    PubMed Central

    Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.

    2017-01-01

    Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contribute toward number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround masking results may help explain the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845

  3. Origin and Function of Tuning Diversity in Macaque Visual Cortex.

    PubMed

    Goris, Robbe L T; Simoncelli, Eero P; Movshon, J Anthony

    2015-11-18

    Neurons in visual cortex vary in their orientation selectivity. We measured responses of V1 and V2 cells to orientation mixtures and fit them with a model whose stimulus selectivity arises from the combined effects of filtering, suppression, and response nonlinearity. The model explains the diversity of orientation selectivity with neuron-to-neuron variability in all three mechanisms, of which variability in the orientation bandwidth of linear filtering is the most important. The model also accounts for the cells' diversity of spatial frequency selectivity. Tuning diversity is matched to the needs of visual encoding. The orientation content found in natural scenes is diverse, and neurons with different selectivities are adapted to different stimulus configurations. Single orientations are better encoded by highly selective neurons, while orientation mixtures are better encoded by less selective neurons. A diverse population of neurons therefore provides better overall discrimination capabilities for natural images than any homogeneous population. Copyright © 2015 Elsevier Inc. All rights reserved.
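
    The filtering/suppression/nonlinearity cascade can be illustrated with a toy linear-nonlinear neuron. All parameter values below are illustrative, not fitted values from the paper, and the Gaussian tuning curve ignores orientation circularity for brevity.

```python
import math

def ln_response(stim, pref=0.0, bw=20.0, sigma=0.1, p=2.0, supp=0.5):
    """Toy filter / suppression / nonlinearity model of an
    orientation-tuned neuron (illustrative parameters).

    stim: list of (orientation_deg, contrast) components.
    Linear drive is a Gaussian tuning curve over orientation with
    bandwidth `bw`; suppression pools all components without tuning;
    the output passes through a power-law nonlinearity of exponent p.
    """
    drive = sum(c * math.exp(-0.5 * ((th - pref) / bw) ** 2)
                for th, c in stim)
    pool = sum(c for _, c in stim)   # untuned suppressive pool
    gain = drive / (sigma + supp * pool)
    return max(gain, 0.0) ** p
```

    Varying `bw` across a population reproduces the key point of the abstract: a narrow-bandwidth unit responds strongly to a single preferred grating but weakly to mixtures, while a broad-bandwidth unit does the reverse.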

  4. Incremental Structured Dictionary Learning for Video Sensor-Based Object Tracking

    PubMed Central

    Xue, Ming; Yang, Hua; Zheng, Shibao; Zhou, Yi; Yu, Zhenghua

    2014-01-01

    To tackle robust object tracking for video sensor-based applications, an online discriminative algorithm based on incremental discriminative structured dictionary learning (IDSDL-VT) is presented. In our framework, a discriminative dictionary combining positive, negative, and trivial patches is designed to sparsely represent the overlapped target patches. Then, a local update (LU) strategy is proposed for sparse coefficient learning. To formulate the training and classification process, a multiple linear classifier group based on a K-combined voting (KCV) function is proposed. As the dictionary evolves, the models are retrained so that they adapt in a timely manner to variation in target appearance. Qualitative and quantitative evaluations on challenging image sequences, compared with state-of-the-art algorithms, demonstrate that the proposed tracking algorithm achieves more favorable performance. We also illustrate its relay application in visual sensor networks. PMID:24549252

  5. Neurons Forming Optic Glomeruli Compute Figure–Ground Discriminations in Drosophila

    PubMed Central

    Aptekar, Jacob W.; Keleş, Mehmet F.; Lu, Patrick M.; Zolotova, Nadezhda M.

    2015-01-01

    Many animals rely on visual figure–ground discrimination to aid in navigation, and to draw attention to salient features like conspecifics or predators. Even figures that are similar in pattern and luminance to the visual surroundings can be distinguished by the optical disparity generated by their relative motion against the ground, and yet the neural mechanisms underlying these visual discriminations are not well understood. We show in flies that a diverse array of figure–ground stimuli containing a motion-defined edge elicit statistically similar behavioral responses to one another, and statistically distinct behavioral responses from ground motion alone. From studies in larger flies and other insect species, we hypothesized that the circuitry of the lobula (one of the four primary neuropiles of the fly optic lobe) performs this visual discrimination. Using calcium imaging of input dendrites, we then show that information encoded in cells projecting from the lobula to discrete optic glomeruli in the central brain group these sets of figure–ground stimuli in a homologous manner to the behavior; “figure-like” stimuli are coded similarly to one another and “ground-like” stimuli are encoded differently. One cell class responds to the leading edge of a figure and is suppressed by ground motion. Two other classes cluster any figure-like stimuli, including a figure moving opposite the ground, distinctly from ground alone. This evidence demonstrates that lobula outputs provide a diverse basis set encoding visual features necessary for figure detection. PMID:25972183

  7. Factors of Predicted Learning Disorders and their Interaction with Attentional and Perceptual Training Procedures.

    ERIC Educational Resources Information Center

    Friar, John T.

    Two factors of predicted learning disorders were investigated: (1) inability to maintain appropriate classroom behavior (BEH), (2) perceptual discrimination deficit (PERC). Three groups of first-graders (BEH, PERC, normal control) were administered measures of impulse control, distractability, auditory discrimination, and visual discrimination.…

  8. A Preliminary Account of the Effect of Otitis Media on 15-Month- Olds' Categorization and Some Implications for Early Language Learning.

    ERIC Educational Resources Information Center

    Roberts, Kenneth

    1997-01-01

    Infants (N=24) with history of otitis media and tube placement were tested for categorical responding within a visual familiarization-discrimination model. Findings suggest that even mild hearing loss may adversely affect categorical responding under specific input conditions, which may persist after normal hearing is restored, possibly because…

  9. Perceived Discrimination and Emotional Reactions in People with Different Types of Disabilities: A Qualitative Approach.

    PubMed

    Pérez-Garín, Daniel; Recio, Patricia; Magallares, Alejandro; Molero, Fernando; García-Ael, Cristina

    2018-05-15

    The purpose of this study is to assess the discourse of people with disabilities regarding their perception of discrimination and stigma. Semi-structured interviews were conducted with ten adults with physical disabilities, ten with hearing impairments and seven with visual impairments. The agreement between the coders showed an excellent reliability for all three groups, with kappa coefficients between .82 and .96. Differences were assessed between the three groups regarding the types of discrimination they experienced and their most frequent emotional responses. People with physical disabilities mainly reported being stared at, undervalued, and subtly discriminated at work, whereas people with hearing impairments mainly reported encountering barriers in leisure activities, and people with visual impairments spoke of a lack of equal opportunities, mockery and/or bullying, and overprotection. Regarding their emotional reactions, people with physical disabilities mainly reported feeling anxious and depressed, whereas people with hearing impairments reported feeling helpless, and people with visual impairments reported feeling anger and self-pity. Findings are relevant to guide future research and interventions on the stigma of disability.

  10. Integrated framework for developing search and discrimination metrics

    NASA Astrophysics Data System (ADS)

    Copeland, Anthony C.; Trivedi, Mohan M.

    1997-06-01

    This paper presents an experimental framework for evaluating target signature metrics as models of human visual search and discrimination. This framework is based on a prototype eye tracking testbed, the Integrated Testbed for Eye Movement Studies (ITEMS). ITEMS determines an observer's visual fixation point while he studies a displayed image scene, by processing video of the observer's eye. The utility of this framework is illustrated with an experiment using gray-scale images of outdoor scenes that contain randomly placed targets. Each target is a square region of a specific size containing pixel values from another image of an outdoor scene. The real-world analogy of this experiment is that of a military observer looking upon the sensed image of a static scene to find camouflaged enemy targets that are reported to be in the area. ITEMS provides the data necessary to compute various statistics for each target to describe how easily the observers located it, including the likelihood the target was fixated or identified and the time required to do so. The computed values of several target signature metrics are compared to these statistics, and a second-order metric based on a model of image texture was found to be the most highly correlated.

  11. Demands on attention and the role of response priming in visual discrimination of feature conjunctions.

    PubMed

    Fournier, Lisa R; Herbert, Rhonda J; Farris, Carrie

    2004-10-01

    This study examined how response mapping of features within single- and multiple-feature targets affects decision-based processing and attentional capacity demands. Observers judged the presence or absence of 1 or 2 target features within an object either presented alone or with distractors. Judging the presence of 2 features relative to the less discriminable of these features alone was faster (conjunction benefits) when the task-relevant features differed in discriminability and were consistently mapped to responses. Conjunction benefits were attributed to asynchronous decision priming across attended, task-relevant dimensions. A failure to find conjunction benefits for disjunctive conjunctions was attributed to increased memory demands and variable feature-response mapping for 2- versus single-feature targets. Further, attentional demands were similar between single- and 2-feature targets when response mapping, memory demands, and discriminability of the task-relevant features were equated between targets. Implications of the findings for recent attention models are discussed. (c) 2004 APA, all rights reserved

  12. A mobile, high-throughput semi-automated system for testing cognition in large non-primate animal models of Huntington disease.

    PubMed

    McBride, Sebastian D; Perentos, Nicholas; Morton, A Jennifer

    2016-05-30

    For reasons of cost and ethical concerns, models of neurodegenerative disorders such as Huntington disease (HD) are currently being developed in farm animals, as an alternative to non-human primates. Developing reliable methods of testing cognitive function is essential to determining the usefulness of such models. Nevertheless, cognitive testing of farm animal species presents a unique set of challenges. The primary aims of this study were to develop and validate a mobile operant system suitable for high throughput cognitive testing of sheep. We designed a semi-automated testing system with the capability of presenting stimuli (visual, auditory) and reward at six spatial locations. Fourteen normal sheep were used to validate the system using a two-choice visual discrimination task (2CVDT). Four stages of training devised to acclimatise animals to the system are also presented. All sheep progressed rapidly through the training stages, over eight sessions. All sheep learned the 2CVDT and performed at least one reversal stage. The mean number of trials the sheep took to reach criterion in the first acquisition learning was 13.9±1.5 and for the reversal learning was 19.1±1.8. This is the first mobile semi-automated operant system developed for testing cognitive function in sheep. We have designed and validated an automated operant behavioural testing system suitable for high throughput cognitive testing in sheep and other medium-sized quadrupeds, such as pigs and dogs. Sheep performance in the two-choice visual discrimination task was very similar to that reported for non-human primates and strongly supports the use of farm animals as pre-clinical models for the study of neurodegenerative diseases. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Techniques for Programming Visual Demonstrations.

    ERIC Educational Resources Information Center

    Gropper, George L.

    Visual demonstrations may be used as part of programs to deliver both content objectives and process objectives. Research has shown that learning of concepts is easier, more accurate, and more broadly applied when it is accompanied by visual examples. The visual examples supporting content learning should emphasize both discrimination and…

  14. Texture and haptic cues in slant discrimination: reliability-based cue weighting without statistically optimal cue combination

    NASA Astrophysics Data System (ADS)

    Rosas, Pedro; Wagemans, Johan; Ernst, Marc O.; Wichmann, Felix A.

    2005-05-01

    A number of models of depth-cue combination suggest that the final depth percept results from a weighted average of independent depth estimates based on the different cues available. The weight of each cue in such an average is thought to depend on the reliability of each cue. In principle, such a depth estimation could be statistically optimal in the sense of producing the minimum-variance unbiased estimator that can be constructed from the available information. Here we test such models by using visual and haptic depth information. Different texture types produce differences in slant-discrimination performance, thus providing a means for testing a reliability-sensitive cue-combination model with texture as one of the cues to slant. Our results show that the weights for the cues were generally sensitive to their reliability but fell short of statistically optimal combination - we find reliability-based reweighting but not statistically optimal cue combination.
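
    The statistically optimal rule the authors test against is inverse-variance weighting, which yields the minimum-variance unbiased linear combination of independent cue estimates:

```python
def optimal_combination(estimates, variances):
    """Minimum-variance unbiased linear combination of independent
    cue estimates: weights proportional to inverse variance.
    Returns (combined estimate, combined variance, weights).
    """
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    weights = [i / total for i in inv]
    combined = sum(w * e for w, e in zip(weights, estimates))
    var = 1.0 / total   # never larger than the best single cue's variance
    return combined, var, weights
```

    For cues with variances 1 and 4 the weights are 0.8 and 0.2 and the combined variance drops to 0.8, below either cue alone. The empirical result above is that observers reweight in the right direction as reliabilities change, but do not reach this minimum-variance bound.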

  15. Spectral discrimination in color blind animals via chromatic aberration and pupil shape

    PubMed Central

    Stubbs, Alexander L.; Stubbs, Christopher W.

    2016-01-01

    We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide “color-blind” animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. We quantitatively show, through numerical simulations, how chromatic aberration can be exploited to obtain spectral information, especially through nonaxial pupils that are characteristic of coleoid cephalopods. We have also assessed the inherent ambiguity between range and color that is a consequence of the chromatic variation of best focus with wavelength. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins. PMID:27382180

  16. Validation of SplitVectors Encoding for Quantitative Visualization of Large-Magnitude-Range Vector Fields

    PubMed Central

    Zhao, Henan; Bryant, Garnett W.; Griffin, Wesley; Terrill, Judith E.; Chen, Jian

    2017-01-01

    We designed and evaluated SplitVectors, a new vector field display approach to help scientists perform new discrimination tasks on large-magnitude-range scientific data shown in three-dimensional (3D) visualization environments. SplitVectors uses scientific notation to display vector magnitude, thus improving legibility. We present an empirical study comparing the SplitVectors approach with three other approaches - direct linear representation, logarithmic, and text display commonly used in scientific visualizations. Twenty participants performed three domain analysis tasks: reading numerical values (a discrimination task), finding the ratio between values (a discrimination task), and finding the larger of two vectors (a pattern detection task). Participants used both mono and stereo conditions. Our results suggest the following: (1) SplitVectors improve accuracy by about 10 times compared to linear mapping and by four times to logarithmic in discrimination tasks; (2) SplitVectors have no significant differences from the textual display approach, but reduce cluttering in the scene; (3) SplitVectors and textual display are less sensitive to data scale than linear and logarithmic approaches; (4) using logarithmic can be problematic as participants' confidence was as high as directly reading from the textual display, but their accuracy was poor; and (5) Stereoscopy improved performance, especially in more challenging discrimination tasks. PMID:28113469
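
    The core encoding step, displaying a magnitude in scientific notation, amounts to splitting each magnitude into a mantissa and an exponent that can drive two separate visual channels (for example, two concentric glyph lengths). A small sketch:

```python
import math

def split_magnitude(m):
    """Split a positive magnitude into (mantissa, exponent) with the
    mantissa in [1, 10), as in scientific notation; the two parts can
    then be mapped to separate visual channels of a vector glyph.
    """
    if m <= 0:
        raise ValueError("magnitude must be positive")
    exp = math.floor(math.log10(m))
    return m / 10 ** exp, exp
```

    Because both channels stay within a small numeric range regardless of the data's overall scale, legibility is preserved across large magnitude ranges, which is the property the study's discrimination tasks evaluate.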

  18. Image jitter enhances visual performance when spatial resolution is impaired.

    PubMed

    Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko

    2012-09-06

    Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with AMD under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with duration of 100 or 166 ms and amplitude within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.

  19. Neural activity in cortical area V4 underlies fine disparity discrimination.

    PubMed

    Shiozaki, Hiroshi M; Tanabe, Seiji; Doi, Takahiro; Fujita, Ichiro

    2012-03-14

    Primates are capable of discriminating depth with remarkable precision using binocular disparity. Neurons in area V4 are selective for relative disparity, which is the crucial visual cue for discrimination of fine disparity. Here, we investigated the contribution of V4 neurons to fine disparity discrimination. Monkeys discriminated whether the center disk of a dynamic random-dot stereogram was in front of or behind its surrounding annulus. We first behaviorally tested the reference frame of the disparity representation used for performing this task. After learning the task with a set of surround disparities, the monkey generalized its responses to untrained surround disparities, indicating that the perceptual decisions were generated from a disparity representation in a relative frame of reference. We then recorded single-unit responses from V4 while the monkeys performed the task. On average, neuronal thresholds were higher than the behavioral thresholds. The most sensitive neurons reached thresholds as low as the psychophysical thresholds. For subthreshold disparities, the monkeys made frequent errors. The variable decisions were predictable from the fluctuation in the neuronal responses. The predictions were based on a decision model in which each V4 neuron transmits the evidence for the disparity it prefers. We finally altered the disparity representation artificially by means of microstimulation to V4. The decisions were systematically biased when microstimulation boosted the V4 responses. The bias was toward the direction predicted from the decision model. We suggest that disparity signals carried by V4 neurons underlie precise discrimination of fine stereoscopic depth.

  20. Simple and conditional visual discrimination with wheel running as reinforcement in rats.

    PubMed

    Iversen, I H

    1998-09-01

    Three experiments explored whether access to wheel running is sufficient as reinforcement to establish and maintain simple and conditional visual discriminations in nondeprived rats. In Experiment 1, 2 rats learned to press a lit key to produce access to running; responding was virtually absent when the key was dark, but latencies to respond were longer than for customary food and water reinforcers. Increases in the intertrial interval did not improve the discrimination performance. In Experiment 2, 3 rats acquired a go-left/go-right discrimination with a trial-initiating response and reached an accuracy that exceeded 80%; when two keys showed a steady light, pressing the left key produced access to running whereas pressing the right key produced access to running when both keys showed blinking light. Latencies to respond to the lights shortened when the trial-initiation response was introduced and became much shorter than in Experiment 1. In Experiment 3, 1 rat acquired a conditional discrimination task (matching to sample) with steady versus blinking lights at an accuracy exceeding 80%. A trial-initiation response allowed self-paced trials as in Experiment 2. When the rat was exposed to the task for 19 successive 24-hr periods with access to food and water, the discrimination performance settled in a typical circadian pattern and peak accuracy exceeded 90%. When the trial-initiation response was under extinction, without access to running, the circadian activity pattern determined the time of spontaneous recovery. The experiments demonstrate that wheel-running reinforcement can be used to establish and maintain simple and conditional visual discriminations in nondeprived rats.

  1. Nimodipine alters acquisition of a visual discrimination task in chicks.

    PubMed

    Deyo, R; Panksepp, J; Conner, R L

    1990-03-01

    Chicks 5 days old received intraperitoneal injections of nimodipine 30 min before training on either a visual discrimination task (0, 0.5, 1.0, or 5.0 mg/kg) or a test of separation-induced distress vocalizations (0, 0.5, or 2.5 mg/kg). Chicks receiving 1.0 mg/kg nimodipine made significantly fewer visual discrimination errors than vehicle controls by trials 41-60, but did not differ from controls 24 h later. Chicks in the 5 mg/kg group made significantly more errors when compared to controls both during acquisition of the task and during retention. Nimodipine did not alter separation-induced distress vocalizations at any of the doses tested, suggesting that nimodipine's effects on learning cannot be attributed to a reduction in separation distress. These data indicate that nimodipine's facilitation of learning in young subjects is dose dependent, but nimodipine failed to enhance retention.

  2. Crowding with detection and coarse discrimination of simple visual features.

    PubMed

    Põder, Endel

    2008-04-24

    Some recent studies have suggested that there are actually no crowding effects with detection and coarse discrimination of simple visual features. The present study tests the generality of this idea. A target Gabor patch, surrounded by either 2 or 6 flanker Gabors, was presented briefly at 4 deg eccentricity of the visual field. Each Gabor patch was oriented either vertically or horizontally (selected randomly). Observers' task was either to detect the presence of the target (presented with probability 0.5) or to identify the orientation of the target. The target-flanker distance was varied. Results were similar for the two tasks but different for 2 and 6 flankers. The idea that feature detection and coarse discrimination are immune to crowding may be valid for the two-flanker condition only. With six flankers, a normal crowding effect was observed. It is suggested that the complexity of the full pattern (target plus flankers) could explain the difference.

  3. Category learning increases discriminability of relevant object dimensions in visual cortex.

    PubMed

    Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel

    2013-04-01

    Learning to categorize objects can transform how they are perceived, causing relevant perceptual dimensions predictive of object category to become enhanced. For example, an expert mycologist might become attuned to species-specific patterns of spacing between mushroom gills but learn to ignore cap textures attributable to varying environmental conditions. These selective changes in perception can persist beyond the act of categorizing objects and influence our ability to discriminate between them. Using functional magnetic resonance imaging adaptation, we demonstrate that such category-specific perceptual enhancements are associated with changes in the neural discriminability of object representations in visual cortex. Regions within the anterior fusiform gyrus became more sensitive to small variations in shape that were relevant during prior category learning. In addition, extrastriate occipital areas showed heightened sensitivity to small variations in shape that spanned the category boundary. Visual representations in cortex, just like our perception, are sensitive to an object's history of categorization.

  4. VISUAL FUNCTION CHANGES AFTER SUBCHRONIC TOLUENE INHALATION IN LONG-EVANS RATS.

    EPA Science Inventory

    Chronic exposure to volatile organic compounds, including toluene, has been associated with visual deficits such as reduced visual contrast sensitivity or impaired color discrimination in studies of occupational or residential exposure. These reports remain controversial, however…

  5. Suggested Activities to Use With Children Who Present Symptoms of Visual Perception Problems, Elementary Level.

    ERIC Educational Resources Information Center

    Washington County Public Schools, Washington, PA.

    Symptoms displayed by primary age children with learning disabilities are listed; perceptual handicaps are explained. Activities are suggested for developing visual perception and perception involving motor activities. Also suggested are activities to develop body concept, visual discrimination and attentiveness, visual memory, and figure ground…

  6. Figure-ground segregation requires two distinct periods of activity in V1: a transcranial magnetic stimulation study.

    PubMed

    Heinen, Klaartje; Jolij, Jacob; Lamme, Victor A F

    2005-09-08

    The process by which the visual system discriminates objects from their surroundings is known as figure-ground segregation. This process entails two different subprocesses: boundary detection and subsequent surface segregation or 'filling in'. In this study, we used transcranial magnetic stimulation to test the hypothesis that temporally distinct processes in V1 and related early visual areas such as V2 or V3 are causally related to the process of figure-ground segregation. Our results indicate that correct discrimination between two visual stimuli, which relies on figure-ground segregation, requires two separate periods of information processing in the early visual cortex: one around 130-160 ms and the other around 250-280 ms.

  7. The time course of shape discrimination in the human brain.

    PubMed

    Ales, Justin M; Appelbaum, L Gregory; Cottereau, Benoit R; Norcia, Anthony M

    2013-02-15

    The lateral occipital cortex (LOC) activates selectively to images of intact objects versus scrambled controls, is selective for the figure-ground relationship of a scene, and exhibits at least some degree of invariance for size and position. Because of these attributes, it is considered to be a crucial part of the object recognition pathway. Here we show that human LOC is critically involved in perceptual decisions about object shape. High-density EEG was recorded while subjects performed a threshold-level shape discrimination task on texture-defined figures segmented by either phase or orientation cues. The appearance or disappearance of a figure region from a uniform background generated robust visual evoked potentials throughout retinotopic cortex as determined by inverse modeling of the scalp voltage distribution. Contrasting responses from trials containing shape changes that were correctly detected (hits) with trials in which no change occurred (correct rejects) revealed stimulus-locked, target-selective activity in the occipital visual areas LOC and V4 preceding the subject's response. Activity that was locked to the subjects' reaction time was present in the LOC. Response-locked activity in the LOC was determined to be related to shape discrimination for several reasons: shape-selective responses were silenced when subjects viewed identical stimuli but their attention was directed away from the shapes to a demanding letter discrimination task; shape-selectivity was present across four different stimulus configurations used to define the figure; LOC responses correlated with participants' reaction times. These results indicate that decision-related activity is present in the LOC when subjects are engaged in threshold-level shape discriminations.

  8. Evaluation of Visual Field and Imaging Outcomes for Glaucoma Clinical Trials (An American Ophthalmological Society Thesis).

    PubMed

    Garway-Heath, David F; Quartilho, Ana; Prah, Philip; Crabb, David P; Cheng, Qian; Zhu, Haogang

    2017-08-01

    To evaluate the ability of various visual field (VF) analysis methods to discriminate treatment groups in glaucoma clinical trials and establish the value of time-domain optical coherence tomography (TD OCT) imaging as an additional outcome. VFs and retinal nerve fibre layer thickness (RNFLT) measurements (acquired by TD OCT) from 373 glaucoma patients in the UK Glaucoma Treatment Study (UKGTS) at up to 11 scheduled visits over a 2-year interval formed the cohort to assess the sensitivity of progression analysis methods. Specificity was assessed in 78 glaucoma patients with up to 11 repeated VF and OCT RNFLT measurements over a 3-month interval. Growth curve models assessed the difference in VF and RNFLT rate of change between treatment groups. Incident progression was identified by 3 VF-based methods: Guided Progression Analysis (GPA), 'ANSWERS' and 'PoPLR', and one based on VFs and RNFLT: 'sANSWERS'. Sensitivity, specificity and discrimination between treatment groups were evaluated. The rate of VF change was significantly faster in the placebo group than in the active treatment group (-0.29 vs +0.03 dB/year, P < .001); the rate of RNFLT change was not different (-1.7 vs -1.1 dB/year, P = .14). After 18 months and at 95% specificity, the sensitivity of ANSWERS and PoPLR was similar (35%); sANSWERS achieved a sensitivity of 70%. GPA, ANSWERS and PoPLR discriminated treatment groups with similar statistical significance; sANSWERS did not discriminate treatment groups. Although the VF progression-detection method including VF and RNFLT measurements is more sensitive, it does not improve discrimination between treatment arms.

  9. Time-order errors and standard-position effects in duration discrimination: An experimental study and an analysis by the sensation-weighting model.

    PubMed

    Hellström, Åke; Rammsayer, Thomas H

    2015-10-01

    Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström's sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St-Co, Co-St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St-Co than for Co-St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.
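    In the sensation-weighting account, the subjective difference is a weighted combination of the two stimulus magnitudes and a reference level, and unequal weights for the first- and second-presented stimuli produce the time-order error. A minimal numerical sketch of that idea (a simplified form with made-up weights and reference level; Hellström's full model includes additional response-scaling terms omitted here):

```python
def perceived_difference(s1, s2, w1, w2, ref=0.0):
    """Weighted subjective difference between the first and second stimulus.
    Each stimulus magnitude is mixed with a reference level `ref`; unequal
    weights (w1 != w2) shift the point of subjective equality, producing a
    time-order error even when s1 == s2."""
    return (w1 * s1 + (1 - w1) * ref) - (w2 * s2 + (1 - w2) * ref)

# Two physically equal 500 ms intervals, with a heavier weight on the second:
d = perceived_difference(500, 500, w1=0.8, w2=1.0, ref=400)
print(d)  # -20.0: a negative time-order error (first interval seems shorter)
```

    With w2 > w1 the second interval dominates the comparison, so two equal intervals yield a negative perceived difference: the first seems shorter.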

  10. Reducing Problems in Fine Motor Development among Primary Children through the Use of Multi-Sensory Techniques.

    ERIC Educational Resources Information Center

    Wessel, Dorothy

    A 10-week classroom intervention program was implemented to facilitate the fine-motor development of eight first-grade children assessed as being deficient in motor skills. The program was divided according to five deficits to be remediated: visual motor, visual discrimination, visual sequencing, visual figure-ground, and visual memory. Each area…

  11. Can responses to basic non-numerical visual features explain neural numerosity responses?

    PubMed

    Harvey, Ben M; Dumoulin, Serge O

    2017-04-01

    Humans and many animals can distinguish between stimuli that differ in numerosity, the number of objects in a set. Human and macaque parietal lobes contain neurons that respond to changes in stimulus numerosity. However, basic non-numerical visual features can affect neural responses to and perception of numerosity, and visual features often co-vary with numerosity. Therefore, it is debated whether numerosity or co-varying low-level visual features underlie neural and behavioral responses to numerosity. To test the hypothesis that non-numerical visual features underlie neural numerosity responses in a human parietal numerosity map, we analyze responses to a group of numerosity stimulus configurations that have the same numerosity progression but vary considerably in their non-numerical visual features. Using ultra-high-field (7T) fMRI, we measure responses to these stimulus configurations in an area of posterior parietal cortex whose responses are believed to reflect numerosity-selective activity. We describe an fMRI analysis method to distinguish between alternative models of neural response functions, following a population receptive field (pRF) modeling approach. For each stimulus configuration, we first quantify the relationships between numerosity and several non-numerical visual features that have been proposed to underlie performance in numerosity discrimination tasks. We then determine how well responses to these non-numerical visual features predict the observed fMRI responses, and compare this to the predictions of responses to numerosity. We demonstrate that a numerosity response model predicts observed responses more accurately than models of responses to simple non-numerical visual features. As such, neural responses in cognitive processing need not reflect simpler properties of early sensory inputs.
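    The core of the model-comparison logic is to generate a predicted response from each candidate tuning model and ask which one better explains the measured responses. A toy sketch on synthetic data (fixed-width Gaussian tuning, made-up stimuli and noise levels; the actual pRF analysis fits richer models to fMRI time courses):

```python
import numpy as np

rng = np.random.default_rng(1)

numerosity = np.arange(1, 8, dtype=float)            # numerosities 1..7
density = numerosity / 2.0 + rng.normal(0, 0.4, 7)   # co-varying low-level feature

# Hypothetical measured response of a numerosity-tuned voxel: Gaussian
# tuning around a preferred numerosity of 3, plus measurement noise.
measured = np.exp(-(numerosity - 3.0) ** 2 / 2.0) + rng.normal(0, 0.05, 7)

def variance_explained(feature, response, pref):
    """Fit a fixed-width Gaussian tuning curve on `feature` (free gain only)
    and return R^2 against the measured response."""
    pred = np.exp(-(feature - pref) ** 2 / 2.0)
    gain = np.dot(pred, response) / np.dot(pred, pred)
    resid = response - gain * pred
    return 1.0 - resid.var() / response.var()

r2_num = variance_explained(numerosity, measured, pref=3.0)
r2_den = variance_explained(density, measured, pref=3.0)
print(r2_num > r2_den)  # the numerosity model should win on these data
```

    Because the synthetic voxel is built to be numerosity-tuned, the numerosity model explains far more variance than the co-varying feature; the paper applies the same comparison to real parietal responses.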

  12. Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.

    PubMed

    Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling

    2015-11-01

    In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. For a given parent node on the visual tree, it contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly for enhancing their discrimination power. The inter-level relationship constraint, i.e., that a plant image must first be assigned to a parent node (high-level non-leaf node) correctly before it can further be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results have demonstrated the effectiveness of our hierarchical multi-task structural learning algorithm on training more discriminative tree classifiers for large-scale plant species identification.
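    The coarse-to-fine inference over such a visual tree can be sketched in a few lines (a hypothetical two-level taxonomy with made-up scores; the paper additionally trains the sibling classifiers jointly with a multi-task objective, which this sketch does not implement):

```python
# Illustrative coarse-to-fine classification over a two-level "visual tree".
TREE = {
    "broadleaf": ["maple", "oak"],
    "conifer": ["pine", "spruce"],
}

def classify(scores_coarse, scores_fine):
    """Pick the best coarse node first, then the best species under it.
    `scores_coarse` maps coarse category -> score; `scores_fine`
    maps species -> score."""
    parent = max(scores_coarse, key=scores_coarse.get)
    children = TREE[parent]
    species = max(children, key=lambda s: scores_fine[s])
    return parent, species

parent, species = classify(
    {"broadleaf": 0.9, "conifer": 0.1},
    {"maple": 0.3, "oak": 0.6, "pine": 0.8, "spruce": 0.2},
)
print(parent, species)  # prints: broadleaf oak
```

    The inter-level constraint from the abstract is enforced structurally: pine's high fine-grained score is never considered, because a leaf is only reachable through its parent node.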

  13. Beneficial effects of verbalization and visual distinctiveness on remembering and knowing faces.

    PubMed

    Brown, Charity; Lloyd-Jones, Toby J

    2006-03-01

    We examined the effect of verbally describing faces upon visual memory. In particular, we examined the locus of the facilitative effects of verbalization by manipulating the visual distinctiveness of the to-be-remembered faces and using the remember/know procedure as a measure of recognition performance (i.e., remember vs. know judgments). Participants were exposed to distinctive faces intermixed with typical faces and described (or not, in the control condition) each face following its presentation. Subsequently, the participants discriminated the original faces from distinctive and typical distractors in a yes/no recognition decision and made remember/know judgments. Distinctive faces elicited better discrimination performance than did typical faces. Furthermore, for both typical and distinctive faces, better discrimination performance was obtained in the description than in the control condition. Finally, these effects were evident for both recollection- and familiarity-based recognition decisions. We argue that verbalization and visual distinctiveness independently benefit face recognition, and we discuss these findings in terms of the nature of verbalization and the role of recollective and familiarity-based processes in recognition.

  14. fMRI-based Multivariate Pattern Analyses Reveal Imagery Modality and Imagery Content Specific Representations in Primary Somatosensory, Motor and Auditory Cortices.

    PubMed

    de Borst, Aline W; de Gelder, Beatrice

    2017-08-01

    Previous studies have shown that the early visual cortex contains content-specific representations of stimuli during visual imagery, and that these representational patterns of imagery content have a perceptual basis. To date, there is little evidence for the presence of a similar organization in the auditory and tactile domains. Using fMRI-based multivariate pattern analyses we showed that primary somatosensory, auditory, motor, and visual cortices are discriminative for imagery of touch versus sound. In the somatosensory, motor and visual cortices the imagery modality discriminative patterns were similar to perception modality discriminative patterns, suggesting that top-down modulations in these regions rely on similar neural representations as bottom-up perceptual processes. Moreover, we found evidence for content-specific representations of the stimuli during auditory imagery in the primary somatosensory and primary motor cortices. Both the imagined emotions and the imagined identities of the auditory stimuli could be successfully classified in these regions.

  15. Do Rats Use Shape to Solve "Shape Discriminations"?

    ERIC Educational Resources Information Center

    Minini, Loredana; Jeffery, Kathryn J.

    2006-01-01

    Visual discrimination tasks are increasingly used to explore the neurobiology of vision in rodents, but it remains unclear how the animals solve these tasks: Do they process shapes holistically, or by using low-level features such as luminance and angle acuity? In the present study we found that when discriminating triangles from squares, rats did…

  16. Face and Object Discrimination in Autism, and Relationship to IQ and Age

    ERIC Educational Resources Information Center

    Pallett, Pamela M.; Cohen, Shereen J.; Dobkins, Karen R.

    2014-01-01

    The current study tested fine discrimination of upright and inverted faces and objects in adolescents with Autism Spectrum Disorder (ASD) as compared to age- and IQ-matched controls. Discrimination sensitivity was tested using morphed faces and morphed objects, and all stimuli were equated in low-level visual characteristics (luminance, contrast,…

  17. Basic visual function and cortical thickness patterns in posterior cortical atrophy.

    PubMed

    Lehmann, Manja; Barnes, Josephine; Ridgway, Gerard R; Wattam-Bell, John; Warrington, Elizabeth K; Fox, Nick C; Crutch, Sebastian J

    2011-09-01

    Posterior cortical atrophy (PCA) is characterized by a progressive decline in higher-visual object and space processing, but the extent to which these deficits are underpinned by basic visual impairments is unknown. This study aimed to assess basic and higher-order visual deficits in 21 PCA patients. Basic visual skills including form detection and discrimination, color discrimination, motion coherence, and point localization were measured, and associations and dissociations between specific basic visual functions and measures of higher-order object and space perception were identified. All participants showed impairment in at least one aspect of basic visual processing. However, a number of dissociations between basic visual skills indicated a heterogeneous pattern of visual impairment among the PCA patients. Furthermore, basic visual impairments were associated with particular higher-order object and space perception deficits, but not with nonvisual parietal tasks, suggesting the specific involvement of visual networks in PCA. Cortical thickness analysis revealed trends toward lower cortical thickness in occipitotemporal (ventral) and occipitoparietal (dorsal) regions in patients with visuoperceptual and visuospatial deficits, respectively. However, there was also considerable overlap in their patterns of cortical thinning. These findings suggest that different presentations of PCA represent points in a continuum of phenotypical variation.

  18. Visual Saliency Detection Based on Multiscale Deep CNN Features.

    PubMed

    Guanbin Li; Yizhou Yu

    2016-11-01

    Visual saliency is a fundamental problem in both cognitive and computational sciences, including computer vision. In this paper, we discover that a high-quality visual saliency model can be learned from multiscale features extracted using deep convolutional neural networks (CNNs), which have had many successes in visual recognition tasks. For learning such saliency models, we introduce a neural network architecture, which has fully connected layers on top of CNNs responsible for feature extraction at three different scales. The penultimate layer of our neural network has been confirmed to be a discriminative high-level feature vector for saliency detection, which we call deep contrast feature. To generate a more robust feature, we integrate handcrafted low-level features with our deep contrast feature. To promote further research and evaluation of visual saliency models, we also construct a new large database of 4447 challenging images and their pixelwise saliency annotations. Experimental results demonstrate that our proposed method is capable of achieving the state-of-the-art performance on all public benchmarks, improving the F-measure by 6.12% and 10%, respectively, on the DUT-OMRON data set and our new data set (HKU-IS), and lowering the mean absolute error by 9% and 35.3%, respectively, on these two data sets.
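    The reported gains are in F-measure and mean absolute error, the two standard saliency-benchmark metrics. A minimal sketch of both on toy maps (not the authors' evaluation code; benchmarks typically sweep or adapt the binarization threshold, whereas this sketch fixes it at 0.5):

```python
import numpy as np

def f_measure(pred, gt, beta2=0.3, thresh=0.5):
    """Weighted F-measure as used in saliency benchmarks
    (beta^2 = 0.3 emphasizes precision)."""
    binary = pred >= thresh
    tp = np.logical_and(binary, gt).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    denom = beta2 * precision + recall
    return (1 + beta2) * precision * recall / denom if denom else 0.0

def mae(pred, gt):
    """Mean absolute error between a saliency map and the ground truth."""
    return np.abs(pred - gt.astype(float)).mean()

gt = np.array([[0, 0, 1], [0, 1, 1]], dtype=bool)      # toy ground-truth mask
pred = np.array([[0.1, 0.2, 0.9], [0.3, 0.8, 0.7]])    # toy saliency map
print(f_measure(pred, gt), mae(pred, gt))
```

    Higher F-measure and lower MAE are better, which is why the abstract reports improvements in the former and reductions in the latter.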

  19. Honeybees (Apis mellifera) Learn Color Discriminations via Differential Conditioning Independent of Long Wavelength (Green) Photoreceptor Modulation

    PubMed Central

    Wijesekara Witharanage, Randika; Rosa, Marcello G. P.

    2012-01-01

    Background Recent studies on colour discrimination suggest that experience is an important factor in how a visual system processes spectral signals. In insects it has been shown that differential conditioning is important for processing fine colour discriminations. However, the visual system of many insects, including the honeybee, has a complex set of neural pathways, in which input from the long-wavelength-sensitive (‘green’) photoreceptor may be processed either as an independent achromatic signal or as part of a trichromatic opponent-colour system. Thus, a potential confound of colour learning in insects is the possibility that modulation of the ‘green’ photoreceptor could underlie observations. Methodology/Principal Findings We tested honeybee vision using light-emitting diodes centered on 414 and 424 nm wavelengths, which limit activation to the short-wavelength-sensitive (‘UV’) and medium-wavelength-sensitive (‘blue’) photoreceptors. The absolute irradiance spectra of the stimuli were measured and modelled at both receptor and colour processing levels, and stimuli were then presented to the bees in a Y-maze at a large visual angle (26°), to ensure chromatic processing. Sixteen bees were trained over 50 trials, using either appetitive differential conditioning (N = 8), or aversive-appetitive differential conditioning (N = 8). In both cases the bees slowly learned to discriminate between the target and distractor with significantly better accuracy than would be expected by chance. Control experiments confirmed that changing stimulus intensity in transfer tests does not significantly affect bee performance, and it was possible to replicate previous findings that bees do not learn similar colour stimuli with absolute conditioning. Conclusion Our data indicate that honeybee colour vision can be tuned to relatively small spectral differences, independent of ‘green’ photoreceptor contrast and brightness cues. We thus show that colour vision is at least partly experience-dependent, and behavioural plasticity plays an important role in how bees exploit colour information. PMID:23155394

  20. Does Presentation Format Influence Visual Size Discrimination in Tufted Capuchin Monkeys (Sapajus spp.)?

    PubMed Central

    Truppa, Valentina; Carducci, Paola; Trapanese, Cinzia; Hanus, Daniel

    2015-01-01

    Most experimental paradigms to study visual cognition in humans and non-human species are based on discrimination tasks involving the choice between two or more visual stimuli. To this end, different types of stimuli and procedures for stimuli presentation are used, which highlights the necessity to compare data obtained with different methods. The present study assessed whether, and to what extent, capuchin monkeys’ ability to solve a size discrimination problem is influenced by the type of procedure used to present the problem. Capuchins’ ability to generalise knowledge across different tasks was also evaluated. We trained eight adult tufted capuchin monkeys to select the larger of two stimuli of the same shape and different sizes by using pairs of food items (Experiment 1), computer images (Experiment 1) and objects (Experiment 2). Our results indicated that monkeys achieved the learning criterion faster with food stimuli compared to both images and objects. They also required consistently fewer trials with objects than with images. Moreover, female capuchins had higher levels of acquisition accuracy with food stimuli than with images. Finally, capuchins did not immediately transfer the solution of the problem acquired in one task condition to the other conditions. Overall, these findings suggest that – even in relatively simple visual discrimination problems where a single perceptual dimension (i.e., size) has to be judged – learning speed strongly depends on the mode of presentation. PMID:25927363

  1. Discrimination of holograms and real objects by pigeons (Columba livia) and humans (Homo sapiens).

    PubMed

    Stephan, Claudia; Steurer, Michael M; Aust, Ulrike

    2014-08-01

    The type of stimulus material employed in visual tasks is crucial to all comparative cognition research that involves object recognition. There is considerable controversy about the use of 2-dimensional stimuli and the impact that the lack of the 3rd dimension (i.e., depth) may have on animals' performance in tests for their visual and cognitive abilities. We report evidence of discrimination learning using a completely novel type of stimuli, namely, holograms. Like real objects, holograms provide full 3-dimensional shape information but they also offer many possibilities for systematically modifying the appearance of a stimulus. Hence, they provide a promising means for investigating visual perception and cognition of different species in a comparative way. We trained pigeons and humans to discriminate either between 2 real objects or between holograms of the same 2 objects, and we subsequently tested both species for the transfer of discrimination to the other presentation mode. The lack of any decrements in accuracy suggests that real objects and holograms were perceived as equivalent in both species and shows the general appropriateness of holograms as stimuli in visual tasks. A follow-up experiment involving the presentation of novel views of the training objects and holograms revealed some interspecies differences in rotational invariance, thereby confirming and extending the results of previous studies. Taken together, these results suggest that holograms may not only provide a promising tool for investigating yet unexplored issues, but their use may also lead to novel insights into some crucial aspects of comparative visual perception and categorization.

  2. Prestimulus EEG Power Predicts Conscious Awareness But Not Objective Visual Performance

    PubMed Central

    Veniero, Domenica

    2017-01-01

    Prestimulus oscillatory neural activity has been linked to perceptual outcomes during performance of psychophysical detection and discrimination tasks. Specifically, the power and phase of low frequency oscillations have been found to predict whether an upcoming weak visual target will be detected or not. However, the mechanisms by which baseline oscillatory activity influences perception remain unclear. Recent studies suggest that the frequently reported negative relationship between α power and stimulus detection may be explained by changes in detection criterion (i.e., increased target present responses regardless of whether the target was present/absent) driven by the state of neural excitability, rather than changes in visual sensitivity (i.e., more veridical percepts). Here, we recorded EEG while human participants performed a luminance discrimination task on perithreshold stimuli in combination with single-trial ratings of perceptual awareness. Our aim was to investigate whether the power and/or phase of prestimulus oscillatory activity predict discrimination accuracy and/or perceptual awareness on a trial-by-trial basis. Prestimulus power (3–28 Hz) was inversely related to perceptual awareness ratings (i.e., higher ratings in states of low prestimulus power/high excitability) but did not predict discrimination accuracy. In contrast, prestimulus oscillatory phase did not predict awareness ratings or accuracy in any frequency band. These results provide evidence that prestimulus α power influences the level of subjective awareness of threshold visual stimuli but does not influence visual sensitivity when a decision has to be made regarding stimulus features. Hence, we find a clear dissociation between the influence of ongoing neural activity on conscious awareness and objective performance. PMID:29255794

  3. Neural mechanisms of coarse-to-fine discrimination in the visual cortex.

    PubMed

    Purushothaman, Gopathy; Chen, Xin; Yampolsky, Dmitry; Casagrande, Vivien A

    2014-12-01

    Vision is a dynamic process that refines the spatial scale of analysis over time, as evidenced by a progressive improvement in the ability to detect and discriminate finer details. To understand coarse-to-fine discrimination, we studied the dynamics of spatial frequency (SF) response using reverse correlation in the primary visual cortex (V1) of the primate. In a majority of V1 cells studied, preferred SF either increased monotonically with time (group 1) or changed nonmonotonically, with an initial increase followed by a decrease (group 2). Monotonic shift in preferred SF occurred with or without an early suppression at low SFs. Late suppression at high SFs always accompanied nonmonotonic SF dynamics. Bayesian analysis showed that SF discrimination performance and best discriminable SFs changed with time in different ways in the two groups of neurons. In group 1 neurons, SF discrimination performance peaked on both left and right flanks of the SF tuning curve at about the same time. In group 2 neurons, peak discrimination occurred on the right flank (high SFs) later than on the left flank (low SFs). Group 2 neurons were also better discriminators of high SFs. We examined the relationship between the time at which SF discrimination performance peaked on either flank of the SF tuning curve and the corresponding best discriminable SFs in both neuronal groups. This analysis showed that the population best discriminable SF increased with time in V1. These results suggest neural mechanisms for coarse-to-fine discrimination behavior and that this process originates in V1 or earlier.

  4. The use of visual cues in gravity judgements on parabolic motion.

    PubMed

    Jörges, Björn; Hagenfeld, Lena; López-Moliner, Joan

    2018-06-21

    Evidence suggests that humans rely on an earth gravity prior for sensory-motor tasks like catching or reaching. Even under earth-discrepant conditions, this prior biases perception and action towards assuming a gravitational downwards acceleration of 9.81 m/s². This can be particularly detrimental in interactions with virtual environments employing earth-discrepant gravity conditions for their visual presentation. The present study thus investigates how well humans discriminate visually presented gravities and which cues they use to extract gravity from the visual scene. To this end, we employed a two-interval forced-choice design. In Experiment 1, participants had to judge which of two presented parabolas had the higher underlying gravity. We used two initial vertical velocities, two horizontal velocities and a constant target size. Experiment 2 added a manipulation of the reliability of the target size. Experiment 1 shows that participants have generally high discrimination thresholds for visually presented gravities, with Weber fractions of 13% to beyond 30%. We identified the rate of change of the elevation angle (ẏ) and the visual angle (θ) as major cues. Experiment 2 furthermore suggests that size variability has a small influence on discrimination thresholds, while at the same time larger size variability increases reliance on ẏ and decreases reliance on θ. All in all, even though we use all available information, humans display low precision when extracting the governing gravity from a visual scene, which might further impact our capabilities of adapting to earth-discrepant gravity conditions with visual information alone. Copyright © 2018. Published by Elsevier Ltd.
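    The two cues the abstract names — the elevation angle (whose rate of change is ẏ) and the visual angle θ — are simple geometric functions of the target's trajectory and the viewing geometry. A minimal sketch under an assumed geometry (a fronto-parallel launch at fixed viewing distance; this setup is my illustration, not the paper's apparatus):

    ```python
    import math

    def parabola_cues(t, v0y, vx, g, size, depth):
        """Hypothetical viewing geometry (an assumption for illustration):
        a target of diameter `size` is launched across the scene at viewing
        distance `depth`. Returns the elevation angle and the visual angle
        subtended by the target, the two quantities whose rates of change
        the study identifies as the major gravity cues."""
        y = v0y * t - 0.5 * g * t * t                # vertical position under gravity g
        x = vx * t                                   # horizontal position
        distance = math.sqrt(x * x + depth * depth)  # observer-to-target distance
        elevation = math.atan2(y, distance)                     # elevation angle (rad)
        visual_angle = 2.0 * math.atan2(size / 2.0, distance)   # angular size (rad)
        return elevation, visual_angle

    # Sample the cues early and late in flight for an earth-gravity parabola.
    early = parabola_cues(0.1, v0y=5.0, vx=2.0, g=9.81, size=0.3, depth=10.0)
    late = parabola_cues(1.0, v0y=5.0, vx=2.0, g=9.81, size=0.3, depth=10.0)
    ```

    Finite differences over such samples give the cue rates (ẏ, θ̇) that an observer could in principle combine to estimate the underlying g.
    
    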

  5. The Role of the Human Extrastriate Visual Cortex in Mirror Symmetry Discrimination: A TMS-Adaptation Study

    ERIC Educational Resources Information Center

    Cattaneo, Zaira; Mattavelli, Giulia; Papagno, Costanza; Herbert, Andrew; Silvanto, Juha

    2011-01-01

    The human visual system is able to efficiently extract symmetry information from the visual environment. Prior neuroimaging evidence has revealed symmetry-preferring neuronal representations in the dorsolateral extrastriate visual cortex; the objective of the present study was to investigate the necessity of these representations in symmetry…

  6. Enhanced Local Processing of Dynamic Visual Information in Autism: Evidence from Speed Discrimination

    ERIC Educational Resources Information Center

    Chen, Y.; Norton, D. J.; McBain, R.; Gold, J.; Frazier, J. A.; Coyle, J. T.

    2012-01-01

    An important issue for understanding visual perception in autism concerns whether individuals with this neurodevelopmental disorder possess an advantage in processing local visual information, and if so, what is the nature of this advantage. Perception of movement speed is a visual process that relies on computation of local spatiotemporal signals…

  7. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  8. Discrimination of transgenic soybean seeds by terahertz spectroscopy

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Liu, Changhong; Chen, Feng; Yang, Jianbo; Zheng, Lei

    2016-10-01

    Discrimination of genetically modified organisms is increasingly demanded by legislation and consumers worldwide. The feasibility of non-destructively discriminating glyphosate-resistant and conventional soybean seeds and their hybrid descendants was examined with a terahertz time-domain spectroscopy system combined with chemometrics. Principal component analysis (PCA), least squares-support vector machines (LS-SVM) and PCA-back propagation neural network (PCA-BPNN) models, with first- and second-derivative and standard normal variate (SNV) transformation pre-treatments, were applied to classify soybean seeds by genotype. The results demonstrated that clear differences among glyphosate-resistant, hybrid-descendant and conventional non-transformed soybean seeds could easily be visualized, with excellent classification (88.33% accuracy on the validation set) using LS-SVM on SNV pre-treated spectra. The results indicate that THz spectroscopy combined with chemometrics is a promising technique for distinguishing transgenic soybean seeds from non-transformed seeds with high efficiency and without any major sample preparation.
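    The chemometric pipeline the abstract describes (SNV pre-treatment, dimensionality reduction, supervised classification) can be sketched structurally. scikit-learn does not ship an LS-SVM, so an ordinary RBF SVM stands in, and the spectra are synthetic stand-ins — this shows the pipeline shape only, not the study's code or data:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    def snv(spectra):
        """Standard normal variate pre-treatment: scale each spectrum (row)
        to zero mean and unit standard deviation."""
        mean = spectra.mean(axis=1, keepdims=True)
        std = spectra.std(axis=1, keepdims=True)
        return (spectra - mean) / std

    # Synthetic stand-in for THz spectra: 2 classes x 25 samples x 200 points;
    # class 1 carries an absorption-like bump so a shape difference survives SNV.
    rng = np.random.default_rng(0)
    class0 = rng.normal(0.0, 1.0, (25, 200))
    class1 = rng.normal(0.0, 1.0, (25, 200))
    class1[:, 80:120] += 2.0
    X = np.vstack([class0, class1])
    y = np.array([0] * 25 + [1] * 25)

    # PCA for dimensionality reduction, then an RBF SVM (stand-in for LS-SVM).
    model = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
    model.fit(snv(X), y)
    accuracy = model.score(snv(X), y)
    ```

    In practice the score would be computed on a held-out validation set, as in the study, rather than on the training data as in this toy sketch.
    
    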

  9. Visual difference metric for realistic image synthesis

    NASA Astrophysics Data System (ADS)

    Bolin, Mark R.; Meyer, Gary W.

    1999-05-01

    An accurate and efficient model of human perception has been developed to control the placement of samples in a realistic image synthesis algorithm. Previous sampling techniques have sought to spread the error equally across the image plane. However, this approach neglects the fact that the renderings are intended to be displayed for a human observer. The human visual system has a varying sensitivity to error that is based upon the viewing context. This means that equivalent optical discrepancies can be very obvious in one situation and imperceptible in another. It is ultimately the perceptibility of this error that governs image quality and should be used as the basis of a sampling algorithm. This paper focuses on a simplified version of the Lubin Visual Discrimination Metric (VDM) that was developed for insertion into an image synthesis algorithm. The sampling VDM makes use of a Haar wavelet basis for the cortical transform and a less severe spatial pooling operation. The model was extended to color, including the effects of chromatic aberration. Comparisons are made between the execution time and visual difference map for the original Lubin and simplified visual difference metrics. Results for the realistic image synthesis algorithm are also presented.

  10. Redundancy reduction explains the expansion of visual direction space around the cardinal axes.

    PubMed

    Perrone, John A; Liston, Dorion B

    2015-06-01

    Motion direction discrimination in humans is worse for oblique directions than for the cardinal directions (the oblique effect). For some unknown reason, the human visual system makes systematic errors in the estimation of particular motion directions; a direction displacement near a cardinal axis appears larger than it really is, whereas the same displacement near an oblique axis appears to be smaller. Although the perceptual effects are robust and are clearly measurable in smooth pursuit eye movements, all attempts to identify the neural underpinnings for the oblique effect have failed. Here we show that a model of image velocity estimation based on the known properties of neurons in primary visual cortex (V1) and the middle temporal (MT) visual area of the primate brain produces the oblique effect. We also provide an explanation for the unusual asymmetric patterns of inhibition that have been found surrounding MT neurons. These patterns are consistent with a mechanism within the visual system that prevents redundant velocity signals from being passed on to the next motion-integration stage (dorsal medial superior temporal area, MSTd). We show that model redundancy-reduction mechanisms within the MT-MSTd pathway produce the oblique effect. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Evaluation of a visual layering methodology for colour coding control room displays.

    PubMed

    Van Laar, Darren; Deshe, Ofer

    2002-07-01

    Eighteen people participated in an experiment in which they were asked to search for targets on control-room-like displays produced using three different coding methods. The monochrome coding method displayed the information in black and white only; the maximally discriminable method contained colours chosen for their high perceptual discriminability; and the visual layers method contained colours, developed from psychological and cartographic principles, which grouped information into a perceptual hierarchy. The visual layers method produced significantly faster search times than the other two coding methods, which did not differ significantly from each other. Search time also differed significantly for presentation order and for the method x order interaction. There was no significant difference between the methods in the number of errors made. Participants clearly preferred the visual layers coding method. Proposals are made for the design of experiments to further test and develop the visual layers colour coding methodology.

  12. Left hemispheric advantage for numerical abilities in the bottlenose dolphin.

    PubMed

    Kilian, Annette; von Fersen, Lorenzo; Güntürkün, Onur

    2005-02-28

    In a two-choice discrimination paradigm, a bottlenose dolphin discriminated relational dimensions between visual numerosity stimuli under monocular viewing conditions. After prior binocular acquisition of the task, two monocular test series with different number stimuli were conducted. In accordance with recent studies on visual lateralization in the bottlenose dolphin, our results revealed an overall advantage of the right visual field. Due to the complete decussation of the optic nerve fibers, this suggests a specialization of the left hemisphere for analysing relational features between stimuli as required in tests for numerical abilities. These processes are typically right hemisphere-based in other mammals (including humans) and birds. The present data provide further evidence for a general right visual field advantage in bottlenose dolphins for visual information processing. It is thus assumed that dolphins possess a unique functional architecture of their cerebral asymmetries. © 2004 Elsevier B.V. All rights reserved.

  13. Perceived visual speed constrained by image segmentation

    NASA Technical Reports Server (NTRS)

    Verghese, P.; Stone, L. S.

    1996-01-01

    Little is known about how or where the visual system parses the visual scene into objects or surfaces. However, it is generally assumed that the segmentation and grouping of pieces of the image into discrete entities is due to 'later' processing stages, after the 'early' processing of the visual image by local mechanisms selective for attributes such as colour, orientation, depth, and motion. Speed perception is also thought to be mediated by early mechanisms tuned for speed. Here we show that manipulating the way in which an image is parsed changes the way in which local speed information is processed. Manipulations that cause multiple stimuli to appear as parts of a single patch degrade speed discrimination, whereas manipulations that perceptually divide a single large stimulus into parts improve discrimination. These results indicate that processes as early as speed perception may be constrained by the parsing of the visual image into discrete entities.

  14. Infants Discriminate Voicing and Place of Articulation with Reduced Spectral and Temporal Modulation Cues

    ERIC Educational Resources Information Center

    Cabrera, Laurianne; Lorenzi, Christian; Bertoncini, Josiane

    2015-01-01

    Purpose: This study assessed the role of spectro-temporal modulation cues in the discrimination of 2 phonetic contrasts (voicing and place) for young infants. Method: A visual-habituation procedure was used to assess the ability of French-learning 6-month-old infants with normal hearing to discriminate voiced versus unvoiced (/aba/-/apa/) and…

  15. Impaired Discrimination Learning in Mice Lacking the NMDA Receptor NR2A Subunit

    ERIC Educational Resources Information Center

    Brigman, Jonathan L.; Feyder, Michael; Saksida, Lisa M.; Bussey, Timothy J.; Mishina, Masayoshi; Holmes, Andrew

    2008-01-01

    N-Methyl-D-aspartate receptors (NMDARs) mediate certain forms of synaptic plasticity and learning. We used a touchscreen system to assess NR2A subunit knockout mice (KO) for (1) pairwise visual discrimination and reversal learning and (2) acquisition and extinction of an instrumental response requiring no pairwise discrimination. NR2A KO mice…

  16. Dorso-Lateral Frontal Cortex of the Ferret Encodes Perceptual Difficulty during Visual Discrimination

    PubMed Central

    Zhou, Zhe Charles; Yu, Chunxiu; Sellers, Kristin K.; Fröhlich, Flavio

    2016-01-01

    Visual discrimination requires sensory processing followed by a perceptual decision. Despite a growing understanding of visual areas in this behavior, it is unclear what role top-down signals from prefrontal cortex play, in particular as a function of perceptual difficulty. To address this gap, we investigated how neurons in dorso-lateral frontal cortex (dl-FC) of freely-moving ferrets encode task variables in a two-alternative forced choice visual discrimination task with high- and low-contrast visual input. About two-thirds of all recorded neurons in dl-FC were modulated by at least one of the two task variables, task difficulty and target location. More neurons in dl-FC preferred the hard trials; no such preference bias was found for target location. In individual neurons, this preference for specific task types was limited to brief epochs. Finally, optogenetic stimulation confirmed the functional role of the activity in dl-FC before target touch; suppression of activity in pyramidal neurons with the ArchT silencing opsin resulted in a decrease in reaction time to touch the target but not to retrieve reward. In conclusion, dl-FC activity is differentially recruited for high perceptual difficulty in the freely-moving ferret and the resulting signal may provide top-down behavioral inhibition. PMID:27025995

  17. Dorso-Lateral Frontal Cortex of the Ferret Encodes Perceptual Difficulty during Visual Discrimination.

    PubMed

    Zhou, Zhe Charles; Yu, Chunxiu; Sellers, Kristin K; Fröhlich, Flavio

    2016-03-30

    Visual discrimination requires sensory processing followed by a perceptual decision. Despite a growing understanding of visual areas in this behavior, it is unclear what role top-down signals from prefrontal cortex play, in particular as a function of perceptual difficulty. To address this gap, we investigated how neurons in dorso-lateral frontal cortex (dl-FC) of freely-moving ferrets encode task variables in a two-alternative forced choice visual discrimination task with high- and low-contrast visual input. About two-thirds of all recorded neurons in dl-FC were modulated by at least one of the two task variables, task difficulty and target location. More neurons in dl-FC preferred the hard trials; no such preference bias was found for target location. In individual neurons, this preference for specific task types was limited to brief epochs. Finally, optogenetic stimulation confirmed the functional role of the activity in dl-FC before target touch; suppression of activity in pyramidal neurons with the ArchT silencing opsin resulted in a decrease in reaction time to touch the target but not to retrieve reward. In conclusion, dl-FC activity is differentially recruited for high perceptual difficulty in the freely-moving ferret and the resulting signal may provide top-down behavioral inhibition.

  18. Electrophysiological Evidence for Ventral Stream Deficits in Schizophrenia Patients

    PubMed Central

    Plomp, Gijs; Roinishvili, Maya; Chkonia, Eka; Kapanadze, George; Kereselidze, Maia; Brand, Andreas; Herzog, Michael H.

    2013-01-01

    Schizophrenic patients suffer from many deficits including visual, attentional, and cognitive ones. Visual deficits are of particular interest because they are at the fore-end of information processing and can provide clear examples of interactions between sensory, perceptual, and higher cognitive functions. Visual deficits in schizophrenic patients are often attributed to impairments in the dorsal (where) rather than the ventral (what) stream of visual processing. We used a visual-masking paradigm in which patients and matched controls discriminated small vernier offsets. We analyzed the evoked electroencephalography (EEG) responses and applied distributed electrical source imaging techniques to estimate activity differences between conditions and groups throughout the brain. Compared with controls, patients showed strongly reduced discrimination accuracy, confirming previous work. The behavioral deficits corresponded to pronounced decreases in the evoked EEG response at around 200 ms after stimulus onset. At this latency, patients showed decreased activity for targets in left parietal cortex (dorsal stream), but the decrease was most pronounced in lateral occipital cortex (in the ventral stream). These deficiencies occurred at latencies that reflect object processing and fine shape discriminations. We relate the reduced ventral stream activity to deficient top-down processing of target stimuli and provide a framework for relating the commonly observed dorsal stream deficiencies with the currently observed ventral stream deficiencies. PMID:22258884

  19. Electrophysiological evidence for ventral stream deficits in schizophrenia patients.

    PubMed

    Plomp, Gijs; Roinishvili, Maya; Chkonia, Eka; Kapanadze, George; Kereselidze, Maia; Brand, Andreas; Herzog, Michael H

    2013-05-01

    Schizophrenic patients suffer from many deficits including visual, attentional, and cognitive ones. Visual deficits are of particular interest because they are at the fore-end of information processing and can provide clear examples of interactions between sensory, perceptual, and higher cognitive functions. Visual deficits in schizophrenic patients are often attributed to impairments in the dorsal (where) rather than the ventral (what) stream of visual processing. We used a visual-masking paradigm in which patients and matched controls discriminated small vernier offsets. We analyzed the evoked electroencephalography (EEG) responses and applied distributed electrical source imaging techniques to estimate activity differences between conditions and groups throughout the brain. Compared with controls, patients showed strongly reduced discrimination accuracy, confirming previous work. The behavioral deficits corresponded to pronounced decreases in the evoked EEG response at around 200 ms after stimulus onset. At this latency, patients showed decreased activity for targets in left parietal cortex (dorsal stream), but the decrease was most pronounced in lateral occipital cortex (in the ventral stream). These deficiencies occurred at latencies that reflect object processing and fine shape discriminations. We relate the reduced ventral stream activity to deficient top-down processing of target stimuli and provide a framework for relating the commonly observed dorsal stream deficiencies with the currently observed ventral stream deficiencies.

  20. Color names, color categories, and color-cued visual search: Sometimes, color perception is not categorical

    PubMed Central

    Brown, Angela M; Lindsey, Delwin T; Guckes, Kevin M

    2011-01-01

    The relation between colors and their names is a classic case study for investigating the Sapir-Whorf hypothesis that categorical perception is imposed on perception by language. Here, we investigate the Sapir-Whorf prediction that visual search for a green target presented among blue distractors (or vice versa) should be faster than search for a green target presented among distractors of a different color of green (or for a blue target among different blue distractors). Gilbert, Regier, Kay & Ivry (2006) reported that this Sapir-Whorf effect is restricted to the right visual field (RVF), because the major brain language centers are in the left cerebral hemisphere. We found no categorical effect at the Green|Blue color boundary, and no categorical effect restricted to the RVF. Scaling of perceived color differences by Maximum Likelihood Difference Scaling (MLDS) also showed no categorical effect, including no effect specific to the RVF. Two models fit the data: a color difference model based on MLDS and a standard opponent-colors model of color discrimination based on the spectral sensitivities of the cones. Neither of these models, nor any of our data, suggested categorical perception of colors at the Green|Blue boundary, in either visual field. PMID:21980188

  1. Tailoring a psychophysical discrimination experiment upon assessment of the psychometric function: Predictions and results

    NASA Astrophysics Data System (ADS)

    Vilardi, Andrea; Tabarelli, Davide; Ricci, Leonardo

    2015-02-01

    Decision making is a widespread research topic and plays a crucial role in neuroscience as well as in other research and application fields of, for example, biology, medicine and economics. The most basic implementation of decision making, namely binary discrimination, is successfully interpreted by means of signal detection theory (SDT), a statistical model that is deeply linked to physics. An additional, widespread tool to investigate discrimination ability is the psychometric function, which measures the probability of a given response as a function of the magnitude of a physical quantity underlying the stimulus. However, the link between psychometric functions and binary discrimination experiments is often neglected or misinterpreted. The aim of the present paper is to provide a detailed description of an experimental investigation on a prototypical discrimination task and to discuss the results in terms of SDT. To this purpose, we provide an outline of the theory and describe the implementation of two behavioural experiments in the visual modality: upon assessment of the so-called psychometric function, we show how to tailor a binary discrimination experiment on performance and decisional bias, and how to measure these quantities on a statistical basis. Attention is devoted to the evaluation of uncertainties, an aspect which is also often overlooked in the scientific literature.
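    The SDT quantities the abstract refers to — performance (sensitivity, d′) and decisional bias (criterion, c) — are standard estimates computed from hit and false-alarm rates. A minimal sketch, not the authors' implementation; the log-linear correction and the example counts are illustrative choices:

    ```python
    from statistics import NormalDist

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        """Estimate sensitivity (d') and decision criterion (c) from the
        outcome counts of a binary discrimination experiment. A log-linear
        correction (add 0.5 to each count) avoids infinite z-scores when a
        rate would be exactly 0 or 1."""
        z = NormalDist().inv_cdf
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        d_prime = z(hit_rate) - z(fa_rate)
        criterion = -0.5 * (z(hit_rate) + z(fa_rate))
        return d_prime, criterion

    # Illustrative counts only (not the paper's data).
    d, c = sdt_measures(hits=80, misses=20, false_alarms=30, correct_rejections=70)
    ```

    A negative criterion indicates a liberal bias (a tendency to respond "signal"), independently of the sensitivity d′.
    
    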

  2. Examining the relationship between skilled music training and attention.

    PubMed

    Wang, Xiao; Ossher, Lynn; Reuter-Lorenz, Patricia A

    2015-11-01

    While many aspects of cognition have been investigated in relation to skilled music training, surprisingly little work has examined the connection between music training and attentional abilities. The present study investigated the performance of skilled musicians on cognitively demanding sustained attention tasks, measuring both temporal and visual discrimination over a prolonged duration. Participants with extensive formal music training were found to have superior performance on a temporal discrimination task, but not a visual discrimination task, compared to participants with no music training. In addition, no differences were found between groups in vigilance decrement in either type of task. Although no differences were evident in vigilance per se, the results indicate that performance in an attention-demanding temporal discrimination task was superior in individuals with extensive music training. We speculate that this basic cognitive ability may contribute to advantages that musicians show in other cognitive measures. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Lack of power enhances visual perceptual discrimination.

    PubMed

    Weick, Mario; Guinote, Ana; Wilkinson, David

    2011-09-01

    Powerless individuals face much challenge and uncertainty. As a consequence, they are highly vigilant and closely scrutinize their social environments. The aim of the present research was to determine whether these qualities enhance performance in more basic cognitive tasks involving simple visual feature discrimination. To test this hypothesis, participants performed a series of perceptual matching and search tasks involving colour, texture, and size discrimination. As predicted, those primed with powerlessness generated shorter reaction times and made fewer eye movements than either powerful or control participants. The results indicate that the heightened vigilance shown by powerless individuals is associated with an advantage in performing simple types of psychophysical discrimination. These findings highlight, for the first time, an underlying competency in perceptual cognition that sets powerless individuals above their powerful counterparts, an advantage that may reflect functional adaptation to the environmental challenge and uncertainty that they face. © 2011 Canadian Psychological Association

  4. Pavlovian Discriminative Stimulus Effects of Methamphetamine in Male Japanese quail (Coturnix japonica)

    PubMed Central

    Bolin, B. Levi; Singleton, Destiny L.; Akins, Chana K.

    2014-01-01

    Pavlovian drug discrimination (DD) procedures demonstrate that interoceptive drug stimuli may come to control behavior by informing the status of conditional relationships between stimuli and outcomes. This technique may provide insight into processes that contribute to drug-seeking, relapse, and other maladaptive behaviors associated with drug abuse. The purpose of the current research was to establish a model of Pavlovian DD in male Japanese quail. A Pavlovian conditioning procedure was used such that 3.0 mg/kg methamphetamine served as a feature positive stimulus for brief periods of visual access to a female quail and approach behavior was measured. After acquisition training, generalization tests were conducted with cocaine, nicotine, and haloperidol under extinction conditions. SCH 23390 was used to investigate the involvement of the dopamine D1 receptor subtype in the methamphetamine discriminative stimulus. Results showed that cocaine fully substituted for methamphetamine but nicotine only partially substituted for methamphetamine in quail. Haloperidol dose-dependently decreased approach behavior. Pretreatment with SCH 23390 modestly attenuated the methamphetamine discrimination suggesting that the D1 receptor subtype may be involved in the discriminative stimulus effects of methamphetamine. The findings are discussed in relation to drug abuse and associated negative health consequences. PMID:24965811

  5. Is attention based on spatial contextual memory preferentially guided by low spatial frequency signals?

    PubMed

    Patai, Eva Zita; Buckley, Alice; Nobre, Anna Christina

    2013-01-01

    A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, either high or low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model that LSFs activate contextual memories, which in turn bias attention and facilitate perception.

  6. Is Attention Based on Spatial Contextual Memory Preferentially Guided by Low Spatial Frequency Signals?

    PubMed Central

    Patai, Eva Zita; Buckley, Alice; Nobre, Anna Christina

    2013-01-01

    A popular model of visual perception states that coarse information (carried by low spatial frequencies) along the dorsal stream is rapidly transmitted to prefrontal and medial temporal areas, activating contextual information from memory, which can in turn constrain detailed input carried by high spatial frequencies arriving at a slower rate along the ventral visual stream, thus facilitating the processing of ambiguous visual stimuli. We were interested in testing whether this model contributes to memory-guided orienting of attention. In particular, we asked whether global, low-spatial frequency (LSF) inputs play a dominant role in triggering contextual memories in order to facilitate the processing of the upcoming target stimulus. We explored this question over four experiments. The first experiment replicated the LSF advantage reported in perceptual discrimination tasks by showing that participants were faster and more accurate at matching a low spatial frequency version of a scene, compared to a high spatial frequency version, to its original counterpart in a forced-choice task. The subsequent three experiments tested the relative contributions of low versus high spatial frequencies during memory-guided covert spatial attention orienting tasks. Replicating the effects of memory-guided attention, pre-exposure to scenes associated with specific spatial memories for target locations (memory cues) led to higher perceptual discrimination and faster response times to identify targets embedded in the scenes. However, either high or low spatial frequency cues were equally effective; LSF signals did not selectively or preferentially contribute to the memory-driven attention benefits to performance. Our results challenge a generalized model that LSFs activate contextual memories, which in turn bias attention and facilitate perception. PMID:23776509

  7. THE VISUAL DISCRIMINATION OF INTENSITY AND THE WEBER-FECHNER LAW

    PubMed Central

    Hecht, Selig

    1924-01-01

    1. A study of the historical development of the Weber-Fechner law shows that it fails to describe intensity perception; first, because it is based on observations which do not record intensity discrimination accurately, and second, because it omits the essentially discontinuous nature of the recognition of intensity differences. 2. There is presented a series of data, assembled from various sources, which proves that in the visual discrimination of intensity the threshold difference ΔI bears no constant relation to the intensity I. The evidence shows unequivocally that as the intensity rises, the ratio ΔI/I first decreases and then increases. 3. The data are then subjected to analysis in terms of a photochemical system already proposed for the visual activity of the rods and cones. It is found that for the retinal elements to discriminate between one intensity and the next perceptible one, the transition from one to the other must involve the decomposition of a constant amount of photosensitive material. 4. The magnitude of this unitary increment in the quantity of photochemical action is greater for the rods than for the cones. Therefore, below a certain critical illumination—the cone threshold—intensity discrimination is controlled by the rods alone, but above this point it is determined by the cones alone. 5. The unitary increments in retinal photochemical action may be interpreted as being recorded by each rod and cone; or as conditioning the variability of the retinal cells so that each increment involves a constant increase in the number of active elements; or as a combination of the two interpretations. 6. Comparison with critical data of such diverse nature as dark adaptation, absolute thresholds, and visual acuity shows that the analysis is consistent with well established facts of vision. PMID:19872133
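    The quantity at issue is the Weber fraction ΔI/I: the Weber-Fechner law asserts it is constant across intensities, while the data Hecht assembles show it first falling and then rising. A tiny sketch of the computation, using hypothetical threshold pairs chosen only to display that U-shaped trend (explicitly not Hecht's measurements):

    ```python
    def weber_fraction(intensity, threshold_increment):
        """Weber fraction ΔI/I: the just-discriminable intensity increment
        expressed as a proportion of the base intensity. The Weber-Fechner
        law predicts this ratio is constant in I."""
        return threshold_increment / intensity

    # Hypothetical (I, ΔI) pairs for illustration only -- not Hecht's data.
    thresholds = [(0.01, 0.005), (1.0, 0.02), (100.0, 2.0), (10000.0, 500.0)]
    fractions = [weber_fraction(i, di) for i, di in thresholds]
    ```

    With these illustrative numbers the fraction drops from the lowest intensity, plateaus, and rises again at high intensity, which is the non-constancy the abstract describes.
    
    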

  8. Visual Deficit in Albino Rats Following Fetal X Irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VAN DER ELST, DIRK H.; PORTER, PAUL B.; SHARP, JOSEPH C.

    1963-02-01

    To investigate the effect of radiation on visual ability, five groups of rats on the 15th day of gestation received x irradiation in doses of 0, 50, 75, 100, or 150 r at 50 r/min. Two-thirds of the newborn rats died or were killed and eaten during the first postnatal week. The 75- and 50-r groups were lost entirely. The cannibalism occurred in all groups, so that its cause was uncertain. The remaining rats, which as fetuses had received 0, 100, and 150 r, were tested for visual discrimination in a water-flooded T. All 3 groups discriminated a lighted escape ladder from the unlighted arm of the T with near-equal facility. Thereafter, as the light was dimmed progressively, performance declined in relation to dose. With the light turned off, but the bulb and ladder visible in ambient illumination, the 150-r group performed at chance, the 100-r group reliably better, and the control group better still. Thus, in the more precise task the irradiated animals failed. Since irradiation on the 15th day primarily damages the cortex, central blindness seems the most likely explanation. All animals had previously demonstrated their ability to solve the problem conceptually; hence a conclusion of visual deficiency seems justified. The similar performances of all groups during the easiest light discrimination test showed that the heavily irradiated and severely injured animals of the 150-r group were nonetheless able to learn readily. Finally, contrary to earlier studies in which irradiated rats were retarded in discriminating a light in a Skinner box, present tests reveal impairment neither in learning rate nor light discrimination.

  9. The effects of alphabet and expertise on letter perception

    PubMed Central

    Wiley, Robert W.; Wilson, Colin; Rapp, Brenda

    2016-01-01

    Long-standing questions in human perception concern the nature of the visual features that underlie letter recognition and the extent to which the visual processing of letters is affected by differences in alphabets and levels of viewer expertise. We examined these issues in a novel approach using a same-different judgment task on pairs of letters from the Arabic alphabet with two participant groups—one with no prior exposure to Arabic and one with reading proficiency. Hierarchical clustering and linear mixed-effects modeling of reaction times and accuracy provide evidence that both the specific characteristics of the alphabet and observers’ previous experience with it affect how letters are perceived and visually processed. The findings of this research further our understanding of the multiple factors that affect letter perception and support the view of a visual system that dynamically adjusts its weighting of visual features as expert readers come to more efficiently and effectively discriminate the letters of the specific alphabet they are viewing. PMID:26913778

  10. Visually Evoked Potential Markers of Concussion History in Patients with Convergence Insufficiency

    PubMed Central

    Poltavski, Dmitri; Lederer, Paul; Cox, Laurie Kopko

    2017-01-01

    Purpose: We investigated whether differences in the pattern visual evoked potentials exist between patients with convergence insufficiency and those with convergence insufficiency and a history of concussion, using stimuli designed to differentiate between magnocellular (transient) and parvocellular (sustained) neural pathways. Methods: Sustained stimuli included 2-rev/s, 85% contrast checkerboard patterns of 1- and 2-degree check sizes, whereas transient stimuli comprised 4-rev/s, 10% contrast vertical sinusoidal gratings with column widths of 0.25 and 0.50 cycles/degree. We tested two models: an a priori clinical model based on an assumption of at least a minimal (beyond the instrumentation's margin of error) 2-millisecond lag of transient response latencies behind sustained response latencies in concussed patients, and a statistical model derived from the sample data. Results: Both models discriminated between concussed and nonconcussed groups significantly above chance (with 76% and 86% accuracy, respectively). In the statistical model, patients with mean vertical sinusoidal grating response latencies greater than 119 milliseconds to 0.25-cycle/degree stimuli (or mean vertical sinusoidal latencies >113 milliseconds to 0.50-cycle/degree stimuli) and mean vertical sinusoidal grating amplitudes of less than 14.75 μV to 0.50-cycle/degree stimuli were classified as having had a history of concussion. The resultant receiver operating characteristic curve for this model had excellent discrimination between the concussed and nonconcussed groups (area under the curve = 0.857; P < .01), with sensitivity of 0.92 and specificity of 0.80. Conclusions: The results suggest a promising electrophysiological approach to identifying individuals with convergence insufficiency and a history of concussion. PMID:28609417

  11. Auditory processing deficits in bipolar disorder with and without a history of psychotic features.

    PubMed

    Zenisek, RyAnna; Thaler, Nicholas S; Sutton, Griffin P; Ringdahl, Erik N; Snyder, Joel S; Allen, Daniel N

    2015-11-01

    Auditory perception deficits have been identified in schizophrenia (SZ) and linked to dysfunction in the auditory cortex. Given that psychotic symptoms, including auditory hallucinations, are also seen in bipolar disorder (BD), it may be that individuals with BD who also exhibit psychotic symptoms demonstrate a similar impairment in auditory perception. Fifty individuals with SZ, 30 individuals with bipolar I disorder with a history of psychosis (BD+), 28 individuals with bipolar I disorder with no history of psychotic features (BD-), and 29 normal controls (NC) were administered a tone discrimination task and an emotion recognition task. Mixed-model analyses of covariance with planned comparisons indicated that individuals with BD+ performed at a level that was intermediate between those with BD- and those with SZ on the more difficult condition of the tone discrimination task and on the auditory condition of the emotion recognition task. There were no differences between the BD+ and BD- groups on the visual or auditory-visual affect recognition conditions. Regression analyses indicated that performance on the tone discrimination task predicted performance on all conditions of the emotion recognition task. Auditory hallucinations in BD+ were not related to performance on either task. Our findings suggested that, although deficits in frequency discrimination and emotion recognition are more severe in SZ, these impairments extend to BD+. Although our results did not support the idea that auditory hallucinations may be related to these deficits, they indicated that basic auditory deficits may be a marker for psychosis, regardless of SZ or BD diagnosis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. [Changes in the parameters of the visual analyzer in child users of mobile communication: a longitudinal study].

    PubMed

    Khorseva, N I; Grigor'ev, Iu G; Gorbunova, N V

    2014-01-01

    The paper presents the results of longitudinal monitoring of changes in the parameters of simple visual-motor reaction, visual acuity, and the rate of visual discrimination in child users of mobile communication, which indicate the variability of the possible effects of mobile-phone radiation on the visual system of children.

  13. The Emotion Recognition System Based on Autoregressive Model and Sequential Forward Feature Selection of Electroencephalogram Signals

    PubMed Central

    Hatamikia, Sepideh; Maghooli, Keivan; Nasrabadi, Ali Motie

    2014-01-01

    Electroencephalogram (EEG) is one of the useful biological signals for distinguishing different brain diseases and mental states. In recent years, detecting emotional states from biological signals has attracted increasing attention, and several feature extraction methods and classifiers have been proposed to recognize emotions from EEG signals. In this research, we introduce an emotion recognition system using an autoregressive (AR) model, sequential forward feature selection (SFS), and a K-nearest neighbor (KNN) classifier applied to EEG signals recorded during emotional audio-visual inductions. The main purpose of this paper is to investigate the performance of AR features in the classification of emotional states. To achieve this goal, a well-known AR method (Burg's method), based on the Levinson-Durbin recursive algorithm, is used and AR coefficients are extracted as feature vectors. In the next step, two different feature selection methods, based on the SFS algorithm and the Davies-Bouldin index, are used in order to decrease computational complexity and feature redundancy; then, three classifiers, KNN, quadratic discriminant analysis, and linear discriminant analysis, are used to discriminate two and three classes of valence and arousal levels. The proposed method is evaluated on EEG signals from an available database for emotion analysis using physiological signals, recorded from 32 participants during forty 1-minute audio-visual inductions. According to the results, AR features are efficient for recognizing emotional states from EEG signals, and KNN performs better than the two other classifiers in discriminating both two and three valence/arousal classes. The results also show that the SFS method improves accuracies by almost 10-15% compared to Davies-Bouldin-based feature selection. The best accuracies are 72.33% and 74.20% for two classes of valence and arousal, and 61.10% and 65.16% for three classes, respectively. PMID:25298928
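    The pipeline this record describes (AR coefficients as feature vectors, then a nearest-neighbor vote) can be sketched in a few lines. The sketch below is illustrative only: it substitutes a plain Levinson-Durbin autocorrelation estimator for Burg's method, uses synthetic AR signals in place of EEG, and all function names are our own.

```python
import numpy as np

def ar_coeffs(x, order):
    """AR coefficients via the Levinson-Durbin recursion on the biased
    autocorrelation (a simple stand-in for the Burg estimator)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)]) / n
    a, e = np.zeros(order), r[0]
    for m in range(order):
        k = (r[m + 1] - a[:m] @ r[m:0:-1]) / e   # reflection coefficient
        prev = a[:m].copy()
        a[:m] = prev - k * prev[::-1]
        a[m] = k
        e *= 1.0 - k * k                          # updated prediction error
    return a

def simulate_ar(coefs, n, rng, burn=100):
    """Generate n samples of a stable AR process driven by unit white noise."""
    x = np.zeros(n + burn)
    for t in range(len(coefs), len(x)):
        x[t] = sum(c * x[t - 1 - i] for i, c in enumerate(coefs))
        x[t] += rng.standard_normal()
    return x[burn:]

def knn_predict(Xtr, ytr, Xte, k=3):
    """Plain K-nearest-neighbor majority vote with Euclidean distance."""
    preds = []
    for t in Xte:
        idx = np.argsort(np.linalg.norm(Xtr - t, axis=1))[:k]
        preds.append(np.bincount(ytr[idx]).argmax())
    return np.array(preds)

# Two synthetic "states": a low-pass and a high-pass AR(2) process.
rng = np.random.default_rng(0)
procs = {0: [0.9, -0.4], 1: [-0.9, -0.4]}
Xtr, ytr, Xte, yte = [], [], [], []
for label, coefs in procs.items():
    feats = [ar_coeffs(simulate_ar(coefs, 400, rng), order=4) for _ in range(30)]
    Xtr += feats[:20]; ytr += [label] * 20
    Xte += feats[20:]; yte += [label] * 10
pred = knn_predict(np.array(Xtr), np.array(ytr), np.array(Xte))
acc = float(np.mean(pred == np.array(yte)))
```

    With SFS one would then greedily add whichever coefficient dimensions most improve this cross-validated accuracy, which is the feature-selection step the paper evaluates.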

  14. Impaired Filtering of Behaviourally Irrelevant Visual Information in Dyslexia

    ERIC Educational Resources Information Center

    Roach, Neil W.; Hogben, John H.

    2007-01-01

    A recent proposal suggests that dyslexic individuals suffer from attentional deficiencies, which impair the ability to selectively process incoming visual information. To investigate this possibility, we employed a spatial cueing procedure in conjunction with a single fixation visual search task measuring thresholds for discriminating the…

  15. The development of damage identification methods for buildings with image recognition and machine learning techniques utilizing aerial photographs of the 2016 Kumamoto earthquake

    NASA Astrophysics Data System (ADS)

    Shohei, N.; Nakamura, H.; Fujiwara, H.; Naoichi, M.; Hiromitsu, T.

    2017-12-01

    Schematic information on the damage situation immediately after an earthquake, obtained from photographs shot from an airplane, is important for investigation and decision-making by the authorities. In the case of the 2016 Kumamoto earthquake, we acquired more than 1,800 orthographic projection photographs of the damaged areas. These photos were taken between April 16th and 19th from airplanes; we then classified the damage to every building into 4 levels and organized the results as approximately 296,000 GIS records corresponding to the fundamental geospatial data published by the Geospatial Information Authority of Japan. These data were compiled through the effort of hundreds of engineers. However, manual processing alone is not considered practical for more extensive disasters such as a Nankai Trough earthquake. We have therefore been developing an automatic damage identification method utilizing image recognition and machine learning techniques. First, we extracted training data for more than 10,000 buildings, evenly distributed across the 4 damage grades. With these training data, we raster-scan entire images and clip patch images representing each damage level. Using these patches, we have been developing discriminant models in two ways. One is a model using a Support Vector Machine (SVM): a feature vector is extracted from each patch image, histogram densities are computed following the Bag of Visual Words (BoVW) approach, and the SVM then separates the damage grades. The other is a multi-layered neural network: patch images and visually judged damage levels are fed to the network, and the learning parameters are optimized with the error backpropagation method.
    With both discriminant models, we will discriminate damage levels patch by patch and generate an image showing the building damage situation. This should enable faster and more widespread damage detection than visual judgement. Acknowledgment: This work was supported by CSTI through the Cross-ministerial Strategic Innovation Promotion Program (SIP), titled "Enhancement of societal resiliency against natural disasters" (Funding agency: JST).
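    The SVM branch described above hinges on the Bag of Visual Words step: each patch's local descriptors are assigned to their nearest codewords and counted into a normalized histogram, which then serves as the classifier input. A minimal sketch of that histogram step (names and toy data are ours; in practice the codebook would come from clustering descriptors of the training patches):

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and return
    the L1-normalized word-count histogram (the BoVW vector fed to the SVM)."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)                     # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy 2-D descriptors and a 3-word codebook.
codebook = np.array([[0.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
patch_descriptors = np.array([[0.2, -0.1], [9.8, 10.3], [10.1, 9.9], [0.1, 9.7]])
h = bovw_histogram(patch_descriptors, codebook)      # -> [0.25, 0.5, 0.25]
```

    One such histogram per patch, paired with its damage grade, is exactly the kind of fixed-length training example an SVM with a histogram-friendly kernel can separate.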

  16. Colour discrimination and categorisation in Williams syndrome.

    PubMed

    Farran, Emily K; Cranwell, Matthew B; Alvarez, James; Franklin, Anna

    2013-10-01

    Individuals with Williams syndrome (WS) present with impaired functioning of the dorsal visual stream relative to the ventral visual stream. As such, little attention has been given to ventral stream functions in WS. We investigated colour processing, a predominantly ventral stream function, for the first time in nineteen individuals with Williams syndrome. Colour discrimination was assessed using the Farnsworth-Munsell 100 hue test. Colour categorisation was assessed using a match-to-sample test and a colour naming task. A visual search task was also included as a measure of sensitivity to the size of perceptual colour difference. Results showed that individuals with WS have reduced colour discrimination relative to typically developing participants matched for chronological age; performance was commensurate with a typically developing group matched for non-verbal ability. In contrast, categorisation was typical in WS, although there was some evidence that sensitivity to the size of perceptual colour differences was reduced in this group. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Hippocampus, perirhinal cortex, and complex visual discriminations in rats and humans

    PubMed Central

    Hales, Jena B.; Broadbent, Nicola J.; Velu, Priya D.

    2015-01-01

    Structures in the medial temporal lobe, including the hippocampus and perirhinal cortex, are known to be essential for the formation of long-term memory. Recent animal and human studies have investigated whether perirhinal cortex might also be important for visual perception. In our study, using a simultaneous oddity discrimination task, rats with perirhinal lesions were impaired and did not exhibit the normal preference for exploring the odd object. Notably, rats with hippocampal lesions exhibited the same impairment. Thus, the deficit is unlikely to illuminate functions attributed specifically to perirhinal cortex. Both lesion groups were able to acquire visual discriminations involving the same objects used in the oddity task. Patients with hippocampal damage or larger medial temporal lobe lesions were intact in a similar oddity task that allowed participants to explore objects quickly using eye movements. We suggest that humans were able to rely on an intact working memory capacity to perform this task, whereas rats (who moved slowly among the objects) needed to rely on long-term memory. PMID:25593294

  18. Perceptual Learning via Modification of Cortical Top-Down Signals

    PubMed Central

    Schäfer, Roland; Vasilaki, Eleni; Senn, Walter

    2007-01-01

    The primary visual cortex (V1) is pre-wired to facilitate the extraction of behaviorally important visual features. Collinear edge detectors in V1, for instance, mutually enhance each other to improve the perception of lines against a noisy background. The same pre-wiring that facilitates line extraction, however, is detrimental when subjects have to discriminate the brightness of different line segments. How is it possible to improve in one task by unsupervised practicing, without getting worse in the other task? The classical view of perceptual learning is that practicing modulates the feedforward input stream through synaptic modifications onto or within V1. However, any rewiring of V1 would deteriorate other perceptual abilities different from the trained one. We propose a general neuronal model showing that perceptual learning can modulate top-down input to V1 in a task-specific way while feedforward and lateral pathways remain intact. Consistent with biological data, the model explains how context-dependent brightness discrimination is improved by a top-down recruitment of recurrent inhibition and a top-down induced increase of the neuronal gain within V1. Both the top-down modulation of inhibition and of neuronal gain are suggested to be universal features of cortical microcircuits which enable perceptual learning. PMID:17715996

  19. Meta-analytic review of the development of face discrimination in infancy: Face race, face gender, infant age, and methodology moderate face discrimination.

    PubMed

    Sugden, Nicole A; Marquis, Alexandra R

    2017-11-01

    Infants show facility for discriminating between individual faces within hours of birth. Over the first year of life, infants' face discrimination shows continued improvement with familiar face types, such as own-race faces, but not with unfamiliar face types, like other-race faces. The goal of this meta-analytic review is to provide an effect size for infants' face discrimination ability overall, with own-race faces, and with other-race faces within the first year of life, how this differs with age, and how it is influenced by task methodology. Inclusion criteria were (a) infant participants aged 0 to 12 months, (b) completing a human own- or other-race face discrimination task, (c) with discrimination being determined by infant looking. Our analysis included 30 works (165 samples, 1,926 participants participated in 2,623 tasks). The effect size for infants' face discrimination was small, 6.53% greater than chance (i.e., equal looking to the novel and familiar). There was a significant difference in discrimination by race, overall (own-race, 8.18%; other-race, 3.18%) and between ages (own-race: 0- to 4.5-month-olds, 7.32%; 5- to 7.5-month-olds, 9.17%; and 8- to 12-month-olds, 7.68%; other-race: 0- to 4.5-month-olds, 6.12%; 5- to 7.5-month-olds, 3.70%; and 8- to 12-month-olds, 2.79%). Multilevel linear (mixed-effects) models were used to predict face discrimination; infants' capacity to discriminate faces is sensitive to face characteristics including race, gender, and emotion as well as the methods used, including task timing, coding method, and visual angle. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Multisensory Integration and Internal Models for Sensing Gravity Effects in Primates

    PubMed Central

    Lacquaniti, Francesco; La Scaleia, Barbara; Maffei, Vincenzo

    2014-01-01

    Gravity is crucial for spatial perception, postural equilibrium, and movement generation. The vestibular apparatus is the main sensory system involved in monitoring gravity. Hair cells in the vestibular maculae respond to gravitoinertial forces, but they cannot distinguish between linear accelerations and changes of head orientation relative to gravity. The brain deals with this sensory ambiguity (which can cause some lethal airplane accidents) by combining several cues with the otolith signals: angular velocity signals provided by the semicircular canals, proprioceptive signals from muscles and tendons, visceral signals related to gravity, and visual signals. In particular, vision provides both static and dynamic signals about body orientation relative to the vertical, but it poorly discriminates arbitrary accelerations of moving objects. However, we are able to visually detect the specific acceleration of gravity since early infancy. This ability depends on the fact that gravity effects are stored in brain regions which integrate visual, vestibular, and neck proprioceptive signals and combine this information with an internal model of gravity effects. PMID:25061610

  1. Multisensory integration and internal models for sensing gravity effects in primates.

    PubMed

    Lacquaniti, Francesco; Bosco, Gianfranco; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka

    2014-01-01

    Gravity is crucial for spatial perception, postural equilibrium, and movement generation. The vestibular apparatus is the main sensory system involved in monitoring gravity. Hair cells in the vestibular maculae respond to gravitoinertial forces, but they cannot distinguish between linear accelerations and changes of head orientation relative to gravity. The brain deals with this sensory ambiguity (which can cause some lethal airplane accidents) by combining several cues with the otolith signals: angular velocity signals provided by the semicircular canals, proprioceptive signals from muscles and tendons, visceral signals related to gravity, and visual signals. In particular, vision provides both static and dynamic signals about body orientation relative to the vertical, but it poorly discriminates arbitrary accelerations of moving objects. However, we are able to visually detect the specific acceleration of gravity since early infancy. This ability depends on the fact that gravity effects are stored in brain regions which integrate visual, vestibular, and neck proprioceptive signals and combine this information with an internal model of gravity effects.

  2. Visual feature discrimination versus compression ratio for polygonal shape descriptors

    NASA Astrophysics Data System (ADS)

    Heuer, Joerg; Sanahuja, Francesc; Kaup, Andre

    2000-10-01

    In the last decade, several methods for low-level indexing of visual features have appeared. Most often these were evaluated with respect to their discrimination power using measures like precision and recall; accordingly, the targeted application was indexing of visual data within databases. During the standardization process of MPEG-7, the view on indexing of visual data changed, taking communication aspects into account as well, where coding efficiency is important. Even if the descriptors used for indexing are small compared to the size of images, several descriptors can be linked to one image, characterizing different features and regions. Besides the importance of a small memory footprint for transmission and for storage in a database, search and filtering can ultimately be sped up by reducing the dimensionality of the descriptor, provided the matching metric is adjusted accordingly. Based on a polygon shape descriptor proposed for MPEG-7, this paper compares the discrimination power of the descriptor against its memory consumption. Different methods based on quantization are presented and their effect on retrieval performance is measured. Finally, an optimized computation of the descriptor is presented.
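    The trade-off the paper measures can be illustrated with the simplest case: uniform scalar quantization of descriptor coefficients, where fewer bits per coefficient shrink the memory footprint at the cost of reconstruction error. This is only a generic sketch, not the MPEG-7 scheme evaluated in the paper:

```python
import numpy as np

def quantize_descriptor(vec, bits):
    """Uniform scalar quantization of descriptor coefficients to 2**bits
    reconstruction levels spanning the vector's own value range."""
    levels = 2 ** bits
    lo, hi = float(vec.min()), float(vec.max())
    if hi == lo:                                   # degenerate flat vector
        return vec.astype(float).copy()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((vec - lo) / step) * step

v = np.array([0.00, 0.30, 0.45, 0.80, 1.00])       # toy descriptor coefficients
v1 = quantize_descriptor(v, 1)                     # 2 levels: tiny footprint, coarse
v3 = quantize_descriptor(v, 3)                     # 8 levels: larger footprint, finer
err1 = float(np.abs(v - v1).max())
err3 = float(np.abs(v - v3).max())
```

    Sweeping the bit budget and plotting retrieval precision/recall against descriptor size yields exactly the kind of discrimination-versus-compression curve the paper reports.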

  3. Concurrent and discriminant validity of the Star Excursion Balance Test for military personnel with lateral ankle sprain.

    PubMed

    Bastien, Maude; Moffet, Hélène; Bouyer, Laurent; Perron, Marc; Hébert, Luc J; Leblond, Jean

    2014-02-01

    The Star Excursion Balance Test (SEBT) has frequently been used to measure motor control and residual functional deficits at different stages of recovery from lateral ankle sprain (LAS) in various populations. However, the validity of the measure used to characterize performance, the maximal reach distance (MRD) measured by visual estimation, is still unknown. Our objectives were to evaluate the concurrent validity of the visually estimated MRD in the SEBT against the MRD measured with a 3D motion-capture system, and to evaluate and compare the discriminant validity of 2 MRD-normalization methods (by height or by lower-limb length) in participants with or without LAS (n = 10 per group). There is high concurrent validity and a good degree of accuracy between the visual estimation and the gold-standard MRD measurement for both groups and under all conditions. The Cohen d ratios between groups and the MANOVA products were higher when computed from MRD data normalized by height. The results support the concurrent validity of visual estimation of the MRD and the use of the SEBT to evaluate motor control. Moreover, normalizing MRD data by height appears to increase the discriminant validity of this test.

  4. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    PubMed

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.

  5. Basic quantitative assessment of visual performance in patients with very low vision.

    PubMed

    Bach, Michael; Wilke, Michaela; Wilhelm, Barbara; Zrenner, Eberhart; Wilke, Robert

    2010-02-01

    A variety of approaches to developing visual prostheses are being pursued: subretinal, epiretinal, via the optic nerve, or via the visual cortex. This report presents a method of comparing their efficacy at genuinely improving visual function, starting at no light perception (NLP). A test battery (a computer program, Basic Assessment of Light and Motion [BaLM]) was developed in four basic visual dimensions: (1) light perception (light/no light), with an unstructured large-field stimulus; (2) temporal resolution, with single versus double flash discrimination; (3) localization of light, where a wedge extends from the center into four possible directions; and (4) motion, with a coarse pattern moving in one of four directions. Two- or four-alternative, forced-choice paradigms were used. The participants' responses were self-paced and delivered with a keypad. The feasibility of the BaLM was tested in 73 eyes of 51 patients with low vision. The light and time test modules discriminated between NLP and light perception (LP). The localization and motion modules showed no significant response for NLP but discriminated between LP and hand movement (HM). All four modules reached their ceilings in the acuity categories higher than HM. BaLM results systematically differed between the very-low-acuity categories NLP, LP, and HM. Light and time yielded similar results, as did localization and motion; still, for assessing the visual prostheses with differing temporal characteristics, they are not redundant. The results suggest that this simple test battery provides a quantitative assessment of visual function in the very-low-vision range from NLP to HM.

  6. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning

    PubMed Central

    Katyal, Sucharit; Engel, Stephen A.; Oxenham, Andrew J.

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects. PMID:28107359

  7. Effects of chronic iTBS-rTMS and enriched environment on visual cortex early critical period and visual pattern discrimination in dark-reared rats.

    PubMed

    Castillo-Padilla, Diana V; Funke, Klaus

    2016-01-01

    Early cortical critical period resembles a state of enhanced neuronal plasticity enabling the establishment of specific neuronal connections during first sensory experience. Visual performance with regard to pattern discrimination is impaired if the cortex is deprived from visual input during the critical period. We wondered how unspecific activation of the visual cortex before closure of the critical period using repetitive transcranial magnetic stimulation (rTMS) could affect the critical period and the visual performance of the experimental animals. Would it cause premature closure of the plastic state and thus worsen experience-dependent visual performance, or would it be able to preserve plasticity? Effects of intermittent theta-burst stimulation (iTBS) were compared with those of an enriched environment (EE) during dark-rearing (DR) from birth. Rats dark-reared in a standard cage showed poor improvement in a visual pattern discrimination task, while rats housed in EE or treated with iTBS showed a performance indistinguishable from rats reared in normal light/dark cycle. The behavioral effects were accompanied by correlated changes in the expression of brain-derived neurotrophic factor (BDNF) and atypical PKC (PKCζ/PKMζ), two factors controlling stabilization of synaptic potentiation. It appears that not only nonvisual sensory activity and exercise but also cortical activation induced by rTMS has the potential to alleviate the effects of DR on cortical development, most likely due to stimulation of BDNF synthesis and release. As we showed previously, iTBS reduced the expression of parvalbumin in inhibitory cortical interneurons, indicating that modulation of the activity of fast-spiking interneurons contributes to the observed effects of iTBS. © 2015 Wiley Periodicals, Inc.

  8. A parametric texture model based on deep convolutional features closely matches texture appearance for humans.

    PubMed

    Wallis, Thomas S A; Funke, Christina M; Ecker, Alexander S; Gatys, Leon A; Wichmann, Felix A; Bethge, Matthias

    2017-10-01

    Our visual environment is full of texture ("stuff" like cloth, bark, or gravel, as distinct from "things" like dresses, trees, or paths), and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (convolutional neural network [CNN] model) that uses the features encoded by a deep CNN (VGG-19) with two other models: the venerable Portilla and Simoncelli model and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery ("parafoveal") and when observers were able to make eye movements to all three patches ("inspection"). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.

  9. The role of Broca's area in speech perception: evidence from aphasia revisited.

    PubMed

    Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele

    2011-12-01

    Motor theories of speech perception have been revitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. 2011 Elsevier Inc. All rights reserved.
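
    The d' figure quoted above comes from standard signal detection theory. As a rough, hypothetical illustration (not the authors' analysis code, and ignoring the differencing-model corrections sometimes applied to same-different designs), sensitivity can be computed from hit and false-alarm rates as z(hit) - z(false alarm):

```python
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity index: z(hit) - z(false alarm)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g. 90% hits on "different" pairs, 10% false alarms on "same" pairs
print(round(d_prime(0.90, 0.10), 3))  # -> 2.563
```

    A d' of 0 corresponds to chance performance; rates of 0 or 1 are typically nudged inward (e.g. by 1/(2N)) before applying the inverse normal.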

  10. Empirical mode decomposition processing to improve multifocal-visual-evoked-potential signal analysis in multiple sclerosis

    PubMed Central

    2018-01-01

    Objective: To study the performance of multifocal-visual-evoked-potential (mfVEP) signals filtered using empirical mode decomposition (EMD) in discriminating, based on amplitude, between control and multiple sclerosis (MS) patient groups, and to reduce variability in interocular latency in control subjects. Methods: MfVEP signals were obtained from controls, clinically definitive MS and MS-risk progression patients (radiologically isolated syndrome (RIS) and clinically isolated syndrome (CIS)). The conventional method of processing mfVEPs consists of using a 1–35 Hz bandpass frequency filter (XDFT). The EMD algorithm was used to decompose the XDFT signals into several intrinsic mode functions (IMFs). This signal processing was assessed by computing the amplitudes and latencies of the XDFT and IMF signals (XEMD). The amplitudes from the full visual field and from ring 5 (9.8–15° eccentricity) were studied. The discrimination index was calculated between controls and patients. Interocular latency values were computed from the XDFT and XEMD signals in a control database to study variability. Results: Using the amplitude of the mfVEP signals filtered with EMD (XEMD) obtains higher discrimination index values than the conventional method when control, MS-risk progression (RIS and CIS) and MS subjects are studied. The lowest variability in interocular latency computations from the control database was obtained by comparing the XEMD signals with the XDFT signals. Even better results (amplitude discrimination and latency variability) were obtained in ring 5 (9.8–15° eccentricity of the visual field). Conclusions: Filtering mfVEP signals using the EMD algorithm will result in better identification of subjects at risk of developing MS and better accuracy in latency studies. This could be applied to assess visual cortex activity in MS diagnosis and evolution studies. PMID:29677200
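
    The core of EMD is a sifting loop that peels off intrinsic mode functions (IMFs) one at a time by subtracting the mean of the upper and lower extrema envelopes. The sketch below is a hedged, simplified single-IMF sift on a toy signal (function names, the toy trace, and the fixed iteration count are illustrative assumptions, not taken from the paper, which would have used a full EMD implementation):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_first_imf(x, n_sift=10):
    """Extract the first IMF: repeatedly subtract the mean of the
    cubic-spline envelopes through local maxima and minima, which
    isolates the fastest oscillatory mode in the signal."""
    t = np.arange(len(x))
    h = x.astype(float).copy()
    for _ in range(n_sift):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:
            break  # too few extrema to build envelopes
        upper = CubicSpline(maxima, h[maxima])(t)
        lower = CubicSpline(minima, h[minima])(t)
        h -= (upper + lower) / 2.0
    return h

# toy mfVEP-like trace: a fast component riding on a slow drift
t = np.linspace(0, 1, 1000, endpoint=False)
fast = np.sin(2 * np.pi * 25 * t)
slow = np.sin(2 * np.pi * 2 * t)
imf1 = sift_first_imf(fast + slow)  # should approximate `fast`
```

    Subtracting each extracted IMF and re-sifting the residual yields the remaining modes; the paper's XEMD signals would then be reconstructed from a selected subset of IMFs rather than from a fixed 1–35 Hz band.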

  11. Executive function deficits in team sport athletes with a history of concussion revealed by a visual-auditory dual task paradigm.

    PubMed

    Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa

    2017-02-01

    The purpose of this study was to examine executive functions in team sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes, including working memory, attention, and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking; however, visual-auditory tasks have rarely been examined. Twenty-nine intercollegiate varsity ice hockey athletes (age = 19.13, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description and self-report questionnaires were used to assign participants to each group (concussion, no-concussion). Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured: Corsi block memory span and auditory tone discrimination accuracy. No differences were shown when each task was performed alone; however, athletes with a history of concussion performed significantly worse on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating participants with and without a history of concussion.

  12. A perceptual learning deficit in Chinese developmental dyslexia as revealed by visual texture discrimination training.

    PubMed

    Wang, Zhengke; Cheng-Lai, Alice; Song, Yan; Cutting, Laurie; Jiang, Yuzheng; Lin, Ou; Meng, Xiangzhi; Zhou, Xiaolin

    2014-08-01

    Learning to read involves discriminating between different written forms and establishing connections with phonology and semantics. This process may be partially built upon visual perceptual learning, during which the ability to process the attributes of visual stimuli progressively improves with practice. The present study investigated to what extent Chinese children with developmental dyslexia have deficits in perceptual learning by using a texture discrimination task, in which participants were asked to discriminate the orientation of target bars. Experiment 1 demonstrated that, when all of the participants started with the same initial stimulus-to-mask onset asynchrony (SOA) of 300 ms, the threshold SOA, adjusted according to response accuracy to reach 80% correct, did not decrease over 5 days of training for children with dyslexia, whereas it steadily decreased over the training for the control group. Experiment 2 used an adaptive procedure to determine the threshold SOA for each participant during training. Results showed that both the dyslexia group and the control group attained perceptual learning over the sessions in 5 days, although the threshold SOAs were significantly higher for the dyslexia group than for the control group; moreover, across individual participants, the threshold SOA correlated negatively with performance in Chinese character recognition. These findings suggest that deficits in visual perceptual processing and learning might, in part, underpin difficulty in reading Chinese. Copyright © 2014 John Wiley & Sons, Ltd.
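
    The adaptive procedure of Experiment 2 is not specified in this abstract; a common choice for targeting roughly 80% accuracy is a 3-down-1-up staircase, which converges near 79% correct. A minimal, hypothetical sketch (class name, step size, and starting level are my own, not from the study):

```python
class Staircase3Down1Up:
    """3-down-1-up staircase: the tracked level (e.g. SOA in ms) drops
    after three consecutive correct responses and rises after any error,
    converging near the 79%-correct point of the psychometric function."""

    def __init__(self, start, step, floor=0):
        self.level = start
        self.step = step
        self.floor = floor
        self.n_correct = 0

    def update(self, correct):
        if correct:
            self.n_correct += 1
            if self.n_correct == 3:
                self.level = max(self.floor, self.level - self.step)
                self.n_correct = 0
        else:
            self.n_correct = 0
            self.level += self.step
        return self.level

sc = Staircase3Down1Up(start=300, step=20)
for resp in [True, True, True, True, False]:
    sc.update(resp)
print(sc.level)  # prints 300
```

    In practice the threshold estimate is taken as the mean level over the last several reversals rather than the final level.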

  13. The loss of short-term visual representations over time: decay or temporal distinctiveness?

    PubMed

    Mercer, Tom

    2014-12-01

    There has been much recent interest in the loss of visual short-term memories over the passage of time. According to decay theory, visual representations are gradually forgotten as time passes, reflecting a slow and steady distortion of the memory trace. However, this is controversial and decay effects can be explained in other ways. The present experiment aimed to reexamine the maintenance and loss of visual information over the short term. Decay and temporal distinctiveness models were tested using a delayed discrimination task, in which participants compared complex and novel objects over unfilled retention intervals of variable length. Experiment 1 found no significant change in the accuracy of visual memory from 2 to 6 s, but the gap separating trials reliably influenced task performance. Experiment 2 found evidence for information loss at a 10-s retention interval, but temporally separating trials restored the fidelity of visual memory, possibly because temporally isolated representations are distinct from older memory traces. In conclusion, visual representations lose accuracy at some point after 6 s, but only within temporally crowded contexts. These findings highlight the importance of temporal distinctiveness within visual short-term memory. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  14. A possible structural correlate of learning performance on a colour discrimination task in the brain of the bumblebee

    PubMed Central

    Li, Li; MaBouDi, HaDi; Egertová, Michaela; Elphick, Maurice R.

    2017-01-01

    Synaptic plasticity is considered to be a basis for learning and memory. However, the relationship between synaptic arrangements and individual differences in learning and memory is poorly understood. Here, we explored how the density of microglomeruli (synaptic complexes) within specific regions of the bumblebee (Bombus terrestris) brain relates to both visual learning and inter-individual differences in learning and memory performance on a visual discrimination task. Using whole-brain immunolabelling, we measured the density of microglomeruli in the collar region (visual association areas) of the mushroom bodies of the bumblebee brain. We found that bumblebees which made fewer errors during training in a visual discrimination task had higher microglomerular density. Similarly, bumblebees that had better retention of the learned colour-reward associations two days after training had higher microglomerular density. Further experiments indicated experience-dependent changes in neural circuitry: learning a colour-reward contingency with 10 colours (but not with two) results in changes to microglomerular density in the collar region of the mushroom bodies, and mere exposure to many different colours may do so as well. These results reveal the varying influences that visual experience, visual learning and foraging activity have on neural structure. Although our study does not provide a causal link between microglomerular density and performance, the observed positive correlations provide new insights for future studies into how neural structure may relate to inter-individual differences in learning and memory. PMID:28978727

  15. A possible structural correlate of learning performance on a colour discrimination task in the brain of the bumblebee.

    PubMed

    Li, Li; MaBouDi, HaDi; Egertová, Michaela; Elphick, Maurice R; Chittka, Lars; Perry, Clint J

    2017-10-11

    Synaptic plasticity is considered to be a basis for learning and memory. However, the relationship between synaptic arrangements and individual differences in learning and memory is poorly understood. Here, we explored how the density of microglomeruli (synaptic complexes) within specific regions of the bumblebee (Bombus terrestris) brain relates to both visual learning and inter-individual differences in learning and memory performance on a visual discrimination task. Using whole-brain immunolabelling, we measured the density of microglomeruli in the collar region (visual association areas) of the mushroom bodies of the bumblebee brain. We found that bumblebees which made fewer errors during training in a visual discrimination task had higher microglomerular density. Similarly, bumblebees that had better retention of the learned colour-reward associations two days after training had higher microglomerular density. Further experiments indicated experience-dependent changes in neural circuitry: learning a colour-reward contingency with 10 colours (but not with two) results in changes to microglomerular density in the collar region of the mushroom bodies, and mere exposure to many different colours may do so as well. These results reveal the varying influences that visual experience, visual learning and foraging activity have on neural structure. Although our study does not provide a causal link between microglomerular density and performance, the observed positive correlations provide new insights for future studies into how neural structure may relate to inter-individual differences in learning and memory. © 2017 The Authors.

  16. Visual training improves perceptual grouping based on basic stimulus features.

    PubMed

    Kurylo, Daniel D; Waxman, Richard; Kidron, Rachel; Silverstein, Steven M

    2017-10-01

    Training on visual tasks improves performance on basic and higher order visual capacities. Such improvement has been linked to changes in connectivity among mediating neurons. We investigated whether training effects occur for perceptual grouping. It was hypothesized that repeated engagement of integration mechanisms would enhance grouping processes. Thirty-six participants underwent 15 sessions of training on a visual discrimination task that required perceptual grouping. Participants viewed 20 × 20 arrays of dots or Gabor patches and indicated whether the array appeared grouped as vertical or horizontal lines. Across trials stimuli became progressively disorganized, contingent upon successful discrimination. Four visual dimensions were examined, in which grouping was based on similarity in luminance, color, orientation, and motion. Psychophysical thresholds of grouping were assessed before and after training. Results indicate that performance in all four dimensions improved with training. Training on a control condition, which paralleled the discrimination task but without a grouping component, produced no improvement. In addition, training on only the luminance and orientation dimensions improved performance for those conditions as well as for grouping by color, on which training had not occurred. However, improvement from partial training did not generalize to motion. Results demonstrate that a training protocol emphasizing stimulus integration enhanced perceptual grouping. Results suggest that neural mechanisms mediating grouping by common luminance and/or orientation contribute to those mediating grouping by color but do not share resources for grouping by common motion. Results are consistent with theories of perceptual learning emphasizing plasticity in early visual processing regions.

  17. Experience, Context, and the Visual Perception of Human Movement

    ERIC Educational Resources Information Center

    Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie

    2004-01-01

    Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…

  18. Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination.

    PubMed

    Palanica, Adam; Itier, Roxane J

    2014-01-01

    Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in the fovea, irrespective of head orientation; however, by ±3° eccentricity, head orientation began to bias gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity.

  19. Saliency affects feedforward more than feedback processing in early visual cortex.

    PubMed

    Emmanouil, Tatiana Aloi; Avigan, Philip; Persuh, Marjan; Ro, Tony

    2013-07-01

    Early visual cortex activity is influenced by both bottom-up and top-down factors. To investigate the influences of bottom-up (saliency) and top-down (task) factors on different stages of visual processing, we used transcranial magnetic stimulation (TMS) of areas V1/V2 to induce visual suppression at varying temporal intervals. Subjects were asked to detect and discriminate the color or the orientation of briefly presented small lines whose color saliency varied according to their color contrast with the surround. Regardless of task, color saliency modulated the magnitude of TMS-induced visual suppression, especially at earlier temporal processing intervals that reflect the feedforward stage of visual processing in V1/V2. In a second experiment we found that our color saliency effects were also influenced by an inherent advantage of the color red relative to other hues and that color discrimination difficulty did not affect visual suppression. These results support the notion that early visual processing is stimulus driven and that feedforward and feedback processing encode different types of information about visual scenes. They further suggest that certain hues can be prioritized over others within our visual systems by being more robustly represented during early temporal processing intervals. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. The role of visual perception measures used in sports vision programmes in predicting actual game performance in Division I collegiate hockey players.

    PubMed

    Poltavski, Dmitri; Biberdorf, David

    2015-01-01

    In the growing field of sports vision, little is still known about unique attributes of visual processing in ice hockey and what role visual processing plays in the overall athlete's performance. In the present study we evaluated whether visual, perceptual and cognitive/motor variables collected using the Nike SPARQ Sensory Training Station have significant relevance to the real game statistics of 38 Division I collegiate male and female hockey players. The results demonstrated that 69% of variance in the goals made by forwards in 2011-2013 could be predicted by their faster reaction time to a visual stimulus, better visual memory, better visual discrimination and a faster ability to shift focus between near and far objects. Approximately 33% of variance in game points was significantly related to better discrimination among competing visual stimuli. In addition, reaction time to a visual stimulus as well as stereoptic quickness significantly accounted for 24% of variance in the mean duration of the player's penalty time. This is one of the first studies to show that some of the visual skills that state-of-the-art generalised sports vision programmes are purported to target may indeed be important for hockey players' actual performance on the ice.

  1. Attentional Capture of Objects Referred to by Spoken Language

    ERIC Educational Resources Information Center

    Salverda, Anne Pier; Altmann, Gerry T. M.

    2011-01-01

    Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…

  2. Face-gender discrimination is possible in the near-absence of attention.

    PubMed

    Reddy, Leila; Wilken, Patrick; Koch, Christof

    2004-03-02

    The attentional cost associated with the visual discrimination of the gender of a face was investigated. Participants performed a face-gender discrimination task either alone (single-task) or concurrently (dual-task) with a known attentionally demanding task (5-letter T/L discrimination). Overall performance on face-gender discrimination suffered remarkably little under the dual-task condition compared to the single-task condition. Similar results were obtained in experiments that controlled for potential training effects or the use of low-level cues in this discrimination task. Our results provide further evidence against the notion that only low-level representations can be accessed outside the focus of attention.

  3. High resolution satellite image indexing and retrieval using SURF features and bag of visual words

    NASA Astrophysics Data System (ADS)

    Bouteldja, Samia; Kourgli, Assia

    2017-03-01

    In this paper, we evaluate the performance of the SURF descriptor for high resolution satellite imagery (HRSI) retrieval through a BoVW model on a land-use/land-cover (LULC) dataset. Local feature approaches such as SIFT and SURF descriptors can deal with a large variation of scale, rotation and illumination of the images, therefore providing better discriminative power and retrieval efficiency than global features, especially for HRSI, which contain a great range of objects and spatial patterns. Moreover, we combine SURF and color features to improve the retrieval accuracy, and we propose to learn a category-specific dictionary for each image category, which yields a more discriminative image representation and boosts image retrieval performance.
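
    The BoVW quantization step itself is straightforward to sketch. In this hedged illustration, random vectors stand in for 64-dimensional SURF descriptors (a real pipeline would extract them with a SURF detector, e.g. from OpenCV's nonfree module); a visual vocabulary is learned with k-means and each image is represented as a normalized histogram of visual-word counts:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# stand-ins for 64-d SURF descriptors pooled from a training set
train_desc = rng.normal(size=(500, 64))

# the "visual vocabulary": k-means centroids over local descriptors
k = 16
vocab = KMeans(n_clusters=k, n_init=10, random_state=0).fit(train_desc)

def bovw_histogram(descriptors):
    """Quantize each local descriptor to its nearest visual word and
    return the normalized word-count histogram for the image."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

image_desc = rng.normal(size=(120, 64))  # descriptors from one image
h = bovw_histogram(image_desc)
```

    Retrieval then reduces to comparing these fixed-length histograms (e.g. with chi-squared or cosine distance); the paper's category-specific dictionaries would replace the single `vocab` with one k-means model per LULC class.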

  4. A Discussion of Assessment Needs in Manual Communication for Pre-College Students.

    ERIC Educational Resources Information Center

    Cokely, Dennis R.

    The paper reviews issues in evaluating the manual communications skills of pre-college hearing impaired students, including testing of visual discrimination and visual memory, simultaneous communication, and attention span. (CL)

  5. Pharmacological evidence that both cognitive memory and habit formation contribute to within-session learning of concurrent visual discriminations

    PubMed Central

    Turchi, Janita; Devan, Bryan; Yin, Pingbo; Sigrist, Emmalynn; Mishkin, Mortimer

    2010-01-01

    The monkey's ability to learn a set of visual discriminations presented concurrently just once a day on successive days (24-hr ITI task) is based on habit formation, which is known to rely on a visuo-striatal circuit and to be independent of visuo-rhinal circuits that support one-trial memory. Consistent with this dissociation, we recently reported that performance on the 24-hr ITI task is impaired by a striatal-function blocking agent, the dopaminergic antagonist haloperidol, and not by a rhinal-function blocking agent, the muscarinic cholinergic antagonist scopolamine. In the present study, monkeys were trained on a short-ITI form of concurrent visual discrimination learning, one in which a set of stimulus pairs is repeated not only across daily sessions but also several times within each session (in this case, at about 4-min ITIs). Asymptotic discrimination learning rates in the non-drug condition were reduced by half, from ~11 trials/pair on the 24-hr ITI task to ~5 trials/pair on the 4-min ITI task, and this faster learning was impaired by systemic injections of either haloperidol or scopolamine. The results suggest that in the version of concurrent discrimination learning used here, the short ITIs within a session recruit both visuo-rhinal and visuo-striatal circuits, and that the final performance level is driven by both cognitive memory and habit formation working in concert. PMID:20144631

  6. Neural networks for Braille reading by the blind.

    PubMed

    Sadato, N; Pascual-Leone, A; Grafman, J; Deiber, M P; Ibañez, V; Hallett, M

    1998-07-01

    To explore the neural networks used for Braille reading, we measured regional cerebral blood flow with PET during tactile tasks performed both by Braille readers blinded early in life and by sighted subjects. Eight proficient Braille readers were studied during Braille reading with both right and left index fingers. Eight-character, non-contracted Braille-letter strings were used, and subjects were asked to discriminate between words and non-words. To directly compare brain activity in the blind and the sighted, non-Braille tactile tasks were performed by six additional blind subjects and 10 sighted control subjects using the right index finger. The tasks included a non-discrimination task and three discrimination tasks (angle, width and character). Irrespective of reading finger (right or left), Braille reading by the blind activated the inferior parietal lobule, primary visual cortex, superior occipital gyri, fusiform gyri, ventral premotor area, superior parietal lobule, cerebellum and primary sensorimotor area bilaterally, as well as the right dorsal premotor cortex, right middle occipital gyrus and right prefrontal area. During non-Braille discrimination tasks in blind subjects, the ventral occipital regions, including the primary visual cortex and fusiform gyri bilaterally, were activated while the secondary somatosensory area was deactivated. The reverse pattern was found in sighted subjects, where the secondary somatosensory area was activated while the ventral occipital regions were suppressed. These findings suggest that the tactile processing pathways usually linked in the secondary somatosensory area are rerouted in blind subjects to the ventral occipital cortical regions originally reserved for visual shape discrimination.

  7. Pharmacological evidence that both cognitive memory and habit formation contribute to within-session learning of concurrent visual discriminations.

    PubMed

    Turchi, Janita; Devan, Bryan; Yin, Pingbo; Sigrist, Emmalynn; Mishkin, Mortimer

    2010-07-01

    The monkey's ability to learn a set of visual discriminations presented concurrently just once a day on successive days (24-h ITI task) is based on habit formation, which is known to rely on a visuo-striatal circuit and to be independent of visuo-rhinal circuits that support one-trial memory. Consistent with this dissociation, we recently reported that performance on the 24-h ITI task is impaired by a striatal-function blocking agent, the dopaminergic antagonist haloperidol, and not by a rhinal-function blocking agent, the muscarinic cholinergic antagonist scopolamine. In the present study, monkeys were trained on a short-ITI form of concurrent visual discrimination learning, one in which a set of stimulus pairs is repeated not only across daily sessions but also several times within each session (in this case, at about 4-min ITIs). Asymptotic discrimination learning rates in the non-drug condition were reduced by half, from approximately 11 trials/pair on the 24-h ITI task to approximately 5 trials/pair on the 4-min ITI task, and this faster learning was impaired by systemic injections of either haloperidol or scopolamine. The results suggest that in the version of concurrent discrimination learning used here, the short ITIs within a session recruit both visuo-rhinal and visuo-striatal circuits, and that the final performance level is driven by both cognitive memory and habit formation working in concert.

  8. A DCM study of spectral asymmetries in feedforward and feedback connections between visual areas V1 and V4 in the monkey.

    PubMed

    Bastos, A M; Litvak, V; Moran, R; Bosman, C A; Fries, P; Friston, K J

    2015-03-01

    This paper reports a dynamic causal modeling study of electrocorticographic (ECoG) data that addresses functional asymmetries between forward and backward connections in the visual cortical hierarchy. Specifically, we ask whether forward connections employ gamma-band frequencies, while backward connections preferentially use lower (beta-band) frequencies. We addressed this question by modeling empirical cross spectra using a neural mass model equipped with superficial and deep pyramidal cell populations, which model the sources of forward and backward connections, respectively. This enabled us to reconstruct the transfer functions and associated spectra of specific subpopulations within cortical sources. We first established that Bayesian model comparison was able to discriminate between forward and backward connections, defined in terms of their cells of origin. We then confirmed that model selection was able to identify extrastriate (V4) sources as being hierarchically higher than early visual (V1) sources. Finally, an examination of the auto spectra and transfer functions associated with superficial and deep pyramidal cells confirmed that forward connections employed predominantly higher (gamma) frequencies, while backward connections were mediated by lower (alpha/beta) frequencies. We discuss these findings in relation to current views about alpha, beta, and gamma oscillations and predictive coding in the brain. Copyright © 2015. Published by Elsevier Inc.

  9. Similarities in neural activations of face and Chinese character discrimination.

    PubMed

    Liu, Jiangang; Tian, Jie; Li, Jun; Gong, Qiyong; Lee, Kang

    2009-02-18

    This study compared Chinese participants' visual discrimination of Chinese faces with that of Chinese characters, which are highly similar to faces on a variety of dimensions. Both Chinese faces and characters activated the bilateral middle fusiform gyri, with highly correlated activation patterns. These findings suggest that although the expertise systems for faces and written symbols are known to be anatomically differentiated at the later stages of processing, to serve face-specific or written-symbol-specific processing purposes, they may share similar neural structures in the ventral occipitotemporal cortex at earlier stages of visual processing.

  10. Discrimination among Panax species using spectral fingerprinting

    USDA-ARS?s Scientific Manuscript database

    Spectral fingerprints of samples of three Panax species (P. quinquefolius L., P. ginseng, and P. notoginseng) were acquired using UV, NIR, and MS spectrometry. With principal components analysis (PCA), all three methods allowed visual discrimination between all three species. All three methods wer...
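    The PCA-based visual discrimination described above can be sketched in miniature. This is a hedged illustration with invented two-band "fingerprint" values, not the study's spectra or pipeline: samples are centred, the leading eigenvector of the 2x2 covariance matrix is computed in closed form, and each sample is projected onto that first principal component, along which the three species separate.

    ```python
    import math

    # Toy two-band "spectral fingerprints" per species (invented values).
    samples = {
        "P. quinquefolius": [(0.82, 0.31), (0.80, 0.33), (0.85, 0.30)],
        "P. ginseng":       [(0.55, 0.62), (0.53, 0.65), (0.57, 0.60)],
        "P. notoginseng":   [(0.30, 0.91), (0.28, 0.93), (0.33, 0.90)],
    }

    points = [p for pts in samples.values() for p in pts]
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n

    # 2x2 covariance matrix of the centred data.
    sxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in points) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)

    # Leading eigenvector of [[sxx, sxy], [sxy, syy]] (closed form for 2x2).
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    v = (math.cos(theta), math.sin(theta))

    def pc1_score(point):
        """Projection of a centred point onto the first principal component."""
        return (point[0] - mx) * v[0] + (point[1] - my) * v[1]

    for species, pts in samples.items():
        print(species, [round(pc1_score(p), 3) for p in pts])
    ```

    With real spectra the same projection is done on many bands at once, but the species still end up separated along the first few principal components, which is what makes the visual discrimination possible.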

  11. Exploring social inclusion strategies for public health research and practice: The use of participatory visual methods to counter stigmas surrounding street-based substance abuse in Colombia.

    PubMed

    Ritterbusch, Amy E

    2016-01-01

    This paper presents the participatory visual research design and findings from a qualitative assessment of the social impact of bazuco and inhalant/glue consumption among street youth in Bogotá, Colombia. The paper presents the visual methodologies our participatory action research (PAR) team employed in order to identify and overcome the stigmas and discrimination that street youth experience in society and within state-sponsored drug rehabilitation programmes. I call for critical reflection regarding the broad application of the terms 'participation' and 'participatory' in visual research and urge scholars and public health practitioners to consider the transformative potential of PAR for both the research and practice of global public health in general and rehabilitation programmes for street-based substance abuse in Colombia in particular. The paper concludes with recommendations as to how participatory visual methods can be used to promote social inclusion practices and to work against stigma and discrimination in health-related research and within health institutions.

  12. Time course influences transfer of visual perceptual learning across spatial location.

    PubMed

    Larcombe, S J; Kennard, C; Bridge, H

    2017-06-01

    Visual perceptual learning describes the improvement of visual perception with repeated practice. Previous research has established that the learning effects of perceptual training may be transferable to untrained stimulus attributes such as spatial location under certain circumstances. However, the mechanisms involved in transfer have not yet been fully elucidated. Here, we investigated the effect of altering training time course on the transferability of learning effects. Participants were trained on a motion direction discrimination task or a sinusoidal grating orientation discrimination task in a single visual hemifield. The 4000 training trials were either condensed into one day, or spread evenly across five training days. When participants were trained over a five-day period, there was transfer of learning to both the untrained visual hemifield and the untrained task. In contrast, when the same amount of training was condensed into a single day, participants did not show any transfer of learning. Thus, learning time course may influence the transferability of perceptual learning effects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Specific problems in visual cognition of dyslexic readers: Face discrimination deficits predict dyslexia over and above discrimination of scrambled faces and novel objects.

    PubMed

    Sigurdardottir, Heida Maria; Fridriksdottir, Liv Elisabet; Gudjonsdottir, Sigridur; Kristjánsson, Árni

    2018-06-01

    Evidence of interdependencies between face and word processing mechanisms suggests possible links between reading problems and abnormal face processing. In two experiments we assessed such high-level visual deficits in people with a history of reading problems. Experiment 1 showed that people who were worse at face matching had greater reading problems. In Experiment 2, matched dyslexic and typical readers were tested, and difficulties with face matching were consistently found to predict dyslexia over and above both novel-object matching and matching of noise patterns that shared low-level visual properties with faces. Furthermore, ADHD measures could not account for the face matching problems. We speculate that reading difficulties in dyslexia are partially caused by specific deficits in high-level visual processing, in particular for visual object categories such as faces and words with which people have extensive experience. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. High-order statistics of Weber local descriptors for image representation.

    PubMed

    Han, Xian-Hua; Chen, Yen-Wei; Xu, Gang

    2015-06-01

    Highly discriminant visual features play a key role in many image classification applications. This study aims to extract highly discriminant features from images by exploring a robust local descriptor inspired by Weber's law. The investigated local descriptor is based on the fact that human perception of a pattern depends not only on the absolute intensity of the stimulus but also on its relative variance. Therefore, we first transform the original stimulus (the images in our study) into a differential excitation domain according to Weber's law, and then explore a local patch, called a micro-Texton, in the transformed domain as the Weber local descriptor (WLD). Furthermore, we propose to model the Weber local descriptors with a parametric probability process and to extract higher-order statistics of the model parameters for image representation. The proposed strategy adaptively characterizes the WLD space using a generative probability model and learns the parameters that best fit the training space, leading to a more discriminant representation of images. To validate the efficiency of the proposed strategy, we apply it to three image classification applications (texture, food image, and HEp-2 cell pattern recognition); the results show that it has advantages over state-of-the-art approaches.
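    The Weber-law differential excitation at the heart of the WLD can be sketched as follows. This is a minimal illustration of the standard WLD formulation (the arctangent of the summed differences of a pixel's neighbours from the centre, taken relative to the centre intensity); the 3x3 patches are invented:

    ```python
    import math

    def differential_excitation(patch):
        """patch: 3x3 list of intensity rows; returns excitation in radians.

        The relative stimulus change is the sum of (neighbour - centre)
        over the 8 neighbours, divided by the centre intensity, then
        compressed with arctan as in the Weber local descriptor.
        """
        centre = patch[1][1]
        if centre == 0:
            return 0.0
        diff_sum = sum(
            patch[r][c] - centre
            for r in range(3) for c in range(3)
            if not (r == 1 and c == 1)
        )
        return math.atan(diff_sum / centre)

    flat_patch = [[100, 100, 100], [100, 100, 100], [100, 100, 100]]
    edge_patch = [[100, 100, 100], [100, 100, 200], [200, 200, 200]]

    print(differential_excitation(flat_patch))  # no relative change -> 0.0
    print(differential_excitation(edge_patch))
    ```

    A flat patch yields zero excitation regardless of its absolute brightness, which is exactly the Weber-law property the abstract describes: perception depends on relative, not absolute, intensity change.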

  15. Local connected fractal dimension analysis in gill of fish experimentally exposed to toxicants.

    PubMed

    Manera, Maurizio; Giari, Luisa; De Pasquale, Joseph A; Sayyaf Dezfuli, Bahram

    2016-06-01

    An operator-neutral method was implemented to objectively assess European seabass, Dicentrarchus labrax (Linnaeus, 1758), gill pathology after experimental exposure to cadmium (Cd) and terbuthylazine (TBA) for 24 and 48 h. An algorithm-derived local connected fractal dimension (LCFD) frequency measure was used in this comparative analysis. Canonical variates analysis (CVA) and linear discriminant analysis (LDA) were used to evaluate the discrimination power of the method among exposure classes (unexposed, Cd exposed, TBA exposed). Misclassification, sensitivity and specificity, with both original and cross-validated cases, were determined. LCFD frequencies enhanced the differences among classes; candidate frequencies were selected visually from scatter plots of their means, their respective variances, and the differences of the Cd- and TBA-exposed means from the unexposed mean. The selected frequencies were then screened by LDA with stepwise analysis and the Mahalanobis distance to detect the most discriminative frequencies among the ten originally selected. Discrimination resulted in 91.7% of cross-validated cases correctly classified (22 out of 24 total cases), with sensitivity and specificity, respectively, of 95.5% (1 false negative against 21 true positive cases) and 75% (1 false positive against 3 true negative cases). CVA with convex hull polygons ensured prompt, visually intuitive discrimination among exposure classes and graphically supported the false positive case. The combined use of semithin sections, which enhanced the visual evaluation of the overall lamellar structure; of LCFD analysis, which objectively detected local variation in complexity without possible observer bias; and of CVA/LDA could be an objective, sensitive and specific approach to studying fish gill lamellar pathology. Furthermore, this approach enabled discrimination with sufficient confidence between exposure classes or pathological states and avoided misdiagnosis. Copyright © 2016 Elsevier B.V. All rights reserved.
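    The sensitivity and specificity quoted above follow directly from raw confusion counts. A minimal sketch, with counts chosen to reproduce the reported rates rather than taken from the study's data:

    ```python
    def sensitivity(tp, fn):
        """True-positive rate: TP / (TP + FN)."""
        return tp / (tp + fn)

    def specificity(tn, fp):
        """True-negative rate: TN / (TN + FP)."""
        return tn / (tn + fp)

    # 1 false negative against 21 true positives, and 1 false positive
    # against 3 true negatives (illustrative counts matching the rates).
    print(round(sensitivity(tp=21, fn=1) * 100, 1))  # 95.5
    print(round(specificity(tn=3, fp=1) * 100, 1))   # 75.0
    ```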

  16. SEMI-SUPERVISED OBJECT RECOGNITION USING STRUCTURE KERNEL

    PubMed Central

    Wang, Botao; Xiong, Hongkai; Jiang, Xiaoqian; Ling, Fan

    2013-01-01

    Object recognition is a fundamental problem in computer vision. Part-based models offer a sparse, flexible representation of objects, but suffer from difficulties in training and often use standard kernels. In this paper, we propose a positive definite kernel called the “structure kernel”, which measures the similarity of two part-based represented objects. The structure kernel has three terms: 1) a global term that measures the global visual similarity of two objects; 2) a part term that measures the visual similarity of corresponding parts; 3) a spatial term that measures the spatial similarity of the geometric configuration of parts. The contribution of this paper is to generalize the discriminant capability of local kernels to complex part-based object models. Experimental results show that the proposed kernel exhibits higher accuracy than state-of-the-art approaches using standard kernels. PMID:23666108
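    The three-term structure described above can be sketched with a toy kernel. This is a hedged illustration, not the paper's implementation: each term is modelled here with a Gaussian (RBF) similarity over invented global, part, and layout feature vectors, and the sum of positive definite kernels remains positive definite.

    ```python
    import math

    def rbf(u, v, gamma=1.0):
        """Gaussian similarity between two equal-length feature vectors."""
        return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

    def structure_kernel(obj_a, obj_b):
        """Toy structure kernel: global + part + spatial similarity terms."""
        k_global = rbf(obj_a["global"], obj_b["global"])
        k_parts = sum(
            rbf(pa, pb) for pa, pb in zip(obj_a["parts"], obj_b["parts"])
        ) / len(obj_a["parts"])
        k_spatial = rbf(obj_a["layout"], obj_b["layout"])
        return k_global + k_parts + k_spatial

    # Two invented part-based object descriptions.
    bike_a = {"global": [0.9, 0.1], "parts": [[0.2, 0.3], [0.7, 0.8]],
              "layout": [0.0, 1.0]}
    bike_b = {"global": [0.8, 0.2], "parts": [[0.25, 0.3], [0.7, 0.75]],
              "layout": [0.1, 0.9]}
    print(round(structure_kernel(bike_a, bike_b), 3))
    ```

    An object compared with itself scores the maximum of 3.0 (each term equals 1), and the score decreases as global appearance, part appearance, or part layout diverge.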

  17. Handwriting Error Patterns of Children with Mild Motor Difficulties.

    ERIC Educational Resources Information Center

    Malloy-Miller, Theresa; And Others

    1995-01-01

    A test of handwriting legibility and 6 perceptual-motor tests were completed by 66 children ages 7-12. Among handwriting error patterns, execution was associated with visual-motor skill and sensory discrimination, aiming with visual-motor and fine-motor skills. The visual-spatial factor had no significant association with perceptual-motor…

  18. An empirical investigation of the visual rightness theory of picture perception.

    PubMed

    Locher, Paul J

    2003-10-01

    This research subjected the visual rightness theory of picture perception to experimental scrutiny. It investigated the ability of adults untrained in the visual arts to discriminate reproductions of original abstract and representational paintings by renowned artists from two experimentally manipulated, less well-organized versions of each art stimulus. Perturbed stimuli contained either minor or major disruptions of the originals' principal structural networks. Participants discriminated originals from their highly altered, but not their slightly altered, versions significantly more often than expected by chance. Accuracy of detection was found to be a function of the style of painting and of a viewer's way of thinking about a work, as determined from their verbal reactions to it. Specifically, hit rates for originals were highest for abstract works when participants focused on their compositional style and form, and highest for representational works when their content and realism were the focus of attention. The findings support the view that visually right (i.e., "good") compositions have efficient structural organizations that are visually salient to viewers who lack formal training in the visual arts.

  19. Visual Equivalence and Amodal Completion in Cuttlefish

    PubMed Central

    Lin, I-Rong; Chiao, Chuan-Chin

    2017-01-01

    Modern cephalopods are notably the most intelligent invertebrates, and this is accompanied by keen vision. Despite extensive studies investigating the visual systems of cephalopods, little is known about their visual perception and object recognition. In the present study, we investigated the visual processing of the cuttlefish Sepia pharaonis, including visual equivalence and amodal completion. Cuttlefish were trained to discriminate images of shrimp and fish using an operant conditioning paradigm. After the cuttlefish reached the learning criteria, a series of discrimination tasks were conducted. In the visual equivalence experiment, several transformed versions of the training images were used, such as images reduced in size, images reduced in contrast, sketches of the images, the contours of the images, and silhouettes of the images. In the amodal completion experiment, partially occluded views of the original images were used. The results showed that cuttlefish treated the reduced-size images and sketches as visually equivalent to the training images. Cuttlefish were also capable of recognizing partially occluded versions of the training images. Furthermore, individual differences in performance suggest that some cuttlefish may be able to recognize objects even when visual information is partly removed. These findings support the hypothesis that the visual perception of cuttlefish involves both visual equivalence and amodal completion. The results from this research also provide insights into the visual processing mechanisms used by cephalopods. PMID:28220075

  20. Bag-of-visual-ngrams for histopathology image classification

    NASA Astrophysics Data System (ADS)

    López-Monroy, A. Pastor; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; Cruz-Roa, Angel; González, Fabio A.

    2013-11-01

    This paper describes an extension of the Bag-of-Visual-Words (BoVW) representation for image categorization (IC) of histopathology images. This representation is one of the most widely used approaches in several high-level computer vision tasks. However, the BoVW representation has an important limitation: it disregards spatial information among visual words. This information may be useful for capturing discriminative visual patterns in specific computer vision tasks. In order to overcome this problem we propose the use of visual n-grams. N-gram-based representations are very popular in the field of natural language processing (NLP), in particular within text mining and information retrieval. We propose building a codebook of n-grams and then representing images by histograms of visual n-grams. We evaluate our proposal on the challenging task of classifying histopathology images. The novelty of our proposal lies in the fact that we use n-grams as attributes for a classification model (together with visual words, i.e., 1-grams). This is common practice within NLP, although, to the best of our knowledge, this idea has not been explored yet within computer vision. We report experimental results on a database of histopathology images where our proposed method outperforms the traditional BoVW formulation.
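    The visual n-gram idea described above can be sketched in a few lines. This is a toy illustration, not the paper's pipeline: local descriptors (invented 1-D values here, in place of real SIFT-like vectors) are quantised to their nearest codebook word, the image becomes a word sequence, and adjacent word pairs (2-grams) are counted into a histogram that preserves local spatial order.

    ```python
    from collections import Counter

    # Invented 1-D codebook; real systems use k-means centroids of
    # high-dimensional local descriptors.
    codebook = {"w0": 0.1, "w1": 0.5, "w2": 0.9}

    def nearest_word(descriptor):
        """Quantise a descriptor to the closest codebook word."""
        return min(codebook, key=lambda w: abs(codebook[w] - descriptor))

    def visual_ngram_histogram(descriptors, n=2):
        """Map descriptors to words, then count consecutive n-grams."""
        words = [nearest_word(d) for d in descriptors]
        grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
        return Counter(grams)

    image_descriptors = [0.12, 0.48, 0.52, 0.88, 0.11]
    hist = visual_ngram_histogram(image_descriptors)
    print(hist)
    ```

    A plain BoVW histogram would treat the two occurrences of w1 independently; the bigram histogram additionally records that they occurred next to each other, which is the spatial information the abstract argues is discriminative.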

  1. Systematic distortions of perceptual stability investigated using immersive virtual reality

    PubMed Central

    Tcheang, Lili; Gilson, Stuart J.; Glennerster, Andrew

    2010-01-01

    Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers under-estimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an under-estimation of distance walked. We discuss implications for theories of a task-independent representation of visual space. PMID:15845248

  2. Psychophysical Evaluation of Achromatic and Chromatic Vision of Workers Chronically Exposed to Organic Solvents

    PubMed Central

    Lacerda, Eliza Maria da Costa Brito; Lima, Monica Gomes; Rodrigues, Anderson Raiol; Teixeira, Cláudio Eduardo Correa; de Lima, Lauro José Barata; Ventura, Dora Fix; Silveira, Luiz Carlos de Lima

    2012-01-01

    The purpose of this paper was to evaluate the achromatic and chromatic vision of workers chronically exposed to organic solvents through psychophysical methods. Thirty-one gas station workers (31.5 ± 8.4 years old) were evaluated. The psychophysical tests were achromatic tests (Snellen chart, spatial and temporal contrast sensitivity, and visual perimetry) and chromatic tests (Ishihara's test, color discrimination ellipses, and the Farnsworth-Munsell 100 hue test (FM100)). The spatial contrast sensitivity of exposed workers was lower than that of controls at spatial frequencies of 20 and 30 cpd, whilst temporal contrast sensitivity was preserved. Visual field losses were found at 10–30 degrees of eccentricity in the solvent-exposed workers. The exposed workers had higher FM100 error values and wider color discrimination ellipse areas compared to the controls. Workers occupationally exposed to organic solvents had abnormal visual functions, mainly color vision losses and visual field constriction. PMID:22220188

  3. Action Video Games Improve Direction Discrimination of Parafoveal Translational Global Motion but Not Reaction Times.

    PubMed

    Pavan, Andrea; Boyce, Matthew; Ghin, Filippo

    2016-10-01

    Playing action video games enhances visual motion perception. However, there is psychophysical evidence that action video games do not improve motion sensitivity for translational global moving patterns presented in the fovea. This study investigates global motion perception in action video game players and compares their performance to that of non-action video game players and non-video game players. Stimuli were random dot kinematograms presented in the parafovea. Observers discriminated the motion direction of a target random dot kinematogram presented in one of the four visual quadrants. Action video game players showed lower motion coherence thresholds than the other groups. However, when the task was performed at threshold, we did not find differences between groups in terms of distributions of reaction times. These results suggest that action video games improve visual motion sensitivity in the near periphery of the visual field, rather than response speed. © The Author(s) 2016.

  4. The effect of age upon the perception of 3-D shape from motion.

    PubMed

    Norman, J Farley; Cheeseman, Jacob R; Pyles, Jessica; Baxter, Michael W; Thomason, Kelsey E; Calloway, Autum B

    2013-12-18

    Two experiments evaluated the ability of 50 older, middle-aged, and younger adults to discriminate the 3-dimensional (3-D) shape of curved surfaces defined by optical motion. In Experiment 1, temporal correspondence was disrupted by limiting the lifetimes of the moving surface points. In order to discriminate 3-D surface shape reliably, the younger and middle-aged adults needed a surface point lifetime of approximately 4 views (in the apparent motion sequences). In contrast, the older adults needed a much longer surface point lifetime of approximately 9 views in order to reliably perform the same task. In Experiment 2, the negative effect of age upon 3-D shape discrimination from motion was replicated. In this experiment, however, the participants' abilities to discriminate grating orientation and speed were also assessed. Edden et al. (2009) have recently demonstrated that behavioral grating orientation discrimination correlates with GABA (gamma aminobutyric acid) concentration in human visual cortex. Our results demonstrate that the negative effect of age upon 3-D shape perception from motion is not caused by impairments in the ability to perceive motion per se, but does correlate significantly with grating orientation discrimination. This result suggests that the age-related decline in 3-D shape discrimination from motion is related to decline in GABA concentration in visual cortex. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. Effects of X-ray radiation on complex visual discrimination learning and social recognition memory in rats.

    PubMed

    Davis, Catherine M; Roma, Peter G; Armour, Elwood; Gooden, Virginia L; Brady, Joseph V; Weed, Michael R; Hienz, Robert D

    2014-01-01

    The present report describes an animal model for examining the effects of radiation on a range of neurocognitive functions in rodents that are similar to a number of basic human cognitive functions. Fourteen male Long-Evans rats were trained to perform an automated intra-dimensional set shifting task that consisted of their learning a basic discrimination between two stimulus shapes followed by more complex discrimination stages (e.g., a discrimination reversal, a compound discrimination, a compound reversal, a new shape discrimination, and an intra-dimensional stimulus discrimination reversal). One group of rats was exposed to head-only X-ray radiation (2.3 Gy at a dose rate of 1.9 Gy/min), while a second group received a sham-radiation exposure using the same anesthesia protocol. The irradiated group responded less, had elevated numbers of omitted trials, increased errors, and greater response latencies compared to the sham-irradiated control group. Additionally, social odor recognition memory was tested after radiation exposure by assessing the degree to which rats explored wooden beads impregnated with either their own odors or with the odors of novel, unfamiliar rats; however, no significant effects of radiation on social odor recognition memory were observed. These data suggest that rodent tasks assessing higher-level human cognitive domains are useful in examining the effects of radiation on the CNS, and may be applicable in approximating CNS risks from radiation exposure in clinical populations receiving whole brain irradiation.

  6. Effects of X-Ray Radiation on Complex Visual Discrimination Learning and Social Recognition Memory in Rats

    PubMed Central

    Davis, Catherine M.; Roma, Peter G.; Armour, Elwood; Gooden, Virginia L.; Brady, Joseph V.; Weed, Michael R.; Hienz, Robert D.

    2014-01-01

    The present report describes an animal model for examining the effects of radiation on a range of neurocognitive functions in rodents that are similar to a number of basic human cognitive functions. Fourteen male Long-Evans rats were trained to perform an automated intra-dimensional set shifting task that consisted of their learning a basic discrimination between two stimulus shapes followed by more complex discrimination stages (e.g., a discrimination reversal, a compound discrimination, a compound reversal, a new shape discrimination, and an intra-dimensional stimulus discrimination reversal). One group of rats was exposed to head-only X-ray radiation (2.3 Gy at a dose rate of 1.9 Gy/min), while a second group received a sham-radiation exposure using the same anesthesia protocol. The irradiated group responded less, had elevated numbers of omitted trials, increased errors, and greater response latencies compared to the sham-irradiated control group. Additionally, social odor recognition memory was tested after radiation exposure by assessing the degree to which rats explored wooden beads impregnated with either their own odors or with the odors of novel, unfamiliar rats; however, no significant effects of radiation on social odor recognition memory were observed. These data suggest that rodent tasks assessing higher-level human cognitive domains are useful in examining the effects of radiation on the CNS, and may be applicable in approximating CNS risks from radiation exposure in clinical populations receiving whole brain irradiation. PMID:25099152

  7. Neuron analysis of visual perception

    NASA Technical Reports Server (NTRS)

    Chow, K. L.

    1980-01-01

    The receptive fields of single cells in the visual systems of the cat and squirrel monkey were studied, investigating the vestibular input affecting the cells and the cells' responses during the visual discrimination learning process. Also studied were the receptive field characteristics of the rabbit visual system, its normal development, its abnormal development following visual deprivation, and the structural and functional re-organization of the visual system following neonatal and prenatal surgery. The results of each part of each investigation are detailed.

  8. Usefulness of 3-dimensional stereotactic surface projection FDG PET images for the diagnosis of dementia

    PubMed Central

    Kim, Jahae; Cho, Sang-Geon; Song, Minchul; Kang, Sae-Ryung; Kwon, Seong Young; Choi, Kang-Ho; Choi, Seong-Min; Kim, Byeong-Chae; Song, Ho-Chun

    2016-01-01

    To compare the diagnostic performance and confidence of standard visual reading alone and combined with 3-dimensional stereotactic surface projection (3D-SSP) results in discriminating between Alzheimer disease (AD)/mild cognitive impairment (MCI), dementia with Lewy bodies (DLB), and frontotemporal dementia (FTD). [18F]fluorodeoxyglucose (FDG) PET brain images were obtained from 120 patients (64 AD/MCI, 38 DLB, and 18 FTD) whose diagnoses were clinically confirmed over 2 years of follow-up. Three nuclear medicine physicians performed the diagnosis and rated diagnostic confidence twice: once with the standard visual method and once with the addition of 3D-SSP. Diagnostic performance and confidence were compared between the 2 methods. 3D-SSP showed higher sensitivity, specificity, accuracy, positive, and negative predictive values to discriminate different types of dementia compared with the visual method alone, except for AD/MCI specificity and FTD sensitivity. Correction of misdiagnosis after adding 3D-SSP images was greatest for AD/MCI (56%), followed by DLB (13%) and FTD (11%). Diagnostic confidence also increased in DLB (visual: 3.2; 3D-SSP: 4.1; P < 0.001), followed by AD/MCI (visual: 3.1; 3D-SSP: 3.8; P = 0.002) and FTD (visual: 3.5; 3D-SSP: 4.2; P = 0.022). Overall, 154/360 (43%) cases had a corrected misdiagnosis or improved diagnostic confidence for the correct diagnosis. The addition of 3D-SSP images to visual analysis helped to discriminate different types of dementia in FDG PET scans, by correcting misdiagnoses and enhancing diagnostic confidence in the correct diagnosis. Improvement of diagnostic accuracy and confidence by 3D-SSP images might help to determine the cause of dementia and appropriate treatment. PMID:27930593

  9. Acquisition of a visual discrimination and reversal learning task by Labrador retrievers.

    PubMed

    Lazarowski, Lucia; Foster, Melanie L; Gruen, Margaret E; Sherman, Barbara L; Case, Beth C; Fish, Richard E; Milgram, Norton W; Dorman, David C

    2014-05-01

    Optimal cognitive ability is likely important for military working dogs (MWD) trained to detect explosives. An assessment of a dog's ability to rapidly learn discriminations might be useful in the MWD selection process. In this study, visual discrimination and reversal tasks were used to assess cognitive performance in Labrador retrievers selected for an explosives detection program using a modified version of the Toronto General Testing Apparatus (TGTA), a system developed for assessing performance in a battery of neuropsychological tests in canines. The results of the current study revealed that, as previously found with beagles tested using the TGTA, Labrador retrievers (N = 16) readily acquired both tasks and learned the discrimination task significantly faster than the reversal task. The present study confirmed that the modified TGTA system is suitable for cognitive evaluations in Labrador retriever MWDs and can be used to further explore effects of sex, phenotype, age, and other factors in relation to canine cognition and learning, and may provide an additional screening tool for MWD selection.

  10. Visual Selective Attention Biases Contribute to the Other-Race Effect Among 9-Month-Old Infants

    PubMed Central

    Oakes, Lisa M.; Amso, Dima

    2016-01-01

    During the first year of life, infants maintain their ability to discriminate faces from their own race but become less able to differentiate other-race faces. Though this is likely due to daily experience with own-race faces, the mechanisms linking repeated exposure to optimal face processing remain unclear. One possibility is that frequent experience with own-race faces generates a selective attention bias to these faces. Selective attention elicits enhancement of attended information and suppression of distraction to improve visual processing of attended objects. Thus attention biases to own-race faces may boost processing and discrimination of these faces relative to other-race faces. We used a spatial cueing task to bias attention to own- or other-race faces among Caucasian 9-month-old infants. Infants discriminated faces in the focus of the attention bias, regardless of race, indicating that infants remained sensitive to differences among other-race faces. Instead, efficacy of face discrimination reflected the extent of attention engagement. PMID:26486228

  11. Visual selective attention biases contribute to the other-race effect among 9-month-old infants.

    PubMed

    Markant, Julie; Oakes, Lisa M; Amso, Dima

    2016-04-01

    During the first year of life, infants maintain their ability to discriminate faces from their own race but become less able to differentiate other-race faces. Though this is likely due to daily experience with own-race faces, the mechanisms linking repeated exposure to optimal face processing remain unclear. One possibility is that frequent experience with own-race faces generates a selective attention bias to these faces. Selective attention elicits enhancement of attended information and suppression of distraction to improve visual processing of attended objects. Thus attention biases to own-race faces may boost processing and discrimination of these faces relative to other-race faces. We used a spatial cueing task to bias attention to own- or other-race faces among Caucasian 9-month-old infants. Infants discriminated faces in the focus of the attention bias, regardless of race, indicating that infants remained sensitive to differences among other-race faces. Instead, efficacy of face discrimination reflected the extent of attention engagement. © 2015 Wiley Periodicals, Inc.

  12. Accurate discrimination of the wake-sleep states of mice using non-invasive whole-body plethysmography.

    PubMed

    Bastianini, Stefano; Alvente, Sara; Berteotti, Chiara; Lo Martire, Viviana; Silvani, Alessandro; Swoap, Steven J; Valli, Alice; Zoccoli, Giovanna; Cohen, Gary

    2017-01-31

    A major limitation in the study of sleep breathing disorders in mouse models of pathology is the need to combine whole-body plethysmography (WBP) to measure respiration with electroencephalography/electromyography (EEG/EMG) to discriminate wake-sleep states. However, murine wake-sleep states may be discriminated from the breathing and body movements registered by the WBP signal alone. Our goal was to compare the EEG/EMG-based and the WBP-based scoring of wake-sleep states of mice, and provide formal guidelines for the latter. EEG, EMG, blood pressure and WBP signals were simultaneously recorded from 20 mice. Wake-sleep states were scored based either on EEG/EMG or on WBP signals and sleep-dependent respiratory and cardiovascular estimates were calculated. We found that the overall agreement between the 2 methods was 90%, with a high Cohen's Kappa index (0.82). The inter-rater agreement between 2 expert sleep investigators and between 1 expert and 1 naïve sleep investigator gave similar results. Sleep-dependent respiratory and cardiovascular estimates did not depend on the scoring method. We show that non-invasive discrimination of the wake-sleep states of mice based on visual inspection of the WBP signal is accurate, reliable and reproducible. This work may set the stage for non-invasive high-throughput experiments evaluating sleep and breathing patterns on mouse models of pathophysiology.
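    The agreement statistics reported above are percent agreement and Cohen's kappa, which corrects raw agreement between two scorers for the agreement expected by chance. A minimal sketch with invented wake-sleep labels (not the study's scoring data):

    ```python
    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Return (observed agreement, Cohen's kappa) for two scorers."""
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Chance agreement from each scorer's marginal label frequencies.
        counts_a, counts_b = Counter(labels_a), Counter(labels_b)
        expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
        return observed, (observed - expected) / (1 - expected)

    a = ["wake", "wake", "nrem", "nrem", "rem", "wake", "nrem", "rem"]
    b = ["wake", "wake", "nrem", "rem",  "rem", "wake", "nrem", "rem"]
    agreement, kappa = cohens_kappa(a, b)
    print(round(agreement, 3), round(kappa, 3))
    ```

    Kappa is lower than raw agreement whenever the label distribution makes chance matches likely, which is why the study reports both (90% agreement, kappa 0.82).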

  13. Discriminative exemplar coding for sign language recognition with Kinect.

    PubMed

    Sun, Chao; Zhang, Tianzhu; Bao, Bing-Kun; Xu, Changsheng; Mei, Tao

    2013-10-01

    Sign language recognition is a growing research area in the field of computer vision. One challenge within it is modeling the variety of signs, which differ in time resolution, visual manual appearance, and so on. In this paper, we propose a discriminative exemplar coding (DEC) approach, together with the Kinect sensor, to model various signs. The proposed DEC method can be summarized in three steps. First, a set of class-specific candidate exemplars is learned from the sign language videos in each sign category according to their discriminative power. Then, every video of every sign is described as a set of similarities between its frames and the candidate exemplars. Instead of simply using a heuristic distance measure, the similarities are determined by a set of exemplar-based classifiers trained through multiple instance learning, in which a positive (or negative) video is treated as a positive (or negative) bag and the frames similar to the given exemplar in Euclidean space as its instances. Finally, we formulate the selection of the most discriminative exemplars in a unified framework and simultaneously produce a sign video classifier to recognize signs. To evaluate our method, we collected an American Sign Language dataset of approximately 2,000 phrases, each captured by the Kinect sensor with color, depth, and skeleton information. Experimental results on this dataset demonstrate the feasibility and effectiveness of the proposed approach for sign language recognition.
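    The encoding step above can be sketched with a heuristic stand-in: here each video is represented by its best per-exemplar frame similarity under a Gaussian kernel, whereas the actual DEC method replaces this with exemplar-based classifiers learned via multiple instance learning. All frame features and exemplars below are invented toy values:

    ```python
    import numpy as np

    def encode(video_frames, exemplars, bandwidth=1.0):
        """Describe a video as its maximum frame similarity to each exemplar."""
        codes = []
        for ex in exemplars:
            # Euclidean distance from every frame feature to this exemplar ...
            d = np.linalg.norm(video_frames - ex, axis=1)
            # ... turned into a similarity; keep the best-matching frame.
            codes.append(float(np.exp(-d ** 2 / (2 * bandwidth ** 2)).max()))
        return np.array(codes)

    exemplars = np.array([[0.0, 0.0], [5.0, 5.0]])           # 2 candidate exemplars
    video = np.array([[0.1, -0.1], [4.0, 4.2], [9.0, 9.0]])  # 3 frame features
    code = encode(video, exemplars)  # one similarity value per exemplar
    ```

    The resulting fixed-length code (one entry per exemplar) is what a downstream classifier would consume, regardless of how many frames the video has.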

  14. Evaluation of Oil-Palm Fungal Disease Infestation with Canopy Hyperspectral Reflectance Data

    PubMed Central

    Lelong, Camille C. D.; Roger, Jean-Michel; Brégand, Simon; Dubertret, Fabrice; Lanore, Mathieu; Sitorus, Nurul A.; Raharjo, Doni A.; Caliman, Jean-Pierre

    2010-01-01

    Fungal disease detection in perennial crops is a major issue in estate management and production. Nowadays, however, such diagnosis is long and difficult when made only from visual observation of symptoms, and very expensive and damaging when based on chemical analysis of root or stem tissue. As an alternative, this study evaluates the potential of hyperspectral reflectance data to help detect the disease efficiently without destroying tissue. It focuses on calibrating a statistical model that discriminates between several stages of Ganoderma attack on oil palm trees, based on field hyperspectral measurements at tree scale. The field protocol and measurements are first described. Then, combinations of pre-processing, partial least squares regression and linear discriminant analysis are tested on about one hundred samples to prove the efficiency of canopy reflectance in providing information about plant sanitary status. A robust algorithm is thus derived that classifies oil palms into a 4-level typology based on disease severity, from healthy to critically sick, with a global performance close to 94%. Moreover, this model discriminates sick from healthy trees with a confidence level of almost 98%. Applications and further improvements of this experiment are finally discussed. PMID:22315565
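    The paper's actual pipeline combines pre-processing, partial least squares regression and linear discriminant analysis. As a much-simplified illustration of discriminating sanitary status from canopy reflectance, the sketch below separates two classes of synthetic spectra with a plain Fisher-style linear discriminant; all spectra, band positions and the "disease signature" are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic canopy reflectance: 100 bands, 50 trees per class.
    # "Sick" spectra get a reflectance dip in bands 60-80 (an invented signature).
    n, bands = 50, 100
    healthy = rng.normal(0.40, 0.05, (n, bands))
    sick = rng.normal(0.40, 0.05, (n, bands))
    sick[:, 60:80] -= 0.08

    X = np.vstack([healthy, sick])
    y = np.array([0] * n + [1] * n)

    # Linear discriminant along the difference of class means
    # (assumes roughly isotropic within-class covariance).
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    w = mu1 - mu0
    threshold = (mu0 + mu1) @ w / 2
    pred = (X @ w > threshold).astype(int)
    accuracy = (pred == y).mean()  # near-perfect on this easy synthetic set
    ```

    A real PLS-DA pipeline would first project the spectra onto a few latent components before the discriminant step, which matters when bands far outnumber samples.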

  15. Efficient visual object and word recognition relies on high spatial frequency coding in the left posterior fusiform gyrus: evidence from a case-series of patients with ventral occipito-temporal cortex damage.

    PubMed

    Roberts, Daniel J; Woollams, Anna M; Kim, Esther; Beeson, Pelagie M; Rapcsak, Steven Z; Lambon Ralph, Matthew A

    2013-11-01

    Recent visual neuroscience investigations suggest that ventral occipito-temporal cortex is retinotopically organized, with high acuity foveal input projecting primarily to the posterior fusiform gyrus (pFG), making this region crucial for coding high spatial frequency information. Because high spatial frequencies are critical for fine-grained visual discrimination, we hypothesized that damage to the left pFG should have an adverse effect not only on efficient reading, as observed in pure alexia, but also on the processing of complex non-orthographic visual stimuli. Consistent with this hypothesis, we obtained evidence that a large case series (n = 20) of patients with lesions centered on left pFG: 1) Exhibited reduced sensitivity to high spatial frequencies; 2) demonstrated prolonged response latencies both in reading (pure alexia) and object naming; and 3) were especially sensitive to visual complexity and similarity when discriminating between novel visual patterns. These results suggest that the patients' dual reading and non-orthographic recognition impairments have a common underlying mechanism and reflect the loss of high spatial frequency visual information normally coded in the left pFG.

  16. Sexual selection in the squirrel treefrog Hyla squirella: the role of multimodal cue assessment in female choice

    USGS Publications Warehouse

    Taylor, Ryan C.; Buchanan, Bryant W.; Doherty, Jessie L.

    2007-01-01

    Anuran amphibians have provided an excellent system for the study of animal communication and sexual selection. Studies of female mate choice in anurans, however, have focused almost exclusively on the role of auditory signals. In this study, we examined the effect of both auditory and visual cues on female choice in the squirrel treefrog. Our experiments used a two-choice protocol in which we varied male vocalization properties, visual cues, or both, to assess female preferences for the different cues. Females discriminated against high-frequency calls and expressed a strong preference for calls that contained more energy per unit time (faster call rate). Females expressed a preference for the visual stimulus of a model of a calling male when call properties at the two speakers were held the same. They also showed a significant attraction to a model possessing a relatively large lateral body stripe. These data indicate that visual cues do play a role in mate attraction in this nocturnal frog species. Furthermore, this study adds to a growing body of evidence that suggests that multimodal signals play an important role in sexual selection.

  17. Visual saliency detection based on in-depth analysis of sparse representation

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Shen, Siqiu; Ning, Chen

    2018-03-01

    Visual saliency detection has received great attention in recent years because it facilitates a wide range of applications in computer vision. A variety of saliency models have been proposed under different assumptions, among which saliency detection via sparse representation is one of the more recent approaches. However, most existing sparse representation-based saliency detection methods exploit only part of the characteristics of sparse representation and lack in-depth analysis, which may limit their detection performance. Motivated by this, this paper proposes an algorithm for detecting visual saliency based on an in-depth analysis of sparse representation. A number of discriminative dictionaries are first learned from randomly sampled image patches by means of inner product-based dictionary atom classification. The input image is then partitioned into patches, which are classified into salient and nonsalient sets through in-depth analysis of their sparse coding coefficients. Afterward, sparse reconstruction errors are calculated for the salient and nonsalient patch sets. By investigating these errors, the most salient atoms, which tend to come from the most salient region, are screened out and removed from the discriminative dictionaries. Finally, an effective method generates the saliency map from the reduced dictionaries. Comprehensive evaluations on publicly available datasets, and comparisons with state-of-the-art approaches, demonstrate the effectiveness of the proposed algorithm.
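    As a toy illustration of the reconstruction-error idea, the sketch below codes each patch with the single best-matching atom of an orthonormal dictionary and measures the residual; patches the dictionary explains poorly are the candidates for saliency in this family of methods. The dictionary and patches are invented, and real methods use learned, overcomplete dictionaries with multi-atom sparse coding:

    ```python
    import numpy as np

    def one_atom_error(patch, dictionary):
        """Reconstruction error after coding `patch` with its best single atom.

        `dictionary` holds unit-norm atoms as columns.
        """
        corr = dictionary.T @ patch          # correlation with every atom
        k = int(np.argmax(np.abs(corr)))     # best-matching atom
        residual = patch - corr[k] * dictionary[:, k]
        return float(np.linalg.norm(residual))

    dictionary = np.eye(2)                   # trivial orthonormal dictionary
    well_explained = np.array([0.0, 5.0])    # lies along an atom: zero error
    poorly_explained = np.array([3.0, 4.0])  # off-atom: large residual
    e_low = one_atom_error(well_explained, dictionary)
    e_high = one_atom_error(poorly_explained, dictionary)
    ```

    Ranking patches by this residual is the crude analogue of comparing sparse reconstruction errors between the salient and nonsalient patch sets.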

  18. Visual processing affects the neural basis of auditory discrimination.

    PubMed

    Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko

    2008-12-01

    The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that the visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.

  19. A Role for Mouse Primary Visual Cortex in Motion Perception.

    PubMed

    Marques, Tiago; Summers, Mathew T; Fioreze, Gabriela; Fridman, Marina; Dias, Rodrigo F; Feller, Marla B; Petreanu, Leopoldo

    2018-06-04

    Visual motion is an ethologically important stimulus throughout the animal kingdom. In primates, motion perception relies on specific higher-order cortical regions. Although mouse primary visual cortex (V1) and higher-order visual areas show direction-selective (DS) responses, their role in motion perception remains unknown. Here, we tested whether V1 is involved in motion perception in mice. We developed a head-fixed discrimination task in which mice must report their perceived direction of motion from random dot kinematograms (RDKs). After training, mice made around 90% correct choices for stimuli with high coherence and performed significantly above chance for 16% coherent RDKs. Accuracy increased with both stimulus duration and visual field coverage of the stimulus, suggesting that mice in this task integrate motion information in time and space. Retinal recordings showed that thalamically projecting On-Off DS ganglion cells display DS responses when stimulated with RDKs. Two-photon calcium imaging revealed that neurons in layer (L) 2/3 of V1 display strong DS tuning in response to this stimulus. Thus, RDKs engage motion-sensitive retinal circuits as well as downstream visual cortical areas. Contralateral V1 activity played a key role in this motion direction discrimination task because its reversible inactivation with muscimol led to a significant reduction in performance. Neurometric-psychometric comparisons showed that an ideal observer could solve the task with the information encoded in DS L2/3 neurons. Motion discrimination of RDKs presents a powerful behavioral tool for dissecting the role of retino-forebrain circuits in motion processing. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Visual perception of fatigued lifting actions.

    PubMed

    Fischer, Steven L; Albert, Wayne J; McGarry, Tim

    2012-12-01

    Fatigue-related changes in lifting kinematics may expose workers to undue injury risks. Early detection of accumulating fatigue offers the prospect of intervention strategies to mitigate such fatigue-related risks. As a first step towards this objective, this study investigated whether fatigue detection was accessible to visual perception and, if so, what key visual information was required for successful fatigue discrimination. Eighteen participants were tasked with identifying fatigued lifts when viewing 24 trials presented using both video and point-light representations. Each trial comprised a pair of lifting actions, a fresh and a fatigued lift from the same individual, presented in counter-balanced sequence. Confidence intervals demonstrated that the frequency of correct responses for both sexes exceeded chance expectations (50%) for both video (68%±12%) and point-light representations (67%±10%), demonstrating that fatigued lifting kinematics are open to visual perception. There were no significant differences between sexes or viewing conditions; the latter result indicates that kinematic dynamics provide sufficient information for successful fatigue discrimination. Moreover, a single-viewer investigation reported fatigue detection (75%) from point-light information describing only the kinematics of the lifted box. These preliminary findings may have important workplace applications if fatigue discrimination rates can be improved through future research. Copyright © 2012 Elsevier B.V. All rights reserved.
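    Whether a detection rate reliably exceeds the 50% chance level can be checked with a simple confidence interval on a proportion. A hedged sketch using a normal approximation over pooled trials; the study itself computed intervals across participants, so the pooled trial counts below are only an illustration:

    ```python
    import math

    def proportion_ci(p_hat, n, z=1.96):
        """Normal-approximation 95% CI for a proportion of correct responses."""
        se = math.sqrt(p_hat * (1 - p_hat) / n)
        return p_hat - z * se, p_hat + z * se

    # 18 viewers x 24 trials at 68% correct (the video condition), pooled.
    low, high = proportion_ci(0.68, 18 * 24)
    exceeds_chance = low > 0.5  # chance level for a two-alternative judgment
    ```

    When the lower bound clears 0.5, the detection rate is unlikely to be explained by guessing alone under this approximation.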

  1. Visual variability affects early verb learning.

    PubMed

    Twomey, Katherine E; Lush, Lauren; Pearce, Ruth; Horst, Jessica S

    2014-09-01

    Research demonstrates that within-category visual variability facilitates noun learning; however, the effect of visual variability on verb learning is unknown. We habituated 24-month-old children to a novel verb paired with an animated star-shaped actor. Across multiple trials, children saw either a single action from an action category (identical actions condition, for example, travelling while repeatedly changing into a circle shape) or multiple actions from that action category (variable actions condition, for example, travelling while changing into a circle shape, then a square shape, then a triangle shape). Four test trials followed habituation. One paired the habituated verb with a new action from the habituated category (e.g., 'dacking' + pentagon shape) and one with a completely novel action (e.g., 'dacking' + leg movement). The others paired a new verb with a new same-category action (e.g., 'keefing' + pentagon shape), or a completely novel category action (e.g., 'keefing' + leg movement). Although all children discriminated novel verb/action pairs, children in the identical actions condition discriminated trials that included the completely novel verb, while children in the variable actions condition discriminated the out-of-category action. These data suggest that, as in noun learning, visual variability affects verb learning and children's ability to form action categories. © 2014 The British Psychological Society.

  2. Photoreceptor Cells With Profound Structural Deficits Can Support Useful Vision in Mice

    PubMed Central

    Thompson, Stewart; Blodi, Frederick R.; Lee, Swan; Welder, Chris R.; Mullins, Robert F.; Tucker, Budd A.; Stasheff, Steven F.; Stone, Edwin M.

    2014-01-01

    Purpose. In animal models of degenerative photoreceptor disease, there has been some success in restoring photoreception by transplanting stem cell–derived photoreceptor cells into the subretinal space. However, only a small proportion of transplanted cells develop extended outer segments, considered critical for photoreceptor cell function. The purpose of this study was to determine whether photoreceptor cells that lack a fully formed outer segment could usefully contribute to vision. Methods. Retinal and visual function was tested in wild-type and Rds mice at 90 days of age (RdsP90). Photoreceptor cells of mice homozygous for the Rds mutation in peripherin 2 never develop a fully formed outer segment. The electroretinogram and multielectrode recording of retinal ganglion cells were used to test retinal responses to light. Three distinct visual behaviors were used to assess visual capabilities: the optokinetic tracking response, the discrimination-based visual water task, and a measure of the effect of vision on wheel running. Results. RdsP90 mice had reduced but measurable electroretinogram responses to light, and exhibited light-evoked responses in multiple types of retinal ganglion cells, the output neurons of the retina. In optokinetic and discrimination-based tests, acuity was measurable but reduced, most notably when contrast was decreased. The wheel running test showed that RdsP90 mice needed 3 log units brighter luminance than wild type to support useful vision (10 cd/m2). Conclusions. Photoreceptors that lack fully formed outer segments can support useful vision. This challenges the idea that normal cellular structure needs to be completely reproduced for transplanted cells to contribute to useful vision. PMID:24569582

  3. Crossmodal Statistical Binding of Temporal Information and Stimuli Properties Recalibrates Perception of Visual Apparent Motion

    PubMed Central

    Zhang, Yi; Chen, Lihan

    2016-01-01

    Recent studies of brain plasticity that pertain to time perception have shown that fast training of temporal discrimination in one modality, for example, the auditory modality, can improve performance of temporal discrimination in another modality, such as the visual modality. Here we examined whether the perception of visual Ternus motion could be recalibrated through fast crossmodal statistical binding of temporal information and stimulus properties. We conducted two experiments, each composed of three sessions: pre-test, learning, and post-test. In both the pre-test and the post-test, participants classified the Ternus display as either “element motion” or “group motion.” For the training session in Experiment 1, we constructed two types of temporal structures, in which two consecutively presented sound beeps were dominantly (80%) flanked by one leading and one lagging visual Ternus frame (VAAV) or dominantly inserted between two Ternus visual frames (AVVA). Participants were required to respond which interval (auditory vs. visual) was longer. In Experiment 2, we presented only a single auditory–visual pair, with temporal configurations similar to those of Experiment 1, and asked participants to perform an audio–visual temporal order judgment. The results of these two experiments support that statistical binding of temporal information and stimulus properties can quickly and selectively recalibrate the sensitivity of perceiving visual motion, according to the protocols of the specific bindings. PMID:27065910

  4. The effect of listening experience on the discrimination of /ba/ and /pa/ in Hebrew-learning and Arabic-learning infants.

    PubMed

    Segal, Osnat; Hejli-Assi, Saja; Kishon-Rabin, Liat

    2016-02-01

    Infant speech discrimination can follow multiple trajectories depending on the language and the specific phonemes involved. Two understudied languages in terms of the development of infants' speech discrimination are Arabic and Hebrew. The purpose of the present study was to examine the influence of listening experience with the native language on the discrimination of the voicing contrast /ba-pa/ in Arabic-learning infants, whose native language includes only the phoneme /b/, and in Hebrew-learning infants, whose native language includes both phonemes. In total, 128 Arabic-learning and Hebrew-learning infants, aged 4-to-6 or 10-to-12 months, were tested with the Visual Habituation Procedure. The results showed that 4-to-6-month-old infants discriminated between /ba-pa/ regardless of their native language and order of presentation. However, only the 10-to-12-month-old infants learning Hebrew retained this ability. The 10-to-12-month-old infants learning Arabic did not discriminate the change from /ba/ to /pa/ but showed a tendency to discriminate the change from /pa/ to /ba/. This is the first study to report reduced discrimination of /ba-pa/ in older infants learning Arabic. Our findings are consistent with the notion that experience with the native language changes discrimination abilities and alters sensitivity to non-native contrasts, thus providing evidence for 'top-down' processing in young infants. The directional asymmetry in older infants learning Arabic can be explained by assimilation of the non-native consonant /p/ to the native Arabic category /b/, as predicted by current speech perception models. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Peripheral Vision of Youths with Low Vision: Motion Perception, Crowding, and Visual Search

    PubMed Central

    Tadin, Duje; Nyquist, Jeffrey B.; Lusk, Kelly E.; Corn, Anne L.; Lappin, Joseph S.

    2012-01-01

    Purpose. Effects of low vision on peripheral visual function are poorly understood, especially in children whose visual skills are still developing. The aim of this study was to measure both central and peripheral visual functions in youths with typical and low vision. Of specific interest was the extent to which measures of foveal function predict performance of peripheral tasks. Methods. We assessed central and peripheral visual functions in youths with typical vision (n = 7, ages 10–17) and low vision (n = 24, ages 9–18). Experimental measures used both static and moving stimuli and included visual crowding, visual search, motion acuity, motion direction discrimination, and multitarget motion comparison. Results. In most tasks, visual function was impaired in youths with low vision. Substantial differences, however, were found both between participant groups and, importantly, across different tasks within participant groups. Foveal visual acuity was a modest predictor of peripheral form vision and motion sensitivity in either the central or peripheral field. Despite exhibiting normal motion discriminations in fovea, motion sensitivity of youths with low vision deteriorated in the periphery. This contrasted with typically sighted participants, who showed improved motion sensitivity with increasing eccentricity. Visual search was greatly impaired in youths with low vision. Conclusions. Our results reveal a complex pattern of visual deficits in peripheral vision and indicate a significant role of attentional mechanisms in observed impairments. These deficits were not adequately captured by measures of foveal function, arguing for the importance of independently assessing peripheral visual function. PMID:22836766

  7. Mouse V1 population correlates of visual detection rely on heterogeneity within neuronal response patterns

    PubMed Central

    Montijn, Jorrit S; Goltstein, Pieter M; Pennartz, Cyriel MA

    2015-01-01

    Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination, and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in functional magnetic resonance imaging, electro-encephalogram, or magnetoencephalography studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca2+ responses to visual stimuli recorded with in vivo two-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity rather than overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during nondetections. Contrary to models relying on temporally stable networks or bulk signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations. DOI: http://dx.doi.org/10.7554/eLife.10163.001 PMID:26646184

  8. Effects of Attention and Laterality on Motion and Orientation Discrimination in Deaf Signers

    ERIC Educational Resources Information Center

    Bosworth, Rain G.; Petrich, Jennifer A. F.; Dobkins, Karen R.

    2013-01-01

    Previous studies have asked whether visual sensitivity and attentional processing in deaf signers are enhanced or altered as a result of their different sensory experiences during development, i.e., auditory deprivation and exposure to a visual language. In particular, deaf and hearing signers have been shown to exhibit a right visual field/left…

  9. Visual Sensitivities and Discriminations and Their Roles in Aviation.

    DTIC Science & Technology

    1986-03-01

    D. Low contrast letter charts in early diabetic retinopathy, ocular hypertension, glaucoma and Parkinson's disease. Br J Ophthalmol, 1984, 68, 885...to detect a camouflaged object that was visible only when moving, and compared these data with similar measurements for conventional objects that were...(3) Compare visual detection (i.e. visual acquisition) of camouflaged objects whose edges are defined by velocity differences with visual detection

  10. Efficiencies for the statistics of size discrimination.

    PubMed

    Solomon, Joshua A; Morgan, Michael; Chubb, Charles

    2011-10-19

    Different laboratories have achieved a consensus regarding how well human observers can estimate the average orientation in a set of N objects. Such estimates are not only limited by visual noise, which perturbs the visual signal of each object's orientation, they are also inefficient: Observers effectively use only √N objects in their estimates (e.g., S. C. Dakin, 2001; J. A. Solomon, 2010). More controversial is the efficiency with which observers can estimate the average size in an array of circles (e.g., D. Ariely, 2001, 2008; S. C. Chong, S. J. Joo, T.-A. Emmanouil, & A. Treisman, 2008; K. Myczek & D. J. Simons, 2008). Of course, there are some important differences between orientation and size; nonetheless, it seemed sensible to compare the two types of estimate against the same ideal observer. Indeed, quantitative evaluation of statistical efficiency requires this sort of comparison (R. A. Fisher, 1925). Our first step was to measure the noise that limits size estimates when only two circles are compared. Our results (Weber fractions between 0.07 and 0.14 were necessary for 84% correct 2AFC performance) are consistent with the visual system adding the same amount of Gaussian noise to all logarithmically transduced circle diameters. We exaggerated this visual noise by randomly varying the diameters in (uncrowded) arrays of 1, 2, 4, and 8 circles and measured its effect on discrimination between mean sizes. Efficiencies inferred from all four observers significantly exceed 25% and, in two cases, approach 100%. More consistent are our measurements of just-noticeable differences in size variance. These latter results suggest between 62 and 75% efficiency for variance discriminations. Although our observers were no more efficient comparing size variances than they were at comparing mean sizes, they were significantly more precise. In other words, our results contain evidence for a non-negligible source of late noise that limits mean discriminations but not variance discriminations.
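    The notion of efficiency used in this record (observers effectively averaging only some of the N items) can be illustrated by comparing estimate variances against an ideal observer. A minimal simulation, with an invented internal-noise level, for a toy observer that uses exactly k of N items:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # An ideal observer averages all N noisy log-transduced diameters; the model
    # observer effectively uses only k of them.  Efficiency is the variance
    # ratio ideal/model, which for this toy observer reduces to k/N.
    N, k, trials = 8, 4, 200_000
    sigma = 0.1  # internal (visual) noise per item, an assumed value

    noise = rng.normal(0.0, sigma, (trials, N))
    ideal_estimate = noise.mean(axis=1)         # variance sigma**2 / N
    model_estimate = noise[:, :k].mean(axis=1)  # variance sigma**2 / k

    efficiency = ideal_estimate.var() / model_estimate.var()  # approx k/N = 0.5
    ```

    The √N rule quoted in the abstract corresponds to an efficiency of √N/N, i.e. the model observer behaving as if it averaged only √N of the N items.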

  11. Dry-eye screening by using a functional visual acuity measurement system: the Osaka Study.

    PubMed

    Kaido, Minako; Uchino, Miki; Yokoi, Norihiko; Uchino, Yuichi; Dogru, Murat; Kawashima, Motoko; Komuro, Aoi; Sonomura, Yukiko; Kato, Hiroaki; Kinoshita, Shigeru; Tsubota, Kazuo

    2014-05-06

    We determined whether functional visual acuity (VA) parameters and a dry-eye (DE) symptom questionnaire could predict DE in a population of visual display terminal (VDT) users. This prospective study included 491 VDT users from the Osaka Study. Subjects with definite DE (DE symptoms, tear abnormality [Schirmer test ≤ 5 mm or tear breakup time (TBUT) ≤ 5 seconds], and conjunctivocorneal epithelial damage [total staining score ≥ 3 points]) or probable DE (any two of these criteria) were assigned to the DE group, and the remainder to the non-DE group. Functional VA was assessed, and DE questionnaires were administered. We assessed whether univariate and discriminant analyses could determine to which group a subject belonged, and evaluated sensitivity and specificity. Of the 491 subjects, 320 were assigned to the DE group and 171 to the non-DE group. No significant differences were observed between the DE and non-DE groups in Schirmer test values or epithelial damage, but TBUT was significantly shorter in the DE group (3.1 ± 1.5 vs. 5.9 ± 3.0 seconds). The sensitivity and specificity of single tests were 59% and 49% for functional VA, 60% and 50% for the visual maintenance ratio, and 83% and 30% for blink frequency, respectively. A discriminant analysis combining functional VA parameters with the DE questionnaire selected six variables for the discriminant equation, whose area under the curve (AUC) was 0.735. The sensitivity and specificity of diagnoses predicted by the discriminant equation were 85.9% and 45.6%, respectively. The discriminant equation obtained by combining functional VA measurement with a symptom questionnaire may thus serve as a first-step screen for DE with an unstable tear film; given its overall modest sensitivity and specificity, further refinement may be necessary before actual use as a screening tool. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
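    The sensitivity, specificity and AUC reported in this record are standard measures that can be computed directly from discriminant scores and diagnoses. A small sketch with invented scores (not the study's data), where the AUC uses its rank-based definition:

    ```python
    def sensitivity_specificity(scores, labels, threshold):
        """labels: 1 = DE, 0 = non-DE; scores at/above threshold predict DE."""
        tp = sum(s >= threshold for s, l in zip(scores, labels) if l == 1)
        fn = sum(s < threshold for s, l in zip(scores, labels) if l == 1)
        tn = sum(s < threshold for s, l in zip(scores, labels) if l == 0)
        fp = sum(s >= threshold for s, l in zip(scores, labels) if l == 0)
        return tp / (tp + fn), tn / (tn + fp)

    def auc(scores, labels):
        """P(random DE case outscores random non-DE case), ties counting half."""
        pos = [s for s, l in zip(scores, labels) if l == 1]
        neg = [s for s, l in zip(scores, labels) if l == 0]
        wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
        return wins / (len(pos) * len(neg))

    # Invented discriminant scores and group labels for illustration only.
    scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.55, 0.3, 0.2, 0.35, 0.1]
    labels = [1,   1,   1,   1,   1,   0,    0,   0,   0,    0]
    sens, spec = sensitivity_specificity(scores, labels, threshold=0.5)  # 0.8, 0.8
    area = auc(scores, labels)  # 0.96
    ```

    Moving the threshold trades sensitivity against specificity; the AUC summarizes that trade-off over all thresholds, which is why it is reported alongside a single operating point.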

  12. Enhancement of plant metabolite fingerprinting by machine learning.

    PubMed

    Scott, Ian M; Vermeer, Cornelia P; Liakata, Maria; Corol, Delia I; Ward, Jane L; Lin, Wanchang; Johnson, Helen E; Whitehead, Lynne; Kular, Baldeep; Baker, John M; Walsh, Sean; Dave, Anuja; Larson, Tony R; Graham, Ian A; Wang, Trevor L; King, Ross D; Draper, John; Beale, Michael H

    2010-08-01

    Metabolite fingerprinting of Arabidopsis (Arabidopsis thaliana) mutants with known or predicted metabolic lesions was performed by (1)H-nuclear magnetic resonance, Fourier transform infrared, and flow injection electrospray-mass spectrometry. Fingerprinting enabled processing of five times more plants than conventional chromatographic profiling and was competitive for discriminating mutants, other than those affected in only low-abundance metabolites. Despite their rapidity and complexity, fingerprints yielded metabolomic insights (e.g. that effects of single lesions were usually not confined to individual pathways). Among fingerprint techniques, (1)H-nuclear magnetic resonance discriminated the most mutant phenotypes from the wild type and Fourier transform infrared discriminated the fewest. To maximize information from fingerprints, data analysis was crucial. One-third of distinctive phenotypes might have been overlooked had data models been confined to principal component analysis score plots. Among several methods tested, machine learning (ML) algorithms, namely support vector machine or random forest (RF) classifiers, were unsurpassed for phenotype discrimination. Support vector machines were often the best performing classifiers, but RFs yielded some particularly informative measures. First, RFs estimated margins between mutant phenotypes, whose relations could then be visualized by Sammon mapping or hierarchical clustering. Second, RFs provided importance scores for the features within fingerprints that discriminated mutants. These scores correlated with analysis of variance F values (as did Kruskal-Wallis tests, true- and false-positive measures, mutual information, and the Relief feature selection algorithm). ML classifiers, as models trained on one data set to predict another, were ideal for focused metabolomic queries, such as the distinctiveness and consistency of mutant phenotypes. Accessible software for use of ML in plant physiology is highlighted.
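The abstract notes that random-forest importance scores for fingerprint features correlated with analysis-of-variance F values. For a single fingerprint feature measured across genotype groups, the one-way ANOVA F statistic can be computed with the standard library alone. An illustrative sketch with hypothetical intensities, not the authors' code:

```python
def anova_f(groups):
    """One-way ANOVA F: between-group mean square / within-group mean square."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical intensities of one metabolite signal in wild type and two mutants:
print(anova_f([[1.0, 1.2, 1.1], [2.0, 2.1, 1.9], [1.0, 1.1, 0.9]]))
```

A large F marks a feature whose between-genotype variation dominates its within-genotype noise, which is why it tracks RF importance for discriminative features.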

  13. Effects of Peripheral Eccentricity and Head Orientation on Gaze Discrimination

    PubMed Central

    Palanica, Adam; Itier, Roxane J.

    2017-01-01

    Visual search tasks support a special role for direct gaze in human cognition, while classic gaze judgment tasks suggest the congruency between head orientation and gaze direction plays a central role in gaze perception. Moreover, whether gaze direction can be accurately discriminated in the periphery using covert attention is unknown. In the present study, individual faces in frontal and in deviated head orientations with a direct or an averted gaze were flashed for 150 ms across the visual field; participants focused on a centred fixation while judging the gaze direction. Gaze discrimination speed and accuracy varied with head orientation and eccentricity. The limit of accurate gaze discrimination was less than ±6° eccentricity. Response times suggested a processing facilitation for direct gaze in fovea, irrespective of head orientation, however, by ±3° eccentricity, head orientation started biasing gaze judgments, and this bias increased with eccentricity. Results also suggested a special processing of frontal heads with direct gaze in central vision, rather than a general congruency effect between eye and head cues. Thus, while both head and eye cues contribute to gaze discrimination, their role differs with eccentricity. PMID:28344501

  14. Discriminative power of visual attributes in dermatology.

    PubMed

    Giotis, Ioannis; Visser, Margaretha; Jonkman, Marcel; Petkov, Nicolai

    2013-02-01

    Visual characteristics such as the color and shape of skin lesions play an important role in the diagnostic process. In this contribution, we quantify the discriminative power of such attributes using an information-theoretical approach. We estimate the probability of occurrence of each attribute as a function of the skin diseases. We use the distribution of this probability across the studied diseases and its entropy to define the discriminative power of the attribute. The discriminative power has a maximum value for attributes that occur (or do not occur) for only one disease and a minimum value for those which are equally likely to be observed among all diseases. Verrucous surface, red and brown colors, and the presence of more than 10 lesions are among the most informative attributes. A ranking of attributes is also carried out and used together with a naive Bayesian classifier, yielding results that confirm the soundness of the proposed method. The proposed measure is shown to be a reliable way of assessing the discriminative power of dermatological attributes, and it also helps generate a condensed dermatological lexicon. Therefore, it can be of added value to the manual or computer-aided diagnostic process. © 2012 John Wiley & Sons A/S.
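The entropy-based measure described above can be sketched in a few lines. The exact normalization here (discriminative power = 1 minus the entropy of the attribute's occurrence distribution across diseases, divided by the maximum entropy log k) is our assumption for illustration; the paper's precise definition may differ:

```python
import math

def discriminative_power(p):
    """p[i]: probability that the attribute occurs given disease i,
    normalized to sum to 1 across the k diseases studied."""
    k = len(p)
    entropy = -sum(q * math.log(q) for q in p if q > 0)
    return 1 - entropy / math.log(k)

print(discriminative_power([1.0, 0.0, 0.0]))   # 1.0: occurs for one disease only
print(discriminative_power([1/3, 1/3, 1/3]))   # ~0: equally likely in all diseases
```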

  15. Paper spray mass spectrometry and PLS-DA improved by variable selection for the forensic discrimination of beers.

    PubMed

    Pereira, Hebert Vinicius; Amador, Victória Silva; Sena, Marcelo Martins; Augusti, Rodinei; Piccin, Evandro

    2016-10-12

    Paper spray mass spectrometry (PS-MS) combined with partial least squares discriminant analysis (PLS-DA) was applied for the first time in a forensic context for the fast and effective differentiation of beers. Eight brands of American standard lager beers produced by four different breweries (141 samples from 55 batches) were studied with the aim of differentiating them according to their market prices. The three leading brands in the Brazilian beer market, which have been subject to fraud, were modeled as the higher-price class, while the five brands most used for counterfeiting were modeled as the lower-price class. Parameters affecting paper spray ionization were examined and optimized. The best MS signal stability and intensity were obtained in positive ion mode, with PS(+) mass spectra characterized by intense pairs of signals corresponding to sodium and potassium adducts of malto-oligosaccharides. Discrimination was not apparent either by visual inspection or by principal component analysis (PCA). However, supervised classification models provided high rates of sensitivity and specificity. A PLS-DA model using full-scan mass spectra was improved by variable selection with ordered predictors selection (OPS), providing a reliability rate of 100% and reducing the number of variables from 1701 to 60. This model was interpreted by identifying the fifteen variables with the most significant VIP (variable importance in projection) scores, which were therefore considered diagnostic ions for this type of beer counterfeiting. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Visual and haptic integration in the estimation of softness of deformable objects

    PubMed Central

    Cellini, Cristiano; Kaim, Lukas; Drewing, Knut

    2013-01-01

    Softness perception intrinsically relies on haptic information. However, through everyday experiences we learn correspondences between felt softness and the visual effects of exploratory movements that are executed to feel softness. Here, we studied how visual and haptic information is integrated to assess the softness of deformable objects. Participants discriminated between the softness of two softer or two harder objects using only-visual, only-haptic or both visual and haptic information. We assessed the reliabilities of the softness judgments using the method of constant stimuli. In visuo-haptic trials, discrepancies between the two senses' information allowed us to measure the contribution of the individual senses to the judgments. Visual information (finger movement and object deformation) was simulated using computer graphics; input in visual trials was taken from previous visuo-haptic trials. Participants were able to infer softness from vision alone, and vision considerably contributed to bisensory judgments (∼35%). The visual contribution was higher than predicted from models of optimal integration (senses are weighted according to their reliabilities). Bisensory judgments were less reliable than predicted from optimal integration. We conclude that the visuo-haptic integration of softness information is biased toward vision, rather than being optimal, and might even be guided by a fixed weighting scheme. PMID:25165510
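The optimal-integration benchmark used above is the standard maximum-likelihood cue-combination model: each sense is weighted by its reliability (inverse variance), and the combined estimate is predicted to be more reliable than either sense alone. A minimal sketch of the textbook model (our illustration, not the authors' analysis code):

```python
def mle_prediction(sigma_visual, sigma_haptic):
    """Return the predicted visual weight and combined judgment noise
    under maximum-likelihood integration of two independent cues."""
    r_v = 1.0 / sigma_visual ** 2      # reliability = inverse variance
    r_h = 1.0 / sigma_haptic ** 2
    w_visual = r_v / (r_v + r_h)       # predicted weight on vision
    sigma_combined = (1.0 / (r_v + r_h)) ** 0.5
    return w_visual, sigma_combined

# If haptic softness estimates are noisier than visual ones, vision dominates:
w, s = mle_prediction(sigma_visual=1.0, sigma_haptic=2.0)
print(round(w, 2), round(s, 2))  # 0.8 0.89
```

An observed visual weight above `w_visual`, or bisensory noise above `sigma_combined`, indicates the sub-optimal, vision-biased integration the study reports.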

  17. Models of Speed Discrimination

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most research efforts in the study of the visual system have been focused in two almost non-overlapping directions. One research focus has been low-level perception as studied by psychophysics. The other has been the study of high-level vision, exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing the analyzers that decompose images into useful components. Various models are then compared with physiological measurements performed on neurons in the sensory systems. In the study of higher-level perception, the work has focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D from a variety of physical measurements, including motion, shading, and other physical phenomena. With few exceptions, there has been very limited development of theories describing how the visual system might combine the output of the analyzers to form the representation of visual objects. The processes underlying the integration of information over space therefore represent critical aspects of the visual system; understanding them will have implications for our expectations about the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: 1. Modeling visual search for the detection of speed deviation. 2. Perception of moving objects. 3. Exploring the role of eye movements in various visual tasks.

  18. Transfer of perceptual learning between different visual tasks

    PubMed Central

    McGovern, David P.; Webb, Ben S.; Peirce, Jonathan W.

    2012-01-01

    Practice in most sensory tasks substantially improves perceptual performance. A hallmark of this ‘perceptual learning' is its specificity for the basic attributes of the trained stimulus and task. Recent studies have challenged the specificity of learned improvements, although transfer between substantially different tasks has yet to be demonstrated. Here, we measure the degree of transfer between three distinct perceptual tasks. Participants trained on an orientation discrimination, a curvature discrimination, or a ‘global form' task, all using stimuli comprised of multiple oriented elements. Before and after training they were tested on all three and a contrast discrimination control task. A clear transfer of learning was observed, in a pattern predicted by the relative complexity of the stimuli in the training and test tasks. Our results suggest that sensory improvements derived from perceptual learning can transfer between very different visual tasks. PMID:23048211

  20. Preserved Discrimination Performance and Neural Processing during Crossmodal Attention in Aging

    PubMed Central

    Mishra, Jyoti; Gazzaley, Adam

    2013-01-01

    In a recent study in younger adults (19-29 year olds) we showed evidence that distributed audiovisual attention resulted in improved discrimination performance for audiovisual stimuli compared to focused visual attention. Here, we extend our findings to healthy older adults (60-90 year olds), showing that performance benefits of distributed audiovisual attention in this population match those of younger adults. Specifically, improved performance was revealed in faster response times for semantically congruent audiovisual stimuli during distributed relative to focused visual attention, without any differences in accuracy. For semantically incongruent stimuli, discrimination accuracy was significantly improved during distributed relative to focused attention. Furthermore, event-related neural processing showed intact crossmodal integration in higher performing older adults similar to younger adults. Thus, there was insufficient evidence to support an age-related deficit in crossmodal attention. PMID:24278464

  1. Braille character discrimination in blindfolded human subjects.

    PubMed

    Kauffman, Thomas; Théoret, Hugo; Pascual-Leone, Alvaro

    2002-04-16

    Visual deprivation may lead to enhanced performance in other sensory modalities. Whether this is the case in the tactile modality is controversial and may depend upon specific training and experience. We compared the performance of sighted subjects on a Braille character discrimination task to that of normal individuals blindfolded for a period of five days. Some participants in each group (blindfolded and sighted) received intensive Braille training to offset the effects of experience. Blindfolded subjects performed better than sighted subjects in the Braille discrimination task, irrespective of tactile training. For the left index finger, which had not been used in the formal Braille classes, blindfolding had no effect on performance while subjects who underwent tactile training outperformed non-stimulated participants. These results suggest that visual deprivation speeds up Braille learning and may be associated with behaviorally relevant neuroplastic changes.

  2. Exploring What’s Missing: What Do Target Absent Trials Reveal About Autism Search Superiority?

    PubMed Central

    Keehn, Brandon; Joseph, Robert M.

    2016-01-01

    We used eye-tracking to investigate the roles of enhanced discrimination and peripheral selection in superior visual search in autism spectrum disorder (ASD). Children with ASD were faster at visual search than their typically developing peers. However, group differences in performance and eye-movements did not vary with the level of difficulty of discrimination or selection. Rather, consistent with prior ASD research, group differences were mainly the effect of faster performance on target-absent trials. Eye-tracking revealed a lack of left-visual-field search asymmetry in ASD, which may confer an additional advantage when the target is absent. Lastly, ASD symptomatology was positively associated with search superiority, the mechanisms of which may shed light on the atypical brain organization that underlies social-communicative impairment in ASD. PMID:26762114

  3. A description of discrete internal representation schemes for visual pattern discrimination.

    PubMed

    Foster, D H

    1980-01-01

    A general description of a class of schemes for pattern vision is outlined in which the visual system is assumed to form a discrete internal representation of the stimulus. These representations are discrete in that they are considered to comprise finite combinations of "components" which are selected from a fixed and finite repertoire, and which designate certain simple pattern properties or features. In the proposed description it is supposed that the construction of an internal representation is a probabilistic process. A relationship is then formulated associating the probability density functions governing this construction and performance in visually discriminating patterns when differences in pattern shape are small. Some questions related to the application of this relationship to the experimental investigation of discrete internal representations are briefly discussed.

  4. A preliminary study of right hemisphere cognitive deficits and impaired social judgments among young people with Asperger syndrome.

    PubMed

    Ellis, Hadyn D; Ellis, Diane M; Fraser, William; Deb, Shoumitro

    1994-10-01

    Seven children and young adults with definite signs of Asperger syndrome were administered a battery of tests designed to assess intelligence, left and right cerebral hemisphere functioning, the ability to discriminate eye gaze, and social judgment. The subjects showed a nonsignificant tendency toward a higher verbal IQ than visual IQ, and their right hemisphere functioning seemed impaired. They were also poorer at discriminating eye gaze and showed difficulties in making hypothetical social judgments. The data are considered with reference to Rourke's (1988) work on non-verbal learning disabilities, together with the ideas of Tantam (1992) on the "social gaze response" and Baron-Cohen's (1993) Eye-Direction Detector model. The possible links between social judgment and theory of mind (Frith, 1991) are briefly explored.

  5. In search of a recognition memory engram

    PubMed Central

    Brown, M.W.; Banks, P.J.

    2015-01-01

    A large body of data from human and animal studies using psychological, recording, imaging, and lesion techniques indicates that recognition memory involves at least two separable processes: familiarity discrimination and recollection. Familiarity discrimination for individual visual stimuli seems to be effected by a system centred on the perirhinal cortex of the temporal lobe. The fundamental change that encodes prior occurrence within the perirhinal cortex is a reduction in the responses of neurones when a stimulus is repeated. Neuronal network modelling indicates that a system based on such a change in responsiveness is potentially highly efficient in information theoretic terms. A review is given of findings indicating that perirhinal cortex acts as a storage site for recognition memory of objects and that such storage depends upon processes producing synaptic weakening. PMID:25280908

  6. Rapid Elemental Analysis and Provenance Study of Blumea balsamifera DC Using Laser-Induced Breakdown Spectroscopy

    PubMed Central

    Liu, Xiaona; Zhang, Qiao; Wu, Zhisheng; Shi, Xinyuan; Zhao, Na; Qiao, Yanjiang

    2015-01-01

    Laser-induced breakdown spectroscopy (LIBS) was applied to perform a rapid elemental analysis and provenance study of Blumea balsamifera DC. Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were implemented to exploit the multivariate nature of the LIBS data. Scores and loadings of computed principal components visually illustrated the differing spectral data. The PLS-DA algorithm showed good classification performance. The PLS-DA model using complete spectra as input variables had similar discrimination performance to using selected spectral lines as input variables. The down-selection of spectral lines was specifically focused on the major elements of B. balsamifera samples. Results indicated that LIBS could be used to rapidly analyze elements and to perform provenance study of B. balsamifera. PMID:25558999

  7. Teaching in an Open Classroom: Informal Checks, Diagnoses, and Learning Strategies for Beginning Reading and Math.

    ERIC Educational Resources Information Center

    Langstaff, Nancy

    This book, intended for use by inservice teachers, preservice teachers, and parents interested in open classrooms, contains three chapters. "Beginning Reading in an Open Classroom" discusses language development, sight vocabulary, visual discrimination, auditory discrimination, directional concepts, small muscle control, and measurement of…

  8. Role of Gamma-Band Synchronization in Priming of Form Discrimination for Multiobject Displays

    ERIC Educational Resources Information Center

    Lu, Hongjing; Morrison, Robert G.; Hummel, John E.; Holyoak, Keith J.

    2006-01-01

    Previous research has shown that synchronized flicker can facilitate detection of a single Kanizsa square. The present study investigated the role of temporally structured priming in discrimination tasks involving perceptual relations between multiple Kanizsa-type figures. Results indicate that visual information presented as temporally structured…

  9. Life Span Changes in Visual Enumeration: The Number Discrimination Task.

    ERIC Educational Resources Information Center

    Trick, Lana M.; And Others

    1996-01-01

    Ninety-eight participants from 5 age groups with mean ages of 6, 8, 10, 22, and 72 years were tested in a series of speeded number discriminations. Found that response time slope as a function of number size decreased with age for numbers in the 1-4 range. (MDM)

  10. Kansas Center for Research in Early Childhood Education Annual Report, FY 1973.

    ERIC Educational Resources Information Center

    Horowitz, Frances D.

    This monograph is a collection of papers describing a series of loosely related studies of visual attention, auditory stimulation, and language discrimination in young infants. Titles include: (1) Infant Attention and Discrimination: Methodological and Substantive Issues; (2) The Addition of Auditory Stimulation (Music) and an Interspersed…

  11. Deep learning of orthographic representations in baboons.

    PubMed

    Hannagan, Thomas; Ziegler, Johannes C; Dufau, Stéphane; Fagot, Joël; Grainger, Jonathan

    2014-01-01

    What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate's ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representations were indeed sensitive to letter combinations as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain all the way from the visual input to the response while allowing researchers to analyze the complex representations that emerge during the learning process.

  12. The Role of Visual Area V4 in the Discrimination of Partially Occluded Shapes

    PubMed Central

    Kosai, Yoshito; El-Shamayleh, Yasmine; Fyall, Amber M.

    2014-01-01

    The primate brain successfully recognizes objects, even when they are partially occluded. To begin to elucidate the neural substrates of this perceptual capacity, we measured the responses of shape-selective neurons in visual area V4 while monkeys discriminated pairs of shapes under varying degrees of occlusion. We found that neuronal shape selectivity always decreased with increasing occlusion level, with some neurons being notably more robust to occlusion than others. The responses of neurons that maintained their selectivity across a wider range of occlusion levels were often sufficiently sensitive to support behavioral performance. Many of these same neurons were distinctively selective for the curvature of local boundary features and their shape tuning was well fit by a model of boundary curvature (curvature-tuned neurons). A significant subset of V4 neurons also signaled the animal's upcoming behavioral choices; these decision signals had short onset latencies that emerged progressively later for higher occlusion levels. The time course of the decision signals in V4 paralleled that of shape selectivity in curvature-tuned neurons: shape selectivity in curvature-tuned neurons, but not others, emerged earlier than the decision signals. These findings provide evidence for the involvement of contour-based mechanisms in the segmentation and recognition of partially occluded objects, consistent with psychophysical theory. Furthermore, they suggest that area V4 participates in the representation of the relevant sensory signals and the generation of decision signals underlying discrimination. PMID:24948811

  13. On the search for an appropriate metric for reaction time to suprathreshold increments and decrements.

    PubMed

    Vassilev, Angel; Murzac, Adrian; Zlatkova, Margarita B; Anderson, Roger S

    2009-03-01

    Weber contrast, DeltaL/L, is a widely used contrast metric for aperiodic stimuli. Zele, Cao & Pokorny [Zele, A. J., Cao, D., & Pokorny, J. (2007). Threshold units: A correct metric for reaction time? Vision Research, 47, 608-611] found that neither Weber contrast nor its transform to detection-threshold units equates human reaction times in response to luminance increments and decrements under selective rod stimulation. Here we show that their rod reaction times are equated when plotted against the spatial luminance ratio between the stimulus and its background (L(max)/L(min), the larger and smaller of background and stimulus luminances). Similarly, reaction times to parafoveal S-cone selective increments and decrements from our previous studies [Murzac, A. (2004). A comparative study of the temporal characteristics of processing of S-cone incremental and decremental signals. PhD thesis, New Bulgarian University, Sofia, Murzac, A., & Vassilev, A. (2004). Reaction time to S-cone increments and decrements. In: 7th European conference on visual perception, Budapest, August 22-26. Perception, 33, 180 (Abstract).], are better described by the spatial luminance ratio than by Weber contrast. We assume that the type of stimulus detection by temporal (successive) luminance discrimination, by spatial (simultaneous) luminance discrimination or by both [Sperling, G., & Sondhi, M. M. (1968). Model for visual luminance discrimination and flicker detection. Journal of the Optical Society of America, 58, 1133-1145.] determines the appropriateness of one or other contrast metric for reaction time.
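The two contrast metrics compared above differ only in how they treat decrements: Weber contrast is signed, whereas the spatial luminance ratio equates a matched increment and decrement. A sketch of the definitions with hypothetical luminance values:

```python
def weber_contrast(L_stim, L_bg):
    """Weber contrast dL/L: signed, so increments and decrements differ."""
    return (L_stim - L_bg) / L_bg

def spatial_luminance_ratio(L_stim, L_bg):
    """Lmax/Lmin: ratio of the larger to the smaller of stimulus and
    background luminance; identical for an increment and the
    corresponding decrement."""
    return max(L_stim, L_bg) / min(L_stim, L_bg)

# A doubling increment (10 -> 20) and a halving decrement (10 -> 5):
print(weber_contrast(20, 10), weber_contrast(5, 10))                    # 1.0 -0.5
print(spatial_luminance_ratio(20, 10), spatial_luminance_ratio(5, 10))  # 2.0 2.0
```

On the ratio metric the two stimuli are equated, matching the finding that reaction times align when plotted against Lmax/Lmin but not against Weber contrast.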

  14. Optimal Audiovisual Integration in the Ventriloquism Effect But Pervasive Deficits in Unisensory Spatial Localization in Amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-01-01

    Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.

  15. Exploring experiential value in online mobile gaming adoption.

    PubMed

    Okazaki, Shintaro

    2008-10-01

    Despite the growing importance of the online mobile gaming industry, little research has been undertaken to explain why consumers engage in this ubiquitous entertainment. This study attempts to develop an instrument to measure experiential value in online mobile gaming adoption. The proposed scale consists of seven first-order factors of experiential value: intrinsic enjoyment, escapism, efficiency, economic value, visual appeal, perceived novelty, and perceived risklessness. The survey obtained 164 usable responses from Japanese college students. The empirical data fit our first-order model well, indicating a high level of reliability as well as convergent and discriminant validity. The single second-order model also shows an acceptable model fit.

  16. Colour vision in ADHD: part 1--testing the retinal dopaminergic hypothesis.

    PubMed

    Kim, Soyeon; Al-Haj, Mohamed; Chen, Samantha; Fuller, Stuart; Jain, Umesh; Carrasco, Marisa; Tannock, Rosemary

    2014-10-24

    We tested the retinal dopaminergic hypothesis, which posits deficient blue color perception in ADHD resulting from hypofunctioning CNS and retinal dopamine, to which blue cones are exquisitely sensitive; purported sex differences in red color perception were also explored. Thirty young adults diagnosed with ADHD and 30 healthy young adults, matched on age and gender, performed a psychophysical task measuring blue and red color saturation and contrast discrimination ability. Visual function measures, such as the Visual Activities Questionnaire (VAQ) and the Farnsworth-Munsell 100 hue test (FMT), were also administered. Females with ADHD were less accurate in discriminating blue and red color saturation relative to controls but did not differ in contrast sensitivity. Female control participants were better at discriminating red saturation than males, but no sex difference was present within the ADHD group. The poorer discrimination of both red and blue color saturation in the female ADHD group may be partly attributable to a hypo-dopaminergic state in the retina, given that color perception (blue-yellow and red-green) is based on input from S-cones (the short-wavelength cone system) early in the visual pathway. The origin of female superiority in red perception may be rooted in sex-specific functional specialization in hunter-gatherer societies. The absence of this sexual dimorphism for red color perception in ADHD females warrants further investigation.

  17. TMS over the right precuneus reduces the bilateral field advantage in visual short term memory capacity.

    PubMed

    Kraft, Antje; Dyrholm, Mads; Kehrer, Stefanie; Kaufmann, Christian; Bruening, Jovita; Kathmann, Norbert; Bundesen, Claus; Irlbacher, Kerstin; Brandt, Stephan A

    2015-01-01

    Several studies have demonstrated a bilateral field advantage (BFA) in early visual attentional processing, that is, enhanced visual processing when stimuli are spread across both visual hemifields. The results are reminiscent of a hemispheric resource model of parallel visual attentional processing, suggesting more attentional resources on an early level of visual processing for bilateral displays [e.g. Sereno AB, Kosslyn SM. Discrimination within and between hemifields: a new constraint on theories of attention. Neuropsychologia 1991;29(7):659-75.]. Several studies have shown that the BFA extends beyond early stages of visual attentional processing, demonstrating that visual short term memory (VSTM) capacity is higher when stimuli are distributed bilaterally rather than unilaterally. Here we examine whether hemisphere-specific resources are also evident on later stages of visual attentional processing. Based on the Theory of Visual Attention (TVA) [Bundesen C. A theory of visual attention. Psychol Rev 1990;97(4):523-47.] we used a whole report paradigm that allows investigating visual attention capacity variability in unilateral and bilateral displays during navigated repetitive transcranial magnetic stimulation (rTMS) of the precuneus region. A robust BFA in VSTM storage capacity was apparent after rTMS over the left precuneus and in the control condition without rTMS. In contrast, the BFA diminished with rTMS over the right precuneus. This finding indicates that the right precuneus plays a causal role in VSTM capacity, particularly in bilateral visual displays. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Visual search among items of different salience: removal of visual attention mimics a lesion in extrastriate area V4.

    PubMed

    Braun, J

    1994-02-01

    In more than one respect, visual search for the most salient item and visual search for the least salient item in a display are different kinds of visual tasks. The present work investigated whether this difference is primarily one of perceptual difficulty, or whether it is more fundamental and relates to visual attention. Display items of different salience were produced by varying either size, contrast, color saturation, or pattern. Perceptual masking was employed and, on average, mask onset was delayed longer in search for the least salient item than in search for the most salient item. As a result, the two types of visual search presented comparable perceptual difficulty, as judged by psychophysical measures of performance, effective stimulus contrast, and stability of decision criterion. To investigate the role of attention in the two types of search, observers attempted to carry out a letter discrimination and a search task concurrently. To discriminate the letters, observers had to direct visual attention at the center of the display and, thus, leave unattended the periphery, which contained the target and distractors of the search task. In this situation, visual search for the least salient item was severely impaired while visual search for the most salient item was only moderately affected, demonstrating a fundamental difference with respect to visual attention. A qualitatively identical pattern of results was encountered by Schiller and Lee (1991), who used similar visual search tasks to assess the effect of a lesion in extrastriate area V4 of the macaque.

  19. Joint Entropy for Space and Spatial Frequency Domains Estimated from Psychometric Functions of Achromatic Discrimination

    PubMed Central

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least-squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviations of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of the space extent entropy by the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies, given that joint entropy decreased substantially under these stimulus conditions when contrast was raised. PMID:24466158
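
    The entropy bookkeeping in this abstract can be sketched numerically. The following is a hypothetical illustration, not code from the paper: the combination rule follows the abstract's wording literally, and the example entropy values are invented.

    ```python
    import math

    # Theoretical minimum of the joint entropy for unidimensional Gabor
    # functions, as quoted in the abstract: 1/(4*pi), i.e. about 0.0796.
    GABOR_JOINT_MINIMUM = 1.0 / (4.0 * math.pi)

    def joint_entropy(space_entropy, sf_entropy):
        """Joint entropy, reading the abstract literally: the square root of
        the spatial-extent entropy multiplied by the spatial-frequency
        entropy, both estimated from the SDs of Gaussian fits to the
        psychometric data."""
        return math.sqrt(space_entropy) * sf_entropy

    def below_theoretical_minimum(joint):
        """Values below the bound would suggest non-linear interactions
        between visual mechanisms, per the abstract's interpretation."""
        return joint < GABOR_JOINT_MINIMUM

    # Illustrative (made-up) entropy estimates for one contrast condition:
    j = joint_entropy(0.04, 0.3)   # about 0.06, below 1/(4*pi)
    ```

    The 1/(4π) bound is the standard space/spatial-frequency uncertainty limit achieved by Gabor functions, which is why the paper uses it as the reference point.
    
    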

  20. Joint entropy for space and spatial frequency domains estimated from psychometric functions of achromatic discrimination.

    PubMed

    Silveira, Vladímir de Aquino; Souza, Givago da Silva; Gomes, Bruno Duarte; Rodrigues, Anderson Raiol; Silveira, Luiz Carlos de Lima

    2014-01-01

    We used psychometric functions to estimate the joint entropy for space discrimination and spatial frequency discrimination. Space discrimination was taken as discrimination of spatial extent. Seven subjects were tested. Gábor functions comprising unidimensional sinusoidal gratings (0.4, 2, and 10 cpd) and bidimensional Gaussian envelopes (1°) were used as reference stimuli. The experiment comprised the comparison between reference and test stimuli that differed in the grating's spatial frequency or the envelope's standard deviation. We tested 21 different envelope standard deviations around the reference standard deviation to study spatial extent discrimination and 19 different grating spatial frequencies around the reference spatial frequency to study spatial frequency discrimination. Two series of psychometric functions were obtained for 2%, 5%, 10%, and 100% stimulus contrast. The psychometric function data points for spatial extent discrimination or spatial frequency discrimination were fitted with Gaussian functions using the least-squares method, and the spatial extent and spatial frequency entropies were estimated from the standard deviations of these Gaussian functions. Then, joint entropy was obtained by multiplying the square root of the space extent entropy by the spatial frequency entropy. We compared our results to the theoretical minimum for unidimensional Gábor functions, 1/4π or 0.0796. At low and intermediate spatial frequencies and high contrasts, joint entropy reached levels below the theoretical minimum, suggesting non-linear interactions between two or more visual mechanisms. We concluded that non-linear interactions of visual pathways, such as the M and P pathways, could explain joint entropy values below the theoretical minimum at low and intermediate spatial frequencies and high contrasts. These non-linear interactions might be at work at intermediate and high contrasts at all spatial frequencies, given that joint entropy decreased substantially under these stimulus conditions when contrast was raised.

  1. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    NASA Astrophysics Data System (ADS)

    Qi, K.; Qingfeng, G.

    2017-12-01

    With the popular use of High-Resolution Satellite (HRS) images, more and more research efforts have been placed on land-use scene classification. However, the task is difficult with HRS images because of their complex backgrounds and multiple land-cover classes or objects. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, the convolutional neural network is introduced to learn and characterize the local features at different scales. Then, learnt multiscale deep features are explored to generate visual words. The spatial arrangement of visual words is achieved through the introduction of adaptive vector quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact and yet discriminative for efficient representation of land-use scene images, and achieves competitive classification results with state-of-the-art methods.
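
    The spatial-arrangement step, local features quantized into visual words and summarized by correlograms, can be sketched generically. This is a minimal illustration under assumed details (nearest-centroid quantization, Chebyshev distance, plain co-occurrence counts); it does not reproduce the paper's CNN feature learning or adaptive quantization.

    ```python
    import numpy as np

    def quantize(features, codebook):
        """Assign each local feature vector to its nearest visual word
        (nearest centroid in the codebook, squared Euclidean distance)."""
        d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        return d.argmin(axis=1)

    def correlogram(word_map, n_words, dist):
        """Normalized co-occurrence counts of visual-word pairs at a given
        Chebyshev distance `dist` over a 2-D grid of word labels."""
        h, w = word_map.shape
        counts = np.zeros((n_words, n_words))
        for dy in (-dist, 0, dist):
            for dx in (-dist, 0, dist):
                if max(abs(dy), abs(dx)) != dist:
                    continue  # only offsets exactly at distance `dist`
                # Overlapping windows of the grid shifted by (dy, dx):
                a = word_map[max(0, dy):h + min(0, dy),
                             max(0, dx):w + min(0, dx)]
                b = word_map[max(0, -dy):h + min(0, -dy),
                             max(0, -dx):w + min(0, -dx)]
                np.add.at(counts, (a.ravel(), b.ravel()), 1)
        return counts / counts.sum()
    ```

    A stack of such matrices over several distances (and feature scales), flattened, would then serve as the scene descriptor consumed by a classifier.
    
    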

  2. Image statistics underlying natural texture selectivity of neurons in macaque V4

    PubMed Central

    Okazawa, Gouki; Tajima, Satohiro; Komatsu, Hidehiko

    2015-01-01

    Our daily visual experiences are inevitably linked to recognizing the rich variety of textures. However, how the brain encodes and differentiates a plethora of natural textures remains poorly understood. Here, we show that many neurons in macaque V4 selectively encode sparse combinations of higher-order image statistics to represent natural textures. We systematically explored neural selectivity in a high-dimensional texture space by combining texture synthesis and efficient-sampling techniques. This yielded parameterized models for individual texture-selective neurons. The models provided parsimonious but powerful predictors for each neuron’s preferred textures using a sparse combination of image statistics. As a whole population, the neuronal tuning was distributed in a way suitable for categorizing textures and quantitatively predicts human ability to discriminate textures. Together, we suggest that the collective representation of visual image statistics in V4 plays a key role in organizing natural texture perception. PMID:25535362

  3. Vision/Visual Perception: An Annotated Bibliography.

    ERIC Educational Resources Information Center

    Weintraub, Sam, Comp.; Cowan, Robert J., Comp.

    An update and modification of "Vision-Visual Discrimination" published in 1973, this annotated bibliography contains entries from the annual summaries of research in reading published by the International Reading Association (IRA) since then. The first large section, "Vision," is divided into two subgroups: (1) "Visually…

  4. Facial Recognition in a Discus Fish (Cichlidae): Experimental Approach Using Digital Models

    PubMed Central

    Satoh, Shun; Tanaka, Hirokazu; Kohda, Masanori

    2016-01-01

    A number of mammals and birds are known to be capable of visually discriminating between familiar and unfamiliar individuals, depending on facial patterns in some species. Many fish also visually recognize other conspecifics individually, and previous studies report that facial color patterns can be an initial signal for individual recognition. For example, a cichlid fish and a damselfish will use individual-specific color patterns that develop only in the facial area. However, it remains to be determined whether the facial area is an especially favorable site for visual signals in fish, and if so why? The monogamous discus fish, Symphysodon aequifasciatus (Cichlidae), is capable of visually distinguishing its pair-partner from other conspecifics. Discus fish have individual-specific coloration patterns on the entire body, including the facial area, frontal head, trunk, and vertical fins. If the facial area is an inherently important site for visual cues, this species will use facial patterns for individual recognition, but otherwise it will use patterns on other body parts as well. We used modified digital models to examine whether discus fish use only facial coloration for individual recognition. Digital models of four different combinations of familiar and unfamiliar fish faces and bodies were displayed in frontal and lateral views. Focal fish frequently performed partner-specific displays towards partner-face models, and did aggressive displays towards models of non-partner’s faces. We conclude that to identify individuals this fish depends not on frontal color patterns but on lateral facial color patterns, although it has unique color patterns on other parts of the body. We discuss the significance of facial coloration for individual recognition in fish compared with birds and mammals. PMID:27191162

  5. Facial Recognition in a Discus Fish (Cichlidae): Experimental Approach Using Digital Models.

    PubMed

    Satoh, Shun; Tanaka, Hirokazu; Kohda, Masanori

    2016-01-01

    A number of mammals and birds are known to be capable of visually discriminating between familiar and unfamiliar individuals, depending on facial patterns in some species. Many fish also visually recognize other conspecifics individually, and previous studies report that facial color patterns can be an initial signal for individual recognition. For example, a cichlid fish and a damselfish will use individual-specific color patterns that develop only in the facial area. However, it remains to be determined whether the facial area is an especially favorable site for visual signals in fish, and if so why? The monogamous discus fish, Symphysodon aequifasciatus (Cichlidae), is capable of visually distinguishing its pair-partner from other conspecifics. Discus fish have individual-specific coloration patterns on the entire body, including the facial area, frontal head, trunk, and vertical fins. If the facial area is an inherently important site for visual cues, this species will use facial patterns for individual recognition, but otherwise it will use patterns on other body parts as well. We used modified digital models to examine whether discus fish use only facial coloration for individual recognition. Digital models of four different combinations of familiar and unfamiliar fish faces and bodies were displayed in frontal and lateral views. Focal fish frequently performed partner-specific displays towards partner-face models, and did aggressive displays towards models of non-partner's faces. We conclude that to identify individuals this fish depends not on frontal color patterns but on lateral facial color patterns, although it has unique color patterns on other parts of the body. We discuss the significance of facial coloration for individual recognition in fish compared with birds and mammals.

  6. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  7. Understanding How to Build Long-Lived Learning Collaborators

    DTIC Science & Technology

    2016-03-16

    discrimination in learning, and dynamic encoding strategies to improve visual encoding for learning via analogical generalization. We showed that spatial concepts can be learned via analogical generalization, and used a 20,000-sketch corpus to examine the tradeoffs involved in visual representation and analogical generalization.

  8. The selective processing of emotional visual stimuli while detecting auditory targets: an ERP analysis.

    PubMed

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2008-09-16

    Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.

  9. Comparing visual search and eye movements in bilinguals and monolinguals

    PubMed Central

    Hout, Michael C.; Walenchok, Stephen C.; Azuma, Tamiko; Goldinger, Stephen D.

    2017-01-01

    Recent research has suggested that bilinguals show advantages over monolinguals in visual search tasks, although these findings have been derived from global behavioral measures of accuracy and response times. In the present study we sought to explore the bilingual advantage by using more sensitive eyetracking techniques across three visual search experiments. These spatially and temporally fine-grained measures allowed us to carefully investigate any nuanced attentional differences between bilinguals and monolinguals. Bilingual and monolingual participants completed visual search tasks that varied in difficulty. The experiments required participants to make careful discriminations in order to detect target Landolt Cs among similar distractors. In Experiment 1, participants performed both feature and conjunction search. In Experiments 2 and 3, participants performed visual search while making different types of speeded discriminations, after either locating the target or mentally updating a constantly changing target. The results across all experiments revealed that bilinguals and monolinguals were equally efficient at guiding attention and generating responses. These findings suggest that the bilingual advantage does not reflect a general benefit in attentional guidance, but could reflect more efficient guidance only under specific task demands. PMID:28508116

  10. Perceptual learning in visual search: fast, enduring, but non-specific.

    PubMed

    Sireteanu, R; Rettenbach, R

    1995-07-01

    Visual search has been suggested as a tool for isolating visual primitives. Elementary "features" were proposed to involve parallel search, while serial search is necessary for items without a "feature" status, or, in some cases, for conjunctions of "features". In this study, we investigated the role of practice in visual search tasks. We found that, under some circumstances, initially serial tasks can become parallel after a few hundred trials. Learning in visual search is far less specific than learning of visual discriminations and hyperacuity, suggesting that it takes place at another level in the central visual pathway, involving different neural circuits.

  11. Attention stabilizes the shared gain of V4 populations

    PubMed Central

    Rabinowitz, Neil C; Goris, Robbe L; Cohen, Marlene; Simoncelli, Eero P

    2015-01-01

    Responses of sensory neurons represent stimulus information, but are also influenced by internal state. For example, when monkeys direct their attention to a visual stimulus, the response gain of specific subsets of neurons in visual cortex changes. Here, we develop a functional model of population activity to investigate the structure of this effect. We fit the model to the spiking activity of bilateral neural populations in area V4, recorded while the animal performed a stimulus discrimination task under spatial attention. The model reveals four separate time-varying shared modulatory signals, the dominant two of which each target task-relevant neurons in one hemisphere. In attention-directed conditions, the associated shared modulatory signal decreases in variance. This finding provides an interpretable and parsimonious explanation for previous observations that attention reduces variability and noise correlations of sensory neurons. Finally, the recovered modulatory signals reflect previous reward, and are predictive of subsequent choice behavior. DOI: http://dx.doi.org/10.7554/eLife.08998.001 PMID:26523390

  12. General principles in motion vision: color blindness of object motion depends on pattern velocity in honeybee and goldfish.

    PubMed

    Stojcev, Maja; Radtke, Nils; D'Amaro, Daniele; Dyer, Adrian G; Neumeyer, Christa

    2011-07-01

    Visual systems can undergo striking adaptations to specific visual environments during evolution, but they can also be very "conservative." This seems to be the case in motion vision, which is surprisingly similar in species as distant as honeybee and goldfish. In both visual systems, motion vision measured with the optomotor response is color blind and mediated by one photoreceptor type only. Here, we ask whether this is also the case if the moving stimulus is restricted to a small part of the visual field, and test what influence velocity may have on chromatic motion perception. Honeybees were trained to discriminate between clockwise- and counterclockwise-rotating sector disks. Six types of disk stimuli differing in green receptor contrast were tested using three different rotational velocities. When green receptor contrast was at a minimum, bees were able to discriminate rotation directions with all colored disks at slow velocities of 6 and 12 Hz contrast frequency but not at a relatively high velocity of 24 Hz. In the goldfish experiment, the animals were trained to detect a moving red or blue disk presented in a green surround. Discrimination ability between this stimulus and a homogeneous green background was poor at a high stimulus velocity (7 cm/s) when the M-cone type was not, or was only slightly, modulated. However, discrimination improved at slower stimulus velocities (4 and 2 cm/s). These behavioral results indicate that there is potentially an object motion system in both honeybee and goldfish, which is able to incorporate color information at relatively low velocities but is color blind at higher speeds. We thus propose that both honeybees and goldfish have multiple subsystems of object motion, which include achromatic as well as chromatic processing.

  13. Unsupervised visual discrimination learning of complex stimuli: Accuracy, bias and generalization.

    PubMed

    Montefusco-Siegmund, Rodrigo; Toro, Mauricio; Maldonado, Pedro E; Aylwin, María de la L

    2018-07-01

    Through same-different judgments, we can discriminate an immense variety of stimuli; consequently, such judgments are critical in our everyday interaction with the environment. The quality of the judgments depends on familiarity with the stimuli. A way to improve discrimination is through learning, but to this day we lack direct evidence of how learning shapes same-different judgments with complex stimuli. We studied unsupervised visual discrimination learning in 42 participants as they performed same-different judgments with two types of unfamiliar complex stimuli in the absence of labeling or individuation. Across nine daily training sessions with equiprobable same and different stimulus pairs, participants increased their sensitivity and criterion by reducing errors with both same and different pairs. With practice, there was superior performance for different pairs and a bias toward the "different" response. To evaluate the process underlying this bias, we manipulated the proportion of same and different pairs, which resulted in an additional proportion-induced bias, suggesting that the bias observed with equal proportions was a stimulus-processing bias. Overall, these results suggest that unsupervised discrimination learning occurs through changes in stimulus processing that increase the sensory evidence and/or the precision of working memory. Finally, the acquired discrimination ability was fully transferred to novel exemplars of the practiced stimulus category, in agreement with the acquisition of category-specific perceptual expertise. Copyright © 2018 Elsevier Ltd. All rights reserved.
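
    The sensitivity and criterion measures referred to above are standard signal detection theory quantities. As a generic sketch (not the authors' analysis code), d' and the criterion c can be computed from hit and false-alarm counts, arbitrarily treating "different" as the signal response:

    ```python
    from statistics import NormalDist

    z = NormalDist().inv_cdf  # inverse standard normal CDF

    def dprime_criterion(hits, misses, false_alarms, correct_rejections):
        """Return (d', c) from raw counts. d' indexes sensitivity; the
        criterion c indexes response bias (c = 0 means no bias). Rates of
        exactly 0 or 1 would need a correction before taking z."""
        h = hits / (hits + misses)                             # hit rate
        f = false_alarms / (false_alarms + correct_rejections) # FA rate
        return z(h) - z(f), -0.5 * (z(h) + z(f))

    # Illustrative counts: 100 "different" trials, 100 "same" trials.
    d, c = dprime_criterion(80, 20, 30, 70)
    ```

    Here a negative c corresponds to a liberal bias toward the signal ("different") response, the direction of bias the study reports; in practice a log-linear correction is applied when hit or false-alarm rates hit 0 or 1.
    
    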

  14. Neurons in the pigeon caudolateral nidopallium differentiate Pavlovian conditioned stimuli but not their associated reward value in a sign-tracking paradigm

    PubMed Central

    Kasties, Nils; Starosta, Sarah; Güntürkün, Onur; Stüttgen, Maik C.

    2016-01-01

    Animals exploit visual information to identify objects, form stimulus-reward associations, and prepare appropriate behavioral responses. The nidopallium caudolaterale (NCL), an associative region of the avian endbrain, contains neurons exhibiting prominent response modulation during presentation of reward-predicting visual stimuli, but it is unclear whether neural activity represents valuation signals, stimulus properties, or sensorimotor contingencies. To test the hypothesis that NCL neurons represent stimulus value, we subjected pigeons to a Pavlovian sign-tracking paradigm in which visual cues predicted rewards differing in magnitude (large vs. small) and delay to presentation (short vs. long). Subjects’ strength of conditioned responding to visual cues reliably differentiated between predicted reward types and thus indexed valuation. The majority of NCL neurons discriminated between visual cues, with discriminability peaking shortly after stimulus onset and being maintained at lower levels throughout the stimulus presentation period. However, while some cells’ firing rates correlated with reward value, such neurons were not more frequent than expected by chance. Instead, neurons formed discernible clusters which differed in their preferred visual cue. We propose that this activity pattern constitutes a prerequisite for using visual information in more complex situations, e.g., those requiring value-based choices. PMID:27762287

  15. Abnormalities in the Visual Processing of Viewing Complex Visual Stimuli Amongst Individuals With Body Image Concern.

    PubMed

    Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E

    2016-01-01

    Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.

  16. Effects of attention and laterality on motion and orientation discrimination in deaf signers.

    PubMed

    Bosworth, Rain G; Petrich, Jennifer A F; Dobkins, Karen R

    2013-06-01

    Previous studies have asked whether visual sensitivity and attentional processing in deaf signers are enhanced or altered as a result of their different sensory experiences during development, i.e., auditory deprivation and exposure to a visual language. In particular, deaf and hearing signers have been shown to exhibit a right visual field/left hemisphere advantage for motion processing, while hearing nonsigners do not. To examine whether this finding extends to other aspects of visual processing, we compared deaf signers and hearing nonsigners on motion, form, and brightness discrimination tasks. Secondly, to examine whether hemispheric lateralities are affected by attention, we employed a dual-task paradigm to measure form and motion thresholds under "full" vs. "poor" attention conditions. Deaf signers, but not hearing nonsigners, exhibited a right visual field advantage for motion processing. This effect was also seen for form processing and not for the brightness task. Moreover, no group differences were observed in attentional effects, and the motion and form visual field asymmetries were not modulated by attention, suggesting they occur at early levels of sensory processing. In sum, the results show that processing of motion and form, believed to be mediated by dorsal and ventral visual pathways, respectively, are left-hemisphere dominant in deaf signers. Published by Elsevier Inc.

  17. A new metaphor for projection-based visual analysis and data exploration

    NASA Astrophysics Data System (ADS)

    Schreck, Tobias; Panse, Christian

    2007-01-01

    In many important application domains such as Business and Finance, Process Monitoring, and Security, huge and quickly increasing volumes of complex data are collected. Strong efforts are underway to develop automatic and interactive analysis tools for mining useful information from these data repositories. Many data analysis algorithms require an appropriate definition of similarity (or distance) between data instances to allow meaningful clustering, classification, and retrieval, among other analysis tasks. Projection-based data visualization is highly interesting (a) for visual discrimination analysis of a data set within a given similarity definition, and (b) for comparative analysis of similarity characteristics of a given data set represented by different similarity definitions. We introduce an intuitive and effective novel approach for projection-based similarity visualization for interactive discrimination analysis, data exploration, and visual evaluation of metric space effectiveness. The approach is based on the convex hull metaphor for visually aggregating sets of points in projected space, and it can be used with a variety of different projection techniques. The effectiveness of the approach is demonstrated by application on two well-known data sets. Statistical evidence supporting the validity of the hull metaphor is presented. We advocate the hull-based approach over the standard symbol-based approach to projection visualization, as it allows a more effective perception of similarity relationships and class distribution characteristics.

  18. Empiric determination of corrected visual acuity standards for train crews.

    PubMed

    Schwartz, Steven H; Swanson, William H

    2005-08-01

    Probably the most common visual standard for employment in the transportation industry is best-corrected, high-contrast visual acuity. Because such standards were often established without an empirical link to job performance, it is possible that a job applicant or employee whose visual acuity is below the standard may nonetheless be able to perform the required job activities satisfactorily. For the transportation system that we examined, the train crew is required to visually inspect the length of the train before and during the time it leaves the station. The purpose of the inspection is to determine whether an individual is in a hazardous position with respect to the train. In this article, we determine the extent to which high-contrast visual acuity can predict performance on a simulated task. Performance at discriminating hazardous from safe conditions, as depicted in projected photographic slides, was determined as a function of visual acuity. For different levels of visual acuity, which was varied through the use of optical defocus, a subject was required to label scenes as hazardous or safe. Task performance was highly correlated with visual acuity as measured under conditions normally used for vision screenings (high illumination and high contrast): as acuity decreases, performance at discriminating hazardous from safe scenes worsens. This empirically based methodology can be used to establish a corrected high-contrast visual acuity standard for safety-sensitive work in transportation that is linked to the performance of a job-critical task.

  19. Visual awareness suppression by pre-stimulus brain stimulation; a neural effect.

    PubMed

    Jacobs, Christianne; Goebel, Rainer; Sack, Alexander T

    2012-01-02

    Transcranial magnetic stimulation (TMS) has established the functional relevance of early visual cortex (EVC) for visual awareness with great temporal specificity, non-invasively, in conscious human volunteers. Many studies have found a suppressive effect when TMS was applied over EVC 80-100 ms after the onset of the visual stimulus (post-stimulus TMS time window). Yet a few studies have found task performance to suffer even when TMS was applied before visual stimulus presentation (pre-stimulus TMS time window). This pre-stimulus TMS effect remains controversial, however, and its origin has mainly been ascribed to TMS-induced eye-blink artifacts. Here, we applied chronometric TMS over EVC during the execution of a visual discrimination task, covering an exhaustive range of stimulus-locked TMS time windows from -80 ms pre-stimulus to 300 ms post-stimulus onset. Electrooculographic (EOG) recordings, sham TMS stimulation, and vertex TMS stimulation controlled for different types of non-neural TMS effects. Our findings clearly reveal TMS-induced masking effects for both pre- and post-stimulus time windows, and for both objective visual discrimination performance and subjective visibility. Importantly, all effects remained after post hoc removal of eye-blink trials, suggesting a neural origin for the pre-stimulus TMS suppression effect on visual awareness. We speculate, based on our data, that TMS exerts its pre-stimulus effect by generating a neural state that interacts with subsequent visual input. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. The Symmetry of Visual Fields in Chromatic Discrimination

    ERIC Educational Resources Information Center

    Danilova, M. V.; Mollon, J. D.

    2009-01-01

    Both classical and recent reports suggest a right-hemisphere superiority for color discrimination. Testing highly-trained normal subjects and taking care to eliminate asymmetries from the testing situation, we found no significant differences between left and right hemifields or between upper and lower hemifields. This was the case for both of the…

  1. Long-term memory of color stimuli in the jungle crow (Corvus macrorhynchos).

    PubMed

    Bogale, Bezawork Afework; Sugawara, Satoshi; Sakano, Katsuhisa; Tsuda, Sonoko; Sugita, Shoei

    2012-03-01

    Wild-caught jungle crows (n = 20) were trained to discriminate between color stimuli in a two-alternative discrimination task. Next, crows were tested for long-term memory after 1-, 2-, 3-, 6-, and 10-month retention intervals. This preliminary study showed that jungle crows learn the task and reach a discrimination criterion (80% or more correct choices in two consecutive sessions of ten trials) in a few trials, some even within a single session. Most, if not all, crows successfully remembered the constantly reinforced visual stimulus from training after all retention intervals. These results suggest that jungle crows have a high retention capacity for learned information, making no or very few errors even after a 10-month retention interval. This study is the first to show long-term memory for color stimuli in corvids following brief training, in which memory rather than rehearsal was apparent. Memory of visual color information is vital for the exploitation of biological resources in crows. We suspect that jungle crows could remember the learned color discrimination task even after a much longer retention interval.

  2. Errorless discrimination and picture fading as techniques for teaching sight words to TMR students.

    PubMed

    Walsh, B F; Lamberts, F

    1979-03-01

    The effectiveness of two approaches for teaching beginning sight words to 30 TMR students was compared. In Dorry and Zeaman's picture-fading technique, words are taught through association with pictures that are faded out over a series of trials, while in the Edmark program errorless-discrimination technique, words are taught through shaped sequences of visual and auditory-visual matching-to-sample, with the target word first appearing alone and eventually appearing with orthographically similar words. Students were instructed on two lists of 10 words each, one list in the picture-fading and one in the discrimination method, in a double counter-balanced, repeated-measures design. Covariance analysis on three measures (word identification, word recognition, and picture-word matching) showed highly significant differences between the two methods. Students' performance was better after instruction with the errorless-discrimination method than after instruction with the picture-fading method. The findings on picture fading were interpreted as indicating a possible failure of the shifting of control from picture to printed word that earlier researchers have hypothesized as occurring.

  3. A Perceptuo-Cognitive-Motor Approach to the Special Child.

    ERIC Educational Resources Information Center

    Kornblum, Rena Beth

    A movement therapist reviews ways in which a perceptuo-cognitive approach can help handicapped children in learning and in social adjustment. She identifies specific auditory problems (hearing loss, sound-ground confusion, auditory discrimination, auditory localization, auditory memory, auditory sequencing), visual problems (visual acuity,…

  4. The pieces fit: Constituent structure and global coherence of visual narrative in RSVP.

    PubMed

    Hagmann, Carl Erick; Cohn, Neil

    2016-02-01

    Recent research has shown that comprehension of visual narrative relies on the ordering and timing of sequential images. Here we tested if rapidly presented 6-image long visual sequences could be understood as coherent narratives. Half of the sequences were correctly ordered and half had two of the four internal panels switched. Participants reported whether the sequence was correctly ordered and rated its coherence. Accuracy in detecting a switch increased when panels were presented for 1 s rather than 0.5 s. Doubling the duration of the first panel did not affect results. When two switched panels were further apart, order was discriminated more accurately and coherence ratings were low, revealing that a strong local adjacency effect influenced order and coherence judgments. Switched panels at constituent boundaries or within constituents were most disruptive to order discrimination, indicating that the preservation of constituent structure is critical to visual narrative grammar. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns

    PubMed Central

    Duarte, Fabiola; Lemus, Luis

    2017-01-01

    The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then sought correlations with the subjects' accuracy and reaction times (RTs). We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are: first, subjects exposed to P stimuli are more likely to measure their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406

  6. Seeing without Seeing? Degraded Conscious Vision in a Blindsight Patient.

    PubMed

    Overgaard, Morten; Fehl, Katrin; Mouridsen, Kim; Bergholt, Bo; Cleeremans, Axel

    2008-08-21

    Blindsight patients, whose primary visual cortex is lesioned, exhibit a preserved ability to discriminate visual stimuli presented in their "blind" field, yet report no visual awareness of them. Blindsight is generally studied in experimental investigations of single patients, as very few patients have been given this "diagnosis". In our single case study of patient GR, we ask whether blindsight is best described as unconscious vision, or rather as conscious, yet severely degraded vision. In experiments 1 and 2, we successfully replicate the typical findings of previous studies on blindsight. The third experiment, however, suggests that GR's ability to discriminate amongst visual stimuli does not reflect unconscious vision, but rather degraded, yet conscious vision. As our finding results from a method for obtaining subjective reports that has not previously been used in blindsight studies (but has been validated in studies of healthy subjects and other patients with brain injury), our results call for a reconsideration of blindsight and, arguably, of many previous studies of unconscious perception in healthy subjects.

  7. Visual body perception in anorexia nervosa.

    PubMed

    Urgesi, Cosimo; Fornasari, Livia; Perini, Laura; Canalaz, Francesca; Cremaschi, Silvana; Faleschini, Laura; Balestrieri, Matteo; Fabbro, Franco; Aglioti, Salvatore Maria; Brambilla, Paolo

    2012-05-01

    Disturbance of body perception is a central aspect of anorexia nervosa (AN), and several neuroimaging studies have documented structural and functional alterations of the occipito-temporal cortices involved in visual body processing. However, it is unclear whether these perceptual deficits extend to more basic aspects of the perception of others' bodies. A consecutive sample of 15 adolescent patients with AN was compared with a group of 15 age- and gender-matched controls in delayed matching-to-sample tasks requiring visual discrimination of the form or the action of others' bodies. Patients showed better visual discrimination performance than controls in detail-based processing of body forms but not of body actions, which positively correlated with their increased tendency to convert a signal of punishment into a signal of reinforcement (higher persistence scores). The paradoxical advantage of patients with AN in detail-based body processing may be associated with their tendency to routinely explore body parts as a consequence of their obsessive worries about body appearance. Copyright © 2012 Wiley Periodicals, Inc.

  8. A visual ergonomic evaluation of different screen types and screen technologies with respect to discrimination performance.

    PubMed

    Oetjen, Sophie; Ziefle, Martina

    2009-01-01

    An increasing demand to work with electronic displays and to use mobile computers emphasises the need to compare visual performance across different screen types. In the present study, a cathode ray tube (CRT) was compared to an external liquid crystal display (LCD) and a Notebook-LCD. The influence of screen type and viewing angle on discrimination performance was studied. Physical measurements revealed that luminance and contrast values change with varying viewing angle (anisotropy). This is most pronounced in Notebook-LCDs, followed by external LCDs and CRTs. Performance data showed that LCD anisotropy has a negative impact on completing time-critical visual tasks. The best results were achieved when a CRT was used. The largest deterioration of performance resulted when participants worked with a Notebook-LCD. When it is necessary to react quickly and accurately, LCD screens have disadvantages. The anisotropy of LCD-TFTs is therefore considered a limiting factor that deteriorates visual performance.

  9. Can a self-administered questionnaire identify workers with chronic or recurring low back pain?

    PubMed Central

    TAKEKAWA, Karina Satiko; GONÇALVES, Josiane Sotrate; MORIGUCHI, Cristiane Shinohara; COURY, Helenice Jane Cote Gil; SATO, Tatiana de Oliveira

    2015-01-01

    To verify if the Nordic Musculoskeletal Questionnaire (NMQ), Visual Analogue Scale (VAS), Roland-Morris Disability Questionnaire (RDQ) and physical examination of the lumbar spine can identify workers with chronic or recurring low back pain, using health history for reference. Fifty office workers of both sexes, aged between 19 and 55 yr, were evaluated using a standardized physical examination and the NMQ, VAS and RDQ. Discriminant analysis was performed to determine the discriminant properties of these instruments. A higher success rate (94%) was observed in the model including only the NMQ and in the model including the NMQ and the physical examination. The lowest success rate (82%) was observed in the model including the NMQ, RDQ and VAS. The NMQ was able to detect subjects with chronic or recurring low back pain with 100% sensitivity and 88% specificity. The NMQ appears to be the best instrument for identifying subjects with chronic or recurring low back pain. Thus, this self-reported questionnaire is suitable for screening workers for chronic or recurring low back pain in occupational settings. PMID:25810448

  10. Do you see what I see? The difference between dog and human visual perception may affect the outcome of experiments.

    PubMed

    Pongrácz, Péter; Ujvári, Vera; Faragó, Tamás; Miklósi, Ádám; Péter, András

    2017-07-01

    The visual sense of dogs differs in many respects from that of humans. Unfortunately, authors do not explicitly take dog-human differences in visual perception into consideration when designing their experiments. Using an image-manipulation program, we altered static images according to present knowledge about dog vision. Besides the effect of dogs' dichromatic vision, the software simulates their lower visual acuity and poorer brightness discrimination. Fifty adult humans were tested with pictures showing a female experimenter pointing, gazing or glancing to the left or right side. Half of the pictures were shown after being altered to a setting that approximated dog vision. Participants had difficulty determining the direction of glancing when the pictures were in dog-vision mode. Glances in the dog-vision setting were followed less accurately and with slower response times than other cues. Our results are the first to show the visual performance of humans under circumstances that model how dogs' weaker vision would affect their responses in an ethological experiment. We urge researchers to take the differences between the perceptual abilities of dogs and humans into consideration by developing visual stimuli that fit dogs' visual capabilities more appropriately. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Pigeons' discrimination of paintings by Monet and Picasso

    PubMed Central

    Watanabe, Shigeru; Sakamoto, Junko; Wakita, Masumi

    1995-01-01

    Pigeons successfully learned to discriminate color slides of paintings by Monet and Picasso. Following this training, they discriminated novel paintings by Monet and Picasso that had never been presented during the discrimination training. Furthermore, they showed generalization from Monet's to Cezanne's and Renoir's paintings or from Picasso's to Braque's and Matisse's paintings. These results suggest that pigeons' behavior can be controlled by complex visual stimuli in ways that suggest categorization. Upside-down images of Monet's paintings disrupted the discrimination, whereas inverted images of Picasso's did not. This result may indicate that the pigeons' behavior was controlled by objects depicted in impressionists' paintings but was not controlled by objects in cubists' paintings. PMID:16812755

  12. Design Definition Study Report. Full Crew Interaction Simulator-Laboratory Model (FCIS-LM) (Device X17B7). Volume II. Requirements.

    DTIC Science & Technology

    1978-06-01

    stimulate at least three levels of crew function. At the most complex level, visual cues are used to discriminate the presence or activities of...limited to motion onset cues washed out at subliminal levels... Because of the cues they provide the driver, gunner, and commander, and the dis...motion, i.e., which physiological receptors are affected, how they function, and how they may be stimulated by a simulator motion system. Motion is

  13. Effects of spatial congruency on saccade and visual discrimination performance in a dual-task paradigm.

    PubMed

    Moehler, Tobias; Fiehler, Katja

    2014-12-01

    The present study investigated the coupling of selection-for-perception and selection-for-action during saccadic eye movement planning in three dual-task experiments. We focused on the effects of spatial congruency of saccade target (ST) location and discrimination target (DT) location and the time between ST-cue and Go-signal (SOA) on saccadic eye movement performance. In two experiments, participants performed a visual discrimination task at a cued location while programming a saccadic eye movement to a cued location. In the third experiment, the discrimination task was not cued and appeared at a random location. Spatial congruency of ST-location and DT-location resulted in enhanced perceptual performance irrespective of SOA. Perceptual performance in spatially incongruent trials was above chance, but only when the DT-location was cued. Saccade accuracy and precision were also affected by spatial congruency showing superior performance when the ST- and DT-location coincided. Saccade latency was only affected by spatial congruency when the DT-cue was predictive of the ST-location. Moreover, saccades consistently curved away from the incongruent DT-locations. Importantly, the effects of spatial congruency on saccade parameters only occurred when the DT-location was cued; therefore, results from experiments 1 and 2 are due to the endogenous allocation of attention to the DT-location and not caused by the salience of the probe. The SOA affected saccade latency showing decreasing latencies with increasing SOA. In conclusion, our results demonstrate that visuospatial attention can be voluntarily distributed upon spatially distinct perceptual and motor goals in dual-task situations, resulting in a decline of visual discrimination and saccade performance.

  14. Double dissociation of pharmacologically induced deficits in visual recognition and visual discrimination learning

    PubMed Central

    Turchi, Janita; Buffalari, Deanne; Mishkin, Mortimer

    2008-01-01

    Monkeys trained in either one-trial recognition at 8- to 10-min delays or multi-trial discrimination habits with 24-h intertrial intervals received systemic cholinergic and dopaminergic antagonists, scopolamine and haloperidol, respectively, in separate sessions. Recognition memory was impaired markedly by scopolamine but not at all by haloperidol, whereas habit formation was impaired markedly by haloperidol but only minimally by scopolamine. These differential drug effects point to differences in synaptic modification induced by the two neuromodulators that parallel the contrasting properties of the two types of learning, namely, fast acquisition but weak retention of memories versus slow acquisition but durable retention of habits. PMID:18685146

  16. Contrast discrimination: Second responses reveal the relationship between the mean and variance of visual signals

    PubMed Central

    Solomon, Joshua A.

    2007-01-01

    To explain the relationship between first- and second-response accuracies in a detection experiment, Swets, Tanner, and Birdsall [Swets, J., Tanner, W. P., Jr., & Birdsall, T. G. (1961). Decision processes in perception. Psychological Review, 68, 301–340] proposed that the variance of visual signals increased with their means. However, both a low threshold and intrinsic uncertainty produce similar relationships. I measured the relationship between first- and second-response accuracies for suprathreshold contrast discrimination, which is thought to be unaffected by sensory thresholds and intrinsic uncertainty. The results are consistent with a slowly increasing variance. PMID:17961625

  17. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    NASA Astrophysics Data System (ADS)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

    In this research we build a content-based image retrieval system (CBIRS) based on a learned distance/similarity function using Linear Discriminant Analysis (LDA) and Histogram of Oriented Gradients (HoG) features. Our method is invariant to the depiction of an image, covering image-to-image, sketch-to-image, and painting-to-image similarity. LDA decreases execution time compared to the state-of-the-art method, but it still needs improvement in terms of accuracy. Inaccuracy in our experiment arose because we did not perform a sliding-window search and because of the low number of negative samples from natural-world images.
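
The core idea, learning an LDA projection on precomputed feature vectors and ranking a database by distance in the projected space, can be sketched compactly. The following is a hypothetical, numpy-only illustration, not the authors' pipeline: HoG extraction is assumed to happen elsewhere, and the synthetic features, class labels, and helper names are illustrative assumptions.

```python
# Hypothetical sketch: fit a Fisher-LDA projection on labeled
# "depiction" feature vectors, then rank database items by Euclidean
# distance to a query in the projected (discriminative) space.
import numpy as np

def fit_lda(X, y, eps=1e-6):
    """Return projection W maximizing between-/within-class scatter."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Generalized eigenproblem Sw^-1 Sb; regularize Sw for stability.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + eps * np.eye(d), Sb))
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[: len(classes) - 1]]

def retrieve(query, database, W):
    """Database indices sorted by distance to query in LDA space."""
    q = query @ W
    D = database @ W
    return np.argsort(np.linalg.norm(D - q, axis=1))

rng = np.random.default_rng(0)
# Two synthetic classes standing in for HoG vectors of two depictions.
X = np.vstack([rng.normal(0, 1, (20, 6)) + 3, rng.normal(0, 1, (20, 6))])
y = np.array([0] * 20 + [1] * 20)
W = fit_lda(X, y)
ranking = retrieve(X[0], X, W)   # nearest items come first
```

Ranking in the low-dimensional LDA space rather than the raw feature space is what yields the reported speed-up, since distances are computed over far fewer dimensions.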

  18. Color-dependent learning in restrained Africanized honey bees.

    PubMed

    Jernigan, C M; Roubik, D W; Wcislo, W T; Riveros, A J

    2014-02-01

    Associative color learning has been demonstrated to be very poor in restrained European honey bees unless the antennae are amputated. Consequently, our understanding of the proximate mechanisms of visual information processing is handicapped. Here we test the learning performance of Africanized honey bees under restrained conditions with visual and olfactory stimulation, using the proboscis extension response (PER) protocol. Restrained individuals were trained to learn an association between a color stimulus and a sugar-water reward. We evaluated performance in 'absolute' learning (a learned association between a stimulus and a reward) and 'discriminant' learning (discrimination between two stimuli). Restrained Africanized honey bees (AHBs) readily learned color associations for both blue and green LED stimuli in absolute and discriminatory learning tasks within seven presentations, but not with violet as the rewarded color. Additionally, 24-h memory improved considerably in the discrimination task compared with absolute association (15-55%). We found that antennal amputation was unnecessary and reduced performance in AHBs. Thus color learning can now be studied using the PER protocol with intact AHBs. This finding opens the way to investigating visual and multimodal learning with the neural techniques commonly used in restrained honey bees.

  19. Nicotine deprivation elevates neural representation of smoking-related cues in object-sensitive visual cortex: a proof of concept study.

    PubMed

    Havermans, Anne; van Schayck, Onno C P; Vuurman, Eric F P M; Riedel, Wim J; van den Hurk, Job

    2017-08-01

    In the current study, we use functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis (MVPA) to investigate whether tobacco addiction biases basic visual processing in favour of smoking-related images. We hypothesize that the neural representation of smoking-related stimuli in the lateral occipital complex (LOC) is elevated after a period of nicotine deprivation compared to a satiated state, but that this is not the case for object categories unrelated to smoking. Current smokers (≥10 cigarettes a day) underwent two fMRI scanning sessions: one after 10 h of nicotine abstinence and the other one after smoking ad libitum. Regional blood oxygenated level-dependent (BOLD) response was measured while participants were presented with 24 blocks of 8 colour-matched pictures of cigarettes, pencils or chairs. The functional data of 10 participants were analysed through a pattern classification approach. In bilateral LOC clusters, the classifier was able to discriminate between patterns of activity elicited by visually similar smoking-related (cigarettes) and neutral objects (pencils) above empirically estimated chance levels only during deprivation (mean = 61.0%, chance (permutations) = 50.0%, p = .01) but not during satiation (mean = 53.5%, chance (permutations) = 49.9%, ns.). For all other stimulus contrasts, there was no difference in discriminability between the deprived and satiated conditions. The discriminability between smoking and non-smoking visual objects was elevated in object-selective brain region LOC after a period of nicotine abstinence. This indicates that attention bias likely affects basic visual object processing.
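The pattern-classification logic behind such MVPA results can be sketched as follows. This is an illustrative toy, not the authors' pipeline: the simulated voxel patterns, the nearest-centroid classifier, and the fold scheme are all assumptions, but it shows how above-chance discriminability is established against an empirically estimated (label-permutation) chance level.

```python
# Toy MVPA sketch: cross-validated classification of simulated voxel
# patterns from two stimulus categories, with chance level estimated
# empirically by shuffling the labels.
import numpy as np

def cv_accuracy(X, y, folds=5):
    """Cross-validated nearest-centroid classification accuracy."""
    idx = np.arange(len(y))
    hits = 0
    for f in range(folds):
        test = idx % folds == f
        train = ~test
        c0 = X[train & (y == 0)].mean(axis=0)   # class centroids
        c1 = X[train & (y == 1)].mean(axis=0)   # from training data
        d0 = np.linalg.norm(X[test] - c0, axis=1)
        d1 = np.linalg.norm(X[test] - c1, axis=1)
        hits += np.sum((d1 < d0) == (y[test] == 1))
    return hits / len(y)

rng = np.random.default_rng(1)
n, voxels = 40, 50
y = np.tile([0, 1], n // 2)
signal = rng.normal(0, 1, voxels)               # category-specific pattern
X = rng.normal(0, 1, (n, voxels)) + 0.5 * np.outer(y, signal)

acc = cv_accuracy(X, y)
# Empirical chance: accuracy distribution under shuffled labels.
perm = [cv_accuracy(X, rng.permutation(y)) for _ in range(200)]
chance = np.mean(perm)
```

Comparing the observed accuracy against the permutation distribution (rather than a nominal 50%) is what makes statements like "above empirically estimated chance levels" in the abstract well defined.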

  20. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    PubMed

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.
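The basic benefit of summation can be illustrated with a simple pooling model. Note the caveat: this toy shows only the linear sqrt(N) gain from averaging independent noisy samples, whereas the study's finding is that spatial and temporal summation combine supralinearly in the real motion pathway; all numbers and the helper name below are illustrative assumptions.

```python
# Illustrative sketch (not the authors' neural model): pooling a noisy
# receptor signal over n_space neighboring points and n_time time steps
# raises signal-to-noise ratio by roughly sqrt(n_space * n_time),
# at the cost of spatial and temporal resolution.
import numpy as np

def snr(signal, noise_sd, n_space, n_time, trials=20000, seed=0):
    """Monte Carlo estimate of SNR after spatiotemporal averaging."""
    rng = np.random.default_rng(seed)
    samples = signal + noise_sd * rng.normal(size=(trials, n_space * n_time))
    pooled = samples.mean(axis=1)
    return pooled.mean() / pooled.std()

base = snr(1.0, 2.0, 1, 1)      # single receptor, single time step
pooled = snr(1.0, 2.0, 4, 4)    # pool over 4 points x 4 time steps
```

Averaging over 16 samples should improve SNR by about a factor of 4, which is why pooling lets dim, noisy scenes cross the detection threshold while blurring fine and fast detail.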

Top