Sample records for low-level visual information

  1. A task-dependent causal role for low-level visual processes in spoken word comprehension.

    PubMed

    Ostarek, Markus; Huettig, Falk

    2017-08-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. A color fusion method of infrared and low-light-level images based on visual perception

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa

    2014-11-01

    The color fusion images can be obtained through the fusion of infrared and low-light-level images, and they contain information from both sources. Such fusion images help observers to understand the multichannel images comprehensively. However, simple fusion may lose target information, because targets in long-distance infrared and low-light-level images are inconspicuous; and if target extraction is applied blindly, the perception of the scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the detection rate of targets, but also retain rich natural information of the scenes.
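
    As a rough illustration of the "simple fusion" baseline that the record above contrasts with, the sketch below maps the low-light-level and infrared images onto different colour channels to build a false-colour composite. The record's target-learning component is not reproduced, and all names and parameter choices here are illustrative.

    ```python
    # Minimal sketch of the "simple fusion" baseline mentioned in the record: map the
    # low-light-level (visible) image and the infrared image into different colour
    # channels to produce a false-colour composite. The record's method goes further,
    # learning typical targets so they are not lost; that part is not reproduced here.
    import numpy as np

    def simple_color_fusion(low_light, infrared):
        """Return an RGB composite from two single-channel images scaled to [0, 1]."""
        ll = low_light.astype(float) / max(low_light.max(), 1e-6)
        ir = infrared.astype(float) / max(infrared.max(), 1e-6)
        rgb = np.stack([ir, ll, 0.5 * (ll + ir)], axis=-1)  # R: infrared, G: low-light, B: mix
        return np.clip(rgb, 0.0, 1.0)
    ```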

  3. Image Feature Types and Their Predictions of Aesthetic Preference and Naturalness

    PubMed Central

    Ibarra, Frank F.; Kardan, Omid; Hunter, MaryCarol R.; Kotabe, Hiroki P.; Meyer, Francisco A. C.; Berman, Marc G.

    2017-01-01

    Previous research has investigated ways to quantify visual information of a scene in terms of a visual processing hierarchy, i.e., making sense of the visual environment by segmentation and integration of elementary sensory input. Guided by this research, studies have developed categories for low-level visual features (e.g., edges, colors) and high-level visual features (scene-level entities that convey semantic information such as objects), and have examined how models of those features predict aesthetic preference and naturalness. For example, in Kardan et al. (2015a), 52 participants provided aesthetic preference and naturalness ratings, which are used in the current study, for 307 images of mixed natural and urban content. Kardan et al. (2015a) then developed a model using low-level features to predict aesthetic preference and naturalness and could do so with high accuracy. What has yet to be explored is the ability of higher-level visual features (e.g., horizon line position relative to viewer, geometry of building distribution relative to visual access) to predict aesthetic preference and naturalness of scenes, and whether higher-level features mediate some of the association between the low-level features and aesthetic preference or naturalness. In this study we investigated these relationships and found that low- and high-level features explain 68.4% of the variance in aesthetic preference ratings and 88.7% of the variance in naturalness ratings. Additionally, several high-level features mediated the relationship between the low-level visual features and aesthetic preference. In a multiple mediation analysis, the high-level feature mediators accounted for over 50% of the variance in predicting aesthetic preference. These results show that high-level visual features play a prominent role in predicting aesthetic preference, but do not completely eliminate the predictive power of the low-level visual features. These strong predictors provide powerful insights for future research relating to landscape and urban design with the aim of maximizing subjective well-being, which could lead to improved health outcomes on a larger scale. PMID:28503158
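
    The mediation result described above can be illustrated with a minimal single-mediator regression decomposition, in which the indirect effect is the product of the low-level-to-mediator and mediator-to-outcome coefficients. This is only a toy sketch with synthetic data; the record's multiple-mediation analysis uses several mediators and bootstrapped confidence intervals, and the feature names below are hypothetical.

    ```python
    # Hedged illustration of a single-mediator analysis in the regression framework:
    # the indirect effect of a low-level feature on preference through one high-level
    # feature is the product a*b. The record's multiple-mediation analysis involves
    # several mediators and bootstrapped confidence intervals, which this toy omits.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    low_level = rng.standard_normal(307)                     # e.g. edge density (hypothetical)
    high_level = 0.6 * low_level + rng.standard_normal(307)  # e.g. visual access (hypothetical)
    preference = 0.5 * high_level + 0.2 * low_level + rng.standard_normal(307)

    a = LinearRegression().fit(low_level.reshape(-1, 1), high_level).coef_[0]
    model_b = LinearRegression().fit(np.column_stack([high_level, low_level]), preference)
    b, c_prime = model_b.coef_          # b: mediator effect; c_prime: direct effect
    print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
    ```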

  4. Low-level information and high-level perception: the case of speech in noise.

    PubMed

    Nahum, Mor; Nelken, Israel; Ahissar, Merav

    2008-05-20

    Auditory information is processed in a fine-to-crude hierarchical scheme, from low-level acoustic information to high-level abstract representations, such as phonological labels. We now ask whether fine acoustic information, which is not retained at high levels, can still be used to extract speech from noise. Previous theories suggested either full availability of low-level information or availability that is limited by task difficulty. We propose a third alternative, based on the Reverse Hierarchy Theory (RHT), originally derived to describe the relations between the processing hierarchy and visual perception. RHT asserts that only the higher levels of the hierarchy are immediately available for perception. Direct access to low-level information requires specific conditions, and can be achieved only at the cost of concurrent comprehension. We tested the predictions of these three views in a series of experiments in which we measured the benefits from utilizing low-level binaural information for speech perception, and compared it to that predicted from a model of the early auditory system. Only auditory RHT could account for the full pattern of the results, suggesting that similar defaults and tradeoffs underlie the relations between hierarchical processing and perception in the visual and auditory modalities.

  5. The contribution of foveal and peripheral visual information to ensemble representation of face race.

    PubMed

    Jung, Wonmo; Bülthoff, Isabelle; Armann, Regine G M

    2017-11-01

    The brain can only attend to a fraction of all the information that is entering the visual system at any given moment. One way of overcoming the so-called bottleneck of selective attention (e.g., J. M. Wolfe, Võ, Evans, & Greene, 2011) is to make use of redundant visual information and extract summarized statistical information of the whole visual scene. Such ensemble representation occurs for low-level features of textures or simple objects, but it has also been reported for complex high-level properties. While the visual system has, for example, been shown to compute summary representations of facial expression, gender, or identity, it is less clear whether perceptual input from all parts of the visual field contributes equally to the ensemble percept. Here we extend the line of ensemble-representation research into the realm of race and look at the possibility that ensemble perception relies on weighting visual information differently depending on its origin from either the fovea or the visual periphery. We find that observers can judge the mean race of a set of faces, similar to judgments of mean emotion from faces and ensemble representations in low-level domains of visual processing. We also find that while peripheral faces seem to be taken into account for the ensemble percept, far more weight is given to stimuli presented foveally than peripherally. Whether this precision weighting of information stems from differences in the accuracy with which the visual system processes information across the visual field or from statistical inferences about the world needs to be determined by further research.

  6. Visual information underpinning skilled anticipation: The effect of blur on a coupled and uncoupled in situ anticipatory response.

    PubMed

    Mann, David L; Abernethy, Bruce; Farrow, Damian

    2010-07-01

    Coupled interceptive actions are understood to be the result of neural processing, and of visual information, that is distinct from that used for uncoupled perceptual responses. To examine the visual information used for action and perception, skilled cricket batters anticipated the direction of balls bowled toward them using a coupled movement (an interceptive action that preserved the natural coupling between perception and action) or an uncoupled (verbal) response, in each of four different visual blur conditions (plano, +1.00, +2.00, +3.00). Coupled responses were found to be better than uncoupled ones, and blurring of vision had different effects in the coupled and uncoupled response conditions. Low levels of visual blur did not affect coupled anticipation, a finding consistent with the comparatively poorer visual information on which online interceptive actions are proposed to rely. In contrast, some evidence was found to suggest that low levels of blur may enhance the uncoupled verbal perception of movement.

  7. Using a familiar risk comparison within a risk ladder to improve risk understanding by low numerates: a study of visual attention.

    PubMed

    Keller, Carmen

    2011-07-01

    Previous experimental research provides evidence that a familiar risk comparison within a risk ladder is understood by low- and high-numerate individuals. It especially helps low numerates to better evaluate risk. In the present study, an eye tracker was used to capture individuals' visual attention to a familiar risk comparison, such as the risk associated with smoking. Two parameters of information processing, efficiency and level, were derived from visual attention. A random sample of participants from the general population (N = 68) interpreted a given risk level with the help of the risk ladder. Numeracy was negatively correlated with overall visual attention on the risk ladder (r_s = -0.28, p = 0.01), indicating that the lower the numeracy, the more time spent looking at the whole risk ladder. Numeracy was positively correlated with the efficiency of processing relevant frequency information (r_s = 0.34, p < 0.001) and relevant textual information (r_s = 0.34, p < 0.001), but not with the efficiency of processing relevant comparative information and numerical information. There was a significant negative correlation between numeracy and the level of processing of relevant comparative risk information (r_s = -0.21, p < 0.01), indicating that low numerates processed the comparative risk information more deeply than the high numerates. There was no correlation between numeracy and perceived risk. These results add to previous experimental research, indicating that the smoking risk comparison was crucial for low numerates to evaluate and understand risk. Furthermore, the eye-tracker method is promising for studying information processing and improving risk communication formats. © 2011 Society for Risk Analysis.

  8. Shape Similarity, Better than Semantic Membership, Accounts for the Structure of Visual Object Representations in a Population of Monkey Inferotemporal Neurons

    PubMed Central

    DiCarlo, James J.; Zecchina, Riccardo; Zoccolan, Davide

    2013-01-01

    The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual, information is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm that has been recently developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex. PMID:23950700

  9. Visual cues in low-level flight - Implications for pilotage, training, simulation, and enhanced/synthetic vision systems

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Kaiser, Mary K.; Johnson, Walter W.

    1992-01-01

    This paper reviews some of the sources of visual information that are available in the out-the-window scene and describes how these visual cues are important for routine pilotage and training, as well as the development of simulator visual systems and enhanced or synthetic vision systems for aircraft cockpits. It is shown how these visual cues may change or disappear under environmental or sensor conditions, and how the visual scene can be augmented by advanced displays to capitalize on the pilot's excellent ability to extract visual information from the visual scene.

  10. Dissociation between perceptual processing and priming in long-term lorazepam users.

    PubMed

    Giersch, Anne; Vidailhet, Pierre

    2006-12-01

    Acute effects of lorazepam on visual information processing, perceptual priming and explicit memory are well established. However, visual processing and perceptual priming have rarely been explored in long-term lorazepam users. By exploring these functions it was possible to test the hypothesis that difficulty in processing visual information may lead to deficiencies in perceptual priming. Using a single-blind procedure, we tested explicit memory, perceptual priming and visual perception in 15 long-term lorazepam users and 15 control subjects individually matched according to sex, age and education level. Explicit memory, perceptual priming, and the identification of fragmented pictures were found to be preserved in long-term lorazepam users, contrary to what is usually observed after an acute drug intake. The processing of visual contour, on the other hand, was still significantly impaired. These results suggest that the effects observed on low-level visual perception are independent of the acute deleterious effects of lorazepam on perceptual priming. A comparison of perceptual priming in subjects with low- vs. high-level identification of new fragmented pictures further suggests that the ability to identify fragmented pictures has no influence on priming. Despite the fact that they were treated with relatively low doses and tested far from peak plasma concentration, it is noteworthy that memory was preserved in long-term users.

  11. Early Visual Foraging in Relationship to Familial Risk for Autism and Hyperactivity/Inattention.

    PubMed

    Gliga, Teodora; Smith, Tim J; Likely, Noreen; Charman, Tony; Johnson, Mark H

    2015-12-04

    Information foraging is atypical in both autism spectrum disorders (ASDs) and ADHD; however, while ASD is associated with restricted exploration and preference for sameness, ADHD is characterized by hyperactivity and increased novelty seeking. Here, we ask whether similar biases are present in visual foraging in younger siblings of children with a diagnosis of ASD with or without additional high levels of hyperactivity and inattention. Fifty-four low-risk controls (LR) and 50 high-risk siblings (HR) took part in an eye-tracking study at 8 and 14 months and at 3 years of age. At 8 months, siblings of children with ASD and low levels of hyperactivity/inattention (HR/ASD-HI) were more likely to return to previously visited areas in the visual scene than were LR and siblings of children with ASD and high levels of hyperactivity/inattention (HR/ASD+HI). We show that visual foraging is atypical in infants at risk for ASD. We also reveal a paradoxical effect, in that additional family risk for ADHD core symptoms mitigates the effect of ASD risk on visual information foraging. © The Author(s) 2015.

  12. Decoding the time-course of object recognition in the human brain: From visual features to categorical decisions.

    PubMed

    Contini, Erika W; Wardle, Susan G; Carlson, Thomas A

    2017-10-01

    Visual object recognition is a complex, dynamic process. Multivariate pattern analysis methods, such as decoding, have begun to reveal how the brain processes complex visual information. Recently, temporal decoding methods for EEG and MEG have offered the potential to evaluate the temporal dynamics of object recognition. Here we review the contribution of M/EEG time-series decoding methods to understanding visual object recognition in the human brain. Consistent with the current understanding of the visual processing hierarchy, low-level visual features dominate decodable object representations early in the time-course, with more abstract representations related to object category emerging later. A key finding is that the time-course of object processing is highly dynamic and rapidly evolving, with limited temporal generalisation of decodable information. Several studies have examined the emergence of object category structure, and we consider to what degree category decoding can be explained by sensitivity to low-level visual features. Finally, we evaluate recent work attempting to link human behaviour to the neural time-course of object processing. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF

    PubMed Central

    Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan

    2016-01-01

    With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
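
    As a rough sketch of the bag-of-visual-words pipeline described above (not the authors' implementation), the code below builds separate SIFT and SURF vocabularies with k-means and concatenates the resulting histograms as one way of integrating the two descriptor types. It assumes an OpenCV build that includes the non-free SURF module (opencv-contrib-python); function and parameter choices are illustrative.

    ```python
    # Hedged sketch of a SIFT+SURF bag-of-visual-words representation for CBIR.
    # Assumes opencv-contrib-python with the non-free xfeatures2d module enabled;
    # this illustrates the general idea, not the paper's exact integration scheme.
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def extract_descriptors(gray_images):
        """Collect SIFT and SURF descriptors from a list of grayscale images."""
        sift = cv2.SIFT_create()
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs non-free build
        per_image = []
        for img in gray_images:
            _, d_sift = sift.detectAndCompute(img, None)   # 128-dim descriptors
            _, d_surf = surf.detectAndCompute(img, None)   # 64-dim descriptors
            per_image.append((d_sift, d_surf))
        return per_image

    def build_vocabularies(per_image, k=200):
        """Cluster each descriptor type separately into its own visual-word codebook."""
        sift_all = np.vstack([d for d, _ in per_image if d is not None])
        surf_all = np.vstack([d for _, d in per_image if d is not None])
        km_sift = KMeans(n_clusters=k, n_init=10).fit(sift_all)
        km_surf = KMeans(n_clusters=k, n_init=10).fit(surf_all)
        return km_sift, km_surf

    def encode(d_sift, d_surf, km_sift, km_surf, k=200):
        """Bag-of-visual-words histogram: SIFT words and SURF words concatenated."""
        h = np.zeros(2 * k)
        if d_sift is not None:
            np.add.at(h, km_sift.predict(d_sift), 1)
        if d_surf is not None:
            np.add.at(h, k + km_surf.predict(d_surf), 1)
        return h / max(h.sum(), 1)  # normalise so images of different sizes are comparable
    ```

    Retrieval would then rank database images by histogram similarity (e.g. cosine distance) to the query's histogram.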

  14. Developing visual images for communicating information about antiretroviral side effects to a low-literate population.

    PubMed

    Dowse, Ros; Ramela, Thato; Barford, Kirsty-Lee; Browne, Sara

    2010-09-01

    The side effects of antiretroviral (ARV) therapy are linked to altered quality of life and adherence. Poor adherence has also been associated with low health-literacy skills, with an uninformed patient more likely to make ARV-related decisions that compromise the efficacy of the treatment. Low literacy skills disempower patients in interactions with healthcare providers and preclude the use of existing written patient information materials, which are generally written at a high reading level. Visual images or pictograms used as a counselling tool or included in patient information leaflets have been shown to improve patients' knowledge, particularly in low-literate groups. The objective of this study was to design visuals or pictograms illustrating various ARV side effects and to evaluate them in a low-literate South African Xhosa population. Core images were generated either from a design workshop or from posed photos or images from textbooks. The research team worked closely with a graphic artist. Initial versions of the images were discussed and assessed in group discussions, and then modified and eventually evaluated quantitatively in individual interviews with 40 participants who each had a maximum of 10 years of schooling. The familiarity of the human body, its facial expressions, postures and actions contextualised the information and contributed to the participants' understanding. Visuals that were simple, had a clear central focus and reflected familiar body experiences (e.g. vomiting) were highly successful. The introduction of abstract elements (e.g. fever) and metaphorical images (e.g. nightmares) presented problems for interpretation, particularly to those with the lowest educational levels. We recommend that such visual images should be designed in collaboration with the target population and a graphic artist, taking cognisance of the audience's literacy skills and culture, and should employ a multistage iterative process of modification and evaluation.

  15. Self-Taught Low-Rank Coding for Visual Learning.

    PubMed

    Li, Sheng; Li, Kang; Fu, Yun

    2018-03-01

    The lack of labeled data presents a common challenge in many computer vision and machine learning tasks. Semisupervised learning and transfer learning methods have been developed to tackle this challenge by utilizing auxiliary samples from the same domain or from a different domain, respectively. Self-taught learning, which is a special type of transfer learning, has fewer restrictions on the choice of auxiliary data. It has shown promising performance in visual learning. However, existing self-taught learning methods usually ignore the structure information in data. In this paper, we focus on building a self-taught coding framework, which can effectively utilize the rich low-level pattern information abstracted from the auxiliary domain, in order to characterize the high-level structural information in the target domain. By leveraging a high quality dictionary learned across auxiliary and target domains, the proposed approach learns expressive codings for the samples in the target domain. Since many types of visual data have been proven to contain subspace structures, a low-rank constraint is introduced into the coding objective to better characterize the structure of the given target set. The proposed representation learning framework is called self-taught low-rank (S-Low) coding, which can be formulated as a nonconvex rank-minimization and dictionary learning problem. We devise an efficient majorization-minimization augmented Lagrange multiplier algorithm to solve it. Based on the proposed S-Low coding mechanism, both unsupervised and supervised visual learning algorithms are derived. Extensive experiments on five benchmark data sets demonstrate the effectiveness of our approach.
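
    The low-rank constraint mentioned above is typically enforced through a nuclear-norm penalty, whose proximal step is singular value thresholding. The sketch below shows only that thresholding step on a toy matrix; it is not the S-Low coding algorithm, which couples this with dictionary learning inside a majorization-minimization ALM solver.

    ```python
    # Minimal sketch of the singular-value-thresholding (SVT) step commonly used to
    # enforce a nuclear-norm (low-rank) penalty inside ALM-style solvers.
    # Illustrative only; the S-Low coding objective in the record also involves
    # cross-domain dictionary learning, which is not reproduced here.
    import numpy as np

    def svt(Z, tau):
        """Proximal operator of tau * ||Z||_*: shrink singular values toward zero."""
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        s_shrunk = np.maximum(s - tau, 0.0)
        return U @ np.diag(s_shrunk) @ Vt

    # Toy usage: a noisy low-rank matrix gets pushed back toward low rank.
    rng = np.random.default_rng(0)
    low_rank = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))
    noisy = low_rank + 0.3 * rng.standard_normal((50, 40))
    denoised = svt(noisy, tau=2.0)
    print(np.linalg.matrix_rank(noisy), np.linalg.matrix_rank(denoised, tol=1e-6))
    ```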

  16. Electrophysiological evidence for biased competition in V1 for fear expressions.

    PubMed

    West, Greg L; Anderson, Adam A K; Ferber, Susanne; Pratt, Jay

    2011-11-01

    When multiple stimuli are concurrently displayed in the visual field, they must compete for neural representation at the processing expense of their contemporaries. This biased competition is thought to begin as early as primary visual cortex, and can be driven by salient low-level stimulus features. Stimuli important for an organism's survival, such as facial expressions signaling environmental threat, might be similarly prioritized at this early stage of visual processing. In the present study, we used ERP recordings from striate cortex to examine whether fear expressions can bias the competition for neural representation at the earliest stage of retinotopic visuo-cortical processing when in direct competition with concurrently presented visual information of neutral valence. We found that within 50 msec after stimulus onset, information processing in primary visual cortex is biased in favor of perceptual representations of fear at the expense of competing visual information (Experiment 1). Additional experiments confirmed that the facial display's emotional content rather than low-level features is responsible for this prioritization in V1 (Experiment 2), and that this competition is reliant on a face's upright canonical orientation (Experiment 3). These results suggest that complex stimuli important for an organism's survival can indeed be prioritized at the earliest stage of cortical processing at the expense of competing information, with competition possibly beginning before encoding in V1.

  17. Lateralized hybrid faces: evidence of a valence-specific bias in the processing of implicit emotions.

    PubMed

    Prete, Giulia; Laeng, Bruno; Tommasi, Luca

    2014-01-01

    It is well known that hemispheric asymmetries exist for both the analyses of low-level visual information (such as spatial frequency) and high-level visual information (such as emotional expressions). In this study, we assessed which of the above factors underlies perceptual laterality effects with "hybrid faces": a type of stimulus that allows testing for unaware processing of emotional expressions, when the emotion is displayed in the low-frequency information while an image of the same face with a neutral expression is superimposed on it. Despite hybrid faces being perceived as neutral, the emotional information modulates observers' social judgements. In the present study, participants were asked to assess the friendliness of hybrid faces displayed tachistoscopically, either centrally or laterally to fixation. We found a clear influence of the hidden emotions also with lateral presentations. Happy faces were rated as more friendly and angry faces as less friendly with respect to neutral faces. In general, hybrid faces were evaluated as less friendly when they were presented in the left visual field/right hemisphere than in the right visual field/left hemisphere. The results extend the validity of the valence hypothesis in the specific domain of unaware (subcortical) emotion processing.

  18. Spatially Pooled Contrast Responses Predict Neural and Perceptual Similarity of Naturalistic Image Categories

    PubMed Central

    Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven

    2012-01-01

    The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921
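
    As a simplified stand-in for the pooled contrast statistics described above, the sketch below computes a local contrast map from a smoothed gradient magnitude and summarises its pooled distribution with two numbers. The statistics used in the record are derived from the shape of the pooled contrast distribution and are richer than this mean/spread summary; everything here is illustrative.

    ```python
    # Simplified sketch: compute local contrast with smoothed image gradients and
    # summarise the pooled response distribution. The record's actual statistics
    # are derived from the full pooled contrast distribution; treat this mean/spread
    # summary as a stand-in for the general idea only.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def local_contrast(image, sigma=2.0):
        """Gradient-magnitude contrast after light Gaussian smoothing."""
        smoothed = gaussian_filter(image.astype(float), sigma)
        gy, gx = np.gradient(smoothed)
        return np.hypot(gx, gy)

    def pooled_contrast_stats(image):
        """Pool contrast responses over the whole image into two summary numbers."""
        c = local_contrast(image).ravel()
        return {"mean_contrast": c.mean(), "contrast_spread": c.std()}

    # Images with similar pooled statistics would be predicted to evoke similar
    # ERPs and to be confused behaviourally, per the record above.
    ```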

  19. Conscious Vision Proceeds from Global to Local Content in Goal-Directed Tasks and Spontaneous Vision.

    PubMed

    Campana, Florence; Rebollo, Ignacio; Urai, Anne; Wyart, Valentin; Tallon-Baudry, Catherine

    2016-05-11

    The reverse hierarchy theory (Hochstein and Ahissar, 2002) makes strong, but so far untested, predictions on conscious vision. In this theory, local details encoded in lower-order visual areas are unconsciously processed before being automatically and rapidly combined into global information in higher-order visual areas, where conscious percepts emerge. Contingent on current goals, local details can afterward be consciously retrieved. This model therefore predicts that (1) global information is perceived faster than local details, (2) global information is computed regardless of task demands during early visual processing, and (3) spontaneous vision is dominated by global percepts. We designed novel textured stimuli that are, as opposed to the classic Navon's letters, truly hierarchical (i.e., where global information is solely defined by local information but where local and global orientations can still be manipulated separately). In line with the predictions, observers were systematically faster reporting global than local properties of those stimuli. Second, global information could be decoded from magneto-encephalographic data during early visual processing regardless of task demands. Last, spontaneous subjective reports were dominated by global information and the frequency and speed of spontaneous global perception correlated with the accuracy and speed in the global task. No such correlation was observed for local information. We therefore show that information at different levels of the visual hierarchy is not equally likely to become conscious; rather, conscious percepts emerge preferentially at a global level. We further show that spontaneous reports can be reliable and are tightly linked to objective performance at the global level. Is information encoded at different levels of the visual system (local details in low-level areas vs global shapes in high-level areas) equally likely to become conscious? We designed new hierarchical stimuli and provide the first empirical evidence based on behavioral and MEG data that global information encoded at high levels of the visual hierarchy dominates perception. This result held both in the presence and in the absence of task demands. The preferential emergence of percepts at high levels can account for two properties of conscious vision, namely, the dominance of global percepts and the feeling of visual richness reported independently of the perception of local details. Copyright © 2016 the authors 0270-6474/16/365200-14$15.00/0.

  20. Eagle-eyed visual acuity: an experimental investigation of enhanced perception in autism.

    PubMed

    Ashwin, Emma; Ashwin, Chris; Rhydderch, Danielle; Howells, Jessica; Baron-Cohen, Simon

    2009-01-01

    Anecdotal accounts of sensory hypersensitivity in individuals with autism spectrum conditions (ASC) have been noted since the first reports of the condition. Over time, empirical evidence has supported the notion that those with ASC have superior visual abilities compared with control subjects. However, it remains unclear whether these abilities are specifically the result of differences in sensory thresholds (low-level processing), rather than higher-level cognitive processes. This study investigates visual threshold in n = 15 individuals with ASC and n = 15 individuals without ASC, using a standardized optometric test, the Freiburg Visual Acuity and Contrast Test, to investigate basic low-level visual acuity. Individuals with ASC have significantly better visual acuity (20:7) compared with control subjects (20:13), acuity so superior that it lies in the region reported for birds of prey. The results of this study suggest that inclusion of sensory hypersensitivity in the diagnostic criteria for ASC may be warranted and that basic standardized tests of sensory thresholds may inform causal theories of ASC.

  1. Effect of physical workload and modality of information presentation on pattern recognition and navigation task performance by high-fit young males.

    PubMed

    Zahabi, Maryam; Zhang, Wenjuan; Pankok, Carl; Lau, Mei Ying; Shirley, James; Kaber, David

    2017-11-01

    Many occupations require both physical exertion and cognitive task performance. Knowledge of any interaction between physical demands and modalities of cognitive task information presentation can provide a basis for optimising performance. This study examined the effect of physical exertion and modality of information presentation on pattern recognition and navigation-related information processing. Results indicated that males of equivalent high fitness, between the ages of 18 and 34, rely more on visual cues than on auditory or haptic cues for pattern recognition when exertion level is high. We found that navigation response time was shorter under low and medium exertion levels than under high-intensity exertion. Navigation accuracy was lower under high-level exertion than under medium and low levels. In general, findings indicated that use of the haptic modality for cognitive task cueing decreased accuracy in pattern recognition responses. Practitioner Summary: An examination was conducted on the effect of physical exertion and information presentation modality in pattern recognition and navigation. In occupations requiring information presentation to workers who are simultaneously performing a physical task, the visual modality appears most effective under high-level exertion, while haptic cueing degrades performance.

  2. Position Information Encoded by Population Activity in Hierarchical Visual Areas

    PubMed Central

    Majima, Kei; Horikawa, Tomoyasu

    2017-01-01

    Neurons in high-level visual areas respond to more complex visual features with broader receptive fields (RFs) compared to those in low-level visual areas. Thus, high-level visual areas are generally considered to carry less information regarding the position of seen objects in the visual field. However, larger RFs may not imply loss of position information at the population level. Here, we evaluated how accurately the position of a seen object could be predicted (decoded) from activity patterns in each of six representative visual areas with different RF sizes [V1–V4, lateral occipital complex (LOC), and fusiform face area (FFA)]. We collected functional magnetic resonance imaging (fMRI) responses while human subjects viewed a ball randomly moving in a two-dimensional field. To estimate population RF sizes of individual fMRI voxels, RF models were fitted for individual voxels in each brain area. The voxels in higher visual areas showed larger estimated RFs than those in lower visual areas. Then, the ball’s position in a separate session was predicted by maximum likelihood estimation using the RF models of individual voxels. We also tested a model-free multivoxel regression (support vector regression, SVR) to predict the position. We found that regardless of the difference in RF size, all visual areas showed similar prediction accuracies, especially on the horizontal dimension. Higher areas showed slightly lower accuracies on the vertical dimension, which appears to be attributed to the narrower spatial distributions of the RF centers. The results suggest that much position information is preserved in population activity through the hierarchical visual pathway regardless of RF sizes and is potentially available in later processing for recognition and behavior. PMID:28451634
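
    The model-free analysis mentioned above (support vector regression over multivoxel patterns) can be sketched with scikit-learn as below. Array shapes, the synthetic data, and the choice of a linear kernel are assumptions for illustration, not details taken from the record.

    ```python
    # Hedged sketch of the model-free decoding analysis mentioned in the record:
    # predicting stimulus position from multivoxel activity patterns with support
    # vector regression. Data shapes and preprocessing are illustrative only.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Assumed shapes: X is (n_timepoints, n_voxels) BOLD patterns from one visual
    # area; y_x_train is the ball's horizontal position per training timepoint.
    rng = np.random.default_rng(1)
    X_train, X_test = rng.standard_normal((200, 300)), rng.standard_normal((50, 300))
    y_x_train = rng.uniform(-8, 8, 200)  # degrees of visual angle (hypothetical)

    decoder_x = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
    decoder_x.fit(X_train, y_x_train)
    predicted_x = decoder_x.predict(X_test)
    # A separate decoder would be fit for the vertical dimension, and accuracy
    # compared across areas (V1-V4, LOC, FFA) as in the record.
    ```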

  3. Influences of selective adaptation on perception of audiovisual speech

    PubMed Central

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  4. Visualizing speciation in artificial cichlid fish.

    PubMed

    Clement, Ross

    2006-01-01

    The Cichlid Speciation Project (CSP) is an ALife simulation system for investigating open problems in the speciation of African cichlid fish. The CSP can be used to perform a wide range of experiments that show that speciation is a natural consequence of certain biological systems. A visualization system capable of extracting the history of speciation from low-level trace data and creating a phylogenetic tree has been implemented. Unlike previous approaches, this visualization system presents a concrete trace of speciation, rather than a summary of low-level information from which the viewer can make subjective decisions on how speciation progressed. The phylogenetic trees are a more objective visualization of speciation, and enable automated collection and summarization of the results of experiments. The visualization system is used to create a phylogenetic tree from an experiment that models sympatric speciation.

  5. Visual information processing of faces in body dysmorphic disorder.

    PubMed

    Feusner, Jamie D; Townsend, Jennifer; Bystritsky, Alexander; Bookheimer, Susan

    2007-12-01

    Body dysmorphic disorder (BDD) is a severe psychiatric condition in which individuals are preoccupied with perceived appearance defects. Clinical observation suggests that patients with BDD focus on details of their appearance at the expense of configural elements. This study examines abnormalities in visual information processing in BDD that may underlie clinical symptoms, asking whether patients with BDD have abnormal patterns of brain activation when visually processing others' faces with high, low, or normal spatial frequency information. In a case-control study conducted at a university hospital, twelve right-handed, medication-free subjects with BDD and 13 control subjects matched by age, sex, and educational achievement underwent functional magnetic resonance imaging while performing matching tasks with face stimuli. Stimuli were neutral-expression photographs of others' faces that were unaltered, altered to include only high spatial frequency visual information, or altered to include only low spatial frequency visual information. The main outcome measures were blood oxygen level-dependent functional magnetic resonance imaging signal changes in the BDD and control groups during tasks with each stimulus type. Subjects with BDD showed greater left hemisphere activity relative to controls, particularly in lateral prefrontal cortex and lateral temporal lobe regions, for all face tasks (and dorsal anterior cingulate activity for the low spatial frequency task). Controls recruited left-sided prefrontal and dorsal anterior cingulate activity only for the high spatial frequency task. Subjects with BDD demonstrate fundamental differences from controls in visually processing others' faces. The predominance of left-sided activity for low spatial frequency and normal faces suggests detail encoding and analysis rather than holistic processing, a pattern evident in controls only for high spatial frequency faces. These abnormalities may be associated with apparent perceptual distortions in patients with BDD. The fact that these findings occurred while subjects viewed others' faces suggests differences in visual processing beyond distortions of their own appearance.

  6. The relationship between level of autistic traits and local bias in the context of the McGurk effect

    PubMed Central

    Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio

    2015-01-01

    The McGurk effect is a well-known illustration that demonstrates the influence of visual information on hearing in the context of speech perception. Some studies have reported that individuals with autism spectrum disorder (ASD) display abnormal processing of audio-visual speech integration, while other studies showed contradictory results. Based on the dimensional model of ASD, we administered two analog studies to examine the link between level of autistic traits, as assessed by the Autism Spectrum Quotient (AQ), and the McGurk effect among a sample of university students. In the first experiment, we found that autistic traits correlated negatively with fused (McGurk) responses. Then, we manipulated presentation types of visual stimuli to examine whether the local bias toward visual speech cues modulated individual differences in the McGurk effect. The presentation included four types of visual images, comprising no image, mouth only, mouth and eyes, and full face. The results revealed that global facial information facilitates the influence of visual speech cues on McGurk stimuli. Moreover, individual differences between groups with low and high levels of autistic traits appeared when the full-face visual speech cue with an incongruent voice condition was presented. These results suggest that individual differences in the McGurk effect might be due to a weak ability to process global facial information in individuals with high levels of autistic traits. PMID:26175705

  7. Visual representation of spatiotemporal structure

    NASA Astrophysics Data System (ADS)

    Schill, Kerstin; Zetzsche, Christoph; Brauer, Wilfried; Eisenkolb, A.; Musto, A.

    1998-07-01

    The processing and representation of motion information is addressed from an integrated perspective comprising low-level signal processing properties as well as higher-level cognitive aspects. For the low-level processing of motion information we argue that a fundamental requirement is the existence of a spatio-temporal memory. Its key feature, the provision of an orthogonal relation between external time and its internal representation, is achieved by a mapping of temporal structure into a locally distributed activity distribution accessible in parallel by higher-level processing stages. This leads to a reinterpretation of the classical concept of 'iconic memory' and resolves inconsistencies concerning ultra-short-time processing and visual masking. The spatio-temporal memory is further investigated by experiments on the perception of spatio-temporal patterns. Results on the direction discrimination of motion paths provide evidence that information about direction and location is not processed and represented independently of each other. This suggests a unified representation at an early level, in the sense that motion information is internally available in the form of a spatio-temporal compound. For the higher-level representation we have developed a formal framework for the qualitative description of courses of motion that may occur with moving objects.

  8. Target-present guessing as a function of target prevalence and accumulated information in visual search.

    PubMed

    Peltier, Chad; Becker, Mark W

    2017-05-01

    Target prevalence influences visual search behavior. At low target prevalence, miss rates are high and false alarms are low, while the opposite is true at high prevalence. Several models of search aim to describe search behavior, one of which has been specifically intended to model search at varying prevalence levels. The multiple decision model (Wolfe & Van Wert, Current Biology, 20(2), 121-124, 2010) posits that all searches that end before the observer detects a target result in a target-absent response. However, researchers have found very high false alarms in high-prevalence searches, suggesting that prevalence rates may be used as a source of information to make "educated guesses" after search termination. Here, we further examine the ability of prevalence level and knowledge gained during visual search to influence guessing rates. We manipulate target prevalence and the amount of information that an observer accumulates about a search display prior to making a response, to test whether these sources of evidence are used to inform target-present guess rates. We find that observers use both information about target prevalence rates and information about the proportion of the array inspected prior to making a response, allowing them to make an informed and statistically driven guess about the target's presence.
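
    The idea of an informed, statistically driven guess can be made concrete with a toy posterior calculation: if the target has prevalence p and a fraction f of the display has been inspected without finding it (assuming inspected items are never missed), the probability that a target is nonetheless present is p(1 - f) / (p(1 - f) + (1 - p)). The snippet below evaluates this; it is a simplified illustration, not the multiple decision model or the authors' analysis.

    ```python
    # Simplified illustration of a statistically informed "target-present" guess after
    # terminating search. Assumes inspected items are never missed; this is a toy
    # calculation, not the multiple decision model or the authors' analysis.
    def posterior_target_present(prevalence, fraction_inspected):
        p, f = prevalence, fraction_inspected
        missed_mass = p * (1.0 - f)          # target present but in the uninspected part
        return missed_mass / (missed_mass + (1.0 - p))

    for p in (0.1, 0.5, 0.9):
        print(p, [round(posterior_target_present(p, f), 2) for f in (0.0, 0.5, 0.9)])
    # High prevalence supports "present" guesses even after inspecting most of the display;
    # low prevalence supports "absent" guesses after inspecting only part of it.
    ```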

  9. The role of pulvinar in the transmission of information in the visual hierarchy.

    PubMed

    Cortes, Nelson; van Vreeswijk, Carl

    2012-01-01

    Visual receptive field (RF) attributes in visual cortex of primates have been explained mainly from cortical connections: visual RFs progress from simple to complex through cortico-cortical pathways from lower to higher levels in the visual hierarchy. This feedforward flow of information is paired with top-down processes through the feedback pathway. Although the hierarchical organization explains the spatial properties of RFs, it is unclear how a non-linear transmission of activity through the visual hierarchy can yield smooth contrast response functions at all levels of the hierarchy. Depending on the gain, non-linear transfer functions create either a bimodal response to contrast, or no contrast dependence of the response, in the highest level of the hierarchy. One possible mechanism to regulate this transmission of visual contrast information from low to high levels involves an external component that shortcuts the flow of information through the hierarchy. A candidate for this shortcut is the pulvinar nucleus of the thalamus. To investigate the representation of stimulus contrast, a hierarchical model network of ten cortical areas is examined. In each level of the network, the activity from the previous layer is integrated and then non-linearly transmitted to the next level. The arrangement of interactions creates a gradient from simple to complex RFs of increasing size as one moves from lower to higher cortical levels. The visual input is modeled as a Gaussian random input, whose width codes for the contrast. This input is applied to the first area. The ratio of output activity among different contrast values is analyzed at the last level to assess sensitivity to contrast and contrast-invariant tuning. For a purely cortical system, the output of the last area can be approximately contrast invariant, but the sensitivity to contrast is poor. To account for an alternative visual processing pathway, non-reciprocal connections to and from a parallel pulvinar-like structure of nine areas are coupled to the system. Compared to the pure feedforward model, the cortico-pulvino-cortical output shows much greater sensitivity to contrast and a similar level of contrast invariance of the tuning.

  10. The Role of Pulvinar in the Transmission of Information in the Visual Hierarchy

    PubMed Central

    Cortes, Nelson; van Vreeswijk, Carl

    2012-01-01

    Visual receptive field (RF) attributes in visual cortex of primates have been explained mainly from cortical connections: visual RFs progress from simple to complex through cortico-cortical pathways from lower to higher levels in the visual hierarchy. This feedforward flow of information is paired with top-down processes through the feedback pathway. Although the hierarchical organization explains the spatial properties of RFs, it is unclear how a non-linear transmission of activity through the visual hierarchy can yield smooth contrast response functions at all levels of the hierarchy. Depending on the gain, non-linear transfer functions create either a bimodal response to contrast, or no contrast dependence of the response, in the highest level of the hierarchy. One possible mechanism to regulate this transmission of visual contrast information from low to high levels involves an external component that shortcuts the flow of information through the hierarchy. A candidate for this shortcut is the pulvinar nucleus of the thalamus. To investigate the representation of stimulus contrast, a hierarchical model network of ten cortical areas is examined. In each level of the network, the activity from the previous layer is integrated and then non-linearly transmitted to the next level. The arrangement of interactions creates a gradient from simple to complex RFs of increasing size as one moves from lower to higher cortical levels. The visual input is modeled as a Gaussian random input, whose width codes for the contrast. This input is applied to the first area. The ratio of output activity among different contrast values is analyzed at the last level to assess sensitivity to contrast and contrast-invariant tuning. For a purely cortical system, the output of the last area can be approximately contrast invariant, but the sensitivity to contrast is poor. To account for an alternative visual processing pathway, non-reciprocal connections to and from a parallel pulvinar-like structure of nine areas are coupled to the system. Compared to the pure feedforward model, the cortico-pulvino-cortical output shows much greater sensitivity to contrast and a similar level of contrast invariance of the tuning. PMID:22654750
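
    The argument in the two records above can be caricatured with a toy simulation: repeatedly passing activity through a saturating non-linearity collapses contrast information, whereas a shortcut that re-injects the input at each stage keeps the response graded. The sketch below is a deliberate simplification (a scalar rate per area, arbitrary gain and mixing parameters), not the published ten-area model.

    ```python
    # Cartoon of the records' argument: repeated non-linear (sigmoidal) transmission
    # through a cortical hierarchy saturates contrast information, while an external
    # shortcut that re-injects the input at each stage preserves graded sensitivity.
    # Deliberately simplified toy; gain, threshold, and mixing values are arbitrary.
    import numpy as np

    def sigmoid(x, gain=6.0, theta=0.5):
        return 1.0 / (1.0 + np.exp(-gain * (x - theta)))

    contrasts = np.linspace(0.05, 0.95, 10)

    def feedforward_only(c, n_areas=10):
        r = c
        for _ in range(n_areas):
            r = sigmoid(r)
        return r

    def with_shortcut(c, n_areas=10, mix=0.5):
        r = c
        for _ in range(n_areas):
            r = sigmoid(mix * r + (1 - mix) * c)  # pulvinar-like copy of the input
        return r

    print(np.round(feedforward_only(contrasts), 2))  # near-binary: contrast info lost
    print(np.round(with_shortcut(contrasts), 2))     # more graded across contrasts
    ```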

  11. Visual motherese? Signal-to-noise ratios in toddler-directed television

    PubMed Central

    Wass, Sam V; Smith, Tim J

    2015-01-01

    Younger brains are noisier information processing systems; this means that information for younger individuals has to allow clearer differentiation between those aspects that are required for the processing task in hand (the ‘signal’) and those that are not (the ‘noise’). We compared toddler-directed and adult-directed TV programmes (TotTV/ATV). We examined how low-level visual features (that previous research has suggested influence gaze allocation) relate to semantic information, namely the location of the character speaking in each frame. We show that this relationship differs between TotTV and ATV. First, we conducted receiver operating characteristic (ROC) analyses and found that feature congestion predicted speaking character location in TotTV but not ATV. Second, we used multiple analytical strategies to show that luminance differentials (flicker) predict face location more strongly in TotTV than ATV. Our results suggest that TotTV designers have intuited techniques for controlling toddler attention using low-level visual cues. The implications of these findings for structuring childhood learning experiences away from a screen are discussed. PMID:24702791

  12. Visual motherese? Signal-to-noise ratios in toddler-directed television.

    PubMed

    Wass, Sam V; Smith, Tim J

    2015-01-01

    Younger brains are noisier information processing systems; this means that information for younger individuals has to allow clearer differentiation between those aspects that are required for the processing task in hand (the 'signal') and those that are not (the 'noise'). We compared toddler-directed and adult-directed TV programmes (TotTV/ATV). We examined how low-level visual features (that previous research has suggested influence gaze allocation) relate to semantic information, namely the location of the character speaking in each frame. We show that this relationship differs between TotTV and ATV. First, we conducted receiver operating characteristic (ROC) analyses and found that feature congestion predicted speaking character location in TotTV but not ATV. Second, we used multiple analytical strategies to show that luminance differentials (flicker) predict face location more strongly in TotTV than ATV. Our results suggest that TotTV designers have intuited techniques for controlling toddler attention using low-level visual cues. The implications of these findings for structuring childhood learning experiences away from a screen are discussed. © 2014 The Authors. Developmental Science Published by John Wiley & Sons Ltd.
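
    The ROC analyses described in the two records above amount to asking how well a low-level feature value discriminates regions that contain the speaking character from regions that do not. The snippet below shows that computation on synthetic labels and feature values; the data, the threshold-free AUC summary, and the variable names are illustrative only.

    ```python
    # Hedged sketch of the kind of ROC analysis described above: testing how well a
    # low-level feature (e.g. local feature congestion or flicker) discriminates
    # regions containing the speaking character from those that do not.
    # Synthetic data; this is not the authors' analysis code.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    # 1 = region contains the speaking character's face, 0 = it does not
    contains_speaker = rng.integers(0, 2, size=500)
    # Hypothetical low-level feature value for each region (weakly informative by design)
    feature_value = contains_speaker * 0.5 + rng.standard_normal(500)

    auc = roc_auc_score(contains_speaker, feature_value)
    print(f"AUC = {auc:.2f}")  # ~0.5 means uninformative; higher means the feature predicts the speaker
    ```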

  13. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    PubMed

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
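
    For reference, the textbook low-dimensional case of optimal cue integration, which the record's high-dimensional model departs from, weights each cue by its inverse variance. The snippet below implements that simple Gaussian case; it is not the word-recognition model described above.

    ```python
    # Minimal sketch of optimal (Bayesian) cue integration in the simple Gaussian,
    # low-dimensional case: the combined estimate weights each cue by its reliability
    # (inverse variance). The record's word-recognition model is high-dimensional and
    # makes different predictions; this is only the textbook baseline it contrasts with.
    def combine_gaussian_cues(mu_a, var_a, mu_v, var_v):
        """Inverse-variance weighted fusion of an auditory and a visual estimate."""
        w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
        w_v = 1.0 - w_a
        mu = w_a * mu_a + w_v * mu_v
        var = 1.0 / (1.0 / var_a + 1.0 / var_v)  # fused estimate is always more reliable
        return mu, var

    # Example: a noisy auditory cue (high variance) is dominated by a clearer visual cue.
    print(combine_gaussian_cues(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0))  # ~ (0.8, 0.8)
    ```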

  14. Breaking continuous flash suppression: competing for consciousness on the pre-semantic battlefield

    PubMed Central

    Gayet, Surya; Van der Stigchel, Stefan; Paffen, Chris L. E.

    2014-01-01

    Traditionally, interocular suppression is believed to disrupt high-level (i.e., semantic or conceptual) processing of the suppressed visual input. The development of a new experimental paradigm, breaking continuous flash suppression (b-CFS), has caused a resurgence of studies demonstrating high-level processing of visual information in the absence of visual awareness. In this method the time it takes for interocularly suppressed stimuli to breach the threshold of visibility, is regarded as a measure of access to awareness. The aim of the current review is twofold. First, we provide an overview of the literature using this b-CFS method, while making a distinction between two types of studies: those in which suppression durations are compared between different stimulus classes (such as upright faces versus inverted faces), and those in which suppression durations are compared for stimuli that either match or mismatch concurrently available information (such as a colored target that either matches or mismatches a color retained in working memory). Second, we aim at dissociating high-level processing from low-level (i.e., crude visual) processing of the suppressed stimuli. For this purpose, we include a thorough review of the control conditions that are used in these experiments. Additionally, we provide recommendations for proper control conditions that we deem crucial for disentangling high-level from low-level effects. Based on this review, we argue that crude visual processing suffices for explaining differences in breakthrough times reported using b-CFS. As such, we conclude that there is as yet no reason to assume that interocularly suppressed stimuli receive full semantic analysis. PMID:24904476

  15. RE Data Explorer: Informing Variable Renewable Energy Grid Integration for Low Emission Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, Sarah L

    The RE Data Explorer, developed by the National Renewable Energy Laboratory, is an innovative web-based analysis tool that utilizes geospatial and spatiotemporal renewable energy data to visualize, execute, and support analysis of renewable energy potential under various user-defined scenarios. This analysis can inform high-level prospecting, integrated planning, and policy making to enable low emission development.

  16. Visual and skill effects on soccer passing performance, kinematics, and outcome estimations

    PubMed Central

    Basevitch, Itay; Tenenbaum, Gershon; Land, William M.; Ward, Paul

    2015-01-01

    The role of visual information and action representations in executing a motor task was examined from a mental representations approach. High-skill (n = 20) and low-skill (n = 20) soccer players performed a passing task to two targets at distances of 9.14 and 18.29 m, under three visual conditions: normal, occluded, and distorted vision (i.e., +4.0 corrective lenses, a visual acuity of approximately 6/75) without knowledge of results. Following each pass, participants estimated the relative horizontal distance from the target as the ball crossed the target plane. Kinematic data during each pass were also recorded for the shorter distance. Results revealed that performance on the motor task decreased as a function of visual information and task complexity (i.e., distance from target) regardless of skill level. High-skill players performed significantly better than low-skill players on both the actual passing and estimation tasks, at each target distance and visual condition. In addition, kinematic data indicated that high-skill participants were more consistent and had different kinematic movement patterns than low-skill participants. Findings contribute to the understanding of the underlying mechanisms required for successful performance in a self-paced, discrete and closed motor task. PMID:25784886

  17. A ganglion-cell-based primary image representation method and its contribution to object recognition

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Dai, Zhi-Long; Zuo, Qing-Song

    2016-10-01

    A visual stimulus is represented by the biological visual system at several levels, which are, in order from low to high: photoreceptor cells, ganglion cells (GCs), lateral geniculate nucleus cells and visual cortical neurons. Retinal GCs at the early level need to represent raw data only once, but meet a wide range of diverse requests from different vision-based tasks. This means the information representation at this level is general and not task-specific. Neurobiological findings have attributed this universal adaptation to GCs' receptive field (RF) mechanisms. For the purposes of developing a highly efficient image representation method that can facilitate information processing and interpretation at later stages, here we design a computational model to simulate the GC's non-classical RF. This new image representation method can extract major structural features from raw data, and is consistent with other statistical measures of the image. Based on the new representation, the performance of other state-of-the-art algorithms in contour detection and segmentation can be improved remarkably. This work concludes that applying a sophisticated representation scheme at an early stage is an efficient and promising strategy in visual information processing.
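
    As a point of reference for the representation scheme described above, the classical centre-surround receptive field of a ganglion cell is commonly modelled as a difference of Gaussians. The sketch below shows only this simplified, classical component (the paper's non-classical RF model is more elaborate), with arbitrary filter sizes.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dog_response(image, sigma_center=1.0, sigma_surround=3.0):
            """Classical centre-surround (difference-of-Gaussians) response:
            an excitatory centre minus a broader inhibitory surround."""
            center = gaussian_filter(image.astype(float), sigma_center)
            surround = gaussian_filter(image.astype(float), sigma_surround)
            return center - surround

        # Example: responses peak at luminance edges and vanish in uniform regions.
        img = np.zeros((64, 64))
        img[:, 32:] = 1.0                      # a vertical step edge
        resp = dog_response(img)
        print(resp[0, 28:36].round(2))         # strongest response around the edge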

  18. EEG Topographic Mapping of Visual and Kinesthetic Imagery in Swimmers.

    PubMed

    Wilson, V E; Dikman, Z; Bird, E I; Williams, J M; Harmison, R; Shaw-Thornton, L; Schwartz, G E

    2016-03-01

    This study investigated differences in QEEG measures between kinesthetic and visual imagery of a 100-m swim in 36 elite competitive swimmers. Background information and post-trial checks controlled for the modality of imagery, swimming skill level, preferred imagery style, intensity of image and task equality. Measures of EEG relative magnitude in theta, low (7-9 Hz) and high alpha (8-10 Hz), and low and high beta were taken from 19 scalp sites during baseline, visual, and kinesthetic imagery. QEEG magnitudes in the low alpha band were attenuated from baseline during the visual and kinesthetic conditions, but no changes were seen in any other bands. Swimmers produced more low alpha EEG magnitude during visual versus kinesthetic imagery. This was interpreted as the swimmers having a greater efficiency at producing visual imagery. Participants who reported a strong intensity versus a weaker feeling of the image (kinesthetic) had less low alpha magnitude, i.e., they used more cortical resources, but not for the visual condition. These data suggest that low band (7-9 Hz) alpha distinguishes imagery modalities from baseline, that visual imagery requires fewer cortical resources than kinesthetic imagery, and that intense feelings of swimming require more brain activity than less intense feelings.

  19. The singular nature of auditory and visual scene analysis in autism

    PubMed Central

    Lin, I.-Fan; Shirama, Aya; Kato, Nobumasa

    2017-01-01

    Individuals with autism spectrum disorder often have difficulty acquiring relevant auditory and visual information in daily environments, despite not being diagnosed as hearing impaired or having low vision. Recent psychophysical and neurophysiological studies have shown that autistic individuals have highly specific individual differences at various levels of information processing, including feature extraction, automatic grouping and top-down modulation in auditory and visual scene analysis. Comparison of the characteristics of scene analysis between auditory and visual modalities reveals some essential commonalities, which could provide clues about the underlying neural mechanisms. Further progress in this line of research may suggest effective methods for diagnosing and supporting autistic individuals. This article is part of the themed issue 'Auditory and visual scene analysis'. PMID:28044025

  20. Image understanding systems based on the unifying representation of perceptual and conceptual information and the solution of mid-level and high-level vision problems

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2001-10-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolving ambiguity and uncertainty via feedback, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. A computer vision system based on such principles requires a unifying representation of perceptual and conceptual information. Computer simulation models are built on the basis of graphs/networks. The human brain is found to be able to emulate similar graph/network models. This implies an important paradigm shift in our knowledge about the brain, from neural networks to cortical software. Starting from the primary visual areas, the brain analyzes an image as a graph-type spatial structure. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. The spatial combination of different neighboring features cannot be described as a statistical/integral characteristic of the analyzed region, but uniquely characterizes that region itself. Spatial logic and topology are naturally present in such structures. Mid-level vision processes like clustering, perceptual grouping, multilevel hierarchical compression, and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena like shape from shading and occlusion are results of such analysis. This approach offers the opportunity not only to explain frequently unexplainable results in cognitive science, but also to create intelligent computer vision systems that simulate perceptual processes in both the what and where visual pathways. Such systems can open new horizons for the robotic and computer vision industries.

  1. Beyond the cockpit: The visual world as a flight instrument

    NASA Technical Reports Server (NTRS)

    Johnson, W. W.; Kaiser, M. K.; Foyle, D. C.

    1992-01-01

    The use of cockpit instruments to guide flight control is not always an option (e.g., low level rotorcraft flight). Under such circumstances the pilot must use out-the-window information for control and navigation. Thus it is important to determine the basis of visually guided flight for several reasons: (1) to guide the design and construction of the visual displays used in training simulators; (2) to allow modeling of visibility restrictions brought about by weather, cockpit constraints, or distortions introduced by sensor systems; and (3) to aid in the development of displays that augment the cockpit window scene and are compatible with the pilot's visual extraction of information from the visual scene. The authors are actively pursuing these questions. We have on-going studies using both low-cost, lower fidelity flight simulators, and state-of-the-art helicopter simulation research facilities. Research results will be presented on: (1) the important visual scene information used in altitude and speed control; (2) the utility of monocular, stereo, and hyperstereo cues for the control of flight; (3) perceptual effects due to the differences between normal unaided daylight vision, and that made available by various night vision devices (e.g., light intensifying goggles and infra-red sensor displays); and (4) the utility of advanced contact displays in which instrument information is made part of the visual scene, as on a 'scene linked' head-up display (e.g., displaying altimeter information on a virtual billboard located on the ground).

  2. Finding regions of interest in pathological images: an attentional model approach

    NASA Astrophysics Data System (ADS)

    Gómez, Francisco; Villalón, Julio; Gutierrez, Ricardo; Romero, Eduardo

    2009-02-01

    This paper introduces an automated method for finding diagnostic regions-of-interest (RoIs) in histopathological images. This method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, which consists of a coarse search for tissue structures at a "low zoom" to separate the image into relevant regions. The pathologist's cognitive performance depends on inherent image visual cues (bottom-up information) and on acquired clinical medicine knowledge (top-down mechanisms). Our pathologist's visual attention model integrates these two components. The selected bottom-up information includes local low-level features such as intensity, color, orientation and texture information. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm, inspired by psychological grouping theories. The algorithm parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating a unique index for each of the low-level characteristics inside the region. Relevancy is estimated as a simple average of these indexes. Finally, a binary decision rule defines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually-weighted evaluation criterion, finding a quality gain of 3 dB compared with a classical bottom-up model of attention.
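
    A toy version of the relevancy computation described above might look as follows. The specific cues (mean intensity, colour variability, gradient energy), the min-max normalisation and the threshold value are stand-ins chosen for illustration; the published method learns its oversegmentation parameters from expert segmentations and uses a perceptually-weighted evaluation, neither of which is reproduced here.

        import numpy as np

        def find_rois(image_rgb, labels, threshold=0.6):
            """Score oversegmented regions by averaging normalised low-level cue
            indices and keep those exceeding a threshold (binary decision rule)."""
            gray = image_rgb.mean(axis=2)
            gy, gx = np.gradient(gray)
            edge_energy = np.hypot(gx, gy)                      # crude texture/orientation cue
            regions = np.unique(labels)
            cues = np.array([[gray[labels == r].mean(),         # intensity index
                              image_rgb[labels == r].std(),     # colour variability index
                              edge_energy[labels == r].mean()]  # texture index
                             for r in regions])
            cues = (cues - cues.min(axis=0)) / (np.ptp(cues, axis=0) + 1e-9)
            relevancy = cues.mean(axis=1)                       # simple average of the indices
            return regions[relevancy > threshold]               # binary decision rule

        # Usage (labels from any oversegmentation, e.g. superpixels):
        # rois = find_rois(image, labels)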

  3. Sandwich masking eliminates both visual awareness of faces and face-specific brain activity through a feedforward mechanism.

    PubMed

    Harris, Joseph A; Wu, Chien-Te; Woldorff, Marty G

    2011-06-07

    It is generally agreed that considerable amounts of low-level sensory processing of visual stimuli can occur without conscious awareness. On the other hand, the degree of higher level visual processing that occurs in the absence of awareness is as yet unclear. Here, event-related potential (ERP) measures of brain activity were recorded during a sandwich-masking paradigm, a commonly used approach for attenuating conscious awareness of visual stimulus content. In particular, the present study used a combination of ERP activation contrasts to track both early sensory-processing ERP components and face-specific N170 ERP activations, in trials with versus without awareness. The electrophysiological measures revealed that the sandwich masking abolished the early face-specific N170 neural response (peaking at ~170 ms post-stimulus), an effect that paralleled the abolition of awareness of face versus non-face image content. Furthermore, however, the masking appeared to render a strong attenuation of earlier feedforward visual sensory-processing signals. This early attenuation presumably resulted in insufficient information being fed into the higher level visual system pathways specific to object category processing, thus leading to unawareness of the visual object content. These results support a coupling of visual awareness and neural indices of face processing, while also demonstrating an early low-level mechanism of interference in sandwich masking.

  4. Attentional gain and processing capacity limits predict the propensity to neglect unexpected visual stimuli.

    PubMed

    Papera, Massimiliano; Richards, Anne

    2016-05-01

    Exogenous allocation of attentional resources allows the visual system to encode and maintain representations of stimuli in visual working memory (VWM). However, limits in the processing capacity to allocate resources can prevent unexpected visual stimuli from gaining access to VWM and thereby to consciousness. Using a novel approach to create unbiased stimuli of increasing saliency, we investigated visual processing during a visual search task in individuals who show a high or low propensity to neglect unexpected stimuli. When propensity to inattention is high, ERP recordings show a diminished amplification concomitant with a decrease in theta band power during the N1 latency, followed by a poor target enhancement during the N2 latency. Furthermore, a later modulation in the P3 latency was also found in individuals showing propensity to visual neglect, suggesting that more effort is required for conscious maintenance of visual information in VWM. Effects during early stages of processing (N80 and P1) were also observed, suggesting that sensitivity to contrasts and medium-to-high spatial frequencies may be modulated by low-level saliency (albeit no statistical group differences were found). In accordance with the Global Workspace Model, our data indicate that a lack of resources in low-level processors and visual attention may be responsible for the failure to "ignite" a state of high-level activity spread across several brain areas that is necessary for stimuli to access awareness. These findings may aid in the development of diagnostic tests and interventions to detect or reduce the propensity to visual neglect of unexpected stimuli. © 2016 Society for Psychophysiological Research.

  5. People-oriented Information Visualization Design

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyong; Zhang, Bolun

    2018-04-01

    In the rapidly developing 21st century, with the continuous progress of science and technology, human society has entered the era of information and big data, and lifestyles and aesthetic systems have changed accordingly, so the emerging field of information visualization is increasingly popular. Information visualization design is the process of visualizing all kinds of tedious information data so that information can be absorbed quickly and time costs saved. Along with the development of information visualization, information design has also gained prominence, and emotional, people-oriented design is an indispensable part of information design. This paper probes information visualization design through an emotional analysis of information design, grounded in a people-oriented social context and approached from the perspective of art and design. The discussion is organized around the three levels of emotional information design: the instinct level, the behavior level and the reflective level.

  6. Enhanced and diminished visuo-spatial information processing in autism depends on stimulus complexity.

    PubMed

    Bertone, Armando; Mottron, Laurent; Jelenic, Patricia; Faubert, Jocelyn

    2005-10-01

    Visuo-perceptual processing in autism is characterized by intact or enhanced performance on static spatial tasks and inferior performance on dynamic tasks, suggesting a deficit of dorsal visual stream processing in autism. However, previous findings by Bertone et al. indicate that neuro-integrative mechanisms used to detect complex motion, rather than motion perception per se, may be impaired in autism. We present here the first demonstration of concurrent enhanced and decreased performance in autism on the same visuo-spatial static task, wherein the only factor dichotomizing performance was the neural complexity required to discriminate grating orientation. The ability of persons with autism was found to be superior for identifying the orientation of simple, luminance-defined (or first-order) gratings but inferior for complex, texture-defined (or second-order) gratings. Using a flicker contrast sensitivity task, we demonstrated that this finding is probably not due to abnormal information processing at a sub-cortical level (magnocellular and parvocellular functioning). Together, these findings are interpreted as a clear indication of altered low-level perceptual information processing in autism, and confirm that the deficits and assets observed in autistic visual perception are contingent on the complexity of the neural network required to process a given type of visual stimulus. We suggest that atypical neural connectivity, resulting in enhanced lateral inhibition, may account for both enhanced and decreased low-level information processing in autism.
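
    For readers unfamiliar with the stimulus distinction drawn above, a common way to construct the two grating types is sketched below: the first-order grating modulates luminance directly, while the second-order grating modulates the contrast of a noise carrier, leaving mean luminance flat. The image size, contrast and modulation depth are arbitrary and may differ in detail from the stimuli used by Bertone et al.

        import numpy as np

        def gratings(size=256, cycles=4, contrast=0.5, mod_depth=1.0, seed=0):
            """Return a luminance-defined (first-order) and a texture-defined
            (second-order) grating of the same orientation and spatial frequency,
            both with values in [0, 1] and the same mean luminance."""
            x = np.linspace(0.0, 2.0 * np.pi * cycles, size)
            wave = np.tile(np.sin(x), (size, 1))                     # vertical grating
            first_order = 0.5 + 0.5 * contrast * wave                # luminance modulation
            noise = np.random.default_rng(seed).uniform(-1.0, 1.0, (size, size))
            envelope = (1.0 + mod_depth * wave) / (1.0 + mod_depth)  # contrast envelope
            second_order = 0.5 + 0.5 * contrast * noise * envelope   # contrast modulation
            return first_order, second_order

        first, second = gratings()
        print(first.mean().round(3), second.mean().round(3))   # both ~0.5 mean luminance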

  7. Scene and human face recognition in the central vision of patients with glaucoma

    PubMed Central

    Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole

    2018-01-01

    Primary open-angle glaucoma (POAG) initially affects mainly peripheral vision. Current behavioral studies support the idea that visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry analysis. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test). Patients with both peripheral and central defects and patients with peripheral but no central defect, as well as age-matched controls, participated in the experiment. All participants had to perform two visual tasks where low-contrast stimuli were presented in the central 6° of the visual field. A categorization task of scene images and human face images assessed high-level visual recognition abilities. In contrast, a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images. This is consistent with the abnormal retinal sensitivity as assessed by perimetry. However, the deficit was greater for categorization than detection. Patients without a central defect showed similar performances to the controls concerning the detection and categorization of faces. However, while the detection of scene images was well-maintained, these patients showed a deficit in their categorization. This suggests that the simple loss of peripheral vision could be detrimental to scene recognition, even when the information is displayed in central vision. This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using the Humphrey 24–2 SITA-Standard test. PMID:29481572

  8. The Limits of Shape Recognition following Late Emergence from Blindness.

    PubMed

    McKyton, Ayelet; Ben-Zion, Itay; Doron, Ravid; Zohary, Ehud

    2015-09-21

    Visual object recognition develops during the first years of life. But what if one is deprived of vision during early post-natal development? Shape information is extracted using both low-level cues (e.g., intensity- or color-based contours) and more complex algorithms that are largely based on inference assumptions (e.g., illumination is from above, objects are often partially occluded). Previous studies, testing visual acuity using a 2D shape-identification task (Lea symbols), indicate that contour-based shape recognition can improve with visual experience, even after years of visual deprivation from birth. We hypothesized that this may generalize to other low-level cues (shape, size, and color), but not to mid-level functions (e.g., 3D shape from shading) that might require prior visual knowledge. To that end, we studied a unique group of subjects in Ethiopia that suffered from an early manifestation of dense bilateral cataracts and were surgically treated only years later. Our results suggest that the newly sighted rapidly acquire the ability to recognize an odd element within an array, on the basis of color, size, or shape differences. However, they are generally unable to find the odd shape on the basis of illusory contours, shading, or occlusion relationships. Little recovery of these mid-level functions is seen within 1 year post-operation. We find that visual performance using low-level cues is relatively robust to prolonged deprivation from birth. However, the use of pictorial depth cues to infer 3D structure from the 2D retinal image is highly susceptible to early and prolonged visual deprivation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Fear improves mental rotation of low-spatial-frequency visual representation.

    PubMed

    Borst, Grégoire

    2013-10-01

    Previous studies have demonstrated that the brief presentation of a fearful face improves not only low-level visual processing, such as contrast and orientation sensitivity, but also visuospatial processing. In the present study, we investigated whether fear improves mental rotation efficiency (i.e., the mental rotation rate) because of the effect of fear on the sensitivity of magnocellular neurons. We asked 2 groups of participants to perform a mental rotation task with either low-pass or high-pass filtered 3-dimensional objects. Following the presentation of a fearful face, participants mentally rotated objects faster compared with when a neutral face was presented, but only for low-pass filtered objects. The results suggest that fear improves mental rotation efficiency by increasing sensitivity to motion-related visual information within the magnocellular pathway.

  10. Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer.

    PubMed

    Ashtiani, Matin N; Kheradpisheh, Saeed R; Masquelier, Timothée; Ganjtabesh, Mohammad

    2017-01-01

    The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low frequency information is sufficient for superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level to basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies).
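
    The band-pass manipulation referred to above can be approximated with a hard annular mask on the Fourier spectrum, as in the sketch below; the cut-off values and the assumed pixels-per-degree conversion are illustrative, and published studies typically use smoother (e.g., Butterworth or Gaussian) filters.

        import numpy as np

        def bandpass(image, low_cpd, high_cpd, pixels_per_degree=32):
            """Keep only spatial frequencies between `low_cpd` and `high_cpd`
            (cycles per degree) using a hard annular mask in the Fourier domain."""
            h, w = image.shape
            fy = np.fft.fftfreq(h) * pixels_per_degree     # cycles/degree along rows
            fx = np.fft.fftfreq(w) * pixels_per_degree     # cycles/degree along columns
            radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
            mask = (radius >= low_cpd) & (radius <= high_cpd)
            spectrum = np.fft.fft2(image)
            return np.real(np.fft.ifft2(spectrum * mask))

        # e.g. a low-frequency version for superordinate-level decisions and a
        # high-frequency version for subordinate-level decisions:
        # lsf = bandpass(img, 0.0, 1.0); hsf = bandpass(img, 6.0, 16.0)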

  11. Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer

    PubMed Central

    Ashtiani, Matin N.; Kheradpisheh, Saeed R.; Masquelier, Timothée; Ganjtabesh, Mohammad

    2017-01-01

    The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the “entry” level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low frequency information is sufficient for superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level to basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies). PMID:28790954

  12. Role of somatosensory and vestibular cues in attenuating visually induced human postural sway

    NASA Technical Reports Server (NTRS)

    Peterka, Robert J.; Benolken, Martha S.

    1993-01-01

    The purpose was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal and vestibular loss subjects were nearly identical, implying that (1) normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) vestibular loss subjects did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system 'gain' was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost three times greater than the amplitude of the visual stimulus in normals and vestibular loss subjects. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about four in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied that (1) the vestibular loss subjects did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system 'gain' were not used to compensate for a vestibular deficit, and (2) the threshold for the use of vestibular cues in normals was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.

  13. Modeling global scene factors in attention

    NASA Astrophysics Data System (ADS)

    Torralba, Antonio

    2003-07-01

    Models of visual attention have focused predominantly on bottom-up approaches that ignore structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. © 2003 Optical Society of America.
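
    A drastically simplified version of this idea is sketched below: a coarse global descriptor of the image (here, block-pooled luminance and edge energy rather than the spectral features used in the original model) is mapped by linear regression onto the expected vertical position of a target object class. The pooling grid, the feature choice and the training data (`training_images`, `training_target_rows`) are hypothetical.

        import numpy as np
        from scipy.ndimage import sobel

        def gist_features(image, grid=4):
            """Very coarse global scene descriptor: mean luminance and mean edge
            energy pooled over a grid x grid partition of the image (a crude
            stand-in for the spectral features used in contextual-cueing models)."""
            edges = np.hypot(sobel(image, axis=0), sobel(image, axis=1))
            h, w = image.shape
            feats = []
            for im in (image, edges):
                blocks = im[: h // grid * grid, : w // grid * grid]
                blocks = blocks.reshape(grid, h // grid, grid, w // grid)
                feats.append(blocks.mean(axis=(1, 3)).ravel())
            return np.concatenate(feats)

        # Contextual prior: a linear regression from global features to the expected
        # vertical position of a target class (training data assumed available).
        # X = np.stack([gist_features(im) for im in training_images])
        # y = np.array(training_target_rows)
        # w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
        # predicted_row = np.r_[gist_features(new_image), 1.0] @ w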

  14. Selectivity in Postencoding Connectivity with High-Level Visual Cortex Is Associated with Reward-Motivated Memory

    PubMed Central

    Murty, Vishnu P.; Tompary, Alexa; Adcock, R. Alison

    2017-01-01

    Reward motivation has been demonstrated to enhance declarative memory by facilitating systems-level consolidation. Although high-reward information is often intermixed with lower reward information during an experience, memory for high-value information is prioritized. How is this selectivity achieved? One possibility is that postencoding consolidation processes bias memory strengthening to those representations associated with higher reward. To test this hypothesis, we investigated the influence of differential reward motivation on the selectivity of postencoding markers of systems-level memory consolidation. Human participants encoded intermixed, trial-unique memoranda that were associated with either high or low value during fMRI acquisition. Encoding was interleaved with periods of rest, allowing us to investigate experience-dependent changes in connectivity as they related to later memory. Behaviorally, we found that reward motivation enhanced 24 h associative memory. Analysis of patterns of postencoding connectivity showed that, even though learning trials were intermixed, there was significantly greater connectivity with regions of high-level, category-selective visual cortex associated with high-reward trials. Specifically, increased connectivity of category-selective visual cortex with both the VTA and the anterior hippocampus predicted associative memory for high- but not low-reward memories. Critically, these results were independent of encoding-related connectivity and univariate activity measures. Thus, these findings support a model by which the selective stabilization of memories for salient events is supported by postencoding interactions with sensory cortex associated with reward. SIGNIFICANCE STATEMENT Reward motivation is thought to promote memory by supporting memory consolidation. Yet, little is known about how the brain selects relevant information for subsequent consolidation based on reward. We show that experience-dependent changes in connectivity of both the anterior hippocampus and the VTA with high-level visual cortex selectively predict memory for high-reward memoranda at a 24 h delay. These findings provide evidence for a novel mechanism guiding the consolidation of memories for valuable events, namely, postencoding interactions between neural systems supporting mesolimbic dopamine activation, episodic memory, and perception. PMID:28100737

  15. Language-Mediated Visual Orienting Behavior in Low and High Literates

    PubMed Central

    Huettig, Falk; Singh, Niharika; Mishra, Ramesh Kumar

    2011-01-01

    The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task which resembles every day behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., “magar,” crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., “matar,” peas; a semantic competitor, e.g., “kachuwa,” turtle, and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze toward phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates in contrast only used phonological information when semantic matches between spoken word and visual referent were not present (Experiment 2) but in contrast to high literates these phonologically mediated shifts in eye gaze were not closely time-locked to the speech input. These data provide further evidence that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word–object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts. PMID:22059083

  16. Image Statistics and the Representation of Material Properties in the Visual Cortex

    PubMed Central

    Baumgartner, Elisabeth; Gegenfurtner, Karl R.

    2016-01-01

    We explored perceived material properties (roughness, texturedness, and hardness) with a novel approach that compares perception, image statistics and brain activation, as measured with fMRI. We initially asked participants to rate 84 material images with respect to the above mentioned properties, and then scanned 15 of the participants with fMRI while they viewed the material images. The images were analyzed with a set of image statistics capturing their spatial frequency and texture properties. Linear classifiers were then applied to the image statistics as well as the voxel patterns of visually responsive voxels and early visual areas to discriminate between images with high and low perceptual ratings. Roughness and texturedness could be classified above chance level based on image statistics. Roughness and texturedness could also be classified based on the brain activation patterns in visual cortex, whereas hardness could not. Importantly, the agreement in classification based on image statistics and brain activation was also above chance level. Our results show that information about visual material properties is to a large degree contained in low-level image statistics, and that these image statistics are also partially reflected in brain activity patterns induced by the perception of material images. PMID:27582714
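
    A minimal version of the image-statistics classification analysis might look like the sketch below: each image is reduced to a handful of luminance and spatial-frequency statistics and a linear (logistic-regression) classifier is cross-validated on a median split of the ratings. The particular statistics, the frequency bands and the helper names (`images`, `high_rating`) are illustrative, not the feature set used in the study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def image_statistics(image):
            """A small set of low-level statistics: luminance moments plus the
            mean amplitude in a few spatial-frequency bands."""
            amp = np.abs(np.fft.fftshift(np.fft.fft2(image)))
            h, w = image.shape
            fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(h)),
                                 np.fft.fftshift(np.fft.fftfreq(w)), indexing="ij")
            radius = np.hypot(fy, fx)
            bands = [amp[(radius >= lo) & (radius < hi)].mean()
                     for lo, hi in ((0.0, 0.05), (0.05, 0.15), (0.15, 0.5))]
            return np.array([image.mean(), image.std(), *bands])

        def classify(images, high_rating):
            """images: list of 2-D arrays; high_rating: 0/1 labels from a
            median split of the perceptual ratings (assumed available)."""
            X = np.stack([image_statistics(im) for im in images])
            clf = LogisticRegression(max_iter=1000)
            return cross_val_score(clf, X, high_rating, cv=5).mean()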

  17. Image Statistics and the Representation of Material Properties in the Visual Cortex.

    PubMed

    Baumgartner, Elisabeth; Gegenfurtner, Karl R

    2016-01-01

    We explored perceived material properties (roughness, texturedness, and hardness) with a novel approach that compares perception, image statistics and brain activation, as measured with fMRI. We initially asked participants to rate 84 material images with respect to the above mentioned properties, and then scanned 15 of the participants with fMRI while they viewed the material images. The images were analyzed with a set of image statistics capturing their spatial frequency and texture properties. Linear classifiers were then applied to the image statistics as well as the voxel patterns of visually responsive voxels and early visual areas to discriminate between images with high and low perceptual ratings. Roughness and texturedness could be classified above chance level based on image statistics. Roughness and texturedness could also be classified based on the brain activation patterns in visual cortex, whereas hardness could not. Importantly, the agreement in classification based on image statistics and brain activation was also above chance level. Our results show that information about visual material properties is to a large degree contained in low-level image statistics, and that these image statistics are also partially reflected in brain activity patterns induced by the perception of material images.

  18. Short-term retention of visual information: Evidence in support of feature-based attention as an underlying mechanism.

    PubMed

    Sneve, Markus H; Sreenivasan, Kartik K; Alnæs, Dag; Endestad, Tor; Magnussen, Svein

    2015-01-01

    Retention of features in visual short-term memory (VSTM) involves maintenance of sensory traces in early visual cortex. However, the mechanism through which this is accomplished is not known. Here, we formulate specific hypotheses derived from studies on feature-based attention to test the prediction that visual cortex is recruited by attentional mechanisms during VSTM of low-level features. Functional magnetic resonance imaging (fMRI) of human visual areas revealed that neural populations coding for task-irrelevant feature information are suppressed during maintenance of detailed spatial frequency memory representations. The narrow spectral extent of this suppression agrees well with known effects of feature-based attention. Additionally, analyses of effective connectivity during maintenance between retinotopic areas in visual cortex show that the observed highlighting of task-relevant parts of the feature spectrum originates in V4, a visual area strongly connected with higher-level control regions and known to convey top-down influence to earlier visual areas during attentional tasks. In line with this property of V4 during attentional operations, we demonstrate that modulations of earlier visual areas during memory maintenance have behavioral consequences, and that these modulations are a result of influences from V4. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Cognitive abilities predict death during the next 15 years in older Japanese adults.

    PubMed

    Nishita, Yukiko; Tange, Chikako; Tomida, Makiko; Otsuka, Rei; Ando, Fujiko; Shimokata, Hiroshi

    2017-10-01

    The longitudinal relationship between cognitive abilities and subsequent death was investigated among community-dwelling older Japanese adults. Participants (n = 1060; age range 60-79 years) comprised the first-wave participants of the National Institute for Longevity Sciences-Longitudinal Study of Aging. Participants' cognitive abilities were measured at baseline using the Japanese Wechsler Adult Intelligence Scale-Revised Short Form, which includes the following tests: Information (general knowledge), Similarities (logical abstract thinking), Picture Completion (visual perception and long-term visual memory) and Digit Symbol (information processing speed). By each cognitive test score, participants were classified into three groups: the high-level group (≥ the mean + 1SD), the low-level group (≤ the mean - 1SD) and the middle-level group. Data on death and moving during the subsequent 15 years were collected and analyzed using the multiple Cox proportional hazard model adjusted for physical and psychosocial covariates. During the follow-up period, 308 participants (29.06%) had died and 93 participants (8.77%) had moved. In the Similarities test, adjusted hazard ratios (HR) of the low-level group to the high-level group were significant (HR 1.49, 95% CI 1.02-2.17, P = 0.038). Furthermore, in the Digit symbol test, the adjusted HR of the low-level group to the high-level group was significant (HR 1.62, 95% CI 1.03-2.58, P = 0.038). Significant adjusted HR were not observed for the Information or Picture Completion tests. It is suggested that a lower level of logical abstract thinking and slower information processing speed are associated with shorter survival among older Japanese adults. Geriatr Gerontol Int 2017; 17: 1654-1660. © 2016 Japan Geriatrics Society.
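
    For reference, the hazard ratios reported above come from the standard Cox proportional hazards model, written here in LaTeX notation; x collects the group indicators (with the high-level group as reference) and the adjustment covariates, so the HR of the low-level group is the exponentiated coefficient of its indicator.

        h(t \mid x) = h_0(t)\,\exp(\beta^{\top} x),
        \qquad
        \mathrm{HR}_{\text{low vs. high}} = \exp(\beta_{\text{low}})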

  20. Emergence of transformation-tolerant representations of visual objects in rat lateral extrastriate cortex

    PubMed Central

    Tafazoli, Sina; Safaai, Houman; De Franceschi, Gioia; Rosselli, Federica Bianca; Vanzella, Walter; Riggi, Margherita; Buffolo, Federica; Panzeri, Stefano; Zoccolan, Davide

    2017-01-01

    Rodents are emerging as increasingly popular models of visual functions. Yet, evidence that rodent visual cortex is capable of advanced visual processing, such as object recognition, is limited. Here we investigate how neurons located along the progression of extrastriate areas that, in the rat brain, run laterally to primary visual cortex, encode object information. We found a progressive functional specialization of neural responses along these areas, with: (1) a sharp reduction of the amount of low-level, energy-related visual information encoded by neuronal firing; and (2) a substantial increase in the ability of both single neurons and neuronal populations to support discrimination of visual objects under identity-preserving transformations (e.g., position and size changes). These findings strongly argue for the existence of a rat object-processing pathway, and point to the rodents as promising models to dissect the neuronal circuitry underlying transformation-tolerant recognition of visual objects. DOI: http://dx.doi.org/10.7554/eLife.22794.001 PMID:28395730

  1. The emergence of polychronization and feature binding in a spiking neural network model of the primate ventral visual system.

    PubMed

    Eguchi, Akihiro; Isbister, James B; Ahmad, Nasir; Stringer, Simon

    2018-07-01

    We present a hierarchical neural network model, in which subpopulations of neurons develop fixed and regularly repeating temporal chains of spikes (polychronization), which respond specifically to randomized Poisson spike trains representing the input training images. The performance is improved by including top-down and lateral synaptic connections, as well as introducing multiple synaptic contacts between each pair of pre- and postsynaptic neurons, with different synaptic contacts having different axonal delays. Spike-timing-dependent plasticity thus allows the model to select the most effective axonal transmission delay between neurons. Furthermore, neurons representing the binding relationship between low-level and high-level visual features emerge through visually guided learning. This begins to provide a way forward to solving the classic feature binding problem in visual neuroscience and leads to a new hypothesis concerning how information about visual features at every spatial scale may be projected upward through successive neuronal layers. We name this hypothetical upward projection of information the "holographic principle." (PsycINFO Database Record (c) 2018 APA, all rights reserved).
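
    The delay-selection mechanism described above can be illustrated with a minimal additive STDP rule applied to several contacts that share one pre/post neuron pair but have different axonal delays: contacts whose delayed spike arrives shortly before the postsynaptic spike are potentiated, later-arriving ones are depressed. The time constants, learning rates and the single fixed spike pairing are arbitrary simplifications of the full spiking model.

        import numpy as np

        def stdp(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
            """Additive STDP, dt = t_post - t_arrival in ms: potentiate contacts
            whose spike arrives before the postsynaptic spike, depress the others."""
            return a_plus * np.exp(-dt / tau) if dt >= 0 else -a_minus * np.exp(dt / tau)

        delays = np.array([1.0, 5.0, 10.0, 15.0, 20.0])   # ms; one contact per delay
        weights = np.full(delays.size, 0.5)
        t_pre, t_post = 0.0, 11.0                         # a repeatedly presented pairing

        for _ in range(200):
            for k, t_arrival in enumerate(t_pre + delays):
                weights[k] = np.clip(weights[k] + stdp(t_post - t_arrival), 0.0, 1.0)

        # Contacts arriving just before the postsynaptic spike saturate high,
        # those arriving after it are driven to zero.
        print(dict(zip(delays.tolist(), weights.round(2).tolist())))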

  2. Simulated Prosthetic Vision: The Benefits of Computer-Based Object Recognition and Localization.

    PubMed

    Macé, Marc J-M; Guivarch, Valérian; Denis, Grégoire; Jouffrais, Christophe

    2015-07-01

    Clinical trials with blind patients implanted with a visual neuroprosthesis showed that even the simplest tasks were difficult to perform with the limited vision restored with current implants. Simulated prosthetic vision (SPV) is a powerful tool to investigate the putative functions of the upcoming generations of visual neuroprostheses. Recent studies based on SPV showed that several generations of implants will be required before usable vision is restored. However, none of these studies relied on advanced image processing. High-level image processing could significantly reduce the amount of information required to perform visual tasks and help restore visuomotor behaviors, even with current low-resolution implants. In this study, we simulated a prosthetic vision device based on object localization in the scene. We evaluated the usability of this device for object recognition, localization, and reaching. We showed that a very low number of electrodes (e.g., nine) are sufficient to restore visually guided reaching movements with fair timing (10 s) and high accuracy. In addition, performance, both in terms of accuracy and speed, was comparable with 9 and 100 electrodes. Extraction of high level information (object recognition and localization) from video images could drastically enhance the usability of current visual neuroprosthesis. We suggest that this method-that is, localization of targets of interest in the scene-may restore various visuomotor behaviors. This method could prove functional on current low-resolution implants. The main limitation resides in the reliability of the vision algorithms, which are improving rapidly. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  3. Comparing object recognition from binary and bipolar edge images for visual prostheses.

    PubMed

    Jung, Jae-Hyun; Pu, Tian; Peli, Eli

    2016-11-01

    Visual prostheses require an effective representation method due to their limited display conditions, with only 2 or 3 levels of grayscale at low resolution. Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary (black and white) edge images have been used to represent features to convey essential information. However, in scenes with a complex cluttered background, the recognition rate of the binary edge images by human observers is limited and additional information is required. The polarity of edges and cusps (black or white features on a gray background) carries important additional information; the polarity may provide shape-from-shading information missing in the binary edge image. This depth information may be restored by using bipolar edges. We compared object recognition rates from 16 binary edge images and bipolar edge images by 26 subjects to determine the possible impact of bipolar filtering in visual prostheses with 3 or more levels of grayscale. Recognition rates were higher with bipolar edge images and the improvement was significant in scenes with complex backgrounds. The results also suggest that erroneous shape-from-shading interpretation of bipolar edges resulting from pigment rather than boundaries of shape may confound recognition.
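
    One plausible way to generate the two renderings compared above is sketched below: a band-pass (Laplacian-of-Gaussian) response is thresholded on magnitude only for the binary image, while the bipolar image also keeps the sign of the response as black or white features on a mid-gray background. The filter scale and threshold are arbitrary, and the study's own edge-extraction pipeline may differ.

        import numpy as np
        from scipy.ndimage import gaussian_laplace

        def edge_images(image, sigma=2.0, thresh=0.02):
            """Binary vs. bipolar edge renderings for a 2- or 3-level display.
            Binary keeps only edge magnitude (white on black); bipolar also keeps
            the sign of the band-pass response (black/white on mid-gray)."""
            response = gaussian_laplace(image.astype(float), sigma)
            strong = np.abs(response) > thresh
            binary = np.where(strong, 1.0, 0.0)                                # 2 levels
            bipolar = np.where(strong, np.where(response > 0, 1.0, 0.0), 0.5)  # 3 levels
            return binary, bipolar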

  4. Improving Readability of an Evaluation Tool for Low-Income Clients Using Visual Information Processing Theories

    ERIC Educational Resources Information Center

    Townsend, Marilyn S.; Sylva, Kathryn; Martin, Anna; Metz, Diane; Wooten-Swanson, Patti

    2008-01-01

    Literacy is an issue for many low-income audiences. Using visual information processing theories, the goal was improving readability of a food behavior checklist and ultimately improving its ability to accurately capture existing changes in dietary behaviors. Using group interviews, low-income clients (n = 18) evaluated 4 visual styles. The text…

  5. A Task-Dependent Causal Role for Low-Level Visual Processes in Spoken Word Comprehension

    ERIC Educational Resources Information Center

    Ostarek, Markus; Huettig, Falk

    2017-01-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual…

  6. Active vision and image/video understanding with decision structures based on the network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2003-08-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. The human brain is found to be able to emulate knowledge structures in the form of network-symbolic models, which implies an important paradigm shift in our knowledge about the brain, from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes like clustering, perceptual grouping and separation of figure from ground are special kinds of graph/network transformations. They convert the low-level image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on such principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotic and defense industries.

  7. A hexagonal orthogonal-oriented pyramid as a model of image representation in visual cortex

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1989-01-01

    Retinal ganglion cells represent the visual image with a spatial code, in which each cell conveys information about a small region in the image. In contrast, cells of the primary visual cortex use a hybrid space-frequency code in which each cell conveys information about a region that is local in space, spatial frequency, and orientation. A mathematical model for this transformation is described. The hexagonal orthogonal-oriented quadrature pyramid (HOP) transform, which operates on a hexagonal input lattice, uses basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The basis functions, which are generated from seven basic types through a recursive process, form an image code of the pyramid type. The seven basis functions, six bandpass and one low-pass, occupy a point and a hexagon of six nearest neighbors on a hexagonal lattice. The six bandpass basis functions consist of three with even symmetry, and three with odd symmetry. At the lowest level, the inputs are image samples. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing square root of 7 larger than the previous level, so that the number of coefficients is reduced by a factor of seven at each level. In the biological model, the input lattice is the retinal ganglion cell array. The resulting scheme provides a compact, efficient code of the image and generates receptive fields that resemble those of the primary visual cortex.
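
    A small bookkeeping sketch of the sampling scheme described above: at each level one output node serves seven input nodes, six band-pass coefficients are kept per output node, the low-pass coefficients feed the next level, and the lattice spacing grows by the square root of 7. With a hypothetical input of 7^6 samples, the total number of coefficients equals the number of input samples, as expected for an orthogonal (critically sampled) code.

        n_input = 7 ** 6                 # hypothetical hexagonal-lattice sample count
        n = n_input
        total_outputs = 0
        spacing = 1.0
        for level in range(1, 7):
            n //= 7                      # one output node per 7 input nodes
            total_outputs += 6 * n       # 6 band-pass coefficients per output node
            spacing *= 7 ** 0.5          # lattice spacing grows by sqrt(7) per level
            print(f"level {level}: {n:6d} nodes, spacing x{spacing:6.2f}")
        total_outputs += n               # the final low-pass residue
        print("total coefficients:", total_outputs, "of", n_input, "input samples")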

  8. The Effects of Varying Contextual Demands on Age-related Positive Gaze Preferences

    PubMed Central

    Noh, Soo Rim; Isaacowitz, Derek M.

    2015-01-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether one’s full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy–neutral and fearful–neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise, but was present where there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults’ positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. PMID:26030774

  9. The effects of varying contextual demands on age-related positive gaze preferences.

    PubMed

    Noh, Soo Rim; Isaacowitz, Derek M

    2015-06-01

    Despite many studies on the age-related positivity effect and its role in visual attention, discrepancies remain regarding whether full attention is required for age-related differences to emerge. The present study took a new approach to this question by varying the contextual demands of emotion processing. This was done by adding perceptual distractions, such as visual and auditory noise, that could disrupt attentional control. Younger and older participants viewed pairs of happy-neutral and fearful-neutral faces while their eye movements were recorded. Facial stimuli were shown either without noise, embedded in a background of visual noise (low, medium, or high), or with simultaneous auditory babble. Older adults showed positive gaze preferences, looking toward happy faces and away from fearful faces; however, their gaze preferences tended to be influenced by the level of visual noise. Specifically, the tendency to look away from fearful faces was not present in conditions with low and medium levels of visual noise but was present when there were high levels of visual noise. It is important to note, however, that in the high-visual-noise condition, external cues were present to facilitate the processing of emotional information. In addition, older adults' positive gaze preferences disappeared or were reduced when they first viewed emotional faces within a distracting context. The current results indicate that positive gaze preferences may be less likely to occur in distracting contexts that disrupt control of visual attention. (c) 2015 APA, all rights reserved.

  10. Role of somatosensory and vestibular cues in attenuating visually induced human postural sway

    NASA Technical Reports Server (NTRS)

    Peterka, R. J.; Benolken, M. S.

    1995-01-01

    The purpose of this study was to determine the contribution of visual, vestibular, and somatosensory cues to the maintenance of stance in humans. Postural sway was induced by full-field, sinusoidal visual surround rotations about an axis at the level of the ankle joints. The influences of vestibular and somatosensory cues were characterized by comparing postural sway in normal and bilateral vestibular absent subjects in conditions that provided either accurate or inaccurate somatosensory orientation information. In normal subjects, the amplitude of visually induced sway reached a saturation level as stimulus amplitude increased. The saturation amplitude decreased with increasing stimulus frequency. No saturation phenomena were observed in subjects with vestibular loss, implying that vestibular cues were responsible for the saturation phenomenon. For visually induced sways below the saturation level, the stimulus-response curves for both normal subjects and subjects experiencing vestibular loss were nearly identical, implying (1) that normal subjects were not using vestibular information to attenuate their visually induced sway, possibly because sway was below a vestibular-related threshold level, and (2) that subjects with vestibular loss did not utilize visual cues to a greater extent than normal subjects; that is, a fundamental change in visual system "gain" was not used to compensate for a vestibular deficit. An unexpected finding was that the amplitude of body sway induced by visual surround motion could be almost 3 times greater than the amplitude of the visual stimulus in normal subjects and subjects with vestibular loss. This occurred in conditions where somatosensory cues were inaccurate and at low stimulus amplitudes. A control system model of visually induced postural sway was developed to explain this finding. For both subject groups, the amplitude of visually induced sway was smaller by a factor of about 4 in tests where somatosensory cues provided accurate versus inaccurate orientation information. This implied (1) that the subjects experiencing vestibular loss did not utilize somatosensory cues to a greater extent than normal subjects; that is, changes in somatosensory system "gain" were not used to compensate for a vestibular deficit, and (2) that the threshold for the use of vestibular cues in normal subjects was apparently lower in test conditions where somatosensory cues were providing accurate orientation information.

  11. Combining Low-Level Perception and Expectations in Conceptual Learning

    DTIC Science & Technology

    2004-01-12

human representation and processing of visual information. San Francisco: W. H. Freeman, 1982. U. Neisser, Cognitive Psychology. New York: Appleton... expectation that characters are from a standard alphabet enables correct identification of characters which would otherwise be ambiguous (Neisser, 1966

  12. Postural response to predictable and nonpredictable visual flow in children and adults.

    PubMed

    Schmuckler, Mark A

    2017-11-01

Children's (3-5 years) and adults' postural reactions to different conditions of visual flow information varying in its frequency content were examined using a moving room apparatus. Both groups experienced four conditions of visual input: low-frequency (0.20 Hz) visual oscillations, high-frequency (0.60 Hz) oscillations, multifrequency nonpredictable visual input, and no imposed visual information. Analyses of the frequency content of anterior-posterior (AP) sway revealed that postural reactions to the single-frequency conditions replicated previous findings; children were responsive to low- and high-frequency oscillations, whereas adults were responsive to low-frequency information. Extending previous work, AP sway in response to the nonpredictable condition revealed that both groups were responsive to the different components contained in the multifrequency visual information, although adults retained their frequency selectivity to low-frequency versus high-frequency content. These findings are discussed in relation to work examining feedback versus feedforward control of posture, and the reweighting of sensory inputs for postural control, as a function of development and task context. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Audiovisual associations alter the perception of low-level visual motion

    PubMed Central

    Kafaligonul, Hulusi; Oluk, Can

    2015-01-01

Motion perception is a pervasive feature of vision and is affected both by the immediate pattern of sensory inputs and by prior experiences acquired through associations. Recently, several studies reported that an association can be established quickly between directions of visual motion and static sounds of distinct frequencies. After the association is formed, sounds are able to change the perceived direction of visual motion. To determine whether such rapidly acquired audiovisual associations and their subsequent influences on visual motion perception are dependent on the involvement of higher-order attentive tracking mechanisms, we designed psychophysical experiments using regular and reverse-phi random dot motions isolating low-level pre-attentive motion processing. Our results show that an association between the directions of low-level visual motion and static sounds can be formed and this audiovisual association alters the subsequent perception of low-level visual motion. These findings support the view that audiovisual associations are not restricted to the high-level, attention-based motion system and that early-level visual motion processing also plays a role. PMID:25873869

  14. Modulation of microsaccades by spatial frequency during object categorization.

    PubMed

    Craddock, Matt; Oppermann, Frank; Müller, Matthias M; Martinovic, Jasna

    2017-01-01

    The organization of visual processing into a coarse-to-fine information processing based on the spatial frequency properties of the input forms an important facet of the object recognition process. During visual object categorization tasks, microsaccades occur frequently. One potential functional role of these eye movements is to resolve high spatial frequency information. To assess this hypothesis, we examined the rate, amplitude and speed of microsaccades in an object categorization task in which participants viewed object and non-object images and classified them as showing either natural objects, man-made objects or non-objects. Images were presented unfiltered (broadband; BB) or filtered to contain only low (LSF) or high spatial frequency (HSF) information. This allowed us to examine whether microsaccades were modulated independently by the presence of a high-level feature - the presence of an object - and by low-level stimulus characteristics - spatial frequency. We found a bimodal distribution of saccades based on their amplitude, with a split between smaller and larger microsaccades at 0.4° of visual angle. The rate of larger saccades (⩾0.4°) was higher for objects than non-objects, and higher for objects with high spatial frequency content (HSF and BB objects) than for LSF objects. No effects were observed for smaller microsaccades (<0.4°). This is consistent with a role for larger microsaccades in resolving HSF information for object identification, and previous evidence that more microsaccades are directed towards informative image regions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Representational dynamics of object recognition: Feedforward and feedback information flows.

    PubMed

    Goddard, Erin; Carlson, Thomas A; Dermody, Nadene; Woolgar, Alexandra

    2016-03-01

Object perception involves a range of visual and cognitive processes, and is known to include both a feedforward flow of information from early visual cortical areas to higher cortical areas and feedback from areas such as prefrontal cortex. Previous studies have found that low and high spatial frequency information regarding object identity may be processed over different timescales. Here we used the high temporal resolution of magnetoencephalography (MEG) combined with multivariate pattern analysis to measure information specifically related to object identity in peri-frontal and peri-occipital areas. Using stimuli closely matched in their low-level visual content, we found that activity in peri-occipital cortex could be used to decode object identity from ~80ms post stimulus onset, and activity in peri-frontal cortex could also be used to decode object identity from a later time (~265ms post stimulus onset). Low spatial frequency information related to object identity was present in the MEG signal at an earlier time than high spatial frequency information for peri-occipital cortex, but not for peri-frontal cortex. We additionally used Granger causality analysis to compare feedforward and feedback influences on representational content, and found evidence of both an early feedforward flow and later feedback flow of information related to object identity. We discuss our findings in relation to existing theories of object processing and propose how the methods we use here could be used to address further questions of the neural substrates underlying object perception. Copyright © 2016 Elsevier Inc. All rights reserved.
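
    A hedged sketch of the general analysis style described above (time-resolved multivariate decoding), not the study's own pipeline: for each time point, a linear classifier is cross-validated on the spatial pattern across sensors, yielding a decoding-accuracy time course. All data here are simulated, and the sensor/trial counts are arbitrary assumptions.

    ```python
    # Time-resolved decoding sketch on simulated "MEG-like" data.
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials, n_sensors, n_times = 80, 32, 50
    X = rng.normal(size=(n_trials, n_sensors, n_times))
    y = np.repeat([0, 1], n_trials // 2)      # two object identities
    X[y == 1, :, 25:] += 0.5                  # injected "signal" after t = 25

    accuracy = [
        cross_val_score(LinearSVC(dual=False), X[:, :, t], y, cv=5).mean()
        for t in range(n_times)
    ]
    print("peak decoding accuracy:", round(max(accuracy), 2))
    ```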

  16. Temporal information entropy of the Blood-Oxygenation Level-Dependent signals increases in the activated human primary visual cortex

    NASA Astrophysics Data System (ADS)

    DiNuzzo, Mauro; Mascali, Daniele; Moraschi, Marta; Bussu, Giorgia; Maraviglia, Bruno; Mangia, Silvia; Giove, Federico

    2017-02-01

    Time-domain analysis of blood-oxygenation level-dependent (BOLD) signals allows the identification of clusters of voxels responding to photic stimulation in primary visual cortex (V1). However, the characterization of information encoding into temporal properties of the BOLD signals of an activated cluster is poorly investigated. Here, we used Shannon entropy to determine spatial and temporal information encoding in the BOLD signal within the most strongly activated area of the human visual cortex during a hemifield photic stimulation. We determined the distribution profile of BOLD signals during epochs at rest and under stimulation within small (19-121 voxels) clusters designed to include only voxels driven by the stimulus as highly and uniformly as possible. We found consistent and significant increases (2-4% on average) in temporal information entropy during activation in contralateral but not ipsilateral V1, which was mirrored by an expected loss of spatial information entropy. These opposite changes coexisted with increases in both spatial and temporal mutual information (i.e. dependence) in contralateral V1. Thus, we showed that the first cortical stage of visual processing is characterized by a specific spatiotemporal rearrangement of intracluster BOLD responses. Our results indicate that while in the space domain BOLD maps may be incapable of capturing the functional specialization of small neuronal populations due to relatively low spatial resolution, some information encoding may still be revealed in the temporal domain by an increase of temporal information entropy.
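
    For readers unfamiliar with the measure, the sketch below shows one straightforward way to estimate the Shannon entropy of a single (simulated) BOLD time course from a histogram of its values. This is only a schematic illustration of "temporal information entropy"; the bin count and the simulated signals are assumptions, not the study's parameters.

    ```python
    # Temporal Shannon entropy (bits) of a 1-D signal's amplitude distribution.
    import numpy as np

    def temporal_entropy(signal, n_bins=16):
        counts, _ = np.histogram(signal, bins=n_bins)
        p = counts / counts.sum()
        p = p[p > 0]                      # ignore empty bins
        return float(-(p * np.log2(p)).sum())

    rng = np.random.default_rng(0)
    rest = rng.normal(0.0, 1.0, 200)                                   # resting epoch
    task = rng.normal(0.0, 1.0, 200) + np.sin(np.linspace(0, 8 * np.pi, 200))
    print("rest entropy:", round(temporal_entropy(rest), 3))
    print("task entropy:", round(temporal_entropy(task), 3))
    ```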

  17. Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.

    PubMed

    Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T

    2017-07-01

Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017, 10: 1280-1290. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  18. Action video game playing is associated with improved visual sensitivity, but not alterations in visual sensory memory.

    PubMed

    Appelbaum, L Gregory; Cain, Matthew S; Darling, Elise F; Mitroff, Stephen R

    2013-08-01

Action video game playing has been experimentally linked to a number of perceptual and cognitive improvements. These benefits are captured through a wide range of psychometric tasks and have led to the proposition that action video game experience may promote the ability to extract statistical evidence from sensory stimuli. Such an advantage could arise from a number of possible mechanisms: improvements in visual sensitivity, enhancements in the capacity or duration for which information is retained in visual memory, or higher-level strategic use of information for decision making. The present study measured the capacity and time course of visual sensory memory using a partial report performance task as a means to distinguish between these three possible mechanisms. Sensitivity measures and parameter estimates that describe sensory memory capacity and the rate of memory decay were compared between individuals who reported high levels and low levels of action video game experience. Our results revealed a uniform increase in partial report accuracy at all stimulus-to-cue delays for action video game players but no difference in the rate or time course of the memory decay. The present findings suggest that action video game playing may be related to enhancements in the initial sensitivity to visual stimuli, but not to a greater retention of information in iconic memory buffers.
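
    A hedged sketch of the kind of parameter estimation mentioned above: partial-report accuracy as a function of stimulus-to-cue delay is fit with an exponential decay, separating an initial-sensitivity parameter (a0) from a decay time constant (tau) and an asymptote (a_inf). The data points and starting values below are made up for illustration; they are not from the study.

    ```python
    # Fit an exponential decay to hypothetical partial-report accuracies.
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, a0, a_inf, tau):
        return a_inf + (a0 - a_inf) * np.exp(-t / tau)

    delays = np.array([0.0, 0.1, 0.3, 0.6, 1.0, 2.0])          # seconds
    accuracy = np.array([0.92, 0.85, 0.74, 0.63, 0.55, 0.52])  # hypothetical

    (a0, a_inf, tau), _ = curve_fit(decay, delays, accuracy, p0=(0.9, 0.5, 0.3))
    print(f"initial accuracy a0={a0:.2f}, asymptote={a_inf:.2f}, tau={tau:.2f} s")
    ```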

  19. Visual short-term memory: activity supporting encoding and maintenance in retinotopic visual cortex.

    PubMed

    Sneve, Markus H; Alnæs, Dag; Endestad, Tor; Greenlee, Mark W; Magnussen, Svein

    2012-10-15

    Recent studies have demonstrated that retinotopic cortex maintains information about visual stimuli during retention intervals. However, the process by which transient stimulus-evoked sensory responses are transformed into enduring memory representations is unknown. Here, using fMRI and short-term visual memory tasks optimized for univariate and multivariate analysis approaches, we report differential involvement of human retinotopic areas during memory encoding of the low-level visual feature orientation. All visual areas show weaker responses when memory encoding processes are interrupted, possibly due to effects in orientation-sensitive primary visual cortex (V1) propagating across extrastriate areas. Furthermore, intermediate areas in both dorsal (V3a/b) and ventral (LO1/2) streams are significantly more active during memory encoding compared with non-memory (active and passive) processing of the same stimulus material. These effects in intermediate visual cortex are also observed during memory encoding of a different stimulus feature (spatial frequency), suggesting that these areas are involved in encoding processes on a higher level of representation. Using pattern-classification techniques to probe the representational content in visual cortex during delay periods, we further demonstrate that simply initiating memory encoding is not sufficient to produce long-lasting memory traces. Rather, active maintenance appears to underlie the observed memory-specific patterns of information in retinotopic cortex. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Vision for navigation: What can we learn from ants?

    PubMed

    Graham, Paul; Philippides, Andrew

    2017-09-01

    The visual systems of all animals are used to provide information that can guide behaviour. In some cases insects demonstrate particularly impressive visually-guided behaviour and then we might reasonably ask how the low-resolution vision and limited neural resources of insects are tuned to particular behavioural strategies. Such questions are of interest to both biologists and to engineers seeking to emulate insect-level performance with lightweight hardware. One behaviour that insects share with many animals is the use of learnt visual information for navigation. Desert ants, in particular, are expert visual navigators. Across their foraging life, ants can learn long idiosyncratic foraging routes. What's more, these routes are learnt quickly and the visual cues that define them can be implemented for guidance independently of other social or personal information. Here we review the style of visual navigation in solitary foraging ants and consider the physiological mechanisms that underpin it. Our perspective is to consider that robust navigation comes from the optimal interaction between behavioural strategy, visual mechanisms and neural hardware. We consider each of these in turn, highlighting the value of ant-like mechanisms in biomimetic endeavours. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  1. Iconic memory requires attention

    PubMed Central

    Persuh, Marjan; Genzer, Boris; Melara, Robert D.

    2012-01-01

    Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features. PMID:22586389

  2. Iconic memory requires attention.

    PubMed

    Persuh, Marjan; Genzer, Boris; Melara, Robert D

    2012-01-01

    Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features.

  3. Understanding face perception by means of human electrophysiology.

    PubMed

    Rossion, Bruno

    2014-06-01

    Electrophysiological recordings on the human scalp provide a wealth of information about the temporal dynamics and nature of face perception at a global level of brain organization. The time window between 100 and 200 ms witnesses the transition between low-level and high-level vision, an N170 component correlating with conscious interpretation of a visual stimulus as a face. This face representation is rapidly refined as information accumulates during this time window, allowing the individualization of faces. To improve the sensitivity and objectivity of face perception measures, it is increasingly important to go beyond transient visual stimulation by recording electrophysiological responses at periodic frequency rates. This approach has recently provided face perception thresholds and the first objective signature of integration of facial parts in the human brain. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Advanced biologically plausible algorithms for low-level image processing

    NASA Astrophysics Data System (ADS)

    Gusakova, Valentina I.; Podladchikova, Lubov N.; Shaposhnikov, Dmitry G.; Markin, Sergey N.; Golovan, Alexander V.; Lee, Seong-Whan

    1999-08-01

At present, in computer vision, approaches based on modeling biological vision mechanisms are being extensively developed. However, real-world image processing still has no effective solution within either the biologically inspired or the conventional framework. Evidently, new algorithms and system architectures based on advanced biological motivation should be developed to solve the computational problems posed by this visual task. A basic problem that must be solved to create an effective artificial visual system for real-world images is the search for new algorithms of low-level image processing, which to a great extent determine system performance. In the present paper, the results of psychophysical experiments and several advanced biologically motivated algorithms for low-level processing are presented. These algorithms are based on local space-variant filtering, context encoding of visual information presented at the center of the input window, and automatic detection of perceptually important image fragments. The core of the latter algorithm is the use of local feature conjunctions, such as noncollinear oriented segments, and the formation of composite feature maps. The developed algorithms were integrated into the foveal active vision model MARR. It is expected that the proposed algorithms may significantly improve model performance in real-world image processing during memorizing, search, and recognition.
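
    A hedged sketch of one ingredient named above, a space-variant (foveated) low-pass filter whose blur increases with eccentricity from a fixation point. This is a crude stand-in for foveal/peripheral processing, not the MARR model's implementation; all parameter values and the ring-based blending scheme are assumptions.

    ```python
    # Space-variant blur: more smoothing farther from the fixation point.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def foveated_blur(image, center, sigma_max=6.0, n_rings=6):
        h, w = image.shape
        yy, xx = np.mgrid[0:h, 0:w]
        ecc = np.hypot(yy - center[0], xx - center[1])
        ecc /= ecc.max()                                  # eccentricity in [0, 1]
        out = np.zeros_like(image, dtype=float)
        for i in range(n_rings):
            sigma = sigma_max * i / n_rings
            blurred = image.astype(float) if sigma == 0 else gaussian_filter(image.astype(float), sigma)
            mask = (ecc >= i / n_rings) & (ecc < (i + 1) / n_rings + 1e-9)
            out[mask] = blurred[mask]                     # blend by eccentricity ring
        return out

    img = np.random.rand(128, 128)
    print(foveated_blur(img, center=(64, 64)).shape)
    ```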

  5. Computable visually observed phenotype ontological framework for plants

    PubMed Central

    2011-01-01

    Background The ability to search for and precisely compare similar phenotypic appearances within and across species has vast potential in plant science and genetic research. The difficulty in doing so lies in the fact that many visual phenotypic data, especially visually observed phenotypes that often times cannot be directly measured quantitatively, are in the form of text annotations, and these descriptions are plagued by semantic ambiguity, heterogeneity, and low granularity. Though several bio-ontologies have been developed to standardize phenotypic (and genotypic) information and permit comparisons across species, these semantic issues persist and prevent precise analysis and retrieval of information. A framework suitable for the modeling and analysis of precise computable representations of such phenotypic appearances is needed. Results We have developed a new framework called the Computable Visually Observed Phenotype Ontological Framework for plants. This work provides a novel quantitative view of descriptions of plant phenotypes that leverages existing bio-ontologies and utilizes a computational approach to capture and represent domain knowledge in a machine-interpretable form. This is accomplished by means of a robust and accurate semantic mapping module that automatically maps high-level semantics to low-level measurements computed from phenotype imagery. The framework was applied to two different plant species with semantic rules mined and an ontology constructed. Rule quality was evaluated and showed high quality rules for most semantics. This framework also facilitates automatic annotation of phenotype images and can be adopted by different plant communities to aid in their research. Conclusions The Computable Visually Observed Phenotype Ontological Framework for plants has been developed for more efficient and accurate management of visually observed phenotypes, which play a significant role in plant genomics research. The uniqueness of this framework is its ability to bridge the knowledge of informaticians and plant science researchers by translating descriptions of visually observed phenotypes into standardized, machine-understandable representations, thus enabling the development of advanced information retrieval and phenotype annotation analysis tools for the plant science community. PMID:21702966

  6. Facilitation of listening comprehension by visual information under noisy listening condition

    NASA Astrophysics Data System (ADS)

    Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi

    2009-02-01

Sentence comprehension was measured across a wide range of delays between auditory and visual stimuli in environments with low auditory clarity (-10 dB and -15 dB pink noise). Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (132 ms) or less, that the image was not helpful when the delay was 8 frames (264 ms) or more, and that in some cases at the largest delay (32 frames) the video image interfered with comprehension.

  7. Differential roles of low and high spatial frequency content in abnormal facial emotion perception in schizophrenia.

    PubMed

    McBain, Ryan; Norton, Daniel; Chen, Yue

    2010-09-01

    While schizophrenia patients are impaired at facial emotion perception, the role of basic visual processing in this deficit remains relatively unclear. We examined emotion perception when spatial frequency content of facial images was manipulated via high-pass and low-pass filtering. Unlike controls (n=29), patients (n=30) perceived images with low spatial frequencies as more fearful than those without this information, across emotional salience levels. Patients also perceived images with high spatial frequencies as happier. In controls, this effect was found only at low emotional salience. These results indicate that basic visual processing has an amplified modulatory effect on emotion perception in schizophrenia. (c) 2010 Elsevier B.V. All rights reserved.
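
    A minimal sketch, analogous in spirit to the high-pass/low-pass manipulations described above, of how an image can be split into low- and high-spatial-frequency versions using a Gaussian low-pass filter and its residual. The cutoff (sigma) and the random stand-in image are assumptions, not the study's stimuli or filter parameters.

    ```python
    # Split an image into low- and high-spatial-frequency components.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def split_spatial_frequencies(image, sigma=4.0):
        low = gaussian_filter(image.astype(float), sigma)   # low spatial frequencies
        high = image.astype(float) - low                    # high-frequency residual
        return low, high

    face = np.random.rand(64, 64)        # stand-in for a grayscale face image
    lsf, hsf = split_spatial_frequencies(face)
    print(lsf.shape, hsf.shape)
    ```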

  8. The effect of gender and level of vision on the physical activity level of children and adolescents with visual impairment.

    PubMed

    Aslan, Ummuhan Bas; Calik, Bilge Basakcı; Kitiş, Ali

    2012-01-01

    This study was planned in order to determine physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level in visually impaired children and adolescents. A total of 30 visually impaired children and adolescents (16 low vision and 14 blind) aged between 8 and 16 years participated in the study. The physical activity level of cases was evaluated with a physical activity diary (PAD) and one-mile run/walk test (OMR-WT). No difference was found between the PAD and the OMR-WT results of low vision and blind children and adolescents. The visually impaired children and adolescents were detected not to participate in vigorous physical activity. A difference was found in favor of low vision boys in terms of mild, moderate activities and OMR-WT durations. However, no difference was found between physical activity levels of blind girls and boys. The results of our study suggested that the physical activity level of visually impaired children and adolescents was low, and gender affected physical activity in low vision children and adolescents. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Selectivity in Postencoding Connectivity with High-Level Visual Cortex Is Associated with Reward-Motivated Memory.

    PubMed

    Murty, Vishnu P; Tompary, Alexa; Adcock, R Alison; Davachi, Lila

    2017-01-18

Reward motivation has been demonstrated to enhance declarative memory by facilitating systems-level consolidation. Although high-reward information is often intermixed with lower reward information during an experience, memory for high-value information is prioritized. How is this selectivity achieved? One possibility is that postencoding consolidation processes bias memory strengthening to those representations associated with higher reward. To test this hypothesis, we investigated the influence of differential reward motivation on the selectivity of postencoding markers of systems-level memory consolidation. Human participants encoded intermixed, trial-unique memoranda that were associated with either high or low value during fMRI acquisition. Encoding was interleaved with periods of rest, allowing us to investigate experience-dependent changes in connectivity as they related to later memory. Behaviorally, we found that reward motivation enhanced 24 h associative memory. Analysis of patterns of postencoding connectivity showed that, even though learning trials were intermixed, there was significantly greater connectivity with regions of high-level, category-selective visual cortex associated with high-reward trials. Specifically, increased connectivity of category-selective visual cortex with both the VTA and the anterior hippocampus predicted associative memory for high- but not low-reward memories. Critically, these results were independent of encoding-related connectivity and univariate activity measures. Thus, these findings support a model by which the selective stabilization of memories for salient events is supported by postencoding interactions with sensory cortex associated with reward. Reward motivation is thought to promote memory by supporting memory consolidation. Yet, little is known as to how the brain selects relevant information for subsequent consolidation based on reward. We show that experience-dependent changes in connectivity of both the anterior hippocampus and the VTA with high-level visual cortex selectively predict memory for high-reward memoranda at a 24 h delay. These findings provide evidence for a novel mechanism guiding the consolidation of memories for valuable events, namely, postencoding interactions between neural systems supporting mesolimbic dopamine activation, episodic memory, and perception. Copyright © 2017 the authors 0270-6474/17/370537-09$15.00/0.

  10. Visual processing in Alzheimer's disease: surface detail and colour fail to aid object identification.

    PubMed

    Adlington, Rebecca L; Laws, Keith R; Gale, Tim M

    2009-10-01

    It has been suggested that object recognition in patients with Alzheimer's disease (AD) may be strongly influenced both by image format (e.g. colour vs. line-drawn) and by low-level visual impairments. To examine these notions, we tested basic visual functioning and picture naming in 41 AD patients and 40 healthy elderly controls. Picture naming was examined using 105 images representing a wide range of living and nonliving subcategories (from the Hatfield image test [HIT]: [Adlington, R. A., Laws, K. R., & Gale, T. M. (in press). The Hatfield image test (HIT): A new picture test and norms for experimental and clinical use. Journal of Clinical and Experimental Neuropsychology]), with each item presented in colour, greyscale, or line-drawn formats. Whilst naming for elderly controls improved linearly with the addition of surface detail and colour, AD patients showed no benefit from the addition of either surface information or colour. Additionally, controls showed a significant category by format interaction; however, the same profile did not emerge for AD patients. Finally, AD patients showed widespread and significant impairment on tasks of visual functioning, and low-level visual impairment was predictive of patient naming.

  11. Low-level visual attention and its relation to joint attention in autism spectrum disorder.

    PubMed

    Jaworski, Jessica L Bean; Eigsti, Inge-Marie

    2017-04-01

    Visual attention is integral to social interaction and is a critical building block for development in other domains (e.g., language). Furthermore, atypical attention (especially joint attention) is one of the earliest markers of autism spectrum disorder (ASD). The current study assesses low-level visual attention and its relation to social attentional processing in youth with ASD and typically developing (TD) youth, aged 7 to 18 years. The findings indicate difficulty overriding incorrect attentional cues in ASD, particularly with non-social (arrow) cues relative to social (face) cues. The findings also show reduced competition in ASD from cues that remain on-screen. Furthermore, social attention, autism severity, and age were all predictors of competing cue processing. The results suggest that individuals with ASD may be biased towards speeded rather than accurate responding, and further, that reduced engagement with visual information may impede responses to visual attentional cues. Once attention is engaged, individuals with ASD appear to interpret directional cues as meaningful. These findings from a controlled, experimental paradigm were mirrored in results from an ecologically valid measure of social attention. Attentional difficulties may be exacerbated during the complex and dynamic experience of actual social interaction. Implications for intervention are discussed.

  12. Comparing object recognition from binary and bipolar edge images for visual prostheses

    PubMed Central

    Jung, Jae-Hyun; Pu, Tian; Peli, Eli

    2017-01-01

    Visual prostheses require an effective representation method due to the limited display condition which has only 2 or 3 levels of grayscale in low resolution. Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary (black and white) edge images have been used to represent features to convey essential information. However, in scenes with a complex cluttered background, the recognition rate of the binary edge images by human observers is limited and additional information is required. The polarity of edges and cusps (black or white features on a gray background) carries important additional information; the polarity may provide shape from shading information missing in the binary edge image. This depth information may be restored by using bipolar edges. We compared object recognition rates from 16 binary edge images and bipolar edge images by 26 subjects to determine the possible impact of bipolar filtering in visual prostheses with 3 or more levels of grayscale. Recognition rates were higher with bipolar edge images and the improvement was significant in scenes with complex backgrounds. The results also suggest that erroneous shape from shading interpretation of bipolar edges resulting from pigment rather than boundaries of shape may confound the recognition. PMID:28458481
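
    A hedged sketch contrasting the two representations compared above: a binary edge map that keeps only edge locations (white on black), and a bipolar edge map that keeps the sign of a band-pass (Laplacian-of-Gaussian) response as black/white features on a gray background. The filter scale and thresholds are illustrative assumptions, not the filters used in the study.

    ```python
    # Binary vs. bipolar edge images from a Laplacian-of-Gaussian response.
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def edge_images(image, sigma=2.0, thresh=0.02):
        log = gaussian_laplace(image.astype(float), sigma)
        binary = (np.abs(log) > thresh).astype(np.uint8)   # 0/1 edge map
        bipolar = np.zeros_like(log)                       # -1 (black), 0 (gray), +1 (white)
        bipolar[log > thresh] = 1.0
        bipolar[log < -thresh] = -1.0
        return binary, bipolar

    img = np.random.rand(128, 128)
    b, bp = edge_images(img)
    print("binary levels:", np.unique(b), " bipolar levels:", np.unique(bp))
    ```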

  13. The forward masking effects of low-level laser glare on target location performance in a visual search task

    NASA Astrophysics Data System (ADS)

    Reddix, M. D.; Dandrea, J. A.; Collyer, P. D.

    1992-01-01

The present study examined the effects of low-intensity laser glare, far below a level that would cause ocular damage or flashblindness, on the visually guided performance of aviators. With a forward-masking paradigm, this study showed that the time at which laser glare is experienced, relative to initial acquisition of visual information, differentially affects the speed and accuracy of target-location performance. Brief exposure (300 ms) to laser glare, terminating with a visual scene's onset, produced significant decrements in target-location performance relative to a no-glare control, whereas 150- and 300-ms delays of display onset (DDO) had very little effect. The intensity of the light entering the eye and producing these effects was far below the Maximum Permissible Exposure (MPE) limit for safe viewing of coherent light produced by an argon laser. In addition, these effects were modulated by the distance of the target from the center of the visual display. This study demonstrated that the presence of laser glare is not sufficient, in and of itself, to diminish target-location performance. The time at which laser glare is experienced is an important factor in determining the probability and extent of visually mediated performance decrements.

  14. Enhanced perceptual functioning in autism: an update, and eight principles of autistic perception.

    PubMed

    Mottron, Laurent; Dawson, Michelle; Soulières, Isabelle; Hubert, Benedicte; Burack, Jake

    2006-01-01

    We propose an "Enhanced Perceptual Functioning" model encompassing the main differences between autistic and non-autistic social and non-social perceptual processing: locally oriented visual and auditory perception, enhanced low-level discrimination, use of a more posterior network in "complex" visual tasks, enhanced perception of first order static stimuli, diminished perception of complex movement, autonomy of low-level information processing toward higher-order operations, and differential relation between perception and general intelligence. Increased perceptual expertise may be implicated in the choice of special ability in savant autistics, and in the variability of apparent presentations within PDD (autism with and without typical speech, Asperger syndrome) in non-savant autistics. The overfunctioning of brain regions typically involved in primary perceptual functions may explain the autistic perceptual endophenotype.

  15. Abnormalities of Object Visual Processing in Body Dysmorphic Disorder

    PubMed Central

    Feusner, Jamie D.; Hembacher, Emily; Moller, Hayley; Moody, Teena D.

    2013-01-01

    Background Individuals with body dysmorphic disorder may have perceptual distortions for their appearance. Previous studies suggest imbalances in detailed relative to configural/holistic visual processing when viewing faces. No study has investigated the neural correlates of processing non-symptom-related stimuli. The objective of this study was to determine whether individuals with body dysmorphic disorder have abnormal patterns of brain activation when viewing non-face/non-body object stimuli. Methods Fourteen medication-free participants with DSM-IV body dysmorphic disorder and 14 healthy controls participated. We performed functional magnetic resonance imaging while participants matched photographs of houses that were unaltered, contained only high spatial frequency (high detail) information, or only low spatial frequency (low detail) information. The primary outcome was group differences in blood oxygen level-dependent signal changes. Results The body dysmorphic disorder group showed lesser activity in the parahippocampal gyrus, lingual gyrus, and precuneus for low spatial frequency images. There were greater activations in medial prefrontal regions for high spatial frequency images, although no significant differences when compared to a low-level baseline. Greater symptom severity was associated with lesser activity in dorsal occipital cortex and ventrolateral prefrontal cortex for normal and high spatial frequency images. Conclusions Individuals with body dysmorphic disorder have abnormal brain activation patterns when viewing objects. Hypoactivity in visual association areas for configural and holistic (low detail) elements and abnormal allocation of prefrontal systems for details is consistent with a model of imbalances in global vs. local processing. This may occur not only for appearance but also for general stimuli unrelated to their symptoms. PMID:21557897

  16. Access to Awareness for Faces during Continuous Flash Suppression Is Not Modulated by Affective Knowledge

    PubMed Central

    Rabovsky, Milena; Stein, Timo; Abdel Rahman, Rasha

    2016-01-01

    It is a controversially debated topic whether stimuli can be analyzed up to the semantic level when they are suppressed from visual awareness during continuous flash suppression (CFS). Here, we investigated whether affective knowledge, i.e., affective biographical information about faces, influences the time it takes for initially invisible faces with neutral expressions to overcome suppression and break into consciousness. To test this, we used negative, positive, and neutral famous faces as well as initially unfamiliar faces, which were associated with negative, positive or neutral biographical information. Affective knowledge influenced ratings of facial expressions, corroborating recent evidence and indicating the success of our affective learning paradigm. Furthermore, we replicated shorter suppression durations for upright than for inverted faces, demonstrating the suitability of our CFS paradigm. However, affective biographical information did not modulate suppression durations for newly learned faces, and even though suppression durations for famous faces were influenced by affective knowledge, these effects did not differ between upright and inverted faces, indicating that they might have been due to low-level visual differences. Thus, we did not obtain unequivocal evidence for genuine influences of affective biographical information on access to visual awareness for faces during CFS. PMID:27119743

  17. Audiovisual Perception of Noise Vocoded Speech in Dyslexic and Non-Dyslexic Adults: The Role of Low-Frequency Visual Modulations

    ERIC Educational Resources Information Center

    Megnin-Viggars, Odette; Goswami, Usha

    2013-01-01

    Visual speech inputs can enhance auditory speech information, particularly in noisy or degraded conditions. The natural statistics of audiovisual speech highlight the temporal correspondence between visual and auditory prosody, with lip, jaw, cheek and head movements conveying information about the speech envelope. Low-frequency spatial and…

  18. Emotion and Perception: The Role of Affective Information

    PubMed Central

    Zadra, Jonathan R.; Clore, Gerald L.

    2011-01-01

Visual perception and emotion are traditionally considered separate domains of study. In this article, however, we review research showing them to be less separable than usually assumed. In fact, emotions routinely affect how and what we see. Fear, for example, can affect low-level visual processes, sad moods can alter susceptibility to visual illusions, and goal-directed desires can change the apparent size of goal-relevant objects. In addition, the layout of the physical environment, including the apparent steepness of a hill and the distance to the ground from a balcony, can both be affected by emotional states. We propose that emotions provide embodied information about the costs and benefits of anticipated action, information that can be used automatically and immediately, circumventing the need for cogitating on the possible consequences of potential actions. Emotions thus provide a strong motivating influence on how the environment is perceived. PMID:22039565

  19. An integrative view of storage of low- and high-level visual dimensions in visual short-term memory.

    PubMed

    Magen, Hagit

    2017-03-01

    Efficient performance in an environment filled with complex objects is often achieved through the temporal maintenance of conjunctions of features from multiple dimensions. The most striking finding in the study of binding in visual short-term memory (VSTM) is equal memory performance for single features and for integrated multi-feature objects, a finding that has been central to several theories of VSTM. Nevertheless, research on binding in VSTM focused almost exclusively on low-level features, and little is known about how items from low- and high-level visual dimensions (e.g., colored manmade objects) are maintained simultaneously in VSTM. The present study tested memory for combinations of low-level features and high-level representations. In agreement with previous findings, Experiments 1 and 2 showed decrements in memory performance when non-integrated low- and high-level stimuli were maintained simultaneously compared to maintaining each dimension in isolation. However, contrary to previous findings the results of Experiments 3 and 4 showed decrements in memory performance even when integrated objects of low- and high-level stimuli were maintained in memory, compared to maintaining single-dimension objects. Overall, the results demonstrate that low- and high-level visual dimensions compete for the same limited memory capacity, and offer a more comprehensive view of VSTM.

  20. Late maturation of visual spatial integration in humans

    PubMed Central

    Kovács, Ilona; Kozma, Petra; Fehér, Ákos; Benedek, György

    1999-01-01

    Visual development is thought to be completed at an early age. We suggest that the maturation of the visual brain is not homogeneous: functions with greater need for early availability, such as visuomotor control, mature earlier, and the development of other visual functions may extend well into childhood. We found significant improvement in children between 5 and 14 years in visual spatial integration by using a contour-detection task. The data show that long-range spatial interactions—subserving the integration of orientational information across the visual field—span a shorter spatial range in children than in adults. Performance in the task improves in a cue-specific manner with practice, which indicates the participation of fairly low-level perceptual mechanisms. We interpret our findings in terms of a protracted development of ventral visual-stream function in humans. PMID:10518600

  1. Stepwise emergence of the face-sensitive N170 event-related potential component.

    PubMed

    Jemel, Boutheina; Schuller, Anne-Marie; Cheref-Khan, Yasémine; Goffaux, Valérie; Crommelinck, Marc; Bruyer, Raymond

    2003-11-14

    The present study used a parametric design to characterize early event-related potentials (ERP) to face stimuli embedded in gradually decreasing random noise levels. For both N170 and the vertex positive potential (VPP) there was a linear increase in amplitude and decrease in latency with decreasing levels of noise. In contrast, the earlier visual P1 component was stable across noise levels. The P1/N170 dissociation suggests not only a functional dissociation between low and high-level visual processing of faces but also that the N170 reflects the integration of sensorial information into a unitary representation. In addition, the N170/VPP association supports the view that they reflect the same processes operating when viewing faces.

  2. Learning Building Layouts with Non-geometric Visual Information: The Effects of Visual Impairment and Age

    PubMed Central

    Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.

    2009-01-01

    Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants—younger (< 50 years) normally sighted, older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment ranging in age from 18–67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732

  3. Integration and binding in rehabilitative sensory substitution: Increasing resolution using a new Zooming-in approach

    PubMed Central

    Buchs, Galit; Maidenbaum, Shachar; Levy-Tzedek, Shelly; Amedi, Amir

    2015-01-01

Purpose: To visually perceive our surroundings we constantly move our eyes and focus on particular details, and then integrate them into a combined whole. Current visual rehabilitation methods, both invasive, like bionic eyes, and non-invasive, like Sensory Substitution Devices (SSDs), down-sample visual stimuli into low-resolution images. Zooming-in to sub-parts of the scene could potentially improve detail perception. Can congenitally blind individuals integrate a ‘visual’ scene when offered this information via different sensory modalities, such as audition? Can they integrate visual information, perceived in parts, into larger percepts despite never having had any visual experience? Methods: We explored these questions using a zooming-in functionality embedded in the EyeMusic visual-to-auditory SSD. Eight blind participants were tasked with identifying cartoon faces by integrating their individual components recognized via the EyeMusic’s zooming mechanism. Results: After specialized training of just 6–10 hours, blind participants successfully and actively integrated facial features into cartooned identities in 79±18% of the trials in a highly significant manner (chance level 10%; rank-sum P < 1.55E-04). Conclusions: These findings show that even users who lacked any previous visual experience whatsoever can indeed integrate this visual information with increased resolution. This potentially has important practical visual rehabilitation implications for both invasive and non-invasive methods. PMID:26518671

  4. Terminal Information Processing System (TIPS) Consolidated CAB Display (CCD) Comparative Analysis.

    DTIC Science & Technology

    1982-04-01

Barometric pressure 3. Center field wind speed, direction and gusts 4. Runway visual range 5. Low-level wind shear 6. Vortex advisory 7. Runway equipment... PASSWORD Command (standard user) u. PAUSE Command (standard user) v. PMSG Command (standard user) w. PPD Command (standard user) x. PURGE Command (standard

  5. Results of the spatial resolution simulation for multispectral data (resolution brochures)

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The variable information content of Earth Resource products at different levels of spatial resolution and in different spectral bands is addressed. A low-cost brochure that scientists and laymen could use to visualize the effects of increasing the spatial resolution of multispectral scanner images was produced.

  6. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
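
    A hedged sketch of the familiar Laplacian pyramid mentioned above: each level stores the difference between an image and an upsampled blur of its downsampled copy, leaving a compact multi-resolution code. This is a simplified textbook version with assumed parameters, not the focal-plane implementation described in the work.

    ```python
    # Simplified Laplacian pyramid decomposition.
    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    def laplacian_pyramid(image, n_levels=4, sigma=1.0):
        levels, current = [], image.astype(float)
        for _ in range(n_levels - 1):
            low = gaussian_filter(current, sigma)
            down = low[::2, ::2]                              # decimate by 2
            up = zoom(down, 2, order=1)[: current.shape[0], : current.shape[1]]
            levels.append(current - up)                       # band-pass residual
            current = down
        levels.append(current)                                # low-pass remainder
        return levels

    img = np.random.rand(64, 64)
    print([lvl.shape for lvl in laplacian_pyramid(img)])
    ```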

  7. Generic decoding of seen and imagined objects using hierarchical visual features.

    PubMed

    Horikawa, Tomoyasu; Kamitani, Yukiyasu

    2017-05-22

    Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
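
    A hedged sketch of the "generic decoding" idea described above: learn a linear mapping from brain-activity patterns to visual features, then identify the seen category by correlating the predicted feature vector with candidate category features. All data below are simulated and the dimensions are arbitrary assumptions; the actual study predicts features derived from a deep convolutional neural network from fMRI patterns.

    ```python
    # Feature-prediction decoding on simulated data: voxels -> features -> category.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    n_train, n_voxels, n_feats, n_categories = 120, 200, 50, 10
    W = rng.normal(size=(n_voxels, n_feats))                 # hidden voxel-feature coupling
    category_feats = rng.normal(size=(n_categories, n_feats))
    y_train = category_feats[rng.integers(0, n_categories, n_train)]
    X_train = y_train @ W.T + rng.normal(scale=0.5, size=(n_train, n_voxels))

    model = Ridge(alpha=10.0).fit(X_train, y_train)          # map voxels to features

    # Identify the category of a new "seen" pattern by feature correlation.
    true_cat = 3
    x_test = category_feats[true_cat] @ W.T + rng.normal(scale=0.5, size=n_voxels)
    pred = model.predict(x_test[None, :])[0]
    corr = [np.corrcoef(pred, f)[0, 1] for f in category_feats]
    print("identified category:", int(np.argmax(corr)), "(true:", true_cat, ")")
    ```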

  8. The sensory components of high-capacity iconic memory and visual working memory.

    PubMed

    Bradley, Claire; Pearson, Joel

    2012-01-01

Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more "high-level" alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of three different visual features (color, orientation and motion) across a range of durations from 0 to 6 s. We found that the amount of information stored in iconic memory is smaller for motion than for color or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ∼2 s. Further experiments showed that performance for the 10 items at 1 s was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory, and an effortful "lower-capacity" visual working memory.

  9. Training a cell-level classifier for detecting basal-cell carcinoma by combining human visual attention maps with low-level handcrafted features

    PubMed Central

    Corredor, Germán; Whitney, Jon; Arias, Viviana; Madabhushi, Anant; Romero, Eduardo

    2017-01-01

    Abstract. Computational histomorphometric approaches typically use low-level image features for building machine learning classifiers. However, these approaches usually ignore high-level expert knowledge. A computational model (M_im) combines low-, mid-, and high-level image information to predict the likelihood of cancer in whole slide images. Handcrafted low- and mid-level features are computed from area, color, and spatial nuclei distributions. High-level information is implicitly captured from the recorded navigations of pathologists while exploring whole slide images during diagnostic tasks. This model was validated by predicting the presence of cancer in a set of unseen fields of view. The available database was composed of 24 cases of basal-cell carcinoma, from which 17 served to estimate the model parameters and the remaining 7 comprised the evaluation set. A total of 274 fields of view of size 1024×1024  pixels were extracted from the evaluation set. Then 176 patches from this set were used to train a support vector machine classifier to predict the presence of cancer on a patch-by-patch basis while the remaining 98 image patches were used for independent testing, ensuring that the training and test sets do not comprise patches from the same patient. A baseline model (M_ex) estimated the cancer likelihood for each of the image patches. M_ex uses the same visual features as M_im, but its weights are estimated from nuclei manually labeled as cancerous or noncancerous by a pathologist. M_im achieved an accuracy of 74.49% and an F-measure of 80.31%, while M_ex yielded corresponding accuracy and F-measures of 73.47% and 77.97%, respectively. PMID:28382314
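    As a rough illustration of the patch-level classification step, the sketch below trains a support vector machine on handcrafted patch features while keeping patches from the same patient out of both training and test sets, in the spirit of the study design; feature extraction, split sizes, and all names are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GroupShuffleSplit
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def train_patch_classifier(X, y, groups):
    """X: patch-level feature matrix (e.g., nuclei area/color/spatial statistics),
    y: cancer vs. non-cancer labels, groups: patient IDs so training and test
    patches never come from the same patient."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.35, random_state=0)
    train_idx, test_idx = next(splitter.split(X, y, groups))
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X[train_idx], y[train_idx])
    accuracy = clf.score(X[test_idx], y[test_idx])  # patch-by-patch accuracy
    return clf, accuracy
```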

  10. The Neural Correlates of Hierarchical Predictions for Perceptual Decisions.

    PubMed

    Weilnhammer, Veith A; Stuke, Heiner; Sterzer, Philipp; Schmack, Katharina

    2018-05-23

    Sensory information is inherently noisy, sparse, and ambiguous. In contrast, visual experience is usually clear, detailed, and stable. Bayesian theories of perception resolve this discrepancy by assuming that prior knowledge about the causes underlying sensory stimulation actively shapes perceptual decisions. The CNS is believed to entertain a generative model aligned to dynamic changes in the hierarchical states of our volatile sensory environment. Here, we used model-based fMRI to study the neural correlates of the dynamic updating of hierarchically structured predictions in male and female human observers. We devised a crossmodal associative learning task with covertly interspersed ambiguous trials in which participants engaged in hierarchical learning based on changing contingencies between auditory cues and visual targets. By inverting a Bayesian model of perceptual inference, we estimated individual hierarchical predictions, which significantly biased perceptual decisions under ambiguity. Although "high-level" predictions about the cue-target contingency correlated with activity in supramodal regions such as orbitofrontal cortex and hippocampus, dynamic "low-level" predictions about the conditional target probabilities were associated with activity in retinotopic visual cortex. Our results suggest that our CNS updates distinct representations of hierarchical predictions that continuously affect perceptual decisions in a dynamically changing environment. SIGNIFICANCE STATEMENT Bayesian theories posit that our brain entertains a generative model to provide hierarchical predictions regarding the causes of sensory information. Here, we use behavioral modeling and fMRI to study the neural underpinnings of such hierarchical predictions. We show that "high-level" predictions about the strength of dynamic cue-target contingencies during crossmodal associative learning correlate with activity in orbitofrontal cortex and the hippocampus, whereas "low-level" conditional target probabilities were reflected in retinotopic visual cortex. Our findings empirically corroborate theorizations on the role of hierarchical predictions in visual perception and contribute substantially to a longstanding debate on the link between sensory predictions and orbitofrontal or hippocampal activity. Our work fundamentally advances the mechanistic understanding of perceptual inference in the human brain. Copyright © 2018 the authors 0270-6474/18/385008-14$15.00/0.

  11. Educating the blind brain: a panorama of neural bases of vision and of training programs in organic neurovisual deficits

    PubMed Central

    Coubard, Olivier A.; Urbanski, Marika; Bourlon, Clémence; Gaumet, Marie

    2014-01-01

    Vision is a complex function, which is achieved by movements of the eyes to properly foveate targets at any location in 3D space and to continuously refresh neural information in the different visual pathways. The visual system involves five main routes originating in the retinas but varying in their destination within the brain: the occipital cortex, but also the superior colliculus (SC), the pretectum, the supra-chiasmatic nucleus, the nucleus of the optic tract and terminal dorsal, medial and lateral nuclei. Visual pathway architecture obeys systematization in sagittal and transversal planes so that visual information from left/right and upper/lower hemi-retinas, corresponding respectively to right/left and lower/upper visual fields, is processed ipsilaterally and ipsialtitudinally to hemi-retinas in left/right hemispheres and upper/lower fibers. Organic neurovisual deficits may occur at any level of this circuitry from the optic nerve to subcortical and cortical destinations, resulting in low or high-level visual deficits. In this didactic review article, we provide a panorama of the neural bases of eye movements and visual systems, and of related neurovisual deficits. Additionally, we briefly review the different schools of rehabilitation of organic neurovisual deficits, and show that whatever the emphasis is put on action or perception, benefits may be observed at both motor and perceptual levels. Given the extent of its neural bases in the brain, vision in its motor and perceptual aspects is also a useful tool to assess and modulate central nervous system (CNS) in general. PMID:25538575

  12. The impact of modality and working memory capacity on achievement in a multimedia environment

    NASA Astrophysics Data System (ADS)

    Stromfors, Charlotte M.

This study explored the impact of working memory capacity on student learning in a dual modality, multimedia environment titled Visualizing Topography. This computer-based instructional program focused on the basic skills in reading and interpreting topographic maps. Two versions of the program presented the same instructional content but varied the modality of verbal information: the audio-visual condition coordinated topographic maps and narration; the visual-visual condition provided the same topographic maps with readable text. An analysis of covariance procedure was conducted to evaluate the effects due to the two conditions in relation to working memory capacity, controlling for individual differences in spatial visualization and prior knowledge. The scores on the Figural Intersection Test were used to separate subjects into three levels in terms of their measured working memory capacity: low, medium, and high. Subjects accessed Visualizing Topography by way of the Internet and proceeded independently through the program. The program architecture was linear in format. Subjects had a minimum amount of flexibility within each of five segments, but not between segments. One hundred and fifty-one subjects were randomly assigned to either the audio-visual or the visual-visual condition. The average time spent in the program was thirty-one minutes. The results of the ANCOVA revealed a small to moderate modality effect favoring an audio-visual condition. The results also showed that subjects with low and medium working memory capacity benefited more from the audio-visual condition than the visual-visual condition, while subjects with a high working memory capacity did not benefit from either condition. Although splitting the data reduced group sizes, ANCOVA results by gender suggested that the audio-visual condition favored females with low working memory capacities. The results have implications for designers of educational software, the teachers who select software, and the students themselves. Splitting information into two, non-redundant sources, one audio and one visual, may effectively extend working memory capacity. This is especially significant for the student population encountering difficult science concepts that require the formation and manipulation of mental representations. It is recommended that multimedia environments be designed or selected with attention to modality conditions that facilitate student learning.

  13. Neurophysiology underlying influence of stimulus reliability on audiovisual integration.

    PubMed

    Shatzer, Hannah; Shen, Stanley; Kerlin, Jess R; Pitt, Mark A; Shahin, Antoine J

    2018-01-24

    We tested the predictions of the dynamic reweighting model (DRM) of audiovisual (AV) speech integration, which posits that spectrotemporally reliable (informative) AV speech stimuli induce a reweighting of processing from low-level to high-level auditory networks. This reweighting decreases sensitivity to acoustic onsets and in turn increases tolerance to AV onset asynchronies (AVOA). EEG was recorded while subjects watched videos of a speaker uttering trisyllabic nonwords that varied in spectrotemporal reliability and asynchrony of the visual and auditory inputs. Subjects judged the stimuli as in-sync or out-of-sync. Results showed that subjects exhibited greater AVOA tolerance for non-blurred than blurred visual speech and for less than more degraded acoustic speech. Increased AVOA tolerance was reflected in reduced amplitude of the P1-P2 auditory evoked potentials, a neurophysiological indication of reduced sensitivity to acoustic onsets and successful AV integration. There was also sustained visual alpha band (8-14 Hz) suppression (desynchronization) following acoustic speech onsets for non-blurred vs. blurred visual speech, consistent with continuous engagement of the visual system as the speech unfolds. The current findings suggest that increased spectrotemporal reliability of acoustic and visual speech promotes robust AV integration, partly by suppressing sensitivity to acoustic onsets, in support of the DRM's reweighting mechanism. Increased visual signal reliability also sustains the engagement of the visual system with the auditory system to maintain alignment of information across modalities. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. Art Expertise Reduces Influence of Visual Salience on Fixation in Viewing Abstract Paintings

    PubMed Central

    Koide, Naoko; Kubo, Takatomi; Nishida, Satoshi; Shibata, Tomohiro; Ikeda, Kazushi

    2015-01-01

    When viewing a painting, artists perceive more information from the painting on the basis of their experience and knowledge than art novices do. This difference can be reflected in eye scan paths during viewing of paintings. Distributions of scan paths of artists are different from those of novices even when the paintings contain no figurative object (i.e. abstract paintings). There are two possible explanations for this difference of scan paths. One is that artists have high sensitivity to high-level features such as textures and composition of colors and therefore their fixations are more driven by such features compared with novices. The other is that fixations of artists are more attracted by salient features than those of novices and the fixations are driven by low-level features. To test these, we measured eye fixations of artists and novices during the free viewing of various abstract paintings and compared the distribution of their fixations for each painting with a topological attentional map that quantifies the conspicuity of low-level features in the painting (i.e. saliency map). We found that the fixation distribution of artists was more distinguishable from the saliency map than that of novices. This difference indicates that fixations of artists are less driven by low-level features than those of novices. Our result suggests that artists may extract visual information from paintings based on high-level features. This ability of artists may be associated with artists’ deep aesthetic appreciation of paintings. PMID:25658327
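    One common way to quantify how well a low-level saliency map predicts fixations (not necessarily the exact analysis used in this study) is an ROC analysis that treats saliency as a score for discriminating fixated from non-fixated pixels; a minimal sketch, with illustrative names:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def saliency_fixation_auc(saliency_map, fixation_map):
    """AUC for discriminating fixated from non-fixated pixels by saliency value.

    saliency_map : 2-D float array (conspicuity of low-level features)
    fixation_map : 2-D binary array (1 where an observer fixated)
    """
    labels = fixation_map.ravel().astype(int)
    scores = saliency_map.ravel()
    return roc_auc_score(labels, scores)
```

    Lower values for one observer group than another would indicate fixations that are less driven by low-level salience, as reported for the artists.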

  15. Temporal Restricted Visual Tracking Via Reverse-Low-Rank Sparse Learning.

    PubMed

    Yang, Yehui; Hu, Wenrui; Xie, Yuan; Zhang, Wensheng; Zhang, Tianzhu

    2017-02-01

An effective representation model, which aims to mine the most meaningful information in the data, plays an important role in visual tracking. Some recent particle-filter-based trackers achieve promising results by introducing the low-rank assumption into the representation model. However, their assumed low-rank structure of candidates limits the robustness when facing severe challenges such as abrupt motion. To avoid the above limitation, we propose a temporal restricted reverse-low-rank learning algorithm for visual tracking with the following advantages: 1) the reverse-low-rank model jointly represents target and background templates via candidates, which exploits the low-rank structure among consecutive target observations and enforces the temporal consistency of the target at a global level; 2) the appearance consistency may be broken when the target suffers from sudden changes. To overcome this issue, we propose a local constraint via an ℓ1,2 mixed-norm, which not only ensures the local consistency of target appearance, but also tolerates sudden changes between two adjacent frames; and 3) to alleviate the inference of unreasonable representation values due to outlier candidates, an adaptive weighted scheme is designed to improve the robustness of the tracker. Experiments on 26 challenging video sequences show the effectiveness and favorable performance of the proposed algorithm against 12 state-of-the-art visual trackers.
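    The ℓ1,2 mixed-norm referenced in the abstract admits several conventions. Under one common definition (the ℓ2 norm of the column-wise ℓ1 norms), a minimal sketch is given below; this is a generic illustration, not the authors' formulation or code.

```python
import numpy as np

def mixed_norm_l12(A):
    """l_{1,2} mixed norm under one common convention:
    ||A||_{1,2} = sqrt( sum_j ( sum_i |A[i, j]| )**2 ),
    i.e., the l2 norm of the vector of column-wise l1 norms.
    Penalizing it encourages structure shared within each column
    (here, per-candidate consistency of representation values)."""
    col_l1 = np.abs(A).sum(axis=0)
    return np.sqrt((col_l1 ** 2).sum())
```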

  16. Influence of anxiety on memory performance in temporal lobe epilepsy.

    PubMed

    Brown, Franklin C; Westerveld, Michael; Langfitt, John T; Hamberger, Marla; Hamid, Hamada; Shinnar, Shlomo; Sperling, Michael R; Devinsky, Orrin; Barr, William; Tracy, Joseph; Masur, David; Bazil, Carl W; Spencer, Susan S

    2014-02-01

    This study examined the degree to which anxiety contributed to inconsistent material-specific memory difficulties among 243 patients with temporal lobe epilepsy from the Multisite Epilepsy Study. Visual memory performance on the Rey Complex Figure Test (RCFT) was poorer for those with high versus low levels of anxiety but was not found to be related to the TLE side. The verbal memory score on the California Verbal Learning Test (CVLT) was significantly lower for patients with left-sided TLE than for patients with right-sided TLE with low anxiety levels but equally impaired for those with high anxiety levels. These results suggest that we can place more confidence in the ability of verbal memory tests like the CVLT to lateralize to left-sided TLE for those with low anxiety levels, but that verbal memory will be less likely to produce lateralizing information for those with high anxiety levels. This suggests that more caution is needed when interpreting verbal memory tests for those with high anxiety levels. These results indicated that RCFT performance was significantly affected by anxiety and did not lateralize to either side, regardless of anxiety levels. This study adds to the existing literature which suggests that drawing-based visual memory tests do not lateralize among patients with TLE, regardless of anxiety levels. © 2013.

  17. Spoken words can make the invisible visible-Testing the involvement of low-level visual representations in spoken word processing.

    PubMed

    Ostarek, Markus; Huettig, Falk

    2017-03-01

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" → picture of a bottle) versus incongruent ("bottle" → picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, that is, what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Cognitive tunneling: use of visual information under stress.

    PubMed

    Dirkin, G R

    1983-02-01

    References to "tunnel vision" under stress are considered to describe a process of attentional, rather than visual, narrowing. The hypothesis of Easterbrook that the range of cue utilization is reduced under stress was tested with a primary task located in the visual periphery. High school volunteers performed a visual discrimination task with choice reaction time (RT) as the dependent variable. A 2 X 3 order of presentation by practice design, with repeated measures on the last factor, was employed. Two levels of stress, high and low, were operationalized by the subject's performing in the presence of an evaluative audience or alone. Pulse rate was employed as a manipulation check on arousal. The results partially supported the hypothesis that a peripherally visual primary task could be attended to under stress without decrement in performance.

  19. Multivariate Patterns in the Human Object-Processing Pathway Reveal a Shift from Retinotopic to Shape Curvature Representations in Lateral Occipital Areas, LO-1 and LO-2.

    PubMed

    Vernon, Richard J W; Gouws, André D; Lawrence, Samuel J D; Wade, Alex R; Morland, Antony B

    2016-05-25

    Representations in early visual areas are organized on the basis of retinotopy, but this organizational principle appears to lose prominence in the extrastriate cortex. Nevertheless, an extrastriate region, such as the shape-selective lateral occipital cortex (LO), must still base its activation on the responses from earlier retinotopic visual areas, implying that a transition from retinotopic to "functional" organizations should exist. We hypothesized that such a transition may lie in LO-1 or LO-2, two visual areas lying between retinotopically defined V3d and functionally defined LO. Using a rapid event-related fMRI paradigm, we measured neural similarity in 12 human participants between pairs of stimuli differing along dimensions of shape exemplar and shape complexity within both retinotopically and functionally defined visual areas. These neural similarity measures were then compared with low-level and more abstract (curvature-based) measures of stimulus similarity. We found that low-level, but not abstract, stimulus measures predicted V1-V3 responses, whereas the converse was true for LO, a double dissociation. Critically, abstract stimulus measures were most predictive of responses within LO-2, akin to LO, whereas both low-level and abstract measures were predictive for responses within LO-1, perhaps indicating a transitional point between those two organizational principles. Similar transitions to abstract representations were not observed in the more ventral stream passing through V4 and VO-1/2. The transition we observed in LO-1 and LO-2 demonstrates that a more "abstracted" representation, typically considered the preserve of "category-selective" extrastriate cortex, can nevertheless emerge in retinotopic regions. Visual areas are typically identified either through retinotopy (e.g., V1-V3) or from functional selectivity [e.g., shape-selective lateral occipital complex (LOC)]. We combined these approaches to explore the nature of shape representations through the visual hierarchy. Two different representations emerged: the first reflected low-level shape properties (dependent on the spatial layout of the shape outline), whereas the second captured more abstract curvature-related shape features. Critically, early visual cortex represented low-level information but this diminished in the extrastriate cortex (LO-1/LO-2/LOC), in which the abstract representation emerged. Therefore, this work further elucidates the nature of shape representations in the LOC, provides insight into how those representations emerge from early retinotopic cortex, and crucially demonstrates that retinotopically tuned regions (LO-1/LO-2) are not necessarily constrained to retinotopic representations. Copyright © 2016 Vernon et al.
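    The comparison between neural similarity and model-based stimulus similarity described above is typically carried out with representational similarity analysis; a generic sketch (not the authors' code) correlating the lower triangles of two dissimilarity matrices follows, with illustrative names.

```python
import numpy as np
from scipy.stats import spearmanr

def compare_rdms(neural_rdm, model_rdm):
    """Spearman correlation between the lower triangles of a neural
    representational dissimilarity matrix and a model RDM (e.g., built from
    low-level outline overlap or from abstract curvature descriptors)."""
    idx = np.tril_indices_from(neural_rdm, k=-1)  # exclude diagonal and upper triangle
    rho, p = spearmanr(neural_rdm[idx], model_rdm[idx])
    return rho, p
```

    Running this for each visual area against both a low-level and a curvature-based model RDM is one way to expose the kind of transition reported for LO-1 and LO-2.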

  20. Multivariate Patterns in the Human Object-Processing Pathway Reveal a Shift from Retinotopic to Shape Curvature Representations in Lateral Occipital Areas, LO-1 and LO-2

    PubMed Central

    Vernon, Richard J. W.; Gouws, André D.; Lawrence, Samuel J. D.; Wade, Alex R.

    2016-01-01

    Representations in early visual areas are organized on the basis of retinotopy, but this organizational principle appears to lose prominence in the extrastriate cortex. Nevertheless, an extrastriate region, such as the shape-selective lateral occipital cortex (LO), must still base its activation on the responses from earlier retinotopic visual areas, implying that a transition from retinotopic to “functional” organizations should exist. We hypothesized that such a transition may lie in LO-1 or LO-2, two visual areas lying between retinotopically defined V3d and functionally defined LO. Using a rapid event-related fMRI paradigm, we measured neural similarity in 12 human participants between pairs of stimuli differing along dimensions of shape exemplar and shape complexity within both retinotopically and functionally defined visual areas. These neural similarity measures were then compared with low-level and more abstract (curvature-based) measures of stimulus similarity. We found that low-level, but not abstract, stimulus measures predicted V1–V3 responses, whereas the converse was true for LO, a double dissociation. Critically, abstract stimulus measures were most predictive of responses within LO-2, akin to LO, whereas both low-level and abstract measures were predictive for responses within LO-1, perhaps indicating a transitional point between those two organizational principles. Similar transitions to abstract representations were not observed in the more ventral stream passing through V4 and VO-1/2. The transition we observed in LO-1 and LO-2 demonstrates that a more “abstracted” representation, typically considered the preserve of “category-selective” extrastriate cortex, can nevertheless emerge in retinotopic regions. SIGNIFICANCE STATEMENT Visual areas are typically identified either through retinotopy (e.g., V1–V3) or from functional selectivity [e.g., shape-selective lateral occipital complex (LOC)]. We combined these approaches to explore the nature of shape representations through the visual hierarchy. Two different representations emerged: the first reflected low-level shape properties (dependent on the spatial layout of the shape outline), whereas the second captured more abstract curvature-related shape features. Critically, early visual cortex represented low-level information but this diminished in the extrastriate cortex (LO-1/LO-2/LOC), in which the abstract representation emerged. Therefore, this work further elucidates the nature of shape representations in the LOC, provides insight into how those representations emerge from early retinotopic cortex, and crucially demonstrates that retinotopically tuned regions (LO-1/LO-2) are not necessarily constrained to retinotopic representations. PMID:27225766

  1. Automatic summarization of soccer highlights using audio-visual descriptors.

    PubMed

    Raventós, A; Quijada, R; Torres, Luis; Tarrés, Francesc

    2015-01-01

Automatic summarization generation of sports video content has been an object of great interest for many years. Although semantic description techniques have been proposed, many of the approaches still rely on low-level video descriptors that render quite limited results due to the complexity of the problem and to the low capability of the descriptors to represent semantic content. In this paper, a new approach for automatic highlights summarization generation of soccer videos using audio-visual descriptors is presented. The approach is based on the segmentation of the video sequence into shots that will be further analyzed to determine their relevance and interest. Of special interest in the approach is the use of the audio information that provides additional robustness to the overall performance of the summarization system. For every video shot a set of low- and mid-level audio-visual descriptors are computed and later adequately combined in order to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting those shots with highest interest according to the specifications of the user and the results of relevance measures. A variety of results are presented with real soccer video sequences that prove the validity of the approach.
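    As an illustrative sketch only (the paper's actual descriptors and empirical rules are not reproduced), shot-level relevance scoring and summary selection could be organized as follows; all field names and weights are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Shot:
    start: float            # seconds
    end: float
    audio_energy: float     # proxy for crowd/commentator excitement
    motion_activity: float  # mid-level visual descriptor
    close_up_ratio: float   # fraction of frames classified as close-ups

def summarize(shots: List[Shot], max_duration: float,
              weights=(0.5, 0.3, 0.2)) -> List[Shot]:
    """Rank shots by a weighted combination of audio-visual descriptors and
    keep the highest-scoring ones until the requested summary length is filled."""
    wa, wm, wc = weights
    ranked = sorted(shots,
                    key=lambda s: wa * s.audio_energy
                                + wm * s.motion_activity
                                + wc * s.close_up_ratio,
                    reverse=True)
    summary, total = [], 0.0
    for shot in ranked:
        duration = shot.end - shot.start
        if total + duration > max_duration:
            continue
        summary.append(shot)
        total += duration
    return sorted(summary, key=lambda s: s.start)  # restore chronological order
```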

  2. The Usefulness of Tactual Maps of the New York City Subway System.

    ERIC Educational Resources Information Center

    Luxton, K.; And Others

    1994-01-01

    Sixteen people with blindness or visual impairments used three different types of tactual maps of the New York City subway system presenting information at three levels of specificity. Results indicated that the tactual maps improved participants' attitudes toward the subway and benefited blind as well as low vision participants. (Author/DB)

  3. Optimizing text for an individual's visual system: The contribution of visual crowding to reading difficulties.

    PubMed

    Joo, Sung Jun; White, Alex L; Strodtman, Douglas J; Yeatman, Jason D

    2018-06-01

Reading is a complex process that involves low-level visual processing, phonological processing, and higher-level semantic processing. Given that skilled reading requires integrating information among these different systems, it is likely that reading difficulty, known as dyslexia, can emerge from impairments at any stage of the reading circuitry. To understand contributing factors to reading difficulties within individuals, it is necessary to diagnose the function of each component of the reading circuitry. Here, we investigated whether adults with dyslexia who have impairments in visual processing respond to a visual manipulation specifically targeting their impairment. We collected psychophysical measures of visual crowding and tested how each individual's reading performance was affected by increased text-spacing, a manipulation designed to alleviate severe crowding. Critically, we identified a sub-group of individuals with dyslexia showing elevated crowding and found that these individuals read faster when text was rendered with increased letter-, word- and line-spacing. Our findings point to a subtype of dyslexia involving elevated crowding and demonstrate that individuals benefit from interventions personalized to their specific impairments. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Electrocortical amplification for emotionally arousing natural scenes: The contribution of luminance and chromatic visual channels

    PubMed Central

    Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias M.; Petro, Nathan M.; Bradley, Margaret M.; Keil, Andreas

    2015-01-01

    Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene’s physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. PMID:25640949

  5. Electrocortical amplification for emotionally arousing natural scenes: the contribution of luminance and chromatic visual channels.

    PubMed

    Miskovic, Vladimir; Martinovic, Jasna; Wieser, Matthias J; Petro, Nathan M; Bradley, Margaret M; Keil, Andreas

    2015-03-01

    Emotionally arousing scenes readily capture visual attention, prompting amplified neural activity in sensory regions of the brain. The physical stimulus features and related information channels in the human visual system that contribute to this modulation, however, are not known. Here, we manipulated low-level physical parameters of complex scenes varying in hedonic valence and emotional arousal in order to target the relative contributions of luminance based versus chromatic visual channels to emotional perception. Stimulus-evoked brain electrical activity was measured during picture viewing and used to quantify neural responses sensitive to lower-tier visual cortical involvement (steady-state visual evoked potentials) as well as the late positive potential, reflecting a more distributed cortical event. Results showed that the enhancement for emotional content was stimulus-selective when examining the steady-state segments of the evoked visual potentials. Response amplification was present only for low spatial frequency, grayscale stimuli, and not for high spatial frequency, red/green stimuli. In contrast, the late positive potential was modulated by emotion regardless of the scene's physical properties. Our findings are discussed in relation to neurophysiologically plausible constraints operating at distinct stages of the cortical processing stream. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain.

    PubMed

    Groen, Iris I A; Silson, Edward H; Baker, Chris I

    2017-02-19

Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  7. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain

    PubMed Central

    2017-01-01

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044013

  8. Impact of language on development of auditory-visual speech perception.

    PubMed

    Sekiyama, Kaoru; Burnham, Denis

    2008-03-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.

  9. Surround-Masking Affects Visual Estimation Ability

    PubMed Central

    Jastrzebski, Nicola R.; Hugrass, Laila E.; Crewther, Sheila G.; Crewther, David P.

    2017-01-01

    Visual estimation of numerosity involves the discrimination of magnitude between two distributions or perceptual sets that vary in number of elements. How performance on such estimation depends on peripheral sensory stimulation is unclear, even in typically developing adults. Here, we varied the central and surround contrast of stimuli that comprised a visual estimation task in order to determine whether mechanisms involved with the removal of unessential visual input functionally contributes toward number acuity. The visual estimation judgments of typically developed adults were significantly impaired for high but not low contrast surround stimulus conditions. The center and surround contrasts of the stimuli also differentially affected the accuracy of numerosity estimation depending on whether fewer or more dots were presented. Remarkably, observers demonstrated the highest mean percentage accuracy across stimulus conditions in the discrimination of more elements when the surround contrast was low and the background luminance of the central region containing the elements was dark (black center). Conversely, accuracy was severely impaired during the discrimination of fewer elements when the surround contrast was high and the background luminance of the central region was mid level (gray center). These findings suggest that estimation ability is functionally related to the quality of low-order filtration of unessential visual information. These surround masking results may help understanding of the poor visual estimation ability commonly observed in developmental dyscalculia. PMID:28360845

  10. A Notation for Rapid Specification of Information Visualization

    ERIC Educational Resources Information Center

    Lee, Sang Yun

    2013-01-01

    This thesis describes a notation for rapid specification of information visualization, which can be used as a theoretical framework of integrating various types of information visualization, and its applications at a conceptual level. The notation is devised to codify the major characteristics of data/visual structures in conventionally-used…

  11. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    PubMed

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  12. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA

    PubMed Central

    Wilbiks, Jonathan M. P.; Dyson, Benjamin J.

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus. PMID:27977790

  13. Nonretinotopic visual processing in the brain.

    PubMed

    Melcher, David; Morrone, Maria Concetta

    2015-01-01

    A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.

  14. The perception of surface layout during low level flight

    NASA Technical Reports Server (NTRS)

    Perrone, John A.

    1991-01-01

    Although it is fairly well established that information about surface layout can be gained from motion cues, it is not so clear as to what information humans can use and what specific information they should be provided. Theoretical analyses tell us that the information is in the stimulus. It will take more experiments to verify that this information can be used by humans to extract surface layout from the 2D velocity flow field. The visual motion factors that can affect the pilot's ability to control an aircraft and to infer the layout of the terrain ahead are discussed.

  15. Low Vision Rehabilitation for Adult African Americans in Two Settings.

    PubMed

    Draper, Erin M; Feng, Rui; Appel, Sarah D; Graboyes, Marcy; Engle, Erin; Ciner, Elise B; Ellenberg, Jonas H; Stambolian, Dwight

    2016-07-01

    The Vision Rehabilitation for African Americans with Central Vision Impairment (VISRAC) study is a demonstration project evaluating how modifications in vision rehabilitation can improve the use of functional vision. Fifty-five African Americans 40 years of age and older with central vision impairment were randomly assigned to receive either clinic-based (CB) or home-based (HB) low vision rehabilitation services. Forty-eight subjects completed the study. The primary outcome was the change in functional vision in activities of daily living, as assessed with the Veteran's Administration Low-Vision Visual Function Questionnaire (VFQ-48). This included scores for overall visual ability and visual ability domains (reading, mobility, visual information processing, and visual motor skills). Each score was normalized into logit estimates by Rasch analysis. Linear regression models were used to compare the difference in the total score and each domain score between the two intervention groups. The significance level for each comparison was set at 0.05. Both CB and HB groups showed significant improvement in overall visual ability at the final visit compared with baseline. The CB group showed greater improvement than the HB group (mean of 1.28 vs. 0.87 logits change), though the group difference is not significant (p = 0.057). The CB group visual motor skills score showed significant improvement over the HB group score (mean of 3.30 vs. 1.34 logits change, p = 0.044). The differences in improvement of the reading and visual information processing scores were not significant (p = 0.054 and p = 0.509) between groups. Neither group had significant improvement in the mobility score, which was not part of the rehabilitation program. Vision rehabilitation is effective for this study population regardless of location. Possible reasons why the CB group performed better than the HB group include a number of psychosocial factors as well as the more standardized distraction-free work environment within the clinic setting.

  16. Accessibility of health clubs for people with mobility disabilities and visual impairments.

    PubMed

    Rimmer, James H; Riley, Barth; Wang, Edward; Rauworth, Amy

    2005-11-01

    We sought to examine the accessibility of health clubs to persons with mobility disabilities and visual impairments. We assessed 35 health clubs and fitness facilities as part of a national field trial of a new instrument, Accessibility Instruments Measuring Fitness and Recreation Environments (AIMFREE), designed to assess accessibility of fitness facilities in the following domains: (1) built environment, (2) equipment, (3) swimming pools, (4) information, (5) facility policies, and (6) professional behavior. All facilities had a low to moderate level of accessibility. Some of the deficiencies concerned specific Americans with Disabilities Act guidelines pertaining to the built environment, whereas other deficiency areas were related to aspects of the facilities' equipment, information, policies, and professional staff. Persons with mobility disabilities and visual impairments have difficulty accessing various areas of fitness facilities and health clubs. AIMFREE is an important tool for increasing awareness of these accessibility barriers for people with disabilities.

  17. Accessibility of Health Clubs for People with Mobility Disabilities and Visual Impairments

    PubMed Central

    Rimmer, James H.; Riley, Barth; Wang, Edward; Rauworth, Amy

    2005-01-01

    Objective. We sought to examine the accessibility of health clubs to persons with mobility disabilities and visual impairments. Methods. We assessed 35 health clubs and fitness facilities as part of a national field trial of a new instrument, Accessibility Instruments Measuring Fitness and Recreation Environments (AIMFREE), designed to assess accessibility of fitness facilities in the following domains: (1) built environment, (2) equipment, (3) swimming pools, (4) information, (5) facility policies, and (6) professional behavior. Results. All facilities had a low to moderate level of accessibility. Some of the deficiencies concerned specific Americans with Disabilities Act guidelines pertaining to the built environment, whereas other deficiency areas were related to aspects of the facilities’ equipment, information, policies, and professional staff. Conclusions. Persons with mobility disabilities and visual impairments have difficulty accessing various areas of fitness facilities and health clubs. AIMFREE is an important tool for increasing awareness of these accessibility barriers for people with disabilities. PMID:16254234

  18. Optoelectronic aid for patients with severely restricted visual fields in daylight conditions

    NASA Astrophysics Data System (ADS)

    Peláez-Coca, María Dolores; Sobrado-Calvo, Paloma; Vargas-Martín, Fernando

    2011-11-01

In this study we evaluated the immediate effectiveness of an optoelectronic visual field expander in a sample of subjects with retinitis pigmentosa suffering from a severe peripheral visual field restriction. The aid uses the augmented view concept and provides subjects with visual information from outside their visual field. The tests were carried out in daylight conditions. The optoelectronic aid comprises an FPGA (real-time video processor), a wide-angle mini camera and a transparent see-through head-mounted display. This optoelectronic aid is called SERBA (Sistema Electro-óptico Reconfigurable de Ayuda para Baja Visión). We previously showed that, without compromising residual vision, the SERBA system provides information about objects within an area about three times greater on average than the remaining visual field of the subjects [1]. In this paper we address the effects of the device on mobility under daylight conditions with and without SERBA. The participants were six subjects with retinitis pigmentosa. In this mobility test, better results were obtained when subjects were wearing the SERBA system; specifically, both the number of contacts with low-level obstacles and mobility errors decreased significantly. A longer training period with the device might improve its usefulness.

  19. High visual resolution matters in audiovisual speech perception, but only for some.

    PubMed

    Alsius, Agnès; Wayne, Rachel V; Paré, Martin; Munhall, Kevin G

    2016-07-01

    The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.

  20. Audio-visual presentation of information for informed consent for participation in clinical trials.

    PubMed

    Synnot, Anneliese; Ryan, Rebecca; Prictor, Megan; Fetherstonhaugh, Deirdre; Parker, Barbara

    2014-05-09

    Informed consent is a critical component of clinical research. Different methods of presenting information to potential participants of clinical trials may improve the informed consent process. Audio-visual interventions (presented, for example, on the Internet or on DVD) are one such method. We updated a 2008 review of the effects of these interventions for informed consent for trial participation. To assess the effects of audio-visual information interventions regarding informed consent compared with standard information or placebo audio-visual interventions regarding informed consent for potential clinical trial participants, in terms of their understanding, satisfaction, willingness to participate, and anxiety or other psychological distress. We searched: the Cochrane Central Register of Controlled Trials (CENTRAL), The Cochrane Library, issue 6, 2012; MEDLINE (OvidSP) (1946 to 13 June 2012); EMBASE (OvidSP) (1947 to 12 June 2012); PsycINFO (OvidSP) (1806 to June week 1 2012); CINAHL (EbscoHOST) (1981 to 27 June 2012); Current Contents (OvidSP) (1993 Week 27 to 2012 Week 26); and ERIC (Proquest) (searched 27 June 2012). We also searched reference lists of included studies and relevant review articles, and contacted study authors and experts. There were no language restrictions. We included randomised and quasi-randomised controlled trials comparing audio-visual information alone, or in conjunction with standard forms of information provision (such as written or verbal information), with standard forms of information provision or placebo audio-visual information, in the informed consent process for clinical trials. Trials involved individuals or their guardians asked to consider participating in a real or hypothetical clinical study. (In the earlier version of this review we only included studies evaluating informed consent interventions for real studies.) Two authors independently assessed studies for inclusion and extracted data. We synthesised the findings using meta-analysis, where possible, and narrative synthesis of results. We assessed the risk of bias of individual studies and considered the impact of the quality of the overall evidence on the strength of the results. We included 16 studies involving data from 1884 participants. Nine studies included participants considering real clinical trials, and eight included participants considering hypothetical clinical trials, with one including both. All studies were conducted in high-income countries. There is still much uncertainty about the effect of audio-visual informed consent interventions on a range of patient outcomes. However, when considered across comparisons, we found low to very low quality evidence that such interventions may slightly improve knowledge or understanding of the parent trial, but may make little or no difference to rate of participation or willingness to participate. Audio-visual presentation of informed consent may improve participant satisfaction with the consent information provided. However, its effect on satisfaction with other aspects of the process is not clear. There is insufficient evidence to draw conclusions about anxiety arising from audio-visual informed consent. We found conflicting, very low quality evidence about whether audio-visual interventions took more or less time to administer. No study measured researcher satisfaction with the informed consent process, nor ease of use. The evidence from real clinical trials was rated as low quality for most outcomes, and for hypothetical studies, very low. We note, however, that this was in large part due to poor study reporting, the hypothetical nature of some studies, and low participant numbers, rather than inconsistent results between studies or confirmed poor trial quality. We do not believe that any studies were funded by organisations with a vested interest in the results. The value of audio-visual interventions as a tool for helping to enhance the informed consent process for people considering participating in clinical trials remains largely unclear, although trends are emerging with regard to improvements in knowledge and satisfaction. Many relevant outcomes have not been evaluated in randomised trials. Trialists should continue to explore innovative methods of providing information to potential trial participants during the informed consent process, mindful of the range of outcomes that the intervention should be designed to achieve, and balancing the resource implications of intervention development and delivery against the purported benefits of any intervention. More trials, adhering to CONSORT standards, and conducted in settings and populations underserved in this review, i.e. low- and middle-income countries and people with low literacy, would strengthen the results of this review and broaden its applicability. Assessing process measures, such as time taken to administer the intervention and researcher satisfaction, would inform the implementation of audio-visual consent materials.

  1. The Effect of Gender and Level of Vision on the Physical Activity Level of Children and Adolescents with Visual Impairment

    ERIC Educational Resources Information Center

    Aslan, Ummuhan Bas; Calik, Bilge Basakci; Kitis, Ali

    2012-01-01

    This study was planned in order to determine physical activity levels of visually impaired children and adolescents and to investigate the effect of gender and level of vision on physical activity level in visually impaired children and adolescents. A total of 30 visually impaired children and adolescents (16 low vision and 14 blind) aged between…

  2. Simultaneous colour visualizations of multiple ALS point cloud attributes for land cover and vegetation analysis

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Schroiff, Anke; Otepka, Johannes; Mandlburger, Gottfried; Pfeifer, Norbert

    2014-05-01

    LIDAR point clouds hold valuable information for land cover and vegetation analysis, not only in the spatial distribution of the points but also in their various attributes. However, LIDAR point clouds are rarely used for visual interpretation, since for most users, the point cloud is difficult to interpret compared to passive optical imagery. Meanwhile, point cloud viewing software is available allowing interactive 3D interpretation, but typically only one attribute at a time. This results in a large number of points with the same colour, crowding the scene and often obscuring detail. We developed a scheme for mapping information from multiple LIDAR point attributes to the Red, Green, and Blue channels of a widely used LIDAR data format, which are otherwise mostly used to add information from imagery to create "photorealistic" point clouds. The possible combinations of parameters are therefore represented in a wide range of colours, but relative differences in individual parameter values of points can be well understood. The visualization was implemented in OPALS software, using a simple and robust batch script, and is viewer independent since the information is stored in the point cloud data file itself. In our case, the following colour channel assignment delivered best results: Echo amplitude in the Red, echo width in the Green and normalized height above a Digital Terrain Model in the Blue channel. With correct parameter scaling (but completely without point classification), points belonging to asphalt and bare soil are dark red, low grassland and crop vegetation are bright red to yellow, shrubs and low trees are green and high trees are blue. Depending on roof material and DTM quality, buildings are shown from red through purple to dark blue. Erroneously high or low points, or points with incorrect amplitude or echo width usually have colours contrasting from terrain or vegetation. This allows efficient visual interpretation of the point cloud in planar, profile and 3D views since it reduces crowding of the scene and delivers intuitive contextual information. The resulting visualization has proved useful for vegetation analysis for habitat mapping, and can also be applied as a first step for point cloud level classification. An interactive demonstration of the visualization script is shown during poster attendance, including the opportunity to view your own point cloud sample files.
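
    As an illustration of the channel assignment described above (echo amplitude to Red, echo width to Green, normalized height above the terrain model to Blue), the following minimal Python/NumPy sketch rescales each attribute to 8-bit values. The percentile-based scaling and the synthetic attribute values are assumptions for illustration, not the authors' OPALS batch script.

        # Minimal sketch of the attribute-to-RGB mapping described above (not the
        # authors' OPALS script). Assumes amplitude, echo width and height above
        # terrain are available as NumPy arrays, e.g. read with a LAS/LAZ reader.
        import numpy as np

        def scale_to_byte(values, low_pct=2, high_pct=98):
            """Linearly rescale an attribute to 0-255, clipping extreme values."""
            lo, hi = np.percentile(values, [low_pct, high_pct])
            scaled = np.clip((values - lo) / (hi - lo), 0.0, 1.0)
            return (scaled * 255).astype(np.uint8)

        def attributes_to_rgb(amplitude, echo_width, height_above_dtm):
            """Map echo amplitude -> Red, echo width -> Green, normalized height -> Blue."""
            red = scale_to_byte(amplitude)
            green = scale_to_byte(echo_width)
            blue = scale_to_byte(height_above_dtm)
            return np.stack([red, green, blue], axis=1)

        # Example with synthetic attributes for 1000 points:
        rng = np.random.default_rng(0)
        rgb = attributes_to_rgb(rng.gamma(2.0, 20.0, 1000),   # amplitude
                                rng.normal(4.0, 1.0, 1000),   # echo width (ns)
                                rng.uniform(0.0, 25.0, 1000)) # height above DTM (m)
        print(rgb.shape, rgb.dtype)  # (1000, 3) uint8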

  3. Behavioral model of visual perception and recognition

    NASA Astrophysics Data System (ADS)

    Rybak, Ilya A.; Golovan, Alexander V.; Gusakova, Valentina I.

    1993-09-01

    In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (1) separated processing of `what' (object features) and `where' (spatial features) information at high levels of the visual system; (2) mechanisms of visual attention using `where' information; (3) representation of `what' information in an object-based frame of reference (OFR). However, most recent models of vision based on an OFR have demonstrated invariant recognition of only simple objects like letters or binary objects without background, i.e. objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a feature-based frame of reference (FFR), connected with the basic feature (edge) at the fixation point. This provides our model with the ability to represent complex objects in gray-level images invariantly, but demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision, which extracts a set of primary features (edges) in each fixation, and a high-level subsystem consisting of `what' (Sensory Memory) and `where' (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory. Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention, and successive verification of the expected sets of features (stored in Sensory Memory). The model demonstrates recognition of complex objects (such as faces) in gray-level images that is invariant with respect to shift, rotation, and scale.

  4. Helicopter pilot scan techniques during low-altitude high-speed flight.

    PubMed

    Kirby, Christopher E; Kennedy, Quinn; Yang, Ji Hyun

    2014-07-01

    This study examined pilots' visual scan patterns during a simulated high-speed, low-level flight and how their scan rates related to flight performance. As helicopters become faster and more agile, pilots are expected to navigate at low altitudes while traveling at high speeds. A pilot's ability to interpret information from a combination of visual sources determines not only mission success, but also aircraft and crew survival. In a fixed-base helicopter simulator modeled after the U.S. Navy's MH-60S, 17 active-duty Navy helicopter pilots with varying total flight times flew and navigated through a simulated southern Californian desert course. Pilots' scan rate and fixation locations were monitored using an eye-tracking system while they flew through the course. Flight parameters, including altitude, were recorded using the simulator's recording system. Experienced pilots with more than 1000 total flight hours better maintained a constant altitude (mean altitude deviation = 48.52 ft, SD = 31.78) than less experienced pilots (mean altitude deviation = 73.03 ft, SD = 10.61) and differed in some aspects of their visual scans. They spent more time looking at the instrument display and less time looking out the window (OTW) than less experienced pilots. Looking OTW was associated with less consistency in maintaining altitude. Results may aid training effectiveness specific to helicopter aviation, particularly in high-speed low-level flight conditions.

  5. Rehabilitation regimes based upon psychophysical studies of prosthetic vision

    NASA Astrophysics Data System (ADS)

    Chen, S. C.; Suaning, G. J.; Morley, J. W.; Lovell, N. H.

    2009-06-01

    Human trials of prototype visual prostheses have successfully elicited visual percepts (phosphenes) in the visual field of implant recipients blinded through retinitis pigmentosa and age-related macular degeneration. Researchers are progressing rapidly towards a device that utilizes individual phosphenes as the elementary building blocks to compose a visual scene. This form of prosthetic vision is expected, in the near term, to have low resolution, large inter-phosphene gaps, distorted spatial distribution of phosphenes, restricted field of view, an eccentrically located phosphene field and limited number of expressible luminance levels. In order to fully realize the potential of these devices, there needs to be a training and rehabilitation program which aims to assist the prosthesis recipients to understand what they are seeing, and also to adapt their viewing habits to optimize the performance of the device. Based on the literature of psychophysical studies in simulated and real prosthetic vision, this paper proposes a comprehensive, theoretical training regime for a prosthesis recipient: visual search, visual acuity, reading, face/object recognition, hand-eye coordination and navigation. The aim of these tasks is to train the recipients to conduct visual scanning, eccentric viewing and reading, discerning low-contrast visual information, and coordinating bodily actions for visual-guided tasks under prosthetic vision. These skills have been identified as playing an important role in making prosthetic vision functional for the daily activities of their recipients.

  6. Abnormalities in the Visual Processing of Viewing Complex Visual Stimuli Amongst Individuals With Body Image Concern.

    PubMed

    Duncum, A J F; Atkins, K J; Beilharz, F L; Mundy, M E

    2016-01-01

    Individuals with body dysmorphic disorder (BDD) and clinically concerning body-image concern (BIC) appear to possess abnormalities in the way they perceive visual information in the form of a bias towards local visual processing. As inversion interrupts normal global processing, forcing individuals to process locally, an upright-inverted stimulus discrimination task was used to investigate this phenomenon. We examined whether individuals with nonclinical, yet high levels of BIC would show signs of this bias, in the form of reduced inversion effects (i.e., increased local processing). Furthermore, we assessed whether this bias appeared for general visual stimuli or specifically for appearance-related stimuli, such as faces and bodies. Participants with high-BIC (n = 25) and low-BIC (n = 30) performed a stimulus discrimination task with upright and inverted faces, scenes, objects, and bodies. Unexpectedly, the high-BIC group showed an increased inversion effect compared to the low-BIC group, indicating perceptual abnormalities may not be present as local processing biases, as originally thought. There was no significant difference in performance across stimulus types, signifying that any visual processing abnormalities may be general rather than appearance-based. This has important implications for whether visual processing abnormalities are predisposing factors for BDD or develop throughout the disorder.

  7. The Impact of Density and Ratio on Object-Ensemble Representation in Human Anterior-Medial Ventral Visual Cortex

    PubMed Central

    Cant, Jonathan S.; Xu, Yaoda

    2015-01-01

    Behavioral research has demonstrated that observers can extract summary statistics from ensembles of multiple objects. We recently showed that a region of anterior-medial ventral visual cortex, overlapping largely with the scene-sensitive parahippocampal place area (PPA), participates in object-ensemble representation. Here we investigated the encoding of ensemble density in this brain region using fMRI-adaptation. In Experiment 1, we varied density by changing the spacing between objects and found no sensitivity in PPA to such density changes. Thus, density may not be encoded in PPA, possibly because object spacing is not perceived as an intrinsic ensemble property. In Experiment 2, we varied relative density by changing the ratio of 2 types of objects comprising an ensemble, and observed significant sensitivity in PPA to such ratio change. Although colorful ensembles were shown in Experiment 2, Experiment 3 demonstrated that sensitivity to object ratio change was not driven mainly by a change in the ratio of colors. Thus, while anterior-medial ventral visual cortex is insensitive to density (object spacing) changes, it does code relative density (object ratio) within an ensemble. Object-ensemble processing in this region may thus depend on high-level visual information, such as object ratio, rather than low-level information, such as spacing/spatial frequency. PMID:24964917

  8. Sensory processing patterns predict the integration of information held in visual working memory.

    PubMed

    Lowe, Matthew X; Stevenson, Ryan A; Wilson, Kristin E; Ouslis, Natasha E; Barense, Morgan D; Cant, Jonathan S; Ferber, Susanne

    2016-02-01

    Given the limited resources of visual working memory, multiple items may be remembered as an averaged group or ensemble. As a result, local information may be ill-defined, but these ensemble representations provide accurate diagnostics of the natural world by combining gist information with item-level information held in visual working memory. Some neurodevelopmental disorders are characterized by sensory processing profiles that predispose individuals to avoid or seek out sensory stimulation, fundamentally altering their perceptual experience. Here, we report that such processing styles affect the computation of ensemble statistics in the general population. We identified stable adult sensory processing patterns to demonstrate that individuals with low sensory thresholds who show a greater proclivity to engage in active response strategies to prevent sensory overstimulation are less likely to integrate mean size information across a set of similar items and are therefore more likely to be biased away from the mean size representation of an ensemble display. We therefore propose that the study of ensemble processing should extend beyond the statistics of the display and should also consider the statistics of the observer. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  9. Is race erased? Decoding race from patterns of neural activity when skin color is not diagnostic of group boundaries.

    PubMed

    Ratner, Kyle G; Kaul, Christian; Van Bavel, Jay J

    2013-10-01

    Several theories suggest that people do not represent race when it does not signify group boundaries. However, race is often associated with visually salient differences in skin tone and facial features. In this study, we investigated whether race could be decoded from distributed patterns of neural activity in the fusiform gyri and early visual cortex when visual features that often covary with race were orthogonal to group membership. To this end, we used multivariate pattern analysis to examine an fMRI dataset that was collected while participants assigned to mixed-race groups categorized own-race and other-race faces as belonging to their newly assigned group. Whereas conventional univariate analyses provided no evidence of race-based responses in the fusiform gyri or early visual cortex, multivariate pattern analysis suggested that race was represented within these regions. Moreover, race was represented in the fusiform gyri to a greater extent than early visual cortex, suggesting that the fusiform gyri results do not merely reflect low-level perceptual information (e.g. color, contrast) from early visual cortex. These findings indicate that patterns of activation within specific regions of the visual cortex may represent race even when overall activation in these regions is not driven by racial information.
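
    For readers unfamiliar with the analysis, the sketch below illustrates the general logic of multivariate pattern analysis with a linear classifier and cross-validation, using scikit-learn and simulated voxel patterns. It is not the authors' pipeline; the region size, trial counts, and label coding are illustrative assumptions.

        # Illustrative multivariate pattern analysis (MVPA) sketch with scikit-learn;
        # not the authors' pipeline. Voxel patterns and race labels are simulated.
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(42)
        n_trials, n_voxels = 200, 150               # face presentations x voxels in an ROI
        race_labels = rng.integers(0, 2, n_trials)  # 0 = own-race, 1 = other-race (hypothetical coding)

        # Simulated ROI patterns with a weak, spatially distributed signal but no
        # difference in overall (mean) activation, mimicking a case where the
        # univariate response is flat yet multivariate decoding succeeds.
        signal = np.outer(race_labels - 0.5, rng.normal(0, 0.4, n_voxels))
        patterns = rng.normal(0, 1, (n_trials, n_voxels)) + signal
        patterns -= patterns.mean(axis=1, keepdims=True)   # remove trial-wise mean activation

        clf = LinearSVC(C=1.0, max_iter=10000)
        accuracy = cross_val_score(clf, patterns, race_labels, cv=5).mean()
        print(f"Cross-validated decoding accuracy: {accuracy:.2f} (chance = 0.50)")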

  10. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams

    PubMed Central

    Rouinfar, Amy; Agra, Elise; Larson, Adam M.; Rebello, N. Sanjay; Loschky, Lester C.

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues which draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. PMID:25324804

  11. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    PubMed

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues which draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  12. Shared sensory estimates for human motion perception and pursuit eye movements.

    PubMed

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio; Osborne, Leslie C

    2015-06-03

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. Copyright © 2015 the authors 0270-6474/15/358515-16$15.00/0.

  13. Shared Sensory Estimates for Human Motion Perception and Pursuit Eye Movements

    PubMed Central

    Mukherjee, Trishna; Battifarano, Matthew; Simoncini, Claudio

    2015-01-01

    Are sensory estimates formed centrally in the brain and then shared between perceptual and motor pathways or is centrally represented sensory activity decoded independently to drive awareness and action? Questions about the brain's information flow pose a challenge because systems-level estimates of environmental signals are only accessible indirectly as behavior. Assessing whether sensory estimates are shared between perceptual and motor circuits requires comparing perceptual reports with motor behavior arising from the same sensory activity. Extrastriate visual cortex both mediates the perception of visual motion and provides the visual inputs for behaviors such as smooth pursuit eye movements. Pursuit has been a valuable testing ground for theories of sensory information processing because the neural circuits and physiological response properties of motion-responsive cortical areas are well studied, sensory estimates of visual motion signals are formed quickly, and the initiation of pursuit is closely coupled to sensory estimates of target motion. Here, we analyzed variability in visually driven smooth pursuit and perceptual reports of target direction and speed in human subjects while we manipulated the signal-to-noise level of motion estimates. Comparable levels of variability throughout viewing time and across conditions provide evidence for shared noise sources in the perception and action pathways arising from a common sensory estimate. We found that conditions that create poor, low-gain pursuit create a discrepancy between the precision of perception and that of pursuit. Differences in pursuit gain arising from differences in optic flow strength in the stimulus reconcile much of the controversy on this topic. PMID:26041919

  14. Models of Speed Discrimination

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most of the research efforts in the study of the visual system seem to have been focused in two almost non-overlapping directions. One research focus has been low-level perception as studied by psychophysics. The other has been the study of high-level vision, exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing these analyzers that decompose images into useful components. Various models are then compared to the physiological measurements performed on neurons in the sensory systems. In the study of higher-level perception, the work has been focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D structure from a variety of physical measurements, including motion, shading, and other physical phenomena. With few exceptions, there has been very limited development of theories describing how the visual system might combine the output of the analyzers to form the representation of visual objects. Therefore, the processes underlying the integration of information over space represent critical aspects of the visual system. The understanding of these processes will have implications for our expectations of the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: (1) modeling visual search for the detection of speed deviation; (2) perception of moving objects; and (3) exploring the role of eye movements in various visual tasks.

  15. Influence of anxiety on memory performance in temporal lobe epilepsy

    PubMed Central

    Brown, Franklin C.; Westerveld, Michael; Langfitt, John T.; Hamberger, Marla; Hamid, Hamada; Shinnar, Shlomo; Sperling, Michael R.; Devinsky, Orrin; Barr, William; Tracy, Joseph; Masur, David; Bazil, Carl W.; Spencer, Susan S.

    2013-01-01

    This study examined the degree to which anxiety contributed to inconsistent material-specific memory difficulties among 243 temporal lobe epilepsy patients from the Multisite Epilepsy Study. Visual memory performance on the Rey Complex Figure Test (RCFT) was lower for those with high versus low level of anxiety, but was not found to be related to side of TLE. Verbal memory on the California Verbal Learning Test (CVLT) was significantly lower for left than right TLE patients with low anxiety, but equally impaired for those with high anxiety. These results suggest that we can place more confidence in the ability of verbal memory tests like the CVLT to lateralize to left TLE for those with low anxiety, but that verbal memory will be less likely to produce lateralizing information for those with high anxiety. This suggests that more caution is needed when interpreting verbal memory tests for those with high anxiety. These results indicated that RCFT performance was significantly affected by anxiety and did not lateralize to either side, regardless of anxiety level. This study adds to the existing literature which suggests that drawing-based visual memory tests do not lateralize among TLE patients, regardless of anxiety level. PMID:24291525

  16. Successful Outcomes from a Structured Curriculum Used in the Veterans Affairs Low Vision Intervention Trial

    ERIC Educational Resources Information Center

    Stelmack, Joan A.; Rinne, Stephen; Mancil, Rickilyn M.; Dean, Deborah; Moran, D'Anna; Tang, X. Charlene; Cummings, Roger; Massof, Robert W.

    2008-01-01

    A low vision rehabilitation program with a structured curriculum was evaluated in a randomized controlled trial. The treatment group demonstrated large improvements in self-reported visual function (reading, mobility, visual information processing, visual motor skills, and overall). The team approach and the protocols of the treatment program are…

  17. MEMS-based system and image processing strategy for epiretinal prosthesis.

    PubMed

    Xia, Peng; Hu, Jie; Qi, Jin; Gu, Chaochen; Peng, Yinghong

    2015-01-01

    Retinal prostheses have the potential to restore some level of visual function to patients suffering from retinal degeneration. In this paper, an epiretinal approach with active stimulation devices is presented. The MEMS-based processing system consists of an external micro-camera, an information processor, an implanted electrical stimulator, and a microelectrode array. An image processing strategy combining image clustering and enhancement techniques was proposed and evaluated in psychophysical experiments. The results indicated that the strategy improved visual performance compared with directly merging pixels down to a low resolution. These image processing methods can assist an epiretinal prosthesis in vision restoration.
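
    A minimal sketch of a clustering-plus-enhancement pipeline in the spirit of the strategy described above is given below. It uses OpenCV; the cluster count and the 32 x 32 phosphene resolution are illustrative assumptions rather than the authors' parameters.

        # Sketch of a clustering-plus-enhancement pipeline for prosthetic vision
        # previews (not the authors' implementation).
        import cv2
        import numpy as np

        def prosthesis_preview(gray_image, n_clusters=4, phosphene_grid=(32, 32)):
            # 1. Enhance contrast so low-contrast structure survives downsampling.
            enhanced = cv2.equalizeHist(gray_image)

            # 2. Cluster pixel intensities with k-means to simplify the scene into
            #    a few luminance levels.
            samples = enhanced.reshape(-1, 1).astype(np.float32)
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
            _, labels, centers = cv2.kmeans(samples, n_clusters, None, criteria,
                                            3, cv2.KMEANS_PP_CENTERS)
            clustered = centers[labels.flatten()].reshape(enhanced.shape).astype(np.uint8)

            # 3. Downsample to the electrode/phosphene resolution.
            return cv2.resize(clustered, phosphene_grid, interpolation=cv2.INTER_AREA)

        # Usage (hypothetical file name):
        # img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
        # preview = prosthesis_preview(img)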

  18. GABA predicts visual intelligence.

    PubMed

    Cook, Emily; Hammett, Stephen T; Larsson, Jonas

    2016-10-06

    Early psychological researchers proposed a link between intelligence and low-level perceptual performance. It was recently suggested that this link is driven by individual variations in the ability to suppress irrelevant information, evidenced by the observation of strong correlations between perceptual surround suppression and cognitive performance. However, the neural mechanisms underlying such a link remain unclear. A candidate mechanism is neural inhibition by gamma-aminobutyric acid (GABA), but direct experimental support for GABA-mediated inhibition underlying suppression is inconsistent. Here we report evidence consistent with a global suppressive mechanism involving GABA underlying the link between sensory performance and intelligence. We measured visual cortical GABA concentration, visuo-spatial intelligence, and visual surround suppression in a group of healthy adults. Levels of GABA were strongly predictive of both intelligence and surround suppression, with higher levels of intelligence associated with higher levels of GABA and stronger surround suppression. These results indicate that GABA-mediated neural inhibition may be a key factor determining cognitive performance and suggest a physiological mechanism linking surround suppression and intelligence. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.

  19. 3D Surveying, Modeling and Geo-Information System of the New Campus of ITB-Indonesia

    NASA Astrophysics Data System (ADS)

    Suwardhi, D.; Trisyanti, S. W.; Ainiyah, N.; Fajri, M. N.; Hanan, H.; Virtriana, R.; Edmarani, A. A.

    2016-10-01

    The new campus of ITB-Indonesia, located at Jatinangor, requires good facilities and infrastructure to support all campus activities, and these cannot be separated from procurement and maintenance. Computer-based technology for the procurement and maintenance of facilities and infrastructure is known as Building Information Modeling (BIM). Nowadays, this technology is more affordable, with free software that is easy to use and can be tailored to user needs. BIM also has limitations and requires other technologies to complement it, namely Geographic Information Systems (GIS). Both BIM and GIS require surveying data to visualize the landscape and buildings of the Jatinangor ITB campus. This paper presents an ongoing internal service program conducted by researchers, academic staff, and students for the university. The program includes 3D surveying to supply the data required for 3D modeling of buildings in the CityGML and Industry Foundation Classes (IFC) data models. The 3D surveying produces point clouds that are used to build the 3D models. The modeling is divided into low and high levels of detail: the low-level-of-detail model is stored in a 3D CityGML database, and the high-level-of-detail model, including interiors, is stored in a BIM server. The 3D models can be used to visualize the buildings and site of the Jatinangor ITB campus. For campus facility management, a geo-information system has been developed that can be used for planning, constructing, and maintaining Jatinangor ITB's facilities and infrastructure. The system uses openMAINT, an open-source solution for property and facility management.

  20. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter.

    PubMed

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-17

    The research on hand gestures has attracted many image processing-related studies, as gestures intuitively convey human intention in terms of motional meaning. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have explored the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection, rather than hand gesture recognition. Additionally, the majority of works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness against illumination of visual sensors. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity.
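
    The joint kernels of spatial adjacency and thermal range similarity mentioned above correspond to a joint (cross) bilateral filter in which the thermal image guides the smoothing of the visible-light image. The following NumPy sketch shows that idea in its simplest form; it is not the authors' implementation, and the window size and sigma values are illustrative.

        # Minimal joint (cross) bilateral filter sketch: the visible-light image is
        # smoothed with weights that combine spatial adjacency and range similarity
        # in the *thermal* guide image.
        import numpy as np

        def thermal_guided_joint_filter(visual, thermal, radius=3, sigma_s=2.0, sigma_r=10.0):
            h, w = visual.shape
            out = np.zeros_like(visual, dtype=np.float64)
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # spatial adjacency kernel
            pad_v = np.pad(visual.astype(np.float64), radius, mode="reflect")
            pad_t = np.pad(thermal.astype(np.float64), radius, mode="reflect")
            for y in range(h):
                for x in range(w):
                    patch_v = pad_v[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                    patch_t = pad_t[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
                    # thermal range similarity kernel, centred on the guide pixel
                    rng_w = np.exp(-((patch_t - thermal[y, x])**2) / (2 * sigma_r**2))
                    weights = spatial * rng_w
                    out[y, x] = (weights * patch_v).sum() / weights.sum()
            return out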

  1. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter

    PubMed Central

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-01

    The research on hand gestures has attracted many image processing-related studies, as gestures intuitively convey human intention in terms of motional meaning. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have explored the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection, rather than hand gesture recognition. Additionally, the majority of works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) usually adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand in the presence of varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor’s stability against luminance and the visual sensor’s textural detail, while complementing the low resolution and halo effect of thermal sensors and the weakness against illumination of visual sensors. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity. PMID:28106716

  2. Cognitive aging on latent constructs for visual processing capacity: a novel structural equation modeling framework with causal assumptions based on a theory of visual attention.

    PubMed

    Nielsen, Simon; Wilms, L Inge

    2014-01-01

    We examined the effects of normal aging on visual cognition in a sample of 112 healthy adults aged 60-75. A test battery was designed to capture high-level measures of visual working memory and low-level measures of visuospatial attention and memory. To answer questions of how cognitive aging affects specific aspects of visual processing capacity, we used confirmatory factor analyses in Structural Equation Modeling (SEM; Model 2), informed by functional structures that were modeled with path analyses in SEM (Model 1). The results show that aging effects were selective to measures of visual processing speed compared to visual short-term memory (VSTM) capacity (Model 2). These results are consistent with some studies reporting selective aging effects on processing speed, and inconsistent with other studies reporting aging effects on both processing speed and VSTM capacity. In the discussion we argue that this discrepancy may be mediated by differences in age ranges and demographic variables. The study demonstrates that SEM is a sensitive method for detecting cognitive aging effects even within a narrow age range, and a useful approach to structuring the relationships between measured variables and the cognitive functional foundation they supposedly represent.
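
    As an illustration of how a confirmatory factor model with an age covariate can be specified, the sketch below uses lavaan-style syntax with the semopy Python package; the indicator names and the data file are hypothetical placeholders, not the authors' test battery or software.

        # Hypothetical sketch of a two-factor confirmatory model with an age
        # covariate, specified in lavaan-style syntax via semopy. Indicator names
        # (speed1..vstm3) and the CSV file are placeholders, not the authors' data.
        import pandas as pd
        import semopy

        model_desc = """
        ProcSpeed =~ speed1 + speed2 + speed3
        VSTM =~ vstm1 + vstm2 + vstm3
        ProcSpeed ~ age
        VSTM ~ age
        """

        data = pd.read_csv("visual_battery_scores.csv")  # hypothetical data file
        model = semopy.Model(model_desc)
        model.fit(data)
        print(model.inspect())  # parameter estimates and standard errors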

  3. Visual search deficits in amblyopia.

    PubMed

    Tsirlin, Inna; Colpa, Linda; Goltz, Herbert C; Wong, Agnes M F

    2018-04-01

    Amblyopia is a neurodevelopmental disorder defined as a reduction in visual acuity that cannot be corrected by optical means. It has been associated with low-level deficits. However, research has demonstrated a link between amblyopia and visual attention deficits in counting, tracking, and identifying objects. Visual search is a useful tool for assessing visual attention but has not been well studied in amblyopia. Here, we assessed the extent of visual search deficits in amblyopia using feature and conjunction search tasks. We compared the performance of participants with amblyopia (n = 10) to those of controls (n = 12) on both feature and conjunction search tasks using Gabor patch stimuli, varying spatial bandwidth and orientation. To account for the low-level deficits inherent in amblyopia, we measured individual contrast and crowding thresholds and monitored eye movements. The display elements were then presented at suprathreshold levels to ensure that visibility was equalized across groups. There was no performance difference between groups on feature search, indicating that our experimental design controlled successfully for low-level amblyopia deficits. In contrast, during conjunction search, median reaction times and reaction time slopes were significantly larger in participants with amblyopia compared with controls. Amblyopia differentially affects performance on conjunction visual search, a more difficult task that requires feature binding and possibly the involvement of higher-level attention processes. Deficits in visual search may affect day-to-day functioning in people with amblyopia.

  4. The interplay of bottom-up and top-down mechanisms in visual guidance during object naming.

    PubMed

    Coco, Moreno I; Malcolm, George L; Keller, Frank

    2014-01-01

    An ongoing issue in visual cognition concerns the roles played by low- and high-level information in guiding visual attention, with current research remaining inconclusive about the interaction between the two. In this study, we bring fresh evidence into this long-standing debate by investigating visual saliency and contextual congruency during object naming (Experiment 1), a task in which visual processing interacts with language processing. We then compare the results of this experiment to data of a memorization task using the same stimuli (Experiment 2). In Experiment 1, we find that both saliency and congruency influence visual and naming responses and interact with linguistic factors. In particular, incongruent objects are fixated later and less often than congruent ones. However, saliency is a significant predictor of object naming, with salient objects being named earlier in a trial. Furthermore, the saliency and congruency of a named object interact with the lexical frequency of the associated word and mediate the time-course of fixations at naming. In Experiment 2, we find a similar overall pattern in the eye-movement responses, but only the congruency of the target is a significant predictor, with incongruent targets fixated less often than congruent targets. Crucially, this finding contrasts with claims in the literature that incongruent objects are more informative than congruent objects by deviating from scene context and hence need a longer processing. Overall, this study suggests that different sources of information are interactively used to guide visual attention on the targets to be named and raises new questions for existing theories of visual attention.

  5. Posttraining transcranial magnetic stimulation of striate cortex disrupts consolidation early in visual skill learning.

    PubMed

    De Weerd, Peter; Reithler, Joel; van de Ven, Vincent; Been, Marin; Jacobs, Christianne; Sack, Alexander T

    2012-02-08

    Practice-induced improvements in skilled performance reflect "offline " consolidation processes extending beyond daily training sessions. According to visual learning theories, an early, fast learning phase driven by high-level areas is followed by a late, asymptotic learning phase driven by low-level, retinotopic areas when higher resolution is required. Thus, low-level areas would not contribute to learning and offline consolidation until late learning. Recent studies have challenged this notion, demonstrating modified responses to trained stimuli in primary visual cortex (V1) and offline activity after very limited training. However, the behavioral relevance of modified V1 activity for offline consolidation of visual skill memory in V1 after early training sessions remains unclear. Here, we used neuronavigated transcranial magnetic stimulation (TMS) directed to a trained retinotopic V1 location to test for behaviorally relevant consolidation in human low-level visual cortex. Applying TMS to the trained V1 location within 45 min of the first or second training session strongly interfered with learning, as measured by impaired performance the next day. The interference was conditional on task context and occurred only when training in the location targeted by TMS was followed by training in a second location before TMS. In this condition, high-level areas may become coupled to the second location and uncoupled from the previously trained low-level representation, thereby rendering consolidation vulnerable to interference. Our data show that, during the earliest phases of skill learning in the lowest-level visual areas, a behaviorally relevant form of consolidation exists of which the robustness is controlled by high-level, contextual factors.

  6. Optimization of Visual Information Presentation for Visual Prosthesis.

    PubMed

    Guo, Fei; Yang, Yuan; Gao, Yong

    2018-01-01

    Visual prostheses applying electrical stimulation to restore visual function for the blind have promising prospects. However, due to the low resolution, limited visual field, and low dynamic range of the elicited visual perception, a large amount of information is lost when presenting daily scenes. The ability to recognize objects in real-life scenarios is therefore severely restricted for prosthetic users. To overcome these limitations, optimizing the visual information in simulated prosthetic vision has been a focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two strategies enable the prosthetic implant to focus on the object of interest and suppress background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming techniques, the two processing strategies significantly improve object recognition accuracy. We conclude that a visual prosthesis using the proposed strategies can help blind users improve their ability to recognize objects. The results provide effective solutions for the further development of visual prostheses.
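
    The two strategies named above can be sketched as follows, assuming a binary foreground mask from some salient object detector is already available (the detector itself is outside the sketch); the thresholds and output resolution are illustrative assumptions.

        # Sketch of the two presentation strategies: foreground zooming with
        # background removal, and foreground edge detection with background
        # reduction. Assumes a precomputed binary foreground mask.
        import cv2
        import numpy as np

        def foreground_zoom(gray, fg_mask, out_size=(32, 32)):
            """Strategy 1: crop to the salient object's bounding box and drop the background."""
            ys, xs = np.where(fg_mask > 0)
            y0, y1, x0, x1 = ys.min(), ys.max(), xs.min(), xs.max()
            cropped = np.where(fg_mask[y0:y1 + 1, x0:x1 + 1] > 0,
                               gray[y0:y1 + 1, x0:x1 + 1], 0)   # background removed
            return cv2.resize(cropped.astype(np.uint8), out_size, interpolation=cv2.INTER_AREA)

        def foreground_edges(gray, fg_mask, out_size=(32, 32)):
            """Strategy 2: keep edges of the salient object, attenuating background clutter."""
            edges = cv2.Canny(gray, 50, 150)
            edges[fg_mask == 0] //= 4          # reduce, rather than remove, background edges
            return cv2.resize(edges, out_size, interpolation=cv2.INTER_AREA)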

  7. Optimization of Visual Information Presentation for Visual Prosthesis

    PubMed Central

    Gao, Yong

    2018-01-01

    Visual prostheses applying electrical stimulation to restore visual function for the blind have promising prospects. However, due to the low resolution, limited visual field, and low dynamic range of the elicited visual perception, a large amount of information is lost when presenting daily scenes. The ability to recognize objects in real-life scenarios is therefore severely restricted for prosthetic users. To overcome these limitations, optimizing the visual information in simulated prosthetic vision has been a focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two strategies enable the prosthetic implant to focus on the object of interest and suppress background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming techniques, the two processing strategies significantly improve object recognition accuracy. We conclude that a visual prosthesis using the proposed strategies can help blind users improve their ability to recognize objects. The results provide effective solutions for the further development of visual prostheses. PMID:29731769

  8. Modes of Visual Recognition and Perceptually Relevant Sketch-based Coding for Images

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1991-01-01

    A review of visual recognition studies is used to define two levels of information requirements. These two levels are related to two primary subdivisions of the spatial frequency domain of images and reflect two distinct different physical properties of arbitrary scenes. In particular, pathologies in recognition due to cerebral dysfunction point to a more complete split into two major types of processing: high spatial frequency edge based recognition vs. low spatial frequency lightness (and color) based recognition. The former is more central and general while the latter is more specific and is necessary for certain special tasks. The two modes of recognition can also be distinguished on the basis of physical scene properties: the highly localized edges associated with reflectance and sharp topographic transitions vs. smooth topographic undulation. The extreme case of heavily abstracted images is pursued to gain an understanding of the minimal information required to support both modes of recognition. Here the intention is to define the semantic core of transmission. This central core of processing can then be fleshed out with additional image information and coding and rendering techniques.

  9. A web-based 3D geological information visualization system

    NASA Astrophysics Data System (ADS)

    Song, Renbo; Jiang, Nan

    2013-03-01

    The construction of 3D geological visualization systems has attracted growing attention in the GIS, computer modeling, simulation, and visualization fields. Such systems not only support geological interpretation and analysis work effectively, but can also help raise the level of professional geosciences education. In this paper, an applet-based method is introduced for developing a web-based 3D geological information visualization system. The main aim is to explore a rapid and low-cost development method for constructing a web-based 3D geological system. First, borehole data stored in Excel spreadsheets were extracted and loaded into a SQLSERVER database on a web server. Second, the JDBC data access component was used to provide access to the database. Third, the user interface was implemented with an applet component embedded in a JSP page, and the 3D viewing and querying functions were implemented with the PickCanvas of Java3D. Last, borehole data acquired from a geological survey were used to test the system, and the results show that the methods described in this paper have practical application value.

  10. Higher Level Visual Cortex Represents Retinotopic, Not Spatiotopic, Object Location

    PubMed Central

    Kanwisher, Nancy

    2012-01-01

    The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex—important for stable object recognition and action—contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a “searchlight” analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates. PMID:22190434

  11. A Multiple-Label Guided Clustering Algorithm for Historical Document Dating and Localization.

    PubMed

    He, Sheng; Samara, Petros; Burgers, Jan; Schomaker, Lambert

    2016-11-01

    It is of essential importance for historians to know the date and place of origin of the documents they study. It would be a huge advancement for historical scholars if it would be possible to automatically estimate the geographical and temporal provenance of a handwritten document by inferring them from the handwriting style of such a document. We propose a multiple-label guided clustering algorithm to discover the correlations between the concrete low-level visual elements in historical documents and abstract labels, such as date and location. First, a novel descriptor, called histogram of orientations of handwritten strokes, is proposed to extract and describe the visual elements, which is built on a scale-invariant polar-feature space. In addition, the multi-label self-organizing map (MLSOM) is proposed to discover the correlations between the low-level visual elements and their labels in a single framework. Our proposed MLSOM can be used to predict the labels directly. Moreover, the MLSOM can also be considered as a pre-structured clustering method to build a codebook, which contains more discriminative information on date and geography. The experimental results on the medieval paleographic scale data set demonstrate that our method achieves state-of-the-art results.
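
    A much-simplified sketch of an orientation-histogram descriptor for handwritten strokes is shown below; the original descriptor is built in a scale-invariant polar feature space, which is not reproduced here, and the bin count and gradient threshold are illustrative assumptions.

        # Simplified sketch of an orientation-histogram descriptor for handwriting,
        # in the spirit of the "histogram of orientations of handwritten strokes"
        # described above (not the authors' full polar-space descriptor).
        import numpy as np

        def stroke_orientation_histogram(binary_ink, n_bins=16):
            """binary_ink: 2D array, 1 on ink pixels, 0 on background."""
            gy, gx = np.gradient(binary_ink.astype(np.float64))
            magnitude = np.hypot(gx, gy)
            orientation = np.arctan2(gy, gx) % np.pi          # stroke direction, 0..pi
            mask = magnitude > 1e-3                           # only stroke contours contribute
            hist, _ = np.histogram(orientation[mask], bins=n_bins,
                                   range=(0.0, np.pi), weights=magnitude[mask])
            total = hist.sum()
            return hist / total if total > 0 else hist        # normalise to compare documents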

  12. Modeling of pilot's visual behavior for low-level flight

    NASA Astrophysics Data System (ADS)

    Schulte, Axel; Onken, Reiner

    1995-06-01

    Developers of synthetic vision systems for low-level flight simulators face the problem of deciding which features to incorporate in order to achieve the most realistic training conditions. This paper supports an approach to this problem based on modeling the pilot's visual behavior. The approach is founded on the basic requirement that the pilot's mechanisms of visual perception should be identical in simulated and real low-level flight. Flight simulator experiments with pilots were conducted for knowledge acquisition. During the experiments, video material of a real low-level flight mission containing different situations was displayed to the pilot, who was acting under a realistic mission assignment in a laboratory environment, and the pilot's eye movements were measured during the replay. The visual mechanisms were divided into rule-based strategies for visual navigation, based on the preflight planning process, as opposed to skill-based processes. The paper presents a model of the pilot's planning strategy for a visual fixing routine as part of the navigation task. The model is a knowledge-based system built on the fuzzy evaluation of terrain features to determine the landmarks used by pilots. A computer implementation of the model selects the same features that trained pilots preferred.
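
    A toy sketch of fuzzy evaluation of candidate terrain features as landmarks is given below; the criteria, membership functions, and example candidates are invented for illustration and are not taken from the authors' knowledge-based system.

        # Toy sketch of fuzzy evaluation of candidate terrain features as visual
        # landmarks. Criteria and triangular membership functions are illustrative.
        def triangular(x, low, peak, high):
            """Simple triangular fuzzy membership function."""
            if x <= low or x >= high:
                return 0.0
            return (x - low) / (peak - low) if x < peak else (high - x) / (high - peak)

        def landmark_suitability(distance_km, size_m, contrast):
            near_route = triangular(distance_km, 0.0, 0.5, 3.0)   # close to the planned track
            conspicuous = triangular(size_m, 10.0, 80.0, 500.0)   # large enough to spot
            visible = min(contrast, 1.0)                          # already a 0..1 score
            # fuzzy AND via minimum: a landmark must satisfy all criteria to some degree
            return min(near_route, conspicuous, visible)

        candidates = {"church tower": (0.4, 60.0, 0.9),
                      "forest edge": (1.2, 300.0, 0.5),
                      "small shed": (0.2, 8.0, 0.7)}
        for name, feats in sorted(candidates.items(),
                                  key=lambda kv: -landmark_suitability(*kv[1])):
            print(f"{name}: suitability = {landmark_suitability(*feats):.2f}")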

  13. Individual differences in working memory capacity and workload capacity.

    PubMed

    Yu, Ju-Chi; Chang, Ting-Yun; Yang, Cheng-Ta

    2014-01-01

    We investigated the relationship between working memory capacity (WMC) and workload capacity (WLC). Each participant performed an operation span (OSPAN) task to measure his/her WMC and three redundant-target detection tasks to measure his/her WLC. WLC was computed non-parametrically (Experiments 1 and 2) and parametrically (Experiment 2). Both levels of analyses showed that participants high in WMC had larger WLC than those low in WMC only when redundant information came from visual and auditory modalities, suggesting that high-WMC participants had superior processing capacity in dealing with redundant visual and auditory information. This difference was eliminated when multiple processes required processing for only a single working memory subsystem in a color-shape detection task and a double-dot detection task. These results highlighted the role of executive control in integrating and binding information from the two working memory subsystems for perceptual decision making.

  14. Training-Induced Recovery of Low-Level Vision Followed by Mid-Level Perceptual Improvements in Developmental Object and Face Agnosia

    ERIC Educational Resources Information Center

    Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L.; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri

    2015-01-01

    Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental…

  15. Neural processing of visual information under interocular suppression: a critical review

    PubMed Central

    Sterzer, Philipp; Stein, Timo; Ludwig, Karin; Rothkirch, Marcus; Hesselmann, Guido

    2014-01-01

    When dissimilar stimuli are presented to the two eyes, only one stimulus dominates at a time while the other stimulus is invisible due to interocular suppression. When both stimuli are equally potent in competing for awareness, perception alternates spontaneously between the two stimuli, a phenomenon called binocular rivalry. However, when one stimulus is much stronger, e.g., due to higher contrast, the weaker stimulus can be suppressed for prolonged periods of time. A technique that has recently become very popular for the investigation of unconscious visual processing is continuous flash suppression (CFS): High-contrast dynamic patterns shown to one eye can render a low-contrast stimulus shown to the other eye invisible for up to minutes. Studies using CFS have produced new insights but also controversies regarding the types of visual information that can be processed unconsciously as well as the neural sites and the relevance of such unconscious processing. Here, we review the current state of knowledge in regard to neural processing of interocularly suppressed information. Focusing on recent neuroimaging findings, we discuss whether and to what degree such suppressed visual information is processed at early and more advanced levels of the visual processing hierarchy. We review controversial findings related to the influence of attention on early visual processing under interocular suppression, the putative differential roles of dorsal and ventral areas in unconscious object processing, and evidence suggesting privileged unconscious processing of emotional and other socially relevant information. On a more general note, we discuss methodological and conceptual issues, from practical issues of how unawareness of a stimulus is assessed to the overarching question of what constitutes an adequate operational definition of unawareness. Finally, we propose approaches for future research to resolve current controversies in this exciting research area. PMID:24904469

  16. Usability Evaluation of a Flight-Deck Airflow Hazard Visualization System

    NASA Technical Reports Server (NTRS)

    Aragon, Cecilia R.

    2004-01-01

    Many aircraft accidents each year are caused by encounters with unseen airflow hazards near the ground, such as vortices, downdrafts, low level wind shear, microbursts, or turbulence from surrounding vegetation or structures near the landing site. These hazards can be dangerous even to airliners; there have been hundreds of fatalities in the United States in the last two decades attributable to airliner encounters with microbursts and low level wind shear alone. However, helicopters are especially vulnerable to airflow hazards because they often have to operate in confined spaces and under operationally stressful conditions (such as emergency search and rescue, military or shipboard operations). Providing helicopter pilots with an augmented-reality display visualizing local airflow hazards may be of significant benefit. However, the form such a visualization might take, and whether it does indeed provide a benefit, had not been studied before our experiment. We recruited experienced military and civilian helicopter pilots for a preliminary usability study to evaluate a prototype augmented-reality visualization system. The study had two goals: first, to assess the efficacy of presenting airflow data in flight; and second, to obtain expert feedback on sample presentations of hazard indicators to refine our design choices. The study addressed the optimal way to provide critical safety information to the pilot, what level of detail to provide, whether to display specific aerodynamic causes or potential effects only, and how to safely and effectively shift the locus of attention during a high-workload task. Three-dimensional visual cues, with varying shape, color, transparency, texture, depth cueing, and use of motion, depicting regions of hazardous airflow, were developed and presented to the pilots. The study results indicated that such a visualization system could be of significant value in improving safety during critical takeoff and landing operations, and also gave clear indications of the best design choices in producing the hazard visual cues.

  17. Are all types of expertise created equal? Car experts use different spatial frequency scales for subordinate categorization of cars and faces.

    PubMed

    Harel, Assaf; Bentin, Shlomo

    2013-01-01

    A much-debated question in object recognition is whether expertise for faces and expertise for non-face objects utilize common perceptual information. We investigated this issue by assessing the diagnostic information required for different types of expertise. Specifically, we asked whether face categorization and expert car categorization at the subordinate level rely on the same spatial frequency (SF) scales. Fifteen car experts and fifteen novices performed a category verification task with spatially filtered images of faces, cars, and airplanes. Images were categorized based on their basic (e.g. "car") and subordinate level (e.g. "Japanese car") identity. The effect of expertise was not evident when objects were categorized at the basic level. However, when the car experts categorized faces and cars at the subordinate level, the two types of expertise required different kinds of SF information. Subordinate categorization of faces relied on low SFs more than on high SFs, whereas subordinate expert car categorization relied on high SFs more than on low SFs. These findings suggest that expertise in the recognition of objects and faces does not utilize the same type of information. Rather, different types of expertise require different types of diagnostic visual information.

  18. Are All Types of Expertise Created Equal? Car Experts Use Different Spatial Frequency Scales for Subordinate Categorization of Cars and Faces

    PubMed Central

    Harel, Assaf; Bentin, Shlomo

    2013-01-01

    A much-debated question in object recognition is whether expertise for faces and expertise for non-face objects utilize common perceptual information. We investigated this issue by assessing the diagnostic information required for different types of expertise. Specifically, we asked whether face categorization and expert car categorization at the subordinate level rely on the same spatial frequency (SF) scales. Fifteen car experts and fifteen novices performed a category verification task with spatially filtered images of faces, cars, and airplanes. Images were categorized based on their basic (e.g. “car”) and subordinate level (e.g. “Japanese car”) identity. The effect of expertise was not evident when objects were categorized at the basic level. However, when the car experts categorized faces and cars at the subordinate level, the two types of expertise required different kinds of SF information. Subordinate categorization of faces relied on low SFs more than on high SFs, whereas subordinate expert car categorization relied on high SFs more than on low SFs. These findings suggest that expertise in the recognition of objects and faces does not utilize the same type of information. Rather, different types of expertise require different types of diagnostic visual information. PMID:23826188

  19. Investigating the Use of 3d Geovisualizations for Urban Design in Informal Settlement Upgrading in South Africa

    NASA Astrophysics Data System (ADS)

    Rautenbach, V.; Coetzee, S.; Çöltekin, A.

    2016-06-01

    Informal settlements are a common occurrence in South Africa, and to improve in-situ circumstances of communities living in informal settlements, upgrades and urban design processes are necessary. Spatial data and maps are essential throughout these processes to understand the current environment, plan new developments, and communicate the planned developments. All stakeholders need to understand maps to actively participate in the process. However, previous research demonstrated that map literacy was relatively low for many planning professionals in South Africa, which might hinder effective planning. Because 3D visualizations resemble the real environment more than traditional maps, many researchers posited that they would be easier to interpret. Thus, our goal is to investigate the effectiveness of 3D geovisualizations for urban design in informal settlement upgrading in South Africa. We consider all involved processes: 3D modelling, visualization design, and cognitive processes during map reading. We found that procedural modelling is a feasible alternative to time-consuming manual modelling, and can produce high quality models. When investigating the visualization design, the visual characteristics of 3D models and relevance of a subset of visual variables for urban design activities of informal settlement upgrades were qualitatively assessed. The results of three qualitative user experiments contributed to understanding the impact of various levels of complexity in 3D city models and map literacy of future geoinformatics and planning professionals when using 2D maps and 3D models. The research results can assist planners in designing suitable 3D models that can be used throughout all phases of the process.

  20. Analyzing the Reading Skills and Visual Perception Levels of First Grade Students

    ERIC Educational Resources Information Center

    Çayir, Aybala

    2017-01-01

    The purpose of this study was to analyze primary school first grade students' reading levels and correlate their visual perception skills. For this purpose, students' reading speed, reading comprehension and reading errors were determined using The Informal Reading Inventory. Students' visual perception levels were also analyzed using…

  1. The Sensory Components of High-Capacity Iconic Memory and Visual Working Memory

    PubMed Central

    Bradley, Claire; Pearson, Joel

    2012-01-01

    Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more “high-level” alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of three different visual features (color, orientation and motion) across a range of durations from 0 to 6 s. We found that the amount of information stored in iconic memory is smaller for motion than for color or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ∼2 s. Further experiments showed that performance for the 10 items at 1 s was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory, and an effortful “lower-capacity” visual working memory. PMID:23055993

  2. Predicting successful tactile mapping of virtual objects.

    PubMed

    Brayda, Luca; Campus, Claudio; Gori, Monica

    2013-01-01

    Improving spatial ability of blind and visually impaired people is the main target of orientation and mobility (O&M) programs. In this study, we use a minimalistic mouse-shaped haptic device to show a new approach aimed at evaluating devices providing tactile representations of virtual objects. We consider psychophysical, behavioral, and subjective parameters to clarify under which circumstances mental representations of spaces (cognitive maps) can be efficiently constructed with touch by blindfolded sighted subjects. We study two complementary processes that determine map construction: low-level perception (in a passive stimulation task) and high-level information integration (in an active exploration task). We show that jointly considering a behavioral measure of information acquisition and a subjective measure of cognitive load can give an accurate prediction and a practical interpretation of mapping performance. Our simple TActile MOuse (TAMO) uses haptics to assess spatial ability: this may help individuals who are blind or visually impaired to be better evaluated by O&M practitioners or to evaluate their own performance.

  3. Relevance feedback-based building recognition

    NASA Astrophysics Data System (ADS)

    Li, Jing; Allinson, Nigel M.

    2010-07-01

    Building recognition is a nontrivial task in computer vision research that can be utilized in robot localization, mobile navigation, and related applications. However, existing building recognition systems usually encounter two problems: 1) extracted low-level features cannot reveal the true semantic concepts; and 2) they usually involve high-dimensional data, which incur heavy computational and memory costs. Relevance feedback (RF), widely applied in multimedia information retrieval, is able to bridge the gap between low-level visual features and high-level concepts, while dimensionality reduction methods can mitigate the high-dimensionality problem. In this paper, we propose a building recognition scheme that integrates RF and subspace learning algorithms. Experimental results on our own building database show that the proposed scheme appreciably enhances recognition accuracy.
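
    A minimal sketch of that general scheme, assuming principal component analysis as the subspace learning step and a Rocchio-style update as the relevance feedback step (neither is claimed to be the authors' specific algorithm), might look like this:

        import numpy as np
        from sklearn.decomposition import PCA

        def rocchio_update(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.15):
            """Move the query vector toward images marked relevant and away from
            those marked irrelevant (classic Rocchio relevance feedback)."""
            q = alpha * query
            if len(relevant):
                q = q + beta * np.mean(relevant, axis=0)
            if len(irrelevant):
                q = q - gamma * np.mean(irrelevant, axis=0)
            return q

        features = np.random.rand(500, 1024)        # placeholder low-level building descriptors
        pca = PCA(n_components=32).fit(features)    # subspace learning (PCA as a stand-in)
        db = pca.transform(features)

        query = db[0]
        ranking = np.argsort(np.linalg.norm(db - query, axis=1))   # nearest neighbours
        query = rocchio_update(query, db[ranking[:5]], db[ranking[-5:]])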

  4. The nature-disorder paradox: A perceptual study on how nature is disorderly yet aesthetically preferred.

    PubMed

    Kotabe, Hiroki P; Kardan, Omid; Berman, Marc G

    2017-08-01

    Natural environments have powerful aesthetic appeal linked to their capacity for psychological restoration. In contrast, disorderly environments are aesthetically aversive, and have various detrimental psychological effects. But in our research, we have repeatedly found that natural environments are perceptually disorderly. What could explain this paradox? We present 3 competing hypotheses: the aesthetic preference for naturalness is more powerful than the aesthetic aversion to disorder (the nature-trumps-disorder hypothesis); disorder is trivial to aesthetic preference in natural contexts (the harmless-disorder hypothesis); and disorder is aesthetically preferred in natural contexts (the beneficial-disorder hypothesis). Utilizing novel methods of perceptual study and diverse stimuli, we rule in the nature-trumps-disorder hypothesis and rule out the harmless-disorder and beneficial-disorder hypotheses. In examining perceptual mechanisms, we find evidence that high-level scene semantics are both necessary and sufficient for the nature-trumps-disorder effect. Necessity is evidenced by the effect disappearing in experiments utilizing only low-level visual stimuli (i.e., where scene semantics have been removed) and experiments utilizing a rapid-scene-presentation procedure that obscures scene semantics. Sufficiency is evidenced by the effect reappearing in experiments utilizing noun stimuli which remove low-level visual features. Furthermore, we present evidence that the interaction of scene semantics with low-level visual features amplifies the nature-trumps-disorder effect: the effect is weaker both when statistically adjusting for quantified low-level visual features and when using noun stimuli which remove low-level visual features. These results have implications for psychological theories bearing on the joint influence of low- and high-level perceptual inputs on affect and cognition, as well as for aesthetic design. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. Small numbers are sensed directly, high numbers constructed from size and density.

    PubMed

    Zimmermann, Eckart

    2018-04-01

    Two theories compete to explain how we estimate the numerosity of visual object sets. The first suggests that apparent numerosity is derived from an analysis of more low-level features such as the size and density of the set. The second theory suggests that numbers are sensed directly. Consistent with the latter claim is the existence of neurons in parietal cortex which are specialized for processing the numerosity of elements in the visual scene. However, recent evidence suggests that only low numbers can be sensed directly, whereas the perception of high numbers is supported by the analysis of low-level features. Processing of low and high numbers, being located at different levels of the neural hierarchy, should involve different receptive field sizes. Here, I tested this idea with visual adaptation. I measured the spatial spread of number adaptation for low and high numerosities. A focused adaptation spread for high numerosities suggested the involvement of early neural levels, where receptive fields are comparably small, whereas the broad spread for low numerosities was consistent with processing by number neurons, which have larger receptive fields. These results provide evidence for the claim that different mechanisms exist for generating the perception of visual numerosity. Whereas low numbers are sensed directly as a primary visual attribute, the estimation of high numbers likely depends on the area size over which the objects are spread. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Effect of visual clutter of the primary flight display on pilot performance, workload, and visual scanpath

    NASA Astrophysics Data System (ADS)

    Doyon-Poulin, Philippe

    The flight deck of 21st-century commercial aircraft does not look like the one the Wright brothers used for their first flight. The rapid growth of civilian aviation resulted in an increase in the number of flight deck instruments and in their complexity, in order to complete a safe and on-time flight. However, presenting an abundance of visual information using visually cluttered flight instruments might reduce the pilot's flight performance. Visual clutter has received increased interest from the aerospace community seeking to understand the effects of visual density and information overload on pilots' performance. Aerospace regulations demand that visual clutter of flight deck displays be minimized. Past studies found mixed effects of visual clutter of the primary flight display on pilots' technical flight performance, and more research is needed to better understand this subject. In this thesis, we conducted an experimental study in a flight simulator to test the effects of visual clutter of the primary flight display on the pilot's technical flight performance, mental workload and gaze pattern. First, we identified a gap in existing definitions of visual clutter and proposed a new definition, relevant to the aerospace community, that takes into account the context of use of the display. Then, we showed that past research on the effects of visual clutter of the primary flight display on pilots' performance did not manipulate the variable of visual clutter in a consistent manner: visual clutter was changed at the same time as the flight guidance function, and using a different flight guidance function between displays might have masked the effect of visual clutter on pilots' performance. To solve this issue, we proposed three requirements that all tested displays must satisfy to ensure that only the variable of visual clutter is changed during the study while leaving other variables unaffected. We then designed three primary flight displays with different visual clutter levels (low, medium, high) but with the same flight guidance function, respecting these requirements. Twelve pilots, with a mean experience of over 4000 total flight hours, completed an instrument landing in a flight simulator using all three displays, for a total of nine repetitions. Our results showed that pilots reported lower workload and had better lateral precision during the approach using the medium-clutter display compared to the low- and high-clutter displays. Pilots also reported that the medium-clutter display was the most useful for the flight task compared to the two other displays. Eye tracker results showed that pilots' gaze pattern was less efficient for the high-clutter display compared to the low- and medium-clutter displays. Overall, these new experimental results emphasize the importance of optimizing the visual clutter of flight displays, as it affects both objective and subjective performance of experienced pilots in their flying task. The thesis ends with practical recommendations to help designers optimize the visual clutter of displays used in man-machine interfaces.

  7. Looking away from faces: influence of high-level visual processes on saccade programming.

    PubMed

    Morand, Stéphanie M; Grosbras, Marie-Hélène; Caldara, Roberto; Harvey, Monika

    2010-03-30

    Human faces capture attention more than other visual stimuli. Here we investigated whether such face-specific biases rely on automatic (involuntary) or voluntary orienting responses. To this end, we used an anti-saccade paradigm, which requires the ability to inhibit a reflexive automatic response and to generate a voluntary saccade in the opposite direction of the stimulus. To control for potential low-level confounds in the eye-movement data, we manipulated the high-level visual properties of the stimuli while normalizing their global low-level visual properties. Eye movements were recorded in 21 participants who performed either pro- or anti-saccades to a face, car, or noise pattern, randomly presented to the left or right of a fixation point. For each trial, a symbolic cue instructed the observer to generate either a pro-saccade or an anti-saccade. We report a significant increase in anti-saccade error rates for faces compared to cars and noise patterns, as well as faster pro-saccades to faces and cars in comparison to noise patterns. These results indicate that human faces induce stronger involuntary orienting responses than other visual objects, i.e., responses that are beyond the control of the observer. Importantly, this involuntary processing cannot be accounted for by global low-level visual factors.

  8. Biocharts: a visual formalism for complex biological systems

    PubMed Central

    Kugler, Hillel; Larjo, Antti; Harel, David

    2010-01-01

    We address one of the central issues in devising languages, methods and tools for the modelling and analysis of complex biological systems, that of linking high-level (e.g. intercellular) information with lower-level (e.g. intracellular) information. Adequate ways of dealing with this issue are crucial for understanding biological networks and pathways, which typically contain huge amounts of data that continue to grow as our knowledge and understanding of a system increases. Trying to comprehend such data using the standard methods currently in use is often virtually impossible. We propose a two-tier compound visual language, which we call Biocharts, that is geared towards building fully executable models of biological systems. One of the main goals of our approach is to enable biologists to actively participate in the computational modelling effort, in a natural way. The high-level part of our language is a version of statecharts, which have been shown to be extremely successful in software and systems engineering. The statecharts can be combined with any appropriately well-defined language (preferably a diagrammatic one) for specifying the low-level dynamics of the pathways and networks. We illustrate the language and our general modelling approach using the well-studied process of bacterial chemotaxis. PMID:20022895

  9. Image wavelet decomposition and applications

    NASA Technical Reports Server (NTRS)

    Treil, N.; Mallat, S.; Bajcsy, R.

    1989-01-01

    The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, taking a look at the human visual system can give us an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems ranging from pixel processing to high-level segmentation and complex object recognition. Contrasting an image at different representations provides useful information such as edges. An example of low-level signal and image processing using the theory of wavelets is introduced, which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors. Images of the same orientation but of different resolutions are thus contrasted to gather information about an image. An interesting image representation using energy zero crossings is developed. This representation is shown to be experimentally complete and leads to some higher-level applications such as edge and corner finding, which in turn provide two basic steps toward image segmentation. The possibilities of feedback between different levels of processing are also discussed.
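
    A compact sketch of such a multiresolution decomposition, using a standard separable 2-D discrete wavelet transform from PyWavelets as a stand-in for the oriented multiresolution framework described above (the image and wavelet choice are placeholders):

        import numpy as np
        import pywt  # PyWavelets

        image = np.random.rand(256, 256)                         # placeholder image
        coeffs = pywt.wavedec2(image, wavelet='db2', level=3)    # [approx, (H, V, D) per level]

        approx = coeffs[0]
        for level, (horizontal, vertical, diagonal) in enumerate(coeffs[1:], start=1):
            # Oriented detail subbands act like simple edge detectors at each scale;
            # large-magnitude coefficients mark candidate edge locations.
            edge_energy = np.abs(horizontal) + np.abs(vertical) + np.abs(diagonal)
            print(f"level {level}: detail shape {horizontal.shape}, energy {edge_energy.sum():.1f}")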

  10. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-08-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.
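
    The combination step can be illustrated with a toy sketch in which the three property maps are assumed to be precomputed arrays in [0, 1] and are intersected elementwise; the map contents here are random placeholders rather than outputs of the paper's similarity, SVR, and statistical components.

        import numpy as np

        def saliency_map(feature_prior, position_prior, feature_distribution):
            """Combine three property maps by a simple elementwise intersection
            (product followed by normalization). Inputs are same-shape arrays in [0, 1]."""
            s = feature_prior * position_prior * feature_distribution
            return s / s.max() if s.max() > 0 else s

        h, w = 60, 80
        rows, cols = np.indices((h, w))
        position_prior = np.exp(-(((rows - h / 2) / (h / 3)) ** 2 +
                                  ((cols - w / 2) / (w / 3)) ** 2))   # centre bias
        feature_prior = np.random.rand(h, w)           # stand-in for the learned feature map
        feature_distribution = np.random.rand(h, w)    # stand-in for the rarity/uniqueness map
        smap = saliency_map(feature_prior, position_prior, feature_distribution)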

  11. Application of Frameworks in the Analysis and (Re)design of Interactive Visual Learning Tools

    ERIC Educational Resources Information Center

    Liang, Hai-Ning; Sedig, Kamran

    2009-01-01

    Interactive visual learning tools (IVLTs) are software environments that encode and display information visually and allow learners to interact with the visual information. This article examines the application and utility of frameworks in the analysis and design of IVLTs at the micro level. Frameworks play an important role in any design. They…

  12. The effect of visual representation style in problem-solving: a perspective from cognitive processes.

    PubMed

    Nyamsuren, Enkhbold; Taatgen, Niels A

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a card game of SET. Although subjects used the same strategy in both tasks, the difference in presentation style resulted in radically different reaction times and significant deviations in scanpath patterns in the two tasks. Results from our study indicate that low-level subconscious visual processes, such as differential acuity in peripheral vision and low-level iconic memory, can have indirect, but significant effects on decision making during a problem-solving task. We have developed two ACT-R models that employ the same basic strategy but deal with different presentation styles. Our ACT-R models confirm that changes in low-level visual processes triggered by changes in presentation style can propagate to higher-level cognitive processes. Such a domino effect can significantly affect reaction times and eye movements, without affecting the overall strategy of problem solving.

  13. The Effect of Visual Representation Style in Problem-Solving: A Perspective from Cognitive Processes

    PubMed Central

    Nyamsuren, Enkhbold; Taatgen, Niels A.

    2013-01-01

    Using results from a controlled experiment and simulations based on cognitive models, we show that visual presentation style can have a significant impact on performance in a complex problem-solving task. We compared subject performances in two isomorphic, but visually different, tasks based on a card game of SET. Although subjects used the same strategy in both tasks, the difference in presentation style resulted in radically different reaction times and significant deviations in scanpath patterns in the two tasks. Results from our study indicate that low-level subconscious visual processes, such as differential acuity in peripheral vision and low-level iconic memory, can have indirect, but significant effects on decision making during a problem-solving task. We have developed two ACT-R models that employ the same basic strategy but deal with different presentation styles. Our ACT-R models confirm that changes in low-level visual processes triggered by changes in presentation style can propagate to higher-level cognitive processes. Such a domino effect can significantly affect reaction times and eye movements, without affecting the overall strategy of problem solving. PMID:24260415

  14. The visual divide

    NASA Astrophysics Data System (ADS)

    Maes, Alfons

    2017-04-01

    Climate change is a playground for visualization. Yet research and technological innovations in visual communication and data visualization do not account for a substantial part of the world's population: vulnerable audiences with low levels of literacy.

  15. Cross-modal enhancement of speech detection in young and older adults: does signal content matter?

    PubMed

    Tye-Murray, Nancy; Spehar, Brent; Myerson, Joel; Sommers, Mitchell S; Hale, Sandra

    2011-01-01

    The purpose of the present study was to examine the effects of age and visual content on cross-modal enhancement of auditory speech detection. Visual content consisted of three clearly distinct types of visual information: an unaltered video clip of a talker's face, a low-contrast version of the same clip, and a mouth-like Lissajous figure. It was hypothesized that both young and older adults would exhibit reduced enhancement as visual content diverged from the original clip of the talker's face, but that the decrease would be greater for older participants. Nineteen young adults and 19 older adults were asked to detect a single spoken syllable (/ba/) in speech-shaped noise, and the level of the signal was adaptively varied to establish the signal-to-noise ratio (SNR) at threshold. There was an auditory-only baseline condition and three audiovisual conditions in which the syllable was accompanied by one of the three visual signals (the unaltered clip of the talker's face, the low-contrast version of that clip, or the Lissajous figure). For each audiovisual condition, the SNR at threshold was compared with the SNR at threshold for the auditory-only condition to measure the amount of cross-modal enhancement. Young adults exhibited significant cross-modal enhancement with all three types of visual stimuli, with the greatest amount of enhancement observed for the unaltered clip of the talker's face. Older adults, in contrast, exhibited significant cross-modal enhancement only with the unaltered face. Results of this study suggest that visual signal content affects cross-modal enhancement of speech detection in both young and older adults. They also support a hypothesized age-related deficit in processing low-contrast visual speech stimuli, even in older adults with normal contrast sensitivity.

  16. The visual information system

    Treesearch

    Merlyn J. Paulson

    1979-01-01

    This paper outlines a project-level process (V.I.S.) that utilizes very accurate and flexible computer algorithms in combination with contemporary site analysis and design techniques for visual evaluation, design and management. The process provides logical direction and connecting bridges through problem identification, information collection and verification, visual...

  17. AUVA - Augmented Reality Empowers Visual Analytics to explore Medical Curriculum Data.

    PubMed

    Nifakos, Sokratis; Vaitsis, Christos; Zary, Nabil

    2015-01-01

    Medical curriculum data play a key role in the structure and organization of medical programs in universities around the world. The effective processing and use of these data may improve the educational environment of medical students; as a consequence, new generations of health professionals would have improved skills compared with previous ones. This study introduces the process of enhancing curriculum data through augmented reality technology used as a management and presentation tool. The final goal is to enrich the information presented by a visual analytics approach applied to medical curriculum data while keeping the complexity of understanding these data low.

  18. Computational mechanisms underlying cortical responses to the affordance properties of visual scenes

    PubMed Central

    Epstein, Russell A.

    2018-01-01

    Biologically inspired deep convolutional neural networks (CNNs), trained for computer vision tasks, have been found to predict cortical responses with remarkable accuracy. However, the internal operations of these models remain poorly understood, and the factors that account for their success are unknown. Here we develop a set of techniques for using CNNs to gain insights into the computational mechanisms underlying cortical responses. We focused on responses in the occipital place area (OPA), a scene-selective region of dorsal occipitoparietal cortex. In a previous study, we showed that fMRI activation patterns in the OPA contain information about the navigational affordances of scenes; that is, information about where one can and cannot move within the immediate environment. We hypothesized that this affordance information could be extracted using a set of purely feedforward computations. To test this idea, we examined a deep CNN with a feedforward architecture that had been previously trained for scene classification. We found that responses in the CNN to scene images were highly predictive of fMRI responses in the OPA. Moreover, the CNN accounted for the portion of OPA variance relating to the navigational affordances of scenes. The CNN could thus serve as an image-computable candidate model of affordance-related responses in the OPA. We then ran a series of in silico experiments on this model to gain insights into its internal operations. These analyses showed that the computation of affordance-related features relied heavily on visual information at high spatial frequencies and cardinal orientations, both of which have previously been identified as low-level stimulus preferences of scene-selective visual cortex. These computations also exhibited a strong preference for information in the lower visual field, which is consistent with known retinotopic biases in the OPA. Visualizations of feature selectivity within the CNN suggested that affordance-based responses encoded features that define the layout of the spatial environment, such as boundary-defining junctions and large extended surfaces. Together, these results map the sensory functions of the OPA onto a fully quantitative model that provides insights into its visual computations. More broadly, they advance integrative techniques for understanding visual cortex across multiple levels of analysis: from the identification of cortical sensory functions to the modeling of their underlying algorithms. PMID:29684011
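
    The encoding-model logic sketched in this abstract, predicting region-level fMRI responses from the activations of a feedforward CNN layer, can be illustrated roughly as follows; the array shapes, ridge-regression choice, and random placeholder data are assumptions for illustration, not the study's actual pipeline.

        import numpy as np
        from sklearn.linear_model import RidgeCV
        from sklearn.model_selection import cross_val_predict

        n_images, n_units, n_voxels = 200, 4096, 100
        cnn_features = np.random.rand(n_images, n_units)       # CNN layer activations per image (placeholder)
        voxel_responses = np.random.rand(n_images, n_voxels)   # fMRI responses per image (placeholder)

        model = RidgeCV(alphas=np.logspace(-2, 4, 7))
        predicted = cross_val_predict(model, cnn_features, voxel_responses, cv=5)

        # Correlate predicted and measured responses per voxel as a measure of model fit.
        r = [np.corrcoef(predicted[:, v], voxel_responses[:, v])[0, 1] for v in range(n_voxels)]
        print("median prediction accuracy r =", np.median(r))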

  19. Night vision in barn owls: visual acuity and contrast sensitivity under dark adaptation.

    PubMed

    Orlowski, Julius; Harmening, Wolf; Wagner, Hermann

    2012-12-06

    Barn owls are effective nocturnal predators. We tested their visual performance at low light levels and determined visual acuity and contrast sensitivity of three barn owls by their behavior at stimulus luminances ranging from photopic to fully scotopic levels (23.5 to 1.5 × 10⁻⁶ cd/m²). Contrast sensitivity and visual acuity decreased only slightly from photopic to scotopic conditions. Peak grating acuity was at mesopic (4 × 10⁻² cd/m²) conditions. Barn owls retained a quarter of their maximal acuity when luminance decreased by 5.5 log units. We argue that the visual system of barn owls is designed to yield as much visual acuity under low light conditions as possible, thereby sacrificing resolution at photopic conditions.

  20. Clinically Meaningful Rehabilitation Outcomes of Low Vision Patients Served by Outpatient Clinical Centers.

    PubMed

    Goldstein, Judith E; Jackson, Mary Lou; Fox, Sandra M; Deremeik, James T; Massof, Robert W

    2015-07-01

    To facilitate comparative clinical outcome research in low vision rehabilitation, we must use patient-centered measurements that reflect clinically meaningful changes in visual ability. To quantify the effects of currently provided low vision rehabilitation (LVR) on patients who present for outpatient LVR services in the United States. Prospective, observational study of new patients seeking outpatient LVR services. From April 2008 through May 2011, 779 patients from 28 clinical centers in the United States were enrolled in the Low Vision Rehabilitation Outcomes Study. The Activity Inventory, a visual function questionnaire, was administered to measure overall visual ability and visual ability in 4 functional domains (reading, mobility, visual motor function, and visual information processing) at baseline and 6 to 9 months after usual LVR care. The Geriatric Depression Scale, Telephone Interview for Cognitive Status, and Medical Outcomes Study 36-Item Short-Form Health Survey physical functioning questionnaires were also administered to measure patients' psychological, cognitive, and physical health states, respectively, and clinical findings of patients were provided by study centers. Mean changes in the study population and minimum clinically important differences in the individual in overall visual ability and in visual ability in 4 functional domains as measured by the Activity Inventory. Baseline and post-rehabilitation measures were obtained for 468 patients. Minimum clinically important differences (95% CIs) were observed in nearly half (47% [95% CI, 44%-50%]) of patients in overall visual ability. The prevalence rates of patients with minimum clinically important differences in visual ability in functional domains were reading (44% [95% CI, 42%-48%]), visual motor function (38% [95% CI, 36%-42%]), visual information processing (33% [95% CI, 31%-37%]), and mobility (27% [95% CI, 25%-31%]). The largest average effect size (Cohen d = 0.87) for the population was observed in overall visual ability. Age (P = .006) was an independent predictor of changes in overall visual ability, and logMAR visual acuity (P = .002) was predictive of changes in visual information processing. Forty-four to fifty percent of patients presenting for outpatient LVR show clinically meaningful differences in overall visual ability after LVR, and the average effect sizes in overall visual ability are large, close to 1 SD.

  1. Greater magnocellular saccadic suppression in high versus low autistic tendency suggests a causal path to local perceptual style.

    PubMed

    Crewther, David P; Crewther, Daniel; Bevan, Stephanie; Goodale, Melvyn A; Crewther, Sheila G

    2015-12-01

    Saccadic suppression, the reduction of visual sensitivity during rapid eye movements, has previously been proposed to reflect a specific suppression of the magnocellular visual system, with the initial neural site of that suppression at or prior to afferent visual information reaching striate cortex. Dysfunction in the magnocellular visual pathway has also been associated with perceptual and physiological anomalies in individuals with autism spectrum disorder or high autistic tendency, leading us to question whether saccadic suppression is altered in the broader autism phenotype. Here we show that individuals with high autistic tendency show greater saccadic suppression of low versus high spatial frequency gratings while those with low autistic tendency do not. In addition, those with high but not low autism spectrum quotient (AQ) demonstrated pre-cortical (35-45 ms) evoked potential differences (saccade versus fixation) to a large, low contrast, pseudo-randomly flashing bar. Both AQ groups showed similar differential visual evoked potential effects in later epochs (80-160 ms) at high contrast. Thus, the magnocellular theory of saccadic suppression appears untenable as a general description for the typically developing population. Our results also suggest that the bias towards local perceptual style reported in autism may be due to selective suppression of low spatial frequency information accompanying every saccadic eye movement.

  2. Modeling and visualizing borehole information on virtual globes using KML

    NASA Astrophysics Data System (ADS)

    Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing

    2014-01-01

    Advances in virtual globes and Keyhole Markup Language (KML) are providing the Earth scientists with the universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts and tube models representing strata. Subsequently, the level-of-detail based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, is developed to automatically convert borehole data into KML documents. A case study of using Borehole2KML to create borehole models in Shanghai shows that the modeling method is applicable to visualize, integrate and disseminate borehole information on the Internet. The method we have developed has potential use in societal service of geological information.
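
    The conversion a tool such as Borehole2KML performs can be pictured with a minimal sketch that writes one KML placemark per borehole location; the record fields and file name below are illustrative, not the paper's database schema.

        # Minimal KML-generation sketch (illustrative field names, not the paper's schema).
        boreholes = [
            {"id": "BH-001", "lon": 121.47, "lat": 31.23, "depth_m": 45.0},
            {"id": "BH-002", "lon": 121.50, "lat": 31.25, "depth_m": 60.5},
        ]

        placemarks = "\n".join(
            "  <Placemark>\n"
            f"    <name>{b['id']}</name>\n"
            f"    <description>Total depth: {b['depth_m']} m</description>\n"
            f"    <Point><coordinates>{b['lon']},{b['lat']},0</coordinates></Point>\n"
            "  </Placemark>"
            for b in boreholes
        )

        kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
               '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
               f'<Document>\n{placemarks}\n</Document>\n</kml>')

        with open("boreholes.kml", "w", encoding="utf-8") as f:
            f.write(kml)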

  3. Navigational Efficiency of Nocturnal Myrmecia Ants Suffers at Low Light Levels

    PubMed Central

    Narendra, Ajay; Reid, Samuel F.; Raderschall, Chloé A.

    2013-01-01

    Insects face the challenge of navigating to specific goals in both bright sun-lit and dim-lit environments. Both diurnal and nocturnal insects use quite similar navigation strategies. This is despite the signal-to-noise ratio of the navigational cues being poor in low light conditions. To better understand the evolution of nocturnal life, we investigated the navigational efficiency of a nocturnal ant, Myrmecia pyriformis, at different light levels. Workers of M. pyriformis leave the nest individually in a narrow light-window in the evening twilight to forage on nest-specific Eucalyptus trees. The majority of foragers return to the nest in the morning twilight, while a few attempt to return to the nest throughout the night. We found that as light levels dropped, ants paused for longer, walked more slowly, their success in finding the nest declined, and their paths became less straight. We found that in both bright and dark conditions ants relied predominantly on visual landmark information for navigation and that landmark guidance became less reliable in low light conditions. It is perhaps due to the poor navigational efficiency at low light levels that the majority of foragers restrict navigational tasks to the twilight periods, where sufficient navigational information is still available. PMID:23484052

  4. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    PubMed Central

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  5. Figure-ground organization and the emergence of proto-objects in the visual cortex.

    PubMed

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields (CRF), but in addition their responses are modulated (enhanced or suppressed) depending on the location of a 'figure' relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the CRF. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations ('proto-objects'). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objectness, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence, and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex.

  6. Figure–ground organization and the emergence of proto-objects in the visual cortex

    PubMed Central

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields (CRF), but in addition their responses are modulated (enhanced or suppressed) depending on the location of a ‘figure’ relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the CRF. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations (‘proto-objects’). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objectness, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence, and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex. PMID:26579062

  7. The Efficacy of a Low-Level Program Visualization Tool for Teaching Programming Concepts to Novice C Programmers.

    ERIC Educational Resources Information Center

    Smith, Philip A.; Webb, Geoffrey I.

    2000-01-01

    Describes "Glass-box Interpreter" a low-level program visualization tool called Bradman designed to provide a conceptual model of C program execution for novice programmers and makes visible aspects of the programming process normally hidden from the user. Presents an experiment that tests the efficacy of Bradman, and provides…

  8. Coherence and interlimb force control: Effects of visual gain.

    PubMed

    Kang, Nyeonju; Cauraugh, James H

    2018-03-06

    Neural coupling across hemispheres and homologous muscles often appears during bimanual motor control. Force coupling in a specific frequency domain may indicate specific bimanual force coordination patterns. This study investigated coherence between the bimanual isometric index finger forces while manipulating visual gain and task asymmetry conditions. We used two visual gain conditions (low and high gain = 8 and 512 pixels/N), and created task asymmetry by manipulating coefficient ratios imposed on the left and right index finger forces (0.4:1.6; 1:1; 1.6:0.4, respectively). Unequal coefficient ratios required different contributions from each hand to the bimanual force task, resulting in force asymmetry. Fourteen healthy young adults performed bimanual isometric force control at 20% of their maximal level of the summed force of both fingers. We quantified peak coherence and relative phase angle between hands at 0-4, 4-8, and 8-12 Hz, and estimated a signal-to-noise ratio of bimanual forces. The findings revealed higher peak coherence and relative phase angle at 0-4 Hz than at 4-8 and 8-12 Hz for both visual gain conditions. Further, peak coherence and relative phase angle values at 0-4 Hz were larger at the high gain than at the low gain. At the high gain, higher peak coherence at 0-4 Hz, collapsed across task asymmetry conditions, significantly predicted a greater signal-to-noise ratio. These findings indicate that a greater level of visual information facilitates bimanual force coupling at a specific frequency range related to sensorimotor processing. Copyright © 2018 Elsevier B.V. All rights reserved.
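
    For illustration, band-limited force coherence of the kind quantified here could be computed roughly as follows; the sampling rate, segment length, and placeholder force signals are assumptions, and this is not the authors' analysis code.

        import numpy as np
        from scipy.signal import coherence

        fs = 100.0                                     # assumed sampling rate (Hz)
        t = np.arange(0, 20, 1 / fs)
        left = np.random.randn(t.size)                 # placeholder left-finger force
        right = 0.5 * left + np.random.randn(t.size)   # placeholder right-finger force

        freqs, coh = coherence(left, right, fs=fs, nperseg=256)   # magnitude-squared coherence
        for lo, hi in [(0, 4), (4, 8), (8, 12)]:
            band = (freqs >= lo) & (freqs < hi)
            print(f"{lo}-{hi} Hz: peak coherence = {coh[band].max():.2f}")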

  9. Decoding visual object categories from temporal correlations of ECoG signals.

    PubMed

    Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu

    2014-04-15

    How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features, and compared the decoding performance with features defined by spectral power and phase from individual electrodes. While decoding accuracy using power or phase alone was significantly better than chance, correlations alone or combined with power outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
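
    The correlation-feature idea can be sketched as follows, assuming each trial is an electrodes-by-time array whose inter-electrode correlation matrix (upper triangle) serves as the feature vector for a linear classifier; trial counts, electrode counts, and the random placeholder data are illustrative, and this is not the authors' pipeline.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        def correlation_features(trial):
            """Upper triangle of the inter-electrode correlation matrix for one trial."""
            corr = np.corrcoef(trial)                    # electrodes x electrodes
            iu = np.triu_indices_from(corr, k=1)
            return corr[iu]

        n_trials, n_electrodes, n_samples = 120, 16, 500
        trials = np.random.randn(n_trials, n_electrodes, n_samples)   # placeholder ECoG data
        labels = np.random.randint(0, 2, n_trials)                    # two object categories

        X = np.array([correlation_features(tr) for tr in trials])
        acc = cross_val_score(LinearSVC(max_iter=5000), X, labels, cv=5).mean()
        print("cross-validated decoding accuracy:", acc)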

  10. Multi-modal distraction: insights from children's limited attention.

    PubMed

    Matusz, Pawel J; Broadbent, Hannah; Ferrari, Jessica; Forrest, Benjamin; Merkley, Rebecca; Scerif, Gaia

    2015-03-01

    How does the multi-sensory nature of stimuli influence information processing? Cognitive systems with limited selective attention can elucidate these processes. Six-year-olds, 11-year-olds and 20-year-olds engaged in a visual search task that required them to detect a pre-defined coloured shape under conditions of low or high visual perceptual load. On each trial, a peripheral distractor that could be either compatible or incompatible with the current target colour was presented either visually, auditorily or audiovisually. Unlike unimodal distractors, audiovisual distractors elicited reliable compatibility effects across the two levels of load in adults and in the older children, but high visual load significantly reduced distraction for all children, especially the youngest participants. This study provides the first demonstration that multi-sensory distraction has powerful effects on selective attention: Adults and older children alike allocate attention to potentially relevant information across multiple senses. However, poorer attentional resources can, paradoxically, shield the youngest children from the deleterious effects of multi-sensory distraction. Furthermore, we highlight how developmental research can enrich the understanding of distinct mechanisms controlling adult selective attention in multi-sensory environments. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    PubMed

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.

  12. Patterns and comparisons of human-induced changes in river flood impacts in cities

    NASA Astrophysics Data System (ADS)

    Clark, Stephanie; Sharma, Ashish; Sisson, Scott A.

    2018-03-01

    In this study, information extracted from the first global urban fluvial flood risk data set (Aqueduct) is investigated and visualized to explore current and projected city-level flood impacts driven by urbanization and climate change. We use a novel adaptation of the self-organizing map (SOM) method, an artificial neural network proficient at clustering, pattern extraction, and visualization of large, multi-dimensional data sets. Prevalent patterns of current relationships and anticipated changes over time in the nonlinearly-related environmental and social variables are presented, relating urban river flood impacts to socioeconomic development and changing hydrologic conditions. Comparisons are provided among 98 individual cities. Output visualizations compare baseline and changing trends of city-specific exposures of population and property to river flooding, revealing relationships between the cities based on their relative map placements. Cities experiencing high (or low) baseline flood impacts on population and/or property that are expected to improve (or worsen), as a result of anticipated climate change and development, are identified and compared. This paper condenses and conveys large amounts of information through visual communication to accelerate the understanding of relationships between local urban conditions and global processes.
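
    For readers unfamiliar with the SOM idea, the sketch below implements a minimal self-organizing map in NumPy; it is not the Aqueduct analysis itself, and the grid size, learning-rate schedule, and placeholder city indicators are assumptions.

```python
# Minimal self-organizing map: cluster city-level indicator vectors onto a 2-D grid.
import numpy as np

rng = np.random.default_rng(1)
cities = rng.random((98, 5))              # 98 cities x 5 standardized indicators (placeholder)

rows, cols, dim = 6, 6, cities.shape[1]
weights = rng.random((rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

def winner(x):
    """Grid coordinates of the unit whose weight vector is closest to x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(d.argmin(), d.shape)

for epoch in range(200):
    lr = 0.5 * np.exp(-epoch / 100)        # decaying learning rate
    sigma = 2.0 * np.exp(-epoch / 100)     # decaying neighbourhood radius
    for x in cities[rng.permutation(len(cities))]:
        w = np.array(winner(x))
        dist2 = ((grid - w) ** 2).sum(axis=-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # neighbourhood function
        weights += lr * h * (x - weights)

# Each city's map position: nearby cities share similar flood-impact profiles.
positions = [winner(x) for x in cities]
```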

  13. Company and Below Command and Control Information Exchange Study

    DTIC Science & Technology

    2007-10-22

    between text and graphical forms of visual communication as well. With limited exceptions, verbal/auditory communication remains the best choice...Text and graphics. At the squad level and above visual communication system is best for complex information and/or less time critical information...Battalion o 13.2.2 Request Casualty Evacuation (CASEVAC) from Battalion Best: A mixed auditory/ visual communication would be ideal for a CASEVAC

  14. Planning of visually guided reach-to-grasp movements: inference from reaction time and contingent negative variation (CNV).

    PubMed

    Zaepffel, Manuel; Brochier, Thomas

    2012-01-01

    We performed electroencephalogram (EEG) recording in a precuing task to investigate the planning processes of reach-to-grasp movements in humans. In this reaction time (RT) task, subjects had to reach, grasp, and pull an object as fast as possible after a visual GO signal. We manipulated two parameters: the hand shape for grasping (precision grip or side grip) and the force required to pull the object (high or low). Three seconds before the GO onset, a cue provided advance information about force, grip, both parameters, or no information at all. EEG data show that reach-to-grasp movements generate differences in the topographic distribution of the late Contingent Negative Variation (lCNV) amplitude between the 4 precuing conditions. Along with the RT data, this confirms that two distinct functional networks are involved with different time courses in the planning of grip and force. Finally, we outline the composite nature of the lCNV that might reflect both high- and low-level planning processes. Copyright © 2011 Society for Psychophysiological Research.

  15. Exploring the Impact of Visual Complexity Levels in 3d City Models on the Accuracy of Individuals' Orientation and Cognitive Maps

    NASA Astrophysics Data System (ADS)

    Rautenbach, V.; Çöltekin, A.; Coetzee, S.

    2015-08-01

    In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of complexity (mainly based on levels of detail, i.e., LoD) in 3D city models, specifically on the participants' orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory where participants (among other things) were given orientation tasks, and in one case also produced sketches of a path they `travelled' in a virtual environment. The experiments were conducted in groups, where individuals provided responses on an answer sheet. The preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed based on a virtual experience. Building an accurate cognitive map is an inherently desired effect of the visualizations in planning tasks, thus the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development planning (urban planning), one of the domains where 3D city models are most relevant.

  16. [Prevalence of low visual acuity and ophthalmological disorders in six-year-old children from Santa Fe city].

    PubMed

    Verrone, Pablo J; Simi, Marcelo R

    2008-08-01

    Untreated changes in children's visual acuity carry a high risk of irreversible consequences. The aim was to determine the prevalence of low visual acuity and to diagnose the ophthalmologic diseases that cause it in six-year-old children from Santa Fe City, Argentina. Observational, descriptive, cross-sectional design. Visual acuity is defined as the eye's capacity to distinguish separate points and to recognize shapes. It was determined using the Snellen chart for distance vision in 177 six-year-old children who attended four elementary schools in Santa Fe City. An ophthalmologic examination was performed on those with low visual acuity, and their mothers were interviewed to ascertain the children's pathological background. The prevalence of low visual acuity was 10.7% (n = 19). The prevalence of amblyopia was 3.9%. Refraction errors were the only cause of low visual acuity; astigmatism was the most frequent. The most frequent pathological backgrounds were ocular infections, premature birth, history of malnutrition, and maternal use of tobacco. The prevalence of low visual acuity found in this study is lower than that reported in most other studies. These data require confirmation by further studies.

  17. Regions of mid-level human visual cortex sensitive to the global coherence of local image patches.

    PubMed

    Mannion, Damien J; Kersten, Daniel J; Olman, Cheryl A

    2014-08-01

    The global structural arrangement and spatial layout of the visual environment must be derived from the integration of local signals represented in the lower tiers of the visual system. This interaction between the spatially local and global properties of visual stimulation underlies many of our visual capacities, and how this is achieved in the brain is a central question for visual and cognitive neuroscience. Here, we examine the sensitivity of regions of the posterior human brain to the global coordination of spatially displaced naturalistic image patches. We presented observers with image patches in two circular apertures to the left and right of central fixation, with the patches drawn from either the same (coherent condition) or different (noncoherent condition) extended image. Using fMRI at 7T (n = 5), we find that global coherence affected signal amplitude in regions of dorsal mid-level cortex. Furthermore, we find that extensive regions of mid-level visual cortex contained information in their local activity pattern that could discriminate coherent and noncoherent stimuli. These findings indicate that the global coordination of local naturalistic image information has important consequences for the processing in human mid-level visual cortex.

  18. Oscillatory encoding of visual stimulus familiarity.

    PubMed

    Kissinger, Samuel T; Pak, Alexandr; Tang, Yu; Masmanidis, Sotiris C; Chubykin, Alexander A

    2018-06-18

    Familiarity of the environment changes the way we perceive and encode incoming information. However, the neural substrates underlying this phenomenon are poorly understood. Here we describe a new form of experience-dependent low frequency oscillations in the primary visual cortex (V1) of awake adult male mice. The oscillations emerged in visually evoked potentials (VEPs) and single-unit activity following repeated visual stimulation. The oscillations were sensitive to the spatial frequency content of a visual stimulus and required the muscarinic acetylcholine receptors (mAChRs) for their induction and expression. Finally, ongoing visually evoked theta (4-6 Hz) oscillations boost the VEP amplitude of incoming visual stimuli if the stimuli are presented at the high excitability phase of the oscillations. Our results demonstrate that an oscillatory code can be used to encode familiarity and serves as a gate for oncoming sensory inputs. Significance Statement. Previous experience can influence the processing of incoming sensory information by the brain and alter perception. However, the mechanistic understanding of how this process takes place is lacking. We have discovered that persistent low frequency oscillations in the primary visual cortex encode information about familiarity and the spatial frequency of the stimulus. These familiarity evoked oscillations influence neuronal responses to the oncoming stimuli in a way that depends on the oscillation phase. Our work demonstrates a new mechanism of visual stimulus feature detection and learning. Copyright © 2018 the authors.

  19. Learner-Information Interaction: A Macro-Level Framework Characterizing Visual Cognitive Tools

    ERIC Educational Resources Information Center

    Sedig, Kamran; Liang, Hai-Ning

    2008-01-01

    Visual cognitive tools (VCTs) are external mental aids that maintain and display visual representations (VRs) of information (i.e., structures, objects, concepts, ideas, and problems). VCTs allow learners to operate upon the VRs to perform epistemic (i.e., reasoning and knowledge-based) activities. In VCTs, the mechanism by which learners operate…

  20. View Combination: A Generalization Mechanism for Visual Recognition

    ERIC Educational Resources Information Center

    Friedman, Alinda; Waller, David; Thrash, Tyler; Greenauer, Nathan; Hodgson, Eric

    2011-01-01

    We examined whether view combination mechanisms shown to underlie object and scene recognition can integrate visual information across views that have little or no three-dimensional information at either the object or scene level. In three experiments, people learned four "views" of a two dimensional visual array derived from a three-dimensional…

  1. Contributions of Low and High Spatial Frequency Processing to Impaired Object Recognition Circuitry in Schizophrenia

    PubMed Central

    Calderone, Daniel J.; Hoptman, Matthew J.; Martínez, Antígona; Nair-Collins, Sangeeta; Mauro, Cristina J.; Bar, Moshe; Javitt, Daniel C.; Butler, Pamela D.

    2013-01-01

    Patients with schizophrenia exhibit cognitive and sensory impairment, and object recognition deficits have been linked to sensory deficits. The “frame and fill” model of object recognition posits that low spatial frequency (LSF) information rapidly reaches the prefrontal cortex (PFC) and creates a general shape of an object that feeds back to the ventral temporal cortex to assist object recognition. Visual dysfunction findings in schizophrenia suggest a preferential loss of LSF information. This study used functional magnetic resonance imaging (fMRI) and resting state functional connectivity (RSFC) to investigate the contribution of visual deficits to impaired object “framing” circuitry in schizophrenia. Participants were shown object stimuli that were intact or contained only LSF or high spatial frequency (HSF) information. For controls, fMRI revealed preferential activation to LSF information in precuneus, superior temporal, and medial and dorsolateral PFC areas, whereas patients showed a preference for HSF information or no preference. RSFC revealed a lack of connectivity between early visual areas and PFC for patients. These results demonstrate impaired processing of LSF information during object recognition in schizophrenia, with patients instead displaying increased processing of HSF information. This is consistent with findings of a preference for local over global visual information in schizophrenia. PMID:22735157

  2. Pain-related fear of (re-)injury in patients with low back pain: Estimation or measurement in manual therapy primary care practice? A pilot study.

    PubMed

    Oostendorp, Rob A B; Elvers, Hans; Mikolajewska, Emilia; Laekeman, Marjan; Roussel, Nathalie; van der Zanden, Olaf; Nijs, Jo; Samwel, Han

    2017-11-06

    Manual physical therapists (MPTs) working in primary care get limited information about patient's courses of (chronic) low back pain (LBP). Identification of kinesiophobia is mostly based on clinical perception. The aim of this study was to evaluate the association between the scores with which manual physical therapists in a primary care setting identify kinesiophobia in patients with low back pain, and the patients' self-reported measures of kinesiophobia. The cross-sectional study comprised 104 patients with LBP and 17 MPTs. Patients first independently completed the Tampa Scale for Kinesiophobia (TSK-17). The therapists, blinded to the TSK-scores, rated their perception of a patient's kinesiophobia using the Visual Analogue Scale-Estimation (VAS-est) and the accuracy of their ratings using the Visual Analogue Scale-Accuracy (VAS-ac). Kendall's tau b was used to determine the level of correlation between scores on the TSK-17 and the VAS-est.
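
    For reference, Kendall's tau-b between a therapist estimate and a patient-reported score can be computed as below; the numbers are hypothetical and are not study data.

```python
# Kendall's tau-b between therapist VAS estimates and patient TSK-17 scores (illustrative).
from scipy.stats import kendalltau

tsk17 = [28, 35, 41, 22, 39, 30, 44, 26]              # patient TSK-17 totals (hypothetical)
vas_est = [3.0, 5.5, 7.0, 2.0, 6.0, 4.0, 8.0, 2.5]    # therapist VAS estimates (hypothetical)

tau, p = kendalltau(tsk17, vas_est)                   # tau-b (handles ties) by default
print(f"Kendall tau-b = {tau:.2f}, p = {p:.3f}")
```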

  3. Complexity Level Analysis Revisited: What Can 30 Years of Hindsight Tell Us about How the Brain Might Represent Visual Information?

    PubMed Central

    Tsotsos, John K.

    2017-01-01

    Much has been written about how the biological brain might represent and process visual information, and how this might inspire and inform machine vision systems. Indeed, tremendous progress has been made, especially during the last decade in the latter area. However, a key question seems too often, if not mostly, to be ignored. This question is simply: do proposed solutions scale with the reality of the brain's resources? This scaling question applies equally to brain and to machine solutions. A number of papers have examined the inherent computational difficulty of visual information processing using theoretical and empirical methods. The main goal of this activity had three components: to understand the deep nature of the computational problem of visual information processing; to discover how well the computational difficulty of vision matches the fixed resources of biological seeing systems; and to abstract from the matching exercise the key principles that lead to the observed characteristics of biological visual performance. This set of components was termed complexity level analysis in Tsotsos (1987) and was proposed as an important complement to Marr's three levels of analysis. This paper revisits that work with the advantage that decades of hindsight can provide. PMID:28848458

  4. Complexity Level Analysis Revisited: What Can 30 Years of Hindsight Tell Us about How the Brain Might Represent Visual Information?

    PubMed

    Tsotsos, John K

    2017-01-01

    Much has been written about how the biological brain might represent and process visual information, and how this might inspire and inform machine vision systems. Indeed, tremendous progress has been made, especially during the last decade in the latter area. However, a key question seems too often, if not mostly, to be ignored. This question is simply: do proposed solutions scale with the reality of the brain's resources? This scaling question applies equally to brain and to machine solutions. A number of papers have examined the inherent computational difficulty of visual information processing using theoretical and empirical methods. The main goal of this activity had three components: to understand the deep nature of the computational problem of visual information processing; to discover how well the computational difficulty of vision matches the fixed resources of biological seeing systems; and to abstract from the matching exercise the key principles that lead to the observed characteristics of biological visual performance. This set of components was termed complexity level analysis in Tsotsos (1987) and was proposed as an important complement to Marr's three levels of analysis. This paper revisits that work with the advantage that decades of hindsight can provide.

  5. Bullying in German Adolescents: Attending Special School for Students with Visual Impairment

    ERIC Educational Resources Information Center

    Pinquart, Martin; Pfeiffer, Jens P.

    2011-01-01

    The present study analysed bullying in German adolescents with and without visual impairment. Ninety-eight adolescents with vision loss from schools for students with visual impairment, of whom 31 were blind and 67 had low vision, were compared with 98 sighted peers using a matched-pair design. Students with low vision reported higher levels of…

  6. Greater magnocellular saccadic suppression in high versus low autistic tendency suggests a causal path to local perceptual style

    PubMed Central

    Crewther, David P.; Crewther, Daniel; Bevan, Stephanie; Goodale, Melvyn A.; Crewther, Sheila G.

    2015-01-01

    Saccadic suppression—the reduction of visual sensitivity during rapid eye movements—has previously been proposed to reflect a specific suppression of the magnocellular visual system, with the initial neural site of that suppression at or prior to afferent visual information reaching striate cortex. Dysfunction in the magnocellular visual pathway has also been associated with perceptual and physiological anomalies in individuals with autism spectrum disorder or high autistic tendency, leading us to question whether saccadic suppression is altered in the broader autism phenotype. Here we show that individuals with high autistic tendency show greater saccadic suppression of low versus high spatial frequency gratings while those with low autistic tendency do not. In addition, those with high but not low autism spectrum quotient (AQ) demonstrated pre-cortical (35–45 ms) evoked potential differences (saccade versus fixation) to a large, low contrast, pseudo-randomly flashing bar. Both AQ groups showed similar differential visual evoked potential effects in later epochs (80–160 ms) at high contrast. Thus, the magnocellular theory of saccadic suppression appears untenable as a general description for the typically developing population. Our results also suggest that the bias towards local perceptual style reported in autism may be due to selective suppression of low spatial frequency information accompanying every saccadic eye movement. PMID:27019719

  7. How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation.

    PubMed

    Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul

    2016-02-01

    The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
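
    A simple view-matching sketch in the spirit of the simulations described above: roll a low-resolution panoramic view across azimuth and take the rotation with the smallest RMS difference from the stored view. The image size and angular resolution are illustrative assumptions.

```python
# Rotational image-difference view matching for heading estimation (illustrative).
import numpy as np

def best_heading(current, stored):
    """Return the column shift (azimuth bin) minimising the RMS image difference."""
    n_cols = current.shape[1]
    errors = [np.sqrt(np.mean((np.roll(current, s, axis=1) - stored) ** 2))
              for s in range(n_cols)]
    return int(np.argmin(errors)), errors

rng = np.random.default_rng(2)
stored = rng.random((10, 72))                 # 10 elevation x 72 azimuth pixels (5 deg/pixel)
current = np.roll(stored, -12, axis=1)        # same scene viewed 60 deg off the stored heading
shift, _ = best_heading(current, stored)
print(f"recovered rotation: {shift * 5} degrees")   # 12 bins * 5 deg = 60 deg
```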

  8. The Impact of Density and Ratio on Object-Ensemble Representation in Human Anterior-Medial Ventral Visual Cortex.

    PubMed

    Cant, Jonathan S; Xu, Yaoda

    2015-11-01

    Behavioral research has demonstrated that observers can extract summary statistics from ensembles of multiple objects. We recently showed that a region of anterior-medial ventral visual cortex, overlapping largely with the scene-sensitive parahippocampal place area (PPA), participates in object-ensemble representation. Here we investigated the encoding of ensemble density in this brain region using fMRI-adaptation. In Experiment 1, we varied density by changing the spacing between objects and found no sensitivity in PPA to such density changes. Thus, density may not be encoded in PPA, possibly because object spacing is not perceived as an intrinsic ensemble property. In Experiment 2, we varied relative density by changing the ratio of 2 types of objects comprising an ensemble, and observed significant sensitivity in PPA to such ratio change. Although colorful ensembles were shown in Experiment 2, Experiment 3 demonstrated that sensitivity to object ratio change was not driven mainly by a change in the ratio of colors. Thus, while anterior-medial ventral visual cortex is insensitive to density (object spacing) changes, it does code relative density (object ratio) within an ensemble. Object-ensemble processing in this region may thus depend on high-level visual information, such as object ratio, rather than low-level information, such as spacing/spatial frequency. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Objectively measured physical activity in Brazilians with visual impairment: description and associated factors.

    PubMed

    Barbosa Porcellis da Silva, Rafael; Marques, Alexandre Carriconde; Reichert, Felipe Fossati

    2017-05-19

    A low level of physical activity is a serious health issue in individuals with visual impairment. Few studies have objectively measured physical activity in this population group, particularly outside high-income countries. The aim of this study was to describe physical activity measured by accelerometry and its associated factors in Brazilian adults with visual impairment. In a cross-sectional design, 90 adults (18-95 years old) answered a questionnaire and wore an accelerometer for at least 3 days (including one weekend day) to measure physical activity (min/day). Sixty percent of the individuals practiced at least 30 min/day of moderate-to-vigorous physical activity. Individuals who were blind were less active, spent more time in sedentary activities, and spent less time in moderate and vigorous activities than those with low vision. Individuals who walked mainly without any assistance were more active, spent less time in sedentary activities, and spent more time in light and moderate activities than those who walked with a long cane or sighted guide. Our data highlight factors associated with lower levels of physical activity in people with visual impairment. These factors, such as being blind and walking with assistance, should be tackled in interventions to increase physical activity levels among individuals with visual impairment. Implications for Rehabilitation: Physical inactivity worldwide is a serious health issue in people with visual impairments, and specialized institutions and public policies must work to increase the physical activity level of this population. Those with lower visual acuity and those walking with any aid are at a higher risk of having low levels of physical activity. The association between visual response profile, living with visual impairment for less than 11 years, and PA levels deserves further investigation. Findings of the present study provide reliable data to support rehabilitation programs, noting the need to pay special attention to the subgroups that are even more likely to be inactive.

  10. Speed but not amplitude of visual feedback exacerbates force variability in older adults.

    PubMed

    Kim, Changki; Yacoubi, Basma; Christou, Evangelos A

    2018-06-23

    Magnification of visual feedback (VF) impairs force control in older adults. In this study, we aimed to determine whether the age-associated increase in force variability with magnification of visual feedback is a consequence of increased amplitude or speed of visual feedback. Seventeen young and 18 older adults performed a constant isometric force task with the index finger at 5% of MVC. We manipulated the vertical (force gain) and horizontal (time gain) aspect of the visual feedback so participants performed the task with the following VF conditions: (1) high amplitude-fast speed; (2) low amplitude-slow speed; (3) high amplitude-slow speed. Changing the visual feedback from low amplitude-slow speed to high amplitude-fast speed increased force variability in older adults but decreased it in young adults (P < 0.01). Changing the visual feedback from low amplitude-slow speed to high amplitude-slow speed did not alter force variability in older adults (P > 0.2), but decreased it in young adults (P < 0.01). Changing the visual feedback from high amplitude-slow speed to high amplitude-fast speed increased force variability in older adults (P < 0.01) but did not alter force variability in young adults (P > 0.2). In summary, increased force variability in older adults with magnification of visual feedback was evident only when the speed of visual feedback increased. Thus, we conclude that in older adults deficits in the rate of processing visual information and not deficits in the processing of more visual information impair force control.

  11. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.
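
    A hedged sketch of a learning-based saliency model that includes a depth feature alongside low-level 2D features; the feature maps, fixation labels, and logistic-regression learner are placeholders for illustration, not the authors' model.

```python
# Learning-based saliency with a depth feature: per-pixel features predict fixation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
h, w = 48, 64
intensity_contrast = rng.random((h, w))        # placeholder low-level feature maps
color_contrast = rng.random((h, w))
depth = rng.random((h, w))                     # computed depth map (assumed: near = large)
fixated = rng.random((h, w)) < 0.1             # placeholder fixation ground truth

X = np.stack([m.ravel() for m in (intensity_contrast, color_contrast, depth)], axis=1)
y = fixated.ravel().astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
saliency = model.predict_proba(X)[:, 1].reshape(h, w)   # learned saliency map
```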

  12. Strengths and Gaps in Physicians' Risk Communication: A Scenario Study of the Influence of Numeracy on Cancer Screening Communication.

    PubMed

    Petrova, Dafina; Kostopoulou, Olga; Delaney, Brendan C; Cokely, Edward T; Garcia-Retamero, Rocio

    2018-04-01

    Many patients have low numeracy, which impedes their understanding of important information about health (e.g., benefits and harms of screening). We investigated whether physicians adapt their risk communication to accommodate the needs of patients with low numeracy, and how physicians' own numeracy influences their understanding and communication of screening statistics. UK family physicians (N = 151) read a description of a patient seeking advice on cancer screening. We manipulated the level of numeracy of the patient (low v. high v. unspecified) and measured physicians' risk communication, recommendation to the patient, understanding of screening statistics, and numeracy. Consistent with best practices, family physicians generally preferred to use visual aids rather than numbers when communicating information to a patient with low (v. high) numeracy. A substantial proportion of physicians (44%) offered high quality (i.e., complete and meaningful) risk communication to the patient. This was more often the case for physicians with higher (v. lower) numeracy who were more likely to mention mortality rates, OR=1.43 [1.10, 1.86], and harms from overdiagnosis, OR=1.44 [1.05, 1.98]. Physicians with higher numeracy were also more likely to understand that increased detection or survival rates do not demonstrate screening effectiveness, OR=1.61 [1.26, 2.06]. Most physicians know how to appropriately tailor risk communication for patients with low numeracy (i.e., with visual aids). However, physicians who themselves have low numeracy are likely to misunderstand the risks and unintentionally mislead patients by communicating incomplete information. High-quality risk communication and shared decision making can depend critically on factors that improve the risk literacy of physicians.

  13. The effect of spatial attention on invisible stimuli.

    PubMed

    Shin, Kilho; Stolte, Moritz; Chong, Sang Chul

    2009-10-01

    The influence of selective attention on visual processing is widespread. Recent studies have demonstrated that spatial attention can affect processing of invisible stimuli. However, it has been suggested that this effect is limited to low-level features, such as line orientations. The present experiments investigated whether spatial attention can influence both low-level (contrast threshold) and high-level (gender discrimination) adaptation, using the same method of attentional modulation for both types of stimuli. We found that spatial attention was able to increase the amount of adaptation to low- as well as to high-level invisible stimuli. These results suggest that attention can influence perceptual processes independent of visual awareness.

  14. The Forest, the Trees, and the Leaves: Differences of Processing across Development

    ERIC Educational Resources Information Center

    Krakowski, Claire-Sara; Poirel, Nicolas; Vidal, Julie; Roëll, Margot; Pineau, Arlette; Borst, Grégoire; Houdé, Olivier

    2016-01-01

    To act and think, children and adults are continually required to ignore irrelevant visual information to focus on task-relevant items. As real-world visual information is organized into structures, we designed a feature visual search task containing 3-level hierarchical stimuli (i.e., local shapes that constituted intermediate shapes that formed…

  15. Luminance, Colour, Viewpoint and Border Enhanced Disparity Energy Model

    PubMed Central

    Martins, Jaime A.; Rodrigues, João M. F.; du Buf, Hans

    2015-01-01

    The visual cortex is able to extract disparity information through the use of binocular cells. This process is reflected by the Disparity Energy Model, which describes the role and functioning of simple and complex binocular neuron populations, and how they are able to extract disparity. This model uses explicit cell parameters to mathematically determine preferred cell disparities, like spatial frequencies, orientations, binocular phases and receptive field positions. However, the brain cannot access such explicit cell parameters; it must rely on cell responses. In this article, we implemented a trained binocular neuronal population, which encodes disparity information implicitly. This allows the population to learn how to decode disparities, in a similar way to how our visual system could have developed this ability during evolution. At the same time, responses of monocular simple and complex cells can also encode line and edge information, which is useful for refining disparities at object borders. The brain should then be able, starting from a low-level disparity draft, to integrate all information, including colour and viewpoint perspective, in order to propagate better estimates to higher cortical areas. PMID:26107954
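
    The binocular energy computation at the core of the Disparity Energy Model can be illustrated in one dimension: quadrature Gabor responses to the left and right signals are summed and squared, and the receptive-field position shift that maximizes the pooled energy gives the preferred disparity. The filter parameters and signals below are assumptions for illustration, not the article's trained population.

```python
# 1-D disparity-energy sketch (position-shift variant).
import numpy as np

x = np.linspace(-2, 2, 401)
sigma, freq = 0.3, 2.0
g_even = np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x)
g_odd = np.exp(-x**2 / (2 * sigma**2)) * np.sin(2 * np.pi * freq * x)

rng = np.random.default_rng(4)
left = rng.standard_normal(2000)
true_disp = 7                                   # right signal shifted by 7 samples
right = np.roll(left, true_disp)

def responses(signal, shift=0):
    s = np.roll(signal, -shift)                 # shift the receptive-field position
    return np.convolve(s, g_even, "same"), np.convolve(s, g_odd, "same")

le, lo = responses(left)
energies = []
for d in range(-15, 16):
    re, ro = responses(right, shift=d)
    # binocular energy: squared sums of the quadrature pair, averaged over space
    energies.append(np.mean((le + re) ** 2 + (lo + ro) ** 2))

print("preferred disparity:", range(-15, 16)[int(np.argmax(energies))])  # ~= true_disp
```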

  16. High-level, but not low-level, motion perception is impaired in patients with schizophrenia.

    PubMed

    Kandil, Farid I; Pedersen, Anya; Wehnes, Jana; Ohrmann, Patricia

    2013-01-01

    Smooth pursuit eye movements are compromised in patients with schizophrenia and their first-degree relatives. Although research has demonstrated that the motor components of smooth pursuit eye movements are intact, motion perception has been shown to be impaired. In particular, studies have consistently revealed deficits in performance on tasks specific to the high-order motion area V5 (middle temporal area, MT) in patients with schizophrenia. In contrast, data from low-level motion detectors in the primary visual cortex (V1) have been inconsistent. To differentiate between low-level and high-level visual motion processing, we applied a temporal-order judgment task for motion events and a motion-defined figure-ground segregation task using patients with schizophrenia and healthy controls. Successful judgments in both tasks rely on the same low-level motion detectors in the V1; however, the first task is further processed in the higher-order motion area MT in the magnocellular (dorsal) pathway, whereas the second task requires subsequent computations in the parvocellular (ventral) pathway in visual area V4 and the inferotemporal cortex (IT). These latter structures are supposed to be intact in schizophrenia. Patients with schizophrenia revealed a significantly impaired temporal resolution on the motion-based temporal-order judgment task but only mild impairment in the motion-based segregation task. These results imply that low-level motion detection in V1 is not, or is only slightly, compromised; furthermore, our data restrain the locus of the well-known deficit in motion detection to areas beyond the primary visual cortex.

  17. Research on image complexity evaluation method based on color information

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively and to find the connection between image complexity and image information, this paper presents a method for computing image complexity based on color information. The theoretical analysis first divides complexity into three subjective levels: low, medium, and high complexity. Image features are then extracted, and finally a function is established between the complexity value and the color feature model. The experimental results show that this evaluation method can objectively reconstruct the complexity of an image from its features, and that the values obtained by the method are in good agreement with the complexity perceived by human vision, so the color-based image complexity measure has reference value.
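
    One simple color-information feature of the kind such a model might use is the entropy of a quantized color histogram; the sketch below is illustrative and is not the paper's exact feature set.

```python
# Colour-complexity score: Shannon entropy of a quantised RGB histogram (illustrative).
import numpy as np

def color_entropy(image, bins_per_channel=8):
    """image: H x W x 3 uint8 array; returns entropy (bits) of the quantised colour histogram."""
    q = (image.astype(np.int64) // (256 // bins_per_channel)).reshape(-1, 3)
    codes = q[:, 0] * bins_per_channel**2 + q[:, 1] * bins_per_channel + q[:, 2]
    counts = np.bincount(codes, minlength=bins_per_channel**3)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(5)
flat = np.full((64, 64, 3), 128, dtype=np.uint8)             # low complexity
noisy = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)    # high complexity
print(color_entropy(flat), color_entropy(noisy))             # small vs. large entropy
```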

  18. Residual perception of biological motion in cortical blindness.

    PubMed

    Ruffieux, Nicolas; Ramon, Meike; Lao, Junpeng; Colombo, Françoise; Stacchi, Lisa; Borruat, François-Xavier; Accolla, Ettore; Annoni, Jean-Marie; Caldara, Roberto

    2016-12-01

    From birth, the human visual system shows a remarkable sensitivity for perceiving biological motion. This visual ability relies on a distributed network of brain regions and can be preserved even after damage of high-level ventral visual areas. However, it remains unknown whether this critical biological skill can withstand the loss of vision following bilateral striate damage. To address this question, we tested the categorization of human and animal biological motion in BC, a rare case of cortical blindness after anoxia-induced bilateral striate damage. The severity of his impairment, encompassing various aspects of vision (i.e., color, shape, face, and object recognition) and causing blind-like behavior, contrasts with a residual ability to process motion. We presented BC with static or dynamic point-light displays (PLDs) of human or animal walkers. These stimuli were presented either individually, or in pairs in two alternative forced choice (2AFC) tasks. When confronted with individual PLDs, the patient was unable to categorize the stimuli, irrespective of whether they were static or dynamic. In the 2AFC task, BC exhibited appropriate eye movements towards diagnostic information, but performed at chance level with static PLDs, in stark contrast to his ability to efficiently categorize dynamic biological agents. This striking ability to categorize biological motion provided top-down information is important for at least two reasons. Firstly, it emphasizes the importance of assessing patients' (visual) abilities across a range of task constraints, which can reveal potential residual abilities that may in turn represent a key feature for patient rehabilitation. Finally, our findings reinforce the view that the neural network processing biological motion can efficiently operate despite severely impaired low-level vision, positing our natural predisposition for processing dynamicity in biological agents as a robust feature of human vision. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Visual temporal processing in dyslexia and the magnocellular deficit theory: the need for speed?

    PubMed

    McLean, Gregor M T; Stuart, Geoffrey W; Coltheart, Veronika; Castles, Anne

    2011-12-01

    A controversial question in reading research is whether dyslexia is associated with impairments in the magnocellular system and, if so, how these low-level visual impairments might affect reading acquisition. This study used a novel chromatic flicker perception task to specifically explore temporal aspects of magnocellular functioning in 40 children with dyslexia and 42 age-matched controls (aged 7-11). The relationship between magnocellular temporal resolution and higher-level aspects of visual temporal processing including inspection time, single and dual-target (attentional blink) RSVP performance, go/no-go reaction time, and rapid naming was also assessed. The Dyslexia group exhibited significant deficits in magnocellular temporal resolution compared with controls, but the two groups did not differ in parvocellular temporal resolution. Despite the significant group differences, associations between magnocellular temporal resolution and reading ability were relatively weak, and links between low-level temporal resolution and reading ability did not appear specific to the magnocellular system. Factor analyses revealed that a collective Perceptual Speed factor, involving both low-level and higher-level visual temporal processing measures, accounted for unique variance in reading ability independently of phonological processing, rapid naming, and general ability.

  20. Lost in the Forest, Stuck in the Trees: Dispositional Global/Local Bias Is Resistant to Exposure to High and Low Spatial Frequencies

    PubMed Central

    Dale, Gillian; Arnell, Karen M.

    2014-01-01

    Visual stimuli can be perceived at a broad, “global” level, or at a more focused, “local” level. While research has shown that many individuals demonstrate a preference for global information, there are large individual differences in the degree of global/local bias, such that some individuals show a large global bias, some show a large local bias, and others show no bias. The main purpose of the current study was to examine whether these dispositional differences in global/local bias could be altered through various manipulations of high/low spatial frequency. Through 5 experiments, we examined various measures of dispositional global/local bias and whether performance on these measures could be altered by manipulating previous exposure to high or low spatial frequency information (with high/low spatial frequency faces, gratings, and Navon letters). Ultimately, there was little evidence of change from pre-to-post manipulation on the dispositional measures, and dispositional global/local bias was highly reliable pre- to post-manipulation. The results provide evidence that individual differences in global/local bias or preference are relatively resistant to exposure to spatial frequency information, and suggest that the processing mechanisms underlying high/low spatial frequency use and global/local bias may be more independent than previously thought. PMID:24992321

  1. Independent sources of anisotropy in visual orientation representation: a visual and a cognitive oblique effect.

    PubMed

    Balikou, Panagiota; Gourtzelidis, Pavlos; Mantas, Asimakis; Moutoussis, Konstantinos; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2015-11-01

    The representation of visual orientation is more accurate for cardinal orientations compared to oblique, and this anisotropy has been hypothesized to reflect a low-level visual process (visual, "class 1" oblique effect). The reproduction of directional and orientation information also leads to a mean error away from cardinal orientations or directions. This anisotropy has been hypothesized to reflect a high-level cognitive process of space categorization (cognitive, "class 2," oblique effect). This space categorization process would be more prominent when the visual representation of orientation degrades, such as in the case of working memory with increasing cognitive load, leading to an increasing magnitude of the "class 2" oblique effect, while the "class 1" oblique effect would remain unchanged. Two experiments were performed in which an array of orientation stimuli (1-4 items) was presented and subjects then had to realign a probe stimulus within the previously presented array. In the first experiment, the delay between stimulus presentation and probe varied, while in the second experiment, the stimulus presentation time varied. The variable error was larger for oblique compared to cardinal orientations in both experiments, reproducing the visual "class 1" oblique effect. The mean error also reproduced the tendency away from cardinal and toward the oblique orientations in both experiments (cognitive "class 2" oblique effect). The accuracy of the reproduced orientation degraded (increasing variable error) and the cognitive "class 2" oblique effect increased with increasing memory load (number of items) in both experiments and with presentation time in the second experiment. In contrast, the visual "class 1" oblique effect was not significantly modulated by any of these experimental factors. These results confirmed the theoretical predictions for the two anisotropies in visual orientation reproduction and provided support for models proposing the categorization of orientation in visual working memory.

  2. Consciousness weaves our internal view of the outside world.

    PubMed

    Gur, Moshe

    2016-01-01

    Low-level consciousness is fundamental to our understanding of the world. Within the conscious field, the constantly changing external visual information is transformed into stable, object-based percepts. Remarkably, holistic objects are perceived while we are cognizant of all of the spatial details comprising the objects and of the relationship between individual elements. This parallel conscious association is unique to the brain. Conscious contributions to motor activity come after our understanding of the world has been established.

  3. Functional MRI Representational Similarity Analysis Reveals a Dissociation between Discriminative and Relative Location Information in the Human Visual System.

    PubMed

    Roth, Zvi N

    2016-01-01

    Neural responses in visual cortex are governed by a topographic mapping from retinal locations to cortical responses. Moreover, at the voxel population level early visual cortex (EVC) activity enables accurate decoding of stimuli locations. However, in many cases information enabling one to discriminate between locations (i.e., discriminative information) may be less relevant than information regarding the relative location of two objects (i.e., relative information). For example, when planning to grab a cup, determining whether the cup is located at the same retinal location as the hand is hardly relevant, whereas the location of the cup relative to the hand is crucial for performing the action. We have previously used multivariate pattern analysis techniques to measure discriminative location information, and found the highest levels in EVC, in line with other studies. Here we show, using representational similarity analysis, that availability of discriminative information in fMRI activation patterns does not entail availability of relative information. Specifically, we find that relative location information can be reliably extracted from activity patterns in posterior intraparietal sulcus (pIPS), but not from EVC, where we find the spatial representation to be warped. We further show that this variability in relative information levels between regions can be explained by a computational model based on an array of receptive fields. Moreover, when the model's receptive fields are extended to include inhibitory surround regions, the model can account for the spatial warping in EVC. These results demonstrate how size and shape properties of receptive fields in human visual cortex contribute to the transformation of discriminative spatial representations into relative spatial representations along the visual stream.
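
    The representational similarity analysis itself follows a standard recipe: build a representational dissimilarity matrix (RDM) per region from condition-by-voxel patterns, then compare the RDMs. The sketch below uses random patterns and assumed region names purely for illustration.

```python
# RSA sketch: region RDMs from condition-by-voxel patterns, then second-order comparison.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
n_conditions = 12                                          # e.g., stimulus locations
evc_patterns = rng.standard_normal((n_conditions, 500))    # voxels in early visual cortex
pips_patterns = rng.standard_normal((n_conditions, 350))   # voxels in posterior IPS

# 1 - Pearson correlation between every pair of condition patterns (vectorised RDM)
rdm_evc = pdist(evc_patterns, metric="correlation")
rdm_pips = pdist(pips_patterns, metric="correlation")

rho, p = spearmanr(rdm_evc, rdm_pips)                      # similarity of the two RDMs
print(f"RDM correlation (EVC vs pIPS): rho = {rho:.2f}, p = {p:.3f}")
```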

  4. Functional MRI Representational Similarity Analysis Reveals a Dissociation between Discriminative and Relative Location Information in the Human Visual System

    PubMed Central

    Roth, Zvi N.

    2016-01-01

    Neural responses in visual cortex are governed by a topographic mapping from retinal locations to cortical responses. Moreover, at the voxel population level early visual cortex (EVC) activity enables accurate decoding of stimuli locations. However, in many cases information enabling one to discriminate between locations (i.e., discriminative information) may be less relevant than information regarding the relative location of two objects (i.e., relative information). For example, when planning to grab a cup, determining whether the cup is located at the same retinal location as the hand is hardly relevant, whereas the location of the cup relative to the hand is crucial for performing the action. We have previously used multivariate pattern analysis techniques to measure discriminative location information, and found the highest levels in EVC, in line with other studies. Here we show, using representational similarity analysis, that availability of discriminative information in fMRI activation patterns does not entail availability of relative information. Specifically, we find that relative location information can be reliably extracted from activity patterns in posterior intraparietal sulcus (pIPS), but not from EVC, where we find the spatial representation to be warped. We further show that this variability in relative information levels between regions can be explained by a computational model based on an array of receptive fields. Moreover, when the model's receptive fields are extended to include inhibitory surround regions, the model can account for the spatial warping in EVC. These results demonstrate how size and shape properties of receptive fields in human visual cortex contribute to the transformation of discriminative spatial representations into relative spatial representations along the visual stream. PMID:27242455

  5. Understanding pictorial information in biology: students' cognitive activities and visual reading strategies

    NASA Astrophysics Data System (ADS)

    Brandstetter, Miriam; Sandmann, Angela; Florian, Christine

    2017-06-01

    In the classroom, scientific content is increasingly communicated through visual forms of representation. Students' learning outcomes rely on their ability to read and understand pictorial information. Understanding pictorial information in biology requires cognitive effort and can be challenging for students. Yet evidence-based knowledge about students' visual reading strategies during the process of understanding pictorial information is still lacking. Therefore, 42 students aged 14-15 were asked to think aloud while trying to understand visual representations of the blood circulatory system and the patellar reflex. A category system was developed differentiating 16 categories of cognitive activities. A Principal Component Analysis revealed two underlying patterns of activities that can be interpreted as visual reading strategies: 1. Inferences predominated by use of a problem-solving schema; 2. Inferences predominated by recall of prior content knowledge. Each pattern consists of a specific set of cognitive activities that reflect selection, organisation and integration of pictorial information as well as different levels of expertise. The results give detailed insights into the cognitive activities of students who were required to understand the pictorial information of complex organ systems. They provide an evidence-based foundation for deriving instructional aids that can promote students' pictorial-information-based learning at different levels of expertise.

  6. Working memory encoding delays top-down attention to visual cortex.

    PubMed

    Scalf, Paige E; Dux, Paul E; Marois, René

    2011-09-01

    The encoding of information from one event into working memory can delay high-level, central decision-making processes for subsequent events [e.g., Jolicoeur, P., & Dell'Acqua, R. The demonstration of short-term consolidation. Cognitive Psychology, 36, 138-202, 1998, doi:10.1006/cogp.1998.0684]. Working memory, however, is also believed to interfere with the deployment of top-down attention [de Fockert, J. W., Rees, G., Frith, C. D., & Lavie, N. The role of working memory in visual selective attention. Science, 291, 1803-1806, 2001, doi:10.1126/science.1056496]. It is, therefore, possible that, in addition to delaying central processes, the engagement of working memory encoding (WME) also postpones perceptual processing as well. Here, we tested this hypothesis with time-resolved fMRI by assessing whether WME serially postpones the action of top-down attention on low-level sensory signals. In three experiments, participants viewed a skeletal rapid serial visual presentation sequence that contained two target items (T1 and T2) separated by either a short (550 msec) or long (1450 msec) SOA. During single-target runs, participants attended and responded only to T1, whereas in dual-target runs, participants attended and responded to both targets. To determine whether T1 processing delayed top-down attentional enhancement of T2, we examined T2 BOLD response in visual cortex by subtracting the single-task waveforms from the dual-task waveforms for each SOA. When the WME demands of T1 were high (Experiments 1 and 3), T2 BOLD response was delayed at the short SOA relative to the long SOA. This was not the case when T1 encoding demands were low (Experiment 2). We conclude that encoding of a stimulus into working memory delays the deployment of attention to subsequent target representations in visual cortex.

  7. Playing checkers: detection and eye hand coordination in simulated prosthetic vision

    NASA Astrophysics Data System (ADS)

    Dagnelie, Gislin; Walter, Matthias; Yang, Liancheng

    2006-09-01

    In order to assess the potential for visual inspection and eye hand coordination without tactile feedback under conditions that may be available to future retinal prosthesis wearers, we studied the ability of sighted individuals to act upon pixelized visual information at very low resolution, equivalent to 20/2400 visual acuity. Live images from a head-mounted camera were low-pass filtered and presented in a raster of 6 × 10 circular Gaussian dots. Subjects could either freely move their gaze across the raster (free-viewing condition) or the raster position was locked to the subject's gaze by means of video-based pupil tracking (gaze-locked condition). Four normally sighted and one severely visually impaired subject with moderate nystagmus participated in a series of four experiments. Subjects' task was to count 1 to 16 white fields randomly distributed across an otherwise black checkerboard (counting task) or to place a black checker on each of the white fields (placing task). We found that all subjects were capable of learning both tasks after varying amounts of practice, both in the free-viewing and in the gaze-locked conditions. Normally sighted subjects all reached very similar performance levels independent of the condition. The practiced performance level of the visually impaired subject in the free-viewing condition was indistinguishable from that of the normally sighted subjects, but required approximately twice the amount of time to place checkers in the gaze-locked condition; this difference is most likely attributable to this subject's nystagmus. Thus, if early retinal prosthesis wearers can achieve crude form vision, then on the basis of these results they too should be able to perform simple eye hand coordination tasks without tactile feedback.

  8. Entropy, instrument scan and pilot workload

    NASA Technical Reports Server (NTRS)

    Tole, J. R.; Stephens, A. T.; Vivaudou, M.; Harris, R. L., Jr.; Ephrath, A. R.

    1982-01-01

    Correlation and information-theoretic analyses of the relationships between mental loading and the visual scanpaths of aircraft pilots are described. The relationships between skill, performance, mental workload, and visual scanning behavior are investigated. The experimental method required pilots to maintain a general aviation flight simulator on a straight and level, constant sensitivity, Instrument Landing System (ILS) course with a low level of turbulence. An additional periodic verbal task whose difficulty increased with frequency was used to increment the subject's mental workload. The subject's lookpoint on the instrument panel during each ten-minute run was computed via a TV oculometer and stored. Several pilots ranging in skill from novices to test pilots took part in the experiment. The periodicity of the subject's instrument scan was analyzed by means of correlation techniques. For skilled pilots, the autocorrelation of instrument dwell-time sequences showed the same periodicity as the verbal task. The ability to multiplex simultaneous tasks increases with skill; thus, autocorrelation provides a way of evaluating the operator's skill level.
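
    The autocorrelation check described above can be sketched as follows, using a synthetic dwell-time sequence with an assumed task period rather than recorded scanpath data.

```python
# Detect periodicity in an instrument dwell-time sequence via autocorrelation (illustrative).
import numpy as np

def autocorr(x):
    x = np.asarray(x, dtype=float) - np.mean(x)
    r = np.correlate(x, x, mode="full")[x.size - 1:]
    return r / r[0]                                 # normalised autocorrelation (lag 0 = 1)

rng = np.random.default_rng(7)
period = 6                                          # hypothetical verbal-task period (fixations)
dwells = (0.4 + 0.2 * np.sin(2 * np.pi * np.arange(300) / period)
          + 0.05 * rng.standard_normal(300))        # dwell times (s) with periodic structure
r = autocorr(dwells)
print("lag of first secondary peak:", int(np.argmax(r[2:20])) + 2)   # ~= period
```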

  9. A Force-Visualized Silicone Retractor Attachable to Surgical Suction Pipes.

    PubMed

    Watanabe, Tetsuyou; Koyama, Toshio; Yoneyama, Takeshi; Nakada, Mitsutoshi

    2017-04-05

    This paper presents a silicone retractor with visually observable retraction force, extending a previously developed system with the same functions of retraction, suction, and force sensing. These features provide not only high usability, by reducing the number of tool changes, but also safer retraction guided by the visualized force information. Suction is achieved by attaching the retractor to a suction pipe. The retractor has a deformable sensing component containing a hole filled with a liquid. The hole is connected to an outer tube, and the liquid level is displaced in proportion to the deformation produced by the retraction load. The liquid level can be observed near the surgeon's fingertips, which further enhances usability. The new hybrid structure of soft sensing and rigid retraction allows miniaturization of the retractor as well as a resolution finer than 0.05 N over a range of 0.1-0.7 N. The overall structure is made of silicone, which has the advantages of disposability, low cost, and easy sterilization/disinfection. The system was validated experimentally.

  10. Tools for Visualizing HIV in Cure Research.

    PubMed

    Niessl, Julia; Baxter, Amy E; Kaufmann, Daniel E

    2018-02-01

    The long-lived HIV reservoir remains a major obstacle for an HIV cure. Current techniques to analyze this reservoir are generally population-based. We highlight recent developments in methods visualizing HIV, which offer a different, complementary view, and provide indispensable information for cure strategy development. Recent advances in fluorescence in situ hybridization techniques enabled key developments in reservoir visualization. Flow cytometric detection of HIV mRNAs, concurrently with proteins, provides a high-throughput approach to study the reservoir on a single-cell level. On a tissue level, key spatial information can be obtained detecting viral RNA and DNA in situ by fluorescence microscopy. At total-body level, advancements in non-invasive immuno-positron emission tomography (PET) detection of HIV proteins may allow an encompassing view of HIV reservoir sites. HIV imaging approaches provide important, complementary information regarding the size, phenotype, and localization of the HIV reservoir. Visualizing the reservoir may contribute to the design, assessment, and monitoring of HIV cure strategies in vitro and in vivo.

  11. Public Communication of Technical Issues in Today's Changing Visual Language - 12436

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hermann, Laura

    2012-07-01

    Communication regarding the management of radioactive materials is a well-established challenge. Residents and consumers have suspected for years that companies and governments place short-term economic concerns ahead of health and safety. This skepticism is compounded by increased attention to safety issues at nuclear power plants everywhere after Fukushima. Nonetheless, today's environment presents unexpected opportunities to transform public fear into teachable moments that bring knowledge and facts to discussions on nuclear energy. In the weeks following Japan's crisis, the lack of reliable information on radiation levels saw citizens taking to the streets with dosimeters and Geiger counters in crowd-sourced radiation monitoring efforts. These efforts, based mainly online, represent a growing set of examples of how internet and cell-phone technology are being put to use in emergency situations. The maps, graphs and tables created to meet public interest also exemplify some of the psychological priorities of audiences and present learning tools that can improve future education efforts in non-emergency situations. Industry outreach efforts often consist of technical details and quantitative data that are difficult for lay audiences to interpret. The intense attention to nuclear energy issues since last March has produced a wide array of visual samples. Citizen monitors, news organizations, government agencies and others have displayed quantitative information in innovative ways. Their efforts offer new perspective on what charts, maps and infographics do - or need to do - to illustrate requirements, record assessments and promote understanding of nuclear-waste issues. Surveying the best examples, nuclear communicators can improve their offerings of easy-to-use, evidence-based visuals to inform stakeholders. Familiar to most communications professionals in the nuclear industry, risk communication is a science-based approach with over three decades of research evidence informing the discipline. Risk communication principles address the fact that perception is often reality. Scientific evidence for decision making must be approached differently in situations where stakeholders have high levels of concern and low levels of trust. Visual communications must take this into account to be successful. (author)

  12. Steady-state visual evoked potentials as a research tool in social affective neuroscience

    PubMed Central

    Wieser, Matthias J.; Miskovic, Vladimir; Keil, Andreas

    2017-01-01

    Like many other primates, humans place a high premium on social information transmission and processing. One important aspect of this information concerns the emotional state of other individuals, conveyed by distinct visual cues such as facial expressions, overt actions, or by cues extracted from the situational context. A rich body of theoretical and empirical work has demonstrated that these socio-emotional cues are processed by the human visual system in a prioritized fashion, in the service of optimizing social behavior. Furthermore, socio-emotional perception is highly dependent on situational contexts and previous experience. Here, we review current issues in this area of research and discuss the utility of the steady-state visual evoked potential (ssVEP) technique for addressing key empirical questions. Methodological advantages and caveats are discussed with particular regard to quantifying time-varying competition among multiple perceptual objects, trial-by-trial analysis of visual cortical activation, functional connectivity, and the control of low-level stimulus features. Studies on facial expression and emotional scene processing are summarized, with an emphasis on viewing faces and other social cues in emotional contexts, or when competing with each other. Further, because the ssVEP technique can be readily accommodated to studying the viewing of complex scenes with multiple elements, it enables researchers to advance theoretical models of socio-emotional perception, based on complex, quasi-naturalistic viewing situations. PMID:27699794
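
    A core step in ssVEP work is reading out the response amplitude at the stimulation ("tag") frequency. The sketch below shows one minimal way to do this with an FFT on a single-channel epoch; the sampling rate, epoch length, and tag frequency are illustrative assumptions.

```python
# Sketch: estimate ssVEP amplitude at a stimulus flicker ("tag") frequency via FFT.
# Epoch duration, sampling rate, and tag frequency are illustrative assumptions.
import numpy as np

def ssvep_amplitude(epoch, fs, tag_hz):
    """Spectral amplitude of a single-channel epoch at the tagging frequency."""
    epoch = np.asarray(epoch, dtype=float)
    epoch = epoch - epoch.mean()
    spectrum = np.abs(np.fft.rfft(epoch)) / len(epoch)
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - tag_hz))]

# Synthetic 4-second epoch at 500 Hz: a 15 Hz ssVEP buried in broadband noise.
fs, tag = 500, 15.0
t = np.arange(0, 4, 1.0 / fs)
rng = np.random.default_rng(1)
epoch = 2.0 * np.sin(2 * np.pi * tag * t) + rng.standard_normal(t.size)
print(f"amplitude at {tag} Hz: {ssvep_amplitude(epoch, fs, tag):.2f} (a.u.)")
```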

  13. Dual Low-Rank Pursuit: Learning Salient Features for Saliency Detection.

    PubMed

    Lang, Congyan; Feng, Jiashi; Feng, Songhe; Wang, Jingdong; Yan, Shuicheng

    2016-06-01

    Saliency detection is an important procedure for machines to understand the visual world as humans do. In this paper, we consider a specific saliency detection problem of predicting human eye fixations when they freely view natural images, and propose a novel dual low-rank pursuit (DLRP) method. DLRP learns saliency-aware feature transformations by utilizing available supervision information and constructs discriminative bases for effectively detecting human fixation points under the popular low-rank and sparsity-pursuit framework. Benefiting from the high-level information embedded in the supervised learning process, DLRP is able to predict fixations accurately without performing the expensive object segmentation used in previous works. Comprehensive experiments clearly show the superiority of the proposed DLRP method over the established state-of-the-art methods. We also empirically demonstrate that DLRP provides stronger generalization performance across different data sets and inherits the advantages of both the bottom-up- and top-down-based saliency detection methods.
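
    The abstract does not give the DLRP algorithm itself, so the sketch below illustrates only the generic low-rank plus sparse ("sparsity-pursuit") decomposition that such methods build on, via a standard robust-PCA ADMM loop. The feature matrix, parameters, and the reading of the sparse component as candidate salient structure are assumptions for illustration, not the authors' method.

```python
# Sketch: generic low-rank + sparse decomposition (robust PCA via ADMM), the kind of
# "low-rank and sparsity-pursuit" building block referred to above. Not the DLRP method.
import numpy as np

def rpca(X, lam=None, mu=None, n_iter=200):
    """Decompose X ~ L + S with L low-rank (regular structure) and S sparse (outliers)."""
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 0.25 * m * n / (np.abs(X).sum() + 1e-12)
    S = np.zeros_like(X)
    Y = np.zeros_like(X)
    shrink = lambda A, t: np.sign(A) * np.maximum(np.abs(A) - t, 0.0)
    for _ in range(n_iter):
        # Singular-value thresholding step for the low-rank component.
        U, s, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Soft-thresholding step for the sparse component.
        S = shrink(X - L + Y / mu, lam / mu)
        # Dual update on the constraint X = L + S.
        Y = Y + mu * (X - L - S)
    return L, S

# Toy usage: rows = image patches, columns = features; S flags atypical (salient) entries.
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 40))   # low-rank part
X[10, 3] += 8.0                                                    # a "salient" outlier
L, S = rpca(X)
print("largest sparse magnitude at:", np.unravel_index(np.abs(S).argmax(), S.shape))
```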

  14. The levels of perceptual processing and the neural correlates of increasing subjective visibility.

    PubMed

    Binder, Marek; Gociewicz, Krzysztof; Windey, Bert; Koculak, Marcin; Finc, Karolina; Nikadon, Jan; Derda, Monika; Cleeremans, Axel

    2017-10-01

    According to the levels-of-processing hypothesis, transitions from unconscious to conscious perception may depend on stimulus processing level, with more gradual changes for low-level stimuli and more dichotomous changes for high-level stimuli. In an event-related fMRI study we explored this hypothesis using a visual backward masking procedure. Task requirements manipulated level of processing. Participants reported the magnitude of the target digit in the high-level task, its color in the low-level task, and rated subjective visibility of stimuli using the Perceptual Awareness Scale. Intermediate stimulus visibility was reported more frequently in the low-level task, confirming prior behavioral results. Visible targets recruited insulo-fronto-parietal regions in both tasks. Task effects were observed in visual areas, with higher activity in the low-level task across all visibility levels. Thus, the influence of level of processing on conscious perception may be mediated by attentional modulation of activity in regions representing features of consciously experienced stimuli. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. The Competitive Influences of Perceptual Load and Working Memory Guidance on Selective Attention.

    PubMed

    Tan, Jinfeng; Zhao, Yuanfang; Wang, Lijun; Tian, Xia; Cui, Yan; Yang, Qian; Pan, Weigang; Zhao, Xiaoyue; Chen, Antao

    2015-01-01

    The perceptual load theory in selective attention literature proposes that the interference from task-irrelevant distractor is eliminated when perceptual capacity is fully consumed by task-relevant information. However, the biased competition model suggests that the contents of working memory (WM) can guide attentional selection automatically, even when this guidance is detrimental to visual search. An intriguing but unsolved question is what will happen when selective attention is influenced by both perceptual load and WM guidance. To study this issue, behavioral performances and event-related potentials (ERPs) were recorded when participants were presented with a cue to either identify or hold in memory and had to perform a visual search task subsequently, under conditions of low or high perceptual load. Behavioural data showed that high perceptual load eliminated the attentional capture by WM. The ERP results revealed an obvious WM guidance effect in P1 component with invalid trials eliciting larger P1 than neutral trials, regardless of the level of perceptual load. The interaction between perceptual load and WM guidance was significant for the posterior N1 component. The memory guidance effect on N1 was eliminated by high perceptual load. Standardized Low Resolution Electrical Tomography Analysis (sLORETA) showed that the WM guidance effect and the perceptual load effect on attention can be localized into the occipital area and parietal lobe, respectively. Merely identifying the cue produced no effect on the P1 or N1 component. These results suggest that in selective attention, the information held in WM could capture attention at the early stage of visual processing in the occipital cortex. Interestingly, this initial capture of attention by WM could be modulated by the level of perceptual load and the parietal lobe mediates target selection at the discrimination stage.

  16. The Competitive Influences of Perceptual Load and Working Memory Guidance on Selective Attention

    PubMed Central

    Tan, Jinfeng; Zhao, Yuanfang; Wang, Lijun; Tian, Xia; Cui, Yan; Yang, Qian; Pan, Weigang; Zhao, Xiaoyue; Chen, Antao

    2015-01-01

    The perceptual load theory in selective attention literature proposes that the interference from task-irrelevant distractor is eliminated when perceptual capacity is fully consumed by task-relevant information. However, the biased competition model suggests that the contents of working memory (WM) can guide attentional selection automatically, even when this guidance is detrimental to visual search. An intriguing but unsolved question is what will happen when selective attention is influenced by both perceptual load and WM guidance. To study this issue, behavioral performances and event-related potentials (ERPs) were recorded when participants were presented with a cue to either identify or hold in memory and had to perform a visual search task subsequently, under conditions of low or high perceptual load. Behavioural data showed that high perceptual load eliminated the attentional capture by WM. The ERP results revealed an obvious WM guidance effect in P1 component with invalid trials eliciting larger P1 than neutral trials, regardless of the level of perceptual load. The interaction between perceptual load and WM guidance was significant for the posterior N1 component. The memory guidance effect on N1 was eliminated by high perceptual load. Standardized Low Resolution Electrical Tomography Analysis (sLORETA) showed that the WM guidance effect and the perceptual load effect on attention can be localized into the occipital area and parietal lobe, respectively. Merely identifying the cue produced no effect on the P1 or N1 component. These results suggest that in selective attention, the information held in WM could capture attention at the early stage of visual processing in the occipital cortex. Interestingly, this initial capture of attention by WM could be modulated by the level of perceptual load and the parietal lobe mediates target selection at the discrimination stage. PMID:26098079

  17. Chromatic information and feature detection in fast visual analysis

    DOE PAGES

    Del Viva, Maria M.; Punzi, Giovanni; Shevell, Steven K.; ...

    2016-08-01

    The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in the visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artists' sketches are usually monochromatic, and black-and-white movies provide compelling representations of real world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysics measurements of fast-viewing discrimination of natural scenes. As a result, we conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in.
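
    The conclusion that limited bandwidth is best spent on gray-scale rather than fine-scale color information is the same principle exploited by chroma subsampling in image and video coding. The sketch below shows that analogy only (it is not the authors' model): luminance is kept at full resolution while the color channels are stored at a fraction of the resolution.

```python
# Sketch of the same bandwidth-allocation principle used in image coding (chroma
# subsampling): keep luminance at full resolution, store chrominance coarsely.
# This is an analogy to the conclusion above, not the authors' model.
# Expects an H x W x 3 RGB array with values in 0..255.
import numpy as np

def subsample_chroma(rgb, factor=4):
    """Return an image whose color channels are stored at 1/factor resolution."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Luma/chroma transform (BT.601 weights).
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564
    cr = (r - y) * 0.713
    # Downsample chroma by block-averaging, then upsample by repetition.
    def coarse(c):
        h, w = c.shape
        h2, w2 = h - h % factor, w - w % factor
        blocks = c[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
        small = blocks.mean(axis=(1, 3))
        return np.repeat(np.repeat(small, factor, 0), factor, 1)
    cb_c, cr_c = coarse(cb), coarse(cr)
    # Reconstruct RGB: full-resolution luminance, coarse color.
    y2 = y[:cb_c.shape[0], :cb_c.shape[1]]
    r2 = y2 + cr_c / 0.713
    b2 = y2 + cb_c / 0.564
    g2 = (y2 - 0.299 * r2 - 0.114 * b2) / 0.587
    return np.clip(np.stack([r2, g2, b2], axis=-1), 0, 255).astype(np.uint8)
```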

  18. Chromatic information and feature detection in fast visual analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Viva, Maria M.; Punzi, Giovanni; Shevell, Steven K.

    The visual system is able to recognize a scene based on a sketch made of very simple features. This ability is likely crucial for survival, when fast image recognition is necessary, and it is believed that a primal sketch is extracted very early in the visual processing. Such highly simplified representations can be sufficient for accurate object discrimination, but an open question is the role played by color in this process. Rich color information is available in natural scenes, yet artists' sketches are usually monochromatic, and black-and-white movies provide compelling representations of real world scenes. Also, the contrast sensitivity of color is low at fine spatial scales. We approach the question from the perspective of optimal information processing by a system endowed with limited computational resources. We show that when such limitations are taken into account, the intrinsic statistical properties of natural scenes imply that the most effective strategy is to ignore fine-scale color features and devote most of the bandwidth to gray-scale information. We find confirmation of these information-based predictions from psychophysics measurements of fast-viewing discrimination of natural scenes. As a result, we conclude that the lack of colored features in our visual representation, and our overall low sensitivity to high-frequency color components, are a consequence of an adaptation process, optimizing the size and power consumption of our brain for the visual world we live in.

  19. A Free Program for Using and Teaching an Accessible Electronic Wayfinding Device

    ERIC Educational Resources Information Center

    Greenberg, Maya Delgado; Kuns, Jerry

    2012-01-01

    Accessible Global Positioning Systems (GPS) are changing the way many people with visual impairments (that is, those who are blind or have low vision) travel. GPS provides real-time orientation information so that a traveler with a visual impairment can make informed decisions about path of travel and destination. Orientation and mobility (O&M)…

  20. Visualization-by-Sketching: An Artist's Interface for Creating Multivariate Time-Varying Data Visualizations.

    PubMed

    Schroeder, David; Keefe, Daniel F

    2016-01-01

    We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: First, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training. Second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate that a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work.

  1. A spatially collocated sound thrusts a flash into awareness

    PubMed Central

    Aller, Máté; Giani, Anette; Conrad, Verena; Watanabe, Masataka; Noppeney, Uta

    2015-01-01

    To interact effectively with the environment the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals pointing towards low level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception. PMID:25774126

  2. Alpha and Beta Oscillations Index Semantic Congruency between Speech and Gestures in Clear and Degraded Speech.

    PubMed

    Drijvers, Linda; Özyürek, Asli; Jensen, Ole

    2018-06-19

    Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech-gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + "mixing") or mismatching (drinking gesture + "walking") gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.

  3. Prevalence and causes of low vision and blindness in Baotou

    PubMed Central

    Zhang, Guisen; Li, Yan; Teng, Xuelong; Wu, Qiang; Gong, Hui; Ren, Fengmei; Guo, Yuxia; Liu, Lei; Zhang, Han

    2016-01-01

    The aim of this study was to investigate the prevalence and causes of low vision and blindness in Baotou, Inner Mongolia. A cross-sectional study was carried out. Multistage sampling was used to select samples. Visual acuity was estimated using LogMAR and corrected by pinhole as best-corrected visual acuity. There were 7000 samples selected and 5770 subjects included in this investigation. The overall bilateral prevalence rates of low vision and blindness were 3.66% (95% CI: 3.17–4.14) and 0.99% (95% CI: 0.73–1.24), respectively. The prevalence of bilateral low vision, blindness, and visual impairment increased with age and decreased with education level. The leading cause of low vision and blindness was cataract. Diabetic retinopathy and age-related macular degeneration were found to be the second leading causes of blindness in Baotou. Low vision and blindness were more prevalent among elderly people and subjects with a low education level in Baotou. Cataract was the main cause of visual impairment, and more attention should be paid to fundus diseases. To prevent blindness, more eye care programs should be established. PMID:27631267

  4. Prevalence and causes of low vision and blindness in Baotou: A cross-sectional study.

    PubMed

    Zhang, Guisen; Li, Yan; Teng, Xuelong; Wu, Qiang; Gong, Hui; Ren, Fengmei; Guo, Yuxia; Liu, Lei; Zhang, Han

    2016-09-01

    The aim of this study was to investigate the prevalence and causes of low vision and blindness in Baotou, Inner Mongolia. A cross-sectional study was carried out. Multistage sampling was used to select samples. Visual acuity was estimated using LogMAR and corrected by pinhole as best-corrected visual acuity. There were 7000 samples selected and 5770 subjects included in this investigation. The overall bilateral prevalence rates of low vision and blindness were 3.66% (95% CI: 3.17-4.14) and 0.99% (95% CI: 0.73-1.24), respectively. The prevalence of bilateral low vision, blindness, and visual impairment increased with age and decreased with education level. The leading cause of low vision and blindness was cataract. Diabetic retinopathy and age-related macular degeneration were found to be the second leading causes of blindness in Baotou. Low vision and blindness were more prevalent among elderly people and subjects with a low education level in Baotou. Cataract was the main cause of visual impairment, and more attention should be paid to fundus diseases. To prevent blindness, more eye care programs should be established.

  5. Visualization rhetoric: framing effects in narrative visualization.

    PubMed

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation. © 2011 IEEE

  6. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Wang, A. S.; Otake, Y.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gallia, G. L.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2014-09-01

    An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image + guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image + guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 μGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).
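
    The registration cost in this family of methods scores how well a simulated projection of the CT (a DRR) matches the fluoroscopic image, and an optimizer such as CMA-ES searches for the patient pose that maximizes the score. The sketch below uses a simple gradient-correlation score as a stand-in for the paper's gradient information metric; render_drr, fluoro_frame, ct_volume, and pose are hypothetical placeholders.

```python
# Sketch: a simple gradient-based similarity between a simulated projection (DRR)
# and a fluoroscopic image, the kind of cost an optimizer such as CMA-ES would
# maximize over the patient pose. This is a stand-in, not the paper's exact metric.
import numpy as np

def gradients(img):
    gy, gx = np.gradient(img.astype(float))
    return gx, gy

def gradient_correlation(fixed, moving):
    """Mean of the Pearson correlations of the x- and y-gradient images."""
    score = 0.0
    for gf, gm in zip(gradients(fixed), gradients(moving)):
        gf = gf - gf.mean()
        gm = gm - gm.mean()
        denom = np.sqrt((gf ** 2).sum() * (gm ** 2).sum()) + 1e-12
        score += (gf * gm).sum() / denom
    return score / 2.0

# Inside a registration loop one would render a DRR for each candidate pose and score it,
# e.g. (hypothetical helper names):
#   score = gradient_correlation(fluoro_frame, render_drr(ct_volume, pose))
# letting the optimizer search for the pose that maximizes the score.
```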

  7. Languages on the screen: is film comprehension related to the viewers' fluency level and to the language in the subtitles?

    PubMed

    Lavaur, Jean-Marc; Bairstow, Dominique

    2011-12-01

    This research aimed at studying the role of subtitling in film comprehension. It focused on the languages in which the subtitles are written and on the participants' fluency levels in the languages presented in the film. In a preliminary part of the study, the most salient visual and dialogue elements of a short sequence of an English film were extracted by means of a free recall task after showing two versions of the film (first a silent, then a dubbed-into-French version) to native French speakers. This visual and dialogue information was used to construct a comprehension questionnaire for the main part of the study, in which other native French speakers with beginner, intermediate, or advanced fluency levels in English were shown one of three versions of the film used in the preliminary part. These versions had, respectively, no subtitles, English subtitles, or French subtitles. The results indicate a global interaction between all three factors in this study: For the beginners, visual processing dropped from the version without subtitles to that with English subtitles, and even more so if French subtitles were provided, whereas the effect of film version on dialogue comprehension was the reverse. The advanced participants achieved higher comprehension for both types of information with the version without subtitles, and dialogue information processing was always better than visual information processing. The intermediate group similarly processed dialogues better than visual information, but was not affected by film version. These results imply that, depending on the viewers' fluency levels, the language of subtitles can have different effects on movie information processing.

  8. Transient visual responses reset the phase of low-frequency oscillations in the skeletomotor periphery.

    PubMed

    Wood, Daniel K; Gu, Chao; Corneil, Brian D; Gribble, Paul L; Goodale, Melvyn A

    2015-08-01

    We recorded muscle activity from an upper limb muscle while human subjects reached towards peripheral targets. We tested the hypothesis that the transient visual response sweeps not only through the central nervous system, but also through the peripheral nervous system. Like the transient visual response in the central nervous system, stimulus-locked muscle responses (< 100 ms) were sensitive to stimulus contrast, and were temporally and spatially dissociable from voluntary orienting activity. Also, the arrival of visual responses reduced the variability of muscle activity by resetting the phase of ongoing low-frequency oscillations. This latter finding critically extends the emerging evidence that the feedforward visual sweep reduces neural variability via phase resetting. We conclude that, when sensory information is relevant to a particular effector, detailed information about the sensorimotor transformation, even from the earliest stages, is found in the peripheral nervous system. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
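
    Phase resetting of low-frequency oscillations is commonly quantified with inter-trial phase coherence (ITC). The sketch below band-pass filters trials, extracts instantaneous phase with the Hilbert transform, and compares ITC before and after a simulated stimulus onset; the band limits and synthetic data are illustrative assumptions, not the study's pipeline.

```python
# Sketch: quantify phase resetting at stimulus onset with inter-trial phase coherence
# (ITC) of low-frequency-filtered activity. Band limits and data are illustrative.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def itc(trials, fs, band=(2.0, 10.0)):
    """Inter-trial phase coherence over time for a trials x samples array."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
    # ITC = magnitude of the mean unit phase vector across trials (0 = random, 1 = locked).
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Synthetic example: 60 one-second trials whose 5 Hz phase is aligned only after "onset".
fs, n_trials, n_samp = 1000, 60, 1000
rng = np.random.default_rng(3)
t = np.arange(n_samp) / fs
trials = []
for _ in range(n_trials):
    pre = np.sin(2 * np.pi * 5 * t[:500] + rng.uniform(0, 2 * np.pi))  # random phase
    post = np.sin(2 * np.pi * 5 * t[500:])                             # phase reset at onset
    trials.append(np.concatenate([pre, post]) + 0.2 * rng.standard_normal(n_samp))
trials = np.array(trials)

coherence = itc(trials, fs)
print(f"ITC before onset: {coherence[100:450].mean():.2f}, after onset: {coherence[550:900].mean():.2f}")
```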

  9. Visual Reliance for Balance Control in Older Adults Persists When Visual Information Is Disrupted by Artificial Feedback Delays

    PubMed Central

    Balasubramaniam, Ramesh

    2014-01-01

    Sensory information from our eyes, skin and muscles helps guide and correct balance. Less appreciated, however, is that delays in the transmission of sensory information between our eyes, limbs and central nervous system can exceed several 10s of milliseconds. Investigating how these time-delayed sensory signals influence balance control is central to understanding the postural system. Here, we investigate how delayed visual feedback and cognitive performance influence postural control in healthy young and older adults. The task required that participants position their center of pressure (COP) in a fixed target as accurately as possible without visual feedback about their COP location (eyes-open balance), or with artificial time delays imposed on visual COP feedback. On selected trials, the participants also performed a silent arithmetic task (cognitive dual task). We separated COP time series into distinct frequency components using low and high-pass filtering routines. Visual feedback delays affected low frequency postural corrections in young and older adults, with larger increases in postural sway noted for the group of older adults. In comparison, cognitive performance reduced the variability of rapid center of pressure displacements in young adults, but did not alter postural sway in the group of older adults. Our results demonstrate that older adults prioritize vision to control posture. This visual reliance persists even when feedback about the task is delayed by several hundreds of milliseconds. PMID:24614576
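
    The frequency separation described in the abstract can be reproduced with complementary Butterworth filters. In the sketch below the 0.4 Hz cutoff and filter order are illustrative assumptions; the abstract does not state the values used.

```python
# Sketch: split a center-of-pressure (COP) trace into slow and fast components with
# complementary Butterworth filters. The 0.4 Hz cutoff and order are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def split_cop(cop, fs, cutoff=0.4):
    """Return (slow, fast) components of a 1-D COP time series sampled at fs Hz."""
    b_lo, a_lo = butter(4, cutoff / (fs / 2), btype="low")
    b_hi, a_hi = butter(4, cutoff / (fs / 2), btype="high")
    slow = filtfilt(b_lo, a_lo, cop)
    fast = filtfilt(b_hi, a_hi, cop)
    return slow, fast

# The variability of each component (e.g., its standard deviation) can then be compared
# across feedback-delay and dual-task conditions:
#   slow, fast = split_cop(cop_trace, fs=100)
#   print(np.std(slow), np.std(fast))
```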

  10. Invisibility and interpretation.

    PubMed

    Herzog, Michael H; Hermens, Frouke; Oğmen, Haluk

    2014-01-01

    Invisibility is often thought to occur because of the low-level limitations of the visual system. For example, it is often assumed that backward masking renders a target invisible because the visual system is simply too slow to resolve the target and the mask separately. Here, we propose an alternative explanation in which invisibility is a goal rather than a limitation and occurs naturally when making sense out of the plethora of incoming information. For example, we present evidence that (in)visibility of an element can strongly depend on how it groups with other elements. Changing grouping changes visibility. In addition, we will show that features often just appear to be invisible but are in fact visible in a way the experimenter is not aware of.

  11. Invisibility and interpretation

    PubMed Central

    Herzog, Michael H.; Hermens, Frouke; Öğmen, Haluk

    2014-01-01

    Invisibility is often thought to occur because of the low-level limitations of the visual system. For example, it is often assumed that backward masking renders a target invisible because the visual system is simply too slow to resolve the target and the mask separately. Here, we propose an alternative explanation in which invisibility is a goal rather than a limitation and occurs naturally when making sense out of the plethora of incoming information. For example, we present evidence that (in)visibility of an element can strongly depend on how it groups with other elements. Changing grouping changes visibility. In addition, we will show that features often just appear to be invisible but are in fact visible in a way the experimenter is not aware of. PMID:25278910

  12. Topographic contribution of early visual cortex to short-term memory consolidation: a transcranial magnetic stimulation study.

    PubMed

    van de Ven, Vincent; Jacobs, Christianne; Sack, Alexander T

    2012-01-04

    The neural correlates for retention of visual information in visual short-term memory are considered separate from those of sensory encoding. However, recent findings suggest that sensory areas may play a role also in short-term memory. We investigated the functional relevance, spatial specificity, and temporal characteristics of human early visual cortex in the consolidation of capacity-limited topographic visual memory using transcranial magnetic stimulation (TMS). Topographically specific TMS pulses were delivered over lateralized occipital cortex at 100, 200, or 400 ms into the retention phase of a modified change detection task with low or high memory loads. For the high but not the low memory load, we found decreased memory performance for memory trials in the visual field contralateral, but not ipsilateral to the side of TMS, when pulses were delivered at 200 ms into the retention interval. A behavioral version of the TMS experiment, in which a distractor stimulus (memory mask) replaced the TMS pulses, further corroborated these findings. Our findings suggest that retinotopic visual cortex contributes to the short-term consolidation of topographic visual memory during early stages of the retention of visual information. Further, TMS-induced interference decreased the strength (amplitude) of the memory representation, which most strongly affected the high memory load trials.

  13. Superadditive responses in superior temporal sulcus predict audiovisual benefits in object categorization.

    PubMed

    Werner, Sebastian; Noppeney, Uta

    2010-08-01

    Merging information from multiple senses provides a more reliable percept of our environment. Yet, little is known about where and how various sensory features are combined within the cortical hierarchy. Combining functional magnetic resonance imaging and psychophysics, we investigated the neural mechanisms underlying integration of audiovisual object features. Subjects categorized or passively perceived audiovisual object stimuli with the informativeness (i.e., degradation) of the auditory and visual modalities being manipulated factorially. Controlling for low-level integration processes, we show higher level audiovisual integration selectively in the superior temporal sulci (STS) bilaterally. The multisensory interactions were primarily subadditive and even suppressive for intact stimuli but turned into additive effects for degraded stimuli. Consistent with the inverse effectiveness principle, auditory and visual informativeness determine the profile of audiovisual integration in STS similarly to the influence of physical stimulus intensity in the superior colliculus. Importantly, when holding stimulus degradation constant, subjects' audiovisual behavioral benefit predicts their multisensory integration profile in STS: only subjects that benefit from multisensory integration exhibit superadditive interactions, while those that do not benefit show suppressive interactions. In conclusion, superadditive and subadditive integration profiles in STS are functionally relevant and related to behavioral indices of multisensory integration with superadditive interactions mediating successful audiovisual object categorization.
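
    The super-/subadditivity criterion referred to here compares the multisensory response with the sum of the unisensory responses. A minimal helper with illustrative response values (e.g., BOLD parameter estimates):

```python
# Sketch: classify a multisensory response profile relative to the unisensory sum.
# Response values are illustrative, not data from the study.
def integration_profile(av, a, v, tol=1e-6):
    """'superadditive' if AV > A + V, 'subadditive' if AV < A + V, else 'additive'."""
    total = a + v
    if av > total + tol:
        return "superadditive"
    if av < total - tol:
        return "subadditive"
    return "additive"

print(integration_profile(av=1.4, a=0.5, v=0.6))   # -> superadditive
print(integration_profile(av=0.8, a=0.5, v=0.6))   # -> subadditive
```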

  14. Spatial Tuning Shifts Increase the Discriminability and Fidelity of Population Codes in Visual Cortex

    PubMed Central

    2017-01-01

    Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex. SIGNIFICANCE STATEMENT Although changes in the gain and size of RFs have dominated our view of how attention modulates visual information codes, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in population-level representations. We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings. PMID:28242794

  15. Does functional vision behave differently in low-vision patients with diabetic retinopathy?--A case-matched study.

    PubMed

    Ahmadian, Lohrasb; Massof, Robert

    2008-09-01

    A retrospective case-matched study was designed to compare patients with diabetic retinopathy (DR) with patients with other ocular diseases, all managed in a low-vision clinic, across four types of functional vision. Reading, mobility, visual motor, and visual information processing were measured in the DR patients (n = 114) and compared with those in patients with other ocular diseases (n = 114) matched in sex, visual acuity (VA), general health status, and age, using the Activity Inventory as a Rasch-scaled measurement tool. Binocular distance visual acuity was categorized as normal (20/12.5-20/25), near normal (20/32-20/63), moderate (20/80-20/160), severe (20/200-20/400), profound (20/500-20/1000), and total blindness (20/1250 to no light perception). Both the Wilcoxon matched-pairs signed rank test and the sign test of matched pairs were used to compare estimated functional vision measures between DR cases and controls. Cases ranged in age from 19 to 90 years (mean age, 67.5), and 59% were women. The mean visual acuity (logMar scale) was 0.7. Based on the Wilcoxon signed rank test analyses and after adjusting the probability for multiple comparisons, there was no statistically significant difference (P > 0.05) between patients with DR and control subjects in any of the four functional vision domains. Furthermore, diabetic retinopathy patients did not differ (P > 0.05) from their matched counterparts in goal-level vision-related functional ability and total visual ability. Visual impairment in patients with DR appears to be a generic, non-disease-specific outcome that is explained mainly by the end impact of the disease on the patients' daily lives rather than by the unique disease process that results in the visual impairment.
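
    The matched-pairs comparison named in the abstract maps onto a standard Wilcoxon signed-rank test. A minimal sketch with synthetic scores (not the study's data):

```python
# Sketch: Wilcoxon signed-rank test on case-control matched functional-vision scores.
# Scores are synthetic; the study used Rasch-scaled Activity Inventory measures.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(4)
dr_scores      = rng.normal(0.0, 1.0, 114)               # matched DR cases
control_scores = dr_scores + rng.normal(0.0, 0.3, 114)   # matched controls, no true shift

stat, p = wilcoxon(dr_scores, control_scores)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")   # a non-significant p is expected here
```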

  16. Braille Reading Accuracy of Students Who Are Visually Impaired: The Effects of Gender, Age at Vision Loss, and Level of Education

    ERIC Educational Resources Information Center

    Argyropoulos, Vassilis; Papadimitriou, Vassilios

    2015-01-01

    Introduction: The present study assesses the performance of students who are visually impaired (that is, those who are blind or have low vision) in braille reading accuracy and examines potential correlations among the error categories on the basis of gender, age at loss of vision, and level of education. Methods: Twenty-one visually impaired…

  17. Contextual modulation of primary visual cortex by auditory signals.

    PubMed

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  18. Contextual modulation of primary visual cortex by auditory signals

    PubMed Central

    Paton, A. T.

    2017-01-01

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044015

  19. Using higher-level inquiry to improve spatial ability in an introductory geology course

    NASA Astrophysics Data System (ADS)

    Stevens, Lacey A.

    Visuo-spatial skills, the ability to take in visual information and create a mental image, are crucial for success in fields involving science, technology, engineering, and math (STEM) as well as fine arts. Unfortunately, due to a lack of curriculum focused on developing spatial skills, students enrolled in introductory college-level science courses tend to have difficulty with spatially-related activities. One of the best ways to engage students in science activities is through a learning and teaching strategy called inquiry. There are lower levels of inquiry, wherein learning and problem-solving are guided by instructions, and higher levels of inquiry, wherein students have a greater degree of autonomy in learning and creating their own problem-solving strategy. A study involving 112 participants was conducted during the fall semester in 2014 at Bowling Green State University (BGSU) in a 1040 Introductory Geology Lab to determine whether a new, high-level, inquiry-based lab would increase participants' spatial skills more than the traditional, low-level inquiry lab. The study also evaluated whether a higher level of inquiry differentially affected low versus high spatial ability participants. Participants were evaluated using a spatial ability assessment, and pre- and post-tests. The results of this study show that for 3-D to 2-D visualization, the higher-level inquiry lab increased participants' spatial ability more than the lower-level inquiry lab. For spatial rotational skills, all participants' spatial ability scores improved, regardless of the level of inquiry to which they were exposed. Low and high spatial ability participants were not differentially affected. This study demonstrates that a lab designed with a higher level of inquiry can increase students' spatial ability more than a lab with a low level of inquiry. A lab with a higher level of inquiry helped all participants, regardless of their initial spatial ability level. These findings show that curriculum incorporating a high level of inquiry and integrating practice of spatial skills can increase students' spatial abilities in Geology-related coursework.

  20. Learning efficient visual search for stimuli containing diagnostic spatial configurations and color-shape conjunctions.

    PubMed

    Reavis, Eric A; Frank, Sebastian M; Tse, Peter U

    2018-04-12

    Visual search is often slow and difficult for complex stimuli such as feature conjunctions. Search efficiency, however, can improve with training. Search for stimuli that can be identified by the spatial configuration of two elements (e.g., the relative position of two colored shapes) improves dramatically within a few hundred trials of practice. Several recent imaging studies have identified neural correlates of this learning, but it remains unclear what stimulus properties participants learn to use to search efficiently. Influential models, such as reverse hierarchy theory, propose two major possibilities: learning to use information contained in low-level image statistics (e.g., single features at particular retinotopic locations) or in high-level characteristics (e.g., feature conjunctions) of the task-relevant stimuli. In a series of experiments, we tested these two hypotheses, which make different predictions about the effect of various stimulus manipulations after training. We find relatively small effects of manipulating low-level properties of the stimuli (e.g., changing their retinotopic location) and some conjunctive properties (e.g., color-position), whereas the effects of manipulating other conjunctive properties (e.g., color-shape) are larger. Overall, the findings suggest conjunction learning involving such stimuli might be an emergent phenomenon that reflects multiple different learning processes, each of which capitalizes on different types of information contained in the stimuli. We also show that both targets and distractors are learned, and that reversing learned target and distractor identities impairs performance. This suggests that participants do not merely learn to discriminate target and distractor stimuli, they also learn stimulus identity mappings that contribute to performance improvements.

  1. Usability Testing the "Personal Patient Profile-Prostate" in a Sample of African American and Hispanic Men.

    PubMed

    Wolpin, Seth; Halpenny, Barbara; Sorrentino, Erica; Stewart, Mark; McReynolds, Justin; Cvitkovic, Ivan; Chang, Peter; Berry, Donna

    2016-07-01

    Shared treatment decision making in a cancer setting requires a patient's understanding of the potential benefits and risks of each treatment option. Graphical display of risk information is one approach to improving understanding. Little is known about how patients engage with infographics in the context of health education materials and whether interactions vary with health literacy levels. We conducted an observational study, using an eye tracker device, of how men with newly diagnosed localized prostate cancer visually engaged with an on-screen infographic depicting risk information in the Personal Patient Profile-Prostate. Health literacy was measured with the Short Assessment of Health Literacy-English. Gaze patterns on exemplar screens containing infographics about survival were analyzed and explored with respect to sociodemographic and health literacy data. Acceptability of Personal Patient Profile-Prostate was measured with the Acceptability E-scale. Twenty-six English-speaking men participated, and eye tracking data were collected for 12 men on the exemplar page of risk information that we analyzed. We found preliminary evidence of visual scanning and of participants with lower literacy focusing sooner on infographics than on text. Acceptability of Personal Patient Profile-Prostate was high. These findings suggest that infographics may be of higher relative value to participants with low health literacy. Eye trackers may provide valuable information on how people visually engage with infographics and may inform the development of health education materials, although care must be taken to minimize data loss.

  2. Neurons in the human hippocampus and amygdala respond to both low- and high-level image properties

    PubMed Central

    Cabrales, Elaine; Wilson, Michael S.; Baker, Christopher P.; Thorp, Christopher K.; Smith, Kris A.; Treiman, David M.

    2011-01-01

    A large number of studies have demonstrated that structures within the medial temporal lobe, such as the hippocampus, are intimately involved in declarative memory for objects and people. Although these items are abstractions of the visual scene, specific visual details can change the speed and accuracy of their recall. By recording from 415 neurons in the hippocampus and amygdala of human epilepsy patients as they viewed images drawn from 10 image categories, we showed that the firing rates of 8% of these neurons encode image illuminance and contrast, low-level properties not directly pertinent to task performance, whereas in 7% of the neurons, firing rates encode the category of the item depicted in the image, a high-level property pertinent to the task. This simultaneous representation of high- and low-level image properties within the same brain areas may serve to bind separate aspects of visual objects into a coherent percept and allow episodic details of objects to influence mnemonic performance. PMID:21471400
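
    Mean luminance and RMS contrast are common operationalizations of the low-level properties named here; the abstract does not give the study's exact definitions, so the sketch below is only one reasonable choice.

```python
# Sketch: two common low-level image properties, mean luminance (a proxy for
# "illuminance") and RMS contrast. The study's exact definitions are not given.
import numpy as np

def luminance_and_contrast(gray):
    """Return (mean luminance, RMS contrast) of a grayscale image scaled to [0, 1]."""
    g = np.asarray(gray, dtype=float)
    g = g / 255.0 if g.max() > 1.0 else g
    mean_lum = g.mean()
    rms_contrast = g.std()   # some definitions additionally divide by mean luminance
    return mean_lum, rms_contrast
```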

  3. Bayesian learning of visual chunks by human observers

    PubMed Central

    Orbán, Gergő; Fiser, József; Aslin, Richard N.; Lengyel, Máté

    2008-01-01

    Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. We developed an ideal learner based on Bayesian model comparison that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input. PMID:18268353
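
    The ideal-learner model itself is not described in the abstract, but the underlying idea of Bayesian model comparison over chunks can be illustrated with a Bayes factor that asks whether two elements co-occur more often than independence predicts. The conjugate priors, counts, and two-element simplification below are assumptions for illustration only, not the paper's model.

```python
# Sketch: Bayes-factor test of whether two visual elements A and B behave as a chunk
# (co-occur more than independence predicts). Uniform conjugate priors are assumed.
from scipy.special import gammaln

def log_beta_binom(k, n):
    """Log marginal likelihood of k successes in n Bernoulli trials, Beta(1,1) prior."""
    return gammaln(k + 1) + gammaln(n - k + 1) - gammaln(n + 2)

def log_bayes_factor_chunk(n11, n10, n01, n00):
    """log BF of a joint (chunk-friendly) model over an independence model."""
    n = n11 + n10 + n01 + n00
    # Joint model: Dirichlet(1,1,1,1) prior over the four co-occurrence cells.
    log_joint = (gammaln(4) - gammaln(n + 4)
                 + sum(gammaln(c + 1) for c in (n11, n10, n01, n00)))
    # Independence model: separate Beta(1,1)-Bernoulli marginals for A and B.
    log_indep = log_beta_binom(n11 + n10, n) + log_beta_binom(n11 + n01, n)
    return log_joint - log_indep

# A and B almost always appear together across 200 scenes -> positive log BF (chunk).
print(f"{log_bayes_factor_chunk(n11=80, n10=3, n01=2, n00=115):.1f}")
# A and B appear at independence-consistent rates -> negative log BF (no chunk).
print(f"{log_bayes_factor_chunk(n11=12, n10=38, n01=38, n00=112):.1f}")
```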

  4. Typograph: Multiscale Spatial Exploration of Text Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Burtner, Edwin R.; Cramer, Nicholas O.

    2013-12-01

    Visualizing large document collections using a spatial layout of terms can enable quick overviews of information. However, these metaphors (e.g., word clouds, tag clouds, etc.) often lack interactivity to explore the information, and the location and rendering of the terms are often not based on mathematical models that maintain relative distances from other information based on similarity metrics. Further, transitioning between levels of detail (i.e., from terms to full documents) can be challenging. In this paper, we present Typograph, a multi-scale spatial exploration visualization for large document collections. Building on term-based visualization methods, Typograph enables multiple levels of detail (terms, phrases, snippets, and full documents) within the single spatialization. Further, the information is placed based on its relative similarity to other information to create the “near = similar” geography metaphor. This paper discusses the design principles and functionality of Typograph and presents a use case analyzing Wikipedia to demonstrate usage.
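
    The "near = similar" placement can be illustrated with a small sketch; this is not Typograph's actual layout algorithm, and the term list and term-document counts below are invented. It embeds terms in 2-D with multidimensional scaling so that pairwise distances approximate cosine dissimilarity between term profiles.

      import numpy as np
      from sklearn.manifold import MDS

      # Toy term-document count matrix (rows = terms, columns = documents).
      terms = ["visual", "fusion", "infrared", "literacy", "health", "patient"]
      counts = np.array([
          [5, 3, 0, 0, 1, 0],
          [4, 4, 1, 0, 0, 0],
          [3, 5, 0, 0, 0, 0],
          [0, 0, 0, 6, 3, 2],
          [0, 0, 1, 4, 5, 3],
          [0, 0, 0, 2, 4, 6],
      ], dtype=float)

      # Cosine similarity between term profiles, converted to a dissimilarity matrix.
      unit = counts / np.linalg.norm(counts, axis=1, keepdims=True)
      dissim = 1.0 - unit @ unit.T
      np.fill_diagonal(dissim, 0.0)

      # Metric MDS places similar terms close together ("near = similar").
      xy = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dissim)
      for term, (x, y) in zip(terms, xy):
          print(f"{term:10s} -> ({x:+.2f}, {y:+.2f})")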

  5. Low literacy and written drug information: information-seeking, leaflet evaluation and preferences, and roles for images.

    PubMed

    van Beusekom, Mara M; Grootens-Wiegers, Petronella; Bos, Mark J W; Guchelaar, Henk-Jan; van den Broek, Jos M

    2016-12-01

    Background: Low-literate patients are at risk of misinterpreting written drug information. For the (co-) design of targeted patient information, it is key to involve this group in determining their communication barriers and information needs. Objective: To gain insight into how people with low literacy use and evaluate written drug information, and to identify ways in which they feel the patient leaflet can be improved, and in particular how images could be used. Setting: Food banks and an education institution for Dutch language training in the Netherlands. Method: Semi-structured focus groups and individual interviews were held with low-literate participants (n = 45). The thematic framework approach was used for analysis to identify themes in the data. Main outcome measure: Low-literate people's experience with patient information leaflets, ideas for improvements, and perceptions on possible uses for visuals. Results: Patient information leaflets were considered discouraging to use, and information difficult to find and understand. Many rely on alternative information sources. The leaflet should be shorter, and improved in terms of organisation, legibility and readability. Participants thought images could increase the leaflet's appeal, help ask questions, provide an overview, help understand textual information, aid recall, reassure, and even lead to increased confidence, empowerment and feeling of safety. Conclusion: Already at the stages of paying attention to the leaflet and maintaining interest in the message, low-literate patients experience barriers in the communication process through written drug information. Short, structured, visual/textual explanations can lower the motivational threshold to use the leaflet, improve understanding, and empower the low-literate target group.

  6. In vivo imaging of an inducible oncogenic tumor antigen visualizes tumor progression and predicts CTL tolerance.

    PubMed

    Buschow, Christian; Charo, Jehad; Anders, Kathleen; Loddenkemper, Christoph; Jukica, Ana; Alsamah, Wisam; Perez, Cynthia; Willimsky, Gerald; Blankenstein, Thomas

    2010-03-15

    Visualizing oncogene/tumor Ag expression by noninvasive imaging is of great interest for understanding processes of tumor development and therapy. We established transgenic (Tg) mice conditionally expressing a fusion protein of the SV40 large T Ag and luciferase (TagLuc) that allows monitoring of oncogene/tumor Ag expression by bioluminescent imaging upon Cre recombinase-mediated activation. Independent of Cre-mediated recombination, the TagLuc gene was expressed at low levels in different tissues, probably due to the leakiness of the stop cassette. The level of spontaneous TagLuc expression, detected by bioluminescent imaging, varied between the different Tg lines, depended on the nature of the Tg expression cassette, and correlated with Tag-specific CTL tolerance. Following liver-specific Cre-loxP site-mediated excision of the stop cassette that separated the promoter from the TagLuc fusion gene, hepatocellular carcinoma development was visualized. The ubiquitous low level TagLuc expression caused the failure of transferred effector T cells to reject Tag-expressing tumors rather than causing graft-versus-host disease. This model may be useful to study different levels of tolerance, monitor tumor development at an early stage, and rapidly visualize the efficacy of therapeutic intervention versus potential side effects of low-level Ag expression in normal tissues.

  7. Multi-scale Visualization of Remote Sensing and Topographic Data of the Amazon Rain Forest for Environmental Monitoring of the Petroleum Industry.

    NASA Astrophysics Data System (ADS)

    Fonseca, L.; Miranda, F. P.; Beisl, C. H.; Souza-Fonseca, J.

    2002-12-01

    PETROBRAS (the Brazilian national oil company) built a pipeline to transport crude oil from the Urucu River region to a terminal in the vicinity of Coari, a city located on the right margin of the Solimoes River. The oil is then shipped by tankers to another terminal in Manaus, capital city of the Amazonas state. At the city of Coari, changes in water level between dry and wet seasons reach up to 14 meters. This strong seasonal character of the Amazonian climate gives rise to four distinct scenarios in the annual hydrological cycle: low water, high water, receding water, and rising water. These scenarios constitute the main reference for the definition of oil spill response planning in the region, since flooded forests and flooded vegetation are the most sensitive fluvial environments to oil spills. This study focuses on improving information about oil spill environmental sensitivity in Western Amazon by using 3D visualization techniques to help the analysis and interpretation of remote sensing and digital topographic data, as follows: (a) 1995 low flood and 1996 high flood JERS-1 SAR mosaics, band LHH, 100m pixel; (b) 2000 low flood and 2001 high flood RADARSAT-1 W1 images, band CHH, 30m pixel; (c) 2002 high flood airborne SAR images from the SIVAM project (System for Surveillance of the Amazon), band LHH, 3m pixel and band XHH, 6m pixel; (d) GTOPO30 digital elevation model, 30' resolution; (e) Digital elevation model derived from topographic information acquired during seismic surveys, 25m resolution; (f) panoramic views obtained from low altitude helicopter flights. The methodology applied includes image processing, cartographic conversion and generation of value-added products using 3D visualization. A semivariogram textural classification was applied to the SAR images in order to identify areas of flooded forest and flooded vegetation. The digital elevation models were color shaded to highlight subtle topographic features. Both datasets were then converted to the same cartographic projection and inserted into the Fledermaus 3D visualization environment. 3D visualization proved to be an important aid in understanding the spatial distribution pattern of the environmentally sensitive vegetation cover. The dynamics of the hydrological cycle were depicted at a basin-wide scale, revealing new geomorphic information relevant to assess the environmental risk of oil spills. Results demonstrate that pipelines constitute an environmentally safer option for oil transportation in the region when compared to fluvial tanker routes.

  8. Psychological adjustment and levels of self esteem in children with visual-motor integration difficulties influences the results of a randomized intervention trial.

    PubMed

    Lahav, Orit; Apter, Alan; Ratzon, Navah Z

    2013-01-01

    This study evaluates how much the effects of intervention programs are influenced by pre-existing psychological adjustment and self-esteem levels in kindergarten and first grade children with poor visual-motor integration skills, from low socioeconomic backgrounds. One hundred and sixteen mainstream kindergarten and first-grade children, from low socioeconomic backgrounds, scoring below the 25th percentile on a measure of visual-motor integration (VMI) were recruited and randomly divided into two parallel intervention groups. One intervention group received directive visual-motor intervention (DVMI), while the second intervention group received a non-directive supportive intervention (NDSI). Tests were administered to evaluate visual-motor integration skills outcome. Children with higher baseline measures of psychological adjustment and self-esteem responded better in NDSI while children with lower baseline performance on psychological adjustment and self-esteem responded better in DVMI. This study suggests that children from low socioeconomic backgrounds with low VMI performance scores will benefit more from intervention programs if clinicians choose the type of intervention according to baseline psychological adjustment and self-esteem measures. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Human body segmentation via data-driven graph cut.

    PubMed

    Li, Shifeng; Lu, Huchuan; Shao, Xingqing

    2014-11-01

    Human body segmentation is a challenging and important problem in computer vision. Existing methods usually entail a time-consuming training phase for prior knowledge learning with complex shape matching for body segmentation. In this paper, we propose a data-driven method that integrates top-down body pose information and bottom-up low-level visual cues for segmenting humans in static images within the graph cut framework. The key idea of our approach is to first exploit human kinematics to search for body part candidates via dynamic programming, providing high-level evidence; body part classifiers are then used to obtain bottom-up cues of human body distribution as low-level evidence. All the evidence collected from the top-down and bottom-up procedures is integrated in a graph cut framework for human body segmentation. Qualitative and quantitative experiment results demonstrate the merits of the proposed method in segmenting human bodies with arbitrary poses from cluttered backgrounds.
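
    A hedged sketch of the fusion step follows: top-down (pose-prior) and bottom-up (part-classifier) foreground probabilities are combined into unary costs and segmented with a minimum cut. It is not the authors' implementation; the PyMaxflow library, the random stand-in scores, the 50/50 fusion weights, and the smoothness weight are all assumptions.

      import numpy as np
      import maxflow  # PyMaxflow (assumed dependency)

      rng = np.random.default_rng(0)
      h, w = 60, 40

      # Stand-ins for the two evidence sources (per-pixel probability of "person"):
      # top_down ~ kinematic body-part prior, bottom_up ~ part-classifier response.
      top_down = rng.random((h, w))
      bottom_up = rng.random((h, w))
      p_fg = 0.5 * top_down + 0.5 * bottom_up
      eps = 1e-6

      g = maxflow.Graph[float]()
      nodes = g.add_grid_nodes((h, w))
      g.add_grid_edges(nodes, 2.0)  # pairwise smoothness term on the grid neighbourhood
      # Terminal edges carry the unary costs (negative log-likelihoods) for the two labels.
      g.add_grid_tedges(nodes, -np.log(1.0 - p_fg + eps), -np.log(p_fg + eps))

      g.maxflow()
      mask = g.get_grid_segments(nodes)  # Boolean segmentation from the minimum cut
      print("pixels on one side of the cut:", int(mask.sum()), "of", mask.size)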

  10. Perceptions of the Visually Impaired toward Pursuing Geography Courses and Majors in Higher Education

    ERIC Educational Resources Information Center

    Murr, Christopher D.; Blanchard, R. Denise

    2011-01-01

    Advances in classroom technology have lowered barriers for the visually impaired to study geography, yet few participate. Employing stereotype threat theory, we examined whether beliefs held by the visually impaired affect perceptions toward completing courses and majors in visually oriented disciplines. A test group received a low-level threat…

  11. New insights into ambient and focal visual fixations using an automatic classification algorithm

    PubMed Central

    Follet, Brice; Le Meur, Olivier; Baccino, Thierry

    2011-01-01

    Overt visual attention is the act of directing the eyes toward a given area. These eye movements are characterised by saccades and fixations. A debate currently surrounds the role of visual fixations. Do they all have the same role in the free viewing of natural scenes? Recent studies suggest that at least two types of visual fixations exist: focal and ambient. The former is believed to be used to inspect local areas accurately, whereas the latter is used to obtain the context of the scene. We investigated the use of an automated system to cluster visual fixations in two groups using four types of natural scene images. We found new evidence to support a focal–ambient dichotomy. Our data indicate that the determining factor is the saccade amplitude. The dependence on the low-level visual features and the time course of these two kinds of visual fixations were examined. Our results demonstrate that there is an interplay between both fixation populations and that focal fixations are more dependent on low-level visual features than are ambient fixations. PMID:23145248
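
    A minimal sketch of the dichotomy discussed above, assuming a fixed amplitude cutoff rather than the automatic clustering used in the study: fixations preceded by large saccades are labelled ambient, those preceded by small saccades focal. The toy fixation records and the 5-degree threshold are assumptions.

      import numpy as np

      # Toy fixation records: (fixation duration in ms, incoming saccade amplitude in deg).
      fixations = np.array([
          [240, 1.2], [310, 0.8], [180, 6.5], [150, 8.0], [280, 1.5], [120, 7.2],
      ])
      amplitudes = fixations[:, 1]

      AMPLITUDE_CUTOFF_DEG = 5.0  # illustrative threshold, not the paper's criterion
      labels = np.where(amplitudes > AMPLITUDE_CUTOFF_DEG, "ambient", "focal")

      for (duration, amplitude), label in zip(fixations, labels):
          print(f"duration={duration:4.0f} ms  saccade={amplitude:4.1f} deg  ->  {label}")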

  12. Neural organization and visual processing in the anterior optic tubercle of the honeybee brain.

    PubMed

    Mota, Theo; Yamagata, Nobuhiro; Giurfa, Martin; Gronenberg, Wulfila; Sandoz, Jean-Christophe

    2011-08-10

    The honeybee Apis mellifera represents a valuable model for studying the neural segregation and integration of visual information. Vision in honeybees has been extensively studied at the behavioral level and, to a lesser degree, at the physiological level using intracellular electrophysiological recordings of single neurons. However, our knowledge of visual processing in honeybees is still limited by the lack of functional studies of visual processing at the circuit level. Here we contribute to filling this gap by providing a neuroanatomical and neurophysiological characterization at the circuit level of a practically unstudied visual area of the bee brain, the anterior optic tubercle (AOTu). First, we analyzed the internal organization and neuronal connections of the AOTu. Second, we established a novel protocol for performing optophysiological recordings of visual circuit activity in the honeybee brain and studied the responses of AOTu interneurons during stimulation of distinct eye regions. Our neuroanatomical data show an intricate compartmentalization and connectivity of the AOTu, revealing a dorsoventral segregation of the visual input to the AOTu. Light stimuli presented in different parts of the visual field (dorsal, lateral, or ventral) induce distinct patterns of activation in AOTu output interneurons, retaining to some extent the dorsoventral input segregation revealed by our neuroanatomical data. In particular, activity patterns evoked by dorsal and ventral eye stimulation are clearly segregated into distinct AOTu subunits. Our results therefore suggest an involvement of the AOTu in the processing of dorsoventrally segregated visual information in the honeybee brain.

  13. Assistive technology applied to education of students with visual impairment.

    PubMed

    Alves, Cássia Cristiane de Freitas; Monteiro, Gelse Beatriz Martins; Rabello, Suzana; Gasparetto, Maria Elisabete Rodrigues Freire; de Carvalho, Keila Monteiro

    2009-08-01

    Verify the application of assistive technology, especially information technology in the education of blind and low-vision students from the perceptions of their teachers. Descriptive survey study in public schools in three municipalities of the state of São Paulo, Brazil. The sample comprised 134 teachers. According to the teachers' opinions, there are differences in the specificities and applicability of assistive technology for blind and low-vision students, for whom specific computer programs are important. Information technology enhances reading and writing skills, as well as communication with the world on an equal basis, thereby improving quality of life and facilitating the learning process. The main reason for not using information technology is the lack of planning courses. The main requirements for the use of information technology in schools are enough computers for all students, advisers to help teachers, and pedagogical support. Assistive technology is applied to education of students with visual impairment; however, teachers indicate the need for infrastructure and pedagogical support. Information technology is an important tool in the inclusion process and can promote independence and autonomy of students with visual impairment.

  14. Using GIS Mapping to Target Public Health Interventions: Examining Birth Outcomes Across GIS Techniques.

    PubMed

    MacQuillan, E L; Curtis, A B; Baker, K M; Paul, R; Back, Y O

    2017-08-01

    With advances in spatial analysis techniques, there has been a trend in recent public health research to assess the contribution of area-level factors to health disparity for a number of outcomes, including births. Although it is widely accepted that health disparity is best addressed by targeted, evidence-based and data-driven community efforts, and despite national and local focus in the U.S. to reduce infant mortality and improve maternal-child health, there is little work exploring how choice of scale and specific GIS visualization technique may alter the perception of analyses focused on health disparity in birth outcomes. Retrospective cohort study. Spatial analysis of individual-level vital records data for low birthweight and preterm births born to black women from 2007 to 2012 in one mid-sized Midwest city using different geographic information systems (GIS) visualization techniques [geocoded address records were aggregated at two levels of scale and additionally mapped using kernel density estimation (KDE)]. GIS analyses in this study support our hypothesis that choice of geographic scale (neighborhood or census tract) for aggregated birth data can alter programmatic decision-making. Results indicate that the relative merits of aggregated visualization or the use of KDE technique depend on the scale of intervention. The KDE map proved useful in targeting specific areas for interventions in cities with smaller populations and larger census tracts, where they allow for greater specificity in identifying intervention areas. When public health programmers seek to inform intervention placement in highly populated areas, however, aggregated data at the census tract level may be preferred, since it requires lower investments in terms of time and cartographic skill and, unlike neighborhood, census tracts are standardized in that they become smaller as the population density of an area increases.
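
    The contrast between the two mapping inputs can be sketched as follows; the synthetic coordinates, square "tracts", and default KDE bandwidth are assumptions, whereas the study used geocoded vital records and census geographies. Option 1 aggregates counts to areal units, option 2 estimates a smooth density surface over the raw points.

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(1)

      # Synthetic geocoded event locations: one cluster plus scattered background events.
      pts = np.vstack([
          rng.normal([2.0, 2.0], 0.3, size=(40, 2)),
          rng.uniform(0.0, 5.0, size=(20, 2)),
      ])

      # Option 1: aggregate to coarse areal units (here, a 5 x 5 grid of unit cells).
      tract_counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=5, range=[[0, 5], [0, 5]])
      print("events per cell:\n", tract_counts.astype(int))

      # Option 2: kernel density estimate on a fine grid (a smooth, boundary-free surface).
      kde = gaussian_kde(pts.T)
      xs, ys = np.meshgrid(np.linspace(0, 5, 50), np.linspace(0, 5, 50))
      density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
      print("grid cell with peak density:", np.unravel_index(density.argmax(), density.shape))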

  15. Advocating for a Population-Specific Health Literacy for People With Visual Impairments.

    PubMed

    Harrison, Tracie; Lazard, Allison

    2015-01-01

    Health literacy, the ability to access, process, and understand health information, is enhanced by the visual senses among people who are typically sighted. Emotions, meaning, speed of knowledge transfer, level of attention, and degree of relevance are all manipulated by the visual design of health information when people can see. When consumers of health information are blind or visually impaired, they access, process, and understand their health information in a multitude of methods using a variety of accommodations depending upon their severity and type of impairment. They are taught, or they learn how, to accommodate their differences by using alternative sensory experiences and interpretations. In this article, we argue that due to the unique and powerful aspects of visual learning and due to the differences in knowledge creation when people are not visually oriented, health literacy must be considered a unique construct for people with visual impairment, which requires a distinctive theoretical basis for determining the impact of their mind-constructed representations of health.

  16. Consumers' Perspectives on Effective Orientation and Mobility Services for Diabetic Adults Who Are Visually Impaired

    ERIC Educational Resources Information Center

    Griffin-Shirley, Nora; Kelley, Pat; Matlock, Dwayne; Page, Anita

    2006-01-01

    The authors interviewed and videotaped diabetic adults with visual impairments about their perceptions of orientation and mobility (O&M) services that they had received. The visual impairments of these middle-aged adults ranged from totally blind to low vision. The interview questions focused on demographic information about the interviewees, the…

  17. What Mathematical Images Are in a Typical Mathematics Textbook? Implications for Students with Visual Impairments

    ERIC Educational Resources Information Center

    Emerson, Robert Wall; Anderson, Dawn

    2018-01-01

    Introduction: Visually impaired students (that is, those who are blind or have low vision) have difficulty accessing curricular material in mathematics textbooks because many mathematics texts contain visual images that convey important content information but are not transcribed or described in digital versions of the texts. However, little is…

  18. G-Induced Visual Symptoms in a Military Helicopter Pilot.

    PubMed

    McMahon, Terry W; Newman, David G

    2016-11-01

    Military helicopters are increasingly agile and capable of producing significant G forces experienced in the longitudinal (z) axis of the body in a head-to-foot direction (+Gz). Dehydration and fatigue can adversely affect a pilot's +Gz tolerance, leading to +Gz-induced symptomatology occurring at lower +Gz levels than expected. The potential for adverse consequences of +Gz exposure to affect flight safety in military helicopter operations needs to be recognized. This case report describes a helicopter pilot who experienced +Gz-induced visual impairment during low-level flight. The incident occurred during a tropical training exercise, with an ambient temperature of around 35°C (95°F). As a result of the operational tempo and the environmental conditions, aircrew were generally fatigued and dehydrated. During a low-level steep turn, a Blackhawk pilot experienced significant visual deterioration. The +Gz level was estimated at +2.5 Gz. After completing the turn, the pilot's vision returned to normal, and the flight concluded without further incident. This case highlights the potential dangers of +Gz exposure in tactical helicopters. Although the +Gz level was moderate, the pilot's +Gz tolerance was reduced by the combined effects of dehydration and fatigue. The dangers of such +Gz-induced visual impairment during low-level flight are clear. More awareness of +Gz physiology and +Gz tolerance-reducing factors in helicopter operations is needed. Reprint & Copyright © 2016 Association of Military Surgeons of the U.S.

  19. The effect of synesthetic associations between the visual and auditory modalities on the Colavita effect.

    PubMed

    Stekelenburg, Jeroen J; Keetels, Mirjam

    2016-05-01

    The Colavita effect refers to the phenomenon that when confronted with an audiovisual stimulus, observers more often report having perceived the visual component than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than synesthetically incongruent (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone) combinations. Participants had to identify stimulus type (visual, auditory or audiovisual). The study replicated the Colavita effect: participants reported the visual component of the audiovisual stimuli more often than the auditory component. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window and premotor cortex, inferior parietal lobule and the posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.

  20. Color extended visual cryptography using error diffusion.

    PubMed

    Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu

    2011-01-01

    Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black and white or gray scale VC schemes, however, they are not sufficient to be applied directly to color shares due to different color structures. Some methods for color visual cryptography are not satisfactory in terms of producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of original images throughout the color channels and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
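
    The halftoning step the method builds on can be sketched with classic Floyd-Steinberg error diffusion over a single channel; the VIP synchronization and the construction of the colour shares described above are omitted, and the toy grayscale ramp is an assumption.

      import numpy as np

      def floyd_steinberg(channel):
          """Binary halftone of a grayscale channel in [0, 1] via error diffusion."""
          img = channel.astype(float).copy()
          h, w = img.shape
          out = np.zeros_like(img)
          for y in range(h):
              for x in range(w):
                  old = img[y, x]
                  new = 1.0 if old >= 0.5 else 0.0
                  out[y, x] = new
                  err = old - new
                  # Push the quantization error onto not-yet-processed neighbours.
                  if x + 1 < w:
                      img[y, x + 1] += err * 7 / 16
                  if y + 1 < h and x > 0:
                      img[y + 1, x - 1] += err * 3 / 16
                  if y + 1 < h:
                      img[y + 1, x] += err * 5 / 16
                  if y + 1 < h and x + 1 < w:
                      img[y + 1, x + 1] += err * 1 / 16
          return out

      gradient = np.tile(np.linspace(0, 1, 16), (8, 1))  # toy grayscale ramp
      print(floyd_steinberg(gradient).astype(int))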

  1. Active confocal imaging for visual prostheses

    PubMed Central

    Jung, Jae-Hyun; Aloni, Doron; Yitzhaky, Yitzhak; Peli, Eli

    2014-01-01

    There are encouraging advances in prosthetic vision for the blind, including retinal and cortical implants, and other “sensory substitution devices” that use tactile or electrical stimulation. However, they all have low resolution, limited visual field, and can display only a few gray levels (limited dynamic range), severely restricting their utility. To overcome these limitations, image processing or the imaging system could emphasize objects of interest and suppress the background clutter. We propose an active confocal imaging system based on light-field technology that will enable a blind user of any visual prosthesis to efficiently scan, focus on, and “see” only an object of interest while suppressing interference from background clutter. The system captures three-dimensional scene information using a light-field sensor and displays only an in-focus plane with the objects in it. After a confocal image is captured, a de-cluttering process removes the clutter based on blur differences. In preliminary experiments we verified the positive impact of confocal-based background clutter removal on recognition of objects in low-resolution, limited-dynamic-range simulated phosphene images. Using a custom-made multiple-camera system, we confirmed that the concept of a confocal de-cluttered image can be realized effectively using light-field imaging. PMID:25448710
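
    The blur-difference idea can be sketched under stated assumptions: a two-plane "focal stack" is simulated by Gaussian blurring, local sharpness is measured as the variance of the Laplacian, and only pixels sharpest in the user-selected plane are kept. This is an illustration, not the authors' light-field pipeline.

      import numpy as np
      from scipy.ndimage import gaussian_filter, laplace, uniform_filter

      rng = np.random.default_rng(2)
      texture = rng.random((64, 64))

      # Simulated focal stack: the object of interest (left half) is sharp in plane 0,
      # the background (right half) is sharp in plane 1.
      plane0 = texture.copy()
      plane0[:, 32:] = gaussian_filter(texture, 3.0)[:, 32:]
      plane1 = texture.copy()
      plane1[:, :32] = gaussian_filter(texture, 3.0)[:, :32]
      focal_stack = [plane0, plane1]

      def local_sharpness(img, size=7):
          """Local variance of the Laplacian: high where the image is in focus."""
          lap = laplace(img)
          mean = uniform_filter(lap, size)
          return uniform_filter(lap ** 2, size) - mean ** 2

      sharp = np.stack([local_sharpness(p) for p in focal_stack])
      selected = 0                              # focal plane the user has "locked onto"
      mask = sharp.argmax(axis=0) == selected   # keep pixels sharpest in that plane
      decluttered = np.where(mask, focal_stack[selected], 0.0)
      print("pixels kept from the selected plane:", int(mask.sum()), "of", mask.size)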

  2. Vision and Control for UAVs: A Survey of General Methods and of Inexpensive Platforms for Infrastructure Inspection

    PubMed Central

    Máthé, Koppány; Buşoniu, Lucian

    2015-01-01

    Unmanned aerial vehicles (UAVs) have gained significant attention in recent years. Low-cost platforms using inexpensive sensor payloads have been shown to provide satisfactory flight and navigation capabilities. In this report, we survey vision and control methods that can be applied to low-cost UAVs, and we list some popular inexpensive platforms and application fields where they are useful. We also highlight the sensor suites used where this information is available. We overview, among others, feature detection and tracking, optical flow and visual servoing, low-level stabilization and high-level planning methods. We then list popular low-cost UAVs, selecting mainly quadrotors. We discuss applications, restricting our focus to the field of infrastructure inspection. Finally, as an example, we formulate two use-cases for railway inspection, a less explored application field, and illustrate the usage of the vision and control techniques reviewed by selecting appropriate ones to tackle these use-cases. To select vision methods, we run a thorough set of experimental evaluations. PMID:26121608

  3. How Awareness Changes the Relative Weights of Evidence During Human Decision-Making

    PubMed Central

    Lamme, Victor A. F.; Dehaene, Stanislas

    2011-01-01

    Human decisions are based on accumulating evidence over time for different options. Here we ask a simple question: How is the accumulation of evidence affected by the level of awareness of the information? We examined the influence of awareness on decision-making using combined behavioral methods and magneto-encephalography (MEG). Participants were required to make decisions by accumulating evidence over a series of visually presented arrow stimuli whose visibility was modulated by masking. Behavioral results showed that participants could accumulate evidence under both high and low visibility. However, a top-down strategic modulation of the flow of incoming evidence was only present for stimuli with high visibility: once enough evidence had been accrued, participants strategically reduced the impact of new incoming stimuli. Also, decision-making speed and confidence were strongly modulated by the strength of the evidence for high-visible but not low-visible evidence, even though direct priming effects were identical for both types of stimuli. Neural recordings revealed that, while initial perceptual processing was independent of visibility, there was stronger top-down amplification for stimuli with high visibility than low visibility. Furthermore, neural markers of evidence accumulation over occipito-parietal cortex showed a strategic bias only for highly visible sensory information, speeding up processing and reducing neural computations related to the decision process. Our results indicate that the level of awareness of information changes decision-making: while accumulation of evidence already exists under low visibility conditions, high visibility allows evidence to be accumulated up to a higher level, leading to important strategical top-down changes in decision-making. Our results therefore suggest a potential role of awareness in deploying flexible strategies for biasing information acquisition in line with one's expectations and goals. PMID:22131904

  4. How awareness changes the relative weights of evidence during human decision-making.

    PubMed

    de Lange, Floris P; van Gaal, Simon; Lamme, Victor A F; Dehaene, Stanislas

    2011-11-01

    Human decisions are based on accumulating evidence over time for different options. Here we ask a simple question: How is the accumulation of evidence affected by the level of awareness of the information? We examined the influence of awareness on decision-making using combined behavioral methods and magneto-encephalography (MEG). Participants were required to make decisions by accumulating evidence over a series of visually presented arrow stimuli whose visibility was modulated by masking. Behavioral results showed that participants could accumulate evidence under both high and low visibility. However, a top-down strategic modulation of the flow of incoming evidence was only present for stimuli with high visibility: once enough evidence had been accrued, participants strategically reduced the impact of new incoming stimuli. Also, decision-making speed and confidence were strongly modulated by the strength of the evidence for high-visible but not low-visible evidence, even though direct priming effects were identical for both types of stimuli. Neural recordings revealed that, while initial perceptual processing was independent of visibility, there was stronger top-down amplification for stimuli with high visibility than low visibility. Furthermore, neural markers of evidence accumulation over occipito-parietal cortex showed a strategic bias only for highly visible sensory information, speeding up processing and reducing neural computations related to the decision process. Our results indicate that the level of awareness of information changes decision-making: while accumulation of evidence already exists under low visibility conditions, high visibility allows evidence to be accumulated up to a higher level, leading to important strategical top-down changes in decision-making. Our results therefore suggest a potential role of awareness in deploying flexible strategies for biasing information acquisition in line with one's expectations and goals.
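
    The behavioural pattern reported above can be illustrated with a small simulation; it is a sketch of the described strategy, not the authors' model, and the gating weight, bound, and noise parameters are assumptions. Evidence from a sequence of arrow stimuli is summed, and under high visibility the weight given to new stimuli is reduced once the running total passes a bound.

      import numpy as np

      rng = np.random.default_rng(3)

      def accumulate(evidence, visibility, bound=2.0, late_weight=0.2):
          """Sum evidence over stimuli; high visibility allows down-weighting past the bound."""
          total, weights = 0.0, []
          for e in evidence:
              # Strategic gating is only available when the stimuli are clearly visible.
              w = late_weight if (visibility == "high" and abs(total) >= bound) else 1.0
              weights.append(w)
              total += w * e
          return total, weights

      stimuli = rng.normal(0.6, 0.5, size=8)  # noisy evidence favouring one option
      for visibility in ("high", "low"):
          total, weights = accumulate(stimuli, visibility)
          print(f"{visibility:>4s} visibility: decision variable = {total:+.2f}, "
                f"weights = {np.round(weights, 1)}")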

  5. The effect of mood state on visual search times for detecting a target in noise: An application of smartphone technology

    PubMed Central

    Maekawa, Toru; de Brecht, Matthew; Yamagishi, Noriko

    2018-01-01

    The study of visual perception has largely been completed without regard to the influence that an individual’s emotional status may have on their performance in visual tasks. However, there is a growing body of evidence to suggest that mood may affect not only creative abilities and interpersonal skills but also the capacity to perform low-level cognitive tasks. Here, we sought to determine whether rudimentary visual search processes are similarly affected by emotion. Specifically, we examined whether an individual’s perceived happiness level affects their ability to detect a target in noise. To do so, we employed pop-out and serial visual search paradigms, implemented using a novel smartphone application that allowed search times and self-rated levels of happiness to be recorded throughout each twenty-four-hour period for two weeks. This experience sampling protocol circumvented the need to alter mood artificially with laboratory-based induction methods. Using our smartphone application, we were able to replicate the classic visual search findings, whereby pop-out search times remained largely unaffected by the number of distractors whereas serial search times increased with increasing number of distractors. While pop-out search times were unaffected by happiness level, serial search times with the maximum numbers of distractors (n = 30) were significantly faster for high happiness levels than low happiness levels (p = 0.02). Our results demonstrate the utility of smartphone applications in assessing ecologically valid measures of human visual performance. We discuss the significance of our findings for the assessment of basic visual functions using search time measures, and for our ability to search effectively for targets in real world settings. PMID:29664952

  6. The effect of mood state on visual search times for detecting a target in noise: An application of smartphone technology.

    PubMed

    Maekawa, Toru; Anderson, Stephen J; de Brecht, Matthew; Yamagishi, Noriko

    2018-01-01

    The study of visual perception has largely been completed without regard to the influence that an individual's emotional status may have on their performance in visual tasks. However, there is a growing body of evidence to suggest that mood may affect not only creative abilities and interpersonal skills but also the capacity to perform low-level cognitive tasks. Here, we sought to determine whether rudimentary visual search processes are similarly affected by emotion. Specifically, we examined whether an individual's perceived happiness level affects their ability to detect a target in noise. To do so, we employed pop-out and serial visual search paradigms, implemented using a novel smartphone application that allowed search times and self-rated levels of happiness to be recorded throughout each twenty-four-hour period for two weeks. This experience sampling protocol circumvented the need to alter mood artificially with laboratory-based induction methods. Using our smartphone application, we were able to replicate the classic visual search findings, whereby pop-out search times remained largely unaffected by the number of distractors whereas serial search times increased with increasing number of distractors. While pop-out search times were unaffected by happiness level, serial search times with the maximum numbers of distractors (n = 30) were significantly faster for high happiness levels than low happiness levels (p = 0.02). Our results demonstrate the utility of smartphone applications in assessing ecologically valid measures of human visual performance. We discuss the significance of our findings for the assessment of basic visual functions using search time measures, and for our ability to search effectively for targets in real world settings.

  7. To fear or to feed: the effects of turbidity on perception of risk by a marine fish.

    PubMed

    Leahy, Susannah M; McCormick, Mark I; Mitchell, Matthew D; Ferrari, Maud C O

    2011-12-23

    Coral reefs are currently experiencing a number of worsening anthropogenic stressors, with nearshore reefs suffering from increasing sedimentation because of growing human populations and development in coastal regions. In habitats where vision and olfaction serve as the primary sources of information, reduced visual input from suspended sediment may lead to significant alterations in prey fish behaviour. Here, we test whether prey compensate for reduced visual information by increasing their antipredator responses to chemically mediated risk cues in turbid conditions. Experiments with the spiny damselfish, Acanthochromis polyacanthus, found that baseline activity levels were reduced by 23 per cent in high turbidity conditions relative to low turbidity conditions. Furthermore, risk cues elicited strong antipredator responses at all turbidity levels; the strongest antipredator responses were observed in high turbidity conditions, with fish reducing their foraging by almost 40 per cent, as compared with 17 per cent for fish in clear conditions. This provides unambiguous evidence of sensory compensation in a predation context for a tropical marine fish, and suggests that prey fish may be able to behaviourally offset some of the fitness reductions resulting from anthropogenic sedimentation of their habitats.

  8. To fear or to feed: the effects of turbidity on perception of risk by a marine fish

    PubMed Central

    Leahy, Susannah M.; McCormick, Mark I.; Mitchell, Matthew D.; Ferrari, Maud C. O.

    2011-01-01

    Coral reefs are currently experiencing a number of worsening anthropogenic stressors, with nearshore reefs suffering from increasing sedimentation because of growing human populations and development in coastal regions. In habitats where vision and olfaction serve as the primary sources of information, reduced visual input from suspended sediment may lead to significant alterations in prey fish behaviour. Here, we test whether prey compensate for reduced visual information by increasing their antipredator responses to chemically mediated risk cues in turbid conditions. Experiments with the spiny damselfish, Acanthochromis polyacanthus, found that baseline activity levels were reduced by 23 per cent in high turbidity conditions relative to low turbidity conditions. Furthermore, risk cues elicited strong antipredator responses at all turbidity levels; the strongest antipredator responses were observed in high turbidity conditions, with fish reducing their foraging by almost 40 per cent, as compared with 17 per cent for fish in clear conditions. This provides unambiguous evidence of sensory compensation in a predation context for a tropical marine fish, and suggests that prey fish may be able to behaviourally offset some of the fitness reductions resulting from anthropogenic sedimentation of their habitats. PMID:21849308

  9. Spatial frequency supports the emergence of categorical representations in visual cortex during natural scene perception.

    PubMed

    Dima, Diana C; Perry, Gavin; Singh, Krish D

    2018-06-11

    In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
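
    Representational similarity analysis itself can be sketched in a few lines; the random arrays below stand in for MEG response patterns and network-layer activations, and the correlation-distance RDMs and Spearman comparison are standard choices rather than the paper's exact pipeline.

      import numpy as np
      from scipy.spatial.distance import pdist, squareform
      from scipy.stats import spearmanr

      rng = np.random.default_rng(4)
      n_stimuli = 12

      # Stand-ins: per-stimulus neural patterns and model features (partly related by design).
      neural = rng.random((n_stimuli, 50))
      model = neural @ rng.random((50, 20)) + 0.5 * rng.random((n_stimuli, 20))

      # Representational dissimilarity matrices (condition x condition).
      rdm_neural = squareform(pdist(neural, metric="correlation"))
      rdm_model = squareform(pdist(model, metric="correlation"))

      # Compare the two RDMs on their upper triangles with a rank correlation.
      iu = np.triu_indices(n_stimuli, k=1)
      rho, p = spearmanr(rdm_neural[iu], rdm_model[iu])
      print(f"RSA correlation: rho = {rho:.2f}, p = {p:.3g}")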

  10. A Neurobehavioral Model of Flexible Spatial Language Behaviors

    ERIC Educational Resources Information Center

    Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schoner, Gregor

    2012-01-01

    We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-word visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that…

  11. The impact of using visual images of the body within a personalized health risk assessment: an experimental study.

    PubMed

    Hollands, Gareth J; Marteau, Theresa M

    2013-05-01

    To examine the motivational impact of the addition of a visual image to a personalized health risk assessment and the underlying cognitive and emotional mechanisms. An online experimental study in which participants (n = 901; mean age = 27.2 years; 61.5% female) received an assessment and information focusing on the health implications of internal body fat and highlighting the protective benefits of physical activity. Participants were randomized to receive this in either (a) solely text form (control arm) or (b) text plus a visual image of predicted internal body fat (image arm). Participants received information representing one of three levels of health threat, determined by how physically active they were: high, moderate or benign. Main outcome measures were physical activity intentions (assessed pre- and post-intervention), worry, coherence and believability of the information. Intentions to undertake recommended levels of physical activity were significantly higher in the image arm, but only amongst those participants who received a high-threat communication. Believability of the results received was greater in the image arm and mediated the intervention effect on intentions. The addition of a visual image to a risk assessment led to small but significant increases in intentions to undertake recommended levels of physical activity in those at increased health risk. Limitations of the study and implications for future research are discussed. What is already known on this subject? Health risk information that is personalized to the individual may more strongly motivate risk-reducing behaviour change. Little prior research attention has been paid specifically to the motivational impact of personalized visual images and underlying mechanisms. What does this study add? In an experimental design, it is shown that receipt of visual images increases intentions to engage in risk-reducing behaviour, although only when a significant level of threat is presented. The study suggests that images increase the believability of health risk information and this may underlie motivational impact. © 2012 The British Psychological Society.

  12. Multilevel depth and image fusion for human activity detection.

    PubMed

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods only rely on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. In the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. In the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for an activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to the previous methods.

  13. Visual aspects of perception of multimedia messages on the web through the "eye tracker" method.

    PubMed

    Svilicić, Niksa

    2010-09-01

    Since the dawn of civilisation visual communication played a role in everyday life. In the early times there were simply shaped drawings of animals, pictograms explaining hunting tactics or strategies of attacking the enemies. Through evolution visual expression becomes an important component of communication process on several levels, from the existential and economic level to the artistic level. However, there was always a question of the level of user reception of such visual information in the medium transmitting the information. Does physical positioning of information in the medium contribute to the efficiency of the message? Do the same rules of content positioning apply for traditional (offline) and online media (Internet)? Rapid development of information technology and Internet in almost all segments of contemporary life calls for defining the rules of designing and positioning multimedia online contents on web sites. Recent research indicates beyond doubt that the physical positioning of an online content on a web site significantly determines the quality of user's perception of such content. By employing the "Eye tracking" method it is possible to objectively analyse the level of user perception of a multimedia content on a web site. What is the first thing observed by the user after opening the web site and how does he/she visually search the online content? By which methods can this be investigated subjectively and objectively? How can the survey results be used to improve the creation of web sites and to optimise the positioning of relevant contents on the site? The answers to these questions will significantly improve the presentation of multimedia interactive contents on the Web.

  14. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear.

    PubMed

    Willems, Roel M; Clevis, Krien; Hagoort, Peter

    2011-09-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.

  15. Visual local and global processing in low-functioning deaf individuals with and without autism spectrum disorder.

    PubMed

    Maljaars, J P W; Noens, I L J; Scholte, E M; Verpoorten, R A W; van Berckelaer-Onnes, I A

    2011-01-01

    The ComFor study has indicated that individuals with intellectual disability (ID) and autism spectrum disorder (ASD) show enhanced visual local processing compared with individuals with ID only. Items of the ComFor with meaningless materials provided the best discrimination between the two samples. These results can be explained by the weak central coherence account. The main focus of the present study is to examine whether enhanced visual perception is also present in low-functioning deaf individuals with and without ASD compared with individuals with ID, and to evaluate the underlying cognitive style in deaf and hearing individuals with ASD. Different sorting tasks (selected from the ComFor) were administered from four subsamples: (1) individuals with ID (n = 68); (2) individuals with ID and ASD (n = 72); (3) individuals with ID and deafness (n = 22); and (4) individuals with ID, ASD and deafness (n = 15). Differences in performance on sorting tasks with meaningful and meaningless materials between the four subgroups were analysed. Age and level of functioning were taken into account. Analyses of covariance revealed that results of deaf individuals with ID and ASD are in line with the results of hearing individuals with ID and ASD. Both groups showed enhanced visual perception, especially on meaningless sorting tasks, when compared with hearing individuals with ID, but not compared with deaf individuals with ID. In ASD either with or without deafness, enhanced visual perception for meaningless information can be understood within the framework of the central coherence theory, whereas in deafness, enhancement in visual perception might be due to a more generally enhanced visual perception as a result of auditory deprivation. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.

  16. Does Guiding Toward Task-Relevant Information Help Improve Graph Processing and Graph Comprehension of Individuals with Low or High Numeracy? An Eye-Tracker Experiment.

    PubMed

    Keller, Carmen; Junghans, Alex

    2017-11-01

    Individuals with low numeracy have difficulties with understanding complex graphs. Combining the information-processing approach to numeracy with graph comprehension and information-reduction theories, we examined whether high numerates' better comprehension might be explained by their closer attention to task-relevant graphical elements, from which they would expect numerical information to understand the graph. Furthermore, we investigated whether participants could be trained to improve their attention to task-relevant information and graph comprehension. In an eye-tracker experiment (N = 110) involving a sample from the general population, we presented participants with 2 hypothetical scenarios (stomach cancer, leukemia) showing survival curves for 2 treatments. In the training condition, participants received written instructions on how to read the graph. In the control condition, participants received another text. We tracked participants' eye movements while they answered 9 knowledge questions. The sum constituted graph comprehension. We analyzed visual attention to task-relevant graphical elements by using relative fixation durations and relative fixation counts. The mediation analysis revealed a significant (P < 0.05) indirect effect of numeracy on graph comprehension through visual attention to task-relevant information, which did not differ between the 2 conditions. Training had a significant main effect on visual attention (P < 0.05) but not on graph comprehension (P < 0.07). Individuals with high numeracy have better graph comprehension than individuals with low numeracy, due to their greater attention to task-relevant graphical elements. With appropriate instructions, both groups can be trained to improve their graph-processing efficiency. Future research should examine (e.g., motivational) mediators between visual attention and graph comprehension to develop appropriate instructions that also result in higher graph comprehension.
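
    The mediation logic reported above (numeracy, via attention to task-relevant elements, to graph comprehension) follows the standard product-of-coefficients approach; the sketch below uses simulated data and a bootstrap confidence interval, and its effect sizes are assumptions, not the study's estimates.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 110
      numeracy = rng.normal(size=n)
      attention = 0.5 * numeracy + rng.normal(scale=0.8, size=n)               # a-path
      comprehension = 0.6 * attention + 0.1 * numeracy + rng.normal(size=n)    # b-path + direct

      def ols(y, X):
          """Least-squares coefficients for y ~ X, with an intercept prepended."""
          X1 = np.column_stack([np.ones(len(y)), X])
          return np.linalg.lstsq(X1, y, rcond=None)[0]

      def indirect_effect(x, m, y):
          a = ols(m, x)[1]                          # x -> m
          b = ols(y, np.column_stack([x, m]))[2]    # m -> y, controlling for x
          return a * b

      boot = []
      for _ in range(2000):
          idx = rng.integers(0, n, size=n)          # resample participants with replacement
          boot.append(indirect_effect(numeracy[idx], attention[idx], comprehension[idx]))
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"indirect effect = {indirect_effect(numeracy, attention, comprehension):.3f}, "
            f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")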

  17. Forced to remember: when memory is biased by salient information.

    PubMed

    Santangelo, Valerio

    2015-04-15

    The last decades have seen rapid growth in attempts to understand the key factors involved in the internal memory representation of the external world. Visual salience has been found to contribute substantially to predicting the probability that an item/object embedded in a complex setting (i.e., a natural scene) will be encoded and then remembered later on. Here I review the existing literature highlighting the impact of perceptual-related salience (based on low-level sensory features) and semantics-related salience (based on high-level knowledge) on short-term memory representation, along with the neural mechanisms underpinning the interplay between these factors. The available evidence reveals that both perceptual- and semantics-related factors affect attention selection mechanisms during the encoding of natural scenes. By biasing internal memory representation, both types of factors increase the probability of remembering high-saliency items to the detriment of low-saliency items. The available evidence also highlights an interplay between these factors, with a reduced impact of perceptual-related salience in biasing memory representation as a function of the increasing availability of semantics-related salient information. The neural mechanisms underpinning this interplay involve the activation of different portions of the frontoparietal attention control network. Ventral regions support the assignment of selection/encoding priorities based on high-level semantics, while the involvement of dorsal regions reflects priority assignment based on low-level sensory features. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Visual slant misperception and the Black-Hole landing situation

    NASA Technical Reports Server (NTRS)

    Perrone, J. A.

    1983-01-01

    A theory which explains the tendency for dangerously low approaches during night landing situations is presented. The two-dimensional information at the pilot's eye contains sufficient information for the visual system to extract the angle of slant of the runway relative to the approach path. The analysis depends upon perspective information which is available at a certain distance out from the aimpoint, to either side of the runway edgelights. Under black-hole landing conditions, however, this information is not available, and it is proposed that the visual system instead uses the only available information, the perspective gradient of the runway edgelights. An equation is developed which predicts the perceived approach angle when this incorrect parameter is used. The predictions are in close agreement with existing experimental data.

  19. Visual task performance using a monocular see-through head-mounted display (HMD) while walking.

    PubMed

    Mustonen, Terhi; Berg, Mikko; Kaistinen, Jyrki; Kawai, Takashi; Häkkinen, Jukka

    2013-12-01

    A monocular see-through head-mounted display (HMD) allows the user to view displayed information while simultaneously interacting with the surrounding environment. This configuration lets people use HMDs while they are moving, such as while walking. However, sharing attention between the display and environment can compromise a person's performance in any ongoing task, and controlling one's gait may add further challenges. In this study, the authors investigated how the requirements of HMD-administered visual tasks altered users' performance while they were walking. Twenty-four university students completed 3 cognitive tasks (high- and low-working memory load, visual vigilance) on an HMD while seated and while simultaneously performing a paced walking task in a controlled environment. The results show that paced walking worsened performance (d', reaction time) in all HMD-administered tasks, but visual vigilance deteriorated more than memory performance. The HMD-administered tasks also worsened walking performance (speed, path overruns) in a manner that varied according to the overall demands of the task. These results suggest that people's ability to process information displayed on an HMD may worsen while they are in motion. Furthermore, the use of an HMD can critically alter a person's natural performance, such as their ability to guide and control their gait. In particular, visual tasks that involve constant monitoring of the HMD should be avoided. These findings highlight the need for careful consideration of the type and difficulty of information that can be presented through HMDs while still letting the user achieve an acceptable overall level of performance in various contexts of use. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  20. Making Astronomy and Space Science Accessible to the Blind and Visually Impaired

    NASA Astrophysics Data System (ADS)

    Beck-Winchatz, B.; Hoette, V.; Grice, N.

    2003-12-01

    One of the biggest obstacles blind and visually impaired people face in science is the ubiquity of important graphical information, which is generally not made available in alternate formats accessible to them. Funded by NASA's Initiative to Develop Education through Astronomy and Space Science (IDEAS), we have recently formed a team of scientists and educators from universities, the SOFIA NASA mission, a science museum, an observatory, and schools for the blind. Our goal is to develop and test Braille/tactile space science activities that actively engage students, from the elementary grades through introductory college level, in space science. We will discuss effective strategies and low-cost technologies that can be used to make graphical information accessible. We will also demonstrate examples, such as thermal expansion graphics created from telescope images of the Moon and other celestial objects, a tactile planisphere, three-dimensional models of near-Earth asteroids and tactile diagrams of their orbits, and an infrared detector activity.

  1. Individual's recollections of their experiences in eye clinics and understanding of their eye condition: results from a survey of visually impaired people in Britain.

    PubMed

    Douglas, Graeme; Pavey, Sue; Corcoran, Christine; Eperjesi, Frank

    2010-11-01

    Network 1000 is a UK-based panel survey of a representative sample of adults with registered visual impairment, with the aim of gathering information about people's opinions and circumstances. Participants were interviewed (Survey 1, n = 1007: 2005; Survey 2, n = 922: 2006/07) on a range of topics including the nature of their eye condition, details of other health issues, use of low vision aids (LVAs) and their experiences in eye clinics. Eleven percent of individuals did not know the name of their eye condition. Seventy percent of participants reported having long-term health problems or disabilities in addition to visual impairment and 43% reported having hearing difficulties. Seventy one percent reported using LVAs for reading tasks. Participants who had become registered as visually impaired in the previous 8 years (n = 395) were asked questions about non-medical information received in the eye clinic around that time. Reported information received included advice about 'registration' (48%), low vision aids (45%) and social care routes (43%); 17% reported receiving no information. While 70% of people were satisfied with the information received, this was lower for those of working age (56%) compared with retirement age (72%). Those who recalled receiving additional non-medical information and advice at the time of registration also recalled their experiences more positively. Whilst caution should be applied to the accuracy of recall of past events, the data provide a valuable insight into the types of information and support that visually impaired people feel they would benefit from in the eye clinic. © 2010 The Authors. Ophthalmic and Physiological Optics © 2010 The College of Optometrists.

  2. Manipulating Color and Other Visual Information Influences Picture Naming at Different Levels of Processing: Evidence from Alzheimer Subjects and Normal Controls

    ERIC Educational Resources Information Center

    Zannino, Gian Daniele; Perri, Roberta; Salamone, Giovanna; Di Lorenzo, Concetta; Caltagirone, Carlo; Carlesimo, Giovanni A.

    2010-01-01

    There is now a large body of evidence suggesting that color and photographic detail exert an effect on recognition of visually presented familiar objects. However, an unresolved issue is whether these factors act at the visual, the semantic or lexical level of the recognition process. In the present study, we investigated this issue by having…

  3. Top-Down Control of Visual Attention by the Prefrontal Cortex. Functional Specialization and Long-Range Interactions

    PubMed Central

    Paneri, Sofia; Gregoriou, Georgia G.

    2017-01-01

    The ability to select information that is relevant to current behavioral goals is the hallmark of voluntary attention and an essential part of our cognition. Attention tasks are a prime example for studying, at the neuronal level, how task-related information can be selectively processed in the brain while irrelevant information is filtered out. Whereas numerous studies have focused on elucidating the mechanisms of visual attention at the single-neuron and population level in the visual cortices, considerably less work has been devoted to deciphering the distinct contribution of higher-order brain areas, which are known to be critical for the employment of attention. Among these areas, the prefrontal cortex (PFC) has long been considered a source of top-down signals that bias selection in early visual areas in favor of the attended features. Here, we review recent experimental data that support the role of PFC in attention. We examine the existing evidence for functional specialization within PFC and discuss how long-range interactions between PFC subregions and posterior visual areas may be implemented in the brain and contribute to the attentional modulation of different measures of neural activity in visual cortices. PMID:29033784

  4. Top-Down Control of Visual Attention by the Prefrontal Cortex. Functional Specialization and Long-Range Interactions.

    PubMed

    Paneri, Sofia; Gregoriou, Georgia G

    2017-01-01

    The ability to select information that is relevant to current behavioral goals is the hallmark of voluntary attention and an essential part of our cognition. Attention tasks are a prime example for studying, at the neuronal level, how task-related information can be selectively processed in the brain while irrelevant information is filtered out. Whereas numerous studies have focused on elucidating the mechanisms of visual attention at the single-neuron and population level in the visual cortices, considerably less work has been devoted to deciphering the distinct contribution of higher-order brain areas, which are known to be critical for the employment of attention. Among these areas, the prefrontal cortex (PFC) has long been considered a source of top-down signals that bias selection in early visual areas in favor of the attended features. Here, we review recent experimental data that support the role of PFC in attention. We examine the existing evidence for functional specialization within PFC and discuss how long-range interactions between PFC subregions and posterior visual areas may be implemented in the brain and contribute to the attentional modulation of different measures of neural activity in visual cortices.

  5. What can neuromorphic event-driven precise timing add to spike-based pattern recognition?

    PubMed

    Akolkar, Himanshu; Meyer, Cedric; Clady, Zavier; Marre, Olivier; Bartolozzi, Chiara; Panzeri, Stefano; Benosman, Ryad

    2015-03-01

    This letter introduces a study to precisely measure what an increase in spike timing precision can add to spike-driven pattern recognition algorithms. The concept of generating spikes from images by converting gray levels into spike timings is currently at the basis of almost every spike-based model of biological visual systems. The use of images naturally leads to the generation of artificial, incorrect, and redundant spike timings and, more importantly, contradicts biological findings indicating that visual processing is massively parallel and asynchronous, with high temporal resolution. A new concept for acquiring visual information through pixel-individual asynchronous level-crossing sampling has been proposed in a recent generation of asynchronous neuromorphic visual sensors. Unlike conventional cameras, these sensors acquire data not at fixed points in time for the entire array but at fixed amplitude changes of their input, resulting in data that are optimally sparse in space and time: each pixel is sampled individually and precisely timed only when new (previously unknown) information is available (event based). This letter uses the high-temporal-resolution spiking output of neuromorphic event-based visual sensors to show that lowering time precision degrades performance on several recognition tasks, specifically when reaching the conventional range of machine vision acquisition frequencies (30-60 Hz). The use of information theory to characterize separability between classes for each temporal resolution shows that high temporal acquisition provides up to 70% more information than conventional spikes generated from frame-based acquisition as used in standard artificial vision, thus drastically increasing the separability between classes of objects. Experiments on real data show that the amount of information loss is correlated with temporal precision. Our information-theoretic study highlights the potential of neuromorphic asynchronous visual sensors for both practical applications and theoretical investigations. Moreover, it suggests that representing visual information as a precise sequence of spike times, as reported in the retina, offers considerable advantages for neuro-inspired visual computations.
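
    As an illustration of the pixel-individual level-crossing sampling described above, the sketch below converts a single pixel's intensity trace into timestamped ON/OFF events whenever the signal departs from the last event level by more than a fixed step. The threshold, signal, and sampling grid are hypothetical, and the sketch ignores sensor details such as refractory periods; it is a toy model, not the sensor design from the letter.

    ```python
    import numpy as np

    def level_crossing_events(signal, times, threshold):
        """Emit (time, polarity) events whenever the signal departs from the
        last event level by more than `threshold` (a toy event-camera pixel)."""
        events = []
        reference = signal[0]
        for t, value in zip(times, signal):
            change = value - reference
            if abs(change) >= threshold:
                events.append((t, +1 if change > 0 else -1))
                reference = value          # new reference level after each event
        return events

    # Hypothetical pixel intensity trace sampled on a fine time grid.
    times = np.linspace(0.0, 1.0, 1000)
    signal = 0.5 + 0.4 * np.sin(2 * np.pi * 3 * times)
    print(level_crossing_events(signal, times, threshold=0.1)[:5])
    ```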

  6. The impact of home care nurses' numeracy and graph literacy on comprehension of visual display information: implications for dashboard design.

    PubMed

    Dowding, Dawn; Merrill, Jacqueline A; Onorato, Nicole; Barrón, Yolanda; Rosati, Robert J; Russell, David

    2018-02-01

    To explore home care nurses' numeracy and graph literacy and their relationship to comprehension of visualized data. A multifactorial experimental design using online survey software. Nurses were recruited from 2 Medicare-certified home health agencies. Numeracy and graph literacy were measured using validated scales. Nurses were randomized to 1 of 4 experimental conditions. Each condition displayed data for 1 of 4 quality indicators, in 1 of 4 different visualized formats (bar graph, line graph, spider graph, table). A mixed linear model measured the impact of numeracy, graph literacy, and display format on data understanding. In all, 195 nurses took part in the study. They were slightly more numerate and graph literate than the general population. Overall, nurses understood information presented in bar graphs most easily (88% correct), followed by tables (81% correct), line graphs (77% correct), and spider graphs (41% correct). Individuals with low numeracy and low graph literacy had poorer comprehension of information displayed across all formats. High graph literacy appeared to enhance comprehension of data regardless of numeracy capabilities. Clinical dashboards are increasingly used to provide information to clinicians in visualized format, under the assumption that visual display reduces cognitive workload. Results of this study suggest that nurses' comprehension of visualized information is influenced by their numeracy, graph literacy, and the display format of the data. Individual differences in numeracy and graph literacy skills need to be taken into account when designing dashboard technology. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  7. Temporary Blinding Limits versus Maximum Permissible Exposure - A Paradigm Change in Risk Assessment for Visible Optical Radiation

    NASA Astrophysics Data System (ADS)

    Reidenbach, Hans-Dieter

    Safety considerations in the field of laser radiation have traditionally been restricted to maximum permissible exposure levels defined as a function of wavelength and exposure duration. In Europe, however, according to the European Directive 2006/25/EC on artificial optical radiation, the employer has to include in his risk assessment indirect effects from temporary blinding. Whereas sufficient knowledge on various deterministic risks exists, only sparse quantitative data are available for the impairment of visual functions due to temporary blinding from visible optical radiation. The consideration of indirect effects corresponds to a paradigm change in risk assessment when situations have to be treated where intrabeam viewing of low-power laser radiation is likely or other non-coherent visible radiation might influence certain visual tasks. In order to obtain a sufficient basis for the assessment of such situations, investigations of the functional relationships between wavelength, exposure time, and optical power and the resulting interference with visual functions have been performed, and the results are reported. The duration of a visual disturbance is thus predictable. In addition, preliminary information on protective measures is given.

  8. Skilled Deaf Readers have an Enhanced Perceptual Span in Reading

    PubMed Central

    Bélanger, Nathalie N.; Slattery, Timothy J.; Mayberry, Rachel I.; Rayner, Keith

    2013-01-01

    Recent evidence suggests that deaf people have enhanced visual attention to simple stimuli in the parafovea in comparison to hearing people. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to pre-process upcoming words and decide where to look next. We investigated whether auditory deprivation affects low-level visual processing during reading, and compared the perceptual span of deaf signers who were skilled and less skilled readers to that of skilled hearing readers. Compared to hearing readers, deaf readers had a larger perceptual span than would be expected by their reading ability. These results provide the first evidence that deaf readers’ enhanced attentional allocation to the parafovea is used during a complex cognitive task such as reading. PMID:22683830

  9. Visual selection and maintenance of the cell lines with high plant regeneration ability and low ploidy level in Dianthus acicularis by monitoring with flow cytometry analysis.

    PubMed

    Shiba, Tomonori; Mii, Masahiro

    2005-12-01

    An efficient plant regeneration system from cell suspension cultures was established in D. acicularis (2n=90) by monitoring ploidy level and visual selection of the cultures. The ploidy level of the cell cultures was closely related to their shoot regeneration ability. Cell lines comprising the original ploidy levels (2C+4C cells, corresponding to the DNA contents of G1 and G2 cells of the diploid plant, respectively) showed high regeneration ability, whereas those containing cells with 8C or higher DNA C-values showed low or no regeneration ability. The highly regenerable cell lines thus selected consisted of compact cell clumps with yellowish color and relatively moderate growth, suggesting that it is possible to visually select highly regenerable cell lines with the original ploidy level. All the regenerated plantlets from the highly regenerable cell cultures exhibited normal phenotypes, and no variations in ploidy level were observed by flow cytometry (FCM) analysis.

  10. Neuropsychological Components of Imagery Processing, Final Technical Report.

    ERIC Educational Resources Information Center

    Kosslyn, Stephen M.

    High-level visual processes make use of stored information, and are invoked during object identification, navigation, tracking, and visual mental imagery. The work presented in this document has resulted in a theory of the component "processing subsystems" used in high-level vision. This theory was developed by considering…

  11. Increased attention but more efficient disengagement: neuroscientific evidence for defensive processing of threatening health information.

    PubMed

    Kessels, Loes T E; Ruiter, Robert A C; Jansma, Bernadette M

    2010-07-01

    Previous studies indicate that people respond defensively to threatening health information, especially when the information challenges self-relevant goals. The authors investigated whether reduced acceptance of self-relevant health risk information is already visible in early attention processes, that is, attention disengagement processes. In a randomized, controlled trial with 29 smoking and nonsmoking students, a variant of Posner's cueing task was used in combination with the high-temporal-resolution method of event-related brain potentials (ERPs). The main outcome measures were reaction times and the P300 ERP. Smokers showed lower P300 amplitudes in response to high- as opposed to low-threat invalid trials when moving their attention to a target in the opposite visual field, indicating more efficient attention disengagement processes. Furthermore, both smokers and nonsmokers showed increased P300 amplitudes in response to the presentation of high- as opposed to low-threat valid trials, indicating threat-induced attention-capturing processes. Reaction time measures did not support the ERP data, indicating that the ERP measure can be highly informative for measuring low-level attention biases in health communication. The findings provide the first neuroscientific support for the hypothesis that threatening health information causes more efficient disengagement among those for whom the health threat is self-relevant. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  12. Teacher Viewpoints of Instructional Design Principles for Visuals in a Middle School Math Curriculum

    ERIC Educational Resources Information Center

    Clinton, Virginia; Cooper, Jennifer L.

    2015-01-01

    Instructional design principles for visuals in student materials have been developed through findings based on student-level measures. However, teacher viewpoints may be a rich source of information to better understand how visuals can be optimized for student learning. This study's purpose is to examine teacher viewpoints on visuals. In a…

  13. A unified selection signal for attention and reward in primary visual cortex.

    PubMed

    Stănişor, Liviu; van der Togt, Chris; Pennartz, Cyriel M A; Roelfsema, Pieter R

    2013-05-28

    Stimuli associated with high rewards evoke stronger neuronal activity than stimuli associated with lower rewards in many brain regions. It is not well understood how these reward effects influence activity in sensory cortices that represent low-level stimulus features. Here, we investigated the effects of reward information in the primary visual cortex (area V1) of monkeys. We found that the reward value of a stimulus relative to the value of other stimuli is a good predictor of V1 activity. Relative value biases the competition between stimuli, just as has been shown for selective attention. The neuronal latency of this reward value effect in V1 was similar to the latency of attentional influences. Moreover, V1 neurons with a strong value effect also exhibited a strong attention effect, which implies that relative value and top-down attention engage overlapping, if not identical, neuronal selection mechanisms. Our findings demonstrate that the effects of reward value reach down to the earliest sensory processing levels of the cerebral cortex and imply that theories about the effects of reward coding and top-down attention on visual representations should be unified.

  14. Guidance of attention by information held in working memory.

    PubMed

    Calleja, Marissa Ortiz; Rich, Anina N

    2013-05-01

    Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object's image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object's image. The same pattern of results held when the target was invariant (Exps. 2-3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.

  15. Improved detection probability of low level light and infrared image fusion system

    NASA Astrophysics Data System (ADS)

    Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang

    2018-02-01

    Low-level-light (LLL) images contain rich information about environmental details but are easily degraded by the weather: in smoke, rain, cloud, or fog, much of the target information is lost. Infrared images, formed from the radiation emitted by objects themselves, can "actively" capture target information in the scene; however, their contrast and resolution are poor, their ability to capture target detail is very limited, and the imaging mode does not conform to human visual habits. The fusion of LLL and infrared images can compensate for the deficiencies of each sensor and exploit the advantages of both. We first present the hardware design of the fusion circuit. Then, by computing recognition probabilities for a target (one person) and the background (trees), we find that the detection probability for trees is higher in the LLL image than in the infrared image, whereas the detection probability for the person is clearly higher in the infrared image than in the LLL image. The detection probability of the fused image for both the person and the trees is higher than that of either single detector. Image fusion can therefore significantly increase recognition probability and improve detection efficiency.

  16. What's So Different about Visuals?

    ERIC Educational Resources Information Center

    Williams, Thomas R.

    1993-01-01

    Shows how visual images and text differ from one another in the extent to which they resemble their referents; kinds of concepts they evoke; precision with which they evoke them; kinds of structures they impose on the information they convey; and degree to which that information can be interpreted by perceptual as opposed to higher level cognitive…

  17. High-intensity erotic visual stimuli de-activate the primary visual cortex in women.

    PubMed

    Huynh, Hieu K; Beers, Caroline; Willemsen, Antoon; Lont, Erna; Laan, Ellen; Dierckx, Rudi; Jansen, Monique; Sand, Michael; Weijmar Schultz, Willibrord; Holstege, Gert

    2012-06-01

    The primary visual cortex, Brodmann's area (BA 17), plays a vital role in basic survival mechanisms in humans. In most neuro-imaging studies in which the volunteers have to watch pictures or movies, the primary visual cortex is similarly activated independent of the content of the pictures or movies. However, when the volunteers perform demanding non-visual tasks, the primary visual cortex becomes de-activated, although the amount of incoming visual sensory information is the same. Do low- and high-intensity erotic movies, compared to neutral movies, produce similar de-activation of the primary visual cortex? Brain activation/de-activation was studied by Positron Emission Tomography scanning of the brains of 12 healthy heterosexual premenopausal women, aged 18-47, who watched neutral, low- and high-intensity erotic film segments. We measured differences in regional cerebral blood flow (rCBF) in the primary visual cortex during watching neutral, low-intensity erotic, and high-intensity erotic film segments. Watching high-intensity erotic, but not low-intensity erotic movies, compared to neutral movies resulted in strong de-activation of the primary (BA 17) and adjoining parts of the secondary visual cortex. The strong de-activation during watching of high-intensity erotic film might represent compensation for the increased blood supply in the brain regions involved in sexual arousal, and also because high-intensity erotic movies do not require precise scanning of the visual field, as their impact is clear to the observer. © 2012 International Society for Sexual Medicine.

  18. Abnormal brain activation in neurofibromatosis type 1: a link between visual processing and the default mode network.

    PubMed

    Violante, Inês R; Ribeiro, Maria J; Cunha, Gil; Bernardino, Inês; Duarte, João V; Ramos, Fabiana; Saraiva, Jorge; Silva, Eduardo; Castelo-Branco, Miguel

    2012-01-01

    Neurofibromatosis type 1 (NF1) is one of the most common single gene disorders affecting the human nervous system with a high incidence of cognitive deficits, particularly visuospatial. Nevertheless, neurophysiological alterations in low-level visual processing that could be relevant to explain the cognitive phenotype are poorly understood. Here we used functional magnetic resonance imaging (fMRI) to study early cortical visual pathways in children and adults with NF1. We employed two distinct stimulus types differing in contrast and spatial and temporal frequencies to evoke relatively different activation of the magnocellular (M) and parvocellular (P) pathways. Hemodynamic responses were investigated in retinotopically-defined regions V1, V2 and V3 and then over the acquired cortical volume. Relative to matched control subjects, patients with NF1 showed deficient activation of the low-level visual cortex to both stimulus types. Importantly, this finding was observed for children and adults with NF1, indicating that low-level visual processing deficits do not ameliorate with age. Moreover, only during M-biased stimulation patients with NF1 failed to deactivate or even activated anterior and posterior midline regions of the default mode network. The observation that the magnocellular visual pathway is impaired in NF1 in early visual processing and is specifically associated with a deficient deactivation of the default mode network may provide a neural explanation for high-order cognitive deficits present in NF1, particularly visuospatial and attentional. A link between magnocellular and default mode network processing may generalize to neuropsychiatric disorders where such deficits have been separately identified.

  19. Owl Monkeys (Aotus nigriceps and A. infulatus) Follow Routes Instead of Food-Related Cues during Foraging in Captivity

    PubMed Central

    da Costa, Renata Souza; Bicca-Marques, Júlio César

    2014-01-01

    Foraging at night imposes different challenges from those faced during daylight, including the reliability of sensory cues. Owl monkeys (Aotus spp.) are ideal models among anthropoids to study the information used during foraging at low light levels because they are unique by having a nocturnal lifestyle. Six Aotus nigriceps and four A. infulatus individuals distributed into five enclosures were studied for testing their ability to rely on olfactory, visual, auditory, or spatial and quantitative information for locating food rewards and for evaluating the use of routes to navigate among five visually similar artificial feeding boxes mounted in each enclosure. During most experiments only a single box was baited with a food reward in each session. The baited box changed randomly throughout the experiment. In the spatial and quantitative information experiment there were two baited boxes varying in the amount of food provided. These baited boxes remained the same throughout the experiment. A total of 45 sessions (three sessions per night during 15 consecutive nights) per enclosure was conducted in each experiment. Only one female showed a performance suggestive of learning of the usefulness of sight to locate the food reward in the visual information experiment. Subjects showed a chance performance in the remaining experiments. All owl monkeys showed a preference for one box or a subset of boxes to inspect upon the beginning of each experimental session and consistently followed individual routes among feeding boxes. PMID:25517894

  20. Owl monkeys (Aotus nigriceps and A. infulatus) follow routes instead of food-related cues during foraging in captivity.

    PubMed

    da Costa, Renata Souza; Bicca-Marques, Júlio César

    2014-01-01

    Foraging at night imposes different challenges from those faced during daylight, including the reliability of sensory cues. Owl monkeys (Aotus spp.) are ideal models among anthropoids to study the information used during foraging at low light levels because they are unique by having a nocturnal lifestyle. Six Aotus nigriceps and four A. infulatus individuals distributed into five enclosures were studied for testing their ability to rely on olfactory, visual, auditory, or spatial and quantitative information for locating food rewards and for evaluating the use of routes to navigate among five visually similar artificial feeding boxes mounted in each enclosure. During most experiments only a single box was baited with a food reward in each session. The baited box changed randomly throughout the experiment. In the spatial and quantitative information experiment there were two baited boxes varying in the amount of food provided. These baited boxes remained the same throughout the experiment. A total of 45 sessions (three sessions per night during 15 consecutive nights) per enclosure was conducted in each experiment. Only one female showed a performance suggestive of learning of the usefulness of sight to locate the food reward in the visual information experiment. Subjects showed a chance performance in the remaining experiments. All owl monkeys showed a preference for one box or a subset of boxes to inspect upon the beginning of each experimental session and consistently followed individual routes among feeding boxes.

  1. Detail-enhanced multimodality medical image fusion based on gradient minimization smoothing filter and shearing filter.

    PubMed

    Liu, Xingbin; Mei, Wenbo; Du, Huiqian

    2018-02-13

    In this paper, a detail-enhanced multimodality medical image fusion algorithm is proposed using a proposed multi-scale joint decomposition framework (MJDF) and a shearing filter (SF). The MJDF, constructed with a gradient minimization smoothing filter (GMSF) and a Gaussian low-pass filter (GLF), is used to decompose source images into low-pass layers, edge layers, and detail layers at multiple scales. In order to highlight the detail information in the fused image, the edge layer and the detail layer at each scale are combined, with weights, into a detail-enhanced layer. Because directional filtering is effective in capturing salient information, the SF is applied to the detail-enhanced layer to extract geometrical features and obtain directional coefficients. A fusion rule based on visual saliency maps is designed for fusing the low-pass layers, and the sum of standard deviations is used as the activity-level measurement for fusing the directional coefficients. The final fusion result is obtained by synthesizing the fused low-pass layers and directional coefficients. Experimental results show that the proposed method, with its shift invariance, directional selectivity, and detail-enhancement property, is effective in preserving and enhancing the detail information of multimodality medical images. Graphical abstract: The detailed implementation of the proposed medical image fusion algorithm.
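
    The full MJDF/SF pipeline cannot be reproduced from the abstract alone, but the flavor of splitting each modality into low-pass and detail layers and fusing the detail layers by local activity can be sketched as below. This is a minimal single-scale approximation: a plain Gaussian low-pass filter stands in for the GMSF/GLF pair, local standard deviation stands in for the activity measure, and the sigma and window size are hypothetical.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, generic_filter

    def fuse_two_modalities(img_a, img_b, sigma=2.0, win=5):
        """Toy single-scale fusion: average the low-pass layers and pick the
        detail layer with the larger local activity (standard deviation)."""
        low_a, low_b = gaussian_filter(img_a, sigma), gaussian_filter(img_b, sigma)
        det_a, det_b = img_a - low_a, img_b - low_b

        act_a = generic_filter(det_a, np.std, size=win)
        act_b = generic_filter(det_b, np.std, size=win)

        fused_low = 0.5 * (low_a + low_b)                    # base layer: simple average
        fused_det = np.where(act_a >= act_b, det_a, det_b)   # detail layer: max activity
        return fused_low + fused_det

    # Hypothetical co-registered multimodal patches.
    rng = np.random.default_rng(0)
    a, b = rng.random((64, 64)), rng.random((64, 64))
    print(fuse_two_modalities(a, b).shape)
    ```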

  2. New colored optical security elements using Rolic's LPP/LCP technology: devices for first- to third-level inspection

    NASA Astrophysics Data System (ADS)

    Moia, Franco

    2002-04-01

    With linear photo-polymerization (LPP), ROLIC has invented a photo-patternable technology that enables the alignment not only of conventional liquid crystals but also of liquid crystal polymers (LCPs). ROLIC's optical security device technology derives from this LPP/LCP technology. LPP/LCP security devices are created by structured photo-alignment of an LPP layer through photo-masks, thus generating a high-resolution, photo-patterned aligning layer which carries the alignment information of the image to be created. The subsequent LCP layer transforms the alignment information into an optical phase image with low and/or very high information content, such as invisible photographic pictures. The building-block capability of the LPP/LCP technology allows the manufacturing of cholesteric and non-cholesteric LPP/LCP devices which cover 1st- and/or 2nd-level applications. Apart from black/white security devices, colored information zones can be integrated. Moreover, we have developed an LPP/LCP security device which covers all three inspection levels (1st, 2nd, and 3rd) in one and the same authentication device: besides a color shift on tilting the device (1st level) and the detection of normally hidden information by use of a simple sheet polarizer (2nd level), the new device contains encrypted hidden information which can be visualized only by superimposing an LPP/LCP inspection tool (key) for decryption (3rd level). This optical key is also based on the LPP/LCP technology and is itself a 3rd-level security device.

  3. Risk based meat hygiene--examples on food chain information and visual meat inspection.

    PubMed

    Ellerbroek, Lüppo

    2007-08-01

    Regulation (EC) No. 854/2004 refers in particular to a broad spectrum of official supervision of products of animal origin. This supervision should be based on the most current relevant information available. The proposal presented includes forms for the implementation of Food Chain Information (FCI), as laid down in Regulation (EC) No. 853/2004, as well as suggestions for the implementation of visual inspection and criteria for a practical approach to assist the competent authority and the business operator. These suggestions are summarised in two forms comprising the FCI and the practical information from the farm of origin as well as from the slaughterhouse that the competent authority needs in order to permit visual meat inspection of fattening pigs. The information requested from the farm level includes, for example, the animal loss rate during the fattening period and the diagnostic findings on carcasses and organs during the meat inspection procedure. The evaluation of findings in the liver and pleura at the slaughterhouse level indicates a correlation with the general health status under the fattening conditions at farm level. The criteria developed are in principle suited to a "gold standard" for the health status of fattening pigs, and they may serve as a practical implementation of the term Good Farming Practice in relation to consumer protection. Visual meat inspection shall be permitted only for fattening pigs deriving from farms fulfilling this "gold standard". It is proposed to integrate the two forms for FCI and the additional information for authorisation of visual meat inspection of fattening pigs into the working document of DG SANCO titled "Commission regulation laying down specific rules on official controls for the inspection of meat" (SANCO/2696/2006).

  4. An Information-Theoretic-Cluster Visualization for Self-Organizing Maps.

    PubMed

    Brito da Silva, Leonardo Enzo; Wunsch, Donald C

    2018-06-01

    Improved data visualization will be a significant tool to enhance cluster analysis. In this paper, an information-theoretic-based method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set whose cardinality controls the granularity of the IT-vis and with which the first- and second-order statistics are computed and used to estimate their probability density functions. These are used to calculate the similarity measure, based on Renyi's quadratic cross entropy and cross information potential (CIP). The introduced visualizations combine the low computational cost and kernel estimation properties of the representative CIP and the data structure representation of a single-linkage-based grouping algorithm to generate an enhanced SOM-based visualization. The visual quality of the IT-vis is assessed by comparing it with other visualization methods for several real-world and synthetic benchmark data sets. Thus, this paper also contains a significant literature survey. The experiments demonstrate the IT-vis cluster revealing capabilities, in which cluster boundaries are sharply captured. Additionally, the information-theoretic visualizations are used to perform clustering of the SOM. Compared with other methods, IT-vis of large SOMs yielded the best results in this paper, for which the quality of the final partitions was evaluated using external validity indices.
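
    As a rough illustration of the cross information potential (CIP) used above as the similarity measure between adjacent neurons, the sketch below estimates the CIP of two sample sets with Gaussian (Parzen) kernels and derives Renyi's quadratic cross entropy as its negative logarithm. The kernel width and the toy data are assumptions, and the IT-vis described in the abstract estimates the densities from first- and second-order statistics of each neuron's data subset rather than from raw samples.

    ```python
    import numpy as np

    def cross_information_potential(x, y, sigma=0.5):
        """Parzen estimate of the cross information potential between two
        sample sets x (N, d) and y (M, d) using Gaussian kernels of width sigma."""
        d = x.shape[1]
        diff = x[:, None, :] - y[None, :, :]                     # (N, M, d) pairwise differences
        sq = np.sum(diff ** 2, axis=-1)
        norm = (2.0 * np.pi * (2.0 * sigma ** 2)) ** (d / 2.0)   # kernel widths add under convolution
        return np.mean(np.exp(-sq / (4.0 * sigma ** 2)) / norm)

    # Renyi's quadratic cross entropy is the negative log of the CIP.
    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 1.0, size=(200, 2))
    y = rng.normal(0.5, 1.0, size=(200, 2))
    cip = cross_information_potential(x, y)
    print(cip, -np.log(cip))
    ```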

  5. Identity Development in German Adolescents with and without Visual Impairments

    ERIC Educational Resources Information Center

    Pinquart, Martin; Pfeiffer, Jens P.

    2013-01-01

    Introduction: The study reported here assessed the exploration of identity and commitment to an identity in German adolescents with and without visual impairments. Methods: In total, 178 adolescents with visual impairments (blindness or low vision) and 526 sighted adolescents completed the Ego Identity Process Questionnaire. Results: The levels of…

  6. Top-down influences on visual attention during listening are modulated by observer sex.

    PubMed

    Shen, John; Itti, Laurent

    2012-07-15

    In conversation, women have a small advantage in decoding non-verbal communication compared to men. In light of these findings, we sought to determine whether sex differences also exist in visual attention during a related listening task and, if so, whether the differences lie in attention to high-level aspects of the scene or to conspicuous visual features. Using eye-tracking and computational techniques, we present direct evidence that men and women orient attention differently during conversational listening. We tracked the eyes of 15 men and 19 women who watched and listened to 84 clips featuring 12 different speakers in various outdoor settings. At the fixation following each saccadic eye movement, we analyzed the type of object that was fixated. Men gazed more often at the mouth and women at the eyes of the speaker. Women more often exhibited "distracted" saccades directed away from the speaker and towards a background scene element. Examining the multi-scale center-surround variation in low-level visual features (static: color, intensity, orientation; dynamic: motion energy), we found that men consistently selected regions which expressed more variation in dynamic features, which can be attributed to a male preference for motion and a female preference for areas that may contain nonverbal information about the speaker. In sum, significant differences were observed, which we speculate arise from different integration strategies of visual cues in selecting the final target of attention. Our findings have implications for studies of sex in nonverbal communication, as well as for more predictive models of visual attention. Published by Elsevier Ltd.

  7. Visual Predictions in the Orbitofrontal Cortex Rely on Associative Content

    PubMed Central

    Chaumon, Maximilien; Kveraga, Kestutis; Barrett, Lisa Feldman; Bar, Moshe

    2014-01-01

    Predicting upcoming events from incomplete information is an essential brain function. The orbitofrontal cortex (OFC) plays a critical role in this process by facilitating recognition of sensory inputs via predictive feedback to sensory cortices. In the visual domain, the OFC is engaged by low spatial frequency (LSF) and magnocellular-biased inputs, but beyond this, we know little about the information content required to activate it. Is the OFC automatically engaged to analyze any LSF information for meaning? Or is it engaged only when LSF information matches preexisting memory associations? We tested these hypotheses and show that only LSF information that could be linked to memory associations engages the OFC. Specifically, LSF stimuli activated the OFC in 2 distinct medial and lateral regions only if they resembled known visual objects. More identifiable objects increased activity in the medial OFC, known for its function in affective responses. Furthermore, these objects also increased the connectivity of the lateral OFC with the ventral visual cortex, a crucial region for object identification. At the interface between sensory, memory, and affective processing, the OFC thus appears to be attuned to the associative content of visual information and to play a central role in visuo-affective prediction. PMID:23771980

  8. Piaget's Water-Level Task: The Impact of Vision on Performance

    ERIC Educational Resources Information Center

    Papadopoulos, Konstantinos; Koustriava, Eleni

    2011-01-01

    In the present study, the aim was to examine the differences in performance between children and adolescents with visual impairment and sighted peers in the water-level task. Twenty-eight individuals with visual impairments, 14 individuals with blindness and 14 individuals with low vision, and 28 sighted individuals participated in the present…

  9. Learning from Balance Sheet Visualization

    ERIC Educational Resources Information Center

    Tanlamai, Uthai; Soongswang, Oranuj

    2011-01-01

    This exploratory study examines alternative visuals and their effect on the level of learning of balance sheet users. Executive and regular classes of graduate students majoring in information technology in business were asked to evaluate the extent of acceptance and enhanced capability of these alternative visuals toward their learning…

  10. Different source image fusion based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Piao, Yan

    2016-03-01

    Video image fusion combines the video obtained by different image sensors so that the sensors complement each other, yielding video that is rich in information and suited to the human visual system. Infrared cameras penetrate harsh environments such as smoke, fog, and low light, but their ability to capture image detail is poor and their output does not suit the human visual system. Visible-light imaging alone can produce detailed, high-resolution images that suit the visual system, but visible images are easily affected by the external environment. The fusion of infrared and visible video involves algorithms of high complexity and computational cost, occupies considerable memory, and demands high clock rates; it has mostly been implemented in software (e.g., C and C++) and rarely on hardware platforms. In this paper, based on the imaging characteristics of infrared and visible-light images, software and hardware are combined: the registration parameters are obtained in MATLAB, and a gray-level weighted-average fusion is implemented on the hardware (FPGA) platform. The resulting fused image effectively increases the amount of information acquired from the scene.
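
    The gray-level weighted-average step mentioned above is simple enough to sketch in a few lines. The weights, the assumption of already-registered inputs, and the 8-bit conversion are illustrative choices rather than details taken from the paper, and the actual work performs this computation in FPGA logic rather than in software.

    ```python
    import numpy as np

    def weighted_average_fusion(visible, infrared, w_visible=0.6):
        """Pixel-wise gray-level weighted average of two registered images
        given as float arrays in [0, 1]; returns an 8-bit fused image."""
        w_infrared = 1.0 - w_visible
        fused = w_visible * visible + w_infrared * infrared
        return np.clip(fused * 255.0, 0, 255).astype(np.uint8)

    # Hypothetical registered frames from the two sensors.
    rng = np.random.default_rng(2)
    vis = rng.random((480, 640))
    ir = rng.random((480, 640))
    fused = weighted_average_fusion(vis, ir)
    print(fused.dtype, fused.shape)
    ```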

  11. Fixation and saliency during search of natural scenes: the case of visual agnosia.

    PubMed

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2009-07-01

    Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.
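
    The "model-predicted saliency" analysis above refers to saliency-map models in the Itti-Koch tradition. A heavily simplified, intensity-only center-surround approximation is sketched below to make the idea of comparing saliency at fixated locations concrete; the scale choices and fixation coordinates are hypothetical, and the full model also combines color and orientation channels across multiple scales.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def intensity_saliency(image, center_sigma=2.0, surround_sigma=8.0):
        """Toy saliency map: absolute center-surround difference of Gaussians
        on the intensity channel, normalized to [0, 1]."""
        center = gaussian_filter(image, center_sigma)
        surround = gaussian_filter(image, surround_sigma)
        saliency = np.abs(center - surround)
        return saliency / (saliency.max() + 1e-12)

    def saliency_at_fixations(saliency_map, fixations_rc):
        """Mean model-predicted saliency sampled at (row, col) fixation points."""
        rows, cols = zip(*fixations_rc)
        return float(np.mean(saliency_map[list(rows), list(cols)]))

    # Hypothetical scene and fixation locations.
    rng = np.random.default_rng(3)
    scene = rng.random((240, 320))
    smap = intensity_saliency(scene)
    print(saliency_at_fixations(smap, [(100, 150), (60, 200), (180, 40)]))
    ```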

  12. Content-Aware Video Adaptation under Low-Bitrate Constraint

    NASA Astrophysics Data System (ADS)

    Hsiao, Ming-Ho; Chen, Yi-Wen; Chen, Hua-Tsung; Chou, Kuan-Hung; Lee, Suh-Yin

    2007-12-01

    With the development of wireless networks and the improvement of mobile device capabilities, video streaming is increasingly widespread in such environments. Under conditions of limited resources and inherent constraints, appropriate video adaptation has become one of the most important and challenging issues in wireless multimedia applications. In this paper, we propose a novel content-aware video adaptation scheme in order to utilize resources effectively and improve visual perceptual quality. First, the attention model is derived by analyzing the characteristics of brightness, location, motion-vector, and energy features in the compressed domain to reduce computational complexity. Then, by integrating the attention model, the capability of the client device, and a correlational statistical model, attractive regions of video scenes are derived. The information-object- (IOB-) weighted rate-distortion model is used for adjusting the bit allocation. Finally, the video adaptation scheme dynamically adjusts the video bitstream at the frame level and the object level. Experimental results validate that the proposed scheme achieves better visual quality effectively and efficiently.
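
    The idea of object-level bit allocation weighted by attention can be sketched very simply: distribute a frame's bit budget across regions in proportion to an importance weight. The proportional rule, region names, and numbers below are illustrative assumptions, not the paper's IOB-weighted rate-distortion model.

    ```python
    def allocate_bits(frame_budget_bits, region_weights):
        """Split a frame's bit budget across regions proportionally to
        attention weights (a toy stand-in for IOB-weighted allocation)."""
        total = sum(region_weights.values())
        return {name: int(round(frame_budget_bits * w / total))
                for name, w in region_weights.items()}

    # Hypothetical regions of one frame: a salient object, a second object, background.
    weights = {"player": 0.5, "ball": 0.3, "background": 0.2}
    print(allocate_bits(frame_budget_bits=48_000, region_weights=weights))
    ```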

  13. Extrinsic cognitive load impairs low-level speech perception.

    PubMed

    Mattys, Sven L; Barden, Katharine; Samuel, Arthur G

    2014-06-01

    Recent research has suggested that the extrinsic cognitive load generated by performing a nonlinguistic visual task while perceiving speech increases listeners' reliance on lexical knowledge and decreases their capacity to perceive phonetic detail. In the present study, we asked whether this effect is accounted for better at a lexical or a sublexical level. The former would imply that cognitive load directly affects lexical activation but not perceptual sensitivity; the latter would imply that increased lexical reliance under cognitive load is only a secondary consequence of imprecise or incomplete phonetic encoding. Using the phoneme restoration paradigm, we showed that perceptual sensitivity decreases (i.e., phoneme restoration increases) almost linearly with the effort involved in the concurrent visual task. However, cognitive load had only a minimal effect on the contribution of lexical information to phoneme restoration. We concluded that the locus of extrinsic cognitive load on the speech system is perceptual rather than lexical. Mechanisms by which cognitive load increases tolerance to acoustic imprecision and broadens phonemic categories were discussed.

  14. Economic Evaluation of Low-Vision Rehabilitation for Veterans With Macular Diseases in the US Department of Veterans Affairs.

    PubMed

    Stroupe, Kevin T; Stelmack, Joan A; Tang, X Charlene; Wei, Yongliang; Sayers, Scott; Reda, Domenic J; Kwon, Ellen; Massof, Robert W

    2018-05-01

    Examining costs and consequences of different low-vision (LV) programs provides important information about resources needed to expand treatment options efficiently. To examine the costs and consequences of LV rehabilitation or basic LV services. The US Department of Veterans Affairs (VA) Low Vision Intervention Trial (LOVIT) II was conducted from September 27, 2010, to July 31, 2014, at 9 VA facilities and included 323 veterans with macular diseases and a best-corrected distance visual acuity of 20/50 to 20/200. Veterans were randomized to receive basic LV services that provided LV devices without therapy, or LV rehabilitation that added a therapist to LV services who provided instruction and homework on using LV devices, eccentric viewing, and environmental modification. We compared costs and consequences between these groups. The interventions were low-vision devices without therapy and LV devices with therapy. Costs of providing basic LV services or LV rehabilitation were assessed. We measured consequences as changes in functional visual ability from baseline to follow-up 4 months after randomization using the VA Low Vision Visual Functioning Questionnaire. Visual ability was measured in dimensionless log odds units (logits). Of 323 randomized patients, the mean (SD) age was 80 (10.5) years, 314 (97.2%) were men, and 292 (90.4%) were white. One hundred sixty (49.5%) received basic LV services and 163 (50.5%) received LV rehabilitation. The mean (SD) total direct health care costs per patient were similar between patients who were randomized to receive basic LV services ($1662 [$671]) or LV rehabilitation ($1788 [$864]) (basic LV services, $126 lower; 95% CI, $299 lower to $35 higher; P = .15). However, basic LV services required less time and had lower transportation costs. Patients receiving LV rehabilitation had greater improvements in overall visual ability, reading ability, visual information processing, and visual motor skill scores.

  15. How Configural Is the Configural Superiority Effect? A Neuroimaging Investigation of Emergent Features in Visual Cortex

    PubMed Central

    Fox, Olivia M.; Harel, Assaf; Bennett, Kevin B.

    2017-01-01

    The perception of a visual stimulus is dependent not only upon local features, but also on the arrangement of those features. When stimulus features are perceptually well organized (e.g., symmetric or parallel), a global configuration with a high degree of salience emerges from the interactions between these features, often referred to as emergent features. Emergent features can be demonstrated in the Configural Superiority Effect (CSE): presenting a stimulus within an organized context relative to its presentation in a disarranged one results in better performance. Prior neuroimaging work on the perception of emergent features regards the CSE as an “all or none” phenomenon, focusing on the contrast between configural and non-configural stimuli. However, it is still not clear how emergent features are processed between these two endpoints. The current study examined the extent to which behavioral and neuroimaging markers of emergent features are responsive to the degree of configurality in visual displays. Subjects were tasked with reporting the anomalous quadrant in a visual search task while being scanned. Degree of configurality was manipulated by incrementally varying the rotational angle of low-level features within the stimulus arrays. Behaviorally, we observed faster response times with increasing levels of configurality. These behavioral changes were accompanied by increases in response magnitude across multiple visual areas in occipito-temporal cortex, primarily early visual cortex and object-selective cortex. Our findings suggest that the neural correlates of emergent features can be observed even in response to stimuli that are not fully configural, and demonstrate that configural information is already present at early stages of the visual hierarchy. PMID:28167924

  16. 'When is VISION asked too much'?

    PubMed

    van der Wildt, G J; den Brinker, B P; Wertheim, A H

    1995-01-01

    Over the last two decades, a shift has taken place from substitutional/compensatory training to the utilisation of residual vision in the rehabilitation of the visually impaired. Some visually impaired people are able to use their visual perception nearly as completely as normally sighted people in spite of a severe visual disability. On the other hand, people with nearly normal visual functions can be severely visually handicapped. To illustrate this, two cases are presented. The first case is a man, aged 47 years, with juvenile macular degeneration in both eyes. In spite of a very low visual acuity of less than 0.05, he finished a university education and is able to maintain himself very well in a leading position in a scientific environment by using adequate low vision devices. He also relies upon visual perception for his leisure activities, such as photography and speed skating. The second case is a woman, aged 30 years, with nearly normal visual functions, who is not able to read for longer periods because of conflicting information from body and eye movements and the visual input, which causes sickness during reading. She is unable to use books for her study and works with recordings on tape. The results of a comprehensive visual assessment will be related to the specific low vision devices and their use.

  17. Theoretical approaches to lightness and perception.

    PubMed

    Gilchrist, Alan

    2015-01-01

    Theories of lightness, like theories of perception in general, can be categorized as high-level, low-level, and mid-level. However, I will argue that in practice there are only two categories: one-stage mid-level theories, and two-stage low-high theories. Low-level theories usually include a high-level component and high-level theories include a low-level component, the distinction being mainly one of emphasis. Two-stage theories are the modern incarnation of the persistent sensation/perception dichotomy according to which an early experience of raw sensations, faithful to the proximal stimulus, is followed by a process of cognitive interpretation, typically based on past experience. Like phlogiston or the ether, raw sensations seem like they must exist, but there is no clear evidence for them. Proximal stimulus matches are postperceptual, not read off an early sensory stage. Visual angle matches are achieved by a cognitive process of flattening the visual world. Likewise, brightness (luminance) matches depend on a cognitive process of flattening the illumination. Brightness is not the input to lightness; brightness is slower than lightness. Evidence for an early (< 200 ms) mosaic stage is shaky. As for cognitive influences on perception, the many claims tend to fall apart upon close inspection of the evidence. Much of the evidence for the current revival of the 'new look' is probably better explained by (1) a natural desire of (some) subjects to please the experimenter, and (2) the ease of intuiting an experimental hypothesis. High-level theories of lightness are overkill. The visual system does not need to know the amount of illumination, merely which surfaces share the same illumination. This leaves mid-level theories derived from the gestalt school. Here the debate seems to revolve around layer models and framework models. Layer models fit our visual experience of a pattern of illumination projected onto a pattern of reflectance, while framework models provide a better account of illusions and failures of constancy. Evidence for and against these approaches is reviewed.

  18. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  19. Visual Display Terminal use in Iranian bank tellers: Effects on job stress and insomnia.

    PubMed

    Giahi, Omid; Shahmoradi, Behzad; Barkhordari, Abdullah; Khoubi, Jamshid

    2015-01-01

    Visual Display Terminals (VDTs) are equipment in many workplaces, and their use may increase the risk of visual, musculoskeletal and mental problems, including insomnia. We aimed to determine the relationship between duration of daily VDT use and insomnia among Iranian bank tellers. We randomly selected 382 bank tellers working with VDTs. Quality of sleep and stress information were collected with the Athens Insomnia Scale (AIS) and the Demand-Control Model (DCM), respectively. Out of 382 participants, 127 (33.2%) had sleep complaints and 255 (66.8%) had no sleep disorders. Moreover, insomnia symptom scores were significantly higher in participants with more than 6 hours of daily VDT use, after adjusting for multiple confounding factors (P < 0.001). There was no significant relationship between stress and insomnia. It seems that low levels of stress and job satisfaction reduce the impact of VDT use on sleep quality in tellers who worked with VDTs for less than 6 hours per day.

  20. Distinct Feedforward and Feedback Effects of Microstimulation in Visual Cortex Reveal Neural Mechanisms of Texture Segregation.

    PubMed

    Klink, P Christiaan; Dagnino, Bruno; Gariel-Mathis, Marie-Alice; Roelfsema, Pieter R

    2017-07-05

    The visual cortex is hierarchically organized, with low-level areas coding for simple features and higher areas for complex ones. Feedforward and feedback connections propagate information between areas in opposite directions, but their functional roles are only partially understood. We used electrical microstimulation to perturb the propagation of neuronal activity between areas V1 and V4 in monkeys performing a texture-segregation task. In both areas, microstimulation locally caused a brief phase of excitation, followed by inhibition. Both these effects propagated faithfully in the feedforward direction from V1 to V4. Stimulation of V4, however, caused little V1 excitation, but it did yield a delayed suppression during the late phase of visually driven activity. This suppression was pronounced for the V1 figure representation and weaker for background representations. Our results reveal functional differences between feedforward and feedback processing in texture segregation and suggest a specific modulating role for feedback connections in perceptual organization. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Temporal Processing Capacity in High-Level Visual Cortex Is Domain Specific.

    PubMed

    Stigliani, Anthony; Weiner, Kevin S; Grill-Spector, Kalanit

    2015-09-09

    Prevailing hierarchical models propose that temporal processing capacity--the amount of information that a brain region processes in a unit time--decreases at higher stages in the ventral stream regardless of domain. However, it is unknown if temporal processing capacities are domain general or domain specific in human high-level visual cortex. Using a novel fMRI paradigm, we measured temporal capacities of functional regions in high-level visual cortex. Contrary to hierarchical models, our data reveal domain-specific processing capacities as follows: (1) regions processing information from different domains have differential temporal capacities within each stage of the visual hierarchy and (2) domain-specific regions display the same temporal capacity regardless of their position in the processing hierarchy. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. Notably, domain-specific temporal processing capacities are not apparent in V1 and have perceptual implications. Behavioral testing revealed that the encoding capacity of body images is higher than that of characters, faces, and places, and there is a correspondence between peak encoding rates and cortical capacities for characters and bodies. The present evidence supports a model in which the natural statistics of temporal information in the visual world may affect domain-specific temporal processing and encoding capacities. These findings suggest that the functional organization of high-level visual cortex may be constrained by temporal characteristics of stimuli in the natural world, and this temporal capacity is a characteristic of domain-specific networks in high-level visual cortex. Significance statement: Visual stimuli bombard us at different rates every day. For example, words and scenes are typically stationary and vary at slow rates. In contrast, bodies are dynamic and typically change at faster rates. Using a novel fMRI paradigm, we measured temporal processing capacities of functional regions in human high-level visual cortex. Contrary to prevailing theories, we find that different regions have different processing capacities, which have behavioral implications. In general, character-selective regions have the lowest capacity, face- and place-selective regions have an intermediate capacity, and body-selective regions have the highest capacity. These results suggest that temporal processing capacity is a characteristic of domain-specific networks in high-level visual cortex and contributes to the segregation of cortical regions. Copyright © 2015 the authors 0270-6474/15/3512412-13$15.00/0.

  2. Limits on perceptual encoding can be predicted from known receptive field properties of human visual cortex.

    PubMed

    Cohen, Michael A; Rhee, Juliana Y; Alvarez, George A

    2016-01-01

    Human cognition has a limited capacity that is often attributed to the brain having finite cognitive resources, but the nature of these resources is usually not specified. Here, we show evidence that perceptual interference between items can be predicted by known receptive field properties of the visual cortex, suggesting that competition within representational maps is an important source of the capacity limitations of visual processing. Across the visual hierarchy, receptive fields get larger and represent more complex, high-level features. Thus, when presented simultaneously, high-level items (e.g., faces) will often land within the same receptive fields, while low-level items (e.g., color patches) will often not. Using a perceptual task, we found long-range interference between high-level items, but only short-range interference for low-level items, with both types of interference being weaker across hemifields. Finally, we show that long-range interference between items appears to occur primarily during perceptual encoding and not during working memory maintenance. These results are naturally explained by the distribution of receptive fields and establish a link between perceptual capacity limits and the underlying neural architecture. (c) 2015 APA, all rights reserved).

  3. English 3135: Visual Rhetoric

    ERIC Educational Resources Information Center

    Gatta, Oriana

    2013-01-01

    As an advanced rhetoric and composition doctoral student, I taught Engl 3135: Visual Rhetoric, a three-credit upper-level course offered by the Department of English at Georgia State University. Mary E. Hocks originally designed this course in 2000 to, in her words, "introduce visual information design theories and practices for writers [and]…

  4. Training-induced recovery of low-level vision followed by mid-level perceptual improvements in developmental object and face agnosia

    PubMed Central

    Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri

    2015-01-01

    Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental stage of the adult visual system, can overcome such developmental impairments. Here, we visually trained LG, a developmental object and face agnosic individual. Prior to training, at the age of 20, LG's basic and mid-level visual functions such as visual acuity, crowding effects, and contour integration were underdeveloped relative to normal adult vision, corresponding to or poorer than those of 5–6 year olds (Gilaie-Dotan, Perry, Bonneh, Malach & Bentin, 2009). Intensive visual training, based on lateral interactions, was applied for a period of 9 months. LG's directly trained but also untrained visual functions such as visual acuity, crowding, binocular stereopsis and also mid-level contour integration improved significantly and reached near-age-level performance, with long-term (over 4 years) persistence. Moreover, mid-level functions that were tested post-training were found to be normal in LG. Some possible subtle improvement was observed in LG's higher-order visual functions such as object recognition and part integration, while LG's face perception skills have not improved thus far. These results suggest that corrective training at a post-developmental stage, even in the adult visual system, can prove effective, and its enduring effects are the basis for a revival of a developmental cascade that can lead to reduced perceptual impairments. PMID:24698161

  5. Training-induced recovery of low-level vision followed by mid-level perceptual improvements in developmental object and face agnosia.

    PubMed

    Lev, Maria; Gilaie-Dotan, Sharon; Gotthilf-Nezri, Dana; Yehezkel, Oren; Brooks, Joseph L; Perry, Anat; Bentin, Shlomo; Bonneh, Yoram; Polat, Uri

    2015-01-01

    Long-term deprivation of normal visual inputs can cause perceptual impairments at various levels of visual function, from basic visual acuity deficits, through mid-level deficits such as contour integration and motion coherence, to high-level face and object agnosia. Yet it is unclear whether training during adulthood, at a post-developmental stage of the adult visual system, can overcome such developmental impairments. Here, we visually trained LG, a developmental object and face agnosic individual. Prior to training, at the age of 20, LG's basic and mid-level visual functions such as visual acuity, crowding effects, and contour integration were underdeveloped relative to normal adult vision, corresponding to or poorer than those of 5-6 year olds (Gilaie-Dotan, Perry, Bonneh, Malach & Bentin, 2009). Intensive visual training, based on lateral interactions, was applied for a period of 9 months. LG's directly trained but also untrained visual functions such as visual acuity, crowding, binocular stereopsis and also mid-level contour integration improved significantly and reached near-age-level performance, with long-term (over 4 years) persistence. Moreover, mid-level functions that were tested post-training were found to be normal in LG. Some possible subtle improvement was observed in LG's higher-order visual functions such as object recognition and part integration, while LG's face perception skills have not improved thus far. These results suggest that corrective training at a post-developmental stage, even in the adult visual system, can prove effective, and its enduring effects are the basis for a revival of a developmental cascade that can lead to reduced perceptual impairments. © 2014 The Authors. Developmental Science Published by John Wiley & Sons Ltd.

  6. Basic Visual Merchandising. Second Edition. [Student's Manual and] Answer Book/Teacher's Guide.

    ERIC Educational Resources Information Center

    Luter, Robert R.

    This student's manual that features content needed to do tasks related to visual merchandising is intended for students in co-op training stations and entry-level, master employee, and supervisory-level employees. It contains 13 assignments. Each assignment has questions covering specific information and also features activities in which students…

  7. Domestic pigs' (Sus scrofa domestica) use of direct and indirect visual and auditory cues in an object choice task.

    PubMed

    Nawroth, Christian; von Borell, Eberhard

    2015-05-01

    Recently, foraging strategies have been linked to the ability to use indirect visual information. More selective feeders should express a higher aversion to losses than non-selective feeders and should therefore be more prone to avoid empty food locations. To extend these findings, we present a series of experiments investigating the use of direct and indirect visual and auditory information by an omnivorous but selective feeder, the domestic pig. Subjects had to choose between two buckets, with only one containing a reward. Before making a choice, the subjects in Experiment 1 (N = 8) received full information regarding both the baited and non-baited location, either in the visual or the auditory domain. In this experiment, the subjects were able to use visual but not auditory cues to infer the location of the reward spontaneously. Additionally, four individuals learned to use auditory cues after a period of training. In Experiment 2 (N = 8), the pigs were given different amounts of visual information about the content of the buckets: lifting either both of the buckets (full information), the baited bucket (direct information), the empty bucket (indirect information), or no bucket at all (no information). The subjects as a group were able to use direct and indirect visual cues. However, over the course of the experiment, the performance dropped to chance level when indirect information was provided. A final experiment (N = 3) provided preliminary results for pigs' use of indirect auditory information to infer the location of a reward. We conclude that pigs at a very young age are able to make decisions based on indirect information in the visual domain, whereas their performance in the use of indirect auditory information warrants further investigation.

  8. A Different View on the Checkerboard? Alterations in Early and Late Visually Evoked EEG Potentials in Asperger Observers

    PubMed Central

    Kornmeier, Juergen; Wörner, Rike; Riedel, Andreas; Bach, Michael; Tebartz van Elst, Ludger

    2014-01-01

    Background Asperger Autism is a lifelong psychiatric condition with highly circumscribed interests and routines, problems in social cognition, verbal and nonverbal communication, and also perceptual abnormalities with sensory hypersensitivity. To objectify both lower-level visual and cognitive alterations we looked for differences in visual event-related potentials (EEG) between Asperger observers and matched controls while they observed simple checkerboard stimuli. Methods In a balanced oddball paradigm checkerboards of two checksizes (0.6° and 1.2°) were presented with different frequencies. Participants counted the occurrence times of the rare fine or rare coarse checkerboards in different experimental conditions. We focused on early visual ERP differences as a function of checkerboard size and the classical P3b ERP component as an indicator of cognitive processing. Results We found an early (100–200 ms after stimulus onset) occipital ERP effect of checkerboard size (dominant spatial frequency). This effect was weaker in the Asperger than in the control observers. Further a typical parietal/central oddball-P3b occurred at 500 ms with the rare checkerboards. The P3b showed a right-hemispheric lateralization, which was more prominent in Asperger than in control observers. Discussion The difference in the early occipital ERP effect between the two groups may be a physiological marker of differences in the processing of small visual details in Asperger observers compared to normal controls. The stronger lateralization of the P3b in Asperger observers may indicate a stronger involvement of the right-hemispheric network of bottom-up attention. The lateralization of the P3b signal might be a compensatory consequence of the compromised early checksize effect. Higher-level analytical information processing units may need to compensate for difficulties in low-level signal analysis. PMID:24632708

  9. A different view on the checkerboard? Alterations in early and late visually evoked EEG potentials in Asperger observers.

    PubMed

    Kornmeier, Juergen; Wörner, Rike; Riedel, Andreas; Bach, Michael; Tebartz van Elst, Ludger

    2014-01-01

    Asperger Autism is a lifelong psychiatric condition with highly circumscribed interests and routines, problems in social cognition, verbal and nonverbal communication, and also perceptual abnormalities with sensory hypersensitivity. To objectify both lower-level visual and cognitive alterations we looked for differences in visual event-related potentials (EEG) between Asperger observers and matched controls while they observed simple checkerboard stimuli. In a balanced oddball paradigm checkerboards of two checksizes (0.6° and 1.2°) were presented with different frequencies. Participants counted the occurrence times of the rare fine or rare coarse checkerboards in different experimental conditions. We focused on early visual ERP differences as a function of checkerboard size and the classical P3b ERP component as an indicator of cognitive processing. We found an early (100-200 ms after stimulus onset) occipital ERP effect of checkerboard size (dominant spatial frequency). This effect was weaker in the Asperger than in the control observers. Further a typical parietal/central oddball-P3b occurred at 500 ms with the rare checkerboards. The P3b showed a right-hemispheric lateralization, which was more prominent in Asperger than in control observers. The difference in the early occipital ERP effect between the two groups may be a physiological marker of differences in the processing of small visual details in Asperger observers compared to normal controls. The stronger lateralization of the P3b in Asperger observers may indicate a stronger involvement of the right-hemispheric network of bottom-up attention. The lateralization of the P3b signal might be a compensatory consequence of the compromised early checksize effect. Higher-level analytical information processing units may need to compensate for difficulties in low-level signal analysis.

  10. Disentangling brain activity related to the processing of emotional visual information and emotional arousal.

    PubMed

    Kuniecki, Michał; Wołoszyn, Kinga; Domagalik, Aleksandra; Pilarczyk, Joanna

    2018-05-01

    Processing of emotional visual information engages cognitive functions and induces arousal. We aimed to examine the modulatory role of emotional valence on brain activations linked to the processing of visual information and those linked to arousal. Participants were scanned and their pupil size was measured while viewing negative and neutral images. Visual noise was added to the images in various proportions to parametrically manipulate the amount of visual information. Pupil size was used as an index of physiological arousal. We show that arousal induced by the negative images, as compared to the neutral ones, is primarily related to greater amygdala activity, whereas increasing the visibility of negative content is related to enhanced activity in the lateral occipital complex (LOC). We argue that more intense visual processing of negative scenes can occur irrespective of the level of arousal. This may suggest that higher areas of the visual stream are fine-tuned to process emotionally relevant objects. Both arousal and the processing of emotional visual information modulated activity within the ventromedial prefrontal cortex (vmPFC). Overlapping activations within the vmPFC may reflect the integration of these aspects of emotional processing. Additionally, we show that emotionally evoked pupil dilations are related to activations in the amygdala, vmPFC, and LOC.
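
    The parametric noise manipulation described above can be sketched in a few lines. The uniform noise model and the blending proportions in the sketch below are illustrative assumptions, not the exact stimulus construction used in the study.

```python
import numpy as np

def add_visual_noise(image, proportion, rng=None):
    """Blend an image with random noise.

    image      : 2-D or 3-D float array scaled to [0, 1].
    proportion : fraction of the signal replaced by noise (0 = intact image,
                 1 = pure noise); the parametric manipulation sweeps this value.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.uniform(0.0, 1.0, size=image.shape)  # uniform pixel noise (an assumption)
    return (1.0 - proportion) * image + proportion * noise

# Example: generate versions of one image with 25%, 50%, and 75% noise.
img = np.random.default_rng(0).uniform(size=(256, 256))  # placeholder "photo"
degraded = {p: add_visual_noise(img, p) for p in (0.25, 0.50, 0.75)}
```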

  11. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear

    PubMed Central

    Clevis, Krien; Hagoort, Peter

    2011-01-01

    We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of an as such neutral visual scene intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants’ brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading of non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information. PMID:20530540

  12. Development of an omnidirectional gamma-ray imaging Compton camera for low-radiation-level environmental monitoring

    NASA Astrophysics Data System (ADS)

    Watanabe, Takara; Enomoto, Ryoji; Muraishi, Hiroshi; Katagiri, Hideaki; Kagaya, Mika; Fukushi, Masahiro; Kano, Daisuke; Satoh, Wataru; Takeda, Tohoru; Tanaka, Manobu M.; Tanaka, Souichi; Uchida, Tomohisa; Wada, Kiyoto; Wakamatsu, Ryo

    2018-02-01

    We have developed an omnidirectional gamma-ray imaging Compton camera for environmental monitoring at low levels of radiation. The camera consisted of only six 3.5 cm CsI(Tl) scintillator cubes, each of which was read out by a super-bialkali photo-multiplier tube (PMT). Our camera enables the visualization of the position of gamma-ray sources in all directions (∼4π sr) over a wide energy range between 300 and 1400 keV. The angular resolution (σ) was found to be ∼11°, which was realized using an image-sharpening technique. A high detection efficiency of 18 cps/(µSv/h) for 511 keV (1.6 cps/MBq at 1 m) was achieved, indicating the capability of this camera to visualize hotspots in areas with low-radiation-level contamination from the order of µSv/h to natural background levels. Our proposed technique can be easily used as a low-radiation-level imaging monitor in radiation control areas, such as medical and accelerator facilities.
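
    As a rough back-of-the-envelope illustration of what the quoted sensitivity implies for low-level monitoring, the sketch below converts the reported efficiency of 18 cps/(µSv/h) into expected count rates at a few ambient dose rates; the dose-rate values are assumptions chosen for illustration.

```python
# Worked example using the efficiency quoted in the abstract:
# 18 counts per second per (µSv/h) at 511 keV.
EFFICIENCY_CPS_PER_USV_H = 18.0

def expected_count_rate(dose_rate_usv_h):
    """Counts per second expected at a given ambient dose rate (µSv/h)."""
    return EFFICIENCY_CPS_PER_USV_H * dose_rate_usv_h

for dose_rate in (0.05, 0.1, 1.0):  # ~natural background up to a mild hotspot (assumed values)
    cps = expected_count_rate(dose_rate)
    print(f"{dose_rate:5.2f} µSv/h -> {cps:5.1f} cps "
          f"(~{1000 / cps:6.1f} s to accumulate 1000 counts)")
```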

  13. The Relation of Visual and Auditory Aptitudes to First Grade Low Readers' Achievement under Sight-Word and Systematic Phonic Instructions. Research Report #36.

    ERIC Educational Resources Information Center

    Gallistel, Elizabeth; And Others

    Ten auditory and ten visual aptitude measures were administered in the middle of first grade to a sample of 58 low readers. More than half of this low reader sample had scored more than a year below expected grade level on two or more aptitudes. Word recognition measures were administered after four months of sight word instruction and again after…

  14. Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers

    DTIC Science & Technology

    2013-09-01

    Experimental conditions were chosen to simulate testing of cognitively impaired observers. Visual nystagmus, elicited by low-level luminance-grating stimuli, was used as the oculomotor reflex; a new drifting stimulus was developed to test visual motion processing in the presence of incoherent motion noise, and nystagmus events were detected by filtering horizontal gaze-position recordings.

  15. Evidence for Separate Contributions of High and Low Spatial Frequencies during Visual Word Recognition.

    PubMed

    Winsler, Kurt; Holcomb, Phillip J; Midgley, Katherine J; Grainger, Jonathan

    2017-01-01

    Previous studies have shown that different spatial frequency information processing streams interact during the recognition of visual stimuli. However, it is a matter of debate as to the contributions of high and low spatial frequency (HSF and LSF) information for visual word recognition. This study examined the role of different spatial frequencies in visual word recognition using event-related potential (ERP) masked priming. EEG was recorded from 32 scalp sites in 30 English-speaking adults in a go/no-go semantic categorization task. Stimuli were white characters on a neutral gray background. Targets were uppercase five letter words preceded by a forward-mask (#######) and a 50 ms lowercase prime. Primes were either the same word (repeated) or a different word (un-repeated) than the subsequent target and either contained only high, only low, or full spatial frequency information. Additionally within each condition, half of the prime-target pairs were high lexical frequency, and half were low. In the full spatial frequency condition, typical ERP masked priming effects were found with an attenuated N250 (sub-lexical) and N400 (lexical-semantic) for repeated compared to un-repeated primes. For HSF primes there was a weaker N250 effect which interacted with lexical frequency, a significant reversal of the effect around 300 ms, and an N400-like effect for only high lexical frequency word pairs. LSF primes did not produce any of the classic ERP repetition priming effects, however they did elicit a distinct early effect around 200 ms in the opposite direction of typical repetition effects. HSF information accounted for many of the masked repetition priming ERP effects and therefore suggests that HSFs are more crucial for word recognition. However, LSFs did produce their own pattern of priming effects indicating that larger scale information may still play a role in word recognition.
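
    Primes restricted to high or low spatial frequencies, as in this study, are commonly produced by Fourier-domain filtering; the sketch below shows a generic version of such a filter. The cutoff frequency, display resolution and placeholder stimulus are assumptions, not the authors' filter parameters.

```python
import numpy as np

def spatial_frequency_filter(image, cutoff_cpd, pixels_per_degree, keep="low"):
    """Low-pass or high-pass filter an image in the Fourier domain.

    cutoff_cpd        : cutoff in cycles per degree of visual angle (assumed value).
    pixels_per_degree : display resolution in pixels per degree.
    keep              : "low" keeps frequencies below the cutoff, "high" above it.
    """
    h, w = image.shape
    fy = np.fft.fftfreq(h) * pixels_per_degree  # cycles per degree along y
    fx = np.fft.fftfreq(w) * pixels_per_degree  # cycles per degree along x
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff_cpd if keep == "low" else radius > cutoff_cpd
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

# Example: split a word stimulus into LSF and HSF versions at 2 cycles/degree.
stimulus = np.random.default_rng(1).uniform(size=(128, 256))  # stand-in for a rendered word
lsf = spatial_frequency_filter(stimulus, cutoff_cpd=2.0, pixels_per_degree=40, keep="low")
hsf = spatial_frequency_filter(stimulus, cutoff_cpd=2.0, pixels_per_degree=40, keep="high")
```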

  16. Persistent recruitment of somatosensory cortex during active maintenance of hand images in working memory.

    PubMed

    Galvez-Pol, A; Calvo-Merino, B; Capilla, A; Forster, B

    2018-07-01

    Working memory (WM) supports temporary maintenance of task-relevant information. This process is associated with persistent activity in the sensory cortex processing the information (e.g., visual stimuli activate visual cortex). However, we argue here that more multifaceted stimuli moderate this sensory-locked activity and recruit distinctive cortices. Specifically, perception of bodies recruits somatosensory cortex (SCx) beyond early visual areas (suggesting embodiment processes). Here we explore persistent activation in processing areas beyond the sensory cortex initially relevant to the modality of the stimuli. Using visual and somatosensory evoked-potentials in a visual WM task, we isolated different levels of visual and somatosensory involvement during encoding of body and non-body-related images. Persistent activity increased in SCx only when maintaining body images in WM, whereas visual/posterior regions' activity increased significantly when maintaining non-body images. Our results bridge WM and embodiment frameworks, supporting a dynamic WM process where the nature of the information summons specific processing resources. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Oculomotor guidance and capture by irrelevant faces.

    PubMed

    Devue, Christel; Belopolsky, Artem V; Theeuwes, Jan

    2012-01-01

    Even though it is generally agreed that face stimuli constitute a special class of stimuli, which are treated preferentially by our visual system, it remains unclear whether faces can capture attention in a stimulus-driven manner. Moreover, there is a long-standing debate regarding the mechanism underlying the preferential bias of selecting faces. Some claim that faces constitute a set of special low-level features to which our visual system is tuned; others claim that the visual system is capable of extracting the meaning of faces very rapidly, driving attentional selection. Those debates continue because many studies contain methodological peculiarities and manipulations that prevent a definitive conclusion. Here, we present a new visual search task in which observers had to make a saccade to a uniquely colored circle while completely irrelevant objects were also present in the visual field. The results indicate that faces capture and guide the eyes more than other animated objects and that our visual system is not only tuned to the low-level features that make up a face but also to its meaning.

  18. The Social Lives of Canadian Youths with Visual Impairments

    ERIC Educational Resources Information Center

    Gold, Deborah; Shaw, Alexander; Wolffe, Karen

    2010-01-01

    This survey of the social and leisure experiences of Canadian youths with visual impairments found that, in general, youths with low vision experienced more social challenges than did their peers who were blind. Levels of social support were not found to differ on the basis of level of vision, sex, or age. (Contains 1 figure and 1 table.)

  19. Image jitter enhances visual performance when spatial resolution is impaired.

    PubMed

    Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko

    2012-09-06

    Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with age-related macular degeneration (AMD) under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with duration of 100 or 166 ms and amplitude within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.
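
    Implementing retinal-image jitter of the kind described here requires converting the amplitudes, given in degrees of visual angle, into screen pixels; the sketch below performs that conversion and draws random jitter offsets. The viewing distance and pixel density are assumed values, not parameters reported in the study.

```python
import numpy as np

def degrees_to_pixels(degrees, viewing_distance_cm, pixels_per_cm):
    """Convert visual angle (degrees) to on-screen pixels for a central viewer."""
    size_cm = 2.0 * viewing_distance_cm * np.tan(np.radians(degrees) / 2.0)
    return size_cm * pixels_per_cm

def jitter_offsets(n_steps, amplitude_deg, viewing_distance_cm=57.0, pixels_per_cm=38.0,
                   rng=None):
    """Random horizontal/vertical offsets (pixels) within the jitter amplitude."""
    rng = np.random.default_rng() if rng is None else rng
    amp_px = degrees_to_pixels(amplitude_deg, viewing_distance_cm, pixels_per_cm)
    return rng.uniform(-amp_px / 2.0, amp_px / 2.0, size=(n_steps, 2))

# Example: three random offsets for the two jitter amplitudes used in the study
# (0.5 and 2.6 degrees); the update interval (100-166 ms) would be handled by the
# display loop, not shown here.
for amp in (0.5, 2.6):
    print(amp, jitter_offsets(n_steps=3, amplitude_deg=amp, rng=np.random.default_rng(0)))
```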

  20. Learning semantic and visual similarity for endomicroscopy video retrieval.

    PubMed

    Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2012-06-01

    Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by the physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction, in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. As a result, our distance learning method makes it possible to statistically improve the correlation with the perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of visual signatures and significantly better than those of several state-of-the-art CBIR methods. The semantic signatures are thus able to communicate high-level medical knowledge while being consistent with the low-level visual signatures and much shorter than them. In our resulting retrieval system, we decide to use visual signatures for perceived similarity learning and retrieval, and semantic signatures for the output of additional information, expressed in the endoscopist's own language, which provides a relevant semantic translation of the visual retrieval outputs.
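
    The transformation of visual-word histograms into semantic signatures can be illustrated with a much simpler estimator than the authors' Fisher-based method: each visual word is weighted by the empirical frequency with which it co-occurs with each of the eight binary concepts in a training set. All names and array shapes below are illustrative assumptions.

```python
import numpy as np

def learn_word_concept_weights(histograms, concept_labels, eps=1e-6):
    """Estimate, for each visual word, how strongly it signals each concept.

    histograms     : (n_videos, n_words) visual-word counts per training video.
    concept_labels : (n_videos, n_concepts) binary ground-truth concept annotations.
    Returns a (n_words, n_concepts) matrix of P(concept | visual word) estimates.
    """
    word_totals = histograms.sum(axis=0) + eps    # total mass of each visual word
    co_occurrence = histograms.T @ concept_labels  # word mass seen with each concept
    return co_occurrence / word_totals[:, None]

def semantic_signature(histogram, weights):
    """Turn one video's visual-word histogram into a semantic signature."""
    hist = histogram / (histogram.sum() + 1e-6)
    return hist @ weights  # expressed presence of each concept

# Tiny example with 5 videos, 10 visual words, and 8 binary concepts.
rng = np.random.default_rng(0)
train_hists = rng.integers(0, 20, size=(5, 10)).astype(float)
train_labels = rng.integers(0, 2, size=(5, 8)).astype(float)
W = learn_word_concept_weights(train_hists, train_labels)
print(semantic_signature(train_hists[0], W))  # 8-dimensional semantic signature
```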

  1. Brain processing of visual information during fast eye movements maintains motor performance.

    PubMed

    Panouillères, Muriel; Gaveau, Valérie; Socasau, Camille; Urquizar, Christian; Pélisson, Denis

    2013-01-01

    Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution--by shifting a visual target at saccade onset and blanking it at saccade offset--induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.

  2. Familiarity enhances visual working memory for faces.

    PubMed

    Jackson, Margaret C; Raymond, Jane E

    2008-06-01

    Although it is intuitive that familiarity with complex visual objects should aid their preservation in visual working memory (WM), empirical evidence for this is lacking. This study used a conventional change-detection procedure to assess visual WM for unfamiliar and famous faces in healthy adults. Across experiments, faces were upright or inverted and a low- or high-load concurrent verbal WM task was administered to suppress contribution from verbal WM. Even with a high verbal memory load, visual WM performance was significantly better and capacity estimated as significantly greater for famous versus unfamiliar faces. Face inversion abolished this effect. Thus, neither strategic, explicit support from verbal WM nor low-level feature processing easily accounts for the observed benefit of high familiarity for visual WM. These results demonstrate that storage of items in visual WM can be enhanced if robust visual representations of them already exist in long-term memory.

  3. Training on Movement Figure-Ground Discrimination Remediates Low-Level Visual Timing Deficits in the Dorsal Stream, Improving High-Level Cognitive Functioning, Including Attention, Reading Fluency, and Working Memory.

    PubMed

    Lawton, Teri; Shelley-Tremblay, John

    2017-01-01

    The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background, figure-ground discrimination, improves vision and cognitive functioning in dyslexics, as well as typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination ( PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills following neurotraining exercises, more than following computer-based guided reading ( Raz-Kids ( RK )). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training the entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12-weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency and working memory for both dyslexic and normal students. Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement discrimination training in improving high-level cognitive functions such as attention, reading acquisition and working memory. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways in the dorsal stream is a fundamental cause of dyslexia and being at-risk for reading problems in normal students, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological or language deficits, requiring a paradigm shift from phonologically-based treatment of dyslexia to a visually-based treatment. This study shows that visual movement-discrimination can be used not only to diagnose dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning.

  4. Training on Movement Figure-Ground Discrimination Remediates Low-Level Visual Timing Deficits in the Dorsal Stream, Improving High-Level Cognitive Functioning, Including Attention, Reading Fluency, and Working Memory

    PubMed Central

    Lawton, Teri; Shelley-Tremblay, John

    2017-01-01

    The purpose of this study was to determine whether neurotraining to discriminate a moving test pattern relative to a stationary background, figure-ground discrimination, improves vision and cognitive functioning in dyslexics, as well as typically-developing normal students. We predict that improving the speed and sensitivity of figure-ground movement discrimination (PATH to Reading neurotraining) acts to remediate visual timing deficits in the dorsal stream, thereby improving processing speed, reading fluency, and the executive control functions of attention and working memory in both dyslexic and normal students who had PATH neurotraining more than in those students who had no neurotraining. This prediction was evaluated by measuring whether dyslexic and normal students improved on standardized tests of cognitive skills following neurotraining exercises, more than following computer-based guided reading (Raz-Kids (RK)). The neurotraining used in this study was visually-based training designed to improve magnocellular function at both low and high levels in the dorsal stream: the input to the executive control networks coding working memory and attention. This approach represents a paradigm shift from the phonologically-based treatment for dyslexia, which concentrates on high-level speech and reading areas. This randomized controlled-validation study was conducted by training the entire second and third grade classrooms (42 students) for 30 min twice a week before guided reading. Standardized tests were administered at the beginning and end of 12-weeks of intervention training to evaluate improvements in academic skills. Only movement-discrimination training remediated both low-level visual timing deficits and high-level cognitive functioning, including selective and sustained attention, reading fluency and working memory for both dyslexic and normal students. Remediating visual timing deficits in the dorsal stream revealed the causal role of visual movement discrimination training in improving high-level cognitive functions such as attention, reading acquisition and working memory. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways in the dorsal stream is a fundamental cause of dyslexia and being at-risk for reading problems in normal students, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological or language deficits, requiring a paradigm shift from phonologically-based treatment of dyslexia to a visually-based treatment. This study shows that visual movement-discrimination can be used not only to diagnose dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning. PMID:28555097

  5. Negative BOLD in sensory cortices during verbal memory: a component in generating internal representations?

    PubMed

    Azulay, Haim; Striem, Ella; Amedi, Amir

    2009-05-01

    People tend to close their eyes when trying to retrieve an event or a visual image from memory. However, the brain mechanisms behind this phenomenon remain poorly understood. Recently, we showed that during visual mental imagery, auditory areas show a much more robust deactivation than during visual perception. Here we ask whether this is a special case of a more general phenomenon involving retrieval of intrinsic, internally stored information, which would result in crossmodal deactivations in other sensory cortices which are irrelevant to the task at hand. To test this hypothesis, a group of 9 sighted individuals were scanned while performing a memory retrieval task for highly abstract words (i.e., with low imaginability scores). We also scanned a group of 10 congenitally blind individuals, who by definition do not have any visual imagery per se. In sighted subjects, both auditory and visual areas were robustly deactivated during memory retrieval, whereas in the blind the auditory cortex was deactivated while visual areas, shown previously to be relevant for this task, presented a positive BOLD signal. These results suggest that deactivation may be most prominent in task-irrelevant sensory cortices whenever there is a need for retrieval or manipulation of internally stored representations. Thus, there is a task-dependent balance of activation and deactivation that might allow maximization of resources and filtering out of irrelevant information to enable allocation of attention to the required task. Furthermore, these results suggest that the balance between positive and negative BOLD might be crucial to our understanding of a large variety of intrinsic and extrinsic tasks including high-level cognitive functions, sensory processing and multisensory integration.

  6. Detection of regional air pollution episodes utilizing satellite data in the visual range

    NASA Technical Reports Server (NTRS)

    Bowley, C. J.; Burke, H. K.; Barnes, J. C.

    1981-01-01

    A comparative analysis of satellite-observed haze patterns and ground-based aerosol measurements is carried out for July 20-23, 1978. During this period, a significant regional air pollution episode existed across the northeastern United States, accompanied by widespread haze, reduced surface visibility, and elevated sulfate levels measured by the Sulfate Regional Experiment (SURE) network. The results show that the satellite-observed haze patterns correlate closely with the area of reported low surface visibility (less than 4 mi) and high sulfate levels. Quantitative information on total aerosol loading derived from the satellite-digitized data, using an atmospheric radiative transfer model, agrees well with the results obtained from the ground-based measurements.

  7. Visually Translating Educational Materials for Ethnic Populations.

    ERIC Educational Resources Information Center

    Schiffman, Carole B.

    Growing populations of older adults, ethnic minorities, and the low-literate create unique concerns for the design of visual information. Those for whom text presents a barrier will respond most to legibility, use of familiar formats and symbols, and simplification. Guidelines for those processes are needed, and this paper, in particular,…

  8. Effects of redundancy in the comparison of speech and pictorial displays in the cockpit environment.

    PubMed

    Byblow, W D

    1990-06-01

    Synthesised speech and pictorial displays were compared in a spatially compatible simulated cockpit environment. Messages of high or low levels of redundancy were presented to subjects in both modality conditions. Subjects responded to warnings presented in a warning-only condition and in a dual-task condition, in which a simulated flight task was performed with visual and manual input/output modalities. Because the amount of information presented in most real-world applications and experimental paradigms is quantifiably large with respect to present guidelines for the use of synthesised speech warnings, the low-redundancy condition was hypothesised to allow for better performance. Results showed that subjects respond quicker to messages of low redundancy in both modalities. It is suggested that speech messages with low-redundancy levels were effective in minimising message length and ensuring that messages did not overload the short-term memory required to process and maintain speech in memory. Manipulation of phrase structure was used to optimise message redundancy and enhance the conceptual compatibility of the message without increasing message length or imposing a perceptual cost or memory overload. The results also suggest that system response times were quicker when synthesised speech warnings were used. This result is consistent with predictions from multiple resource theory which states that the resources required for the perception of verbal warnings are different from those for the flight task. It is also suggested that the perception of a pictorial display requires the same resources used for the perception of the primary flight task. An alternative explanation is that pictorial displays impose a visual scanning cost which is responsible for decreased performance. Based on the findings reported here, it is suggested that speech displays be incorporated in a spatially compatible cockpit environment because they allow equal or better performance when compared with pictorial displays. More importantly, the amount of time that the operator must direct his vision away from information vital to the flight task is decreased.

  9. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    NASA Astrophysics Data System (ADS)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Because of spatial non-uniformity, different locations in an image are of different importance for the perception of the image. In other words, perceived image quality depends mainly on the quality of important locations known as regions of interest (ROIs). The performance of such techniques is measured by subjective evaluation or objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM are based on image structural information, VIF on the information that the human brain can ideally gain from the reference image, and FSIM uses low-level features to assign a different importance to each location in the image. Still, none of these objective metrics utilizes an analysis of regions of interest. We address the question of whether these objective metrics can be used for the effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper, the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROIs were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROIs in fine quality while the rest of the image is reconstructed with low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
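
    One simple way to make an objective criterion sensitive to region-of-interest quality, in the spirit of what the authors call for, is to pool a per-pixel error map with ROI-dependent weights. The sketch below does this for a plain squared-error map; it is an illustration of the general idea, not a proposed criterion, and the weight split is an assumed value.

```python
import numpy as np

def roi_weighted_mse(reference, reconstructed, roi_mask, roi_weight=0.9):
    """Pool squared error with more weight inside the regions of interest.

    reference, reconstructed : float images in [0, 1] with identical shape.
    roi_mask                 : boolean array, True inside regions of interest.
    roi_weight               : share of the total weight assigned to ROI pixels
                               (0.9 is an assumed value, not from the paper).
    """
    err = (reference - reconstructed) ** 2
    weights = np.where(roi_mask, roi_weight / max(roi_mask.sum(), 1),
                       (1.0 - roi_weight) / max((~roi_mask).sum(), 1))
    return float(np.sum(weights * err))

# Example: the same artefact is penalised far more when it falls inside the ROI
# than when it falls in the background.
ref = np.zeros((64, 64))
roi = np.zeros((64, 64), dtype=bool)
roi[16:32, 16:32] = True
bad_in_roi = ref.copy();  bad_in_roi[20:24, 20:24] = 1.0
bad_outside = ref.copy(); bad_outside[48:52, 48:52] = 1.0
print(roi_weighted_mse(ref, bad_in_roi, roi), roi_weighted_mse(ref, bad_outside, roi))
```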

  10. The Effects of Dietary Carotenoid Supplementation and Retinal Carotenoid Accumulation on Vision-Mediated Foraging in the House Finch

    PubMed Central

    Toomey, Matthew B.; McGraw, Kevin J.

    2011-01-01

    Background For many bird species, vision is the primary sensory modality used to locate and assess food items. The health and spectral sensitivities of the avian visual system are influenced by diet-derived carotenoid pigments that accumulate in the retina. Among wild House Finches (Carpodacus mexicanus), we have found that retinal carotenoid accumulation varies significantly among individuals and is related to dietary carotenoid intake. If diet-induced changes in retinal carotenoid accumulation alter spectral sensitivity, then they have the potential to affect visually mediated foraging performance. Methodology/Principal Findings In two experiments, we measured foraging performance of house finches with dietarily manipulated retinal carotenoid levels. We tested each bird's ability to extract visually contrasting food items from a matrix of inedible distracters under high-contrast (full) and dimmer low-contrast (red-filtered) lighting conditions. In experiment one, zeaxanthin-supplemented birds had significantly increased retinal carotenoid levels, but declined in foraging performance in the high-contrast condition relative to astaxanthin-supplemented birds that showed no change in retinal carotenoid accumulation. In experiments one and two combined, we found that retinal carotenoid concentrations predicted relative foraging performance in the low- vs. high-contrast light conditions in a curvilinear pattern. Performance was positively correlated with retinal carotenoid accumulation among birds with low to medium levels of accumulation (∼0.5–1.5 µg/retina), but declined among birds with very high levels (>2.0 µg/retina). Conclusion/Significance Our results suggest that carotenoid-mediated spectral filtering enhances color discrimination, but that this improvement is traded off against a reduction in sensitivity that can compromise visual discrimination. Thus, retinal carotenoid levels may be optimized to meet the visual demands of specific behavioral tasks and light environments. PMID:21747917

  11. Differential effect of visual masking in perceptual categorization.

    PubMed

    Hélie, Sébastien; Cousineau, Denis

    2015-06-01

    This article explores the visual information used to categorize stimuli drawn from a common stimulus space into verbal and nonverbal categories using 2 experiments. Experiment 1 explores the effect of target duration on verbal and nonverbal categorization using backward masking to interrupt visual processing. With categories equated for difficulty for long and short target durations, intermediate target duration shows an advantage for verbal categorization over nonverbal categorization. Experiment 2 tests whether the results of Experiment 1 can be explained by shorter target duration resulting in a smaller signal-to-noise ratio of the categorization stimulus. To test for this possibility, Experiment 2 used integration masking with the same stimuli, categories, and masks as Experiment 1 with a varying level of mask opacity. As predicted, low mask opacity yielded similar results to long target duration while high mask opacity yielded similar results to short target duration. Importantly, intermediate mask opacity produced an advantage for verbal categorization over nonverbal categorization, similar to intermediate target duration. These results suggest that verbal and nonverbal categorization are affected differently by manipulations affecting the signal-to-noise ratio of the stimulus, consistent with multiple-system theories of categorizations. The results further suggest that verbal categorization may be more digital (and more robust to low signal-to-noise ratio) while the information used in nonverbal categorization may be more analog (and less robust to lower signal-to-noise ratio). This article concludes with a discussion of how these new results affect the use of masking in perceptual categorization and multiple-system theories of perceptual category learning. (c) 2015 APA, all rights reserved).

  12. Visual object imagery and autobiographical memory: Object Imagers are better at remembering their personal past.

    PubMed

    Vannucci, Manila; Pelagatti, Claudia; Chiorri, Carlo; Mazzoni, Giuliana

    2016-01-01

    In the present study we examined whether higher levels of object imagery, a stable characteristic that reflects the ability and preference in generating pictorial mental images of objects, facilitate involuntary and voluntary retrieval of autobiographical memories (ABMs). Individuals with high (High-OI) and low (Low-OI) levels of object imagery were asked to perform an involuntary and a voluntary ABM task in the laboratory. Results showed that High-OI participants generated more involuntary and voluntary ABMs than Low-OI, with faster retrieval times. High-OI also reported more detailed memories compared to Low-OI and retrieved memories as visual images. Theoretical implications of these findings for research on voluntary and involuntary ABMs are discussed.

  13. Olfactory discrimination: when vision matters?

    PubMed

    Demattè, M Luisa; Sanabria, Daniel; Spence, Charles

    2009-02-01

    Many previous studies have attempted to investigate the effect of visual cues on olfactory perception in humans. The majority of this research has only looked at the modulatory effect of color, which has typically been explained in terms of multisensory perceptual interactions. However, such crossmodal effects may equally well relate to interactions taking place at a higher level of information processing as well. In fact, it is well-known that semantic knowledge can have a substantial effect on people's olfactory perception. In the present study, we therefore investigated the influence of visual cues, consisting of color patches and/or shapes, on people's olfactory discrimination performance. Participants had to make speeded odor discrimination responses (lemon vs. strawberry) while viewing a red or yellow color patch, an outline drawing of a strawberry or lemon, or a combination of these color and shape cues. Even though participants were instructed to ignore the visual stimuli, our results demonstrate that the accuracy of their odor discrimination responses was influenced by visual distractors. This result shows that both color and shape information are taken into account during speeded olfactory discrimination, even when such information is completely task irrelevant, hinting at the automaticity of such higher level visual-olfactory crossmodal interactions.

  14. Dissociable neural correlates of contour completion and contour representation in illusory contour perception.

    PubMed

    Wu, Xiang; He, Sheng; Bushara, Khalaf; Zeng, Feiyan; Liu, Ying; Zhang, Daren

    2012-10-01

    Object recognition occurs even when environmental information is incomplete. Illusory contours (ICs), in which a contour is perceived even though the contour edges are incomplete, have been extensively studied as an example of such a visual completion phenomenon. Although neural activity in response to ICs has been observed in visual cortical areas from low (V1 and V2) to high (LOC: the lateral occipital cortex) levels, the details of the neural processing underlying IC perception remain largely unclear. For example, how do the visual areas function in IC perception, and how do they interact to achieve coherent contour perception? IC perception involves the process of completing the local discrete contour edges (contour completion) and the process of representing the global completed contour information (contour representation). Here, functional magnetic resonance imaging was used to dissociate contour completion and contour representation by varying each in opposite directions. The results show that neural activity was stronger for stimuli with more contour completion than for stimuli with more contour representation in V1 and V2, which was the reverse of the pattern in the LOC. When inspecting the change in neural activity across the visual pathway, activation remained high for the stimuli with more contour completion and increased for the stimuli with more contour representation. These results suggest distinct neural correlates of contour completion and contour representation, and a possible collaboration between the two processes during IC perception, indicating a neural connection between the discrete retinal input and the coherent visual percept. Copyright © 2011 Wiley Periodicals, Inc.

  15. Real-time distributed video coding for 1K-pixel visual sensor networks

    NASA Astrophysics Data System (ADS)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  16. Skilled deaf readers have an enhanced perceptual span in reading.

    PubMed

    Bélanger, Nathalie N; Slattery, Timothy J; Mayberry, Rachel I; Rayner, Keith

    2012-07-01

    Recent evidence suggests that, compared with hearing people, deaf people have enhanced visual attention to simple stimuli viewed in the parafovea and periphery. Although a large part of reading involves processing the fixated words in foveal vision, readers also utilize information in parafoveal vision to preprocess upcoming words and decide where to look next. In the study reported here, we investigated whether auditory deprivation affects low-level visual processing during reading by comparing the perceptual span of deaf signers who were skilled and less-skilled readers with the perceptual span of skilled hearing readers. Compared with hearing readers, the two groups of deaf readers had a larger perceptual span than would be expected given their reading ability. These results provide the first evidence that deaf readers' enhanced attentional allocation to the parafovea is used during complex cognitive tasks, such as reading.

  17. Health services experiences of parents of recently diagnosed visually impaired children

    PubMed Central

    Rahi, J S; Manaras, I; Tuomainen, H; Lewando Hundt, G

    2005-01-01

    Aim: To investigate the health service experiences and needs of parents in the period around diagnosis of ophthalmic disorders in their children. Methods: Parents of children newly diagnosed with visual impairment and/or ophthalmic disorders at a tertiary level hospital in London participated in a questionnaire survey, using standard instruments, followed by in-depth individual interviews, to elicit their views about the processes of care, their overall level of satisfaction, and their unmet needs. Results: 67% (147) of eligible families (135 mothers, 76 fathers) participated. Overall satisfaction with care was high, being greater among parents of children with milder visual loss or isolated ophthalmic disorders than among those with more severe visual loss or multiple impairments. Nevertheless, the greatest need reported by parents was the provision of general information, including information about their child’s ophthalmic disorder and about educational and social services and support. Mothers reported greater information needs than fathers, as did white parents compared to those from ethnic minorities. White parents also regarded the processes of care as less comprehensive and coordinated, as well as less enabling, than did parents from ethnic minorities. Conclusions: Although parents reported high overall satisfaction with services, improving the medium, content, and scope of the general information provided by professionals to parents of visually impaired children emerges as a priority. Equitable planning and provision of health services for families of children with visual impairment needs to take into account that informational and other needs vary by whether or not the parent is the primary carer, by ethnicity, and by the severity and complexity of the child’s visual loss. PMID:15665355

  18. Synaptic Correlates of Low-Level Perception in V1.

    PubMed

    Gerard-Mercier, Florian; Carelli, Pedro V; Pananceau, Marc; Troncoso, Xoana G; Frégnac, Yves

    2016-04-06

    The computational role of primary visual cortex (V1) in low-level perception remains largely debated. A dominant view assumes the prevalence of higher cortical areas and top-down processes in binding information across the visual field. Here, we investigated the role of long-distance intracortical connections in form and motion processing by measuring, with intracellular recordings, their synaptic impact on neurons in area 17 (V1) of the anesthetized cat. By systematically mapping synaptic responses to stimuli presented in the nonspiking surround of V1 receptive fields, we provide the first quantitative characterization of the lateral functional connectivity kernel of V1 neurons. Our results revealed at the population level two structural-functional biases in the synaptic integration and dynamic association properties of V1 neurons. First, subthreshold responses to oriented stimuli flashed in isolation in the nonspiking surround exhibited a geometric organization around the preferred orientation axis mirroring the psychophysical "association field" for collinear contour perception. Second, apparent motion stimuli, for which horizontal and feedforward synaptic inputs summed in-phase, evoked dominantly facilitatory nonlinear interactions, specifically during centripetal collinear activation along the preferred orientation axis, at saccadic-like speeds. This spatiotemporal integration property, which could constitute the neural correlate of a human perceptual bias in speed detection, suggests that local (orientation) and global (motion) information is already linked within V1. We propose the existence of a "dynamic association field" in V1 neurons, whose spatial extent and anisotropy are transiently updated and reshaped as a function of changes in the retinal flow statistics imposed during natural oculomotor exploration. The computational role of primary visual cortex in low-level perception remains debated. The expression of this "pop-out" perception is often assumed to require attention-related processes, such as top-down feedback from higher cortical areas. Using intracellular techniques in the anesthetized cat and novel analysis methods, we reveal unexpected structural-functional biases in the synaptic integration and dynamic association properties of V1 neurons. These structural-functional biases provide a substrate, within V1, for contour detection and, more unexpectedly, global motion flow sensitivity at saccadic speed, even in the absence of attentional processes. We argue for the concept of a "dynamic association field" in V1 neurons, whose spatial extent and anisotropy changes with retinal flow statistics, and more generally for a renewed focus on intracortical computation. Copyright © 2016 the authors 0270-6474/16/363925-18$15.00/0.

  19. Exploring an optimal wavelet-based filter for cryo-ET imaging.

    PubMed

    Huang, Xinrui; Li, Sha; Gao, Song

    2018-02-07

    Cryo-electron tomography (cryo-ET) is one of the most advanced technologies for the in situ visualization of molecular machines by producing three-dimensional (3D) biological structures. However, cryo-ET imaging has two serious disadvantages, low dose and low image contrast, which obscure high-resolution information with noise and degrade image quality, causing errors in biological interpretation. The purpose of this research is to explore an optimal wavelet denoising technique to reduce noise in cryo-ET images. We perform tests using simulation data and design a filter using the optimal selected wavelet parameters (three-level decomposition, level-1 subbands zeroed out, subband-dependent thresholds, soft thresholding, and a spline-based discrete dyadic wavelet transform (DDWT)), which we call a modified wavelet shrinkage filter; this filter is suitable for noisy cryo-ET data. When testing with real cryo-ET experimental data, higher quality images and more accurate measures of a biological structure can be obtained with modified wavelet shrinkage filtering than with conventional processing. Because the proposed method provides an inherent advantage when dealing with cryo-ET images, it can therefore extend the current state-of-the-art technology in assisting all aspects of cryo-ET studies: visualization, reconstruction, structural analysis, and interpretation.
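
    For readers who want to experiment with this kind of denoising, the following is a minimal sketch of a wavelet shrinkage filter, assuming PyWavelets' standard 2-D discrete wavelet transform as a stand-in for the spline-based DDWT used by the authors. The three-level decomposition, zeroing of the finest (level-1) subbands, and soft thresholding follow the parameters listed above; the noise estimate and the threshold rule are illustrative assumptions, not the paper's.

      # Hypothetical wavelet-shrinkage denoiser for a 2-D cryo-ET slice (illustrative only).
      import numpy as np
      import pywt

      def modified_wavelet_shrinkage(image, wavelet="bior2.2", levels=3):
          # Three-level 2-D decomposition: [cA3, (cH3, cV3, cD3), ..., (cH1, cV1, cD1)]
          coeffs = pywt.wavedec2(image, wavelet, level=levels)
          approx, details = coeffs[0], coeffs[1:]
          # Robust noise estimate from the finest diagonal subband (assumed, not from the paper).
          sigma = np.median(np.abs(details[-1][-1])) / 0.6745
          new_details = []
          for depth, (ch, cv, cd) in enumerate(details, start=1):
              if depth == len(details):        # finest scale ("level 1"): zero it out
                  new_details.append(tuple(np.zeros_like(c) for c in (ch, cv, cd)))
              else:                            # subband-dependent soft threshold (assumed form)
                  new_details.append(tuple(
                      pywt.threshold(c, sigma * np.sqrt(2 * np.log(c.size)), mode="soft")
                      for c in (ch, cv, cd)))
          return pywt.waverec2([approx] + new_details, wavelet)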

  20. Composite Measures of Individual and Area-Level Socio-Economic Status Are Associated with Visual Impairment in Singapore

    PubMed Central

    Wah, Win; Earnest, Arul; Sabanayagam, Charumathi; Cheng, Ching-Yu; Ong, Marcus Eng Hock; Wong, Tien Y.; Lamoureux, Ecosse L.

    2015-01-01

    Purpose: To investigate the independent relationship of individual- and area-level socio-economic status (SES) with the presence and severity of visual impairment (VI) in an Asian population. Methods: Cross-sectional data from 9993 Chinese, Malay and Indian adults aged 40–80 years who participated in the Singapore Epidemiology of Eye Diseases study (2004–2011) in Singapore. Based on the presenting visual acuity (PVA) in the better-seeing eye, VI was categorized into normal vision (logMAR ≤ 0.30), low vision (logMAR > 0.30 and < 1.00), and blindness (logMAR ≥ 1.00). Any VI was defined as low vision or blindness in the PVA of the better-seeing eye. Individual-level low SES was defined as a composite of primary-level education, monthly income < 2000 SGD, and residing in a 1- or 2-room public apartment. Area-level SES was assessed using a socio-economic disadvantage index (SEDI), created using 12 variables from the 2010 Singapore census; a high SEDI score indicates relatively poor SES. Associations between SES measures and the presence and severity of VI were examined using multi-level, mixed-effects logistic and multinomial regression models. Results: The age-adjusted prevalence of any VI was 19.62% (low vision = 19%, blindness = 0.62%). Both individual- and area-level SES were positively associated with any VI and low vision after adjusting for confounders. The odds ratio (95% confidence interval) of any VI was 2.11 (1.88–2.37) for low SES and 1.07 (1.02–1.13) per 1 standard deviation increase in SEDI. When stratified by unilateral/bilateral categories, low SES showed significant associations with all categories, whereas SEDI showed a significant association with bilateral low vision only. The association between low SES and any VI remained significant among all age, gender and ethnic sub-groups. Although a consistent positive association was observed between area-level SEDI and any VI, the associations were significant only among participants aged 40–65 years and among men. Conclusion: In this community-based sample of Asian adults, both individual- and area-level SES were independently associated with the presence and severity of VI. PMID:26555141
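
    As a small illustration of the VI categorization used above, the helper below maps a presenting visual acuity (logMAR, better-seeing eye) onto the three categories; the function name is hypothetical, and the handling of the exact cut-off values follows the inequalities quoted in the abstract.

      def categorize_vi(logmar_better_eye: float) -> str:
          """Categorize presenting visual acuity (better-seeing eye) by logMAR.

          Cut-offs follow the abstract: normal <= 0.30, blindness >= 1.00,
          low vision in between; "any VI" means low vision or blindness.
          """
          if logmar_better_eye <= 0.30:
              return "normal vision"
          if logmar_better_eye >= 1.00:
              return "blindness"
          return "low vision"

      # Example: a presenting acuity of logMAR 0.5 (about 6/19) counts as low vision.
      assert categorize_vi(0.5) == "low vision"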

  1. Sketchy Rendering for Information Visualization.

    PubMed

    Wood, J; Isenberg, P; Isenberg, T; Dykes, J; Boukhelifa, N; Slingsby, A

    2012-12-01

    We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These primitives allow higher-level graphical features such as bar charts, line charts, treemaps and node-link diagrams to be drawn in a sketchy style with a specified degree of sketchiness. The framework is designed to be easily integrated into existing visualization implementations with minimal programming modification or design effort. We show examples of use for statistical graphics, conveying spatial imprecision and for enhancing aesthetic and narrative qualities of visualization. We evaluate user perception of sketchiness of areal features through a series of stimulus-response tests in order to assess users' ability to place sketchiness on a ratio scale, and to estimate area. Results suggest relative area judgment is compromised by sketchy rendering and that its influence is dependent on the shape being rendered. They show that degree of sketchiness may be judged on an ordinal scale but that its judgement varies strongly between individuals. We evaluate higher-level impacts of sketchiness through user testing of scenarios that encourage user engagement with data visualization and willingness to critique visualization design. Results suggest that where a visualization is clearly sketchy, engagement may be increased and that attitudes to participating in visualization annotation are more positive. The results of our work have implications for effective information visualization design that go beyond the traditional role of sketching as a tool for prototyping or its use for an indication of general uncertainty.
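
    The renderer itself targets the Processing environment; purely to illustrate the idea of a tunable sketchiness parameter, the Python sketch below perturbs a straight line into a jittered, hand-drawn-looking polyline. The perturbation model (Gaussian jitter perpendicular to the stroke, scaled by a sketchiness value, with anchored endpoints) is an assumption for illustration, not the authors' algorithm.

      import numpy as np

      def sketchy_line(p0, p1, sketchiness=1.0, segments=20, rng=None):
          """Return a jittered polyline approximating the segment p0 -> p1.

          sketchiness scales the perpendicular Gaussian jitter; 0 gives a straight line.
          Illustrative stand-in only, not the renderer described in the paper.
          """
          rng = np.random.default_rng() if rng is None else rng
          p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
          t = np.linspace(0.0, 1.0, segments + 1)[:, None]
          points = (1 - t) * p0 + t * p1                    # evenly spaced points on the line
          direction = (p1 - p0) / np.linalg.norm(p1 - p0)
          normal = np.array([-direction[1], direction[0]])  # unit normal in 2-D
          jitter = rng.normal(0.0, sketchiness, size=len(points))
          jitter[0] = jitter[-1] = 0.0                      # keep the endpoints anchored
          return points + jitter[:, None] * normal

      # Example: a larger sketchiness value produces a visibly wobblier stroke.
      wobbly = sketchy_line((0, 0), (100, 0), sketchiness=3.0)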

  2. Age effects on visual-perceptual processing and confrontation naming.

    PubMed

    Gutherie, Audrey H; Seely, Peter W; Beacham, Lauren A; Schuchard, Ronald A; De l'Aune, William A; Moore, Anna Bacon

    2010-03-01

    The impact of age-related changes in visual-perceptual processing on naming ability has not been reported. The present study investigated the effects of 6 levels of spatial frequency and 6 levels of contrast on accuracy and latency to name objects in 14 young and 13 older neurologically normal adults with intact lexical-semantic functioning. Spatial frequency and contrast manipulations were made independently. Consistent with the hypotheses, variations in these two visual parameters impact naming ability in young and older subjects differently. The results from the spatial frequency manipulations revealed that, in general, young vs. older subjects are faster and more accurate to name. However, this age-related difference is dependent on the spatial frequency of the image; differences were only seen for images presented at low (e.g., 0.25-1 c/deg) or high (e.g., 8-16 c/deg) spatial frequencies. Contrary to predictions, the results from the contrast manipulations revealed that overall older vs. young adults are more accurate to name. Again, however, differences were only seen for images presented at the lower levels of contrast (i.e., 1.25%). Both age groups had shorter latencies on the second exposure of the contrast-manipulated images, but this possible advantage of exposure was not seen for spatial frequency. Category analyses conducted on the data from this study indicate that older vs. young adults exhibit a stronger nonliving-object advantage for naming spatial frequency-manipulated images. Moreover, the findings suggest that bottom-up visual-perceptual variables integrate with top-down category information in different ways. Potential implications for the aging and naming (and recognition) literature are discussed.

  3. Autism spectrum traits and visual processing in young adults with very low birth weight: the Helsinki Study of Very Low Birth Weight adults.

    PubMed

    Wolford, E; Pesonen, A-K; Heinonen, K; Lahti, M; Pyhälä, R; Lahti, J; Hovi, P; Strang-Karlsson, S; Eriksson, J G; Andersson, S; Järvenpää, A-L; Kajantie, E; Räikkönen, K

    2017-04-01

    Visual processing problems may be one underlying factor for cognitive impairments related to autism spectrum disorders (ASDs). We examined associations between ASD-traits (Autism-Spectrum Quotient) and visual processing performance (Rey-Osterrieth Complex Figure Test; Block Design task of the Wechsler Adult Intelligence Scale-III) in young adults (mean age=25.0, s.d.=2.1 years) born preterm at very low birth weight (VLBW; <1500 g) (n=101) or at term (n=104). A higher level of ASD-traits was associated with slower global visual processing speed among the preterm VLBW, but not among the term-born group (P<0.04 for interaction). Our findings suggest that the associations between ASD-traits and visual processing may be restricted to individuals born preterm, and related specifically to global, not local visual processing. Our findings point to cumulative social and neurocognitive problems in those born preterm at VLBW.

  4. Children inhibit global information when the forest is dense and local information when the forest is sparse.

    PubMed

    Krakowski, Claire-Sara; Borst, Grégoire; Vidal, Julie; Houdé, Olivier; Poirel, Nicolas

    2018-09-01

    Visual environments are composed of global shapes and local details that compete for attentional resources. In adults, the global level is processed more rapidly than the local level, and global information must be inhibited in order to process local information when the local information and global information are in conflict. Compared with adults, children present less of a bias toward global visual information and appear to be more sensitive to the density of local elements that constitute the global level. The current study aimed, for the first time, to investigate the key role of inhibition during global/local processing in children. By including two different conditions of global saliency during a negative priming procedure, the results showed that when the global level was salient (dense hierarchical figures), 7-year-old children and adults needed to inhibit the global level to process the local information. However, when the global level was less salient (sparse hierarchical figures), only children needed to inhibit the local level to process the global information. These results confirm a weaker global bias and the greater impact of saliency in children than in adults. Moreover, the results indicate that, regardless of age, inhibition of the most salient hierarchical level is systematically required to select the less salient but more relevant level. These findings have important implications for future research in this area. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. The sensory basis of rheotaxis in turbulent flow

    NASA Astrophysics Data System (ADS)

    Elder, John P.

    Rheotaxis is a robust, multisensory behavior with many potential benefits for fish and other aquatic animals, yet the influence of different fluvial conditions on rheotactic performance and its sensory basis is still poorly understood. Here, we examine the role that vision and the lateral line play in the rheotactic behavior of a stream-dwelling species (Mexican tetra, Astyanax mexicanus) under both rectilinear and turbulent flow conditions. Turbulence enhanced overall rheotactic strength and lowered the flow speed at which rheotaxis was initiated; this effect did not depend on the availability of either visual or lateral line information. Compared to fish without access to visual information, fish with access to visual information exhibited increased levels of positional stability and as a result, increased levels of rheotactic accuracy. No disruption in rheotactic performance was found when the lateral line was disabled, suggesting that this sensory system is not necessary for either rheotaxis or turbulence detection under the conditions of this study.

  6. Dissociating visual form from lexical frequency using Japanese.

    PubMed

    Twomey, Tae; Kawabata Duncan, Keith J; Hogan, John S; Morita, Kenji; Umeda, Kazumasa; Sakai, Katsuyuki; Devlin, Joseph T

    2013-05-01

    In Japanese, the same word can be written in either morphographic Kanji or syllabographic Hiragana and this provides a unique opportunity to disentangle a word's lexical frequency from the frequency of its visual form - an important distinction for understanding the neural information processing in regions engaged by reading. Behaviorally, participants responded more quickly to high than low frequency words and to visually familiar relative to less familiar words, independent of script. Critically, the imaging results showed that visual familiarity, as opposed to lexical frequency, had a strong effect on activation in ventral occipito-temporal cortex. Activation here was also greater for Kanji than Hiragana words and this was not due to their inherent differences in visual complexity. These findings can be understood within a predictive coding framework in which vOT receives bottom-up information encoding complex visual forms and top-down predictions from regions encoding non-visual attributes of the stimulus. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Psychological Adjustment and Levels of Self Esteem in Children with Visual-Motor Integration Difficulties Influences the Results of a Randomized Intervention Trial

    ERIC Educational Resources Information Center

    Lahav, Orit; Apter, Alan; Ratzon, Navah Z.

    2013-01-01

    This study evaluates how much the effects of intervention programs are influenced by pre-existing psychological adjustment and self-esteem levels in kindergarten and first grade children with poor visual-motor integration skills, from low socioeconomic backgrounds. One hundred and sixteen mainstream kindergarten and first-grade children, from low…

  8. Rarefied flow diagnostics using pulsed high-current electron beams

    NASA Technical Reports Server (NTRS)

    Wojcik, Radoslaw M.; Schilling, John H.; Erwin, Daniel A.

    1990-01-01

    The use of high-current short-pulse electron beams in low-density gas flow diagnostics is introduced. Efficient beam propagation is demonstrated for pressures up to 300 microns. The beams, generated by low-pressure pseudospark discharges in helium, provide extremely high fluorescence levels, allowing time-resolved visualization in high-background environments. The fluorescence signal frequency is species-dependent, allowing instantaneous visualization of mixing flowfields.

  9. Creating visual explanations improves learning.

    PubMed

    Bobek, Eliza; Tversky, Barbara

    2016-01-01

    Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond those of creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding, particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.

  10. Gestalten of today: early processing of visual contours and surfaces.

    PubMed

    Kovács, I

    1996-12-01

    While much is known about the specialized, parallel processing streams of low-level vision that extract primary visual cues, there is only limited knowledge about the dynamic interactions between them. How are the fragments, caught by local analyzers, assembled together to provide us with a unified percept? How are local discontinuities in texture, motion or depth evaluated with respect to object boundaries and surface properties? These questions are presented within the framework of orientation-specific spatial interactions of early vision. Key observations of psychophysics, anatomy and neurophysiology on interactions of various spatial and temporal ranges are reviewed. Aspects of the functional architecture and possible neural substrates of local orientation-specific interactions are discussed, underlining their role in the integration of information across the visual field, and particularly in contour integration. Examples are provided demonstrating that global context, such as contour closure and figure-ground assignment, affects these local interactions. It is illustrated that figure-ground assignment is realized early in visual processing, and that the pattern of early interactions also brings about an effective and sparse coding of visual shape. Finally, it is concluded that the underlying functional architecture is not only dynamic and context dependent, but the pattern of connectivity depends as much on past experience as on actual stimulation.

  11. Force sensor attachable to thin fiberscopes/endoscopes utilizing high elasticity fabric.

    PubMed

    Watanabe, Tetsuyou; Iwai, Takanobu; Fujihira, Yoshinori; Wakako, Lina; Kagawa, Hiroyuki; Yoneyama, Takeshi

    2014-03-12

    An endoscope/fiberscope is a minimally invasive tool used for directly observing tissues in areas deep inside the human body where access is limited. However, this tool only yields visual information. If force feedback information were also available, endoscope/fiberscope operators would be able to detect indurated areas that are visually hard to recognize. Furthermore, obtaining such feedback information from tissues in areas where collecting visual information is a challenge would be highly useful. The major obstacle is that such force information is difficult to acquire. This paper presents a novel force sensing system that can be attached to a very thin fiberscope/endoscope. To ensure a small size, high resolution, easy sterilization, and low cost, the proposed force visualization-based system uses a highly elastic material: panty stocking fabric. The paper also presents the methodology for deriving the force value from the captured image. The system has a resolution of less than 0.01 N and a sensitivity of greater than 600 pixels/N within the force range of 0-0.2 N.

  12. The effect of visual and verbal modes of presentation on children's retention of images and words

    NASA Astrophysics Data System (ADS)

    Vasu, Ellen Storey; Howe, Ann C.

    This study tested the hypothesis that the use of two modes of presenting information to children has an additive memory effect for the retention of both images and words. Subjects were 22 first-grade and 22 fourth-grade children randomly assigned to visual and visual-verbal treatment groups. The visual-verbal group heard a description while observing an object; the visual group observed the same object but did not hear a description. Children were tested individually immediately after presentation of stimuli and two weeks later. They were asked to represent the information recalled through a drawing and an oral verbal description. In general, results supported the hypothesis and indicated, in addition, that children represent more information in iconic (pictorial) form than in symbolic (verbal) form. Strategies for using these results to enhance science learning at the elementary school level are discussed.

  13. Eyes Matched to the Prize: The State of Matched Filters in Insect Visual Circuits.

    PubMed

    Kohn, Jessica R; Heath, Sarah L; Behnia, Rudy

    2018-01-01

    Confronted with an ever-changing visual landscape, animals must be able to detect relevant stimuli and translate this information into behavioral output. A visual scene contains an abundance of information: to interpret the entirety of it would be uneconomical. To optimally perform this task, neural mechanisms exist to enhance the detection of important features of the sensory environment while simultaneously filtering out irrelevant information. This can be accomplished by using a circuit design that implements specific "matched filters" that are tuned to relevant stimuli. Following this rule, the well-characterized visual systems of insects have evolved to streamline feature extraction on both a structural and functional level. Here, we review examples of specialized visual microcircuits for vital behaviors across insect species, including feature detection, escape, and estimation of self-motion. Additionally, we discuss how these microcircuits are modulated to weigh relevant input with respect to different internal and behavioral states.

  14. Information fusion via isocortex-based Area 37 modeling

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    A simplified model of information processing in the brain can be constructed using primary sensory input from two modalities (auditory and visual) and recurrent connections to the limbic subsystem. Information fusion would then occur in Area 37 of the temporal cortex. The creation of meta-concepts from the low-order primary inputs is managed by models of isocortex processing. Isocortex algorithms are used to model parietal (auditory), occipital (visual), and temporal (polymodal fusion) cortex and the limbic system. Each of these four modules is constructed from five cortical stacks, in which each stack consists of three vertically oriented six-layer isocortex models. The input-to-output training of each cortical model uses the OCOS (on-center, off-surround) and FFP (folded feedback pathway) circuitry of (Grossberg, 1), which is inherently a recurrent-network type of learning characterized by the identification of perceptual groups. Models of this sort are thus closely related to cognitive models, as it is difficult to divorce the sensory processing subsystems from the higher-level processing in the associative cortex. The overall software architecture presented is biologically based and is presented as a potential architectural prototype for the development of novel sensory fusion strategies. The algorithms are motivated to some degree by specific data from projects on musical composition and autonomous fine art painting programs, but only in the sense that these projects use two specific types of auditory and visual cortex data. Hence, the architecture is presented for an artificial information processing system that utilizes two disparate sensory sources. The exact nature of the two primary sensory input streams is irrelevant.

  15. Are children with low vision adapted to the visual environment in classrooms of mainstream schools?

    PubMed

    Negiloni, Kalpa; Ramani, Krishna Kumar; Jeevitha, R; Kalva, Jayashree; Sudhir, Rachapalle Reddi

    2018-02-01

    The study aimed to evaluate the classroom environment of children with low vision and provide recommendations to reduce visual stress, with a focus on mainstream schooling. The medical records of 110 children (5-17 years) seen in the low vision clinic of a tertiary care center in south India during a 1-year period (2015) were extracted. The visual function levels of the children were compared to the details of their classroom environment. The study evaluated and recommended the chalkboard visual task size and viewing distance required for children with mild, moderate, and severe visual impairment (VI). The major causes of low vision based on the site of abnormality and etiology were retinal (80%) and hereditary (67%) conditions, respectively, in children with mild (n = 18), moderate (n = 72), and severe (n = 20) VI. Many of the children (72%) had difficulty in viewing the chalkboard, and common strategies used for better visibility included copying from friends (47%) and moving closer to the chalkboard (42%). To view the chalkboard with reduced visual stress, a child with mild VI can be seated at a maximum distance of 4.3 m from the chalkboard, with the minimum size of the visual task (height of lowercase letter writing on the chalkboard) recommended to be 3 cm. For the 3/60-6/60 range, with a visual task size of 4 cm, the maximum viewing distance is recommended to be 85 cm to 1.7 m. Simple modifications of the visual task size and seating arrangements can aid children with low vision with better visibility of the chalkboard and reduced visual stress, helping them to manage in mainstream schools.
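
    The recommended sizes and distances above follow from visual-angle arithmetic. As a hedged illustration, the sketch below computes the chalkboard letter height needed for a given Snellen acuity and viewing distance, assuming the standard 5-arcminute criterion for a just-resolvable 6/6 letter and an explicit acuity-reserve factor; the reserve factor actually used by the authors is not stated in the abstract, so it is left as a free parameter here.

      import math

      def chalkboard_letter_height(snellen_numerator, snellen_denominator,
                                   viewing_distance_m, acuity_reserve=2.0):
          """Approximate letter height (cm) needed on a chalkboard.

          Assumes a 6/6 letter subtends 5 arcmin, so a child with acuity
          numerator/denominator needs letters (denominator/numerator) times larger,
          multiplied by an acuity-reserve factor for comfortable reading.
          The reserve factor is an assumption, not a value from the paper.
          """
          arcmin = 5.0 * (snellen_denominator / snellen_numerator) * acuity_reserve
          height_m = 2.0 * viewing_distance_m * math.tan(math.radians(arcmin / 60.0) / 2.0)
          return 100.0 * height_m

      # Example: a child with 6/60 acuity seated 1.7 m from the board needs letters of
      # roughly 4 cm if an acuity reserve of about 1.6 is assumed.
      print(round(chalkboard_letter_height(6, 60, 1.7, acuity_reserve=1.6), 1))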

  16. Visual perceptual load induces inattentional deafness.

    PubMed

    Macdonald, James S P; Lavie, Nilli

    2011-08-01

    In this article, we establish a new phenomenon of "inattentional deafness" and highlight the level of load on visual attention as a critical determinant of this phenomenon. In three experiments, we modified an inattentional blindness paradigm to assess inattentional deafness. Participants made either a low- or high-load visual discrimination concerning a cross shape (respectively, a discrimination of line color or of line length with a subtle length difference). A brief pure tone was presented simultaneously with the visual task display on a final trial. Failures to notice the presence of this tone (i.e., inattentional deafness) reached a rate of 79% in the high-visual-load condition, significantly more than in the low-load condition. These findings establish the phenomenon of inattentional deafness under visual load, thereby extending the load theory of attention (e.g., Lavie, Journal of Experimental Psychology. Human Perception and Performance, 25, 596-616, 1995) to address the cross-modal effects of visual perceptual load.

  17. Retinotopically specific reorganization of visual cortex for tactile pattern recognition

    PubMed Central

    Cheung, Sing-Hang; Fang, Fang; He, Sheng; Legge, Gordon E.

    2009-01-01

    Although previous studies have shown that Braille reading and other tactile-discrimination tasks activate the visual cortex of blind and sighted people [1–5], it is not known whether this kind of cross-modal reorganization is influenced by retinotopic organization. We have addressed this question by studying S, a visually impaired adult with the rare ability to read print visually and Braille by touch. S had normal visual development until age six years, and thereafter severe acuity reduction due to corneal opacification, but no evidence of visual-field loss. Functional magnetic resonance imaging (fMRI) revealed that, in S’s early visual areas, tactile information processing activated what would be the foveal representation for normally sighted individuals, and visual information processing activated what would be the peripheral representation. Control experiments showed that this activation pattern was not due to visual imagery. S’s high-level visual areas, which correspond to shape- and object-selective areas in normally sighted individuals, were activated by both visual and tactile stimuli. The retinotopically specific reorganization in early visual areas suggests an efficient redistribution of neural resources in the visual cortex. PMID:19361999

  18. Using geographic information systems to identify prospective marketing areas for a special library.

    PubMed

    McConnaughy, Rozalynd P; Wilson, Steven P

    2006-05-04

    The Center for Disability Resources (CDR) Library is the largest collection of its kind in the Southeastern United States, consisting of over 5,200 books, videos/DVDs, brochures, and audiotapes covering a variety of disability-related topics, from autism to transition resources. The purpose of the library is to support the information needs of families, faculty, students, staff, and other professionals in South Carolina working with individuals with disabilities. The CDR Library is funded on a yearly basis; therefore, maintaining high usage is crucial. A variety of promotional efforts have been used to attract new patrons to the library. Anyone in South Carolina can check out materials from the library, and most of the patrons use the library remotely by requesting materials, which are then mailed to them. The goal of this project was to identify areas of low geographic usage as a means of identifying locations for future library marketing efforts. Nearly four years' worth of library statistics were compiled in a spreadsheet that provided information per county on the number of checkouts, the number of renewals, and the population. Five maps were created using ArcView GIS software to create visual representations of patron checkout and renewal behavior per county. Out of the 46 counties in South Carolina, eight counties never checked out materials from the library. As expected, urban areas and counties near the library's physical location have high usage totals. The visual representation of the data made identification of low-usage regions easier than using a stand-alone database with no visual-spatial component. The low-usage counties will be the focus of future Center for Disability Resources Library marketing efforts. Because the visual-spatial representations created with geographic information systems communicate information more efficiently than stand-alone database information can, librarians may benefit from using the software as a supplemental tool for tracking library usage and planning promotional efforts.
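
    The original maps were produced in ArcView; as a rough modern equivalent, a choropleth of checkouts per capita by county could be drawn with geopandas along the lines sketched below. The file names and column names (county shapefile, 'checkouts', 'population') are hypothetical placeholders.

      import geopandas as gpd
      import pandas as pd
      import matplotlib.pyplot as plt

      # Hypothetical inputs: a South Carolina county shapefile and a usage spreadsheet.
      counties = gpd.read_file("sc_counties.shp")        # geometry plus a 'COUNTY' name column
      usage = pd.read_csv("library_usage.csv")           # 'county', 'checkouts', 'population'

      merged = counties.merge(usage, left_on="COUNTY", right_on="county", how="left")
      merged["checkouts_per_capita"] = merged["checkouts"].fillna(0) / merged["population"]

      ax = merged.plot(column="checkouts_per_capita", cmap="Blues", legend=True,
                       edgecolor="black", linewidth=0.3, figsize=(8, 6))
      ax.set_title("CDR Library checkouts per capita by county (illustrative)")
      ax.set_axis_off()
      plt.savefig("cdr_library_usage_map.png", dpi=200)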

  19. The dynamics of neuronal redundancy in decision making

    NASA Astrophysics Data System (ADS)

    Daniels, Bryan; Flack, Jessica; Krakauer, David

    We propose two temporal phases of collective computation in a visual motion direction discrimination task by analyzing recordings from 169 neural channels in the prefrontal cortex of macaque monkeys. Phase I is a distributed phase in which uncertainty is substantially reduced by pooling information from many cells. Phase II is a redundant phase in which numerous single cells contain all the information present at the population level in Phase I. A dynamic distributed model connects low redundancy to a slow timescale of information aggregation, and provides a common explanation for both behaviors that differs only in the degree of recurrent excitation. We attribute the slow timescale of information accumulation to critical slowing down near the transition to a memory-carrying collective state. We suggest that this dynamic of slow distributed accumulation followed by fast collective propagation is a generic feature of robust collective computing systems related to consensus formation.

  20. Audiovisual Modulation in Mouse Primary Visual Cortex Depends on Cross-Modal Stimulus Configuration and Congruency.

    PubMed

    Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S

    2017-09-06

    The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. Congruent audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors 0270-6474/17/378783-14$15.00/0.

  1. Figure-ground activity in primary visual cortex (V1) of the monkey matches the speed of behavioral response.

    PubMed

    Supèr, Hans; Spekreijse, Henk; Lamme, Victor A F

    2003-06-26

    To look at an object, its position in the visual scene has to be localized, and subsequently appropriate oculo-motor behavior needs to be initiated. This kind of behavior is largely controlled by the cortical executive system, such as the frontal eye field. In this report, we analyzed neural activity in the visual cortex in relation to oculo-motor behavior. We show that in a figure-ground detection task, the strength of late modulated activity in the primary visual cortex correlates with saccade latency. We propose that this may indicate that the variability of reaction times in the detection of a visual stimulus is reflected in low-level visual areas as well as in high-level areas.

  2. Optimizing Cognitive Load for Learning from Computer-Based Science Simulations

    ERIC Educational Resources Information Center

    Lee, Hyunjeong; Plass, Jan L.; Homer, Bruce D.

    2006-01-01

    How can cognitive load in visual displays of computer simulations be optimized? Middle-school chemistry students (N = 257) learned with a simulation of the ideal gas law. Visual complexity was manipulated by separating the display of the simulations in two screens (low complexity) or presenting all information on one screen (high complexity). The…

  3. Visual motion transforms visual space representations similarly throughout the human visual hierarchy.

    PubMed

    Harvey, Ben M; Dumoulin, Serge O

    2016-02-15

    Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to the effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model in which low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
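
    For readers unfamiliar with the pRF approach, the sketch below shows the standard 2-D Gaussian pRF forward model in minimal form: each voxel's predicted response is the overlap between the stimulus aperture and a Gaussian sensitivity profile. The stimulus format, grid resolution, and the omission of haemodynamic convolution are simplifying assumptions for illustration, not details of this particular study.

      import numpy as np

      def prf_prediction(stimulus, x0, y0, sigma, extent=10.0):
          """Predicted (neural) time course for a 2-D Gaussian pRF.

          stimulus: array (n_timepoints, n, n) of binary apertures in visual space.
          x0, y0, sigma: pRF centre and size in degrees; extent: half-width of the
          modelled visual field in degrees. Convolution with an HRF is omitted.
          """
          n = stimulus.shape[-1]
          xs, ys = np.meshgrid(np.linspace(-extent, extent, n),
                               np.linspace(-extent, extent, n))
          gauss = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2.0 * sigma ** 2))
          # Overlap of each stimulus frame with the Gaussian sensitivity profile.
          return stimulus.reshape(len(stimulus), -1) @ gauss.ravel()

      # Fitting would search over (x0, y0, sigma) to best explain each voxel's time course,
      # which is how position preferences are estimated under the different motion conditions.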

  4. Using geographic information systems (GIS) to identify communities in need of health insurance outreach: An OCHIN practice-based research network (PBRN) report.

    PubMed

    Angier, Heather; Likumahuwa, Sonja; Finnegan, Sean; Vakarcs, Trisha; Nelson, Christine; Bazemore, Andrew; Carrozza, Mark; DeVoe, Jennifer E

    2014-01-01

    Our practice-based research network (PBRN) is conducting an outreach intervention to increase health insurance coverage for patients seen in the network. To assist with outreach site selection, we sought an understandable way to use electronic health record (EHR) data to locate uninsured patients. Health insurance information was displayed within a web-based mapping platform to demonstrate the feasibility of using geographic information systems (GIS) to visualize EHR data. This study used EHR data from 52 clinics in the OCHIN PBRN. We included cross-sectional coverage data for patients aged 0 to 64 years with at least 1 visit to a study clinic during 2011 (n = 228,284). Our PBRN was successful in using GIS to identify intervention sites. Through use of the maps, we found geographic variation in insurance rates of patients seeking care in OCHIN PBRN clinics. Insurance rates also varied by age: The percentage of adults without insurance ranged from 13.2% to 86.8%; rates of children lacking insurance ranged from 1.1% to 71.7%. GIS also showed some areas of households with median incomes that had low insurance rates. EHR data can be imported into a web-based GIS mapping tool to visualize patient information. Using EHR data, we were able to observe smaller areas than could be seen using only publicly available data. Using this information, we identified appropriate OCHIN PBRN clinics for dissemination of an EHR-based insurance outreach intervention. GIS could also be used by clinics to visualize other patient-level characteristics to target clinic outreach efforts or interventions. © Copyright 2014 by the American Board of Family Medicine.

  5. The role of the landscape architect in applied forest landscape management: a case study on process

    Treesearch

    Wayne Tlusty

    1979-01-01

    Land planning allocations are often multi-resource concepts, with visual quality objectives addressing the appropriate level of visual resource management. Current legislation and/or regulations often require interdisciplinary teams to implement planning decisions. A considerable amount of information is currently available on visual assessment techniques both for...

  6. High-level user interfaces for transfer function design with semantics.

    PubMed

    Salama, Christof Rezk; Keller, Maik; Kohlmann, Peter

    2006-01-01

    Many sophisticated techniques for the visualization of volumetric data such as medical data have been published. While existing techniques are mature from a technical point of view, managing the complexity of visual parameters is still difficult for non-expert users. To this end, this paper presents new ideas to facilitate the specification of optical properties for direct volume rendering. We introduce an additional level of abstraction for parametric models of transfer functions. The proposed framework allows visualization experts to design high-level transfer function models which can intuitively be used by non-expert users. The results are user interfaces which provide semantic information for specialized visualization problems. The proposed method is based on principal component analysis as well as on concepts borrowed from computer animation.
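
    A hedged sketch of the general idea follows, assuming a set of expert-designed transfer functions stored as RGBA lookup tables: principal component analysis reduces them to a few "semantic" sliders from which new transfer functions are reconstructed. The table size, the number of components, and the slider interface are illustrative choices, not the paper's implementation.

      import numpy as np
      from sklearn.decomposition import PCA

      # Hypothetical training data: m expert-designed transfer functions,
      # each a 256-entry RGBA lookup table flattened to a 1024-vector.
      expert_tfs = np.random.rand(12, 256 * 4)          # stand-in for real expert designs

      pca = PCA(n_components=2)
      weights = pca.fit_transform(expert_tfs)           # low-dimensional "semantic" space

      def transfer_function_from_sliders(slider_values):
          """Reconstruct a 256x4 RGBA lookup table from semantic slider positions."""
          flat = pca.inverse_transform(np.asarray(slider_values, float)[None, :])[0]
          return np.clip(flat.reshape(256, 4), 0.0, 1.0)

      # A non-expert user moves two sliders instead of editing the raw lookup table.
      lut = transfer_function_from_sliders([0.5, -0.2])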

  7. Audio-visual presentation of information for informed consent for participation in clinical trials.

    PubMed

    Ryan, R E; Prictor, M J; McLaughlin, K J; Hill, S J

    2008-01-23

    Informed consent is a critical component of clinical research. Different methods of presenting information to potential participants of clinical trials may improve the informed consent process. Audio-visual interventions (presented for example on the Internet, DVD, or video cassette) are one such method. To assess the effects of providing audio-visual information alone, or in conjunction with standard forms of information provision, to potential clinical trial participants in the informed consent process, in terms of their satisfaction, understanding and recall of information about the study, level of anxiety and their decision whether or not to participate. We searched: the Cochrane Consumers and Communication Review Group Specialised Register (searched 20 June 2006); the Cochrane Central Register of Controlled Trials (CENTRAL), The Cochrane Library, issue 2, 2006; MEDLINE (Ovid) (1966 to June week 1 2006); EMBASE (Ovid) (1988 to 2006 week 24); and other databases. We also searched reference lists of included studies and relevant review articles, and contacted study authors and experts. There were no language restrictions. Randomised and quasi-randomised controlled trials comparing audio-visual information alone, or in conjunction with standard forms of information provision (such as written or oral information as usually employed in the particular service setting), with standard forms of information provision alone, in the informed consent process for clinical trials. Trials involved individuals or their guardians asked to participate in a real (not hypothetical) clinical study. Two authors independently assessed studies for inclusion and extracted data. Due to heterogeneity no meta-analysis was possible; we present the findings in a narrative review. We included 4 trials involving data from 511 people. Studies were set in the USA and Canada. Three were randomised controlled trials (RCTs) and the fourth a quasi-randomised trial. Their quality was mixed and results should be interpreted with caution. Considerable uncertainty remains about the effects of audio-visual interventions, compared with standard forms of information provision (such as written or oral information normally used in the particular setting), for use in the process of obtaining informed consent for clinical trials. Audio-visual interventions did not consistently increase participants' levels of knowledge/understanding (assessed in four studies), although one study showed better retention of knowledge amongst intervention recipients. An audio-visual intervention may transiently increase people's willingness to participate in trials (one study), but this was not sustained at two to four weeks post-intervention. Perceived worth of the trial did not appear to be influenced by an audio-visual intervention (one study), but another study suggested that the quality of information disclosed may be enhanced by an audio-visual intervention. Many relevant outcomes including harms were not measured. The heterogeneity in results may reflect the differences in intervention design, content and delivery, the populations studied and the diverse methods of outcome assessment in included studies. The value of audio-visual interventions for people considering participating in clinical trials remains unclear. 
Evidence is mixed as to whether audio-visual interventions enhance people's knowledge of the trial they are considering entering, and/or the health condition the trial is designed to address; one study showed improved retention of knowledge amongst intervention recipients. The intervention may also have small positive effects on the quality of information disclosed, and may increase willingness to participate in the short-term; however the evidence is weak. There were no data for several primary outcomes, including harms. In the absence of clear results, triallists should continue to explore innovative methods of providing information to potential trial participants. Further research should take the form of high-quality randomised controlled trials, with clear reporting of methods. Studies should conduct content assessment of audio-visual and other innovative interventions for people of differing levels of understanding and education; also for different age and cultural groups. Researchers should assess systematically the effects of different intervention components and delivery characteristics, and should involve consumers in intervention development. Studies should assess additional outcomes relevant to individuals' decisional capacity, using validated tools, including satisfaction; anxiety; and adherence to the subsequent trial protocol.

  8. Training haptic stiffness discrimination: time course of learning with or without visual information and knowledge of results.

    PubMed

    Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria

    2013-08-01

    In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks, the visual field is typically occluded, forcing stiffness perception to depend exclusively on haptic information. No studies to date have addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy improved slightly, but decision time performance deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinctive phases: a single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in a VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgical procedures, when the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.

  9. Supporting interruption management and multimodal interface design: three meta-analyses of task performance as a function of interrupting task modality.

    PubMed

    Lu, Sara A; Wickens, Christopher D; Prinet, Julie C; Hutchins, Shaun D; Sarter, Nadine; Sebok, Angelia

    2013-08-01

    The aim of this study was to integrate empirical data showing the effects of interrupting task modality on the performance of an ongoing visual-manual task and the interrupting task itself. The goal is to support interruption management and the design of multimodal interfaces. Multimodal interfaces have been proposed as a promising means to support interruption management. To ensure the effectiveness of this approach, their design needs to be based on an analysis of empirical data concerning the effectiveness of individual and redundant channels of information presentation. Three meta-analyses were conducted to contrast performance on an ongoing visual task and interrupting tasks as a function of interrupting task modality (auditory vs. tactile, auditory vs. visual, and single modality vs. redundant auditory-visual). In total, 68 studies were included and six moderator variables were considered. The main findings from the meta-analyses are that response times are faster for tactile interrupting tasks in the case of low-urgency messages. Accuracy is higher with tactile interrupting tasks for low-complexity signals but higher with auditory interrupting tasks for high-complexity signals. Redundant auditory-visual combinations are preferable for communication tasks during high workload and with a small visual angle of separation. The three meta-analyses contribute to the knowledge base in multimodal information processing and design. They highlight the importance of moderator variables in predicting the effects of interruption task modality on ongoing and interrupting task performance. The findings from this research will help inform the design of multimodal interfaces in data-rich, event-driven domains.

  10. Ultrafast scene detection and recognition with limited visual information

    PubMed Central

    Hagmann, Carl Erick; Potter, Mary C.

    2016-01-01

    Humans can detect target color pictures of scenes depicting concepts like picnic or harbor in sequences of six or twelve pictures presented as briefly as 13 ms, even when the target is named after the sequence (Potter, Wyble, Hagmann, & McCourt, 2014). Such rapid detection suggests that feedforward processing alone enabled detection without recurrent cortical feedback. There is debate about whether coarse, global, low spatial frequencies (LSFs) provide predictive information to high cortical levels through the rapid magnocellular (M) projection of the visual path, enabling top-down prediction of possible object identities. To test the “Fast M” hypothesis, we compared detection of a named target across five stimulus conditions: unaltered color, blurred color, grayscale, thresholded monochrome, and LSF pictures. The pictures were presented for 13–80 ms in six-picture rapid serial visual presentation (RSVP) sequences. Blurred, monochrome, and LSF pictures were detected less accurately than normal color or grayscale pictures. When the target was named before the sequence, all picture types except LSF resulted in above-chance detection at all durations. Crucially, when the name was given only after the sequence, performance dropped and the monochrome and LSF pictures (but not the blurred pictures) were at or near chance. Thus, without advance information, monochrome and LSF pictures were rarely understood. The results offer only limited support for the Fast M hypothesis, suggesting instead that feedforward processing is able to activate conceptual representations without complementary reentrant processing. PMID:28255263

  11. Image/video understanding systems based on network-symbolic models

    NASA Astrophysics Data System (ADS)

    Kuvich, Gary

    2004-03-01

    Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks. The human brain has been found to be able to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze by higher-level knowledge structures. Higher-level vision phenomena are results of such analysis. Composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an Image/Video Understanding system can convert images into knowledge models and resolve uncertainty and ambiguity. This allows creating intelligent computer vision systems for design and manufacturing.
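
    As a rough, generic illustration of the kind of graph-type relational structure described above (a region graph built from a segmented image), the following Python sketch constructs a region adjacency graph from a 2D label map. It is not the author's network-symbolic model; all data and names are made up for demonstration.

        import numpy as np

        def region_adjacency_graph(labels: np.ndarray) -> dict:
            """Nodes are region ids; an edge links regions that share a pixel border.
            A toy stand-in for the 'region-graph generation' stage mentioned above."""
            edges = set()
            right = labels[:, :-1] != labels[:, 1:]      # horizontal neighbours that differ
            down = labels[:-1, :] != labels[1:, :]       # vertical neighbours that differ
            for a, b in zip(labels[:, :-1][right], labels[:, 1:][right]):
                edges.add(tuple(sorted((int(a), int(b)))))
            for a, b in zip(labels[:-1, :][down], labels[1:, :][down]):
                edges.add(tuple(sorted((int(a), int(b)))))
            graph = {int(r): set() for r in np.unique(labels)}
            for a, b in edges:
                graph[a].add(b)
                graph[b].add(a)
            return graph

        # Tiny example: three regions; every region touches the other two.
        labels = np.array([[0, 0, 1],
                           [0, 2, 1],
                           [2, 2, 1]])
        print(region_adjacency_graph(labels))  # {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}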

  12. Oculomotor Reflexes as a Test of Visual Dysfunctions in Cognitively Impaired Observers

    DTIC Science & Technology

    2012-10-01

    Only fragmentary snippet text is available for this record: the paradigm elicits visual nystagmus with drifting-dot stimuli without measuring absolute gaze (which would require a gaze calibration), and the reflex stimulus functions include luminance gratings (low-level motion), equiluminant gratings (color vision), and contrast gratings.

  13. IDEN2-A program for visual identification of spectral lines and energy levels in optical spectra of atoms and simple molecules

    NASA Astrophysics Data System (ADS)

    Azarov, V. I.; Kramida, A.; Vokhmentsev, M. Ya.

    2018-04-01

    The article describes a Java program that provides a user-friendly way to visually match spectral lines observed in complex spectra with theoretically predicted transitions between atomic or molecular energy levels. The program arranges various information about spectral lines and energy levels in such a way that line identification and determination of the positions of experimentally observed energy levels become much easier tasks that can be solved quickly and efficiently.
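
    The bookkeeping behind this kind of identification — comparing observed line positions with transitions predicted from an energy-level list — can be sketched in a few lines. The sketch below is a generic illustration with invented values in assumed wavenumber units (cm-1); it is not code from IDEN2.

        # Toy matcher: pair observed lines with predicted transitions, where a
        # predicted transition is the difference between two energy levels.
        levels = {"a": 0.0, "b": 15000.0, "c": 23000.0}   # hypothetical level energies (cm^-1)
        observed = [14999.8, 8000.5, 23000.9]             # hypothetical observed lines (cm^-1)
        tolerance = 0.5                                    # matching window (cm^-1)

        predicted = [(e_hi - e_lo, lo, hi)
                     for lo, e_lo in levels.items()
                     for hi, e_hi in levels.items() if e_hi > e_lo]

        for line in observed:
            matches = [m for m in predicted if abs(m[0] - line) <= tolerance]
            print(line, "->", matches if matches else "unidentified")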

  14. Cross-sectional study on prevalence, causes and avoidable causes of visual impairment in Maori children.

    PubMed

    Chong, Cheefoong; Dai, Shuan

    2013-08-02

    To provide information on, and comparison of, visual impairment in Maori children relative to other children in New Zealand, in particular: prevalence of blindness, causes of visual impairment, and avoidable causes of visual impairment. Retrospective data collection utilising the WHO/PBL eye examination record for children with blindness and low vision at the Blind and Low Vision Education Network New Zealand (BLENNZ), Homai. Individuals not of Maori ethnicity or over the age of 16 were excluded from the study. 106 blind and 64 low-vision Maori children were studied. The main cause of blindness in Maori children is cortical visual impairment. Twenty-eight percent of the causes of blindness in this population are potentially avoidable, with non-accidental injury as the main cause. The prevalence of blindness and low vision in children amounts to 0.05% and 0.03%, respectively. The prevalence and causes of childhood blindness are comparable to those of the other ethnic groups in New Zealand. The main difference lies in avoidable causes of blindness, which appeared to be much higher in the Maori population. The leading cause of avoidable blindness in Maori children is non-accidental injury.

  15. Basic level category structure emerges gradually across human ventral visual cortex.

    PubMed

    Iordan, Marius Cătălin; Greene, Michelle R; Beck, Diane M; Fei-Fei, Li

    2015-07-01

    Objects can be simultaneously categorized at multiple levels of specificity ranging from very broad ("natural object") to very distinct ("Mr. Woof"), with a mid-level of generality (basic level: "dog") often providing the most cognitively useful distinction between categories. It is unknown, however, how this hierarchical representation is achieved in the brain. Using multivoxel pattern analyses, we examined how well each taxonomic level (superordinate, basic, and subordinate) of real-world object categories is represented across occipitotemporal cortex. We found that, although in early visual cortex objects are best represented at the subordinate level (an effect mostly driven by low-level feature overlap between objects in the same category), this advantage diminishes compared to the basic level as we move up the visual hierarchy, disappearing in object-selective regions of occipitotemporal cortex. This pattern stems from a combined increase in within-category similarity (category cohesion) and between-category dissimilarity (category distinctiveness) of neural activity patterns at the basic level, relative to both subordinate and superordinate levels, suggesting that successive visual areas may be optimizing basic level representations.
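
    The two pattern-similarity measures named here (within-category cohesion and between-category distinctiveness) can be expressed as mean correlations between activity patterns. The sketch below uses random data and invented names purely to show the arithmetic; it is not the authors' analysis pipeline.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical response patterns: 10 exemplars x 50 voxels per category.
        patterns = {"dog": rng.normal(size=(10, 50)),
                    "car": rng.normal(size=(10, 50))}

        def within_corr(a):
            """Mean off-diagonal correlation among patterns of one category (cohesion)."""
            c = np.corrcoef(a)
            return float(c[~np.eye(len(a), dtype=bool)].mean())

        def between_corr(a, b):
            """Mean correlation between patterns of two categories."""
            c = np.corrcoef(np.vstack([a, b]))
            return float(c[:len(a), len(a):].mean())

        cohesion = np.mean([within_corr(p) for p in patterns.values()])
        distinctiveness = 1.0 - between_corr(patterns["dog"], patterns["car"])
        print(f"cohesion ~ {cohesion:.3f}, distinctiveness ~ {distinctiveness:.3f}")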

  16. Family genome browser: visualizing genomes with pedigree information.

    PubMed

    Juan, Liran; Liu, Yongzhuang; Wang, Yongtian; Teng, Mingxiang; Zang, Tianyi; Wang, Yadong

    2015-07-15

    Families with inherited diseases are widely used in Mendelian/complex disease studies. Owing to the advances in high-throughput sequencing technologies, family genome sequencing is becoming more and more prevalent. Visualizing family genomes can greatly facilitate human genetics studies and personalized medicine. However, due to the complex genetic relationships and high similarities among genomes of consanguineous family members, family genomes are difficult to visualize in traditional genome visualization frameworks. How to visualize family genome variants and their functions with integrated pedigree information remains a critical challenge. We developed the Family Genome Browser (FGB) to provide comprehensive analysis and visualization for family genomes. The FGB can visualize family genomes effectively at both the individual level and the variant level, through integrating genome data with pedigree information. Family genome analysis, including determination of the parental origin of variants, detection of de novo mutations, identification of potential recombination events and identity-by-descent segments, etc., can be performed flexibly. Diverse annotations for the family genome variants, such as dbSNP memberships, linkage disequilibrium, genes, variant effects, potential phenotypes, etc., are illustrated as well. Moreover, the FGB can automatically search for de novo mutations and compound heterozygous variants for a selected individual, and guide investigators to find high-risk genes with flexible navigation options. These features enable users to investigate and understand family genomes intuitively and systematically. The FGB is available at http://mlg.hit.edu.cn/FGB/. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
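
    One of the analyses listed, de novo mutation detection, reduces in its simplest form to checking whether a child carries an allele seen in neither parent. The sketch below is a naive illustration with made-up sites and genotypes; it is not FGB code, and real pipelines also filter on depth, quality and phasing.

        # Naive de novo candidate screen over a parent-child trio.
        trio = {
            "chr1:12345": {"father": ("A", "A"), "mother": ("A", "A"), "child": ("A", "G")},
            "chr2:67890": {"father": ("C", "T"), "mother": ("C", "C"), "child": ("C", "T")},
        }

        def is_de_novo_candidate(father, mother, child):
            """Flag any allele in the child that appears in neither parent."""
            parental = set(father) | set(mother)
            return any(allele not in parental for allele in child)

        for site, gt in trio.items():
            if is_de_novo_candidate(gt["father"], gt["mother"], gt["child"]):
                print(site, "de novo candidate:", gt["child"])   # flags chr1:12345 only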

  17. High-sensitivity visualization of localized electric fields using low-energy electron beam deflection

    NASA Astrophysics Data System (ADS)

    Jeong, Samuel; Ito, Yoshikazu; Edwards, Gary; Fujita, Jun-ichi

    2018-06-01

    The visualization of localized electronic charges on nanocatalysts is expected to yield fundamental information about catalytic reaction mechanisms. We have developed a high-sensitivity detection technique for the visualization of localized charges on a catalyst and their corresponding electric field distribution, using a low-energy beam of 1 to 5 keV electrons and a high-sensitivity scanning transmission electron microscope (STEM) detector. The highest sensitivity for visualizing a localized electric field was ∼0.08 V/µm at a distance of ∼17 µm from a localized charge, at a primary electron energy of 1 keV, and a weak local electric field produced by 200 electrons accumulated on the apex of a carbon nanotube (CNT) could be visualized. We also observed that Au nanoparticles distributed on a CNT forest tended to accumulate a certain amount of charge, about 150 electrons, at a ‑2 V bias.

  18. A digital retina-like low-level vision processor.

    PubMed

    Mertoguno, S; Bourbakis, N G

    2003-01-01

    This correspondence presents the basic design and the simulation of a low level multilayer vision processor that emulates to some degree the functional behavior of a human retina. This retina-like multilayer processor is the lower part of an autonomous self-organized vision system, called Kydon, that could be used by visually impaired people with a damaged visual cerebral cortex. The Kydon vision system, however, is not presented in this paper. The retina-like processor consists of four major layers, where each of them is an array processor based on hexagonal, autonomous processing elements that perform a certain set of low level vision tasks, such as smoothing and light adaptation, edge detection, segmentation, line recognition and region-graph generation. At each layer, the array processor is a 2D array of k × m identical, autonomous hexagonal cells that simultaneously execute certain low level vision tasks. Thus, the hardware design and the transistor-level simulation of the processing elements (PEs) of the retina-like processor, together with its simulated functionality and illustrative examples, are provided in this paper.
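
    The low-level tasks assigned to the first layers (smoothing/light adaptation and edge detection) have simple textbook counterparts. The sketch below approximates them with SciPy filters on a square grid; it only illustrates the operations, not the hexagonal hardware design described in the paper.

        import numpy as np
        from scipy.ndimage import uniform_filter, sobel

        # Synthetic input: a bright square on a dark background.
        img = np.zeros((64, 64))
        img[20:44, 20:44] = 1.0

        smoothed = uniform_filter(img, size=3)            # crude smoothing stage
        edges = np.hypot(sobel(smoothed, axis=0),         # gradient magnitude as an edge map
                         sobel(smoothed, axis=1))

        # The edge map peaks on the square's border and is flat in its interior.
        print(edges.max() > edges[32, 32])                # True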

  19. Approach bias and cue reactivity towards food in people with high versus low levels of food craving.

    PubMed

    Brockmeyer, Timo; Hahn, Carolyn; Reetz, Christina; Schmidt, Ulrike; Friederich, Hans-Christoph

    2015-12-01

    Even though people suffering from high levels of food craving are aware of the negative consequences of binge eating, they cannot resist. Automatic action tendencies (i.e. approach bias) towards food cues that operate outside conscious control may contribute to this dysfunctional behavior. The present study aimed to examine whether people with high levels of food craving show a stronger approach bias for food than those with low levels of food craving and whether this bias is associated with cue-elicited food craving. Forty-one individuals reporting either extremely high or extremely low levels of trait food craving were recruited via an online screening and compared regarding approach bias towards visual food cues by means of an implicit stimulus-response paradigm (i.e. the Food Approach-Avoidance Task). State levels of food craving were assessed before and after cue exposure to indicate food cue reactivity. As expected, high food cravers showed stronger automatic approach tendencies towards food than low food cravers. Also in line with the hypotheses, approach bias for food was positively correlated with the magnitude of change in state levels of food craving from pre- to post-cue exposure in the total sample. The findings suggest that an approach bias in early stages of information processing contributes to the inability to resist food intake and may be of relevance for understanding and treating dysfunctional eating behavior. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Visual perception and imagery: a new molecular hypothesis.

    PubMed

    Bókkon, I

    2009-05-01

    Here, we put forward a redox molecular hypothesis about the natural biophysical substrate of visual perception and visual imagery. This hypothesis is based on the redox and bioluminescent processes of neuronal cells in retinotopically organized cytochrome oxidase-rich visual areas. Our hypothesis is in line with the functional roles of reactive oxygen and nitrogen species in living cells, which are not part of a haphazard process but rather of a very strict mechanism used in signaling pathways. We point out that there is a direct relationship between neuronal activity and the biophoton emission process in the brain. Electrical and biochemical processes in the brain represent sensory information from the external world. During encoding or retrieval of information, electrical signals of neurons can be converted into synchronized biophoton signals by bioluminescent radical and non-radical processes. Therefore, information in the brain appears not only as an electrical (chemical) signal but also as a regulated biophoton (weak optical) signal inside neurons. During visual perception, the topological distribution of photon stimuli on the retina is represented by electrical neuronal activity in retinotopically organized visual areas. These retinotopic electrical signals in visual neurons can be converted into synchronized biophoton signals by radical and non-radical processes in retinotopically organized mitochondria-rich areas. As a result, regulated bioluminescent biophotons can create intrinsic pictures (depictive representations) in retinotopically organized cytochrome oxidase-rich visual areas during visual imagery and visual perception. The long-term visual memory is interpreted as epigenetic information regulated by free radicals and redox processes. This hypothesis does not claim to solve the secret of consciousness, but proposes that the evolution of higher levels of complexity made the intrinsic picture representation of the external visual world possible by regulated redox and bioluminescent reactions in the visual system during visual perception and visual imagery.

  1. The Data Platform for Climate Research and Action: Introducing Climate Watch

    NASA Astrophysics Data System (ADS)

    Hennig, R. J.; Ge, M.; Friedrich, J.; Lebling, K.; Carlock, G.; Arcipowska, A.; Mangan, E.; Biru, H.; Tankou, A.; Chaudhury, M.

    2017-12-01

    The Paris Agreement, adopted through Decision 1/CP.21, brings all nations together to take on ambitious efforts to combat climate change. Open access to climate data supporting climate research, advancing knowledge, and informing decision making is key to encouraging and strengthening efforts of stakeholders at all levels to address and respond to the effects of climate change. Climate Watch is a robust online data platform developed in response to the urgent need for knowledge and tools to empower climate research and action, including those of researchers, policy makers, the private sector, civil society, and all other non-state actors. Building on the rapidly growing technology of open data and information sharing, Climate Watch is equipped with an extensive amount of climate data, informative visualizations, a concise yet efficient user interface, and connections to the resources users need to gather insightful information on national and global progress towards delivering on the objective of the Convention and the Paris Agreement. Climate Watch brings together hundreds of quantitative and qualitative indicators that are easy to explore, visualize, compare, and download at global, national, and sectoral levels: Greenhouse gas (GHG) emissions for more than 190 countries over the 1850-2014 time period, covering all seven Kyoto Gases following IPCC source/sink categories; Structured information on over 150 NDCs facilitating the clarity, understanding and transparency of countries' contributions to address climate change; Over 6500 identified linkages between climate actions in NDCs across the 169 targets of the sustainable development goals (SDGs); Over 200 indicators describing low carbon pathways from models and scenarios by integrated assessment models (IAMs) and national sources; and Data on vulnerability and risk, policies, finance, and many more. The Climate Watch platform is developed as part of the broader efforts within the World Resources Institute, the NDC Partnership, and in collaboration with GIZ, UNFCCC, World Bank, and Climate Analytics.

  2. Flight Deck Technologies to Enable NextGen Low Visibility Surface Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence (Lance) J., III; Arthur, Jarvis (Trey) J.; Kramer, Lynda J.; Norman, Robert M.; Bailey, Randall E.; Jones, Denise R.; Karwac, Jerry R., Jr.; Shelton, Kevin J.; Ellis, Kyle K. E.

    2013-01-01

    Many key capabilities are being identified to enable the Next Generation Air Transportation System (NextGen), including the concept of Equivalent Visual Operations (EVO): replicating the capacity and safety of today's visual flight rules (VFR) in all-weather conditions. NASA is striving to develop the technologies and knowledge to enable EVO and to extend EVO towards a Better-Than-Visual operational concept. This operational concept envisions an "equivalent visual" paradigm where an electronic means provides sufficient visual references of the external world and other required flight references on flight deck displays that enable Visual Flight Rules (VFR)-like operational tempos while maintaining and improving the safety of VFR, using VFR-like procedures in all-weather conditions. The Langley Research Center (LaRC) has recently completed preliminary research on flight deck technologies for low visibility surface operations. The work assessed the potential of enhanced vision and airport moving map displays to achieve levels of safety and performance equivalent to existing low visibility operational requirements. The work has the potential to better enable NextGen by perhaps providing an operational credit for conducting safe low visibility surface operations through use of the flight deck technologies.

  3. Typograph: Multiscale Spatial Exploration of Text Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Burtner, Edwin R.; Cramer, Nicholas O.

    2013-10-06

    Visualizing large document collections using a spatial layout of terms can enable quick overviews of information. These visual metaphors (e.g., word clouds, tag clouds, etc.) traditionally show a series of terms organized by space-filling algorithms. However, often lacking in these views is the ability to interactively explore the information to gain more detail, and the location and rendering of the terms are often not based on mathematical models that maintain relative distances from other information based on similarity metrics. In this paper, we present Typograph, a multi-scale spatial exploration visualization for large document collections. Building on term-based visualization methods, Typograph enables multiple levels of detail (terms, phrases, snippets, and full documents) within a single spatialization. Further, the information is placed based on its relative similarity to other information to create the “near = similar” geographic metaphor. This paper discusses the design principles and functionality of Typograph and presents a use case analyzing Wikipedia to demonstrate usage.
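
    A minimal version of the "near = similar" placement idea is to compute pairwise distances between term vectors and embed them in 2-D with classical multidimensional scaling. The terms and counts below are invented and the procedure is a generic sketch, not Typograph's actual layout model.

        import numpy as np

        terms = ["ocean", "sea", "wave", "budget", "tax"]
        counts = np.array([[4, 3, 2, 0, 0],     # hypothetical term-by-document counts
                           [3, 4, 1, 0, 0],
                           [2, 2, 3, 0, 1],
                           [0, 0, 0, 5, 4],
                           [0, 0, 1, 4, 5]], dtype=float)

        # Cosine distances between term vectors.
        unit = counts / np.linalg.norm(counts, axis=1, keepdims=True)
        dist = 1.0 - unit @ unit.T

        # Classical MDS: double-centre squared distances, keep the top two eigenvectors.
        n = len(terms)
        J = np.eye(n) - np.ones((n, n)) / n
        B = -0.5 * J @ (dist ** 2) @ J
        vals, vecs = np.linalg.eigh(B)
        coords = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0))

        for term, (x, y) in zip(terms, coords):
            print(f"{term:>7s}: ({x: .2f}, {y: .2f})")    # related terms land near each other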

  4. Relative Spatial Frequency Processing Drives Hemispheric Asymmetry in Conscious Awareness

    PubMed Central

    Piazza, Elise A.; Silver, Michael A.

    2017-01-01

    Visual stimuli with different spatial frequencies (SFs) are processed asymmetrically in the two cerebral hemispheres. Specifically, low SFs are processed relatively more efficiently in the right hemisphere than the left hemisphere, whereas high SFs show the opposite pattern. In this study, we ask whether these differences between the two hemispheres reflect a low-level division that is based on absolute SF values or a flexible comparison of the SFs in the visual environment at any given time. In a recent study, we showed that conscious awareness of SF information (i.e., visual perceptual selection from multiple SFs simultaneously present in the environment) differs between the two hemispheres. Building upon that result, here we employed binocular rivalry to test whether this hemispheric asymmetry is due to absolute or relative SF processing. In each trial, participants viewed a pair of rivalrous orthogonal gratings of different SFs, presented either to the left or right of central fixation, and continuously reported which grating they perceived. We found that the hemispheric asymmetry in perception is significantly influenced by relative processing of the SFs of the simultaneously presented stimuli. For example, when a medium SF grating and a higher SF grating were presented as a rivalry pair, subjects were more likely to report that they initially perceived the medium SF grating when the rivalry pair was presented in the left visual hemifield (right hemisphere), compared to the right hemifield. However, this same medium SF grating, when it was paired in rivalry with a lower SF grating, was more likely to be perceptually selected when it was in the right visual hemifield (left hemisphere). Thus, the visual system’s classification of a given SF as “low” or “high” (and therefore, which hemisphere preferentially processes that SF) depends on the other SFs that are present, demonstrating that relative SF processing contributes to hemispheric differences in visual perceptual selection. PMID:28469585

  5. Neural practice effect during cross-modal selective attention: Supra-modal and modality-specific effects.

    PubMed

    Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi

    2018-05-16

    Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In a natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, what the supra-modal and modality-specific practice effects during cross-modal selective attention are, and moreover whether the practice effect shows a similar modality preference to the visual dominance effect in the multisensory environment. To answer these two questions, we adopted a cross-modal selective attention paradigm in conjunction with a hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention adapted behavior with practice more flexibly than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., supra-modal mechanisms. On the other hand, functional decoupling between the auditory and the visual system was observed with the progress of practice, and this varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and ventral visual streams during auditory attention, but only from the ventral visual stream during visual attention. To efficiently suppress irrelevant visual information with practice, auditory attention needs to additionally decouple the auditory system from the dorsal visual stream. The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Visual Environments for CFD Research

    NASA Technical Reports Server (NTRS)

    Watson, Val; George, Michael W. (Technical Monitor)

    1994-01-01

    This viewgraph presentation gives an overview of visual environments for computational fluid dynamics (CFD) research. It includes details on critical needs for the future computing environment, features needed to attain this environment, prospects for changes in the human-computer interface and the impact of the visualization revolution on it, human processing capabilities, the limits of the personal environment, and the extension of that environment with computers. Information is given on the need for more 'visual' thinking (including instances of visual thinking), an evaluation of alternative approaches to, and levels of, interactive computer graphics, a visual analysis of computational fluid dynamics, and an analysis of visualization software.

  7. Horizontal tuning for faces originates in high-level Fusiform Face Area.

    PubMed

    Goffaux, Valerie; Duecker, Felix; Hausfeld, Lars; Schiltz, Christine; Goebel, Rainer

    2016-01-29

    Recent work indicates that the specialization of visual face perception relies on the privileged processing of horizontal angles of facial information. This suggests that stimulus properties assumed to be fully resolved in primary visual cortex (V1; e.g., orientation) in fact shape human vision up to high-level stages of processing. To address this hypothesis, the present fMRI study explored the orientation sensitivity of V1 and high-level face-specialized ventral regions such as the Occipital Face Area (OFA) and Fusiform Face Area (FFA) to different angles of face information. Participants viewed face images filtered to retain information at horizontal, vertical or oblique angles. Filtered images were viewed upright, inverted and (phase-)scrambled. FFA responded most strongly to the horizontal range of upright face information; its activation pattern reliably separated horizontal from oblique ranges, but only when faces were upright. Moreover, activation patterns induced in the right FFA and the OFA by upright and inverted faces could only be separated based on horizontal information. This indicates that the specialized processing of upright face information in the OFA and FFA essentially relies on the encoding of horizontal facial cues. This pattern was not passively inherited from V1, which was found to respond less strongly to horizontal than to other orientations, likely due to adaptive whitening. Moreover, we found that orientation decoding accuracy in V1 was impaired for stimuli containing no meaningful shape. By showing that primary coding in V1 is influenced by high-order stimulus structure and that high-level processing is tuned to selective ranges of primary information, the present work suggests that primary and high-level stages of the visual system interact in order to modulate the processing of certain ranges of primary information depending on their relevance with respect to the stimulus and task at hand. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Effects of color combination and ambient illumination on visual perception time with TFT-LCD.

    PubMed

    Lin, Chin-Chiuan; Huang, Kuo-Chen

    2009-10-01

    An empirical study was carried out to examine the effects of color combination and ambient illumination on visual perception time using a TFT-LCD. The effect of color combination was broken down into two subfactors: luminance contrast ratio and chromaticity contrast. Analysis indicated that the luminance contrast ratio and ambient illumination had significant, though small, effects on visual perception. Visual perception time was better at a high luminance contrast ratio than at a low luminance contrast ratio. Visual perception time under normal ambient illumination was better than at other ambient illumination levels, although the stimulus color had a confounding effect on visual perception time. In general, visual perception time was better for the primary colors than for the middle-point colors. Based on the results, a normal ambient illumination level and a high luminance contrast ratio seemed to be the optimal choice for the design of workplaces with TFT-LCD video display terminals.

  9. Salience and Attention in Surprisal-Based Accounts of Language Processing.

    PubMed

    Zarcone, Alessandra; van Schijndel, Marten; Vogels, Jorrig; Demberg, Vera

    2016-01-01

    The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus.

  10. Developmental changes in the neural influence of sublexical information on semantic processing.

    PubMed

    Lee, Shu-Hui; Booth, James R; Chou, Tai-Li

    2015-07-01

    Functional magnetic resonance imaging (fMRI) was used to examine the developmental changes in a group of normally developing children (aged 8-12) and adolescents (aged 13-16) during semantic processing. We manipulated association strength (i.e. a global reading unit) and semantic radical (i.e. a local reading unit) to explore the interaction of lexical and sublexical semantic information in making semantic judgments. In the semantic judgment task, two types of stimuli were used: visually-similar (i.e. shared a semantic radical) versus visually-dissimilar (i.e. did not share a semantic radical) character pairs. Participants were asked to indicate if two Chinese characters, arranged according to association strength, were related in meaning. The results showed greater developmental increases in activation in left angular gyrus (BA 39) in the visually-similar compared to the visually-dissimilar pairs for the strong association. There were also greater age-related increases in angular gyrus for the strong compared to weak association in the visually-similar pairs. Both of these results suggest that shared semantics at the sublexical level facilitates the integration of overlapping features at the lexical level in older children. In addition, there was a larger developmental increase in left posterior middle temporal gyrus (BA 21) for the weak compared to strong association in the visually-dissimilar pairs, suggesting conflicting sublexical information placed greater demands on access to lexical representations in the older children. All together, these results suggest that older children are more sensitive to sublexical information when processing lexical representations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. The Impact of Early Visual Deprivation on Spatial Hearing: A Comparison between Totally and Partially Visually Deprived Children

    PubMed Central

    Cappagli, Giulia; Finocchietti, Sara; Cocchi, Elena; Gori, Monica

    2017-01-01

    The specific role of early visual deprivation in spatial hearing is still unclear, mainly due to the difficulty of comparing similar spatial skills at different ages and the difficulty of recruiting young children who have been blind from birth. In this study, the effects of early visual deprivation on the development of auditory spatial localization were assessed in a group of seven 3- to 5-year-old children with congenital blindness (n = 2; light perception or no perception of light) or low vision (n = 5; visual acuity range 1.1–1.7 LogMAR), with the main aim of understanding whether visual experience is fundamental to the development of specific spatial skills. Our study led to three main findings: firstly, totally blind children performed more poorly overall than sighted and low vision children in all the spatial tasks performed; secondly, low vision children performed as well as or better than sighted children in the same auditory spatial tasks; thirdly, higher residual levels of visual acuity were positively correlated with better spatial performance in the dynamic condition of the auditory localization task, indicating that the more residual vision, the better the spatial performance. These results suggest that early visual experience has an important role in the development of spatial cognition, even when the visual input during the critical period of visual calibration is partially degraded, as in the case of low vision children. Overall, these results shed light on the importance of early assessment of spatial impairments in visually impaired children and early intervention to prevent the risk of isolation and social exclusion. PMID:28443040

  12. Image segmentation via foreground and background semantic descriptors

    NASA Astrophysics Data System (ADS)

    Yuan, Ding; Qiang, Jingjing; Yin, Jihao

    2017-09-01

    In the field of image processing, it has been a challenging task to obtain a complete foreground that is not uniform in color or texture. Unlike other methods, which segment the image by only using low-level features, we present a segmentation framework, in which high-level visual features, such as semantic information, are used. First, the initial semantic labels were obtained by using the nonparametric method. Then, a subset of the training images, with a similar foreground to the input image, was selected. Consequently, the semantic labels could be further refined according to the subset. Finally, the input image was segmented by integrating the object affinity and refined semantic labels. State-of-the-art performance was achieved in experiments with the challenging MSRC 21 dataset.
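
    The nonparametric first step — transferring semantic labels from training images with similar content — can be caricatured as nearest-neighbour voting over feature vectors. The data and names below are invented; the paper's actual pipeline (foreground/background descriptors, object affinity, label refinement) is substantially richer.

        import numpy as np
        from collections import Counter

        # Hypothetical feature vectors for labelled training regions and one query region.
        train_features = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
        train_labels = ["foreground", "foreground", "background", "background"]
        query = np.array([0.75, 0.25])

        def knn_label(query, feats, labels, k=3):
            """Majority vote among the k nearest training regions (Euclidean distance)."""
            order = np.argsort(np.linalg.norm(feats - query, axis=1))[:k]
            return Counter(labels[i] for i in order).most_common(1)[0][0]

        print(knn_label(query, train_features, train_labels))   # -> "foreground"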

  13. Resources for Designing, Selecting and Teaching with Visualizations in the Geoscience Classroom

    NASA Astrophysics Data System (ADS)

    Kirk, K. B.; Manduca, C. A.; Ormand, C. J.; McDaris, J. R.

    2009-12-01

    Geoscience is a highly visual field, and effective use of visualizations can enhance student learning, appeal to students’ emotions and help them acquire skills for interpreting visual information. The On the Cutting Edge website, “Teaching Geoscience with Visualizations” presents information of interest to faculty who are teaching with visualizations, as well as those who are designing visualizations. The website contains best practices for effective visualizations, drawn from the educational literature and from experts in the field. For example, a case is made for careful selection of visualizations so that faculty can align the correct visualization with their teaching goals and audience level. Appropriate visualizations will contain the desired geoscience content without adding extraneous information that may distract or confuse students. Features such as labels, arrows and contextual information can help guide students through imagery and help to explain the relevant concepts. Because students learn by constructing their own mental image of processes, it is helpful to select visualizations that reflect the same type of mental picture that students should create. A host of recommended readings and presentations from the On the Cutting Edge visualization workshops can provide further grounding for the educational uses of visualizations. Several different collections of visualizations, datasets with visualizations and visualization tools are available on the website. Examples include animations of tsunamis, El Nino conditions, braided stream formation and mountain uplift. These collections are grouped by topic and range from simple animations to interactive models. A series of example activities that incorporate visualizations into classroom and laboratory activities illustrate various tactics for using these materials in different types of settings. Activities cover topics such as ocean circulation, land use changes, earthquake simulations and the use of Google Earth to explore geologic processes. These materials can be found at http://serc.carleton.edu/NAGTWorkshops/visualization. Faculty and developers of visualization tools are encouraged to submit teaching activities, references or visualizations to the collections.

  14. Low conspicuity of motorcycles for car drivers: dominant role of bottom-up control of visual attention or deficit of top-down control?

    PubMed

    Rogé, Joceline; Douissembekov, Evgueni; Vienne, Fabrice

    2012-02-01

    The aim of this study was to evaluate whether the low visibility of motorcycles is the result of their low cognitive conspicuity and/or their low sensory conspicuity for car drivers. In several cases of collision between a car and a motorcycle, the car driver failed to detect the motorcyclist in time to avoid the collision. To test the low cognitive conspicuity hypothesis, 42 car drivers (mean age 32.02 years), including 21 motorcyclist motorists and 21 non-motorcyclist motorists, carried out a motorcycle detection task in a car-driving simulator. To test the low sensory conspicuity hypothesis, the authors studied the effect of the color contrast between motorcycles and the road surface on the ability of car drivers to detect motorcycles when they appear from different parts of the road. A high level of color contrast enhanced the visibility of motorcycles when they appeared in front of the participants. Moreover, when motorcyclists appeared from behind the participants, the motorcyclist motorists detected oncoming motorcycles at a greater distance than did the non-motorcyclist motorists. Motorcyclist motorists carried out more saccades and captured information rapidly (on their rearview mirrors and on the road in front of them). The results related to the sensory conspicuity and cognitive conspicuity of motorcycles for car drivers are discussed from the viewpoint of visual attention theories. The practical implications of these results and future lines of research related to training methods for car drivers are considered.

  15. Are children with low vision adapted to the visual environment in classrooms of mainstream schools?

    PubMed Central

    Negiloni, Kalpa; Ramani, Krishna Kumar; Jeevitha, R; Kalva, Jayashree; Sudhir, Rachapalle Reddi

    2018-01-01

    Purpose: The study aimed to evaluate the classroom environment of children with low vision and provide recommendations to reduce visual stress, with a focus on mainstream schooling. Methods: The medical records of 110 children (5–17 years) seen in a low vision clinic during a 1-year period (2015) at a tertiary care center in south India were extracted. The visual function levels of the children were compared to the details of their classroom environment. The study evaluated and recommended the chalkboard visual task size and viewing distance required for children with mild, moderate, and severe visual impairment (VI). Results: The major causes of low vision based on the site of abnormality and etiology were retinal (80%) and hereditary (67%) conditions, respectively, in children with mild (n = 18), moderate (n = 72), and severe (n = 20) VI. Many of the children (72%) had difficulty in viewing the chalkboard, and common strategies used for better visibility included copying from friends (47%) and moving closer to the chalkboard (42%). To view the chalkboard with reduced visual stress, a child with mild VI can be seated at a maximum distance of 4.3 m from the chalkboard, with the minimum size of the visual task (height of lowercase letters written on the chalkboard) recommended to be 3 cm. For the 3/60–6/60 range, the maximum viewing distance with a visual task size of 4 cm is recommended to be 85 cm to 1.7 m. Conclusion: Simple modifications of visual task size and seating arrangements can give children with low vision better visibility of the chalkboard and reduced visual stress, helping them manage in mainstream schools. PMID:29380777
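
    The geometry behind such recommendations is the visual angle subtended by a letter at the viewing distance, usually related to acuity through the convention that a just-resolvable optotype spans about five times the minimum angle of resolution (MAR). The sketch below only illustrates that calculation with assumed numbers; it is not the authors' derivation and includes no comfort margin.

        import math

        ARCMIN = math.pi / (180 * 60)          # radians per arcminute

        def subtended_arcmin(height_m, distance_m):
            """Visual angle (arcmin) subtended by a letter of given height at a given distance."""
            return 2 * math.atan(height_m / (2 * distance_m)) / ARCMIN

        # The abstract's mild-VI recommendation: 3 cm letters viewed from 4.3 m.
        print(f"{subtended_arcmin(0.03, 4.3):.1f} arcmin")      # ~24 arcmin

        # Threshold letter height for a hypothetical MAR of 3 arcmin (roughly 6/18 acuity),
        # using the 5 x MAR optotype convention; recommended sizes sit above this threshold.
        mar = 3.0
        threshold_height = 2 * 4.3 * math.tan(5 * mar * ARCMIN / 2)
        print(f"{threshold_height * 100:.1f} cm")               # ~1.9 cm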

  16. Time Limits in Testing: An Analysis of Eye Movements and Visual Attention in Spatial Problem Solving

    ERIC Educational Resources Information Center

    Roach, Victoria A.; Fraser, Graham M.; Kryklywy, James H.; Mitchell, Derek G. V.; Wilson, Timothy D.

    2017-01-01

    Individuals with an aptitude for interpreting spatial information (high mental rotation ability: HMRA) typically master anatomy with more ease, and more quickly, than those with low mental rotation ability (LMRA). This article explores how visual attention differs with time limits on spatial reasoning tests. Participants were assorted to two…

  17. EyeMusic: Introducing a "visual" colorful experience for the blind using auditory sensory substitution.

    PubMed

    Abboud, Sami; Hanassy, Shlomi; Levy-Tzedek, Shelly; Maidenbaum, Shachar; Amedi, Amir

    2014-01-01

    Sensory-substitution devices (SSDs) provide auditory or tactile representations of visual information. These devices often generate unpleasant sensations and mostly lack color information. We present here a novel SSD aimed at addressing these issues. We developed the EyeMusic, a novel visual-to-auditory SSD for the blind, providing both shape and color information. Our design uses musical notes on a pentatonic scale generated by natural instruments to convey the visual information in a pleasant manner. A short behavioral protocol was utilized to train the blind to extract shape and color information, and to test their acquired abilities. Finally, we conducted a survey and a comparison task to assess the pleasantness of the generated auditory stimuli. We show that basic shape and color information can be decoded from the generated auditory stimuli. High performance levels were achieved by all participants following as little as 2-3 hours of training. Furthermore, we show that users indeed found the stimuli pleasant and potentially tolerable for prolonged use. The novel EyeMusic algorithm provides an intuitive and relatively pleasant way for the blind to extract shape and color information. We suggest that this might help facilitate visual rehabilitation because of the added functionality and enhanced pleasantness.
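
    The basic shape-to-sound mapping described here — image columns swept over time, rows mapped to pitches on a pentatonic scale — can be prototyped in a few lines of NumPy. This is a generic sketch inspired by the description, using plain sine tones and omitting the colour-to-instrument mapping; it is not the EyeMusic algorithm.

        import numpy as np

        SAMPLE_RATE = 22050
        COLUMN_SEC = 0.1
        PENTATONIC = [261.6, 293.7, 329.6, 392.0, 440.0]   # C major pentatonic (Hz)

        def sonify(image):
            """Sweep columns left to right; each bright pixel adds a sine at its row's pitch."""
            n = int(SAMPLE_RATE * COLUMN_SEC)
            t = np.arange(n) / SAMPLE_RATE
            chunks = []
            for col in image.T:                            # one time slice per column
                tone = np.zeros(n)
                for row, value in enumerate(col):
                    if value > 0:
                        tone += value * np.sin(2 * np.pi * PENTATONIC[row] * t)
                chunks.append(tone)
            out = np.concatenate(chunks)
            peak = np.abs(out).max()
            return out / peak if peak > 0 else out

        waveform = sonify(np.eye(5))                       # a diagonal becomes a rising melody
        print(waveform.shape)                              # (11025,) = 5 columns x 0.1 s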

  18. Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.

    PubMed

    Putzar, Lisa; Gondan, Matthias; Röder, Brigitte

    2012-01-01

    People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on visual and/or crossmodal input since birth.

  19. Distraction by emotional sounds: Disentangling arousal benefits and orienting costs.

    PubMed

    Max, Caroline; Widmann, Andreas; Kotz, Sonja A; Schröger, Erich; Wetzel, Nicole

    2015-08-01

    Unexpectedly occurring task-irrelevant stimuli have been shown to impair performance. They capture attention away from the main task leaving fewer resources for target processing. However, the actual distraction effect depends on various variables; for example, only target-informative distractors have been shown to cause costs of attentional orienting. Furthermore, recent studies have shown that high arousing emotional distractors, as compared with low arousing neutral distractors, can improve performance by increasing alertness. We aimed to separate costs of attentional orienting and benefits of arousal by presenting negative and neutral environmental sounds (novels) as oddballs in an auditory-visual distraction paradigm. Participants categorized pictures while task-irrelevant sounds preceded visual targets in two conditions: (a) informative sounds reliably signaled onset and occurrence of visual targets, and (b) noninformative sounds occurred unrelated to visual targets. Results confirmed that only informative novels yield distraction. Importantly, irrespective of sounds' informational value participants responded faster in trials with high arousing negative as compared with moderately arousing neutral novels. That is, costs related to attentional orienting are modulated by information, whereas benefits related to emotional arousal are independent of a sound's informational value. This favors a nonspecific facilitating cross-modal influence of emotional arousal on visual task performance and suggests that behavioral distraction by noninformative novels is controlled after their motivational significance has been determined. (c) 2015 APA, all rights reserved).

  20. Hierarchical Spatio-temporal Visual Analysis of Cluster Evolution in Electrocorticography Data

    DOE PAGES

    Murugesan, Sugeerth; Bouchard, Kristofer; Chang, Edward; ...

    2016-10-02

    Here, we present ECoG ClusterFlow, a novel interactive visual analysis tool for the exploration of high-resolution Electrocorticography (ECoG) data. Our system detects and visualizes dynamic high-level structures, such as communities, using the time-varying spatial connectivity network derived from the high-resolution ECoG data. ECoG ClusterFlow provides a multi-scale visualization of the spatio-temporal patterns underlying the time-varying communities using two views: 1) an overview summarizing the evolution of clusters over time and 2) a hierarchical glyph-based technique that uses data aggregation and small multiples techniques to visualize the propagation of clusters in their spatial domain. ECoG ClusterFlow makes it possible 1) to compare the spatio-temporal evolution patterns across various time intervals, 2) to compare the temporal information at varying levels of granularity, and 3) to investigate the evolution of spatial patterns without occluding the spatial context information. Lastly, we present case studies done in collaboration with neuroscientists on our team for both simulated and real epileptic seizure data, aimed at evaluating the effectiveness of our approach.
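
    A minimal stand-in for the underlying data reduction — windowed connectivity matrices whose thresholded graphs are partitioned into communities per time step — is sketched below with NumPy and NetworkX. The signals, window length and threshold are arbitrary, and this is not the ECoG ClusterFlow pipeline or its hierarchical visualization.

        import numpy as np
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        rng = np.random.default_rng(1)
        # Hypothetical 16-channel recording with two correlated channel groups.
        base1, base2 = rng.normal(size=1000), rng.normal(size=1000)
        signals = np.vstack([base1 + 0.5 * rng.normal(size=(8, 1000)),
                             base2 + 0.5 * rng.normal(size=(8, 1000))])

        window, step, threshold = 250, 125, 0.3
        for start in range(0, signals.shape[1] - window + 1, step):
            corr = np.corrcoef(signals[:, start:start + window])
            adj = ((np.abs(corr) > threshold) & ~np.eye(len(corr), dtype=bool)).astype(float)
            communities = greedy_modularity_communities(nx.from_numpy_array(adj))
            print(f"t={start:4d}: {len(communities)} communities")   # 2 per window here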

  1. An fMRI-study of locally oriented perception in autism: altered early visual processing of the block design test.

    PubMed

    Bölte, S; Hubl, D; Dierks, T; Holtmann, M; Poustka, F

    2008-01-01

    Autism has been associated with enhanced local processing on visual tasks. Originally, this was based on findings that individuals with autism exhibited peak performance on the block design test (BDT) from the Wechsler Intelligence Scales. In autism, the neurofunctional correlates of local bias on this test have not yet been established, although there is evidence of alterations in the early visual cortex. Functional MRI was used to analyze hemodynamic responses in the striate and extrastriate visual cortex during BDT performance and a color counting control task in subjects with autism compared to healthy controls. In autism, BDT processing was accompanied by low blood oxygenation level-dependent signal changes in the right ventral quadrant of V2. Findings indicate that, in autism, locally oriented processing of the BDT is associated with altered responses of angle and grating-selective neurons, that contribute to shape representation, figure-ground, and gestalt organization. The findings favor a low-level explanation of BDT performance in autism.

  2. Manipulating and Visualizing Molecular Interactions in Customized Nanoscale Spaces

    NASA Astrophysics Data System (ADS)

    Stabile, Francis; Henkin, Gil; Berard, Daniel; Shayegan, Marjan; Leith, Jason; Leslie, Sabrina

    We present a dynamically adjustable nanofluidic platform for formatting the conformations of and visualizing the interaction kinetics between biomolecules in solution, offering new time resolution and control of the reaction processes. This platform extends convex lens-induced confinement (CLiC), a technique for imaging molecules under confinement, by introducing a system for in situ modification of the chemical environment; this system uses a deep microchannel to diffusively exchange reagents within the nanoscale imaging region, whose height is fixed by a nanopost array. To illustrate, we visualize and manipulate salt-induced, surfactant-induced, and enzyme-induced reactions between small-molecule reagents and DNA molecules, where the conformations of the DNA molecules are formatted by the imposed nanoscale confinement. By using nanofabricated, nonabsorbing, low-background glass walls to confine biomolecules, our nanofluidic platform facilitates quantitative exploration of physiologically and biotechnologically relevant processes at the nanoscale. This device provides new kinetic information about dynamic chemical processes at the single-molecule level, using advancements in the CLiC design including a microchannel-based diffuser and postarray-based dialysis slit.

  3. Cortical feedback signals generalise across different spatial frequencies of feedforward inputs.

    PubMed

    Revina, Yulia; Petro, Lucy S; Muckli, Lars

    2017-09-22

    Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or alternatively recover fine-grained structure by targeting small receptive fields in V1. We tested if feedback signals generalise across different spatial frequencies of feedforward inputs, or if they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information as it carries information about both low (LSF) and high spatial frequencies (HSF). Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap in spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  4. Battlefield Visualization

    DTIC Science & Technology

    1998-12-15

    A study analyzing battlefield visualization (BV) as a component of information dominance and superiority. This study outlines basic requirements for effective BV in terms of terrain data, information systems (synthetic environment; COA development and analysis tools) and BV development management, with a focus on technology insertion strategies. This study also reports on existing BV systems and provides 16 recommendations for Army BV support efforts, including the interested organization, funding level, and duration of effort for each recommended action.

  5. Sensitivity to synchronicity of biological motion in normal and amblyopic vision

    PubMed Central

    Luu, Jennifer Y.; Levi, Dennis M.

    2017-01-01

    Amblyopia is a developmental disorder of spatial vision that results from abnormal early visual experience usually due to the presence of strabismus, anisometropia, or both strabismus and anisometropia. Amblyopia results in a range of visual deficits that cannot be corrected by optics because the deficits reflect neural abnormalities. Biological motion refers to the motion patterns of living organisms, and is normally displayed as points of lights positioned at the major joints of the body. In this experiment, our goal was twofold. We wished to examine whether the human visual system in people with amblyopia retained the higher-level processing capabilities to extract visual information from the synchronized actions of others, therefore retaining the ability to detect biological motion. Specifically, we wanted to determine if the synchronized interaction of two agents performing a dancing routine allowed the amblyopic observer to use the actions of one agent to predict the expected actions of a second agent. We also wished to establish whether synchronicity sensitivity (detection of synchronized versus desynchronized interactions) is impaired in amblyopic observers relative to normal observers. The two aims are differentiated in that the first aim looks at whether synchronized actions result in improved expected action predictions while the second aim quantitatively compares synchronicity sensitivity, or the ratio of desynchronized to synchronized detection sensitivities, to determine if there is a difference between normal and amblyopic observers. Our results show that the ability to detect biological motion requires more samples in both eyes of amblyopes than in normal control observers. The increased sample threshold is not the result of low-level losses but may reflect losses in feature integration due to undersampling in the amblyopic visual system. However, like normal observers, amblyopes are more sensitive to synchronized versus desynchronized interactions, indicating that higher-level processing of biological motion remains intact. We also found no impairment in synchronicity sensitivity in the amblyopic visual system relative to the normal visual system. Since there is no impairment in synchronicity sensitivity in either the nonamblyopic or amblyopic eye of amblyopes, our results suggest that the higher order processing of biological motion is intact. PMID:23474301

  6. Image-plane processing of visual information

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.

    1984-01-01

    Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
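
    As an illustrative aside (not part of the original record), the edge-enhancing band-pass response described above can be approximated by a difference-of-Gaussians filter; the sketch below is a minimal example, and the two sigma values are assumptions chosen for demonstration only.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def bandpass_enhance(image, sigma_center=1.0, sigma_surround=3.0):
            """Difference-of-Gaussians band-pass filter: enhances edges while
            attenuating low-frequency shading and high-frequency noise."""
            image = np.asarray(image, dtype=float)
            return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

        # Example: a synthetic vertical step edge.
        img = np.zeros((64, 64))
        img[:, 32:] = 1.0
        edges = bandpass_enhance(img)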

  7. Anorexia nervosa and body dysmorphic disorder are associated with abnormalities in processing visual information.

    PubMed

    Li, W; Lai, T M; Bohon, C; Loo, S K; McCurdy, D; Strober, M; Bookheimer, S; Feusner, J

    2015-07-01

    Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities--event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI)--to test for abnormal activity associated with early visual signaling. We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.

  8. Adolescents with Low Vision: Perceptions of Driving and Nondriving

    ERIC Educational Resources Information Center

    Sacks, Sharon Zell; Rosenblum, L. Penny

    2006-01-01

    Two studies examined how adolescents with low vision perceive their ability to drive. The results of both studies indicated similarities in the participants' responses with respect to knowledge of visual impairment, information about options for driving with low vision, frustrations and obstacles imposed by not being able to drive, and independent…

  9. Two visual systems in monitoring of dynamic traffic: effects of visual disruption.

    PubMed

    Zheng, Xianjun Sam; McConkie, George W

    2010-05-01

    Studies from neurophysiology and neuropsychology provide support for two separate object- and location-based visual systems, ventral and dorsal. In the driving context, a study was conducted using a change detection paradigm to explore drivers' ability to monitor the dynamic traffic flow, and the effects of visual disruption on these two visual systems. While driving, a discrete change, such as vehicle location, color, or identity, was occasionally made in one of the vehicles on the road ahead of the driver. Experimental results show that without visual disruption, all changes were detected very well; yet, these equally perceivable changes were disrupted differently by a brief blank display (150 ms): the detection of location changes was especially reduced. The disruption effects were also larger for the parked vehicle than for the moving ones. The findings support different roles for the two visual systems in monitoring dynamic traffic: the "where", dorsal system, tracks vehicle spatiotemporal information at the perceptual level, encoding information in a coarse and transient manner; whereas the "what", ventral system, monitors vehicles' featural information, encoding information more accurately and robustly. Both systems work together, contributing to the driver's situation awareness of traffic. Benefits and limitations of using the driving simulation are also discussed. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  10. How scientists develop competence in visual communication

    NASA Astrophysics Data System (ADS)

    Ostergren, Marilyn

    Visuals (maps, charts, diagrams and illustrations) are an important tool for communication in most scientific disciplines, which means that scientists benefit from having strong visual communication skills. This dissertation examines the nature of competence in visual communication and the means by which scientists acquire this competence. This examination takes the form of an extensive multi-disciplinary integrative literature review and a series of interviews with graduate-level science students. The results are presented as a conceptual framework that lays out the components of competence in visual communication, including the communicative goals of science visuals, the characteristics of effective visuals, the skills and knowledge needed to create effective visuals and the learning experiences that promote the acquisition of these forms of skill and knowledge. This conceptual framework can be used to inform pedagogy and thus help graduate students achieve a higher level of competency in this area; it can also be used to identify aspects of acquiring competence in visual communication that need further study.

  11. An investigation of multitasking information behavior and the influence of working memory and flow

    NASA Astrophysics Data System (ADS)

    Alexopoulou, Peggy; Hepworth, Mark; Morris, Anne

    2015-02-01

    This study explored the multitasking information behaviour of Web users and how this is influenced by working memory, flow and Personal, Artefact and Task characteristics, as described in the PAT model. The research was exploratory, using a pragmatic, mixed-methods approach. Thirty university students participated: 10 psychologists, 10 accountants and 10 mechanical engineers. The data collection tools used were: pre- and post-questionnaires, a working memory test, a flow state scale test, audio-visual data, web search logs, think-aloud data, observation, and the critical decision method. All participants searched for information on the Web on four topics: two for which they had prior knowledge and two for which they had none. Perception of task complexity was found to be related to working memory. People with low working memory reported a significant increase in task complexity after they had completed information-searching tasks for which they had no prior knowledge; this was not the case for tasks with prior knowledge. Regarding flow and task complexity, the results confirmed the suggestion of the PAT model (Finneran and Zhang, 2003), which proposed that a complex task can lead to anxiety and low flow levels as well as to perceived challenge and high flow levels. However, the results did not confirm the suggestion of the PAT model regarding the characteristics of web search systems, especially perceived vividness. All participants experienced high vividness, whereas, according to the PAT model, only people with high flow should experience high levels of vividness. Flow affected the degree of change in participants' knowledge: people with high flow gained more knowledge than people with low flow on tasks without prior knowledge. Furthermore, accountants felt that tasks without prior knowledge were less complex at the end of the web-seeking procedure than did psychologists and mechanical engineers. Finally, the three disciplines appeared to differ in multitasking information behaviour characteristics such as queries, web search sessions and opened tabs/windows.

  12. Reading performance with low-vision aids and vision-related quality of life after macular translocation surgery in patients with age-related macular degeneration.

    PubMed

    Nguyen, Nhung X; Besch, Dorothea; Bartz-Schmidt, Karl; Gelisken, Faik; Trauzettel-Klosinski, Susanne

    2007-12-01

    The aim of the present study was to evaluate the power of magnification required, reading performance with low-vision aids and vision-related quality of life with reference to reading ability and ability to carry out day-to-day activities in patients after macular translocation. This study included 15 patients who had undergone macular translocation with 360-degree peripheral retinectomy. The mean length of follow-up was 19.2 +/- 10.8 months (median 11 months). At the final examination, the impact of visual impairment on reading ability and quality of life was assessed according to a modified 9-item questionnaire in conjunction with a comprehensive clinical examination, which included assessment of best corrected visual acuity (BCVA), the magnification power required for reading, use of low-vision aids and reading speed. Patients rated the extent to which low vision restricted their ability to read and participate in other activities that affect quality of life. Responses were scored on a scale of 1.0 (optimum self-evaluation) to 5.0 (very poor). In the operated eye, overall mean postoperative BCVA (distance) was not significantly better than mean preoperative BCVA (0.11 +/- 0.06 and 0.15 +/- 0.08, respectively; p = 0.53). However, 53% of patients reported a subjective increase in visual function after treatment. At the final visit, the mean magnification required was x 7.7 +/- 6.7. A total of 60% of patients needed optical magnifiers for reading and in 40% of patients closed-circuit TV systems were necessary. All patients were able to read newspaper print using adapted low-vision aids at a mean reading speed of 71 +/- 40 words per minute. Mean self-reported scores were 3.2 +/- 1.1 for reading, 2.5 +/- 0.7 for day-to-day activities and 2.7 +/- 3.0 for outdoor walking and using steps or stairs. Patients' levels of dependency were significantly correlated with scores for reading (p = 0.01), day-to-day activities (p < 0.001) and outdoor walking and using steps (p = 0.001). The evaluation of self-reported visual function and vision-related quality of life in patients after macular translocation is necessary to obtain detailed information on treatment effects. Our results indicated improvement in patients' subjective evaluations of visual function, without significant improvement in visual acuity. The postoperative clinical benefits of treatment coincide with subjective benefits in terms of reading ability, quality of life and patient satisfaction. Our study confirms the importance and efficiency of visual rehabilitation with aids for low vision after surgery.

  13. Visual exploration and analysis of human-robot interaction rules

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Boyles, Michael J.

    2013-01-01

    We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.

  14. A Graphene-enhanced imaging of microRNA with enzyme-free signal amplification of catalyzed hairpin assembly in living cells.

    PubMed

    Liu, Haiyun; Tian, Tian; Ji, Dandan; Ren, Na; Ge, Shenguang; Yan, Mei; Yu, Jinghua

    2016-11-15

    In situ imaging of miRNA in living cells could help us to monitor miRNA expression in real time and obtain accurate information for studying miRNA-related bioprocesses and disease. Given the low-level expression of miRNA, amplification strategies for intracellular miRNA are imperative. Here, we propose a non-destructive, enzyme-free amplification strategy in living cells using catalyzed hairpin assembly (CHA) based on graphene oxide (GO) for cellular miRNA imaging. The enzyme-free CHA exhibits stringent recognition and excellent signal amplification of miRNA in living cells. GO is a good candidate as a fluorescence quencher and cellular carrier. Taking advantage of CHA and GO, we can monitor low-level miRNA in living cells in a simple, sensitive and real-time manner. Finally, imaging of miRNAs in cells with different expression levels is realized. The novel method could supply an effective tool to visualize intracellular low-level miRNAs and help us to further understand the role of miRNAs in cellular processes. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Visualization portal for genetic variation (VizGVar): a tool for interactive visualization of SNPs and somatic mutations in exons, genes and protein domains.

    PubMed

    Solano-Román, Antonio; Alfaro-Arias, Verónica; Cruz-Castillo, Carlos; Orozco-Solano, Allan

    2018-03-15

    VizGVar was designed to meet the growing need of the research community for improved genomic and proteomic data viewers that benefit from better information visualization. We implemented a new information architecture and applied user centered design principles to provide a new improved way of visualizing genetic information and protein data related to human disease. VizGVar connects the entire database of Ensembl protein motifs, domains, genes and exons with annotated SNPs and somatic variations from PharmGKB and COSMIC. VizGVar precisely represents genetic variations and their respective location by colored curves to designate different types of variations. The structured hierarchy of biological data is reflected in aggregated patterns through different levels, integrating several layers of information at once. VizGVar provides a new interactive, web-based JavaScript visualization of somatic mutations and protein variation, enabling fast and easy discovery of clinically relevant variation patterns. VizGVar is accessible at http://vizport.io/vizgvar; http://vizport.io/vizgvar/doc/. asolano@broadinstitute.org or allan.orozcosolano@ucr.ac.cr.

  16. Students using visual thinking to learn science in a Web-based environment

    NASA Astrophysics Data System (ADS)

    Plough, Jean Margaret

    United States students' science test scores are low, especially in problem solving, and traditional science instruction could be improved. Consequently, visual thinking, constructing science structures, and problem solving in a web-based environment may be valuable strategies for improving science learning. This ethnographic study examined the science learning of fifteen fourth grade students in an after school computer club involving diverse students at an inner city school. The investigation was done from the perspective of the students, and it described the processes of visual thinking, web page construction, and problem solving in a web-based environment. The study utilized informal group interviews, field notes, Visual Learning Logs, and student web pages, and incorporated a Standards-Based Rubric which evaluated students' performance on eight science and technology standards. The Visual Learning Logs were drawings done on the computer to represent science concepts related to the Food Chain. Students used the internet to search for information on a plant or animal of their choice. Next, students used this internet information, with the information from their Visual Learning Logs, to make web pages on their plant or animal. Later, students linked their web pages to form Science Structures. Finally, students linked their Science Structures with the structures of other students, and used these linked structures as models for solving problems. Further, during informal group interviews, students answered questions about visual thinking, problem solving, and science concepts. The results of this study showed clearly that (1) making visual representations helped students understand science knowledge, (2) making links between web pages helped students construct Science Knowledge Structures, and (3) students themselves said that visual thinking helped them learn science. In addition, this study found that when using Visual Learning Logs, the main overall ideas of the science concepts were usually represented accurately. Further, looking for information on the internet may cause new problems in learning. Likewise, being absent, starting late, and/or dropping out all may negatively influence students' proficiency on the standards. Finally, the way Science Structures are constructed and linked may provide insights into the way individual students think and process information.

  17. Dissociation of sensitivity to spatial frequency in word and face preferential areas of the fusiform gyrus.

    PubMed

    Woodhead, Zoe Victoria Joan; Wise, Richard James Surtees; Sereno, Marty; Leech, Robert

    2011-10-01

    Different cortical regions within the ventral occipitotemporal junction have been reported to show preferential responses to particular objects. Thus, it is argued that there is evidence for a left-lateralized visual word form area and a right-lateralized fusiform face area, but the unique specialization of these areas remains controversial. Words are characterized by greater power in the high spatial frequency (SF) range, whereas faces comprise a broader range of high and low frequencies. We investigated how these high-order visual association areas respond to simple sine-wave gratings that varied in SF. Using functional magnetic resonance imaging, we demonstrated lateralization of activity that was concordant with the low-level visual property of words and faces; left occipitotemporal cortex is more strongly activated by high than by low SF gratings, whereas the right occipitotemporal cortex responded more to low than high spatial frequencies. Therefore, the SF of a visual stimulus may bias the lateralization of processing irrespective of its higher order properties.
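
    As an illustrative aside (not part of the original record), sine-wave gratings of different spatial frequencies can be generated as below; this is a minimal sketch, and the image size, cycle counts and orientation are assumptions rather than the study's stimulus parameters.

        import numpy as np

        def sine_grating(size=256, cycles=8, orientation_deg=0.0):
            """Return a size x size luminance grating with `cycles` cycles per image."""
            y, x = np.mgrid[0:size, 0:size] / float(size)      # coordinates in [0, 1)
            theta = np.deg2rad(orientation_deg)
            phase = 2 * np.pi * cycles * (x * np.cos(theta) + y * np.sin(theta))
            return 0.5 + 0.5 * np.sin(phase)                   # luminance in [0, 1]

        low_sf = sine_grating(cycles=2)     # low spatial frequency
        high_sf = sine_grating(cycles=32)   # high spatial frequency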

  18. Acquiring skill at medical image inspection: learning localized in early visual processes

    NASA Astrophysics Data System (ADS)

    Sowden, Paul T.; Davies, Ian R. L.; Roling, Penny; Watt, Simon J.

    1997-04-01

    Acquisition of the skill of medical image inspection could be due to changes in visual search processes, 'low-level' sensory learning, and higher level 'conceptual learning.' Here, we report two studies that investigate the extent to which learning in medical image inspection involves low-level learning. Early in the visual processing pathway cells are selective for direction of luminance contrast. We exploit this in the present studies by using transfer across direction of contrast as a 'marker' to indicate the level of processing at which learning occurs. In both studies twelve observers trained for four days at detecting features in x-ray images (experiment one: discs in the Nijmegen phantom; experiment two: micro-calcification clusters in digitized mammograms). Half the observers examined negative luminance contrast versions of the images and the remainder examined positive contrast versions. On the fifth day, observers swapped to inspect their respective opposite-contrast images. In both experiments learning occurred across sessions. In experiment one, learning did not transfer across direction of luminance contrast, while in experiment two there was only partial transfer. These findings are consistent with the contention that some of the learning was localized early in the visual processing pathway. The implications of these results for current medical image inspection training schedules are discussed.
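
    As an illustrative aside (not part of the original record), opposite-contrast versions of a grey-level stimulus can be produced by reflecting pixel values about the image mean; this minimal sketch assumes a normalised grey-level image and is not the authors' stimulus pipeline.

        import numpy as np

        def invert_contrast(image):
            """Flip the direction of luminance contrast about the image mean,
            leaving the mean luminance itself unchanged."""
            image = np.asarray(image, dtype=float)
            return 2.0 * image.mean() - image

        # A bright disc on a dark background becomes a dark disc on a bright background.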

  19. The forest, the trees, and the leaves: Differences of processing across development.

    PubMed

    Krakowski, Claire-Sara; Poirel, Nicolas; Vidal, Julie; Roëll, Margot; Pineau, Arlette; Borst, Grégoire; Houdé, Olivier

    2016-08-01

    To act and think, children and adults are continually required to ignore irrelevant visual information to focus on task-relevant items. As real-world visual information is organized into structures, we designed a feature visual search task containing 3-level hierarchical stimuli (i.e., local shapes that constituted intermediate shapes that formed the global figure) that was presented to 112 participants aged 5, 6, 9, and 21 years old. This task allowed us to explore (a) which level is perceptively the most salient at each age (i.e., the fastest detected level) and (b) what kind of attentional processing occurs for each level across development (i.e., efficient processing: detection time does not increase with the number of stimuli on the display; less efficient processing: detection time increases linearly with the growing number of distractors). Results showed that the global level was the most salient at 5 years of age, whereas the global and intermediate levels were both salient for 9-year-olds and adults. Interestingly, at 6 years of age, the intermediate level was the most salient level. Second, all participants showed an efficient processing of both intermediate and global levels of hierarchical stimuli, and a less efficient processing of the local level, suggesting a local disadvantage rather than a global advantage in visual search. The cognitive cost for selecting the local target was higher for 5- and 6-year-old children compared to 9-year-old children and adults. These results are discussed with regards to the development of executive control. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  20. Effects of Congruent and Incongruent Stimulus Colour on Flavour Discriminations.

    PubMed

    Wieneke, Leonie; Schmuck, Pauline; Zacher, Julia; Greenlee, Mark W; Plank, Tina

    2018-01-01

    In addition to gustatory, olfactory and somatosensory input, visual information plays a role in our experience of food and drink. We asked whether colour in this context has an effect at the perceptual level via multisensory integration or if higher level cognitive factors are involved. Using an articulatory suppression task, comparable to Stevenson and Oaten, cognitive processes should be interrupted during a flavour discriminatory task, so that any residual colour effects would be traceable to low-level integration. Subjects judged in a three-alternative forced-choice paradigm the presence of a different flavour (triangle test). On each trial, they tasted three liquids from identical glasses, with one of them containing a different flavour. The substances were congruent in colour and flavour, incongruent or uncoloured. Subjects who performed the articulatory suppression task responded faster and made fewer errors. The findings suggest a role for higher level cognitive processing in the effect of colour on flavour judgements.

  1. Effects of Congruent and Incongruent Stimulus Colour on Flavour Discriminations

    PubMed Central

    Wieneke, Leonie; Schmuck, Pauline; Zacher, Julia; Plank, Tina

    2018-01-01

    In addition to gustatory, olfactory and somatosensory input, visual information plays a role in our experience of food and drink. We asked whether colour in this context has an effect at the perceptual level via multisensory integration or if higher level cognitive factors are involved. Using an articulatory suppression task, comparable to Stevenson and Oaten, cognitive processes should be interrupted during a flavour discriminatory task, so that any residual colour effects would be traceable to low-level integration. Subjects judged in a three-alternative forced-choice paradigm the presence of a different flavour (triangle test). On each trial, they tasted three liquids from identical glasses, with one of them containing a different flavour. The substances were congruent in colour and flavour, incongruent or uncoloured. Subjects who performed the articulatory suppression task responded faster and made fewer errors. The findings suggest a role for higher level cognitive processing in the effect of colour on flavour judgements. PMID:29755721

  2. Natural image statistics and low-complexity feature selection.

    PubMed

    Vasconcelos, Manuela; Vasconcelos, Nuno

    2009-02-01

    Low-complexity feature selection is analyzed in the context of visual recognition. It is hypothesized that high-order dependences of bandpass features contain little information for discrimination of natural images. This hypothesis is characterized formally by the introduction of the concepts of conjunctive interference and decomposability order of a feature set. Necessary and sufficient conditions for the feasibility of low-complexity feature selection are then derived in terms of these concepts. It is shown that the intrinsic complexity of feature selection is determined by the decomposability order of the feature set and not its dimension. Feature selection algorithms are then derived for all levels of complexity and are shown to be approximated by existing information-theoretic methods, which they consistently outperform. The new algorithms are also used to objectively test the hypothesis of low decomposability order through comparison of classification performance. It is shown that, for image classification, the gain of modeling feature dependencies has strongly diminishing returns: best results are obtained under the assumption of decomposability order 1. This suggests a generic law for bandpass features extracted from natural images: that the effect, on the dependence of any two features, of observing any other feature is constant across image classes.
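
    As an illustrative aside (not part of the original record), feature selection under decomposability order 1 amounts to ranking features by their marginal mutual information with the class label; the sketch below uses scikit-learn, and the random stand-in data and number of retained features are assumptions for demonstration.

        import numpy as np
        from sklearn.feature_selection import mutual_info_classif

        def select_order1(X, y, k=10):
            """Score each feature independently by mutual information with the
            label and keep the top k; no feature-feature dependencies are modelled."""
            scores = mutual_info_classif(X, y, random_state=0)
            keep = np.argsort(scores)[::-1][:k]
            return X[:, keep], keep

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 50))       # stand-in for bandpass image features
        y = rng.integers(0, 2, size=200)     # binary image-class labels
        X_selected, kept_indices = select_order1(X, y, k=5)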

  3. Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension.

    PubMed

    Drijvers, Linda; Özyürek, Asli

    2017-01-01

    This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture). Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions. When perceiving degraded speech in a visual context, listeners benefit more from having both visual articulators present compared with 1. This benefit was larger at 6-band than 2-band noise-vocoding, where listeners can benefit from both phonological cues from visible speech and semantic cues from iconic gestures to disambiguate speech.

  4. Excel Spreadsheet Tools for Analyzing Groundwater Level Records and Displaying Information in ArcMap

    USGS Publications Warehouse

    Tillman, Fred D.

    2009-01-01

    When beginning hydrologic investigations, a first action is often to gather existing sources of well information, compile this information into a single dataset, and visualize this information in a geographic information system (GIS) environment. This report presents tools (macros) developed using Visual Basic for Applications (VBA) for Microsoft Excel 2007 to assist in these tasks. One tool combines multiple datasets into a single worksheet and formats the resulting data for use by the other tools. A second tool produces summary information about the dataset, such as a list of unique site identification numbers, the number of water-level observations for each, and a table of the number of sites with a listed number of water-level observations. A third tool creates subsets of the original dataset based on user-specified options and produces a worksheet with water-level information for each well in the subset, including the average and standard deviation of water-level observations and maximum decline and rise in water levels between any two observations, among other information. This water-level information worksheet can be imported directly into ESRI ArcMap as an 'XY Data' file, and each of the fields of summary well information can be used for custom display. A separate set of VBA tools distributed in an additional Excel workbook creates hydrograph charts of each of the wells in the data subset produced by the aforementioned tools and produces portable document format (PDF) versions of the hydrograph charts. These PDF hydrographs can be hyperlinked to well locations in ArcMap or other GIS applications.
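
    As an illustrative aside (not part of the original record), the per-well summary described above (number of observations, mean and standard deviation of water levels, and maximum decline and rise between any two observations) can be reproduced outside Excel; the following pandas sketch is not the published VBA macros, and the column names are assumptions.

        import pandas as pd

        def summarize_wells(df):
            """Per-well summary from columns: site_id, date, water_level
            (water_level treated as an elevation; flip signs for depth-to-water data)."""
            df = df.sort_values(["site_id", "date"])

            def per_site(g):
                levels = g["water_level"]
                return pd.Series({
                    "n_obs": len(g),
                    "mean_level": levels.mean(),
                    "std_level": levels.std(),
                    "max_decline": (levels.cummax() - levels).max(),  # largest drop from any earlier observation
                    "max_rise": (levels - levels.cummin()).max(),     # largest rise from any earlier observation
                })

            return df.groupby("site_id").apply(per_site)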

  5. Feature saliency and feedback information interactively impact visual category learning

    PubMed Central

    Hammer, Rubi; Sloutsky, Vladimir; Grill-Spector, Kalanit

    2015-01-01

    Visual category learning (VCL) involves detecting which features are most relevant for categorization. VCL relies on attentional learning, which enables effectively redirecting attention to object’s features most relevant for categorization, while ‘filtering out’ irrelevant features. When features relevant for categorization are not salient, VCL relies also on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks in which they learned to categorize novel stimuli by detecting the feature dimension relevant for categorization. Tasks varied both in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks), and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load, vs. tasks with high-information non-ambiguous feedback). We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that an increased attentional load, associated with the processing of moderately ambiguous feedback, has little effect on VCL when features are salient. In low-saliency tasks, VCL relied on slower perceptual learning; but when the feedback was highly informative participants were able to ultimately attain the same performance as during the high-saliency VCL tasks. However, VCL was significantly compromised in the low-saliency mid-information feedback task. We suggest that such low-saliency mid-information learning scenarios are characterized by a ‘cognitive loop paradox’ where two interdependent learning processes have to take place simultaneously. PMID:25745404

  6. Task relevance predicts gaze in videos of real moving scenes.

    PubMed

    Howard, Christina J; Gilchrist, Iain D; Troscianko, Tom; Behera, Ardhendu; Hogg, David C

    2011-09-01

    Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382-390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search where stimuli are in constant motion and where the 'target' for the visual search is abstract and semantic in nature. Here, we investigate this issue when participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice the amount of variance in gaze likelihood as the amount of low-level visual changes over time in the video stimuli.

  7. Adaptive Visualization for Focused Personalized Information Retrieval

    ERIC Educational Resources Information Center

    Ahn, Jae-wook

    2010-01-01

    The new trend on the Web has totally changed today's information access environment. The traditional information overload problem has evolved into the qualitative level beyond the quantitative growth. The mode of producing and consuming information is changing and we need a new paradigm for accessing information. Personalized search is one of…

  8. Fast object reconstruction in block-based compressive low-light-level imaging

    NASA Astrophysics Data System (ADS)

    Ke, Jun; Sui, Dong; Wei, Ping

    2014-11-01

    In this paper we propose a simple yet effective and efficient method for long-term object tracking. Different from traditional visual tracking methods, which mainly depend on frame-to-frame correspondence, we combine high-level semantic information with low-level correspondences. Our method is formulated in a confidence selection framework, which allows the system to recover from drift and partly deal with the occlusion problem. To summarize, the algorithm can be roughly decomposed into an initialization stage and a tracking stage. In the initialization stage, an offline classifier is trained to capture object appearance information at the category level. When the video stream arrives, the pre-trained offline classifier is used to detect the potential target and initialize the tracking stage. The tracking stage consists of three parts: an online tracking part, an offline detection part and a confidence judgment part. The online tracking part captures target-specific appearance information while the detection part localizes the object based on the pre-trained offline classifier. Since there is no data dependence between online tracking and offline detection, these two parts run in parallel to significantly improve the processing speed. A confidence selection mechanism is proposed to optimize the object location. In addition, we propose a simple mechanism to judge the absence of the object: if the target is lost, the pre-trained offline classifier is utilized to re-initialize the whole algorithm once the target is re-located. In experiments, we evaluate our method on several challenging video sequences and demonstrate competitive results.
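
    As an illustrative aside (not part of the original record), the confidence judgment step described above, which arbitrates between the online tracker and the offline detector and flags the target as lost, might be sketched as follows; the Estimate fields and the threshold are hypothetical, and this is not the authors' implementation.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Estimate:
            x: float
            y: float
            w: float
            h: float
            confidence: float    # in [0, 1]

        def select_location(online: Optional[Estimate], offline: Optional[Estimate],
                            lost_threshold: float = 0.3) -> Optional[Estimate]:
            """Keep the more confident of the two candidate locations; report the
            target as lost (None) when neither is confident enough, so the offline
            detector can re-initialise tracking on later frames."""
            candidates = [e for e in (online, offline) if e is not None]
            if not candidates:
                return None
            best = max(candidates, key=lambda e: e.confidence)
            return best if best.confidence >= lost_threshold else None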

  9. Change blindness and visual memory: visual representations get rich and act poor.

    PubMed

    Varakin, D Alexander; Levin, Daniel T

    2006-02-01

    Change blindness is often taken as evidence that visual representations are impoverished, while successful recognition of specific objects is taken as evidence that they are richly detailed. In the current experiments, participants performed cover tasks that required each object in a display to be attended. Change detection trials were unexpectedly introduced and surprise recognition tests were given for nonchanging displays. For both change detection and recognition, participants had to distinguish objects from the same basic-level category, making it likely that specific visual information had to be used for successful performance. Although recognition was above chance, incidental change detection usually remained at floor. These results help reconcile demonstrations of poor change detection with demonstrations of good memory because they suggest that the capability to store visual information in memory is not reflected by the visual system's tendency to utilize these representations for purposes of detecting unexpected changes.

  10. Validation and reliability of the VF-14 questionnaire in a German population.

    PubMed

    Chiang, Peggy Pei-Chia; Fenwick, Eva; Marella, Manjula; Finger, Robert; Lamoureux, Ecosse

    2011-11-21

    To evaluate the validity, reliability, and measurement characteristics of the Visual Function 14 (VF-14) in a German sample using Rasch analysis. This was a clinic-based, cross-sectional study with 184 patients with low vision recruited from an outpatient clinic at a German eye hospital. Participants underwent a clinical examination and completed the German VF-14 scale. The validity of the VF-14 scale was assessed using Rasch analysis. The main outcome measure was the overall functional score provided by the VF-14. After collapsing two response categories for items 13 and 14, the VF-14 scale satisfied fundamental criteria to achieve fit to the Rasch model, namely, ordered thresholds, the ability to distinguish between different strata of participant ability, absence of misfitting items, no evidence of multidimensionality, and no significant differential item functioning for key sociodemographic covariates. The VF-14 is able to discriminate between participants with different levels of vision impairment and across different cultural groups. The VF-14 is a valid, reliable, and unidimensional questionnaire for use in a German population. These findings contribute to the growing evidence base for second-generation patient-reported outcome measures in ophthalmology, and support the use of the German VF-14 in tertiary eye clinics in Germany to capture the impact of visual impairment on visual function from the patient's perspective and to inform low vision rehabilitation and interventions.

  11. Sensory processing during viewing of cinematographic material: Computational modeling and functional neuroimaging

    PubMed Central

    Bordier, Cecile; Puja, Francesco; Macaluso, Emiliano

    2013-01-01

    The investigation of brain activity using naturalistic, ecologically-valid stimuli is becoming an important challenge for neuroscience research. Several approaches have been proposed, primarily relying on data-driven methods (e.g. independent component analysis, ICA). However, data-driven methods often require some post-hoc interpretation of the imaging results to draw inferences about the underlying sensory, motor or cognitive functions. Here, we propose using a biologically-plausible computational model to extract (multi-)sensory stimulus statistics that can be used for standard hypothesis-driven analyses (general linear model, GLM). We ran two separate fMRI experiments, which both involved subjects watching an episode of a TV-series. In Exp 1, we manipulated the presentation by switching on-and-off color, motion and/or sound at variable intervals, whereas in Exp 2, the video was played in the original version, with all the consequent continuous changes of the different sensory features intact. Both for vision and audition, we extracted stimulus statistics corresponding to spatial and temporal discontinuities of low-level features, as well as a combined measure related to the overall stimulus saliency. Results showed that activity in occipital visual cortex and the superior temporal auditory cortex co-varied with changes of low-level features. Visual saliency was found to further boost activity in extra-striate visual cortex plus posterior parietal cortex, while auditory saliency was found to enhance activity in the superior temporal cortex. Data-driven ICA analyses of the same datasets also identified “sensory” networks comprising visual and auditory areas, but without providing specific information about the possible underlying processes, e.g., these processes could relate to modality, stimulus features and/or saliency. We conclude that the combination of computational modeling and GLM enables the tracking of the impact of bottom–up signals on brain activity during viewing of complex and dynamic multisensory stimuli, beyond the capability of purely data-driven approaches. PMID:23202431
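
    As an illustrative aside (not part of the original record), one way to turn a low-level stimulus statistic into a hypothesis-driven (GLM) regressor is to compute frame-to-frame luminance change and convolve it with a canonical haemodynamic response function; the double-gamma HRF parameters, sampling rate and random stand-in video below are assumptions, not the authors' model.

        import numpy as np
        from scipy.stats import gamma

        def temporal_discontinuity(frames):
            """Mean absolute luminance change between successive frames
            (frames: array of shape [n_frames, height, width])."""
            return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

        def hrf(t, peak=6.0, undershoot=16.0, ratio=1.0 / 6.0):
            """Canonical double-gamma haemodynamic response function."""
            return gamma.pdf(t, peak) - ratio * gamma.pdf(t, undershoot)

        fps = 1.0                                # assume one sample per second
        frames = np.random.rand(120, 32, 32)     # stand-in for a video clip
        signal = temporal_discontinuity(frames)
        t = np.arange(0, 32, 1.0 / fps)
        regressor = np.convolve(signal, hrf(t))[: len(signal)]   # GLM regressor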

  12. Pyramidal neurovision architecture for vision machines

    NASA Astrophysics Data System (ADS)

    Gupta, Madan M.; Knopf, George K.

    1993-08-01

    The vision system employed by an intelligent robot must be active; active in the sense that it must be capable of selectively acquiring the minimal amount of relevant information for a given task. An efficient active vision system architecture that is based loosely upon the parallel-hierarchical (pyramidal) structure of the biological visual pathway is presented in this paper. Although the computational architecture of the proposed pyramidal neuro-vision system is far less sophisticated than the architecture of the biological visual pathway, it does retain some essential features such as the converging multilayered structure of its biological counterpart. In terms of visual information processing, the neuro-vision system is constructed from a hierarchy of several interactive computational levels, whereupon each level contains one or more nonlinear parallel processors. Computationally efficient vision machines can be developed by utilizing both the parallel and serial information processing techniques within the pyramidal computing architecture. A computer simulation of a pyramidal vision system for active scene surveillance is presented.
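
    As an illustrative aside (not part of the original record), the converging multilayered structure can be pictured with a simple Gaussian image pyramid, in which each level is a blurred, 2x-downsampled version of the level below; this generic sketch is not the authors' neuro-vision architecture, and the level count and blur width are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def gaussian_pyramid(image, levels=4, sigma=1.0):
            """Return a list of images from fine (level 0) to coarse (top level),
            each level blurred and downsampled by a factor of two."""
            pyramid = [np.asarray(image, dtype=float)]
            for _ in range(levels - 1):
                blurred = gaussian_filter(pyramid[-1], sigma)
                pyramid.append(blurred[::2, ::2])   # keep every second row and column
            return pyramid

        levels = gaussian_pyramid(np.random.rand(128, 128))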

  13. Harnessing the web information ecosystem with wiki-based visualization dashboards.

    PubMed

    McKeon, Matt

    2009-01-01

    We describe the design and deployment of Dashiki, a public website where users may collaboratively build visualization dashboards through a combination of a wiki-like syntax and interactive editors. Our goals are to extend existing research on social data analysis into presentation and organization of data from multiple sources, explore new metaphors for these activities, and participate more fully in the web's information ecology by providing tighter integration with real-time data. To support these goals, our design includes novel and low-barrier mechanisms for editing and layout of dashboard pages and visualizations, connection to data sources, and coordinating interaction between visualizations. In addition to describing these technologies, we provide a preliminary report on the public launch of a prototype based on this design, including a description of the activities of our users derived from observation and interviews.

  14. Improving human object recognition performance using video enhancement techniques

    NASA Astrophysics Data System (ADS)

    Whitman, Lucy S.; Lewis, Colin; Oakley, John P.

    2004-12-01

    Atmospheric scattering causes significant degradation in the quality of video images, particularly when imaging over long distances. The principal problem is the reduction in contrast due to scattered light. It is known that when the scattering particles are not too large compared with the imaging wavelength (i.e. Mie scattering) then high spatial resolution information may be contained within a low-contrast image. Unfortunately this information is not easily perceived by a human observer, particularly when using a standard video monitor. A secondary problem is the difficulty of achieving a sharp focus since automatic focus techniques tend to fail in such conditions. Recently several commercial colour video processing systems have become available. These systems use various techniques to improve image quality in low contrast conditions whilst retaining colour content. These systems produce improvements in subjective image quality in some situations, particularly in conditions of haze and light fog. There is also some evidence that video enhancement leads to improved ATR performance when used as a pre-processing stage. Psychological literature indicates that low contrast levels generally lead to a reduction in the performance of human observers in carrying out simple visual tasks. The aim of this paper is to present the results of an empirical study on object recognition in adverse viewing conditions. The chosen visual task was vehicle number plate recognition at long ranges (500 m and beyond). Two different commercial video enhancement systems are evaluated using the same protocol. The results show an increase in effective range with some differences between the different enhancement systems.

  15. Flicker Adaptation of Low-Level Cortical Visual Neurons Contributes to Temporal Dilation

    ERIC Educational Resources Information Center

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Several seconds of adaptation to a flickered stimulus causes a subsequent brief static stimulus to appear longer in duration. Nonsensory factors, such as increased arousal and attention, have been thought to mediate this flicker-based temporal-dilation aftereffect. In this study, we provide evidence that adaptation of low-level cortical visual…

  16. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex.

    PubMed

    Sugihara, Tadashi; Diltz, Mark D; Averbeck, Bruno B; Romanski, Lizabeth M

    2006-10-25

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O'Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication.

  17. Integration of Auditory and Visual Communication Information in the Primate Ventrolateral Prefrontal Cortex

    PubMed Central

    Sugihara, Tadashi; Diltz, Mark D.; Averbeck, Bruno B.; Romanski, Lizabeth M.

    2009-01-01

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O’Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication. PMID:17065454

  18. Accurate expectancies diminish perceptual distraction during visual search

    PubMed Central

    Sy, Jocelyn L.; Guerin, Scott A.; Stegman, Anna; Giesbrecht, Barry

    2014-01-01

    The load theory of visual attention proposes that efficient selective perceptual processing of task-relevant information during search is determined automatically by the perceptual demands of the display. If the perceptual demands required to process task-relevant information are not enough to consume all available capacity, then the remaining capacity automatically and exhaustively “spills-over” to task-irrelevant information. The spill-over of perceptual processing capacity increases the likelihood that task-irrelevant information will impair performance. In two visual search experiments, we tested the automaticity of the allocation of perceptual processing resources by measuring the extent to which the processing of task-irrelevant distracting stimuli was modulated by both perceptual load and top-down expectations using behavior, functional magnetic resonance imaging, and electrophysiology. Expectations were generated using a trial-by-trial cue that provided information about the likely load of the upcoming visual search task. When the cues were valid, behavioral interference was eliminated and the influence of load on frontoparietal and visual cortical responses was attenuated relative to when the cues were invalid. In conditions in which task-irrelevant information interfered with performance and modulated visual activity, individual differences in mean blood oxygenation level dependent responses measured from the left intraparietal sulcus were negatively correlated with individual differences in the severity of distraction. These results are consistent with the interpretation that a top-down biasing mechanism interacts with perceptual load to support filtering of task-irrelevant information. PMID:24904374

  19. Novel Scientific Visualization Interfaces for Interactive Information Visualization and Sharing

    NASA Astrophysics Data System (ADS)

    Demir, I.; Krajewski, W. F.

    2012-12-01

    As geoscientists are confronted with increasingly massive datasets from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate the understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize and share large data sets with general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data, and modify the parameters to create custom views of the data to gain insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component to build comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates information visualization and communication tools in the Iowa Flood Information System (IFIS), developed within the light of these challenges. The IFIS is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to and visualization of flood inundation maps, real-time flood conditions, flood forecasts both short-term and seasonal, and other flood-related data for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and rainfall conditions are available in the IFIS. 2D and 3D interactive visualizations in the IFIS make the data more understandable to general public. Users are able to filter data sources for their communities and selected rivers. The data and information on IFIS is also accessible through web services and mobile applications. The IFIS is optimized for various browsers and screen sizes to provide access through multiple platforms including tablets and mobile devices. Multiple view modes in the IFIS accommodate different user types from general public to researchers and decision makers by providing different level of tools and details. River view mode allows users to visualize data from multiple IFC bridge sensors and USGS stream gauges to follow flooding condition along a river. The IFIS will help communities make better-informed decisions on the occurrence of floods, and will alert communities in advance to help minimize damage of floods.

  20. Object grouping based on real-world regularities facilitates perception by reducing competitive interactions in visual cortex

    PubMed Central

    Kaiser, Daniel; Stein, Timo; Peelen, Marius V.

    2014-01-01

    In virtually every real-life situation humans are confronted with complex and cluttered visual environments that contain a multitude of objects. Because of the limited capacity of the visual system, objects compete for neural representation and cognitive processing resources. Previous work has shown that such attentional competition is partly object based, such that competition among elements is reduced when these elements perceptually group into an object based on low-level cues. Here, using functional MRI (fMRI) and behavioral measures, we show that the attentional benefit of grouping extends to higher-level grouping based on the relative position of objects as experienced in the real world. An fMRI study designed to measure competitive interactions among objects in human visual cortex revealed reduced neural competition between objects when these were presented in commonly experienced configurations, such as a lamp above a table, relative to the same objects presented in other configurations. In behavioral visual search studies, we then related this reduced neural competition to improved target detection when distracter objects were shown in regular configurations. Control studies showed that low-level grouping could not account for these results. We interpret these findings as reflecting the grouping of objects based on higher-level spatial-relational knowledge acquired through a lifetime of seeing objects in specific configurations. This interobject grouping effectively reduces the number of objects that compete for representation and thereby contributes to the efficiency of real-world perception. PMID:25024190
