Semantics of the visual environment encoded in parahippocampal cortex
Bonner, Michael F.; Price, Amy Rose; Peelle, Jonathan E.; Grossman, Murray
2016-01-01
Semantic representations capture the statistics of experience and store this information in memory. A fundamental component of this memory system is knowledge of the visual environment, including knowledge of objects and their associations. Visual semantic information underlies a range of behaviors, from perceptual categorization to cognitive processes such as language and reasoning. Here we examine the neuroanatomic system that encodes visual semantics. Across three experiments, we found converging evidence indicating that knowledge of verbally mediated visual concepts relies on information encoded in a region of the ventral-medial temporal lobe centered on parahippocampal cortex. In an fMRI study, this region was strongly engaged by the processing of concepts relying on visual knowledge but not by concepts relying on other sensory modalities. In a study of patients with the semantic variant of primary progressive aphasia (semantic dementia), atrophy that encompassed this region was associated with a specific impairment in verbally mediated visual semantic knowledge. Finally, in a structural study of healthy adults from the fMRI experiment, gray matter density in this region related to individual variability in the processing of visual concepts. The anatomic location of these findings aligns with recent work linking the ventral-medial temporal lobe with high-level visual representation, contextual associations, and reasoning through imagination. Together this work suggests a critical role for parahippocampal cortex in linking the visual environment with knowledge systems in the human brain. PMID:26679216
Visual Teaching Strategies for Children with Autism.
ERIC Educational Resources Information Center
Tissot, Catherine; Evans, Roy
2003-01-01
Describes the types of children with autism that would benefit from visual teaching strategies. Discusses the benefits and disadvantages of some of the more well-known programs that use visual teaching strategies, including movement-based systems relying on sign language, and materials-based systems such as Treatment and Education of Autistic and…
The Effects of Concurrent Verbal and Visual Tasks on Category Learning
ERIC Educational Resources Information Center
Miles, Sarah J.; Minda, John Paul
2011-01-01
Current theories of category learning posit separate verbal and nonverbal learning systems. Past research suggests that the verbal system relies on verbal working memory and executive functioning and learns rule-defined categories; the nonverbal system does not rely on verbal working memory and learns non-rule-defined categories (E. M. Waldron…
Fischer-Baum, Simon; Englebretson, Robert
2016-08-01
Reading relies on the recognition of units larger than single letters and smaller than whole words. Previous research has linked sublexical structures in reading to properties of the visual system, specifically the parallel processing of letters that the visual system enables. But whether the visual system is essential for this to happen, or whether the recognition of sublexical structures may emerge by other means, is an open question. To address this question, we investigate braille, a writing system that relies exclusively on the tactile rather than the visual modality. We provide experimental evidence demonstrating that adult readers of (English) braille are sensitive to sublexical units. Contrary to prior assumptions in the braille research literature, we find strong evidence that braille readers do indeed access sublexical structure, namely the processing of multi-cell contractions as single orthographic units and the recognition of morphemes within morphologically complex words. Therefore, we conclude that the recognition of sublexical structure is not exclusively tied to the visual system. However, our findings also suggest that there are aspects of morphological processing on which braille and print readers differ, and that these differences may, crucially, be related to reading using the tactile rather than the visual sensory modality.
Towards Infusing Giovanni with a Semantic and Provenance Aware Visualization System
NASA Astrophysics Data System (ADS)
Del Rio, N.; Pinheiro da Silva, P.; Leptoukh, G. G.; Lynnes, C.
2011-12-01
Giovanni is a Web-based application developed by GES DISC that provides simple and intuitive ways to visualize, analyze, and access vast amounts of Earth science remote sensed data. Currently, the Giovanni visualization module is aware only of the physical (i.e., hard-coded) links between data and services and consequently cannot be easily adapted to new visualization scenarios. VisKo, a semantically enabled visualization framework, can be leveraged by Giovanni as a semantic bridge between data and visualization. VisKo relates data and visualization services at conceptual (i.e., ontological) levels and relies on reasoning systems to leverage the conceptual relationships to automatically infer physical links, facilitating an adaptable environment for new visualization scenarios. This is particularly useful for Giovanni, which has been constantly retrofitted with new visualization software packages to keep up with advances in visualization capabilities. During our prototype integration of Giovanni with VisKo, a number of future steps were identified that, if implemented, could cement the integration and promote our prototype to operational status. A number of integration issues arose, including the mediation of the different languages used by each system to characterize datasets; VisKo relies on semantic data characterization to "match up" data with visualization processes. It was necessary to identify mappings between Giovanni XML provenance and Proof Markup Language (PML), which is understood by VisKo. Although a translator was implemented based on the identified mappings, a more elegant solution is to develop a domain data ontology specific to Giovanni and to "align" this ontology with PML, enabling VisKo to directly ingest the semantic descriptions of Giovanni data. Additionally, the relationship between dataset components (e.g., variables and attributes) and visualization plot components (e.g., geometries, axes, titles) should also be modeled. In Giovanni, meta-data descriptions are used to configure the different properties of the plots such as titles, color-tables, and variable-to-axis bindings. Giovanni services rely on a set of custom attributes and naming conventions that help identify the relationships between dataset components and plot properties. VisKo visualization services, however, are generic modules that do not rely on any domain-specific conventions for identifying relationships between dataset attributes and plot configuration. Rather, VisKo services rely on parameters to configure specific behaviors of the generic services. The relationship between VisKo parameters and plot properties, however, has yet to be formally documented, partly because VisKo regards plots as holistic entities without any internal structure from which to relate parameters. We understand the need for a visualization plot ontology that defines plot components, their retinal properties, such as position and color, and the relationship of plot properties to the service parameters that control them. The plot ontology would also be linked to our domain data ontology, providing VisKo with a comprehensive understanding of how data attributes can cue the configuration of plots, and how a specific plot configuration relates to service parameters.
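The abstract above hinges on aligning a Giovanni domain ontology with the vocabulary VisKo reasons over. As a hedged illustration only (the namespaces, class names, and dataset URI below are invented placeholders, not the actual Giovanni or VisKo/PML vocabularies), a minimal rdflib sketch of such a type-level alignment might look like this:

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespaces standing in for the Giovanni domain ontology
# and the VisKo data-characterization vocabulary; the real URIs differ.
GIO = Namespace("http://example.org/giovanni#")
VISKO = Namespace("http://example.org/visko#")

g = Graph()
ds = GIO["AIRS_SurfaceTemp_Daily"]  # hypothetical dataset URI
g.add((ds, RDF.type, GIO.GriddedDataset))
g.add((ds, GIO.hasVariable, Literal("surface_air_temperature")))

# Toy ontology alignment: Giovanni-level types are mapped onto the
# characterizations a reasoner could use to match data to visualization services.
ALIGN = {GIO.GriddedDataset: VISKO.Raster2D}
for s, _, o in list(g.triples((None, RDF.type, None))):
    if o in ALIGN:
        g.add((s, RDF.type, ALIGN[o]))

print(g.serialize(format="turtle"))
```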
ERIC Educational Resources Information Center
Taylor, Roger S.; Grundstrom, Erika D.
2011-01-01
Given that astronomy heavily relies on visual representations it is especially likely for individuals to assume that instructional materials, such as visual representations of the Earth-Moon system (EMS), would be relatively accurate. However, in our research, we found that images in middle-school textbooks and educational webpages were commonly…
Mastering algebra retrains the visual system to perceive hierarchical structure in equations.
Marghetis, Tyler; Landy, David; Goldstone, Robert L
2016-01-01
Formal mathematics is a paragon of abstractness. It thus seems natural to assume that the mathematical expert should rely more on symbolic or conceptual processes, and less on perception and action. We argue instead that mathematical proficiency relies on perceptual systems that have been retrained to implement mathematical skills. Specifically, we investigated whether the visual system (in particular, object-based attention) is retrained so that parsing algebraic expressions and evaluating algebraic validity are accomplished by visual processing. Object-based attention occurs when the visual system organizes the world into discrete objects, which then guide the deployment of attention. One classic signature of object-based attention is better perceptual discrimination within, rather than between, visual objects. The current study reports that object-based attention occurs not only for simple shapes but also for symbolic mathematical elements within algebraic expressions, but only among individuals who have mastered the hierarchical syntax of algebra. Moreover, among these individuals, increased object-based attention within algebraic expressions is associated with a better ability to evaluate algebraic validity. These results suggest that, in mastering the rules of algebra, people retrain their visual system to represent and evaluate abstract mathematical structure. We thus argue that algebraic expertise involves the regimentation and reuse of evolutionarily ancient perceptual processes. Our findings implicate the visual system as central to learning and reasoning in mathematics, leading us to favor educational approaches to mathematics and related STEM fields that encourage students to adapt, not abandon, their use of perception.
A Future of Reversals: Dyslexic Talents in a World of Computer Visualization.
ERIC Educational Resources Information Center
West, Thomas G.
1992-01-01
This paper proposes that those traits which handicap visually oriented dyslexics in a verbally oriented educational system may confer advantages in new fields which rely on visual methods of analysis, especially those in computer applications. It is suggested that such traits also characterized Albert Einstein, Michael Faraday, James Maxwell, and…
A 2D virtual reality system for visual goal-driven navigation in zebrafish larvae
Jouary, Adrien; Haudrechy, Mathieu; Candelier, Raphaël; Sumbre, German
2016-01-01
Animals continuously rely on sensory feedback to adjust motor commands. In order to study the role of visual feedback in goal-driven navigation, we developed a 2D visual virtual reality system for zebrafish larvae. The visual feedback can be set to be similar to what the animal experiences in natural conditions. Alternatively, modification of the visual feedback can be used to study how the brain adapts to perturbations. For this purpose, we first generated a library of free-swimming behaviors from which we learned the relationship between the trajectory of the larva and the shape of its tail. Then, we used this technique to infer the intended displacements of head-fixed larvae, and updated the visual environment accordingly. Under these conditions, larvae were capable of aligning and swimming in the direction of a whole-field moving stimulus and produced the fine changes in orientation and position required to capture virtual prey. We demonstrate the sensitivity of larvae to visual feedback by updating the visual world in real-time or only at the end of the discrete swimming episodes. This visual feedback perturbation caused impaired performance of prey-capture behavior, suggesting that larvae rely on continuous visual feedback during swimming. PMID:27659496
Mental Imagery and Visual Working Memory
Keogh, Rebecca; Pearson, Joel
2011-01-01
Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory - but not iconic visual memory - can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage. PMID:22195024
Using Visual Assessments and Tutorials to Teach Solar System Concepts in Introductory Astronomy
ERIC Educational Resources Information Center
LoPresto, Michael C.
2010-01-01
Visual assessments and tutorials are instruments that rely on student construction and/or examination of pictures and/or diagrams rather than multiple choice and/or short answer questions. Being a very visual subject, astronomy lends itself to assessments and tutorials of this type. What follows is a report on the results of the use of visual…
Higher-order neural processing tunes motion neurons to visual ecology in three species of hawkmoths.
Stöckl, A L; O'Carroll, D; Warrant, E J
2017-06-28
To sample information optimally, sensory systems must adapt to the ecological demands of each animal species. These adaptations can occur peripherally, in the anatomical structures of sensory organs and their receptors; and centrally, as higher-order neural processing in the brain. While a rich body of investigations has focused on peripheral adaptations, our understanding is sparse when it comes to central mechanisms. We quantified how peripheral adaptations in the eyes, and central adaptations in the wide-field motion vision system, set the trade-off between resolution and sensitivity in three species of hawkmoths active at very different light levels: nocturnal Deilephila elpenor, crepuscular Manduca sexta, and diurnal Macroglossum stellatarum. Using optical measurements and physiological recordings from the photoreceptors and wide-field motion neurons in the lobula complex, we demonstrate that all three species use spatial and temporal summation to improve visual performance in dim light. The diurnal Macroglossum relies least on summation, but can only see at brighter intensities. Manduca, with large sensitive eyes, relies less on neural summation than the smaller-eyed Deilephila, but both species attain similar visual performance at nocturnal light levels. Our results reveal how the visual systems of these three hawkmoth species are intimately matched to their visual ecologies.
Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis.
Stein, Manuel; Janetzko, Halldór; Lamprecht, Andreas; Breitkreutz, Thorsten; Zimmermann, Philipp; Goldlücke, Bastian; Schreck, Tobias; Andrienko, Gennady; Grossniklaus, Michael; Keim, Daniel A
2018-01-01
Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identifying weaknesses of opposing teams, or assessing the performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization to depict, e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is no longer directly linked to the observed movement context. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.
Visual Control for Multirobot Organized Rendezvous.
Lopez-Nicolas, G; Aranda, M; Mezouar, Y; Sagues, C
2012-08-01
This paper addresses the problem of visual control of a set of mobile robots. In our framework, the perception system consists of an uncalibrated flying camera performing an unknown general motion. The robots are assumed to undergo planar motion considering nonholonomic constraints. The goal of the control task is to drive the multirobot system to a desired rendezvous configuration relying solely on visual information given by the flying camera. The desired multirobot configuration is defined with an image of the set of robots in that configuration without any additional information. We propose a homography-based framework relying on the homography induced by the multirobot system that gives a desired homography to be used to define the reference target, and a new image-based control law that drives the robots to the desired configuration by imposing a rigidity constraint. This paper extends our previous work, and the main contributions are that the motion constraints on the flying camera are removed, the control law is improved by reducing the number of required steps, the stability of the new control law is proved, and real experiments are provided to validate the proposal.
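For context, homography-based control of this kind rests on the standard planar-homography relation from two-view geometry: if the robots lie on a world plane with normal n at distance d in the first camera frame, and the second view differs by rotation R and translation t, then image points on that plane map as

\[ \mathbf{x}' \simeq H\,\mathbf{x}, \qquad H \;\propto\; K'\!\left(R + \frac{\mathbf{t}\,\mathbf{n}^{\top}}{d}\right)K^{-1}, \]

where K and K' are the camera calibration matrices. In the paper's uncalibrated setting, H is estimated directly from point correspondences between the current and reference images rather than from this decomposition.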
Material and shape perception based on two types of intensity gradient information
Nishida, Shin'ya
2018-01-01
Visual estimation of the material and shape of an object from a single image includes a hard ill-posed computational problem. However, in our daily life we feel we can estimate both reasonably well. The neural computation underlying this ability remains poorly understood. Here we propose that the human visual system uses different aspects of object images to separately estimate the contributions of the material and shape. Specifically, material perception relies mainly on the intensity gradient magnitude information, while shape perception relies mainly on the intensity gradient order information. A clue to this hypothesis was provided by the observation that luminance-histogram manipulation, which changes luminance gradient magnitudes but not the luminance-order map, effectively alters the material appearance but not the shape of an object. In agreement with this observation, we found that the simulated physical material changes do not significantly affect the intensity order information. A series of psychophysical experiments further indicate that human surface shape perception is robust against intensity manipulations provided they do not disturb the intensity order information. In addition, we show that the two types of gradient information can be utilized for the discrimination of albedo changes from highlights. These findings suggest that the visual system relies on these diagnostic image features to estimate physical properties in a distal world. PMID:29702644
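The magnitude/order distinction in the abstract above is easy to make concrete. In the numpy sketch below (the function and details are illustrative, not the authors' code), a monotonic tone remapping such as a histogram manipulation changes the gradient-magnitude map but leaves the intensity-order (rank) map untouched:

```python
import numpy as np

def gradient_features(img):
    # Gradient magnitude: the cue the abstract ties to material perception.
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    # Intensity-order (rank) map: invariant to any monotonic tone remapping,
    # which is why shape perception survives histogram manipulation.
    order = img.ravel().argsort().argsort().reshape(img.shape)
    return magnitude, order

img = np.random.rand(64, 64)
mag1, ord1 = gradient_features(img)
mag2, ord2 = gradient_features(img ** 2.2)   # monotonic tone remapping
assert np.array_equal(ord1, ord2)            # order map unchanged
assert not np.allclose(mag1, mag2)           # magnitude map altered
```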
Urban Space Explorer: A Visual Analytics System for Urban Planning.
Karduni, Alireza; Cho, Isaac; Wessel, Ginette; Ribarsky, William; Sauda, Eric; Dou, Wenwen
2017-01-01
Understanding people's behavior is fundamental to many planning professions (including transportation, community development, economic development, and urban design) that rely on data about frequently traveled routes, places, and social and cultural practices. Based on the results of a practitioner survey, the authors designed Urban Space Explorer, a visual analytics system that utilizes mobile social media to enable interactive exploration of public-space-related activity along spatial, temporal, and semantic dimensions.
An Avatar-Based Italian Sign Language Visualization System
NASA Astrophysics Data System (ADS)
Falletto, Andrea; Prinetto, Paolo; Tiotto, Gabriele
In this paper, we present an experimental system that supports translation from Italian into Italian Sign Language (ISL) for the Deaf and its visualization through a virtual character. Our objective is to develop a complete platform useful for any application and reusable on several platforms, including the Web, Digital Television, and offline text translation. The system relies on a database that stores both a corpus of Italian words and words coded in the ISL notation system. An interface for the insertion of data is implemented that allows future extensions and integrations.
High contrast sensitivity for visually guided flight control in bumblebees.
Chakravarthi, Aravin; Kelber, Almut; Baird, Emily; Dacke, Marie
2017-12-01
Many insects rely on vision to find food, to return to their nest and to carefully control their flight between these two locations. The amount of information available to support these tasks is, in part, dictated by the spatial resolution and contrast sensitivity of their visual systems. Here, we investigate the absolute limits of these visual properties for visually guided position and speed control in Bombus terrestris. Our results indicate that the limit of spatial vision in the translational motion detection system of B. terrestris lies at 0.21 cycles deg⁻¹, with a peak contrast sensitivity of at least 33. In light of earlier findings, these results indicate that bumblebees have higher contrast sensitivity in the motion detection system underlying position control than in their object discrimination system. This suggests that bumblebees, and most likely also other insects, have different visual thresholds depending on the behavioral context.
NASA Astrophysics Data System (ADS)
Hellman, Brandon; Bosset, Erica; Ender, Luke; Jafari, Naveed; McCann, Phillip; Nguyen, Chris; Summitt, Chris; Wang, Sunglin; Takashima, Yuzuru
2017-11-01
The ray formalism is critical to understanding light propagation, yet current pedagogy relies on inadequate 2D representations. We present a system in which real light rays are visualized through an optical system by using a collimated laser bundle of light and a fog chamber. Implementation for remote and immersive access is enabled by leveraging a commercially available 3D viewer and gesture-based remote control of the tool via bi-directional communication over the Internet.
Challenges in Visual Analysis of Ensembles
Crossno, Patricia
2018-04-12
Modeling physical phenomena through computational simulation increasingly relies on generating a collection of related runs, known as an ensemble. In this paper, we explore the challenges we face in developing analysis and visualization systems for large and complex ensemble data sets, which we seek to understand without having to view the results of every simulation run. Implementing approaches and ideas developed in response to this goal, we demonstrate the analysis of a 15K run material fracturing study using Slycat, our ensemble analysis system.
Knowledge Management for Command and Control
2004-06-01
…interfaces relies on rich visual and conceptual understanding of what is sketched, rather than the pattern-recognition technologies that most systems use… (recognizers) required by other approaches.
• The underlying conceptual representations that nuSketch uses enable it to serve as a front end to knowledge… constructing enemy-intent hypotheses via mixed visual and conceptual analogies.
II.C. Multi-ViewPoint Clustering Analysis (MVP-CA) technology To…
ERIC Educational Resources Information Center
Wilkinson, Krista M.; O'Neill, Tara; McIlvane, William J.
2014-01-01
Purpose: Many individuals with communication impairments use aided augmentative and alternative communication (AAC) systems involving letters, words, or line drawings that rely on the visual modality. It seems reasonable to suggest that display design should incorporate information about how users attend to and process visual information. The…
Integrating visual learning within a model-based ATR system
NASA Astrophysics Data System (ADS)
Carlotto, Mark; Nebrich, Mark
2017-05-01
Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.
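The abstract does not detail its bootstrap learning method; a generic self-training loop of the kind it gestures at, where confidently pseudo-labeled image chips are folded back into training and the class probability doubles as the confidence measure, might look like the sketch below. All names, the SVC choice, and the threshold are assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.svm import SVC

def self_train(clf, X_lab, y_lab, X_unlab, thresh=0.9, rounds=3):
    """Self-training sketch: add confidently pseudo-labeled target/clutter
    chips to the training set; max class probability serves as confidence."""
    for _ in range(rounds):
        clf.fit(X_lab, y_lab)
        proba = clf.predict_proba(X_unlab)
        conf = proba.max(axis=1)
        keep = conf >= thresh
        if not keep.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[keep]])
        y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])
        X_unlab = X_unlab[~keep]
    return clf

clf = self_train(SVC(probability=True),
                 np.random.rand(40, 64), np.random.randint(0, 2, 40),
                 np.random.rand(100, 64))
```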
[Visual perception and its disorders].
Ruf-Bächtiger, L
1989-11-21
It is the brain and not the eye that decides what is perceived. In spite of this fact, quite a lot is known about the functioning of the eye and the first sections of the optic tract, but little about the actual process of perception. Examination of visual perception and its malfunctions therefore relies on certain hypotheses. Proceeding from the model of functional brain systems, different functional domains of visual perception can be distinguished. Among the more important of these domains are: digit span, visual discrimination and figure-ground discrimination. Evaluation of these functional domains allows us to better understand those children with disorders of visual perception and to develop more effective treatment methods.
Functional and structural comparison of visual lateralization in birds – similar but still different
Ströckens, Felix
2014-01-01
Vertebrate brains display physiological and anatomical left-right differences, which are related to hemispheric dominances for specific functions. Functional lateralizations likely rely on structural left-right differences in intra- and interhemispheric connectivity patterns that develop in tight gene-environment interactions. The visual systems of chickens and pigeons show that asymmetrical light stimulation during ontogeny induces a dominance of the left hemisphere for visuomotor control that is paralleled by projection asymmetries within the ascending visual pathways. But structural asymmetries vary considerably between the two species concerning the affected pathway (thalamo- vs. tectofugal system), constancy of effects (transient vs. permanent), and the hemisphere receiving stronger bilateral input (right vs. left). These discrepancies suggest that at least two aspects of visual processes are influenced by asymmetric light stimulation: (1) visuomotor dominance develops within the ontogenetically more strongly stimulated hemisphere but not necessarily in the one receiving stronger bottom-up input. As a secondary consequence of asymmetrical light experience, lateralized top-down mechanisms play a critical role in the emergence of hemispheric dominance. (2) Ontogenetic light experiences may affect the dominant use of left- and right-hemispheric strategies. Evidence from social and spatial cognition tasks indicates that chickens rely more on a right-hemispheric global strategy whereas pigeons display a dominance of the left hemisphere. Thus, behavioral asymmetries are linked to a stronger bilateral input to the right hemisphere in chickens but to the left one in pigeons. The degree of bilateral visual input may determine the dominant visual processing strategy when redundant encoding is possible. This analysis supports the view that environmental stimulation affects the balance between hemisphere-specific processing by lateralized interactions of bottom-up and top-down systems. PMID:24723898
Matsumiya, Kazumichi
2013-10-01
Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.
Cohn, Neil
2014-01-01
How do people make sense of the sequential images in visual narratives like comics? A growing literature of recent research has suggested that this comprehension involves the interaction of multiple systems: The creation of meaning across sequential images relies on a “narrative grammar” that packages conceptual information into categorical roles organized in hierarchic constituents. These images are encapsulated into panels arranged in the layout of a physical page. Finally, how panels frame information can impact both the narrative structure and page layout. Altogether, these systems operate in parallel to construct the Gestalt whole of comprehension of this visual language found in comics. PMID:25071651
Porting the AVS/Express scientific visualization software to Cray XT4.
Leaver, George W; Turner, Martin J; Perrin, James S; Mummery, Paul M; Withers, Philip J
2011-08-28
Remote scientific visualization, where rendering services are provided by larger scale systems than are available on the desktop, is becoming increasingly important as dataset sizes increase beyond the capabilities of desktop workstations. Uptake of such services relies on access to suitable visualization applications and the ability to view the resulting visualization in a convenient form. We consider five rules from the e-Science community to meet these goals with the porting of a commercial visualization package to a large-scale system. The application uses message-passing interface (MPI) to distribute data among data processing and rendering processes. The use of MPI in such an interactive application is not compatible with restrictions imposed by the Cray system being considered. We present details, and performance analysis, of a new MPI proxy method that allows the application to run within the Cray environment yet still support MPI communication required by the application. Example use cases from materials science are considered.
Transformation of an uncertain video search pipeline to a sketch-based visual analytics loop.
Legg, Philip A; Chung, David H S; Parry, Matthew L; Bown, Rhodri; Jones, Mark W; Griffiths, Iwan W; Chen, Min
2013-12-01
Traditional sketch-based image or video search systems rely on machine learning concepts as their core technology. However, in many applications, machine learning alone is impractical since videos may not be sufficiently semantically annotated, there may be a lack of suitable training data, and the search requirements of the user may frequently change for different tasks. In this work, we develop a visual analytics system that overcomes the shortcomings of the traditional approach. We make use of a sketch-based interface to enable users to specify search requirements in a flexible manner without depending on semantic annotation. We employ active machine learning to train different analytical models for different types of search requirements. We use visualization to facilitate knowledge discovery at the different stages of visual analytics. This includes visualizing the parameter space of the trained model, visualizing the search space to support interactive browsing, visualizing candidate search results to support rapid interaction for active learning while minimizing the need to watch videos, and visualizing aggregated information about the search results. We demonstrate the system for searching spatiotemporal attributes in sports video to identify key instances of team and player performance.
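The active-learning element above can be sketched generically. Below is a hedged uncertainty-sampling round in scikit-learn (the model choice, feature vectors, and function are illustrative stand-ins, not the paper's components): the analyst labels only the candidate clips the current model is least sure about, which is what keeps video watching to a minimum.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling_round(X, y, labeled, batch=5):
    """Fit on the labeled examples, then return the unlabeled clips whose
    predicted probability sits closest to 0.5 for the analyst to label next."""
    model = LogisticRegression().fit(X[labeled], y[labeled])
    proba = model.predict_proba(X)[:, 1]
    pool = [i for i in range(len(X)) if i not in set(labeled)]
    pool.sort(key=lambda i: abs(proba[i] - 0.5))  # most uncertain first
    return model, pool[:batch]

rng = np.random.default_rng(0)
X = rng.random((200, 32))                    # stand-in clip features
y = (X[:, 0] > 0.5).astype(int)              # stand-in relevance labels
model, to_label = uncertainty_sampling_round(X, y, labeled=list(range(10)))
```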
Towards Guided Underwater Survey Using Light Visual Odometry
NASA Astrophysics Data System (ADS)
Nawaf, M. M.; Drap, P.; Royer, J. P.; Merad, D.; Saccone, M.
2017-02-01
A lightweight distributed visual odometry method adapted to an embedded hardware platform is proposed. The aim is to guide underwater surveys in real time. We rely on an image stream captured using a portable stereo rig attached to the embedded system. The images are analyzed on the fly to assess image quality in terms of sharpness and lightness, so that immediate actions can be taken accordingly. Images are then transferred over the network to another processing unit to compute the odometry. Building on a standard ego-motion estimation approach, we speed up point matching between image quadruplets using a low-level matching scheme that relies on the fast Harris operator and on template matching that is invariant to illumination changes. We benefit from having the light source attached to the hardware platform to estimate an a priori rough depth belief following the law of light divergence over distance. The rough depth is used to limit the point-correspondence search zone, as the zone's extent depends linearly on disparity. A stochastic relative bundle adjustment is applied to minimize re-projection errors. The evaluation of the proposed method demonstrates the gain in computation time w.r.t. other approaches that use more sophisticated feature descriptors. The built system opens promising areas for further development and integration of embedded computer vision techniques.
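A minimal version of the low-level matching step described above (Harris-style corners plus illumination-robust template matching within a disparity bound) could look like the OpenCV sketch below. The window sizes, thresholds, and the function itself are assumptions for illustration, not the authors' code:

```python
import cv2
import numpy as np

def match_stereo_points(left, right, max_disp, half=5, min_score=0.8):
    """Match corners between grayscale stereo images within a disparity bound."""
    matches = []
    # Fast Harris-style corner detection in the left image.
    pts = cv2.goodFeaturesToTrack(left, maxCorners=200, qualityLevel=0.01,
                                  minDistance=8, useHarrisDetector=True)
    if pts is None:
        return matches
    for x, y in pts.reshape(-1, 2).astype(int):
        tpl = left[y - half:y + half + 1, x - half:x + half + 1]
        if tpl.shape != (2 * half + 1, 2 * half + 1):
            continue
        # The rough light-based depth prior bounds the expected disparity,
        # so correspondences are searched over a short scanline strip only.
        x0 = max(x - max_disp - half, 0)
        strip = right[y - half:y + half + 1, x0:x + half + 1]
        if strip.shape[1] <= 2 * half + 1:
            continue
        # Normalized cross-correlation is robust to illumination changes.
        res = cv2.matchTemplate(strip, tpl, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score >= min_score:
            matches.append(((x, y), (x0 + loc[0] + half, y)))
    return matches
```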
Detecting Visually Observable Disease Symptoms from Faces.
Wang, Kuan; Luo, Jiebo
2016-12-01
Recent years have witnessed an increasing interest in the application of machine learning to clinical informatics and healthcare systems. A significant amount of research has been done on healthcare systems based on supervised learning. In this study, we present a generalized solution for detecting visually observable symptoms on faces using semi-supervised anomaly detection combined with machine vision algorithms. We rely on disease-related statistical facts to detect abnormalities and classify them into multiple categories that narrow down the possible medical causes. Our method contrasts with most existing approaches, which are limited by the availability of the labeled training data required for supervised learning, and therefore offers the major advantage of flagging any unusual and visually observable symptoms.
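A toy version of the semi-supervised pipeline described above: train an anomaly detector only on descriptors of unaffected faces, then flag outliers at test time. Feature extraction is elided here; the random vectors and all parameters are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(500, 16))    # stand-in facial descriptors
detector = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

faces = np.vstack([rng.normal(0.0, 1.0, size=(5, 16)),   # typical faces
                   rng.normal(4.0, 1.0, size=(2, 16))])  # visibly atypical
print(detector.predict(faces))  # -1 flags a face for follow-up classification
```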
The contributions of vision and haptics to reaching and grasping
Stone, Kayla D.; Gonzalez, Claudia L. R.
2015-01-01
This review aims to provide a comprehensive outlook on the sensory (visual and haptic) contributions to reaching and grasping. The focus is on studies in developing children, normal, and neuropsychological populations, and in sensory-deprived individuals. Studies have suggested a right-hand/left-hemisphere specialization for visually guided grasping and a left-hand/right-hemisphere specialization for haptically guided object recognition. This poses the interesting possibility that when vision is not available and grasping relies heavily on the haptic system, there is an advantage to use the left hand. We review the evidence for this possibility and dissect the unique contributions of the visual and haptic systems to grasping. We ultimately discuss how the integration of these two sensory modalities shape hand preference. PMID:26441777
Building a robust vehicle detection and classification module
NASA Astrophysics Data System (ADS)
Grigoryev, Anton; Khanipov, Timur; Koptelov, Ivan; Bocharov, Dmitry; Postnikov, Vassily; Nikolaev, Dmitry
2015-12-01
The growing adoption of intelligent transportation systems (ITS) and autonomous driving requires robust real-time solutions for various event and object detection problems. Most real-world systems still cannot rely on computer vision algorithms and instead employ a range of costly additional hardware such as LIDAR. In this paper we explore the engineering challenges encountered in building a highly robust visual vehicle detection and classification module that works under a broad range of environmental and road conditions. The resulting technology is competitive with traditional non-visual means of traffic monitoring. The main focus of the paper is on software and hardware architecture, algorithm selection and domain-specific heuristics that help the computer vision system avoid implausible answers.
Memory as Perception of the Past: Compressed Time in Mind and Brain.
Howard, Marc W
2018-02-01
In the visual system retinal space is compressed such that acuity decreases further from the fovea. Different forms of memory may rely on a compressed representation of time, manifested as decreased accuracy for events that happened further in the past. Neurophysiologically, "time cells" show receptive fields in time. Analogous to the compression of visual space, time cells show less acuity for events further in the past. Behavioral evidence suggests memory can be accessed by scanning a compressed temporal representation, analogous to visual search. This suggests a common computational language for visual attention and memory retrieval. In this view, time functions like a scaffolding that organizes memories in much the same way that retinal space functions like a scaffolding for visual perception.
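One standard way to formalize the compression described above is a bank of temporal receptive fields whose centers are spaced geometrically rather than uniformly:

\[ \tau_n \;=\; \tau_0\,(1+c)^{\,n}, \qquad n = 0, 1, 2, \ldots \]

With receptive-field widths scaling in proportion to \(\tau_n\), a fixed number of cells tiles each octave of past time, so temporal acuity falls off with elapsed time at a constant Weber fraction, mirroring the loss of visual acuity with eccentricity.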
Visual attention capacity: a review of TVA-based patient studies.
Habekost, Thomas; Starrfelt, Randi
2009-02-01
Psychophysical studies have identified two distinct limitations of visual attention capacity: processing speed and apprehension span. Using a simple test, these cognitive factors can be analyzed by Bundesen's Theory of Visual Attention (TVA). The method has strong specificity and sensitivity, and measurements are highly reliable. As the method is theoretically founded, it also has high validity. TVA-based assessment has recently been used to investigate a broad range of neuropsychological and neurological conditions. We present the method, including the experimental paradigm and practical guidelines to patient testing, and review existing TVA-based patient studies organized by lesion anatomy. Lesions in three anatomical regions affect visual capacity: The parietal lobes, frontal cortex and basal ganglia, and extrastriate cortex. Visual capacity thus depends on large, bilaterally distributed anatomical networks that include several regions outside the visual system. The two visual capacity parameters are functionally separable, but seem to rely on largely overlapping brain areas.
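For reference, the two capacity parameters come from Bundesen's rate equation. The rate of the perceptual categorization "object x belongs to category i" is

\[ v(x,i) \;=\; \eta(x,i)\,\beta_i\,\frac{w_x}{\sum_{z\in S} w_z}, \qquad w_x \;=\; \sum_{j\in R} \eta(x,j)\,\pi_j, \]

where \(\eta(x,i)\) is the sensory evidence, \(\beta_i\) a decision bias, and \(\pi_j\) the pertinence of category j. Processing speed C is the sum of the v values across objects and categories, and the apprehension span K caps how many categorized objects can enter visual short-term memory.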
GLO-STIX: Graph-Level Operations for Specifying Techniques and Interactive eXploration
Stolper, Charles D.; Kahng, Minsuk; Lin, Zhiyuan; Foerster, Florian; Goel, Aakash; Stasko, John; Chau, Duen Horng
2015-01-01
The field of graph visualization has produced a wealth of visualization techniques for accomplishing a variety of analysis tasks. Therefore, analysts often rely on a suite of different techniques, and visual graph analysis application builders strive to provide this breadth of techniques. To provide a holistic model for specifying network visualization techniques (as opposed to considering each technique in isolation), we present the Graph-Level Operations (GLO) model. We describe a method for identifying GLOs and apply it to identify five classes of GLOs, which can be flexibly combined to re-create six canonical graph visualization techniques. We discuss advantages of the GLO model, including potentially discovering new, effective network visualization techniques and easing the engineering challenges of building multi-technique graph visualization applications. Finally, we implement the GLOs that we identified into the GLO-STIX prototype system that enables an analyst to interactively explore a graph by applying GLOs. PMID:26005315
Giesbrecht, Barry; Sy, Jocelyn L.; Guerin, Scott A.
2012-01-01
Environmental context learned without awareness can facilitate visual processing of goal-relevant information. According to one view, the benefit of implicitly learned context relies on the neural systems involved in spatial attention and hippocampus-mediated memory. While this view has received empirical support, it contradicts traditional models of hippocampal function. The purpose of the present work was to clarify the influence of spatial context on visual search performance and on brain structures involved in memory and attention. Event-related functional magnetic resonance imaging revealed that activity in the hippocampus as well as in visual and parietal cortex was modulated by learned visual context even though participants’ subjective reports and performance on a post-experiment recognition task indicated no explicit knowledge of the learned context. Moreover, the magnitude of the initial selective hippocampus response predicted the magnitude of the behavioral benefit due to context observed at the end of the experiment. The results suggest that implicit contextual learning is mediated by attention and memory and that these systems interact to support search of our environment. PMID:23099047
Dynamic modulation of visual and electrosensory gains for locomotor control
Sutton, Erin E.; Demir, Alican; Stamper, Sarah A.; Fortune, Eric S.; Cowan, Noah J.
2016-01-01
Animal nervous systems resolve sensory conflict for the control of movement. For example, the glass knifefish, Eigenmannia virescens, relies on visual and electrosensory feedback as it swims to maintain position within a moving refuge. To study how signals from these two parallel sensory streams are used in refuge tracking, we constructed a novel augmented reality apparatus that enables the independent manipulation of visual and electrosensory cues to freely swimming fish (n = 5). We evaluated the linearity of multisensory integration, the change to the relative perceptual weights given to vision and electrosense in relation to sensory salience, and the effect of the magnitude of sensory conflict on sensorimotor gain. First, we found that tracking behaviour obeys superposition of the sensory inputs, suggesting linear sensorimotor integration. In addition, fish rely more on vision when electrosensory salience is reduced, suggesting that fish dynamically alter sensorimotor gains in a manner consistent with Bayesian integration. However, the magnitude of sensory conflict did not significantly affect sensorimotor gain. These studies lay the theoretical and experimental groundwork for future work investigating multisensory control of locomotion. PMID:27170650
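The Bayesian reweighting invoked above has a standard form: for conflicting visual and electrosensory position estimates, the statistically optimal combination weights each cue by its reliability (inverse variance),

\[ \hat{s} \;=\; w_v\,\hat{s}_v + w_e\,\hat{s}_e, \qquad w_v \;=\; \frac{1/\sigma_v^{2}}{1/\sigma_v^{2} + 1/\sigma_e^{2}}, \quad w_e = 1 - w_v, \]

so degrading electrosensory salience (raising \(\sigma_e\)) shifts weight toward vision, which is the direction of the gain change the study observed.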
Heinen, Klaartje; Jolij, Jacob; Lamme, Victor A F
2005-09-08
Discriminating objects from their surroundings by the visual system is known as figure-ground segregation. This process entails two different subprocesses: boundary detection and subsequent surface segregation or 'filling in'. In this study, we used transcranial magnetic stimulation to test the hypothesis that temporally distinct processes in V1 and related early visual areas such as V2 or V3 are causally related to the process of figure-ground segregation. Our results indicate that correct discrimination between two visual stimuli, which relies on figure-ground segregation, requires two separate periods of information processing in the early visual cortex: one around 130-160 ms and the other around 250-280 ms.
1992-01-01
Astronaut Ulf Merbold, on the stationary seat of the mini-sled, stares into an umbrella-shaped rotating dome with colored dots. Astronaut Merbold, assisted by astronaut David Hilmers, is conducting the Visual Stimulator Experiment, a space physiology experiment. The Visual Stimulator Experiment measures the relative importance of visual and vestibular information in determining body orientation. When a person looks at a rotating visual field, a false sensation of self-rotation, called circularvection, results. In weightlessness, circularvection should increase immediately and may continue to increase as the nervous system comes to rely more on visual than vestibular cues. As Astronaut Merbold stares into the rotating dome with a pattern of colored dots on its interior, he turns a knob to indicate his perception of body rotation. The strength of circularvection is calculated by comparing signals from the dome and the knob. The greater the false sense of circularvection, the more the subject is relying on visual information instead of otolith information. The IML-1 mission was the first in a series of Shuttle flights dedicated to fundamental materials and life sciences research with the international partners. The participating space agencies included: NASA, the 14-nation European Space Agency (ESA), the Canadian Space Agency (CSA), the French National Center of Space Studies (CNES), the German Space Agency and the German Aerospace Research Establishment (DARA/DLR), and the National Space Development Agency of Japan (NASDA). Managed by the Marshall Space Flight Center, IML-1 was launched on January 22, 1992 aboard the Space Shuttle Orbiter Discovery (STS-42 mission).
Measuring and Visualizing Students' Behavioral Engagement in Writing Activities
ERIC Educational Resources Information Center
Liu, Ming; Calvo, Rafael A.; Pardo, Abelardo; Martin, Andrew
2015-01-01
Engagement is critical to the success of learning activities such as writing, and can be promoted with appropriate feedback. Current engagement measures rely mostly on data collected by observers or self-reported by the participants. In this paper, we describe a learning analytic system called Tracer, which derives behavioral engagement measures…
Educational Technology in Distance Learning (for the Deaf).
ERIC Educational Resources Information Center
Hales, Gerald
This discussion of the use of distance education for deaf students argues that distance education methodologies appear to be relatively attractive to the hearing impaired student because they rely to a substantial extent upon the written word and visual transmission of information. Several projects that use computer or interactive systems to teach…
What Google Maps can do for biomedical data dissemination: examples and a design study.
Jianu, Radu; Laidlaw, David H
2013-05-04
Biologists often need to assess whether unfamiliar datasets warrant the time investment required for more detailed exploration. Basing such assessments on brief descriptions provided by data publishers is unwieldy for large datasets that contain insights dependent on specific scientific questions. Alternatively, using complex software systems for a preliminary analysis may be deemed too time-consuming in itself, especially for unfamiliar data types and formats. This may lead to wasted analysis time and discarding of potentially useful data. We present an exploration of design opportunities that the Google Maps interface offers to biomedical data visualization. In particular, we focus on synergies between visualization techniques and Google Maps that facilitate the development of biological visualizations which have both low overhead and sufficient expressivity to support the exploration of data at multiple scales. The methods we explore rely on displaying pre-rendered visualizations of biological data in browsers, with sparse yet powerful interactions, by using the Google Maps API. We structure our discussion around five visualizations: a gene co-regulation visualization, a heatmap viewer, a genome browser, a protein interaction network, and a planar visualization of white matter in the brain. Feedback from collaborative work with domain experts suggests that our Google Maps visualizations offer multiple, scale-dependent perspectives and can be particularly helpful for unfamiliar datasets due to their accessibility. We also find that users, particularly those less experienced with computer use, are attracted by the familiarity of the Google Maps API. Our five implementations introduce design elements that can benefit visualization developers. We describe a low-overhead approach that lets biologists access readily analyzed views of unfamiliar scientific datasets. We rely on pre-computed visualizations prepared by data experts, accompanied by sparse and intuitive interactions, and distributed via the familiar Google Maps framework. Our contributions are an evaluation demonstrating the validity and opportunities of this approach, a set of design guidelines benefiting those wanting to create such visualizations, and five concrete example visualizations.
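Serving a pre-rendered visualization through the Google Maps interface typically amounts to slicing one large rendered image into the zoom/x/y tile pyramid that the Maps API loads as a custom map type. A hedged Pillow sketch of that preprocessing step (file names and parameters are illustrative, not from the paper):

```python
import os
from PIL import Image

def make_tile_pyramid(src_png, out_dir, tile=256, max_zoom=3):
    """Slice a pre-rendered visualization into zoom/x/y tiles."""
    img = Image.open(src_png)
    for z in range(max_zoom + 1):
        n = 2 ** z                            # tiles per side at this zoom
        level = img.resize((tile * n, tile * n))
        for x in range(n):
            for y in range(n):
                os.makedirs(f"{out_dir}/{z}/{x}", exist_ok=True)
                box = (x * tile, y * tile, (x + 1) * tile, (y + 1) * tile)
                level.crop(box).save(f"{out_dir}/{z}/{x}/{y}.png")

make_tile_pyramid("heatmap_render.png", "tiles")  # hypothetical input image
```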
Differential temporal dynamics during visual imagery and perception.
Dijkstra, Nadine; Mostert, Pim; Lange, Floris P de; Bosch, Sander; van Gerven, Marcel Aj
2018-05-29
Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery.
Normalization as a canonical neural computation
Carandini, Matteo; Heeger, David J.
2012-01-01
There is increasing evidence that the brain relies on a set of canonical neural computations, repeating them across brain regions and modalities to apply similar operations to different problems. A promising candidate for such a computation is normalization, in which the responses of neurons are divided by a common factor that typically includes the summed activity of a pool of neurons. Normalization was developed to explain responses in the primary visual cortex and is now thought to operate throughout the visual system, and in many other sensory modalities and brain regions. Normalization may underlie operations such as the representation of odours, the modulatory effects of visual attention, the encoding of value and the integration of multisensory information. Its presence in such a diversity of neural systems in multiple species, from invertebrates to mammals, suggests that it serves as a canonical neural computation. PMID:22108672
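The normalization computation described here divides each neuron's driving input by a factor that includes the summed activity of a pool of neurons. The sketch below is a toy numerical instance of that canonical equation, with illustrative parameter values (sigma, exponent n, gain gamma) that are assumptions, not fits from the review.

```python
import numpy as np

# Toy instance of the canonical normalization equation: each response
# equals gamma * d_i**n / (sigma**n + sum_j d_j**n). Parameter values
# here are illustrative only.

def divisive_normalization(drive, sigma=1.0, n=2.0, gamma=1.0):
    """Divide each neuron's exponentiated drive by the pooled activity."""
    drive = np.asarray(drive, dtype=float)
    return gamma * drive ** n / (sigma ** n + np.sum(drive ** n))

# Doubling every input barely changes the *relative* response pattern
# once the pool term dominates sigma; this is the kind of invariance
# normalization was introduced to explain in primary visual cortex.
weak = divisive_normalization([1.0, 2.0, 4.0])
strong = divisive_normalization([2.0, 4.0, 8.0])
print(weak / weak.sum())      # [0.048 0.190 0.762]
print(strong / strong.sum())  # nearly identical relative pattern
```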
Nocturnal insects use optic flow for flight control
Baird, Emily; Kreiss, Eva; Wcislo, William; Warrant, Eric; Dacke, Marie
2011-01-01
To avoid collisions when navigating through cluttered environments, flying insects must control their flight so that their sensory systems have time to detect obstacles and avoid them. To do this, day-active insects rely primarily on the pattern of apparent motion generated on the retina during flight (optic flow). However, many flying insects are active at night, when obtaining reliable visual information for flight control presents much more of a challenge. To assess whether nocturnal flying insects also rely on optic flow cues to control flight in dim light, we recorded flights of the nocturnal neotropical sweat bee, Megalopta genalis, flying along an experimental tunnel when: (i) the visual texture on each wall generated strong horizontal (front-to-back) optic flow cues, (ii) the texture on only one wall generated these cues, and (iii) horizontal optic flow cues were removed from both walls. We find that Megalopta increase their groundspeed when horizontal motion cues in the tunnel are reduced (conditions (ii) and (iii)). However, differences in the amount of horizontal optic flow on each wall of the tunnel (condition (ii)) do not affect the centred position of the bee within the flight tunnel. To better understand the behavioural response of Megalopta, we repeated the experiments on day-active bumble-bees (Bombus terrestris). Overall, our findings demonstrate that despite the limitations imposed by dim light, Megalopta—like their day-active relatives—rely heavily on vision to control flight, but that they use visual cues in a different manner from diurnal insects. PMID:21307047
Satellite Imagery Assisted Road-Based Visual Navigation System
NASA Astrophysics Data System (ADS)
Volkova, A.; Gibbens, P. W.
2016-06-01
There is a growing demand for unmanned aerial systems as autonomous surveillance, exploration and remote sensing solutions. Among the key concerns for robust operation of these systems is the need to reliably navigate the environment without reliance on a global navigation satellite system (GNSS). This is of particular concern in Defence circles, but is also a major safety issue for commercial operations. In these circumstances, the aircraft needs to navigate relying only on information from on-board passive sensors such as digital cameras. The autonomous feature-based visual system presented in this work offers a novel, integral approach to the modelling and registration of visual features that responds to the specific needs of the navigation system. It detects visual features in Google Earth* imagery to build a feature database. The same algorithm then detects features in an on-board camera's video stream. On one level, this serves to localise the vehicle relative to the environment using Simultaneous Localisation and Mapping (SLAM). On a second level, it correlates these features with the database to localise the vehicle with respect to the inertial frame. The performance of the presented visual navigation system was compared using satellite imagery from different years. Based on the comparison results, an analysis of the effects of seasonal, structural and qualitative changes of the imagery source on the performance of the navigation algorithm is presented. * The algorithm is independent of the source of satellite imagery, and another provider can be used.
Carbon, nitrogen, and phosphorus stoichiometry and eutrophication in River Thames Tributaries, UK
USDA-ARS?s Scientific Manuscript database
Primary productivity in aquatic systems relies on the availability of carbon (C), nitrogen (N) and phosphorus (P), with a preferred stoichiometric ratio of 106 C/16 N/1 P, known as the Redfield ratio. The intent of this paper is to present a methodology to visualize C/N/P stoichiometry and examine ...
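For readers unfamiliar with the Redfield ratio cited above, the sketch below converts hypothetical mass concentrations to a molar C:N:P ratio normalized to phosphorus. The sample values are invented, and the paper's actual visualization methodology is not reproduced here (the record is truncated).

```python
# Sketch: expressing measured concentrations as a molar C:N:P ratio
# scaled to phosphorus, for comparison with the Redfield ratio of
# 106 C : 16 N : 1 P cited above. Sample values are invented.

ATOMIC_MASS = {"C": 12.011, "N": 14.007, "P": 30.974}  # g/mol

def cnp_ratio(mg_per_l: dict) -> tuple[float, float, float]:
    """Convert mg/L concentrations to a molar ratio normalized to P = 1."""
    moles = {el: mg_per_l[el] / ATOMIC_MASS[el] for el in ("C", "N", "P")}
    return moles["C"] / moles["P"], moles["N"] / moles["P"], 1.0

sample = {"C": 12.0, "N": 2.1, "P": 0.31}  # hypothetical river water, mg/L
c, n, p = cnp_ratio(sample)
print(f"C:N:P = {c:.0f} : {n:.0f} : {p:.0f}  (Redfield: 106 : 16 : 1)")
```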
A novel brain-computer interface based on the rapid serial visual presentation paradigm.
Acqualagna, Laura; Treder, Matthias Sebastian; Schreuder, Martijn; Blankertz, Benjamin
2010-01-01
Most present-day visual brain-computer interfaces (BCIs) suffer from the fact that they rely on eye movements, are slow-paced, or feature a small vocabulary. As a potential remedy, we explored a novel BCI paradigm consisting of a central rapid serial visual presentation (RSVP) of the stimuli. It has a large vocabulary and realizes a BCI system based on covert non-spatial selective visual attention. In an offline study, eight participants were presented with sequences of rapid bursts of symbols. Two different speeds and two different color conditions were investigated. Robust early visual and P300 components were elicited time-locked to the presentation of the target. Offline classification revealed a mean accuracy of up to 90% for selecting the correct symbol out of 30 possibilities. The results suggest that RSVP-BCI is a promising new paradigm, also for patients with oculomotor impairments.
Visual just noticeable differences
NASA Astrophysics Data System (ADS)
Nankivil, Derek; Chen, Minghan; Wooley, C. Benjamin
2018-02-01
A visual just noticeable difference (VJND) is the amount of change in either an image (e.g. a photographic print) or in vision (e.g. due to a change in refractive power of a vision correction device or visually coupled optical system) that is just noticeable when compared with the prior state. Numerous theoretical and clinical studies have been performed to determine the amount of change in various visual inputs (power, spherical aberration, astigmatism, etc.) that result in a just noticeable visual change. Each of these approaches, in defining a VJND, relies on the comparison of two visual stimuli. The first stimulus is the nominal or baseline state and the second is the perturbed state that results in a VJND. Using this commonality, we converted each result to the change in the area of the modulation transfer function (AMTF) to provide a more fundamental understanding of what results in a VJND. We performed an analysis of the wavefront criteria from basic optics, the image quality metrics, and clinical studies testing various visual inputs, showing that fractional changes in AMTF resulting in one VJND range from 0.025 to 0.075. In addition, cycloplegia appears to desensitize the human visual system so that a much larger change in the retinal image is required to give a VJND. This finding may be of great import for clinical vision tests. Finally, we present applications of the VJND model for the determination of threshold ocular aberrations and manufacturing tolerances of visually coupled optical systems.
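The VJND criterion above is defined on the fractional change in the area under the modulation transfer function (AMTF). A minimal sketch of that computation for two synthetic MTF curves follows; the exponential MTF shapes are stand-in assumptions, while the roughly 0.025 to 0.075 one-VJND range comes from the abstract.

```python
import numpy as np

# Sketch: fractional change in the area under the modulation transfer
# function (AMTF) between a baseline and a perturbed visual state. The
# exponential MTF shapes are synthetic stand-ins; only the reported
# ~0.025-0.075 one-VJND range comes from the abstract.

def area_under(curve: np.ndarray, freqs: np.ndarray) -> float:
    """Trapezoidal area under a sampled curve."""
    return float(np.sum(0.5 * (curve[1:] + curve[:-1]) * np.diff(freqs)))

def fractional_amtf_change(mtf_base, mtf_perturbed, freqs) -> float:
    a0 = area_under(mtf_base, freqs)
    return abs(area_under(mtf_perturbed, freqs) - a0) / a0

freqs = np.linspace(0.0, 60.0, 200)   # spatial frequency, cycles/degree
mtf_base = np.exp(-freqs / 20.0)      # illustrative baseline state
mtf_blur = np.exp(-freqs / 19.0)      # slightly degraded state

change = fractional_amtf_change(mtf_base, mtf_blur, freqs)
print(f"fractional AMTF change: {change:.3f}")  # ~0.04, within the range
```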
Visual adaptation dominates bimodal visual-motor action adaptation
de la Rosa, Stephan; Ferstl, Ylva; Bülthoff, Heinrich H.
2016-01-01
A long-standing debate revolves around the question of whether visual action recognition primarily relies on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically – akin to simultaneous execution and observation of actions in social interactions – adaptation effects were only modulated by visual but not motor adaptation. Action recognition, therefore, relies primarily on vision-based action recognition mechanisms in situations that require simultaneous action observation and execution, such as social interactions. The results suggest caution when associating social behaviour in social interactions with motor-based information. PMID:27029781
Jung, Wonmo; Bülthoff, Isabelle; Armann, Regine G M
2017-11-01
The brain can only attend to a fraction of all the information that is entering the visual system at any given moment. One way of overcoming the so-called bottleneck of selective attention (e.g., J. M. Wolfe, Võ, Evans, & Greene, 2011) is to make use of redundant visual information and extract summarized statistical information of the whole visual scene. Such ensemble representation occurs for low-level features of textures or simple objects, but it has also been reported for complex high-level properties. While the visual system has, for example, been shown to compute summary representations of facial expression, gender, or identity, it is less clear whether perceptual input from all parts of the visual field contributes equally to the ensemble percept. Here we extend the line of ensemble-representation research into the realm of race and look at the possibility that ensemble perception relies on weighting visual information differently depending on its origin from either the fovea or the visual periphery. We find that observers can judge the mean race of a set of faces, similar to judgments of mean emotion from faces and ensemble representations in low-level domains of visual processing. We also find that while peripheral faces seem to be taken into account for the ensemble percept, far more weight is given to stimuli presented foveally than peripherally. Whether this precision weighting of information stems from differences in the accuracy with which the visual system processes information across the visual field or from statistical inferences about the world needs to be determined by further research.
Avilés, J M; Soler, J J
2010-01-01
We have recently published support for the hypothesis that the visual systems of parents could affect nestling detectability and, consequently, influence the evolution of nestling colour designs in altricial birds. We provided comparative evidence of an adjustment of nestling colour designs to the visual system of parents, based on a comparative study of 22 altricial bird species. In this issue, however, Renoult et al. (J. Evol. Biol., 2009) question some of the assumptions and statistical approaches in our study. Their argument relied on two major points: (1) an incorrect assignment of the vision system to four out of the 22 sampled species in our study; and (2) the use of an incorrect approach for phylogenetic correction of the predicted associations. Here, we discuss in detail the re-assignment of vision systems in that study and propose an alternative interpretation of current knowledge on spectrophotometric data of avian pigments. We reanalysed the data using phylogenetic generalized least squares analyses, which account for the alluded limitations of phylogenetically independent contrasts, and, in accordance with the hypothesis, confirmed a significant influence of the parental visual system on gape coloration. Our results proved robust to the assumptions on visual system evolution for Laniidae and nocturnal owls that Renoult et al. (J. Evol. Biol., 2009) suggested may have flawed our earlier findings. Thus, the hypothesis that selection has resulted in increased detectability of nestlings through adjustment of gape coloration to parental visual systems is currently supported by our comparative data.
Looking without Perceiving: Impaired Preattentive Perceptual Grouping in Autism Spectrum Disorder
Carther-Krone, Tiffany A.; Shomstein, Sarah; Marotta, Jonathan J.
2016-01-01
Before becoming aware of a visual scene, our perceptual system has organized and selected elements in our environment to which attention should be allocated. Part of this process involves grouping perceptual features into a global whole. Individuals with autism spectrum disorders (ASD) rely on a more local processing strategy, which may be driven by difficulties in perceptually grouping stimuli. We tested this notion using a line discrimination task in which two horizontal lines were superimposed on a background of black and white dots organized so that, on occasion, the dots induced the Ponzo illusion if perceptually grouped together. Results showed that even though neither group was aware of the illusion, the ASD group was significantly less likely than the typically developing group to make perceptual judgments influenced by the illusion, revealing difficulties in preattentive grouping of visual stimuli. This may explain why individuals with ASD rely on local processing strategies, and offers new insight into the mechanism driving perceptual grouping in the typically developing human brain. PMID:27355678
NASA Astrophysics Data System (ADS)
Akimoto, Makio; Chen, Yu; Miyazaki, Michio; Yamashita, Toyonobu; Miyakawa, Michio; Hata, Mieko
The skin is unique as an organ that is highly accessible to direct visual inspection with light. Visual inspection of cutaneous morphology is the mainstay of clinical dermatology, but it relies heavily on subjective assessment by skilled dermatologists. We present an imaging colorimeter, a non-contact skin color measuring system, together with experimental results obtained with the instrument. The system comprises a video camera, a light source, a real-time image processing board, a magneto-optical disk, and a personal computer that controls the entire system. The CIE-L*a*b* uniform color space is used. The system has been applied to monitoring in several clinical diagnoses. The instrument is non-contact, easy to operate, and, unlike conventional colorimeters, offers high precision. It is useful for clinical diagnosis, monitoring, and evaluating the effectiveness of treatment.
Gilaie-Dotan, Sharon; Doron, Ravid
2017-06-01
Visual categories are associated with eccentricity biases in high-order visual cortex: faces and reading with foveally-biased regions, while common objects and space with mid- and peripherally-biased regions. As face perception and reading are among the most challenging human visual skills, and are often regarded as the peak achievements of a distributed neural network supporting common object perception, it is unclear why objects, which also rely on foveal vision to be processed, are associated with a mid-peripheral rather than with a foveal bias. Here, we studied BN, a 9-year-old boy who has normal basic-level vision, abnormal (limited) oculomotor pursuit and saccades, and shows developmental object and contour integration deficits but with no indication of prosopagnosia. Although we cannot infer causation from the data presented here, we suggest that normal pursuit and saccades could be critical for the development of contour integration and object perception. While faces and perhaps reading, when fixated upon, take up a small portion of the central visual field and require only small eye movements to be properly processed, common objects typically prevail in the mid-peripheral visual field and rely on longer-distance voluntary eye movements such as saccades to be brought to fixation. While retinal information feeds into early visual cortex in an eccentricity-orderly manner, we hypothesize that propagation of non-foveal information to mid- and high-order visual cortex critically relies on circuitry involving eye movements. Limited or atypical eye movements, as in the case of BN, may hinder normal information flow to mid-eccentricity-biased high-order visual cortex, adversely affecting its development and consequently inducing visual perceptual deficits predominantly for categories associated with these regions.
ERIC Educational Resources Information Center
Babu, Rakesh
2011-01-01
The central premise of this research is that blind and visually impaired (BVI) people cannot use the Internet effectively due to accessibility and usability problems. Use of the Internet is indispensable in today's education system that relies on Web-enhanced instruction (WEI). Therefore, BVI students cannot participate effectively in WEI. Extant…
Imitation and matching of meaningless gestures: distinct involvement from motor and visual imagery.
Lesourd, Mathieu; Navarro, Jordan; Baumard, Josselin; Jarry, Christophe; Le Gall, Didier; Osiurak, François
2017-05-01
The aim of the present study was to understand the cognitive processes underlying imitation and matching of meaningless gestures. Neuropsychological evidence obtained in brain-damaged patients has shown that distinct cognitive processes support imitation and matching of meaningless gestures. Left-brain-damaged (LBD) patients failed to imitate, while right-brain-damaged (RBD) patients failed to match meaningless gestures. Moreover, other studies with brain-damaged patients showed that LBD patients were impaired in motor imagery while RBD patients were impaired in visual imagery. Thus, we hypothesized that imitation of meaningless gestures might rely on motor imagery, whereas matching of meaningless gestures might be based on visual imagery. In a first experiment, using a correlational design, we demonstrated that posture imitation relies on motor imagery but not on visual imagery (Experiment 1a) and that posture matching relies on visual imagery but not on motor imagery (Experiment 1b). In a second experiment, by directly manipulating the body posture of the participants, we demonstrated that such manipulation evokes a difference only in the imitation task, not in the matching task. In conclusion, the present study provides direct evidence that imitating postures depends on motor imagery, whereas comparing postures depends on visual imagery. Our results are discussed in the light of recent findings about the mechanisms underlying meaningful and meaningless gestures.
ERIC Educational Resources Information Center
Chen, Y.; Norton, D. J.; McBain, R.; Gold, J.; Frazier, J. A.; Coyle, J. T.
2012-01-01
An important issue for understanding visual perception in autism concerns whether individuals with this neurodevelopmental disorder possess an advantage in processing local visual information, and if so, what is the nature of this advantage. Perception of movement speed is a visual process that relies on computation of local spatiotemporal signals…
NASA Astrophysics Data System (ADS)
Brandstetter, Miriam; Sandmann, Angela; Florian, Christine
2017-06-01
In the classroom, scientific contents are increasingly communicated through visual forms of representation. Students' learning outcomes rely on their ability to read and understand pictorial information. Understanding pictorial information in biology requires cognitive effort and can be challenging to students. Yet evidence-based knowledge about students' visual reading strategies during the process of understanding pictorial information is lacking. Therefore, 42 students aged 14-15 were asked to think aloud while trying to understand visual representations of the blood circulatory system and the patellar reflex. A category system was developed differentiating 16 categories of cognitive activities. A Principal Component Analysis revealed two underlying patterns of activities that can be interpreted as visual reading strategies: 1. inferences predominated by use of a problem-solving schema; 2. inferences predominated by recall of prior content knowledge. Each pattern consists of a specific set of cognitive activities that reflect selection, organisation and integration of pictorial information, as well as different levels of expertise. The results give detailed insights into the cognitive activities of students who were required to understand the pictorial information of complex organ systems. They provide an evidence-based foundation from which to derive instructional aids that can promote students' pictorial-information-based learning at different levels of expertise.
Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.
2009-01-01
Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants—younger (< 50 years) normally sighted, older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment ranging in age from 18–67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732
ERIC Educational Resources Information Center
Gopal, Nikhil
2017-01-01
Biomedical research increasingly relies on the analysis and visualization of a wide range of collected data. However, for certain research questions, such as those investigating the interconnectedness of biological elements, the sheer quantity and variety of data results in rather uninterpretable--this is especially true for network visualization,…
Communicating Science Concepts to Individuals with Visual Impairments Using Short Learning Modules
ERIC Educational Resources Information Center
Stender, Anthony S.; Newell, Ryan; Villarreal, Eduardo; Swearer, Dayne F.; Bianco, Elisabeth; Ringe, Emilie
2016-01-01
Of the 6.7 million individuals in the United States who are visually impaired, 63% are unemployed, and 59% have not attained an education beyond a high school diploma. Providing a basic science education to children and adults with visual disabilities can be challenging because most scientific learning relies on visual demonstrations. Creating…
ERIC Educational Resources Information Center
Absoud, Michael; Parr, Jeremy R.; Salt, Alison; Dale, Naomi
2011-01-01
Available observational tools used in the identification of social communication difficulties and diagnosis of autism spectrum disorder (ASD) rely partly on visual behaviours and therefore may not be valid in children with visual impairment. A pilot observational instrument, the Visual Impairment and Social Communication Schedule (VISS), was…
To develop behavioral tests of vestibular functioning in the Wistar rat
NASA Technical Reports Server (NTRS)
Nielson, H. C.
1980-01-01
Two tests of vestibular functioning in the rat were developed. The first test was the water maze. In the water maze the rat does not have the normal proprioceptive feedback from its limbs to help it maintain its orientation, and must rely primarily on the sensory input from its visual and vestibular systems. By altering lighting conditions and visual cues the vestibular functioning without visual cues was assessed. Whether there was visual compensation for some vestibular dysfunction was determined. The second test measured vestibular functioning of the rat's behavior on a parallel swing. In this test the rat's postural adjustments while swinging on the swing with the otoliths being stimulated were assessed. Less success was achieved in developing the parallel swing as a test of vestibular functioning than with the water maze. The major problem was incorrect initial assumptions of what the rat's probable behavior on the parallel swing would be.
Anemonefishes rely on visual and chemical cues to correctly identify conspecifics
NASA Astrophysics Data System (ADS)
Johnston, Nicole K.; Dixson, Danielle L.
2017-09-01
Organisms rely on sensory cues to interpret their environment and make important life-history decisions. Accurate recognition is of particular importance in diverse reef environments. Most evidence on the use of sensory cues focuses on those used in predator avoidance or habitat recognition, with little information on their role in conspecific recognition. Yet conspecific recognition is essential for life-history decisions including settlement, mate choice, and dominance interactions. Using a sensory manipulated tank and a two-chamber choice flume, anemonefish conspecific response was measured in the presence and absence of chemical and/or visual cues. Experiments were then repeated in the presence or absence of two heterospecific species to evaluate whether a heterospecific fish altered the conspecific response. Anemonefishes responded to both the visual and chemical cues of conspecifics, but relied on the combination of the two cues to recognize conspecifics inside the sensory manipulated tank. These results contrast previous studies focusing on predator detection where anemonefishes were found to compensate for the loss of one sensory cue (chemical) by utilizing a second cue (visual). This lack of sensory compensation may impact the ability of anemonefishes to acclimate to changing reef environments in the future.
Are visual peripheries forever young?
Burnat, Kalina
2015-01-01
The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.
Wilkinson, Krista M.; Light, Janice; Drager, Kathryn
2013-01-01
Aided augmentative and alternative communication (AAC) interventions have been demonstrated to facilitate a variety of communication outcomes in persons with intellectual disabilities. Most aided AAC systems rely on a visual modality. When the medium for communication is visual, it seems likely that the effectiveness of intervention depends in part on the effectiveness and efficiency with which the information presented in the display can be perceived, identified, and extracted by communicators and their partners. Understanding of visual-cognitive processing – that is, how a user attends, perceives, and makes sense of the visual information on the display – therefore seems critical to designing effective aided AAC interventions. In this Forum Note, we discuss characteristics of one particular type of aided AAC display, Visual Scene Displays (VSDs), as they may relate to user visual and cognitive processing. We consider three specific ways in which bodies of knowledge drawn from the visual cognitive sciences may be relevant to the composition of VSDs, with the understanding that direct research with children with complex communication needs is necessary to verify or refute our speculations. PMID:22946989
Assessment of visual communication by information theory
NASA Astrophysics Data System (ADS)
Huck, Friedrich O.; Fales, Carl L.
1994-01-01
This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.
People can understand descriptions of motion without activating visual motion brain regions
Dravida, Swethasri; Saxe, Rebecca; Bedny, Marina
2013-01-01
What is the relationship between our perceptual and linguistic neural representations of the same event? We approached this question by asking whether visual perception of motion and understanding linguistic depictions of motion rely on the same neural architecture. The same group of participants took part in two language tasks and one visual task. In task 1, participants made semantic similarity judgments with high motion (e.g., “to bounce”) and low motion (e.g., “to look”) words. In task 2, participants made plausibility judgments for passages describing movement (“A centaur hurled a spear … ”) or cognitive events (“A gentleman loved cheese …”). Task 3 was a visual motion localizer in which participants viewed animations of point-light walkers, randomly moving dots, and stationary dots changing in luminance. Based on the visual motion localizer we identified classic visual motion areas of the temporal (MT/MST and STS) and parietal cortex (inferior and superior parietal lobules). We find that these visual cortical areas are largely distinct from neural responses to linguistic depictions of motion. Motion words did not activate any part of the visual motion system. Motion passages produced a small response in the right superior parietal lobule, but none of the temporal motion regions. These results suggest that (1) as compared to words, rich language stimuli such as passages are more likely to evoke mental imagery and more likely to affect perceptual circuits and (2) effects of language on the visual system are more likely in secondary perceptual areas as compared to early sensory areas. We conclude that language and visual perception constitute distinct but interacting systems. PMID:24009592
The Role of the Oculomotor System in Updating Visual-Spatial Working Memory across Saccades.
Boon, Paul J; Belopolsky, Artem V; Theeuwes, Jan
2016-01-01
Visual-spatial working memory (VSWM) helps us to maintain and manipulate visual information in the absence of sensory input. It has been proposed that VSWM is an emergent property of the oculomotor system. In the present study we investigated the role of the oculomotor system in the updating of spatial working memory representations across saccades. Participants had to maintain a location in memory while making a saccade to a different location. During the saccade the target was displaced, which went unnoticed by the participants. After executing the saccade, participants had to indicate the memorized location. If memory updating fully relies on cancellation driven by extraretinal oculomotor signals, the displacement should have no effect on the perceived location of the memorized stimulus. However, if postsaccadic retinal information about the location of the saccade target is used, the perceived location will be shifted according to the target displacement. As it has been suggested that maintenance of accurate spatial representations across saccades is especially important for action control, we used different ways of reporting the location held in memory: a match-to-sample task, a mouse click, or another saccade. The results showed a small systematic target displacement bias in all response modalities. Parametric manipulation of the distance between the to-be-memorized stimulus and the saccade target revealed that the target displacement bias increased over time and changed its spatial profile from being initially centered on locations around the saccade target to becoming spatially global. Taken together, the results suggest that we rely exclusively neither on extraretinal nor on retinal information in updating working memory representations across saccades. The relative contribution of retinal signals is not fixed but depends on both the time available to integrate these signals and the distance between the saccade target and the remembered location.
Demonstrating NaradaBrokering as a Middleware Fabric for Grid-based Remote Visualization Services
NASA Astrophysics Data System (ADS)
Pallickara, S.; Erlebacher, G.; Yuen, D.; Fox, G.; Pierce, M.
2003-12-01
Remote Visualization Services (RVS) have tended to rely on approaches based on the client-server paradigm. Here we demonstrate our approach, based on a distributed brokering infrastructure, NaradaBrokering [1], which relies on distributed, asynchronous and loosely coupled interactions to meet the requirements and constraints of RVS. In our approach to RVS, services advertise their capabilities to the broker network, which manages these service advertisements. Among the services considered within our system are those that perform graphic transformations, those that mediate access to specialized datasets, and finally those that manage the execution of specified tasks. There can be multiple instances of each of these services, and the system ensures that the load for a given service is distributed efficiently over these service instances. We will demonstrate an implementation of the concepts outlined in the oral presentation. This involves two or more visualization servers interacting asynchronously with multiple clients through NaradaBrokering. The communicating entities may exchange SOAP [2] (Simple Object Access Protocol) messages. SOAP is a lightweight protocol for the exchange of information in a decentralized, distributed environment. It is an XML-based protocol that consists of three parts: an envelope that describes what is in a message and how to process it, rules for expressing instances of application-defined data types, and a convention for representing remote invocation operations. Furthermore, we will also demonstrate how clients can retrieve their results after prolonged disconnects or after any failures that might have taken place. The entities, services and clients alike, are not limited by the geographical distances that separate them. We are planning to test this system in the context of trans-Atlantic links separating the interacting entities. [1] The NaradaBrokering Project: http://www.naradabrokering.org [2] Newcomer, E., 2002, Understanding Web Services: XML, WSDL, SOAP, and UDDI, Addison-Wesley Professional.
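As an illustration of the SOAP exchange the abstract describes, the sketch below posts a minimal SOAP 1.1 envelope over HTTP using only the Python standard library. The endpoint, operation name, and payload fields are hypothetical; NaradaBrokering's actual message schema and APIs are not shown in the abstract.

```python
import urllib.request

# Sketch of a SOAP 1.1 request such as the RVS entities above might
# exchange. The endpoint, operation, and payload are hypothetical;
# only standard-library calls are used.

SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <renderRequest xmlns="http://example.org/rvs">
      <datasetId>quake-2003-12</datasetId>
      <transform>isosurface</transform>
    </renderRequest>
  </soap:Body>
</soap:Envelope>"""

def send_render_request(endpoint: str) -> bytes:
    req = urllib.request.Request(
        endpoint,
        data=SOAP_ENVELOPE.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": '"renderRequest"'},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # the service's SOAP response envelope

# send_render_request("https://example.org/rvs/endpoint")  # hypothetical URL
```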
Don’t Assume Deaf Students are Visual Learners
Marschark, Marc; Paivio, Allan; Spencer, Linda J.; Durkin, Andreana; Borgna, Georgianna; Convertino, Carol; Machmer, Elizabeth
2016-01-01
In the education of deaf learners, from primary school to postsecondary settings, it frequently is suggested that deaf students are visual learners. That assumption appears to be based on the visual nature of signed languages—used by some but not all deaf individuals—and the fact that with greater hearing losses, deaf students will rely relatively more on vision than audition. However, the questions of whether individuals with hearing loss are more likely to be visual learners than verbal learners, or more likely than hearing peers to be visual learners, have not been empirically explored. Several recent studies, in fact, have indicated that hearing learners typically perform as well as or better than deaf learners on a variety of visual-spatial tasks. The present study used two standardized instruments to examine learning styles among deaf college students who primarily rely on sign language or spoken language and their hearing peers. The visual-verbal dimension was of particular interest. Consistent with recent indirect findings, results indicated that deaf students are no more likely than hearing students to be visual learners and are no stronger in their visual skills and habits than their verbal skills and habits, nor are deaf students' visual orientations associated with sign language skills. The results clearly have specific implications for the education of deaf learners. PMID:28344430
Astronomy, Visual Literacy, and Liberal Arts Education
NASA Astrophysics Data System (ADS)
Crider, Anthony
2016-01-01
With the exponentially growing amount of visual content that twenty-first century students will face throughout their lives, teaching them to respond to it with visual and information literacy skills should be a clear priority for liberal arts education. While visual literacy is more commonly covered within humanities curricula, I will argue that because astronomy is inherently a visual science, it is a fertile academic discipline for the teaching and learning of visual literacy. Astronomers, like many scientists, rely on three basic types of visuals to convey information: images, qualitative diagrams, and quantitative plots. In this talk, I will highlight classroom methods that can be used to teach students to "read" and "write" these three separate visuals. Examples of "reading" exercises include questioning the authorship and veracity of images, confronting the distorted scales of many diagrams published in astronomy textbooks, and extracting quantitative information from published plots. Examples of "writing" exercises include capturing astronomical images with smartphones, re-sketching textbook diagrams on whiteboards, and plotting data with Google Motion Charts or iPython notebooks. Students can be further pushed to synthesize these skills with end-of-semester slide presentations that incorporate relevant images, diagrams, and plots rather than relying solely on bulleted lists.
Assessment of short-term memory in Arabic speaking children with specific language impairment.
Kaddah, F A; Shoeib, R M; Mahmoud, H E
2010-12-15
Children with Specific Language Impairment (SLI) may have some kind of memory disorder that could increase their linguistic impairment. This study assessed short-term memory skills in Arabic-speaking children with either Expressive Language Impairment (ELI) or Receptive/Expressive Language Impairment (R/ELI) in comparison to controls, in order to estimate the nature and extent of any specific deficits in these children that could explain the different prognostic results of language intervention. Eighteen children were included in each group. Receptive, expressive and total language quotients were calculated using the Arabic language test. Auditory and visual short-term memory were assessed using the Arabic version of the Illinois Test of Psycholinguistic Abilities. Both SLI groups showed significantly lower linguistic abilities and poorer auditory and visual short-term memory in comparison to normal children. The R/ELI group performed worse than the ELI group on all measured parameters. A strong association was found between most tasks of auditory and visual short-term memory and linguistic abilities. The results of this study highlighted a specific degree of deficit in auditory and visual short-term memory in both SLI groups. These deficits were more prominent in the R/ELI group. Moreover, the strong association between the different auditory and visual short-term memory tasks and language abilities in children with SLI must be taken into account when planning an intervention program for these children.
Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion
Fajen, Brett R.; Matthis, Jonathan S.
2013-01-01
Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
Concerning the Video Drift Method to Measure Double Stars
NASA Astrophysics Data System (ADS)
Nugent, Richard L.; Iverson, Ernest W.
2015-05-01
Classical methods to measure position angles and separations of double stars rely on just a few measurements, either from visual observations or photographic means. Visual and photographic CCD observations are subject to errors from the following sources: misalignments of the eyepiece, camera, Barlow lens, micrometer, or focal reducer; systematic errors from uncorrected optical distortions; aberrations of the telescope system; camera tilt; and magnitude and color effects. Conventional video methods rely on calibration doubles, graphical calculation of the east-west direction, and careful choice of select video frames stacked for measurement. Atmospheric motion, on the order of 0.5-1.5 arcseconds, is one of the larger sources of error in any exposure/measurement method. Ideally, if a data set from a short video could be used to derive position angle and separation, with each data set self-calibrating independently of any calibration doubles or star catalogues, this would provide measurements of high systematic accuracy. These aims are achieved by the video drift method first proposed by the authors in 2011. This self-calibrating video method automatically analyzes thousands of measurements from a short video clip.
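The quantities the drift method derives, separation and position angle, reduce to simple plane geometry once the east-west direction and plate scale are calibrated from the drift itself. A minimal sketch under that assumption follows; the offsets are invented values, and the full drift calibration (sidereal trailing at 15.0411 cos(declination) arcseconds per second of time) is only summarized in a comment.

```python
import math

# Geometry sketch: separation and position angle from calibrated
# offsets of the companion relative to the primary, in arcseconds
# (east-positive, north-positive). In the video drift method the
# orientation and plate scale come from the stars' own east-west
# trailing at 15.0411 * cos(declination) arcsec per second of time
# with the telescope drive off; that calibration step is summarized,
# not implemented, here.

def separation_and_pa(d_east: float, d_north: float) -> tuple[float, float]:
    sep = math.hypot(d_east, d_north)
    pa = math.degrees(math.atan2(d_east, d_north)) % 360.0  # 0 = north, 90 = east
    return sep, pa

# Companion 3.2" east and 1.5" north of the primary (invented values).
sep, pa = separation_and_pa(3.2, 1.5)
print(f"separation = {sep:.2f} arcsec, PA = {pa:.1f} deg")
```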
Unilateral Amblyopia Affects Two Eyes: Fellow Eye Deficits in Amblyopia.
Meier, Kimberly; Giaschi, Deborah
2017-03-01
Unilateral amblyopia is a visual disorder that arises after selective disruption of visual input to one eye during critical periods of development. In the clinic, amblyopia is understood as poor visual acuity in an eye that was deprived of pattern vision early in life. By its nature, however, amblyopia has an adverse effect on the development of a binocular visual system and the interactions between signals from two eyes. Visual functions aside from visual acuity are impacted, and many studies have indicated compromised sensitivity in the fellow eye even though it demonstrates normal visual acuity. While these fellow eye deficits have been noted, no overarching theory has been proposed to describe why and under what conditions the fellow eye is impacted by amblyopia. Here, we consider four explanations that may account for decreased fellow eye sensitivity: the fellow eye is adversely impacted by treatment for amblyopia; the maturation of the fellow eye is delayed by amblyopia; fellow eye sensitivity is impacted for visual functions that rely on binocular cortex; and fellow eye deficits reflect an adaptive mechanism that works to equalize the sensitivity of the two eyes. To evaluate these ideas, we describe five visual functions that are commonly reported to be deficient in the amblyopic eye (hyperacuity, contrast sensitivity, spatial integration, global motion, and motion-defined form), and unify the current evidence for fellow eye deficits. Further research targeted at exploring fellow eye deficits in amblyopia will provide us with a broader understanding of normal visual development and how amblyopia impacts the developing visual system.
Postural and Spatial Orientation Driven by Virtual Reality
Keshner, Emily A.; Kenyon, Robert V.
2009-01-01
Orientation in space is a perceptual variable intimately related to postural orientation that relies on visual and vestibular signals to correctly identify our position relative to vertical. We have combined a virtual environment with motion of a posture platform to produce visual-vestibular conditions that allow us to explore how motion of the visual environment may affect perception of vertical and, consequently, affect postural stabilizing responses. In order to involve a higher level perceptual process, we needed to create a visual environment that was immersive. We did this by developing visual scenes that possess contextual information using color, texture, and 3-dimensional structures. Update latency of the visual scene was close to physiological latencies of the vestibulo-ocular reflex. Using this system we found that even when healthy young adults stand and walk on a stable support surface, they are unable to ignore wide field of view visual motion and they adapt their postural orientation to the parameters of the visual motion. Balance training within our environment elicited measurable rehabilitation outcomes. Thus we believe that virtual environments can serve as a clinical tool for evaluation and training of movement in situations that closely reflect conditions found in the physical world. PMID:19592796
Vision in two cyprinid fish: implications for collective behavior
Moore, Bret A.; Tyrrell, Luke P.; Fernández-Juricic, Esteban
2015-01-01
Many species of fish rely on their visual systems to interact with conspecifics and these interactions can lead to collective behavior. Individual-based models have been used to predict collective interactions; however, these models generally make simplistic assumptions about the sensory systems that are applied without proper empirical testing to different species. This could limit our ability to predict (and test empirically) collective behavior in species with very different sensory requirements. In this study, we characterized components of the visual system in two species of cyprinid fish known to engage in visually dependent collective interactions (zebrafish Danio rerio and golden shiner Notemigonus crysoleucas) and derived quantitative predictions about the positioning of individuals within schools. We found that both species had relatively narrow binocular and blind fields and wide visual coverage. However, golden shiners had more visual coverage in the vertical plane (binocular field extending behind the head) and higher visual acuity than zebrafish. The centers of acute vision (areae) of both species projected in the fronto-dorsal region of the visual field, but those of the zebrafish projected more dorsally than those of the golden shiner. Based on this visual sensory information, we predicted that: (a) predator detection time could be increased by >1,000% in zebrafish and >100% in golden shiners with an increase in nearest neighbor distance, (b) zebrafish schools would have a higher roughness value (surface area/volume ratio) than those of golden shiners, (c) and that nearest neighbor distance would vary from 8 to 20 cm to visually resolve conspecific striping patterns in both species. Overall, considering between-species differences in the sensory system of species exhibiting collective behavior could change the predictions about the positioning of individuals in the group as well as the shape of the school, which can have implications for group cohesion. We suggest that more effort should be invested in assessing the role of the sensory system in shaping local interactions driving collective behavior. PMID:26290783
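The nearest-neighbor-distance predictions above follow from acuity geometry: a striping pattern is resolvable only while one stripe cycle subtends at least the eye's minimum resolvable angle. The sketch below computes the corresponding maximum viewing distance; the acuity and stripe-cycle values are illustrative assumptions, not measurements from the study.

```python
import math

# Acuity geometry sketch: the farthest distance at which a striping
# pattern remains resolvable is where one stripe cycle subtends the
# minimum resolvable angle, 1/acuity degrees for acuity in cycles per
# degree. The acuity and stripe-cycle values below are illustrative.

def max_resolving_distance_cm(stripe_cycle_cm: float, acuity_cpd: float) -> float:
    min_angle = math.radians(1.0 / acuity_cpd)  # smallest resolvable angle
    return stripe_cycle_cm / (2.0 * math.tan(min_angle / 2.0))

# A 0.3 cm stripe cycle seen with ~1 cycle/degree acuity -> ~17 cm,
# the same order of magnitude as the 8-20 cm range quoted above.
print(f"{max_resolving_distance_cm(0.3, 1.0):.1f} cm")
```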
NASA Astrophysics Data System (ADS)
Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris
This paper focuses on the visualization and manipulation of graphical content in distributed network environments. The graphical middleware and 3D desktop prototypes we developed were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radio-active or other pollutant spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind. Existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the latest available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated, and we demonstrate: the flow of data, which originates from heterogeneous data sources; interoperability across different operating systems; and 3D visual representations to enhance the end-users' interactions.
Visual behavior characterization for intrusion and misuse detection
NASA Astrophysics Data System (ADS)
Erbacher, Robert F.; Frincke, Deborah
2001-05-01
As computer and network intrusions become more and more of a concern, the need for better capabilities to assist in the detection and analysis of intrusions also increases. System administrators typically rely on log files to analyze usage and detect misuse. However, as a consequence of the amount of data collected by each machine, multiplied by the tens or hundreds of machines under the system administrator's auspices, the entirety of the data available is neither collected nor analyzed. This is compounded by the need to analyze network traffic data as well. We propose a methodology for visually analyzing network and computer log information based on the behavior of the users. Each user's behavior is the key to determining their intent and overriding activity, whether they attempt to hide their actions or not. Proficient hackers will attempt to hide their ultimate activities, which hinders the reliability of log file analysis. Visually analyzing a user's behavior, however, is much more adaptable and difficult to counteract.
Acoustic-tactile rendering of visual information
NASA Astrophysics Data System (ADS)
Silva, Pubudu Madhawa; Pappas, Thrasyvoulos N.; Atkins, Joshua; West, James E.; Hartmann, William M.
2012-03-01
In previous work, we have proposed a dynamic, interactive system for conveying visual information via hearing and touch. The system is implemented with a touch screen that allows the user to interrogate a two-dimensional (2-D) object layout by active finger scanning while listening to spatialized auditory feedback. Sound is used as the primary source of information for object localization and identification, while touch is used both for pointing and for kinesthetic feedback. Our previous work considered shape and size perception of simple objects via hearing and touch. The focus of this paper is on the perception of a 2-D layout of simple objects with identical size and shape. We consider the selection and rendition of sounds for object identification and localization. We rely on the head-related transfer function for rendering sound directionality, and consider variations of sound intensity and tempo as two alternative approaches for rendering proximity. Subjective experiments with visually-blocked subjects are used to evaluate the effectiveness of the proposed approaches. Our results indicate that intensity outperforms tempo as a proximity cue, and that the overall system for conveying a 2-D layout is quite promising.
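The two proximity renderings compared above map finger-to-target distance onto either loudness or click tempo. A minimal sketch of such mappings follows; the decibel range and tempo limits are illustrative assumptions, not the parameters used in the experiments.

```python
# Sketch of two candidate distance-to-sound mappings: intensity
# (attenuation in dB grows with distance) and tempo (click interval
# lengthens with distance). The 24 dB range and 0.1-1.0 s interval
# limits are illustrative assumptions, not the study's parameters.

def intensity_gain_db(dist: float, max_dist: float, range_db: float = 24.0) -> float:
    """0 dB at the target, linearly attenuated to -range_db at max_dist."""
    frac = min(max(dist / max_dist, 0.0), 1.0)
    return -range_db * frac

def click_interval_s(dist: float, max_dist: float,
                     fastest: float = 0.1, slowest: float = 1.0) -> float:
    """Shorter intervals (faster clicks) as the finger nears the target."""
    frac = min(max(dist / max_dist, 0.0), 1.0)
    return fastest + (slowest - fastest) * frac

for d in (0.0, 0.25, 0.5):  # finger-to-target distances in meters
    print(f"d={d:.2f} m: gain={intensity_gain_db(d, 0.5):+.1f} dB, "
          f"interval={click_interval_s(d, 0.5) * 1000:.0f} ms")
```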
Horowitz, Seth S; Cheney, Cheryl A; Simmons, James A
2004-01-01
The big brown bat (Eptesicus fuscus) is an aerial-feeding insectivorous species that relies on echolocation to avoid obstacles and to detect flying insects. Spatial perception in the dark using echolocation challenges the vestibular system to function without substantial visual input for orientation. IR thermal video recordings show the complexity of bat flights in the field and suggest a highly dynamic role for the vestibular system in orientation and flight control. To examine this role, we carried out laboratory studies of flight behavior under illuminated and dark conditions in both static and rotating obstacle tests while administering heavy water (D2O) to impair vestibular inputs. Eptesicus carried out complex maneuvers through both fixed arrays of wires and a rotating obstacle array using both vision and echolocation, or when guided by echolocation alone. When treated with D2O in combination with lack of visual cues, bats showed considerable decrements in performance. These data indicate that big brown bats use both vision and echolocation to provide spatial registration for head position information generated by the vestibular system.
Camouflage predicts survival in ground-nesting birds
Troscianko, Jolyon; Wilson-Aggarwal, Jared; Stevens, Martin; Spottiswoode, Claire N.
2016-01-01
Evading detection by predators is crucial for survival. Camouflage is therefore a widespread adaptation, but despite substantial research effort our understanding of different camouflage strategies has relied predominantly on artificial systems and on experiments disregarding how camouflage is perceived by predators. Here we show, for the first time in a natural system, that the survival probability of wild animals is directly related to their level of camouflage as perceived by the visual systems of their main predators. Ground-nesting plovers and coursers flee as threats approach, and their clutches were more likely to survive when their egg contrast matched their surrounds. In nightjars – which remain motionless as threats approach – clutch survival depended on plumage pattern matching between the incubating bird and its surrounds. Our findings highlight the importance of pattern- and luminance-based camouflage properties, and the effectiveness of modern techniques in capturing the adaptive properties of visual phenotypes. PMID:26822039
Camouflage predicts survival in ground-nesting birds.
Troscianko, Jolyon; Wilson-Aggarwal, Jared; Stevens, Martin; Spottiswoode, Claire N
2016-01-29
Evading detection by predators is crucial for survival. Camouflage is therefore a widespread adaptation, but despite substantial research effort our understanding of different camouflage strategies has relied predominantly on artificial systems and on experiments disregarding how camouflage is perceived by predators. Here we show, for the first time in a natural system, that the survival probability of wild animals is directly related to their level of camouflage as perceived by the visual systems of their main predators. Ground-nesting plovers and coursers flee as threats approach, and their clutches were more likely to survive when their egg contrast matched their surrounds. In nightjars - which remain motionless as threats approach - clutch survival depended on plumage pattern matching between the incubating bird and its surrounds. Our findings highlight the importance of pattern- and luminance-based camouflage properties, and the effectiveness of modern techniques in capturing the adaptive properties of visual phenotypes.
On the assessment of visual communication by information theory
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1993-01-01
This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.
Native American Visual Vocabulary: Ways of Thinking and Living.
ERIC Educational Resources Information Center
Dyc, Gloria; Milligan, Carolyn
Visual literacy is a culturally-derived strength of Native American students. On a continent with more than 200 languages, Native Americans relied heavily on visual intelligence for trade and communication between tribes. Tribal people interpreted medicine paint, tattoos, and clothing styles to determine the social roles of those with whom they…
Visual Knowledge in Tactical Planning: Preliminary Knowledge Acquisition Phase 1 Technical Report
1990-04-05
MANAGEMENT INFORMATION, COMMUNICATIONS, AND COMPUTER SCIENCES Visual Knowledge in Tactical Planning: Preliminary Knowledge Acquisition Phase I Technical...perceived provides information in multiple modalities and, in fact, we may rely on a non-verbal mode for much of our understanding of the situation...some tasks, almost all the pertinent information is provided via diagrams, maps, and other illustrations. Visual Knowledge Visual experience forms a
Wiegand, Iris; Töllner, Thomas; Habekost, Thomas; Dyrholm, Mads; Müller, Hermann J; Finke, Kathrin
2014-08-01
An individual's visual attentional capacity is characterized by 2 central processing resources, visual perceptual processing speed and visual short-term memory (vSTM) storage capacity. Based on Bundesen's theory of visual attention (TVA), independent estimates of these parameters can be obtained from mathematical modeling of performance in a whole report task. The framework's neural interpretation (NTVA) further suggests distinct brain mechanisms underlying these 2 functions. Using an interindividual difference approach, the present study was designed to establish the respective ERP correlates of both parameters. Participants with higher compared to participants with lower processing speed were found to show significantly reduced visual N1 responses, indicative of higher efficiency in early visual processing. By contrast, for participants with higher relative to lower vSTM storage capacity, contralateral delay activity over visual areas was enhanced while overall nonlateralized delay activity was reduced, indicating that holding (the maximum number of) items in vSTM relies on topographically specific sustained activation within the visual system. Taken together, our findings show that the 2 main aspects of visual attentional capacity are reflected in separable neurophysiological markers, validating a central assumption of NTVA. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Rosselli, Federica B.; Alemi, Alireza; Ansuini, Alessio; Zoccolan, Davide
2015-01-01
In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning. PMID:25814936
3d visualization of atomistic simulations on every desktop
NASA Astrophysics Data System (ADS)
Peled, Dan; Silverman, Amihai; Adler, Joan
2013-08-01
Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.
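The anaglyphic technique the authors describe is easy to sketch: two slightly displaced views are packed into complementary color channels and separated again by the colored glasses. The sketch below assumes the common red-cyan channel assignment (red for the left eye); it is not code from AViz itself.

```python
# Minimal anaglyph composition: left view in red, right view in cyan.
import numpy as np

def anaglyph(left_gray, right_gray):
    """left_gray, right_gray: 2-D float arrays in [0, 1], same shape.
    Returns an H x W x 3 RGB image: left view in red, right in cyan."""
    rgb = np.zeros(left_gray.shape + (3,))
    rgb[..., 0] = left_gray    # red channel   -> left eye
    rgb[..., 1] = right_gray   # green channel -> right eye
    rgb[..., 2] = right_gray   # blue channel  -> right eye
    return rgb

# The two views would normally be the scene rendered from two slightly
# offset camera positions; a small horizontal shift mimics that parallax.
img = np.random.rand(64, 64)
stereo = anaglyph(img, np.roll(img, 3, axis=1))
```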
Orhan, U.; Erdogmus, D.; Roark, B.; Oken, B.; Purwar, S.; Hild, K. E.; Fowler, A.; Fried-Oken, M.
2013-01-01
RSVP Keyboard™ is an electroencephalography (EEG) based brain-computer interface (BCI) typing system, designed as an assistive technology for the communication needs of people with locked-in syndrome (LIS). It relies on rapid serial visual presentation (RSVP) and does not require precise eye-gaze control. Existing BCI typing systems that use event-related potentials (ERPs) in EEG suffer from low accuracy due to a low signal-to-noise ratio. RSVP Keyboard™ therefore utilizes context-based decision making, incorporating a language model to improve the accuracy of letter decisions. To further improve the contribution of the language model, we propose recursive Bayesian estimation, which relies on non-committing string decisions, and conduct an offline analysis comparing it with the existing naïve Bayesian fusion approach. The results indicate the superiority of recursive Bayesian fusion, and in the next generation of RSVP Keyboard™ we plan to incorporate this new approach. PMID:23366432
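As a rough illustration of the fusion idea, the sketch below recursively updates a posterior over symbols by multiplying a language-model prior with successive evidence likelihoods. The toy alphabet and all numbers are invented for illustration; the actual system derives likelihoods from ERP classifier scores.

```python
# Toy recursive Bayesian fusion of a language-model prior with
# successive per-presentation EEG evidence.
import numpy as np

def update_posterior(prior, likelihood):
    """One recursive Bayesian step: posterior is prior * likelihood,
    renormalized over the symbol set."""
    post = prior * likelihood
    return post / post.sum()

alphabet = list("abc_")                    # toy 4-symbol alphabet
belief = np.array([0.5, 0.2, 0.2, 0.1])    # language-model prior
for likelihood in [np.array([0.4, 0.3, 0.2, 0.1]),     # RSVP pass 1
                   np.array([0.6, 0.2, 0.1, 0.1])]:    # RSVP pass 2
    belief = update_posterior(belief, likelihood)
print(dict(zip(alphabet, belief.round(3))))  # evidence sharpens 'a'
```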
Familiar route loyalty implies visual pilotage in the homing pigeon
Biro, Dora; Meade, Jessica; Guilford, Tim
2004-01-01
Wide-ranging animals, such as birds, regularly traverse large areas of the landscape efficiently in the course of their local movement patterns, which raises fundamental questions about the cognitive mechanisms involved. By using precision global-positioning-system loggers, we show that homing pigeons (Columba livia) not only come to rely on highly stereotyped yet surprisingly inefficient routes within the local area but are attracted directly back to their individually preferred routes even when released from novel sites off-route. This precise route loyalty demonstrates a reliance on familiar landmarks throughout the flight, which was unexpected under current models of avian navigation. We discuss how visual landmarks may be encoded as waypoints within familiar route maps. PMID:15572457
Visualizing the deep end of sound: plotting multi-parameter results from infrasound data analysis
NASA Astrophysics Data System (ADS)
Perttu, A. B.; Taisne, B.
2016-12-01
Infrasound is sound below the threshold of human hearing: approximately 20 Hz. The field of infrasound research, like other waveform-based fields, relies on several standard processing methods and data visualizations, including waveform plots and spectrograms. The installation of the International Monitoring System (IMS) global network of infrasound arrays contributed to the resurgence of infrasound research. Array processing is an important method used in infrasound research; however, it produces data sets with a large number of parameters and requires innovative plotting techniques. The goal in designing new figures is to present easily comprehensible, information-rich plots through careful selection of data density and plotting methods.
Modeling and measuring the visual detection of ecologically relevant motion by an Anolis lizard.
Pallus, Adam C; Fleishman, Leo J; Castonguay, Philip M
2010-01-01
Motion in the visual periphery of lizards, and other animals, often causes a shift of visual attention toward the moving object. This behavioral response must be tuned more strongly to relevant motion (predators, prey, conspecifics) than to irrelevant motion (windblown vegetation). Early stages of visual motion detection rely on simple local circuits known as elementary motion detectors (EMDs). We presented a computer model consisting of a grid of correlation-type EMDs with videos of natural motion patterns, including prey, predators and windblown vegetation. We systematically varied the model parameters and quantified the relative response to the different classes of motion. We carried out behavioral experiments with the lizard Anolis sagrei and determined that their visual response could be modeled with a grid of correlation-type EMDs with a spacing parameter of 0.3 degrees of visual angle and a time constant of 0.1 s. The model with these parameters gave substantially stronger responses to relevant motion patterns than to windblown vegetation under equivalent conditions. However, the model is sensitive to local contrast and viewer-object distance. Therefore, additional neural processing is probably required for the visual system to reliably distinguish relevant from irrelevant motion under a full range of natural conditions.
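A correlation-type EMD of the kind used in the model can be sketched in a few lines: each detector multiplies a delayed (low-pass-filtered) signal from one point with the undelayed signal from a neighboring point, and subtracts the mirror-symmetric product. The 0.1 s time constant comes from the abstract; the discretization and the test stimulus are assumptions.

```python
# Minimal correlation-type (Hassenstein-Reichardt style) EMD array.
import numpy as np

def emd_response(stimulus, dt=0.01, tau=0.1):
    """stimulus: 2-D array (time x space), sampled so that adjacent
    columns are one detector spacing apart (e.g., 0.3 deg)."""
    n_t, n_x = stimulus.shape
    lp = np.zeros(n_x)              # low-pass (delayed) channel
    alpha = dt / (tau + dt)         # first-order low-pass coefficient
    out = np.zeros((n_t, n_x - 1))
    for t in range(n_t):
        lp += alpha * (stimulus[t] - lp)
        # correlate delayed signal with the undelayed neighbor, both ways
        out[t] = lp[:-1] * stimulus[t, 1:] - lp[1:] * stimulus[t, :-1]
    return out  # positive = rightward motion, negative = leftward

# Example: a bright bar drifting rightward yields a net positive response.
x = np.arange(100)
frames = np.array([np.exp(-0.5 * ((x - 20 - 0.5 * t) / 3) ** 2)
                   for t in range(200)])
print(emd_response(frames).mean())  # > 0 for rightward motion
```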
Effect of Lags on Human Performance with Head-Coupled Simulators
1993-06-01
fashion relying on visual feedback. (iii) Precognitive This condition exists when the operator has complete information about the future system input...and so it is no longer necessary to maintain continuous closed-loop control of the perceived error. Although precognitive control is not really...ahead so that an appropriate maneuver is made in advance. This situation can be simulated by the "Precognitive Display" in which the next target
JView Visualization for Next Generation Air Transportation System
2011-01-01
hardware graphics acceleration. JView relies on concrete Object Oriented Design (OOD) and programming techniques to provide a robust and venue non...visibility priority of a texture set. A good example of this is when translucent images should always be visible over the other textures...elements present in the scene. • Capture Alpha. Allows the alpha color channel (translucency) to be saved when capturing images or movies of a 3D scene
ERIC Educational Resources Information Center
Garcia-Belmonte, Germà
2017-01-01
Spatial visualization is a well-established topic of education research that has helped improve science and engineering students' skills in spatial relations. Connections have been established between visualization as a comprehension tool and instruction in several scientific fields. Learning about dynamic processes mainly relies upon static…
A Task-Dependent Causal Role for Low-Level Visual Processes in Spoken Word Comprehension
ERIC Educational Resources Information Center
Ostarek, Markus; Huettig, Falk
2017-01-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual…
Visual Navigation in Nocturnal Insects.
Warrant, Eric; Dacke, Marie
2016-05-01
Despite their tiny eyes and brains, nocturnal insects have evolved a remarkable capacity to visually navigate at night. Whereas some use moonlight or the stars as celestial compass cues to maintain a straight-line course, others use visual landmarks to navigate to and from their nest. These impressive abilities rely on highly sensitive compound eyes and specialized visual processing strategies in the brain. ©2016 Int. Union Physiol. Sci./Am. Physiol. Soc.
Toward a Scalable Visualization System for Network Traffic Monitoring
NASA Astrophysics Data System (ADS)
Malécot, Erwan Le; Kohara, Masayoshi; Hori, Yoshiaki; Sakurai, Kouichi
With the multiplication of attacks against computer networks, system administrators are required to carefully monitor the traffic exchanged by the networks they manage. However, that monitoring task is increasingly laborious because of the growing amount of data to analyze, a trend that will intensify with the explosion of the number of devices connected to computer networks and the global rise in available network bandwidth. System administrators therefore rely heavily on automated tools to assist them and simplify the analysis of the data. Yet these tools provide limited support and, most of the time, require highly skilled operators. Recently, some research teams have started to study the application of visualization techniques to the analysis of network traffic data. We believe that this original approach can also allow system administrators to deal with the large amount of data they have to process. In this paper, we introduce a tool for network traffic monitoring using visualization techniques, developed to assist the system administrators of our corporate network. We explain how we designed the tool and some of the choices we made regarding the visualization techniques used. The resulting tool proposes two linked representations of network traffic and activity, one in 2D and the other in 3D; as 2D and 3D visualization techniques have different assets, we combined them in our tool to take advantage of their complementarity. We finally tested the tool in order to evaluate the accuracy of our approach.
Nature as a model for biomimetic sensors
NASA Astrophysics Data System (ADS)
Bleckmann, H.
2012-04-01
Mammals, like humans, rely mainly on acoustic, visual and olfactory information. In addition, most also use tactile and thermal cues for object identification and spatial orientation. Most non-mammalian animals also possess a visual, acoustic and olfactory system. However, besides these systems they have developed a large variety of highly specialized sensors. For instance, pyrophilous insects use infrared organs for the detection of forest fires while boas, pythons and pit vipers sense the infrared radiation emitted by prey animals. All cartilaginous and bony fishes as well as some amphibians have a mechanosensory lateral line. It is used for the detection of weak water motions and pressure gradients. For object detection and spatial orientation many species of nocturnal fish employ active electrolocation. This review describes certain aspects of the detection and processing of infrared, mechano- and electrosensory information. It will be shown that the study of these seemingly exotic sensory systems can lead to discoveries that are useful for the construction of technical sensors and artificial control systems.
MobileODT: a case study of a novel approach to an mHealth-based model of sustainable impact
Mink, Jonah
2016-01-01
A persistent challenge facing global health actors is ensuring that time-bound interventions are ultimately adopted and integrated into local health systems for long term health system strengthening and capacity building. This level of sustainability is rarely achieved with current models of global health intervention that rely on continuous injection of resources or persistent external presence on the ground. Presented here is a case study of a flipped approach to creating capacity and adoption through an engagement strategy centered around an innovative mHealth device and connected service. Through an impact-oriented business model, this mHealth solution engages stakeholders in a cohesive and interdependent network by appealing to the pain points for each actor throughout the health system. This particular intervention centers around the MobileODT, Inc. Enhanced Visual Assessment (EVA) System for enhanced visualization. While focused on challenges to cervical cancer screening and treatment services, the lessons learned are offered as a model for lateral translation into adjacent health condition verticals. PMID:28293590
The scope and control of attention as separate aspects of working memory.
Shipstead, Zach; Redick, Thomas S; Hicks, Kenny L; Engle, Randall W
2012-01-01
The present study examines two varieties of working memory (WM) capacity task: visual arrays (i.e., a measure of the amount of information that can be maintained in working memory) and complex span (i.e., a task that taps WM-related attentional control). Using previously collected data sets, we employ confirmatory factor analysis to demonstrate that visual arrays and complex span tasks load on separate, but correlated, factors. A subsequent series of structural equation models and regression analyses demonstrates that these factors contribute both common and unique variance to the prediction of general fluid intelligence (Gf). However, while visual arrays does contribute uniquely to higher cognition, its overall correlation with Gf is largely mediated by variance associated with the complex span factor. Thus we argue that visual arrays performance is not strictly driven by a limited-capacity storage system (e.g., the focus of attention; Cowan, 2001), but may also rely on control processes such as selective attention and controlled memory search.
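The common-versus-unique variance logic behind such regression analyses can be illustrated with nested-model R² comparisons on simulated scores. Everything below (the data-generating model, the variable names) is invented for illustration and is not the study's data.

```python
# Simulated illustration of unique vs. shared variance in predicting Gf
# from two correlated predictors.
import numpy as np

def r2(X, y):
    """R-squared of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1 - (y - X1 @ beta).var() / y.var()

rng = np.random.default_rng(0)
n = 500
common = rng.normal(size=n)                   # shared WM variance
visual_arrays = common + rng.normal(size=n)   # storage-related score
complex_span = common + rng.normal(size=n)    # control-related score
gf = common + 0.3 * complex_span + rng.normal(size=n)

r2_both = r2(np.column_stack([visual_arrays, complex_span]), gf)
unique_va = r2_both - r2(complex_span[:, None], gf)   # beyond span
unique_cs = r2_both - r2(visual_arrays[:, None], gf)  # beyond arrays
shared = r2_both - unique_va - unique_cs
print(round(unique_va, 3), round(unique_cs, 3), round(shared, 3))
```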
Visual landmark-directed scatter-hoarding of Siberian chipmunks Tamias sibiricus.
Zhang, Dongyuan; Li, Jia; Wang, Zhenyu; Yi, Xianfeng
2016-05-01
Spatial memory of cached food items plays an important role in cache recovery by scatter-hoarding animals. However, whether scatter-hoarding animals intentionally select cache sites with respect to visual landmarks in the environment and then rely on them to recover their cached seeds for later use has not been extensively explored. Furthermore, there is a lack of evidence on whether there are sex differences in visual landmark-based food-hoarding behaviors in small rodents, even though male and female animals exhibit different spatial abilities. In the present study, we used a scatter-hoarding animal, the Siberian chipmunk Tamias sibiricus, to explore these questions in semi-natural enclosures. Our results showed that T. sibiricus preferred to establish caches in shallow pits labeled with visual landmarks (branches of Pinus sylvestris, leaves of Athyrium brevifrons and PVC tubes). In addition, visual landmarks of P. sylvestris facilitated cache recovery by T. sibiricus. We also found significant sex differences in visual landmark-based food-hoarding strategies in Siberian chipmunks: male, rather than female, chipmunks tended to establish their caches with respect to the visual landmarks. Our studies show that T. sibiricus relies on visual landmarks to establish and recover its caches, and that sex differences exist in visual landmark-based food hoarding in Siberian chipmunks. © 2015 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
Universal brain systems for recognizing word shapes and handwriting gestures during reading
Nakamura, Kimihiro; Kuo, Wen-Jui; Pegado, Felipe; Cohen, Laurent; Tzeng, Ovid J. L.; Dehaene, Stanislas
2012-01-01
Do the neural circuits for reading vary across culture? Reading of visually complex writing systems such as Chinese has been proposed to rely on areas outside the classical left-hemisphere network for alphabetic reading. Here, however, we show that, once potential confounds in cross-cultural comparisons are controlled for by presenting handwritten stimuli to both Chinese and French readers, the underlying network for visual word recognition may be more universal than previously suspected. Using functional magnetic resonance imaging in a semantic task with words written in cursive font, we demonstrate that two universal circuits, a shape recognition system (reading by eye) and a gesture recognition system (reading by hand), are similarly activated and show identical patterns of activation and repetition priming in the two language groups. These activations cover most of the brain regions previously associated with culture-specific tuning. Our results point to an extended reading network that invariably comprises the occipitotemporal visual word-form system, which is sensitive to well-formed static letter strings, and a distinct left premotor region, Exner’s area, which is sensitive to the forward or backward direction with which cursive letters are dynamically presented. These findings suggest that cultural effects in reading merely modulate a fixed set of invariant macroscopic brain circuits, depending on surface features of orthographies. PMID:23184998
Montijn, Jorrit S; Goltstein, Pieter M; Pennartz, Cyriel MA
2015-01-01
Previous studies have demonstrated the importance of the primary sensory cortex for the detection, discrimination, and awareness of visual stimuli, but it is unknown how neuronal populations in this area process detected and undetected stimuli differently. Critical differences may reside in the mean strength of responses to visual stimuli, as reflected in bulk signals detectable in functional magnetic resonance imaging, electro-encephalogram, or magnetoencephalography studies, or may be more subtly composed of differentiated activity of individual sensory neurons. Quantifying single-cell Ca2+ responses to visual stimuli recorded with in vivo two-photon imaging, we found that visual detection correlates more strongly with population response heterogeneity rather than overall response strength. Moreover, neuronal populations showed consistencies in activation patterns across temporally spaced trials in association with hit responses, but not during nondetections. Contrary to models relying on temporally stable networks or bulk signaling, these results suggest that detection depends on transient differentiation in neuronal activity within cortical populations. DOI: http://dx.doi.org/10.7554/eLife.10163.001 PMID:26646184
Neural pathways for visual speech perception
Bernstein, Lynne E.; Liebenthal, Einat
2014-01-01
This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611
Feed-forward segmentation of figure-ground and assignment of border-ownership.
Supèr, Hans; Romeo, August; Keil, Matthias
2010-05-19
Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections between neighboring neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. By contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.
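A heavily simplified, rate-based sketch of the surround-inhibition mechanism is given below: each unit is suppressed by the mean activity of its neighborhood, so homogeneous background regions are silenced while locally distinct figure regions survive. The original model uses spiking neurons; the box-shaped surround and the gain w here are assumptions for illustration.

```python
# Toy rate-based surround inhibition for figure-ground segmentation.
import numpy as np

def surround_inhibition(activity, k=2, w=1.0):
    """Each unit is suppressed by the mean activity in a (2k+1)^2
    neighborhood; rectification keeps only positive responses."""
    padded = np.pad(activity, k, mode="edge")
    out = np.empty_like(activity)
    h, wdt = activity.shape
    for i in range(h):
        for j in range(wdt):
            surround = padded[i:i + 2 * k + 1, j:j + 2 * k + 1].mean()
            out[i, j] = max(activity[i, j] - w * surround, 0.0)
    return out

# A small "figure" patch on a uniform background keeps activity after
# suppression, while the homogeneous background is silenced.
scene = np.zeros((20, 20))
scene[8:12, 8:12] = 1.0
print(surround_inhibition(scene).max() > 0)   # figure survives
```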
The Identity-Location Binding Problem.
Howe, Piers D L; Ferguson, Adam
2015-09-01
The binding problem is fundamental to visual perception. It is the problem of associating an object's visual properties with itself and not with some other object. The problem is made particularly difficult because different properties of an object, such as its color, shape, size, and motion, are often processed independently, sometimes in different cortical areas. The results of these separate analyses have to be combined before the object can be seen as a single coherent entity rather than a collection of unconnected features. Visual bindings are typically initiated and updated in a serial fashion, one object at a time. Here, we show that one type of binding, the location-identity binding, can be updated in parallel. We do this by using two complementary techniques, the simultaneous-sequential paradigm and systems factorial technology. These techniques make different assumptions and rely on different behavioral measures, yet both came to the same conclusion. Copyright © 2014 Cognitive Science Society, Inc.
A visual short-term memory advantage for objects of expertise
Curby, Kim M.; Glazek, Kuba; Gauthier, Isabel
2014-01-01
Visual short-term memory (VSTM) is limited, especially for complex objects. Its capacity, however, is greater for faces than for other objects, an advantage that may stem from the holistic nature of face processing. If holistic processing explains this advantage, then object expertise—which also relies on holistic processing—should endow experts with a VSTM advantage. We compared VSTM for cars among car experts to that among car novices. Car experts, but not car novices, demonstrated a VSTM advantage similar to that for faces; this advantage was orientation-specific and was correlated with an individual's level of car expertise. Control experiments ruled out accounts based solely on verbal or long-term memory representations. These findings suggest that the processing advantages afforded by visual expertise result in domain-specific increases in VSTM capacity, perhaps by allowing experts to maximize the use of an inherently limited VSTM system. PMID:19170473
Identifying a "default" visual search mode with operant conditioning.
Kawahara, Jun-ichiro
2010-09-01
The presence of a singleton in a task-irrelevant domain can impair visual search. This impairment, known as attentional capture, depends on the attentional set of the participants. When narrowly searching for a specific feature (the feature search mode), only matching stimuli capture attention. When searching broadly (the singleton detection mode), any oddball captures attention. The present study examined which strategy represents the "default" mode using an operant conditioning approach in which participants were trained, in the absence of explicit instructions, to search for a target in an ambiguous context in which either of the two modes could be used. The results revealed that participants behaviorally adopted singleton detection as the default mode but reported using the feature search mode. Conscious strategies did not eliminate capture. These results challenge the view that a conscious set always modulates capture, suggesting that the visual system tends to rely on stimulus salience to deploy attention.
Feed-Forward Segmentation of Figure-Ground and Assignment of Border-Ownership
Supèr, Hans; Romeo, August; Keil, Matthias
2010-01-01
Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections between neighboring neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. By contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment. PMID:20502718
Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer.
Ashtiani, Matin N; Kheradpisheh, Saeed R; Masquelier, Timothée; Ganjtabesh, Mohammad
2017-01-01
The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., the superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over the other two. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass-filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects at each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that different spatial frequency information had different effects on object categorization at each level. In the absence of high-frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low-frequency information is sufficient for the superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high-frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid a ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level over the basic (resp. subordinate) level is mainly due to computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization at finer levels depends more on these higher spatial frequencies).
Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer
Ashtiani, Matin N.; Kheradpisheh, Saeed R.; Masquelier, Timothée; Ganjtabesh, Mohammad
2017-01-01
The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., the superordinate, basic, and subordinate levels. One important question is to identify the “entry” level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over the other two. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass-filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects at each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that different spatial frequency information had different effects on object categorization at each level. In the absence of high-frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low-frequency information is sufficient for the superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high-frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid a ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level over the basic (resp. subordinate) level is mainly due to computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization at finer levels depends more on these higher spatial frequencies). PMID:28790954
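The bandpass manipulation at the heart of these experiments can be sketched with a 2-D FFT: keep only the spatial-frequency components inside a given radial band. The cutoff values below are illustrative assumptions, not the bands used in the paper.

```python
# Radial bandpass filtering of an image in the Fourier domain.
import numpy as np

def bandpass(img, low, high):
    """Zero out all spatial-frequency components outside [low, high]
    cycles per image; returns the filtered image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    radius = np.hypot(yy, xx)
    mask = (radius >= low) & (radius <= high)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

img = np.random.rand(128, 128)
low_sf = bandpass(img, 0, 8)      # coarse structure (superordinate cue)
high_sf = bandpass(img, 32, 64)   # fine detail (basic/subordinate cue)
```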
Chen, Hui-Ya; Chang, Hsiao-Yun; Ju, Yan-Ying; Tsao, Hung-Ting
2017-06-01
Rhythmic gymnasts specialise in dynamic balance under sensory conditions involving numerous somatosensory, visual, and vestibular stimulations. This study investigated whether adolescent rhythmic gymnasts are superior to their peers in Sensory Organisation Test (SOT) performance, which quantifies the ability to maintain standing balance under six sensory conditions, and explored whether they plateaued faster during familiarisation with the SOT. Three and six sessions of SOTs were administered to 15 female rhythmic gymnasts (15.0 ± 1.8 years) and matched peers (15.1 ± 2.1 years), respectively. The gymnasts were superior to their peers in terms of fitness measures, and their performance was better on the SOT equilibrium score when visual information was unreliable. SOT learning effects were shown in the more challenging sensory conditions between Sessions 1 and 2 and were equivalent in both groups; however, over time, the gymnasts showed a marginally significant advantage in visual ability and learned to rely less on visual input when it was unreliable. In conclusion, adolescent rhythmic gymnasts have generally the same sensory organisation ability and learning rates as their peers. However, when visual information is unreliable, they have superior sensory organisation ability and learn faster to rely less on visual input.
Peripheral Processing Facilitates Optic Flow-Based Depth Perception
Li, Jinglin; Lindemann, Jens P.; Egelhaaf, Martin
2016-01-01
Flying insects, such as flies or bees, rely on consistent information regarding the depth structure of the environment when performing their flight maneuvers in cluttered natural environments. These behaviors include avoiding collisions, approaching targets, and navigating through space. Insects are thought to obtain depth information visually from the retinal image displacements (“optic flow”) during translational ego-motion. Optic flow in the insect visual system is processed by a mechanism that can be modeled by correlation-type elementary motion detectors (EMDs). However, it is still an open question how spatial information can be extracted reliably from the highly contrast- and pattern-dependent EMD responses, especially if the vast range of light intensities encountered in natural environments is taken into account. This question is addressed here by systematically modeling the peripheral visual system of flies, including various adaptive mechanisms. Different model variants of the peripheral visual system were stimulated with image sequences that mimic the panoramic visual input during translational ego-motion in various natural environments, and the resulting peripheral signals were fed into an array of EMDs. We characterized the influence of each peripheral computational unit on the representation of spatial information in the EMD responses. Our model simulations reveal that information about the overall light level needs to be eliminated from the EMD input, as is accomplished under light-adapted conditions in the insect peripheral visual system. The response characteristics of large monopolar cells (LMCs) resemble those of a band-pass filter, which strongly reduces the contrast dependency of EMDs, effectively enhancing the representation of the nearness of objects and, especially, of their contours. We furthermore show that local brightness adaptation of photoreceptors allows for spatial vision under a wide range of dynamic light conditions. PMID:27818631
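Two of the peripheral stages the model highlights can be caricatured in a few lines: divisive adaptation to the running mean luminance (photoreceptor-like), followed by a temporal band-pass filter built from the difference of two low-pass filters (LMC-like). All time constants here are illustrative assumptions, not the fly parameters used in the study.

```python
# Toy peripheral preprocessing: luminance adaptation + temporal band-pass.
import numpy as np

def adapt_and_bandpass(signal, dt=0.001, tau_adapt=0.5,
                       tau_fast=0.005, tau_slow=0.05):
    mean_lp = fast = slow = 0.0
    out = np.empty_like(signal)
    for t, s in enumerate(signal):
        mean_lp += (dt / tau_adapt) * (s - mean_lp)   # running luminance
        v = s / (mean_lp + 1e-6)                      # divisive adaptation
        fast += (dt / tau_fast) * (v - fast)
        slow += (dt / tau_slow) * (v - slow)
        out[t] = fast - slow                          # band-pass output
    return out

# A sustained step in overall light level decays from the output, while
# brief contrast transients pass through -- the property that makes
# downstream EMD responses less dependent on absolute brightness.
t = np.arange(0, 2, 0.001)
stimulus = 1.0 + 9.0 * (t > 1.0)                      # 10x light step
response = adapt_and_bandpass(stimulus)
```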
Experimental Test of Spatial Updating Models for Monkey Eye-Head Gaze Shifts
Van Grootel, Tom J.; Van der Willigen, Robert F.; Van Opstal, A. John
2012-01-01
How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static), or during (dynamic) the first gaze displacement. In the dynamic case the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which provides serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating in which visuomotor control incorporates accurate signals about instantaneous eye- and head positions rather than relative eye- and head displacements. PMID:23118883
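The three updating schemes the experiment dissociates can be contrasted with a one-dimensional toy calculation: only subtracting the gaze displacement that actually occurred after the flash yields the correct second gaze shift. All numbers below are invented for illustration.

```python
# Toy 1-D comparison of spatial-updating schemes for a mid-shift flash.
target_world = 30.0      # deg: flashed target, fixed in space
gaze_at_flash = 12.0     # deg: gaze position when the flash occurs
gaze_after_shift = 20.0  # deg: actual end point of the first gaze shift
planned_shift = 20.0     # deg: full planned first gaze displacement

retinal_error = target_world - gaze_at_flash   # 18 deg at flash time

# 1) Retinal-only programming: ignores the intervening gaze shift.
retinal_only = retinal_error                            # misses by 8 deg
# 2) Remapping by the *planned* displacement: overcorrects, because part
#    of the plan had already been executed when the flash occurred.
planned_update = retinal_error - planned_shift          # misses by 12 deg
# 3) Feedback of the actual gaze displacement since the flash: accurate.
feedback_update = retinal_error - (gaze_after_shift - gaze_at_flash)

print(retinal_only, planned_update, feedback_update)
print("required second gaze shift:", target_world - gaze_after_shift)
```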
System identification and sensorimotor determinants of flight maneuvers in an insect
NASA Astrophysics Data System (ADS)
Sponberg, Simon; Hall, Robert; Roth, Eatai
Locomotor maneuvers are inherently closed-loop processes. They are generally characterized by the integration of multiple sensory inputs and by adaptation or learning over time. To probe sensorimotor processing, we take a system identification approach, treating the underlying physiological systems as dynamic processes and altering the feedback topology in experiment and analysis. As a model system, we use agile hawk moths (Manduca sexta), which feed from real and robotic flowers while hovering in mid air. Moths rely on vision and mechanosensation to track floral targets and can do so at exceptionally low luminance levels, despite hovering being a mechanically unstable behavior that requires neural feedback to stabilize. By altering the sensory environment and placing mechanical and visual signals in conflict, we show that a surprisingly simple linear summation of visual and mechanosensory inputs produces a generative prediction of behavior to novel stimuli. Tracking performance is also limited more by the mechanics of flight than by the magnitude of the sensory cue. A feedback-systems approach to locomotor control yields new insights into how behavior emerges from the interaction of nonlinear physiological systems.
Lightweight genome viewer: portable software for browsing genomics data in its chromosomal context
Faith, Jeremiah J; Olson, Andrew J; Gardner, Timothy S; Sachidanandam, Ravi
2007-01-01
Background Lightweight genome viewer (lwgv) is a web-based tool for visualization of sequence annotations in their chromosomal context. It performs most of the functions of larger genome browsers, while relying on standard flat-file formats and bypassing the database needs of most visualization tools. Visualization as an aid to discovery requires display of novel data in conjunction with static annotations in their chromosomal context. With database-based systems, displaying dynamic results requires temporary tables that need to be tracked for removal. Results lwgv simplifies the visualization of user-generated results on a local computer. The dynamic results of these analyses are written to transient files, which can import static content from a more permanent file. lwgv is currently used in many different applications, from whole-genome browsers to single-gene RNAi design visualization, demonstrating its applicability in a large variety of contexts and scales. Conclusion lwgv provides a lightweight alternative to large genome browsers for visualizing biological annotations and dynamic analyses in their chromosomal context. It is particularly suited for applications ranging from short sequences to medium-sized genomes when the creation and maintenance of a large software and database infrastructure is not necessary or desired. PMID:17877794
Lightweight genome viewer: portable software for browsing genomics data in its chromosomal context.
Faith, Jeremiah J; Olson, Andrew J; Gardner, Timothy S; Sachidanandam, Ravi
2007-09-18
Lightweight genome viewer (lwgv) is a web-based tool for visualization of sequence annotations in their chromosomal context. It performs most of the functions of larger genome browsers, while relying on standard flat-file formats and bypassing the database needs of most visualization tools. Visualization as an aid to discovery requires display of novel data in conjunction with static annotations in their chromosomal context. With database-based systems, displaying dynamic results requires temporary tables that need to be tracked for removal. lwgv simplifies the visualization of user-generated results on a local computer. The dynamic results of these analyses are written to transient files, which can import static content from a more permanent file. lwgv is currently used in many different applications, from whole-genome browsers to single-gene RNAi design visualization, demonstrating its applicability in a large variety of contexts and scales. lwgv provides a lightweight alternative to large genome browsers for visualizing biological annotations and dynamic analyses in their chromosomal context. It is particularly suited for applications ranging from short sequences to medium-sized genomes when the creation and maintenance of a large software and database infrastructure is not necessary or desired.
A unified dynamic neural field model of goal directed eye movements
NASA Astrophysics Data System (ADS)
Quinton, J. C.; Goffart, L.
2018-01-01
Primates rely heavily on their visual system, which exploits signals of graded precision depending on the eccentricity of the target in the visual field. Interactions with the environment involve actively selecting and focusing on visual targets or regions of interest, instead of contemplating an omnidirectional visual flow. Eye movements specifically allow foveating targets and tracking their motion. Once a target is brought within the central visual field, eye movements are usually classified into catch-up saccades (jumping from one orientation or fixation to another) and smooth pursuit (continuously tracking a target at low velocity). Building on existing dynamic neural field equations, we introduce a novel model that incorporates internal projections to better estimate the current target location (associated with a peak of activity). This estimate is then used to trigger an eye movement, leading to qualitatively different behaviours depending on the dynamics of the whole oculomotor system: (1) fixational eye movements due to small variations in the weights of projections when the target is stationary; (2) interceptive and catch-up saccades when peaks build and relax on the neural field; (3) smooth pursuit when the peak stabilises near the centre of the field, the system reaching a fixed-point attractor. Learning is nevertheless required for tracking a rapidly moving target, and the proposed model thus replicates recent results in the monkey, in which repeated exercise permits the maintenance of the target within the central visual field at its current (here-and-now) location, despite the delays involved in transmitting retinal signals to the oculomotor neurons.
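An Amari-style field equation of the kind such models build on can be integrated numerically in a few lines: activity u(x, t) relaxes toward a resting level plus input plus lateral interaction through a Mexican-hat kernel, and a self-stabilized peak tracks the target. The kernel shape, gains, and target trajectory below are assumptions for illustration, not the paper's parameters.

```python
# Minimal numerical integration of an Amari-style dynamic neural field.
import numpy as np

def simulate_field(n=101, steps=400, dt=0.05, tau=1.0, h=-0.5):
    x = np.linspace(-1, 1, n)
    dx = x[1] - x[0]
    # Mexican-hat interaction: local excitation, broader inhibition
    d = x[:, None] - x[None, :]
    kernel = 1.5 * np.exp(-d**2 / 0.02) - 0.5 * np.exp(-d**2 / 0.2)
    u = np.zeros(n)
    for step in range(steps):
        target = -0.5 + 0.002 * step              # slowly moving target
        inp = 2.0 * np.exp(-(x - target)**2 / 0.01)
        fu = 1.0 / (1.0 + np.exp(-10 * u))        # sigmoid firing rate
        u += (dt / tau) * (-u + h + inp + dx * (kernel @ fu))
        # the peak location is the field's current target-location estimate
    return x, u

x, u = simulate_field()
print(x[np.argmax(u)])   # near the target's final position (~0.3)
```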
Hoffmann, Susanne; Vega-Zuniga, Tomas; Greiter, Wolfgang; Krabichler, Quirin; Bley, Alexandra; Matthes, Mariana; Zimmer, Christiane; Firzlaff, Uwe; Luksch, Harald
2016-11-01
The midbrain superior colliculus (SC) commonly features a retinotopic representation of visual space in its superficial layers, which is congruent with maps formed by multisensory neurons and motor neurons in its deep layers. Information flow between layers is suggested to enable the SC to mediate goal-directed orienting movements. While most mammals strongly rely on vision for orienting, some species such as echolocating bats have developed alternative strategies, which raises the question how sensory maps are organized in these animals. We probed the visual system of the echolocating bat Phyllostomus discolor and found that binocular high acuity vision is frontally oriented and thus aligned with the biosonar system, whereas monocular visual fields cover a large area of peripheral space. For the first time in echolocating bats, we could show that in contrast with other mammals, visual processing is restricted to the superficial layers of the SC. The topographic representation of visual space, however, followed the general mammalian pattern. In addition, we found a clear topographic representation of sound azimuth in the deeper collicular layers, which was congruent with the superficial visual space map and with a previously documented map of orienting movements. Especially for bats navigating at high speed in densely structured environments, it is vitally important to transfer and coordinate spatial information between sensors and motor systems. Here, we demonstrate first evidence for the existence of congruent maps of sensory space in the bat SC that might serve to generate a unified representation of the environment to guide motor actions. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Schwarz, Sebastian; Albert, Laurence; Wystrach, Antoine; Cheng, Ken
2011-03-15
Many animal species, including some social hymenoptera, use the visual system for navigation. Although the insect compound eyes have been well studied, less is known about the second visual system in some insects, the ocelli. Here we demonstrate navigational functions of the ocelli in the visually guided Australian desert ant Melophorus bagoti. These ants are known to rely on both visual landmark learning and path integration. We conducted experiments to reveal the role of ocelli in the perception and use of celestial compass information and landmark guidance. Ants with directional information from their path integration system were tested with covered compound eyes and open ocelli on an unfamiliar test field where only celestial compass cues were available for homing. These full-vector ants, using only their ocelli for visual information, oriented significantly towards the fictive nest on the test field, indicating the use of celestial compass information that is presumably based on polarised skylight, the sun's position or the colour gradient of the sky. Ants without any directional information from their path-integration system (zero-vector) were tested, also with covered compound eyes and open ocelli, on a familiar training field where they have to use the surrounding panorama to home. These ants failed to orient significantly in the homeward direction. Together, our results demonstrated that M. bagoti could perceive and process celestial compass information for directional orientation with their ocelli. In contrast, the ocelli do not seem to contribute to terrestrial landmark-based navigation in M. bagoti.
Lobjois, Régis; Dagonneau, Virginie; Isableu, Brice
2016-11-01
Compared with driving or flight simulation, little is known about self-motion perception in riding simulation. The goal of this study was to examine whether or not continuous roll motion supports the sensation of leaning into bends in dynamic motorcycle simulation. To this end, riders were able to freely tune the visual-scene and/or motorcycle-simulator roll angle to find a pattern that matched their prior knowledge. Our results revealed idiosyncrasy in the combination of visual and proprioceptive information. Some subjects relied more on the visual dimension, but reported increased sickness symptoms with the visual roll angle. Others relied more on proprioceptive information, tuning the direction of the visual scenery to match three possible patterns. Our findings also showed that these two subgroups tuned the motorcycle-simulator roll angle in a similar way. This suggests that sustained, inertially specified roll motion has contributed to the sensation of leaning in spite of the occurrence of unexpected gravito-inertial stimulation during the tilt. Several hypotheses are discussed. Practitioner Summary: Self-motion perception in motorcycle simulation is a relatively new research area. We examined how participants combined visual and proprioceptive information. Findings revealed individual differences in the visual dimension. However, participants tuned the simulator roll angle similarly, supporting the hypothesis that sustained, inertially specified roll motion contributes to a leaning sensation.
Kreeft, Davey; Arkenbout, Ewout Aart; Henselmans, Paulus Wilhelmus Johannes; van Furth, Wouter R.; Breedveld, Paul
2017-01-01
A clear visualization of the operative field is of critical importance in endoscopic surgery. During surgery the endoscope lens can get fouled by body fluids (eg, blood), ground substance, rinsing fluid, bone dust, or smoke plumes, resulting in visual impairment. As a result, surgeons spend part of the procedure on intermittent cleaning of the endoscope lens. Current cleaning methods that rely on manual wiping or a lens irrigation system are still far from ideal, leading to longer procedure times, dirtying of the surgical site, and reduced visual acuity, potentially reducing patient safety. With the goal of finding a solution to these issues, a literature review was conducted to identify and categorize existing techniques capable of achieving optically clean surfaces, and to show which techniques can potentially be implemented in surgical practice. The review found that the most promising method for achieving surface cleanliness consists of a hybrid solution, namely, that of a hydrophilic or hydrophobic coating on the endoscope lens and the use of the existing lens irrigation system. PMID:28511635
Intrusive Images in Psychological Disorders
Brewin, Chris R.; Gregory, James D.; Lipton, Michelle; Burgess, Neil
2010-01-01
Involuntary images and visual memories are prominent in many types of psychopathology. Patients with posttraumatic stress disorder, other anxiety disorders, depression, eating disorders, and psychosis frequently report repeated visual intrusions corresponding to a small number of real or imaginary events, usually extremely vivid, detailed, and with highly distressing content. Both memory and imagery appear to rely on common networks involving medial prefrontal regions, posterior regions in the medial and lateral parietal cortices, the lateral temporal cortex, and the medial temporal lobe. Evidence from cognitive psychology and neuroscience implies distinct neural bases to abstract, flexible, contextualized representations (C-reps) and to inflexible, sensory-bound representations (S-reps). We revise our previous dual representation theory of posttraumatic stress disorder to place it within a neural systems model of healthy memory and imagery. The revised model is used to explain how the different types of distressing visual intrusions associated with clinical disorders arise, in terms of the need for correct interaction between the neural systems supporting S-reps and C-reps via visuospatial working memory. Finally, we discuss the treatment implications of the new model and relate it to existing forms of psychological therapy. PMID:20063969
Human vision is attuned to the diffuseness of natural light
Morgenstern, Yaniv; Geisler, Wilson S.; Murray, Richard F.
2014-01-01
All images are highly ambiguous, and to perceive 3-D scenes, the human visual system relies on assumptions about what lighting conditions are most probable. Here we show that human observers' assumptions about lighting diffuseness are well matched to the diffuseness of lighting in real-world scenes. We use a novel multidirectional photometer to measure lighting in hundreds of environments, and we find that the diffuseness of natural lighting falls in the same range as previous psychophysical estimates of the visual system's assumptions about diffuseness. We also find that natural lighting is typically directional enough to override human observers' assumption that light comes from above. Furthermore, we find that, although human performance on some tasks is worse in diffuse light, this can be largely accounted for by intrinsic task difficulty. These findings suggest that human vision is attuned to the diffuseness levels of natural lighting conditions. PMID:25139864
Electrophysiological evidence for Audio-visuo-lingual speech integration.
Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc
2018-01-31
Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual (lipread) speech cues relies on their joint probability and on prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences in audio-visual speech integration between the unusual audio-visuo-lingual and the classical audio-visuo-labial modality. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation showing either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker; lingual and facial movements had previously been recorded by an ultrasound imaging system and a video camera, respectively. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both the audio-visuo-lingual and the audio-visuo-labial condition compared to the sum of the unimodal conditions. These results argue against the view that auditory and visual speech cues integrate solely on the basis of prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Wilhelm, Lance
2005-01-01
The use of images is becoming more pervasive in modern culture, and schools must adapt their curricula and instructional practices accordingly. Visual literacy is becoming more important from a curricular standpoint as society relies to a greater degree on images and visual communication strategies. Thus, in order for students to be marketable in…
The Development of Individuation in Autism
ERIC Educational Resources Information Center
O'Hearn, Kirsten; Franconeri, Steven; Wright, Catherine; Minshew, Nancy; Luna, Beatriz
2013-01-01
Evidence suggests that people with autism rely less on holistic visual information than typical adults. The current studies examine this by investigating core visual processes that contribute to holistic processing--namely, individuation and element grouping--and how they develop in participants with autism and typically developing (TD)…
NASA Astrophysics Data System (ADS)
Lipsa, D.; Chaudhary, A.; Williams, D. N.; Doutriaux, C.; Jhaveri, S.
2017-12-01
Climate Data Analysis Tools (UV-CDAT, https://uvcdat.llnl.gov) is a data analysis and visualization software package developed at Lawrence Livermore National Laboratory and designed for climate scientists. Core components of UV-CDAT include: 1) the Community Data Management System (CDMS), which provides I/O support and a data model for climate data; 2) CDAT Utilities (GenUtil), which processes data using spatial and temporal averaging and statistical functions; and 3) the Visualization Control System (VCS) for interactive visualization of the data. VCS is a Python visualization package built primarily for climate scientists; however, because of its generality and breadth of functionality, it can be a useful tool for other scientific applications. VCS provides 1D, 2D, and 3D visualization functions: scatter plots and line graphs for 1D data; boxfill, meshfill, isofill, and isoline for 2D scalar data; vector glyphs and streamlines for 2D vector data; and 3d_scalar and 3d_vector for 3D data. Specifically for climate data, the plotting routines include projections, Skew-T plots, and Taylor diagrams. While VCS has always provided a user-friendly API, its previous implementation relied on a slow vector-graphics (Cairo) backend suitable only for smaller datasets and non-interactive graphics. The LLNL and Kitware teams have added a new backend to VCS that uses the Visualization Toolkit (VTK), one of the most popular open-source, multi-platform scientific visualization libraries, written in C++. Its use of OpenGL and a pipeline-processing architecture results in a highly performant VCS library, and its multitude of supported data formats and visualization algorithms makes it easy to adopt new visualization methods and data formats in VCS. In this presentation, we describe recent contributions to VCS, including new visualization plots, continuous integration testing using Conda and CircleCI, and tutorials and examples using Jupyter notebooks, as well as planned upgrades that will improve its ease of use and reliability and extend its capabilities.
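As a rough illustration of the VCS workflow this abstract describes, a minimal sketch follows; it assumes the vcs and cdms2 packages from UV-CDAT are installed, and the file and variable names ("clt.nc", "clt") are illustrative placeholders rather than data shipped with the package:

    import cdms2   # UV-CDAT's Community Data Management System (I/O)
    import vcs     # Visualization Control System

    # Open a netCDF file and read a 2-D climate variable (names illustrative).
    f = cdms2.open("clt.nc")
    clt = f("clt")

    # Create a canvas and plot the field with the boxfill graphics method,
    # one of the 2-D scalar plot types listed above.
    canvas = vcs.init()
    boxfill = canvas.createboxfill()
    canvas.plot(clt, boxfill)
    canvas.png("clt_boxfill")  # write the rendered plot to a PNG file

The other graphics methods named above (meshfill, isofill, isoline, and so on) are used the same way, through the corresponding create* calls on the canvas.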
Eye evolution at high resolution: the neuron as a unit of homology.
Erclik, Ted; Hartenstein, Volker; McInnes, Roderick R; Lipshitz, Howard D
2009-08-01
Based on differences in morphology, photoreceptor-type usage and lens composition it has been proposed that complex eyes have evolved independently many times. The remarkable observation that different eye types rely on a conserved network of genes (including Pax6/eyeless) for their formation has led to the revised proposal that disparate complex eye types have evolved from a shared and simpler prototype. Did this ancestral eye already contain the neural circuitry required for image processing? And what were the evolutionary events that led to the formation of complex visual systems, such as those found in vertebrates and insects? The recent identification of unexpected cell-type homologies between neurons in the vertebrate and Drosophila visual systems has led to two proposed models for the evolution of complex visual systems from a simple prototype. The first, as an extension of the finding that the neurons of the vertebrate retina share homologies with both insect (rhabdomeric) and vertebrate (ciliary) photoreceptor cell types, suggests that the vertebrate retina is a composite structure, made up of neurons that have evolved from two spatially separate ancestral photoreceptor populations. The second model, based largely on the conserved role for the Vsx homeobox genes in photoreceptor-target neuron development, suggests that the last common ancestor of vertebrates and flies already possessed a relatively sophisticated visual system that contained a mixture of rhabdomeric and ciliary photoreceptors as well as their first- and second-order target neurons. The vertebrate retina and fly visual system would have subsequently evolved by elaborating on this ancestral neural circuit. Here we present evidence for these two cell-type homology-based models and discuss their implications.
NASA Astrophysics Data System (ADS)
Cheng, Maurice M. W.; Gilbert, John K.
2015-01-01
This study investigated students' interpretation of diagrams representing the human circulatory system. We conducted an interview study with three students aged 14-15 (Year 10) who were studying biology in a Hong Kong school. During the interviews, students were asked to interpret diagrams, and relationships between diagrams, that represented aspects of the circulatory system. All diagrams used in the interviews had been used by their teacher when teaching the topic. Students' interpretations were expressed through their verbal responses and their drawings. Dual coding theory was used to interpret students' responses. There was evidence that one student relied on verbal recall as a strategy for interpreting diagrams. It was found that students might have relied unduly on similarities in spatial features, rather than on the deeper meanings represented by the conventions of the diagrams, when they associated diagrams that represented different aspects of the circulatory system. A pattern in students' understanding of the structure-behaviour-function relationship of the biological system was observed. This study suggests the importance of consistent diagrammatic and verbal representations in communicating scientific ideas. Implications for teaching practices that facilitate learning with diagrams and address students' undue focus on the spatial features of diagrams are discussed.
High Accuracy Monocular SFM and Scale Correction for Autonomous Driving.
Song, Shiyu; Chandraker, Manmohan; Guest, Clark C
2016-04-01
We present a real-time monocular visual odometry system that achieves high accuracy in real-world autonomous driving applications. First, we demonstrate robust monocular SFM that exploits multithreading to handle driving scenes with large motions and rapidly changing imagery. To correct for scale drift, we use known height of the camera from the ground plane. Our second contribution is a novel data-driven mechanism for cue combination that allows highly accurate ground plane estimation by adapting observation covariances of multiple cues, such as sparse feature matching and dense inter-frame stereo, based on their relative confidences inferred from visual data on a per-frame basis. Finally, we demonstrate extensive benchmark performance and comparisons on the challenging KITTI dataset, achieving accuracy comparable to stereo and exceeding prior monocular systems. Our SFM system is optimized to output pose within 50 ms in the worst case, while average case operation is over 30 fps. Our framework also significantly boosts the accuracy of applications like object localization that rely on the ground plane.
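The paper's cue combination is more elaborate than this, but the core scale-correction idea (recovering metric scale from the camera's known height above the estimated ground plane) can be written out directly. A minimal sketch under that stated assumption; the names below are illustrative, not the authors' code:

    import numpy as np

    def correct_scale(t_est, ground_height_est, camera_height_true=1.7):
        """Rescale a monocular SFM translation to metric units.

        Monocular SFM recovers the translation t_est only up to scale.
        If the estimated height of the camera above the fitted ground
        plane is ground_height_est (in SFM units) and the true mounted
        height is known in meters, the metric scale is their ratio.
        """
        s = camera_height_true / ground_height_est
        return s * np.asarray(t_est)

    # Example: SFM places the camera 0.85 units above the road plane;
    # the camera is actually mounted 1.7 m high, so scale = 2.0.
    t_metric = correct_scale([0.1, 0.0, 0.4], ground_height_est=0.85)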
An Attractive Reelin Gradient Establishes Synaptic Lamination in the Vertebrate Visual System.
Di Donato, Vincenzo; De Santis, Flavia; Albadri, Shahad; Auer, Thomas Oliver; Duroure, Karine; Charpentier, Marine; Concordet, Jean-Paul; Gebhardt, Christoph; Del Bene, Filippo
2018-03-07
A conserved organizational and functional principle of neural networks is the segregation of axon-dendritic synaptic connections into laminae. Here we report that targeting of synaptic laminae by retinal ganglion cell (RGC) arbors in the vertebrate visual system is regulated by a signaling system relying on target-derived Reelin and VLDLR/Dab1a on the projecting neurons. Furthermore, we find that Reelin is distributed as a gradient on the target tissue and stabilized by heparan sulfate proteoglycans (HSPGs) in the extracellular matrix (ECM). Through genetic manipulations, we show that this Reelin gradient is important for laminar targeting and that it is attractive for RGC axons. Finally, we suggest a comprehensive model of synaptic lamina formation in which attractive Reelin counter-balances repulsive Slit1, thereby guiding RGC axons toward single synaptic laminae. We establish a mechanism that may represent a general principle for neural network assembly in vertebrate species and across different brain areas. Copyright © 2018 Elsevier Inc. All rights reserved.
[Retinal vasculitis and systemic diseases].
Gascon, P; Jarrot, P-A; Matonti, F; Kaplanski, G
2018-06-19
Retinal vasculitis (RV) is an inflammation of retinal blood vessels that can be associated with uveitis or be isolated, and can induce vascular occlusion and retinal ischemia. Visual acuity can be severely affected in case of macular involvement or neovessel formation. The diagnosis relies on fundoscopy and fluorescein angiography. Systemic diseases may be associated with RV, the most frequently encountered are Behçet's disease, sarcoidosis or multiple sclerosis, all predominantly associated with venous involvement, whereas systemic lupus erythematosus and necrotizing vasculitis are less frequently observed and predominantly associated with arterial or mixed vasculitis. Treatments are usually aggressive in order to preserve a good visual acuity and to reduce retinal inflammation and chronic ischemia. Steroids, immunosuppressive drugs, retinal laser photocoagulation, intravitreal anti-VEGF injections are usual treatments and more recently, anti-TNFalpha monoclonal therapeutic antibodies have been shown to be very successful. Copyright © 2018 Société Nationale Française de Médecine Interne (SNFMI). Published by Elsevier Masson SAS. All rights reserved.
Behavioural system identification of visual flight speed control in Drosophila melanogaster
Rohrseitz, Nicola; Fry, Steven N.
2011-01-01
Behavioural control in many animals involves complex mechanisms with intricate sensory-motor feedback loops. Modelling allows functional aspects to be captured without relying on a description of the underlying complex, and often unknown, mechanisms. A wide range of engineering techniques are available for modelling, but their ability to describe time-continuous processes is rarely exploited to describe sensory-motor control mechanisms in biological systems. We performed a system identification of visual flight speed control in the fruitfly Drosophila, based on an extensive dataset of open-loop responses previously measured under free flight conditions. We identified a second-order under-damped control model with just six free parameters that well describes both the transient and steady-state characteristics of the open-loop data. We then used the identified control model to predict flight speed responses after a visual perturbation under closed-loop conditions and validated the model with behavioural measurements performed in free-flying flies under the same closed-loop conditions. Our system identification of the fruitfly's flight speed response uncovers the high-level control strategy of a fundamental flight control reflex without depending on assumptions about the underlying physiological mechanisms. The results are relevant for future investigations of the underlying neuromotor processing mechanisms, as well as for the design of biomimetic robots, such as micro-air vehicles. PMID:20525744
The Role of the Oculomotor System in Updating Visual-Spatial Working Memory across Saccades
Boon, Paul J.; Belopolsky, Artem V.; Theeuwes, Jan
2016-01-01
Visual-spatial working memory (VSWM) helps us to maintain and manipulate visual information in the absence of sensory input. It has been proposed that VSWM is an emergent property of the oculomotor system. In the present study we investigated the role of the oculomotor system in the updating of spatial working memory representations across saccades. Participants had to maintain a location in memory while making a saccade to a different location. During the saccade the target was displaced, which went unnoticed by the participants. After executing the saccade, participants had to indicate the memorized location. If memory updating relies fully on cancellation driven by extraretinal oculomotor signals, the displacement should have no effect on the perceived location of the memorized stimulus. However, if postsaccadic retinal information about the location of the saccade target is used, the perceived location will be shifted according to the target displacement. As it has been suggested that maintenance of accurate spatial representations across saccades is especially important for action control, we used different ways of reporting the location held in memory: a match-to-sample task, a mouse click, or another saccade. The results showed a small systematic target-displacement bias in all response modalities. Parametric manipulation of the distance between the to-be-memorized stimulus and the saccade target revealed that the target-displacement bias increased over time and changed its spatial profile from being initially centered on locations around the saccade target to becoming spatially global. Taken together, these results suggest that we rely exclusively neither on extraretinal nor on retinal information in updating working memory representations across saccades. The relative contribution of retinal signals is not fixed but depends both on the time available to integrate these signals and on the distance between the saccade target and the remembered location. PMID:27631767
Lesion classification using clinical and visual data fusion by multiple kernel learning
NASA Astrophysics Data System (ADS)
Kisilev, Pavel; Hashoul, Sharbell; Walach, Eugene; Tzadok, Asaf
2014-03-01
To overcome operator dependency and to increase diagnostic accuracy in breast ultrasound (US), much effort has been devoted to developing computer-aided diagnosis (CAD) systems for breast cancer detection and classification. Unfortunately, the efficacy of such CAD systems is limited, since they rely on correct automatic lesion detection and localization and on the robustness of features computed from the detected areas. In this paper we propose a new approach to boost the performance of a machine-learning-based CAD system by combining visual and clinical data from patient files. We compute a set of visual features from breast ultrasound images and construct a textual descriptor of each patient by extracting relevant keywords from the patients' clinical data files. We then use the Multiple Kernel Learning (MKL) framework to train an SVM-based classifier to discriminate between benign and malignant cases. We investigate different types of data-fusion methods, namely early, late, and intermediate (MKL-based) fusion. Our database consists of 408 patient cases, each containing US images, a textual description of complaints and symptoms filled in by physicians, and a confirmed diagnosis. We show experimentally that the proposed MKL-based approach is superior to other classification methods. Even though the clinical data are very sparse and noisy, their MKL-based fusion with visual features yields a significant improvement in classification accuracy compared to a classifier based on image features alone.
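A minimal sketch of the kernel-fusion idea, assuming scikit-learn and substituting fixed kernel weights for the weights that MKL would learn jointly with the SVM; the feature arrays are synthetic stand-ins for the visual and textual descriptors:

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_visual = rng.normal(size=(100, 32))   # stand-in image features
    X_text = rng.random(size=(100, 200))    # stand-in sparse keyword features
    y = rng.integers(0, 2, size=100)        # benign/malignant labels

    # One kernel per modality, combined as a weighted sum. True MKL learns
    # these weights together with the SVM; here they are fixed for brevity.
    K = 0.6 * rbf_kernel(X_visual) + 0.4 * linear_kernel(X_text)

    clf = SVC(kernel="precomputed").fit(K, y)
    # Prediction on new cases would use the cross-kernel between the test
    # and training samples, combined with the same weights.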
Analysis of Rhythms in Experimental Signals
NASA Astrophysics Data System (ADS)
Desherevskii, A. V.; Zhuravlev, V. I.; Nikolsky, A. N.; Sidorin, A. Ya.
2017-12-01
We compare algorithms designed to extract quasiperiodic components of a signal and estimate the amplitude, phase, stability, and other characteristics of a rhythm in a sliding window in the presence of data gaps. Each algorithm relies on its own rhythm model; therefore, it is necessary to use different algorithms depending on the research objectives. The described set of algorithms and methods is implemented in the WinABD software package, which includes a time-series database management system, a powerful research complex, and an interactive data-visualization environment.
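WinABD's internal algorithms are not spelled out here, but the generic operation the abstract describes (estimating a rhythm's amplitude and phase in a sliding window while tolerating data gaps) can be sketched with least-squares sinusoid fitting over the non-missing samples; all names below are illustrative:

    import numpy as np

    def rhythm_in_window(t, x, period):
        """Least-squares amplitude and phase of one periodic component.

        t, x: 1-D NumPy arrays for one window; missing samples are NaN
        in x and are simply dropped, so the fit tolerates data gaps.
        """
        keep = ~np.isnan(x)
        t, x = t[keep], x[keep]
        w = 2 * np.pi / period
        # Fit x(t) ~ a*cos(wt) + b*sin(wt) + c by linear least squares.
        A = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
        (a, b, c), *_ = np.linalg.lstsq(A, x, rcond=None)
        return np.hypot(a, b), np.arctan2(b, a)  # amplitude, phase

    # Sliding this window along the series tracks the rhythm's amplitude,
    # phase, and stability over time, in the spirit described above.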
Application of visual cryptography for learning in optics and photonics
NASA Astrophysics Data System (ADS)
Mandal, Avikarsha; Wozniak, Peter; Vauderwange, Oliver; Curticapean, Dan
2016-09-01
In the age of data digitization, important applications of optics- and photonics-based sensors and technology lie in the fields of biometrics and image processing. Protecting user data in a safe and secure way is an essential task in this area. However, traditional cryptographic protocols rely heavily on computer-aided computation. Secure protocols that rely only on human interaction are usually simpler to understand, and in many scenarios the development of such protocols is also important for ease of implementation and deployment. Visual cryptography (VC) is an encryption technique for images (or text) in which decryption is done by the human visual system. In this technique, an image is encrypted into a number of pieces (known as shares). When the printed shares are physically superimposed, the image can be decrypted with human vision. Modern digital watermarking technologies can be combined with VC for image copyright protection, where the shares can be watermarks (small identifiers) embedded in the image. Similarly, VC can be used to improve the security of biometric authentication. This paper presents the design and implementation of a practical laboratory experiment based on the concept of VC for a course in media engineering. Specifically, our contribution deals with the integration of VC into different schemes for applications such as digital watermarking and biometric authentication in the field of optics and photonics. We describe the theoretical concepts and propose our infrastructure for the experiment. Finally, we evaluate the learning outcomes of the experiment as performed by the students.
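For concreteness, the share construction described above can be sketched with the classic (2, 2) scheme: each secret pixel expands to a pair of subpixels per share, white pixels receive identical patterns in both shares, black pixels complementary ones, so physically stacking the shares (logical OR of printed ink) reveals the image. A minimal sketch, not the course's actual implementation:

    import numpy as np

    def make_shares(secret):
        """(2,2) visual cryptography. secret: 2-D array of 0/1 (1 = black).
        Each pixel expands to two subpixels per share; 1 marks printed ink."""
        rng = np.random.default_rng()
        h, w = secret.shape
        s1 = np.zeros((h, 2 * w), dtype=int)
        s2 = np.zeros((h, 2 * w), dtype=int)
        for i in range(h):
            for j in range(w):
                pat = rng.integers(0, 2)      # random choice of subpixel pattern
                a = [pat, 1 - pat]            # either [1, 0] or [0, 1]
                s1[i, 2*j:2*j+2] = a
                # White pixel: same pattern in both shares (stack stays half-inked).
                # Black pixel: complementary pattern (stack is fully inked).
                s2[i, 2*j:2*j+2] = a if secret[i, j] == 0 else [1 - a[0], 1 - a[1]]
        return s1, s2

    s1, s2 = make_shares(np.array([[0, 1], [1, 0]]))
    stacked = s1 | s2   # physical superposition = OR of printed ink

Each share alone is uniformly random and reveals nothing; only the stack shows the secret, which is why decryption needs no computation at all.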
Nondestructive evaluation of incipient decay in hardwood logs
Xiping Wang; Jan Wiedenbeck; Robert J. Ross; John W. Forsman; John R. Erickson; Crystal Pilon; Brian K. Brashaw
2005-01-01
Decay can cause significant damage to high-value hardwood timber. New nondestructive evaluation (NDE) technologies are urgently needed to effectively detect incipient decay in hardwood timber at the earliest possible stage. Currently, the primary means of inspecting timber relies on visual assessment criteria. When visual inspections are used exclusively, they provide...
Extending Our Vision: Access to Inclusive Dance Education for People with Visual Impairment
ERIC Educational Resources Information Center
Seham, Jenny; Yeo, Anna J.
2015-01-01
Environmental, organizational and attitudinal obstacles continue to prevent people with vision loss from meaningfully engaging in dance education and performance. This article addresses the societal disabilities that handicap access to dance education for the blind. Although much of traditional dance instruction relies upon visual cuing and…
JVIEW Visualization for Virtual Airspace Modeling and Simulation
2009-04-01
…development has been done by Jason Moore and other AFRL/RISF staff and support personnel developing the JView API. JView relies on concrete Object Oriented Design…
Teacher Vision: Expert and Novice Teachers' Perception of Problematic Classroom Management Scenes
ERIC Educational Resources Information Center
Wolff, Charlotte E.; Jarodzka, Halszka; van den Bogert, Niek; Boshuizen, Henny P. A.
2016-01-01
Visual expertise has been explored in numerous professions, but research on teachers' vision remains limited. Teachers' visual expertise is an important professional skill, particularly the ability to simultaneously perceive and interpret classroom situations for effective classroom management. This skill is complex and relies on an awareness of…
Code of Federal Regulations, 2012 CFR
2012-04-01
... to the process under which such article was produced; (2) Drawings, photographs, or other visual..., photographs, or other visual representations, should be labeled so that they can be read in conjunction with... unenforceable, the basis for such assertion, including, when prior art is relied on, a showing of how the prior...
ERIC Educational Resources Information Center
Brandstetter, Miriam; Sandmann, Angela; Florian, Christine
2017-01-01
In classroom, scientific contents are increasingly communicated through visual forms of representations. Students' learning outcomes rely on their ability to read and understand pictorial information. Understanding pictorial information in biology requires cognitive effort and can be challenging to students. Yet evidence-based knowledge about…
Visual Literacy in Instructional Design Programs
ERIC Educational Resources Information Center
Ervine, Michelle D.
2016-01-01
In this technologically advanced environment, users have become highly visual, with television, videos, web sites and images dominating the learning environment. These new forms of searching and learning are changing the perspective of what it means to be literate. Literacy can no longer solely rely on text-based materials, but should also…
ERIC Educational Resources Information Center
Bahrick, Lorraine E.; Krogh-Jespersen, Sheila; Argumosa, Melissa A.; Lopez, Hassel
2014-01-01
Although infants and children show impressive face-processing skills, little research has focused on the conditions that facilitate versus impair face perception. According to the intersensory redundancy hypothesis (IRH), face discrimination, which relies on detection of visual featural information, should be impaired in the context of…
MBSE-Driven Visualization of Requirements Allocation and Traceability
NASA Technical Reports Server (NTRS)
Jackson, Maddalena; Wilkerson, Marcus
2016-01-01
In a Model Based Systems Engineering (MBSE) infusion effort, there is usually a concerted effort to define the information architecture, ontologies, and patterns that drive the construction and architecture of MBSE models, but less attention is given to the logical follow-on of that effort: how to practically leverage the resulting semantic richness of a well-formed, populated model to enable systems engineers to work more effectively, as MBSE promises. While ontologies and patterns are absolutely necessary, an MBSE effort must also design and provide practical demonstrations of value (through human-understandable representations of model data that address stakeholder concerns) or it will not succeed. This paper discusses the opportunities that exist for visualization in making the richness of a well-formed model accessible to stakeholders, specifically stakeholders who rely on the model for their day-to-day work. It discusses the value added by MBSE-driven visualizations in the context of a small case study of interactive visualizations created and used on NASA's proposed Europa Mission. The case study visualizations were created for the purpose of understanding and exploring targeted aspects of requirements flow and allocation, and of comparing the structure of that flow-down to a conceptual project decomposition. The work presented in this paper is an example of a product that leverages the richness and formalisms of our knowledge representation while also responding to the quality attributes systems engineers care about.
Visual tracking for multi-modality computer-assisted image guidance
NASA Astrophysics Data System (ADS)
Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp
2017-03-01
With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.
Harlow C. Landphair
1979-01-01
This paper relates the evolution of an empirical model used to predict public response to scenic quality objectively. The text relates the methods used to develop the visual quality index model, explains the terms used in the equation and briefly illustrates how the model is applied and how it is tested. While the technical application of the model relies heavily on...
Model of rhythmic ball bouncing using a visually controlled neural oscillator.
Avrin, Guillaume; Siegler, Isabelle A; Makarov, Maria; Rodriguez-Ayerbe, Pedro
2017-10-01
The present paper investigates the sensory-driven modulations of central pattern generator dynamics that can be expected to reproduce human behavior during rhythmic hybrid tasks. We propose a theoretical model of human sensorimotor behavior able to account for the observed data from the ball-bouncing task. The novel control architecture is composed of a Matsuoka neural oscillator coupled with the environment through visual sensory feedback. The architecture's ability to reproduce human-like performance during the ball-bouncing task in the presence of perturbations is quantified by comparison of simulated and recorded trials. The results suggest that human visual control of the task is achieved online. The adaptive behavior is made possible by a parametric and state control of the limit cycle emerging from the interaction of the rhythmic pattern generator, the musculoskeletal system, and the environment. NEW & NOTEWORTHY The study demonstrates that a behavioral model based on a neural oscillator controlled by visual information is able to accurately reproduce human modulations in a motor action with respect to sensory information during the rhythmic ball-bouncing task. The model attractor dynamics emerging from the interaction between the neuromusculoskeletal system and the environment met task requirements, environmental constraints, and human behavioral choices without relying on movement planning and explicit internal models of the environment. Copyright © 2017 the American Physiological Society.
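The Matsuoka oscillator named above is a standard two-neuron network of mutually inhibiting units with self-adaptation. A minimal Euler-integration sketch follows; the parameter values are illustrative choices commonly used in the literature, and the paper's visual coupling is reduced here to a constant tonic drive:

    import numpy as np

    def matsuoka(u=1.0, dt=0.001, steps=5000,
                 tau=0.025, T=0.3, beta=2.5, w=2.5):
        """Two mutually inhibiting Matsuoka neurons with fatigue states.
        u is the tonic drive; in the paper's architecture the oscillator
        is additionally modulated online by visual feedback (omitted)."""
        x = np.array([0.1, 0.0])  # membrane potentials (asymmetric start)
        v = np.zeros(2)           # adaptation (fatigue) states
        out = []
        for _ in range(steps):
            y = np.maximum(x, 0.0)                        # rectified firing rates
            dx = (-x - beta * v - w * y[::-1] + u) / tau  # mutual inhibition
            dv = (-v + y) / T                             # slow self-adaptation
            x = x + dt * dx
            v = v + dt * dv
            out.append(y[0] - y[1])  # alternating output drives the effector
        return np.array(out)

The alternation between the two units produces the limit-cycle rhythm; in the paper's model, sensory coupling entrains this cycle to the ball's motion.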
Chanel, Laure-Anais; Nageotte, Florent; Vappou, Jonathan; Luo, Jianwen; Cuvillon, Loic; de Mathelin, Michel
2015-01-01
High Intensity Focused Ultrasound (HIFU) therapy is a very promising method for the ablation of solid tumors. However, intra-abdominal organ motion, principally due to breathing, is a substantial limitation that results in incorrect tumor targeting. The objective of this work is to develop an all-in-one robotized HIFU system that can compensate for motion in real time during HIFU treatment. To this end, an ultrasound visual servoing scheme working at 20 Hz was designed. It relies on motion estimation using a fast ultrasonic speckle-tracking algorithm and on the use of an interleaved imaging/HIFU sonication sequence to avoid ultrasonic wave interference. The robotized HIFU system was tested on a sample of chicken breast undergoing a vertical sinusoidal motion at 0.25 Hz. Sonications with and without motion compensation were performed in order to assess the effect of motion compensation on the thermal lesions induced by HIFU. Motion was reduced by more than 80% thanks to this ultrasonic visual servoing system.
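At its core, the speckle-tracking step that feeds such a servoing loop is block matching between successive ultrasound frames. A minimal normalized cross-correlation sketch (the fast algorithm used in the paper is more sophisticated, and all names below are illustrative):

    import numpy as np

    def track_block(prev, curr, top, left, size=32, search=8):
        """Estimate the displacement of one speckle block between frames
        by exhaustive normalized cross-correlation over a small search
        area. Assumes the block plus search margin lies inside the frame."""
        ref = prev[top:top+size, left:left+size].astype(float)
        ref = (ref - ref.mean()) / (ref.std() + 1e-9)
        best, best_dy, best_dx = -np.inf, 0, 0
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = curr[top+dy:top+dy+size, left+dx:left+dx+size].astype(float)
                cand = (cand - cand.mean()) / (cand.std() + 1e-9)
                score = (ref * cand).mean()   # normalized cross-correlation
                if score > best:
                    best, best_dy, best_dx = score, dy, dx
        return best_dy, best_dx  # displacement fed to the servo controller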
The role of attention in figure-ground segregation in areas V1 and V4 of the visual cortex.
Poort, Jasper; Raudies, Florian; Wannig, Aurel; Lamme, Victor A F; Neumann, Heiko; Roelfsema, Pieter R
2012-07-12
Our visual system segments images into objects and background. Figure-ground segregation relies on the detection of feature discontinuities that signal boundaries between the figures and the background and on a complementary region-filling process that groups together image regions with similar features. The neuronal mechanisms for these processes are not well understood and it is unknown how they depend on visual attention. We measured neuronal activity in V1 and V4 in a task where monkeys either made an eye movement to texture-defined figures or ignored them. V1 activity predicted the timing and the direction of the saccade if the figures were task relevant. We found that boundary detection is an early process that depends little on attention, whereas region filling occurs later and is facilitated by visual attention, which acts in an object-based manner. Our findings are explained by a model with local, bottom-up computations for boundary detection and feedback processing for region filling. Copyright © 2012 Elsevier Inc. All rights reserved.
Sklar, A E; Sarter, N B
1999-12-01
Observed breakdowns in human-machine communication can be explained, in part, by the nature of current automation feedback, which relies heavily on focal visual attention. Such feedback is not well suited for capturing attention in case of unexpected changes and events or for supporting the parallel processing of large amounts of data in complex domains. As suggested by multiple-resource theory, one possible solution to this problem is to distribute information across various sensory modalities. A simulator study was conducted to compare the effectiveness of visual, tactile, and redundant visual and tactile cues for indicating unexpected changes in the status of an automated cockpit system. Both tactile conditions resulted in higher detection rates for, and faster response times to, uncommanded mode transitions. Tactile feedback did not interfere with, nor was its effectiveness affected by, the performance of concurrent visual tasks. The observed improvement in task-sharing performance indicates that the introduction of tactile feedback is a promising avenue toward better supporting human-machine communication in event-driven, information-rich domains.
Independent effects of motivation and spatial attention in the human visual cortex.
Bayer, Mareike; Rossi, Valentina; Vanlessen, Naomi; Grass, Annika; Schacht, Annekathrin; Pourtois, Gilles
2017-01-01
Motivation and attention constitute major determinants of human perception and action. Nonetheless, it remains a matter of debate whether motivation effects on the visual cortex depend on the spatial attention system, or rely on independent pathways. This study investigated the impact of motivation and spatial attention on the activity of the human primary and extrastriate visual cortex by employing a factorial manipulation of the two factors in a cued pattern discrimination task. During stimulus presentation, we recorded event-related potentials and pupillary responses. Motivational relevance increased the amplitudes of the C1 component at ∼70 ms after stimulus onset. This modulation occurred independently of spatial attention effects, which were evident at the P1 level. Furthermore, motivation and spatial attention had independent effects on preparatory activation as measured by the contingent negative variation; and pupil data showed increased activation in response to incentive targets. Taken together, these findings suggest independent pathways for the influence of motivation and spatial attention on the activity of the human visual cortex. © The Author (2016). Published by Oxford University Press.
Data management in Oceanography at SOCIB
NASA Astrophysics Data System (ADS)
Joaquin, Tintoré; March, David; Lora, Sebastian; Sebastian, Kristian; Frontera, Biel; Gómara, Sonia; Pau Beltran, Joan
2014-05-01
SOCIB, the Balearic Islands Coastal Ocean Observing and Forecasting System (http://www.socib.es), is a Marine Research Infrastructure: a multiplatform, distributed, and integrated system, a facility of facilities that extends from the nearshore to the open sea and provides free, open, quality-controlled data. SOCIB has three major infrastructure components: (1) a distributed multiplatform observing system, (2) a numerical forecasting system, and (3) a data management and visualization system. We present the spatial data infrastructure and applications developed at SOCIB. One of the major goals of the SOCIB Data Centre is to provide users with a system to locate and download the data of interest (near real-time and delayed mode) and to visualize and manage the information. Following SOCIB principles, data need to be (1) discoverable and accessible, (2) freely available, and (3) interoperable and standardized. In consequence, the SOCIB Data Centre Facility is implementing a general data management system to guarantee international standards, quality assurance, and interoperability. The combination of different sources and types of information requires appropriate methods to ingest, catalogue, display, and distribute this information. The SOCIB Data Centre is responsible for directing the different stages of data management, ranging from data acquisition to distribution and visualization through web applications, and the implemented system relies on open-source solutions. The data life cycle comprises the following stages:
• Acquisition: the data managed by SOCIB mostly come from its own observation platforms, numerical models, or information generated by the activities of the SIAS Division.
• Processing: applications developed at SOCIB handle all collected platform data, performing calibration, derivation, quality control, and standardization.
• Archival: storage in netCDF files and spatial databases.
• Distribution: data web services using THREDDS, GeoServer, and SOCIB's own RESTful services.
• Catalogue: metadata is provided through the ncISO plugin in THREDDS and through GeoNetwork.
• Visualization: web and mobile applications present SOCIB data to different user profiles.
SOCIB data services and applications have been developed to respond to the needs of science and society (e.g., European initiatives such as EMODnet or Copernicus) by targeting different user profiles (e.g., researchers, technicians, policy and decision makers, educators, students, and society in general). For example, SOCIB has developed applications to: 1) allow researchers and technicians to access oceanographic information; 2) provide decision support for oil spill response; 3) disseminate information about the coastal state for tourists and recreational users; 4) present coastal research in educational programs; and 5) offer easy and fast access to marine information through mobile devices. In conclusion, the organizational and conceptual structure of SOCIB's Data Centre and the components developed provide an example of marine information systems within the framework of new ocean observatories and marine research infrastructures.
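Because distribution goes through THREDDS, data published this way can typically be read remotely over OPeNDAP without downloading whole files. A minimal sketch, assuming the netCDF4 Python library built with OPeNDAP support; the URL and variable name are hypothetical placeholders, not actual SOCIB endpoints:

    from netCDF4 import Dataset

    # Hypothetical OPeNDAP endpoint served by a THREDDS Data Server.
    url = "http://thredds.example.org/thredds/dodsC/observations/glider.nc"

    with Dataset(url) as ds:                  # opened remotely via OPeNDAP
        temp = ds.variables["temperature"]    # variable name is illustrative
        subset = temp[0:100]                  # only this slice crosses the wire

Lazy slicing is the point of the design: clients pull just the subset they need, which is what makes serving large observing-system archives practical.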
Alphonsa, Sushma; Dai, Boyi; Benham-Deal, Tami; Zhu, Qin
2016-01-01
The speed-accuracy trade-off is a fundamental movement problem that has been extensively investigated. It has been established that the speed at which one can move to tap targets depends on how large the targets are and how far apart they are. These spatial properties of the targets can be quantified by the index of difficulty (ID). Two visual illusions are known to affect the perception of target size and movement amplitude: the Ebbinghaus illusion and the Müller-Lyer illusion. We created visual images that combined these two visual illusions to manipulate the perceived ID, and then examined people's visual perception of the targets in the illusory context as well as their performance in tapping those targets in both discrete and continuous manners. The findings revealed that the combined visual illusions affected the perceived ID similarly in both discrete and continuous judgment conditions. However, the movement outcomes were affected by the combined visual illusions according to the tapping mode. In discrete tapping, the combined visual illusions affected both movement accuracy and movement amplitude, such that the effective ID resembled the perceived ID. In continuous tapping, none of the movement outcomes were affected by the combined visual illusions; participants tapped the targets with higher speed and accuracy in all visual conditions. Based on these findings, we concluded that distinct visual-motor control mechanisms are responsible for the execution of discrete and continuous Fitts' tapping. Whereas discrete tapping relies on allocentric (object-centered) information to plan the action, continuous tapping relies on egocentric (self-centered) information to control the action. The planning-control model for rapid aiming movements is supported.
All These Rays! What's the Point?
ERIC Educational Resources Information Center
Roberts, Sally K.; Tayeh, Carla
2011-01-01
Every semester, the authors encounter students who are attracted to the visual and spatial aspects of geometry. They have other students who consider geometry to be challenging for the very same reasons. Students are confounded not only by the fact that geometry relies on visual interpretations but also because it has a language of its own and…
A Visual Short-Term Memory Advantage for Objects of Expertise
ERIC Educational Resources Information Center
Curby, Kim M.; Glazek, Kuba; Gauthier, Isabel
2009-01-01
Visual short-term memory (VSTM) is limited, especially for complex objects. Its capacity, however, is greater for faces than for other objects; this advantage may stem from the holistic nature of face processing. If the holistic processing explains this advantage, object expertise--which also relies on holistic processing--should endow experts…
Communicating Science Concepts through Art: 21st-Century Skills in Practice
ERIC Educational Resources Information Center
Buczynski, Sandy; Ireland, Kathleen; Reed, Sherri; Lacanienta, Evelyn
2012-01-01
There is a dynamic synergy between the visual arts and the natural sciences. For example, science relies heavily on individuals with visual-art skills to render detailed illustrations, depicting everything from atoms to zebras. Likewise, artists apply analytic, linear, and logical thinking to compose and scale their work of art. These parallel…
ERIC Educational Resources Information Center
Kumazaki, Hirokazu; Kikuchi, Mitsuru; Yoshimura, Yuko; Miyao, Masutomo; Okada, Ken-ichi; Mimura, Masaru; Minabe, Yoshio
2018-01-01
Understanding the nature of olfactory abnormalities is crucial for optimal interventions in children with autism spectrum disorders (ASD). However, previous studies that have investigated odor identification in children with ASD have produced inconsistent results. The ability to correctly identify an odor relies heavily on visual inputs in the…
Auditory biofeedback substitutes for loss of sensory information in maintaining stance.
Dozza, Marco; Horak, Fay B; Chiari, Lorenzo
2007-03-01
The importance of sensory feedback for postural control in stance is evident from the balance improvements that occur when sensory information from the vestibular, somatosensory, and visual systems is available. However, the extent to which audio-biofeedback (ABF) information can also improve balance has not been determined. It is also unknown why additional artificial sensory feedback is more effective for some subjects than for others, and in some environmental contexts than in others. The aim of this study was to determine the relative effectiveness of an ABF system in reducing postural sway in stance in healthy control subjects and in subjects with bilateral vestibular loss, under conditions of reduced vestibular, visual, and somatosensory inputs. This ABF system used a threshold region and non-linear scaling parameters customized for each individual to provide subjects with pitch and volume coding of their body sway. ABF had the largest effect on reducing the body sway of the subjects with bilateral vestibular loss when the environment provided limited visual and somatosensory information; it had the smallest effect on reducing their sway when the environment provided full somatosensory information. The extent to which subjects substituted ABF information for their loss of sensory information was related to the extent to which each subject was visually dependent or somatosensory-dependent for postural control. Comparison of postural sway under a variety of sensory conditions suggests that patients with profound bilateral loss of vestibular function show larger-than-normal information redundancy among the remaining senses and ABF of trunk sway. The results support the hypothesis that the nervous system uses augmented sensory information differently depending both on the environment and on individual proclivities to rely on vestibular, somatosensory, or visual information to control sway.
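The threshold-plus-nonlinear-scaling coding described above can be sketched as a simple mapping from trunk-sway angle to audio parameters; the constants below are illustrative, not the study's per-subject calibrated values:

    def sway_to_audio(sway_deg, dead_zone=0.5, exponent=1.5):
        """Map trunk sway (degrees) to (pitch_hz, volume), or None for silence.
        Inside the per-subject dead zone no feedback is given; outside it,
        pitch and volume grow non-linearly with sway magnitude."""
        base_pitch = 440.0
        excess = abs(sway_deg) - dead_zone
        if excess <= 0:
            return None  # within the threshold region: silence
        level = excess ** exponent              # non-linear scaling
        sign = 1.0 if sway_deg >= 0 else -1.0
        pitch = base_pitch * (1.0 + 0.2 * sign * level)  # pitch codes direction
        volume = min(1.0, 0.1 + 0.3 * level)             # volume codes magnitude
        return pitch, volume

The dead zone keeps the feedback silent during normal quiet stance, so sound is emitted only when sway becomes large enough to be worth correcting.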
Lau, Johnny King L; Humphreys, Glyn W; Douis, Hassan; Balani, Alex; Bickerton, Wai-Ling; Rotshtein, Pia
2015-01-01
We report a lesion-symptom mapping analysis of visual speech production deficits in a large group (280) of stroke patients at the sub-acute stage (<120 days post-stroke). Performance on object naming was evaluated alongside three other tests of visual speech production, namely sentence production to a picture, sentence reading and nonword reading. A principal component analysis was performed on all these tests' scores and revealed a 'shared' component that loaded across all the visual speech production tasks and a 'unique' component that isolated object naming from the other three tasks. Regions for the shared component were observed in the left fronto-temporal cortices, fusiform gyrus and bilateral visual cortices. Lesions in these regions linked to both poor object naming and impairment in general visual-speech production. On the other hand, the unique naming component was potentially associated with the bilateral anterior temporal poles, hippocampus and cerebellar areas. This is in line with the models proposing that object naming relies on a left-lateralised language dominant system that interacts with a bilateral anterior temporal network. Neuropsychological deficits in object naming can reflect both the increased demands specific to the task and the more general difficulties in language processing.
Patient DF's visual brain in action: Visual feedforward control in visual form agnosia.
Whitwell, Robert L; Milner, A David; Cavina-Pratesi, Cristiana; Barat, Masihullah; Goodale, Melvyn A
2015-05-01
Patient DF, who developed visual form agnosia following ventral-stream damage, is unable to discriminate the width of objects, performing at chance, for example, when asked to open her thumb and forefinger a matching amount. Remarkably, however, DF adjusts her hand aperture to accommodate the width of objects when reaching out to pick them up (grip scaling). While this spared ability to grasp objects is presumed to be mediated by visuomotor modules in her relatively intact dorsal stream, it is possible that it may rely abnormally on online visual or haptic feedback. We report here that DF's grip scaling remained intact when her vision was completely suppressed during grasp movements, and it still dissociated sharply from her poor perceptual estimates of target size. We then tested whether providing trial-by-trial haptic feedback after making such perceptual estimates might improve DF's performance, but found that they remained significantly impaired. In a final experiment, we re-examined whether DF's grip scaling depends on receiving veridical haptic feedback during grasping. In one condition, the haptic feedback was identical to the visual targets. In a second condition, the haptic feedback was of a constant intermediate width while the visual target varied trial by trial. Despite this incongruent feedback, DF still scaled her grip aperture to the visual widths of the target blocks, showing only normal adaptation to the false haptically-experienced width. Taken together, these results strengthen the view that DF's spared grasping relies on a normal mode of dorsal-stream functioning, based chiefly on visual feedforward processing. Copyright © 2014 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, M Pauline
2007-06-30
The VisPort visualization portal is an experiment in providing Web-based access to visualization functionality from any place and at any time. VisPort adopts a service-oriented architecture to encapsulate visualization functionality and to support remote access. Users employ browser-based client applications to choose data and services, set parameters, and launch visualization jobs. Visualization products, typically images or movies, are viewed in the user's standard Web browser. VisPort emphasizes visualization solutions customized for specific application communities. Finally, VisPort relies heavily on XML and introduces the notion of visualization informatics: the formalization and specialization of information related to the process and products of visualization.
A novel computational model to probe visual search deficits during motor performance
Singh, Tarkeshwar; Fridriksson, Julius; Perry, Christopher M.; Tryon, Sarah C.; Ross, Angela; Fritz, Stacy
2016-01-01
Successful execution of many motor skills relies on well-organized visual search (voluntary eye movements that actively scan the environment for task-relevant information). Although impairments of visual search that result from brain injuries are linked to diminished motor performance, the neural processes that guide visual search within this context remain largely unknown. The first objective of this study was to examine how visual search in healthy adults and stroke survivors is used to guide hand movements during the Trail Making Test (TMT), a neuropsychological task that is a strong predictor of visuomotor and cognitive deficits. Our second objective was to develop a novel computational model to investigate combinatorial interactions between three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing). We predicted that stroke survivors would exhibit deficits in integrating the three underlying processes, resulting in deteriorated overall task performance. We found that normal TMT performance is associated with patterns of visual search that primarily rely on spatial planning and/or working memory (but not peripheral visual processing). Our computational model suggested that abnormal TMT performance following stroke is associated with impairments of visual search that are characterized by deficits integrating spatial planning and working memory. This innovative methodology provides a novel framework for studying how the neural processes underlying visual search interact combinatorially to guide motor performance. NEW & NOTEWORTHY Visual search has traditionally been studied in cognitive and perceptual paradigms, but little is known about how it contributes to visuomotor performance. We have developed a novel computational model to examine how three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing) contribute to visual search during a visuomotor task. We show that deficits integrating spatial planning and working memory underlie abnormal performance in stroke survivors with frontoparietal damage. PMID:27733596
Controlling the digital transfer process
NASA Astrophysics Data System (ADS)
Brunner, Felix
1997-02-01
The accuracy of today's color management systems fails to satisfy the requirements of the graphic arts market. A first explanation for this is that the color calibration charts on which these systems rely are, for reasons inherent in printing technology, subject to color deviations and inconsistencies. A second reason is that colorimetry describes the human visual perception of color differences and has no direct relation to the rendering technology itself of a proofing or printing device. The author explains that only firm process control of the many parameters in offset printing, by means of a system such as the EUROSTANDARD System Brunner, can lead to accurate and consistent calibration of scanner, display, proof, and print. The same principles hold for the quality management of digital presses.
Absoud, Michael; Parr, Jeremy R; Salt, Alison; Dale, Naomi
2011-03-01
Available observational tools used in the identification of social communication difficulties and diagnosis of autism spectrum disorder (ASD) rely partly on visual behaviours and therefore may not be valid in children with visual impairment. A pilot observational instrument, the Visual Impairment and Social Communication Schedule (VISS), was developed to aid in identifying social communication difficulties and ASD in young children with visual impairment affected by congenital disorders of the peripheral visual system (disorders of the globe, retina, and anterior optic nerve). The VISS was administered to 23 consecutive children (age range 1 y 9 mo-6 y 11 mo, mean 4 y 1 mo [SD 1.6]; 12 males, 11 females) with visual impairment (nine with severe and 14 with profound visual impairment). Item analysis was carried out by fit of the items to the Rasch model. Validity of the VISS was explored by comparison with the Childhood Autism Rating Scale (CARS) score, and the clinical ASD diagnosis (n=9). Correlation between the VISS and CARS total scores was highly significant (Spearman's rho=-0.89; p=0.01). Below threshold rating on the VISS (score of 35) showed good agreement with the clinical ASD diagnosis (sensitivity 89%, specificity 100%). This preliminary study shows the VISS to be a promising schedule to aid the identification of ASD in young children with visual impairment. © The Authors. Journal compilation © Mac Keith Press 2010.
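The two validity analyses reported above are standard and easy to reproduce in outline. The following sketch runs them on synthetic placeholder scores (not the study data), using SciPy's spearmanr and a 2x2 confusion count for the threshold-of-35 rule.

```python
import numpy as np
from scipy import stats

# Sketch of the reported validity checks on synthetic placeholder scores:
# Spearman correlation between VISS and CARS totals, and sensitivity /
# specificity of the below-threshold VISS rating against clinical diagnosis.

rng = np.random.default_rng(1)
viss = rng.integers(20, 55, size=23).astype(float)    # hypothetical VISS totals
cars = 60 - viss + rng.normal(0, 3, size=23)          # inversely related CARS totals
asd = (viss + rng.normal(0, 4, size=23)) < 37         # hypothetical diagnoses

rho, p = stats.spearmanr(viss, cars)
flagged = viss < 35                                   # below-threshold rating
tp = np.sum(flagged & asd); fn = np.sum(~flagged & asd)
tn = np.sum(~flagged & ~asd); fp = np.sum(flagged & ~asd)
print(f"Spearman rho = {rho:.2f} (p = {p:.4f})")
print(f"sensitivity = {tp/(tp+fn):.0%}, specificity = {tn/(tn+fp):.0%}")
```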
NASA Astrophysics Data System (ADS)
Christensen, C.; Summa, B.; Scorzelli, G.; Lee, J. W.; Venkat, A.; Bremer, P. T.; Pascucci, V.
2017-12-01
Massive datasets are becoming more common due to increasingly detailed simulations and higher resolution acquisition devices. Yet accessing and processing these huge data collections for scientific analysis is still a significant challenge. Solutions that rely on extensive data transfers are increasingly untenable and often impossible due to lack of sufficient storage at the client side as well as insufficient bandwidth to conduct such large transfers, which in some cases could entail petabytes of data. Large-scale remote computing resources can be useful, but utilizing such systems typically entails some form of offline batch processing with long delays, data replications, and substantial cost for any mistakes. Both types of workflows can severely limit the flexible exploration and rapid evaluation of new hypotheses that are crucial to the scientific process and thereby impede scientific discovery. In order to facilitate interactivity in both analysis and visualization of these massive data ensembles, we introduce a dynamic runtime system suitable for progressive computation and interactive visualization of arbitrarily large, disparately located spatiotemporal datasets. Our system includes an embedded domain-specific language (EDSL) that allows users to express a wide range of data analysis operations in a simple and abstract manner. The underlying runtime system transparently resolves issues such as remote data access and resampling while at the same time maintaining interactivity through progressive and interruptible processing. Computations involving large amounts of data can be performed remotely in an incremental fashion that dramatically reduces data movement, while the client receives updates progressively, thereby remaining robust to fluctuating network latency or limited bandwidth. This system facilitates interactive, incremental analysis and visualization of massive remote datasets up to petabytes in size. Our system is now available for general use in the community through both Docker and Anaconda.
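The core idea of progressive, interruptible processing can be illustrated independently of the actual runtime. The sketch below refines an analysis level by level over coarse-to-fine subsamples of a local stand-in array; the real system additionally resolves remote multi-resolution data access, which is omitted here.

```python
import numpy as np

# Progressive, interruptible computation in miniature: a statistic is
# refined from coarse subsamples toward the full resolution, so the client
# always holds a usable estimate and can stop at any level.

data = np.random.rand(4096, 4096)        # stand-in for a remotely stored field

def progressive_mean(field, levels=6):
    for level in range(levels, -1, -1):
        step = 2 ** level                # coarse -> fine sampling stride
        estimate = field[::step, ::step].mean()
        yield level, estimate            # caller may break out at any point

for level, est in progressive_mean(data):
    print(f"level {level}: mean ~ {est:.5f}")
    # a client could break here on user interaction or a latency budget
```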
Carl Linnaeus and the visual representation of nature.
Charmantier, Isabelle
2011-01-01
The Swedish naturalist Carl Linnaeus (1707-1778) is reputed to have transformed botanical practice by shunning the process of illustrating plants and relying on the primacy of literary descriptions of plant specimens. Botanists and historians have long debated Linnaeus's capacities as a draftsman. While some of his detailed sketches of plants and insects reveal a sure hand, his more general drawings of landscapes and people seem ill-executed. The overwhelming consensus, based mostly on his Lapland diary (1732), is that Linnaeus could not draw. Little has been said, however, on the role of drawing and other visual representations in Linnaeus's daily work as seen in his other numerous manuscripts. These manuscripts, held mostly at the Linnean Society of London, are peppered with sketches, maps, tables, and diagrams. Reassessing these manuscripts, along with the printed works that also contain illustrations of plant species, shows that Linnaeus's thinking was profoundly visual and that he routinely used visual representational devices in his various publications. This paper aims to explore the full range of visual representations Linnaeus used through his working life, and to reevaluate the epistemological value of visualization in the making of natural knowledge. By analyzing Linnaeus's use of drawings, maps, tables, and diagrams, I will show that he did not, as has been asserted, reduce the discipline of botany to text, and that his visual thinking played a fundamental role in his construction of new systems of classification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown-VanHoozer, S.A.
Most designers are not schooled in the area of human-interaction psychology and therefore tend to rely on the traditional ergonomic aspects of human factors when designing complex human-interactive workstations related to reactor operations. They do not take into account the differences in user information processing behavior and how these behaviors may affect individual and team performance when accessing visual displays or utilizing system models in process and control room areas. Unfortunately, ignoring the integration of the user interface at the information-processing level can result in sub-optimization and inherently error- and failure-prone systems. Therefore, to minimize or eliminate failures in human-interactive systems, it is essential that the designers understand how each user's processing characteristics affect how the user gathers information, and how the user communicates the information to the designer and other users. A different type of approach in achieving this understanding is Neuro Linguistic Programming (NLP). The material presented in this paper is based on two studies involving the design of visual displays, NLP, and the user's perspective model of a reactor system. The studies involve the methodology known as NLP, and its use in expanding design choices from the user's "model of the world," in the areas of virtual reality, workstation design, team structure, decision and learning style patterns, safety operations, pattern recognition, and much, much more.
ERIC Educational Resources Information Center
Reeder, Kevin
2005-01-01
The movie industry heavily relies on storyboards as an effective way to visually describe the process of a movie. The storyboard visually describes how the movie flows from beginning to end, how the characters are interacting, and where transitions and/or gaps exist in the storyline. The storyboard is an effective tool in industrial design as…
Sligte, Ilja G; Wokke, Martijn E; Tesselaar, Johannes P; Scholte, H Steven; Lamme, Victor A F
2011-05-01
To guide our behavior in successful ways, we often need to rely on information that is no longer in view, but maintained in visual short-term memory (VSTM). While VSTM is usually broken down into iconic memory (brief and high-capacity store) and visual working memory (sustained, yet limited-capacity store), recent studies have suggested the existence of an additional and intermediate form of VSTM that depends on activity in extrastriate cortex. In previous work, we have shown that this fragile form of VSTM can be dissociated from iconic memory. In the present study, we provide evidence that fragile VSTM is different from visual working memory as magnetic stimulation of the right dorsolateral prefrontal cortex (DLPFC) disrupts visual working memory, while leaving fragile VSTM intact. In addition, we observed that people with high DLPFC activity had superior working memory capacity compared to people with low DLPFC activity, and only people with high DLPFC activity really showed a reduction in working memory capacity in response to magnetic stimulation. Altogether, this study shows that VSTM consists of three stages that have clearly different characteristics and rely on different neural structures. On the methodological side, we show that it is possible to predict individual susceptibility to magnetic stimulation based on functional MRI activity. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
Addressing Challenges in Web Accessibility for the Blind and Visually Impaired
ERIC Educational Resources Information Center
Guercio, Angela; Stirbens, Kathleen A.; Williams, Joseph; Haiber, Charles
2011-01-01
Searching for relevant information on the web is an important aspect of distance learning. This activity is a challenge for visually impaired distance learners. While sighted people have the ability to filter information in a fast, non-sequential way, blind persons rely on tools that process the information in a sequential way. Learning is…
Encourage Students to Read through the Use of Data Visualization
ERIC Educational Resources Information Center
Bandeen, Heather M.; Sawin, Jason E.
2012-01-01
Instructors are always looking for new ways to engage students in reading assignments. The authors present a few techniques that rely on a web-based data visualization tool called Wordle (wordle.net). Wordle creates word frequency representations called word clouds. The larger a word appears within a cloud, the more frequently it occurs within a…
ERIC Educational Resources Information Center
Diesendruck, Gil; Peretz, Shimon
2013-01-01
Visual appearance is one of the main cues children rely on when categorizing novel objects. In 3 studies, testing 128 3-year-olds and 192 5-year-olds, we investigated how various kinds of information may differentially lead children to overlook visual appearance in their categorization decisions across domains. Participants saw novel animals or…
ERIC Educational Resources Information Center
Andrews, Deborah C.
2016-01-01
Business and professional communicators increasingly rely on visual thinking and design strategies to create effective messages. The workplace need for such thinking, however, is not readily accommodated in current pedagogy. A long-running study abroad short course for American students taught in London provides a model for meeting this need.…
Surfing a spike wave down the ventral stream.
VanRullen, Rufin; Thorpe, Simon J
2002-10-01
Numerous theories of neural processing, often motivated by experimental observations, have explored the computational properties of neural codes based on the absolute or relative timing of spikes in spike trains. Spiking neuron models and theories, however, as well as their experimental counterparts, have generally been limited to the simulation or observation of isolated neurons, isolated spike trains, or reduced neural populations. Such theories would therefore seem inappropriate to capture the properties of a neural code relying on temporal spike patterns distributed across large neuronal populations. Here we report a range of computer simulations and theoretical considerations that were designed to explore the possibilities of one such code and its relevance for visual processing. In a unified framework where the relation between stimulus saliency and spike relative timing plays the central role, we describe how the ventral stream of the visual system could process natural input scenes and extract meaningful information, both rapidly and reliably. The first wave of spikes generated in the retina in response to a visual stimulation carries information explicitly in its spatio-temporal structure: the most salient information is represented by the first spikes over the population. This spike wave, propagating through a hierarchy of visual areas, is regenerated at each processing stage, where its temporal structure can be modified by (i) the selectivity of the cortical neurons, (ii) lateral interactions, and (iii) top-down attentional influences from higher order cortical areas. The resulting model could account for the remarkable efficiency and rapidity of processing observed in the primate visual system.
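The proposed latency code is easy to illustrate: if stronger inputs fire earlier, the arrival order of the first spikes across the population already ranks the stimulus by saliency. The linear latency rule below is an illustrative choice, not the authors' retinal model.

```python
import numpy as np

# Sketch of a saliency-to-latency code: more salient inputs spike earlier,
# so the rank order of first spikes carries the information.

rng = np.random.default_rng(0)
saliency = rng.random(8)                 # stimulus contrast per afferent
latency = 50.0 * (1.0 - saliency)        # ms; stronger input -> earlier spike

order = np.argsort(latency)              # spike arrival order
print("arrival order of afferents:", order)
print("saliency, most to least:", saliency[order].round(2))
# the rank order (not the exact times) is the proposed information carrier
```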
Altered visual perception in long-term ecstasy (MDMA) users.
White, Claire; Brown, John; Edwards, Mark
2013-09-01
The present study investigated the long-term consequences of ecstasy use on visual processes thought to reflect serotonergic functions in the occipital lobe. Evidence indicates that the main psychoactive ingredient in ecstasy (methylendioxymethamphetamine) causes long-term changes to the serotonin system in human users. Previous research has found that amphetamine-abstinent ecstasy users have disrupted visual processing in the occipital lobe which relies on serotonin, with researchers concluding that ecstasy broadens orientation tuning bandwidths. However, other processes may have accounted for these results. The aim of the present research was to determine if amphetamine-abstinent ecstasy users have changes in occipital lobe functioning, as revealed by two studies: a masking study that directly measured the width of orientation tuning bandwidths and a contour integration task that measured the strength of long-range connections in the visual cortex of drug users compared to controls. Participants were compared on the width of orientation tuning bandwidths (26 controls, 12 ecstasy users, 10 ecstasy + amphetamine users) and the strength of long-range connections (38 controls, 15 ecstasy users, 12 ecstasy + amphetamine users) in the occipital lobe. Amphetamine-abstinent ecstasy users had significantly broader orientation tuning bandwidths than controls and significantly lower contour detection thresholds (CDTs), indicating worse performance on the task, than both controls and ecstasy + amphetamine users. These results extend previous research and are consistent with the proposal that ecstasy may damage the serotonin system, resulting in behavioral changes on tests of visual perception processes which are thought to reflect serotonergic functions in the occipital lobe.
The Impact of Interactivity on Comprehending 2D and 3D Visualizations of Movement Data.
Amini, Fereshteh; Rufiange, Sebastien; Hossain, Zahid; Ventura, Quentin; Irani, Pourang; McGuffin, Michael J
2015-01-01
GPS, RFID, and other technologies have made it increasingly common to track the positions of people and objects over time as they move through two-dimensional spaces. Visualizing such spatio-temporal movement data is challenging because each person or object involves three variables (two spatial variables as a function of the time variable), and simply plotting the data on a 2D geographic map can result in overplotting and occlusion that hides details. This also makes it difficult to understand correlations between space and time. Software such as GeoTime can display such data with a three-dimensional visualization, where the 3rd dimension is used for time. This allows for the disambiguation of spatially overlapping trajectories, and in theory, should make the data clearer. However, previous experimental comparisons of 2D and 3D visualizations have so far found little advantage in 3D visualizations, possibly due to the increased complexity of navigating and understanding a 3D view. We present a new controlled experimental comparison of 2D and 3D visualizations, involving commonly performed tasks that have not been tested before, and find advantages in 3D visualizations for more complex tasks. In particular, we tease out the effects of various basic interactions and find that the 2D view relies significantly on "scrubbing" the timeline, whereas the 3D view relies mainly on 3D camera navigation. Our work helps to improve understanding of 2D and 3D visualizations of spatio-temporal data, particularly with respect to interactivity.
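The contrast between the two displays is easy to reproduce. The sketch below plots two objects that traverse the same spatial circle at different times: they overplot completely on a 2D map but separate into distinct helices in a space-time cube.

```python
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection)

# Two objects visiting the same spatial locations at different times:
# indistinguishable on a 2D map, disambiguated in a space-time cube.

t = np.linspace(0, 10, 200)
x1, y1 = np.cos(t), np.sin(t)
x2, y2 = np.cos(t + np.pi), np.sin(t + np.pi)

fig = plt.figure(figsize=(8, 4))
ax2d = fig.add_subplot(121)
ax2d.plot(x1, y1, label="object A"); ax2d.plot(x2, y2, label="object B")
ax2d.legend(); ax2d.set_title("2D map: trajectories overplot")

ax3d = fig.add_subplot(122, projection="3d")
ax3d.plot(x1, y1, t); ax3d.plot(x2, y2, t)   # third dimension = time
ax3d.set_title("space-time cube")
plt.show()
```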
Takechi, Hiroki; Kawamura, Hinata
2017-01-01
Formation of a functional neuronal network requires not only precise target recognition, but also stabilization of axonal contacts within their appropriate synaptic layers. Little is known about the molecular mechanisms underlying the stabilization of axonal connections after reaching their specifically targeted layers. Here, we show that two receptor protein tyrosine phosphatases (RPTPs), LAR and Ptp69D, act redundantly in photoreceptor afferents to stabilize axonal connections to the specific layers of the Drosophila visual system. Surprisingly, by combining loss-of-function and genetic rescue experiments, we found that the depth of the final layer of stable termination relied primarily on the cumulative amount of LAR and Ptp69D cytoplasmic activity, while specific features of their ectodomains contribute to the choice between two synaptic layers, M3 and M6, in the medulla. These data demonstrate how the combination of overlapping downstream but diversified upstream properties of two RPTPs can shape layer-specific wiring. PMID:29116043
Reading faces: investigating the use of a novel face-based orthography in acquired alexia.
Moore, Michelle W; Brendel, Paul C; Fiez, Julie A
2014-02-01
Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic "FaceFont" orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a "linguistic bridge" into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system. Copyright © 2013 Elsevier Inc. All rights reserved.
Reading faces: Investigating the use of a novel face-based orthography in acquired alexia
Moore, Michelle W.; Brendel, Paul C.; Fiez, Julie A.
2014-01-01
Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic “FaceFont” orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a “linguistic bridge” into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system. PMID:24463310
Sarlegna, Fabrice R; Baud-Bovy, Gabriel; Danion, Frédéric
2010-08-01
When we manipulate an object, grip force is adjusted in anticipation of the mechanical consequences of hand motion (i.e., load force) to prevent the object from slipping. This predictive behavior is assumed to rely on an internal representation of the object dynamic properties, which would be elaborated via visual information before the object is grasped and via somatosensory feedback once the object is grasped. Here we examined this view by investigating the effect of delayed visual feedback during dextrous object manipulation. Adult participants manually tracked a sinusoidal target by oscillating a handheld object whose current position was displayed as a cursor on a screen along with the visual target. A delay was introduced between actual object displacement and cursor motion. This delay was linearly increased (from 0 to 300 ms) and decreased within 2-min trials. As previously reported, delayed visual feedback altered performance in manual tracking. Importantly, although the physical properties of the object remained unchanged, delayed visual feedback altered the timing of grip force relative to load force by about 50 ms. Additional experiments showed that this effect was not due to task complexity nor to manual tracking. A model inspired by the behavior of mass-spring systems suggests that delayed visual feedback may have biased the representation of object dynamics. Overall, our findings support the idea that visual feedback of object motion can influence the predictive control of grip force even when the object is grasped.
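The anticipatory grip-load coupling described above can be sketched numerically: for a hand-held mass oscillated vertically, load force is m(g + a(t)), and a predictive controller schedules grip force slightly ahead of the load. The 50 ms lead mirrors the shift reported above; the mass, amplitude, and safety factor are illustrative.

```python
import numpy as np

# Predictive grip-load coupling for a vertically oscillated hand-held mass:
# load = m * (g + a(t)); grip is scheduled with a small temporal lead.

m, g, f = 0.3, 9.81, 1.0                       # kg, m/s^2, Hz (illustrative)
t = np.linspace(0, 2, 2000)
a = -(2 * np.pi * f) ** 2 * 0.05 * np.sin(2 * np.pi * f * t)  # 5 cm amplitude
load = m * (g + a)

lead = 0.05                                    # grip leads load by ~50 ms
grip = 1.5 * m * (g + np.interp(t + lead, t, a))  # predictive grip command

print(f"peak load {load.max():.2f} N, grip at that instant "
      f"{grip[np.argmax(load)]:.2f} N")
```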
Filling gaps in visual motion for target capture
Bosco, Gianfranco; Delle Monache, Sergio; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco
2015-01-01
A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation. PMID:25755637
Filling gaps in visual motion for target capture.
Bosco, Gianfranco; Monache, Sergio Delle; Gravano, Silvio; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Zago, Myrka; Lacquaniti, Francesco
2015-01-01
A remarkable challenge our brain must face constantly when interacting with the environment is represented by ambiguous and, at times, even missing sensory information. This is particularly compelling for visual information, being the main sensory system we rely upon to gather cues about the external world. It is not uncommon, for example, that objects catching our attention may disappear temporarily from view, occluded by visual obstacles in the foreground. Nevertheless, we are often able to keep our gaze on them throughout the occlusion or even catch them on the fly in the face of the transient lack of visual motion information. This implies that the brain can fill the gaps of missing sensory information by extrapolating the object motion through the occlusion. In recent years, much experimental evidence has been accumulated that both perceptual and motor processes exploit visual motion extrapolation mechanisms. Moreover, neurophysiological and neuroimaging studies have identified brain regions potentially involved in the predictive representation of the occluded target motion. Within this framework, ocular pursuit and manual interceptive behavior have proven to be useful experimental models for investigating visual extrapolation mechanisms. Studies in these fields have pointed out that visual motion extrapolation processes depend on manifold information related to short-term memory representations of the target motion before the occlusion, as well as to longer term representations derived from previous experience with the environment. We will review recent oculomotor and manual interception literature to provide up-to-date views on the neurophysiological underpinnings of visual motion extrapolation.
The role of visual deprivation and experience on the performance of sensory substitution devices.
Stronks, H Christiaan; Nau, Amy C; Ibbotson, Michael R; Barnes, Nick
2015-10-22
It is commonly accepted that the blind can partially compensate for their loss of vision by developing enhanced abilities with their remaining senses. This visual compensation may be related to the fact that blind people rely on their other senses in everyday life. Many studies have indeed shown that experience plays an important role in visual compensation. Numerous neuroimaging studies have shown that the visual cortices of the blind are recruited by other functional brain areas and can become responsive to tactile or auditory input instead. These cross-modal plastic changes are more pronounced in the early blind compared to late blind individuals. The functional consequences of cross-modal plasticity on visual compensation in the blind are debated, as are the influences of various etiologies of vision loss (i.e., blindness acquired early or late in life). Distinguishing between the influences of experience and visual deprivation on compensation is especially relevant for rehabilitation of the blind with sensory substitution devices. The BrainPort artificial vision device and The vOICe are assistive devices for the blind that redirect visual information to another intact sensory system. Establishing how experience and different etiologies of vision loss affect the performance of these devices may help to improve existing rehabilitation strategies, formulate effective selection criteria and develop prognostic measures. In this review we will discuss studies that investigated the influence of training and visual deprivation on the performance of various sensory substitution approaches. Copyright © 2015 Elsevier B.V. All rights reserved.
Strategic search from long-term memory: an examination of semantic and autobiographical recall.
Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J
2014-01-01
Searching long-term memory is theoretically driven by both directed (search strategies) and random components. In the current study we conducted four experiments evaluating strategic search in semantic and autobiographical memory. Participants were required to generate either exemplars from the category of animals or the names of their friends for several minutes. Self-reported strategies suggested that participants typically relied on visualization strategies for both tasks and were less likely to rely on ordered strategies (e.g., alphabetic search). When participants were instructed to use particular strategies, the visualization strategy resulted in the highest levels of performance and the most efficient search, whereas ordered strategies resulted in the lowest levels of performance and fairly inefficient search. These results are consistent with the notion that retrieval from long-term memory is driven, in part, by search strategies employed by the individual, and that one particularly efficient strategy is to visualize various situational contexts that one has experienced in the past in order to constrain the search and generate the desired information.
Hippocampus, perirhinal cortex, and complex visual discriminations in rats and humans
Hales, Jena B.; Broadbent, Nicola J.; Velu, Priya D.
2015-01-01
Structures in the medial temporal lobe, including the hippocampus and perirhinal cortex, are known to be essential for the formation of long-term memory. Recent animal and human studies have investigated whether perirhinal cortex might also be important for visual perception. In our study, using a simultaneous oddity discrimination task, rats with perirhinal lesions were impaired and did not exhibit the normal preference for exploring the odd object. Notably, rats with hippocampal lesions exhibited the same impairment. Thus, the deficit is unlikely to illuminate functions attributed specifically to perirhinal cortex. Both lesion groups were able to acquire visual discriminations involving the same objects used in the oddity task. Patients with hippocampal damage or larger medial temporal lobe lesions were intact in a similar oddity task that allowed participants to explore objects quickly using eye movements. We suggest that humans were able to rely on an intact working memory capacity to perform this task, whereas rats (who moved slowly among the objects) needed to rely on long-term memory. PMID:25593294
Morey, Candice C; Miron, Monica D
2016-12-01
Among models of working memory, there is not yet a consensus about how to describe functions specific to storing verbal or visual-spatial memories. We presented aural-verbal and visual-spatial lists simultaneously and sometimes cued one type of information after presentation, comparing accuracy in conditions with and without informative retro-cues. This design isolates interference due specifically to maintenance, which appears most clearly in the uncued trials, from interference due to encoding, which occurs in all dual-task trials. When recall accuracy was comparable between tasks, we found that spatial memory was worse in uncued than in retro-cued trials, whereas verbal memory was not. Our findings bolster proposals that maintenance of spatial serial order, like maintenance of visual materials more broadly, relies on general rather than specialized resources, while maintenance of verbal sequences may rely on domain-specific resources. We argue that this asymmetry should be explicitly incorporated into models of working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Pecot, Matthew Y.; Chen, Yi; Akin, Orkun; Chen, Zhenqing; Tsui, C.Y. Kimberly; Zipursky, S. Lawrence
2015-01-01
Neural circuit formation relies on interactions between axons and cells within the target field. While it is well established that target-derived signals act on axons to regulate circuit assembly, the extent to which axon-derived signals control circuit formation is not known. In the Drosophila visual system, anterograde signals numerically match R1–R6 photoreceptors with their targets by controlling target proliferation and neuronal differentiation. Here we demonstrate that additional axon-derived signals selectively couple target survival with layer-specificity. We show that Jelly belly (Jeb) produced by R1–R6 axons interacts with its receptor, anaplastic lymphoma kinase (Alk), on budding dendrites to control survival of L3 neurons, one of three postsynaptic targets. L3 axons then produce Netrin, which regulates the layer-specific targeting of another neuron within the same circuit. We propose that a cascade of axon-derived signals, regulating diverse cellular processes, provides a strategy for coordinating circuit assembly across different regions of the nervous system. PMID:24742459
Ravankar, Abhijeet; Ravankar, Ankit A.; Kobayashi, Yukinori; Emaru, Takanori
2017-01-01
Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system which is the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation like path planning, localization, obstacle avoidance, and map update by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. The driver robot, in the proposed system, performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system is robust enough to recover from the ‘driver-lost’ scenario, which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments considering factors like service-time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss the admissible characteristics of the hitchhiker, when hitchhiking should be allowed and when not, through experimental results. PMID:28809803
Ravankar, Abhijeet; Ravankar, Ankit A; Kobayashi, Yukinori; Emaru, Takanori
2017-08-15
Hitchhiking is a means of transportation gained by asking other people for a (free) ride. We developed a multi-robot system which is the first of its kind to incorporate hitchhiking in robotics, and discuss its advantages. Our method allows the hitchhiker robot to skip redundant computations in navigation like path planning, localization, obstacle avoidance, and map update by completely relying on the driver robot. This allows the hitchhiker robot, which performs only visual servoing, to save computation while navigating on the common path with the driver robot. The driver robot, in the proposed system, performs all the heavy computations in navigation and updates the hitchhiker about the current localized positions and new obstacle positions in the map. The proposed system is robust enough to recover from the 'driver-lost' scenario, which occurs due to visual servoing failure. We demonstrate robot hitchhiking in real environments considering factors like service-time and task priority with different start and goal configurations of the driver and hitchhiker robots. We also discuss the admissible characteristics of the hitchhiker, when hitchhiking should be allowed and when not, through experimental results.
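The division of labor described in the two records above can be summarized in a few lines of Python: the driver executes the full navigation stack each step, the hitchhiker runs only visual servoing, and loss of the visual marker triggers recovery to independent navigation. All function names are hypothetical.

```python
# Sketch of the computation-sharing scheme: the hitchhiker skips planning,
# localization, and map updates while it can track the driver's marker,
# and falls back to full navigation when the marker is lost.

def full_navigation(robot):
    # planning + localization + obstacle avoidance + map update (heavy)
    print(f"{robot}: full navigation step")

def visual_servo(robot, marker_visible):
    # lightweight: steer toward the driver's visual marker only
    if marker_visible:
        print(f"{robot}: visual servoing on driver's marker")
    return marker_visible

def hitchhike(steps, lost_at=None):
    for t in range(steps):
        full_navigation("driver")
        if not visual_servo("hitchhiker", marker_visible=(t != lost_at)):
            print("hitchhiker: driver lost -> resume full navigation")
            full_navigation("hitchhiker")      # recovery from 'driver-lost'

hitchhike(steps=4, lost_at=2)
```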
cellVIEW: a Tool for Illustrative and Multi-Scale Rendering of Large Biomolecular Datasets
Le Muzic, Mathieu; Autin, Ludovic; Parulek, Julius; Viola, Ivan
2017-01-01
In this article we introduce cellVIEW, a new system to interactively visualize large biomolecular datasets on the atomic level. Our tool is unique and has been specifically designed to match the ambitions of our domain experts to model and interactively visualize structures comprising several billion atoms. The cellVIEW system integrates acceleration techniques to allow for real-time graphics performance of 60 Hz display rate on datasets representing large viruses and bacterial organisms. Inspired by the work of scientific illustrators, we propose a level-of-detail scheme whose purpose is twofold: accelerating the rendering and reducing visual clutter. The main part of our datasets is made out of macromolecules, but it also comprises nucleic acid strands which are stored as sets of control points. For that specific case, we extend our rendering method to support the dynamic generation of DNA strands directly on the GPU. It is noteworthy that our tool has been directly implemented inside a game engine. We chose to rely on a third party engine to reduce software development workload and to make bleeding-edge graphics techniques more accessible to the end-users. To our knowledge cellVIEW is the only suitable solution for interactive visualization of large biomolecular landscapes on the atomic level and is freely available to use and extend. PMID:29291131
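A distance-based level-of-detail rule of the general kind the abstract describes can be sketched as follows; the cutoff distances and proxy representations are illustrative, not cellVIEW's actual scheme.

```python
# Sketch of a distance-based level-of-detail rule: far molecules are drawn
# with coarser proxies, accelerating rendering and reducing visual clutter.

def lod_for(distance_to_camera):
    if distance_to_camera < 50:
        return "all-atom spheres"
    elif distance_to_camera < 200:
        return "coarse beads (one per residue)"
    else:
        return "single bounding sphere"

for d in (10, 120, 500):
    print(f"distance {d}: render as {lod_for(d)}")
```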
Selective attention in peacocks during predator detection.
Yorzinski, Jessica L; Platt, Michael L
2014-05-01
Predation can exert strong selective pressure on the evolution of behavioral and morphological traits in birds. Because predator avoidance is key to survival and birds rely heavily on visual perception, predation may have shaped avian visual systems as well. To address this question, we examined the role of visual attention in antipredator behavior in peacocks (Pavo cristatus). Peacocks were exposed to a model predator while their gaze was continuously recorded with a telemetric eye-tracker. We found that peacocks spent more time looking at and made more fixations on the predator compared to the same spatial location before the predator was revealed. The duration of fixations they directed toward conspecifics and environmental features decreased after the predator was revealed, indicating that the peacocks were rapidly scanning their environment with their eyes. Maximum eye movement amplitudes and amplitudes of consecutive saccades were similar before and after the predator was revealed. In cases where conspecifics detected the predator first, peacocks appeared to learn that danger was present by observing conspecifics' antipredator behavior. Peacocks were faster to detect the predator when they were fixating closer to the area where the predator would eventually appear. In addition, pupil size increased after predator exposure, consistent with increased physiological arousal. These findings demonstrate that peacocks selectively direct their attention toward predatory threats and suggest that predation has influenced the evolution of visual orienting systems.
PDF-modulated visual inputs and cryptochrome define diurnal behavior in Drosophila.
Cusumano, Paola; Klarsfeld, André; Chélot, Elisabeth; Picot, Marie; Richier, Benjamin; Rouyer, François
2009-11-01
Morning and evening circadian oscillators control the bimodal activity of Drosophila in light-dark cycles. The lateral neurons evening oscillator (LN-EO) is important for promoting diurnal activity at dusk. We found that the LN-EO autonomously synchronized to light-dark cycles through either the cryptochrome (CRY) that it expressed or the visual system. In conditions in which CRY was not activated, flies depleted for pigment-dispersing factor (PDF) or its receptor lost the evening activity and displayed reversed PER oscillations in the LN-EO. Rescue experiments indicated that normal PER cycling and the presence of evening activity relied on PDF secretion from the large ventral lateral neurons and PDF receptor function in the LN-EO. The LN-EO thus integrates light inputs and PDF signaling to control Drosophila diurnal behavior, revealing a new clock-independent function for PDF.
Sex identification in female crayfish is bimodal
NASA Astrophysics Data System (ADS)
Aquiloni, Laura; Massolo, Alessandro; Gherardi, Francesca
2009-01-01
Sex identification has been studied in several species of crustacean decapods but only seldom was the role of multimodality investigated in a systematic fashion. Here, we analyse the effect of single/combined chemical and visual stimuli on the ability of the crayfish Procambarus clarkii to identify the sex of a conspecific during mating interactions. Our results show that crayfish respond to the offered stimuli depending on their sex. While males rely on olfaction alone for sex identification, females require the combination of olfaction and vision to do so. In the latter, chemical and visual stimuli act as non-redundant signal components that possibly enhance the female ability to discriminate potential mates in the crowded social context experienced during mating period. This is one of the few clear examples in invertebrates of non-redundancy in a bimodal communication system.
Eye-gaze independent EEG-based brain-computer interfaces for communication.
Riccio, A; Mattia, D; Simione, L; Olivetti, M; Cincotti, F
2012-08-01
The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs and the last six on tactile BCIs, respectively. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users' requirements in a real-life scenario.
Eye-gaze independent EEG-based brain-computer interfaces for communication
NASA Astrophysics Data System (ADS)
Riccio, A.; Mattia, D.; Simione, L.; Olivetti, M.; Cincotti, F.
2012-08-01
The present review systematically examines the literature reporting gaze independent interaction modalities in non-invasive brain-computer interfaces (BCIs) for communication. BCIs measure signals related to specific brain activity and translate them into device control signals. This technology can be used to provide users with severe motor disability (e.g. late stage amyotrophic lateral sclerosis (ALS); acquired brain injury) with an assistive device that does not rely on muscular contraction. Most of the studies on BCIs explored mental tasks and paradigms using visual modality. Considering that in ALS patients the oculomotor control can deteriorate and also other potential users could have impaired visual function, tactile and auditory modalities have been investigated over the past years to seek alternative BCI systems which are independent from vision. In addition, various attentional mechanisms, such as covert attention and feature-directed attention, have been investigated to develop gaze independent visual-based BCI paradigms. Three areas of research were considered in the present review: (i) auditory BCIs, (ii) tactile BCIs and (iii) independent visual BCIs. Out of a total of 130 search results, 34 articles were selected on the basis of pre-defined exclusion criteria. Thirteen articles dealt with independent visual BCIs, 15 reported on auditory BCIs and the last six on tactile BCIs, respectively. From the review of the available literature, it can be concluded that a crucial point is represented by the trade-off between BCI systems/paradigms with high accuracy and speed, but highly demanding in terms of attention and memory load, and systems requiring lower cognitive effort but with a limited amount of communicable information. These issues should be considered as priorities to be explored in future studies to meet users’ requirements in a real-life scenario.
ERIC Educational Resources Information Center
Aparicio, Mario; Demont, Elisabeth; Metz-Lutz, Marie-Noëlle; Leybaert, J.; Alegria, Jesús
2014-01-01
During a visual rhyming task, deaf participants traditionally perform more poorly than hearing participants in making rhyme judgements for written words in which the rhyme and the spelling pattern are incongruent (e.g. "hair/bear"). It has been suggested that deaf participants' low accuracy results from their tendency to rely on…
Impaired Visual Expertise for Print in French Adults with Dyslexia as Shown by N170 Tuning
ERIC Educational Resources Information Center
Mahe, Gwendoline; Bonnefond, Anne; Gavens, Nathalie; Dufour, Andre; Doignon-Camus, Nadege
2012-01-01
Efficient reading relies on expertise in the visual word form area, with abnormalities in the functional specialization of this area observed in individuals with developmental dyslexia. We have investigated event related potentials in print tuning in adults with dyslexia, based on their N170 response at 135-255 ms. Control and dyslexic adults…
ERIC Educational Resources Information Center
Brossart, Daniel F.; Parker, Richard I.; Olson, Elizabeth A.; Mahadevan, Lakshmi
2006-01-01
This study explored some practical issues for single-case researchers who rely on visual analysis of graphed data, but who also may consider supplemental use of promising statistical analysis techniques. The study sought to answer three major questions: (a) What is a typical range of effect sizes from these analytic techniques for data from…
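One widely used single-case effect size that can supplement visual analysis of graphed data is the percentage of non-overlapping data (PND); whether it is among the techniques examined in this study is not stated in the truncated abstract, so the sketch below is offered only as a representative example on placeholder phase data.

```python
import numpy as np

# Percentage of non-overlapping data (PND): the share of treatment-phase
# points exceeding the best baseline point. Phase data are illustrative.

baseline = np.array([3, 5, 4, 6, 5])
treatment = np.array([7, 9, 6, 10, 11, 9])

pnd = np.mean(treatment > baseline.max()) * 100
print(f"PND = {pnd:.0f}% of treatment points exceed the baseline maximum")
```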
ERIC Educational Resources Information Center
Wilson, Kristy J.; Rigakos, Bessie
2016-01-01
The scientific process is nonlinear, unpredictable, and ongoing. Assessing the nature of science is difficult with methods that rely on Likert-scale or multiple-choice questions. This study evaluated conceptions about the scientific process using student-created visual representations that we term "flowcharts." The methodology,…
Fast visual prediction and slow optimization of preferred walking speed.
O'Connor, Shawn M; Donelan, J Maxwell
2012-05-01
People prefer walking speeds that minimize energetic cost. This may be accomplished by directly sensing metabolic rate and adapting gait to minimize it, but only slowly due to the compounded effects of sensing delays and iterative convergence. Visual and other sensory information is available more rapidly and could help predict which gait changes reduce energetic cost, but only approximately because it relies on prior experience and an indirect means to achieve economy. We used virtual reality to manipulate visually presented speed while 10 healthy subjects freely walked on a self-paced treadmill to test whether the nervous system beneficially combines these two mechanisms. Rather than manipulating the speed of visual flow directly, we coupled it to the walking speed selected by the subject and then manipulated the ratio between these two speeds. We then quantified the dynamics of walking speed adjustments in response to perturbations of the visual speed. For step changes in visual speed, subjects responded with rapid speed adjustments (lasting <2 s) in a direction opposite to the perturbation and consistent with returning the visually presented speed toward their preferred walking speed: when visual speed was suddenly twice (one-half) the walking speed, subjects decreased (increased) their speed. Subjects did not maintain the new speed but instead gradually returned toward the speed preferred before the perturbation (lasting >300 s). The timing and direction of these responses strongly indicate that a rapid predictive process informed by visual feedback helps select preferred speed, perhaps to complement a slower optimization process that seeks to minimize energetic cost.
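The two-mechanism account suggested by these dynamics can be captured by a toy two-timescale model: a fast process (tau about 2 s) corrects walking speed whenever the visually presented speed departs from its expected value, and a slow process (tau about 300 s) re-anchors that expectation and returns speed to the energetic optimum. The first-order form below is an illustrative choice, not the authors' fitted model.

```python
# Two-timescale sketch: fast visual prediction, slow energetic optimization.

dt = 0.1                                   # s
v_pref = 1.25                              # energetically optimal speed, m/s
tau_fast, tau_slow = 2.0, 300.0            # reported fast/slow timescales
gain = 1.0                                 # visual speed = gain * walking speed
v, v_expected = v_pref, v_pref             # walking speed, expected visual speed

for step in range(6000):                   # 600 s of simulated walking
    if step == 600:
        gain = 2.0                         # t = 60 s: visual speed suddenly doubled
    visual_speed = gain * v
    v += dt * ((v_expected - visual_speed) / tau_fast   # fast visual correction
               + (v_pref - v) / tau_slow)               # slow energetic drift
    v_expected += dt * (visual_speed - v_expected) / tau_slow  # re-anchoring
    if step in (599, 620, 1500, 5999):
        print(f"t={step * dt:6.1f} s   walking speed {v:.2f} m/s")
```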
Visual Sensing for Urban Flood Monitoring
Lo, Shi-Wei; Wu, Jyh-Horng; Lin, Fang-Pang; Hsu, Ching-Han
2015-01-01
With the increasing climatic extremes, the frequency and severity of urban flood events have intensified worldwide. In this study, image-based automated monitoring of flood formation and analyses of water level fluctuation were proposed as value-added intelligent sensing applications to turn a passive monitoring camera into a visual sensor. Combined with the proposed visual sensing method, traditional hydrological monitoring cameras have the ability to sense and analyze the local situation of flood events. This can solve the current problem that image-based flood monitoring heavily relies on continuous manned monitoring. Conventional sensing networks can only offer one-dimensional physical parameters measured by gauge sensors, whereas visual sensors can acquire dynamic image information of monitored sites and provide disaster prevention agencies with actual field information for decision-making to relieve flood hazards. The visual sensing method established in this study provides spatiotemporal information that can be used for automated remote analysis for monitoring urban floods. This paper focuses on the determination of flood formation based on image-processing techniques. The experimental results suggest that the visual sensing approach may be a reliable way for determining the water fluctuation and measuring its elevation and flood intrusion with respect to real-world coordinates. The performance of the proposed method has been confirmed; it has the capability to monitor and analyze the flood status, and therefore, it can serve as an active flood warning system. PMID:26287201
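One plausible image-processing step behind such a visual sensor is mapping a detected waterline row in a pre-marked gauge region of the frame to a real-world elevation. The sketch below uses a simple darkness threshold and a hypothetical per-site calibration; the paper's actual pipeline is more elaborate.

```python
import numpy as np

# Waterline detection sketch: threshold a gauge region row-wise and map the
# first "wet" row to an elevation. Threshold and calibration are illustrative.

frame = np.random.rand(240, 20)          # stand-in for the gauge region of a frame
frame[150:, :] *= 0.3                    # water darkens the submerged rows

wet = frame.mean(axis=1) < 0.25          # row-wise darkness test
waterline_row = int(np.argmax(wet))      # first row classified as water
cm_per_row, row0_elevation = 0.5, 120.0  # hypothetical site calibration
level = row0_elevation - waterline_row * cm_per_row
print(f"waterline at row {waterline_row} -> estimated level {level:.1f} cm")
```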
Guidance of retinal axons in mammals.
Herrera, Eloísa; Erskine, Lynda; Morenilla-Palao, Cruz
2017-11-26
In order to navigate through the surrounding environment many mammals, including humans, primarily rely on vision. The eye, composed of the choroid, sclera, retinal pigmented epithelium, cornea, lens, iris and retina, is the structure that receives the light and converts it into electrical impulses. The retina contains six major types of neurons involved in receiving and modifying visual information and passing it on to higher visual processing centres in the brain. Visual information is relayed to the brain via the axons of retinal ganglion cells (RGCs), a projection known as the optic pathway. The proper formation of this pathway during development is essential for normal vision in the adult individual. Along this pathway there are several points where visual axons face 'choices' in their direction of growth. Understanding how these choices are made has advanced significantly our knowledge of axon guidance mechanisms. Thus, the development of the visual pathway has served as an extremely useful model to reveal general principles of axon pathfinding throughout the nervous system. However, due to its particularities, some cellular and molecular mechanisms are specific for the visual circuit. Here we review both general and specific mechanisms involved in the guidance of mammalian RGC axons when they are traveling from the retina to the brain to establish precise and stereotyped connections that will sustain vision. Copyright © 2017 Elsevier Ltd. All rights reserved.
Affective and contextual values modulate spatial frequency use in object recognition
Caplette, Laurent; West, Gregory; Gomot, Marie; Gosselin, Frédéric; Wicker, Bruno
2014-01-01
Visual object recognition is of fundamental importance in our everyday interaction with the environment. Recent models of visual perception emphasize the role of top-down predictions facilitating object recognition via initial guesses that limit the number of object representations that need to be considered. Several results suggest that this rapid and efficient object processing relies on the early extraction and processing of low spatial frequencies (LSF). The present study aimed to investigate the SF content of visual object representations and its modulation by contextual and affective values of the perceived object during a picture-name verification task. Stimuli consisted of pictures of objects equalized in SF content and categorized as having low or high affective and contextual values. To access the SF content of stored visual representations of objects, SFs of each image were then randomly sampled on a trial-by-trial basis. Results reveal that intermediate SFs between 14 and 24 cycles per object (2.3–4 cycles per degree) are correlated with fast and accurate identification for all categories of objects. Moreover, there was a significant interaction between affective and contextual values over the SFs correlating with fast recognition. These results suggest that affective and contextual values of a visual object modulate the SF content of its internal representation, thus highlighting the flexibility of the visual recognition system. PMID:24904514
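The trial-by-trial SF sampling described above can be approximated by reweighting an image's Fourier amplitude with a random profile over SF bands and then regressing behavior on the sampled profiles. The band count and weight distribution below are illustrative choices, not the authors' exact procedure.

```python
import numpy as np

# Random spatial-frequency sampling of one trial: the image's Fourier
# amplitude is reweighted by a random profile over radial SF bands.

rng = np.random.default_rng(0)
img = rng.random((128, 128))                      # stand-in object image
fy, fx = np.meshgrid(np.fft.fftfreq(128), np.fft.fftfreq(128), indexing="ij")
radius = np.hypot(fx, fy) * 128                   # SF in cycles per image

n_bands = 16
profile = np.clip(rng.normal(0.5, 0.3, n_bands), 0, 1)   # random SF weights
band = np.clip((radius / (64 / n_bands)).astype(int), 0, n_bands - 1)
weights = profile[band]                           # per-coefficient weight

sampled = np.real(np.fft.ifft2(np.fft.fft2(img) * weights))
print("trial SF profile:", profile.round(2))
# accuracy/RT on many such trials can be regressed on the sampled profiles
```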
The Characteristics and Limits of Rapid Visual Categorization
Fabre-Thorpe, Michèle
2011-01-01
Visual categorization appears both effortless and virtually instantaneous. The study by Thorpe et al. (1996) was the first to estimate the processing time necessary to perform fast visual categorization of animals in briefly flashed (20 ms) natural photographs. They observed a large differential EEG activity between target and distracter correct trials that developed from 150 ms after stimulus onset, a value that was later shown to be even shorter in monkeys! With such strong processing time constraints, it was difficult to escape the conclusion that rapid visual categorization was relying on massively parallel, essentially feed-forward processing of visual information. Since 1996, we have conducted a large number of studies to determine the characteristics and limits of fast visual categorization. The present chapter will review some of the main results obtained. I will argue that rapid object categorizations in natural scenes can be done without focused attention and are most likely based on coarse and unconscious visual representations activated with the first available (magnocellular) visual information. Fast visual processing proved efficient for the categorization of large superordinate object or scene categories, but shows its limits when more detailed basic representations are required. The representations for basic objects (dogs, cars) or scenes (mountain or sea landscapes) need additional processing time to be activated. This finding is at odds with the widely accepted idea that such basic representations are at the entry level of the system. Interestingly, focused attention is still not required to perform these time-consuming basic categorizations. Finally we will show that object and context processing can interact very early in an ascending wave of visual information processing. We will discuss how such data could result from our experience with a highly structured and predictable surrounding world that shaped neuronal visual selectivity. PMID:22007180
Gonzalez, Jose; Soma, Hirokazu; Sekine, Masashi; Yu, Wenwei
2012-06-09
Prosthetic hand users have to rely extensively on visual feedback in order to manipulate their prosthetic devices, which seems to lead to a high conscious burden. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at the performance results, not taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without the usage of a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. 10 male subjects (26 +/- years old) participated in this study and were asked to come on 2 consecutive days. On the first day the experiment objective, tasks, and experiment setting were explained. Then, they completed a 30-minute guided training. On the second day each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality. The performance improvements when using auditory cues, along with vision (multimodal feedback), can be attributed to a reduced attentional demand during the task, which can be attributed to a visual "pop-out" or enhancement effect. Also, the NASA TLX, the EEG's Alpha and Beta bands, and the Heart Rate could be used to further evaluate sensory feedback systems in prosthetic applications.
Sadeghi, Zahra; Testolin, Alberto
2017-08-01
In humans, efficient recognition of written symbols is thought to rely on a hierarchical processing system, where simple features are progressively combined into more abstract, high-level representations. Here, we present a computational model of Persian character recognition based on deep belief networks, where increasingly more complex visual features emerge in a completely unsupervised manner by fitting a hierarchical generative model to the sensory data. Crucially, high-level internal representations emerging from unsupervised deep learning can be easily read out by a linear classifier, achieving state-of-the-art recognition accuracy. Furthermore, we tested the hypothesis that handwritten digits and letters share many common visual features: A generative model that captures the statistical structure of the letters distribution should therefore also support the recognition of written digits. To this aim, deep networks trained on Persian letters were used to build high-level representations of Persian digits, which were indeed read out with high accuracy. Our simulations show that complex visual features, such as those mediating the identification of Persian symbols, can emerge from unsupervised learning in multilayered neural networks and can support knowledge transfer across related domains.
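The pipeline described above can be sketched in a few lines: an unsupervised feature learner fitted to images alone, followed by a linear readout of the hidden representation. As a minimal stand-in, the sketch below uses a single restricted Boltzmann machine from scikit-learn rather than the authors' multi-layer deep belief network, and scikit-learn's small digits dataset in place of Persian characters.

```python
# Minimal sketch: unsupervised feature learning + linear readout.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1] for the Bernoulli RBM
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unsupervised stage: fit the RBM on images alone (labels unused).
rbm = BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20,
                   random_state=0)
rbm.fit(X_tr)

# Supervised stage: a linear classifier reads out the hidden representation.
clf = LogisticRegression(max_iter=1000)
clf.fit(rbm.transform(X_tr), y_tr)
print("readout accuracy:", clf.score(rbm.transform(X_te), y_te))
```

The same two-stage structure carries over to the transfer experiment: a feature learner fitted to letters can be frozen and read out on digits.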
Visual Working Memory Enhances the Neural Response to Matching Visual Input.
Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp
2017-07-12
Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content.
Using Heuristic Evaluation to Improve Sepsis Alert Usability.
Pertiwi, Ariani Arista Putri; Fraczkowski, Dan; Stogis, Sheryl L; Lopez, Karen Dunn
2018-06-01
Sepsis, life-threatening organ dysfunction in response to infection, is an alarmingly common and aggressive illness in US hospitals, especially for intensive care patients. Preventing sepsis deaths rests on the clinicians' ability to promptly recognize and treat sepsis. To aid early recognition, many organizations have employed clinician-facing electronic sepsis alert systems. However, the effectiveness of the alert relies heavily on the visual interface, textual information, and overall usability. This article reports a usability inspection of a sepsis alert system. The authors found violations in 12 of the 14 usability principles and promote use of this method in practice to systematically identify usability problems.
Euro Banknote Recognition System for Blind People
Dunai Dunai, Larisa; Chillarón Pérez, Mónica; Peris-Fajarnés, Guillermo; Lengua Lengua, Ismael
2017-01-01
This paper presents the development of a portable system with the aim of allowing blind people to detect and recognize Euro banknotes. The developed device is based on a Raspberry Pi electronic instrument and a Raspberry Pi camera, Pi NoIR (No Infrared filter) dotted with additional infrared light, which is embedded into a pair of sunglasses that permit blind and visually impaired people to independently handle Euro banknotes, especially when receiving their cash back when shopping. The banknote detection is based on the modified Viola and Jones algorithms, while the banknote value recognition relies on the Speed Up Robust Features (SURF) technique. The accuracies of banknote detection and banknote value recognition are 84% and 97.5%, respectively. PMID:28117703
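A hedged sketch of the two-stage pipeline the abstract describes (cascade-based detection followed by SURF keypoint matching for value recognition) might look as follows. The cascade file, input frame, and reference descriptor set are hypothetical placeholders, and SURF_create requires an opencv-contrib build.

```python
# Two-stage banknote pipeline: cascade detection, then SURF matching.
import cv2

cascade = cv2.CascadeClassifier("banknote_cascade.xml")  # hypothetical trained cascade
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
detections = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2)

# Reference descriptors per denomination, precomputed offline (placeholder).
references = {}  # e.g. {"EUR20": desc_20, "EUR50": desc_50}

for (x, y, w, h) in detections:
    roi = frame[y:y + h, x:x + w]
    _, desc = surf.detectAndCompute(roi, None)
    if desc is None:
        continue
    # Score each denomination by its number of ratio-test matches
    # (assumes each query descriptor yields two neighbours).
    best = max(references,
               key=lambda k: sum(1 for m, n in matcher.knnMatch(desc, references[k], k=2)
                                 if m.distance < 0.7 * n.distance),
               default=None)
    print("detected banknote:", best)
```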
A simple white noise analysis of neuronal light responses.
Chichilnisky, E J
2001-05-01
A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
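The core of the technique, the spike-triggered average, is straightforward to illustrate. The sketch below simulates a linear-nonlinear-Poisson neuron driven by white noise (standing in for the retinal recordings used in the paper) and recovers its filter by averaging the stimulus segments preceding each spike.

```python
# White-noise analysis sketch: recover a filter via the spike-triggered average.
import numpy as np

rng = np.random.default_rng(0)
T, lags = 100_000, 20
stimulus = rng.standard_normal(T)            # white-noise stimulus
true_filter = np.exp(-np.arange(lags) / 4.0) * np.sin(np.arange(lags) / 2.0)

# Generate spikes from a linear-nonlinear-Poisson model.
drive = np.convolve(stimulus, true_filter)[:T]
rate = np.exp(drive - 2.0)                   # static exponential nonlinearity
spikes = rng.poisson(rate)

# Spike-triggered average: mean stimulus segment preceding each spike.
sta = np.zeros(lags)
for t in np.nonzero(spikes)[0]:
    if t >= lags:
        sta += spikes[t] * stimulus[t - lags + 1:t + 1][::-1]
sta /= spikes[lags:].sum()
print("correlation with true filter:", np.corrcoef(sta, true_filter)[0, 1])
```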
How do schizophrenia patients use visual information to decode facial emotion?
Lee, Junghee; Gosselin, Frédéric; Wynn, Jonathan K; Green, Michael F
2011-09-01
Impairment in recognizing facial emotions is a prominent feature of schizophrenia patients, but the underlying mechanism of this impairment remains unclear. This study investigated the specific aspects of visual information that are critical for schizophrenia patients to recognize emotional expression. Using the Bubbles technique, we probed the use of visual information during a facial emotion discrimination task (fear vs. happy) in 21 schizophrenia patients and 17 healthy controls. Visual information was sampled through randomly located Gaussian apertures (or "bubbles") at 5 spatial frequency scales. Online calibration of the amount of face exposed through bubbles was used to ensure 75% overall accuracy for each subject. Least-square multiple linear regression analyses between sampled information and accuracy were performed to identify critical visual information that was used to identify emotional expression. To accurately identify emotional expression, schizophrenia patients required more exposure of facial areas (i.e., more bubbles) compared with healthy controls. To identify fearful faces, schizophrenia patients relied less on bilateral eye regions at high-spatial frequency compared with healthy controls. For identification of happy faces, schizophrenia patients relied on the mouth and eye regions; healthy controls did not utilize eyes and used the mouth much less than patients did. Schizophrenia patients needed more facial information to recognize emotional expression of faces. In addition, patients differed from controls in their use of high-spatial frequency information from eye regions to identify fearful faces. This study provides direct evidence that schizophrenia patients employ an atypical strategy of using visual information to recognize emotional faces.
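A minimal sketch of the Bubbles sampling idea: visual information is revealed only through randomly located Gaussian apertures. For brevity this assumes a single spatial frequency scale, whereas the study sampled at five scales.

```python
# Bubbles-style stimulus sampling through random Gaussian apertures.
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Sum of randomly located Gaussian apertures, clipped to [0, 1]."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

rng = np.random.default_rng(1)
face = rng.random((128, 128))          # stand-in for a face image
mask = bubbles_mask(face.shape, n_bubbles=15, sigma=8.0, rng=rng)
stimulus = face * mask                 # only the "bubbled" regions are visible
```

Regressing trial-by-trial accuracy on the sampled mask locations then yields the classification image of facial regions that drive correct responses.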
A comparison of visuomotor cue integration strategies for object placement and prehension.
Greenwald, Hal S; Knill, David C
2009-01-01
Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for grasping. Subjects placed an object on or picked up a disc in a virtual environment. On some trials, the monocular cues (aspect ratio and texture compression) and binocular cues (e.g., binocular disparity) suggested slightly different 3D orientations for the disc; these conflicts either were present upon initial stimulus presentation or were introduced after movement initiation, which allowed us to quantify how information from the cues accumulated over time. We analyzed the time-varying orientations of subjects' fingers in the grasping task and those of the object in the object placement task to quantify how different visual cues influenced motor control. In the first experiment, different subjects performed each task, and those performing the grasping task relied on binocular information more when orienting their hands than those performing the object placement task. When subjects in the second experiment performed both tasks in interleaved sessions, binocular cues were still more influential during grasping than object placement, and the different cue integration strategies observed for each task in isolation were maintained. In both experiments, the temporal analyses showed that subjects processed binocular information faster than monocular information, but task demands did not affect the time course of cue processing. How one uses visual cues for motor control depends on the task being performed, although how quickly the information is processed appears to be task invariant.
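Studies of this kind are usually framed by the standard reliability-weighted (maximum-likelihood) cue-combination model, in which each cue's estimate is weighted by its inverse variance. The paper measures cue influence empirically rather than assuming this model, but the sketch below shows the benchmark computation; the example values are hypothetical.

```python
# Reliability-weighted cue combination (the standard MLE model).
import numpy as np

def combine_cues(estimates, variances):
    """Weight each cue estimate by its inverse variance (reliability)."""
    weights = 1.0 / np.asarray(variances, dtype=float)
    weights /= weights.sum()
    return float(np.dot(weights, estimates))

# Hypothetical monocular and binocular slant estimates (degrees) with
# hypothetical noise variances; binocular is more reliable here.
slant = combine_cues(estimates=[32.0, 28.0], variances=[9.0, 4.0])
print(f"combined slant estimate: {slant:.1f} deg")
```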
Episodic Memory Retrieval Functionally Relies on Very Rapid Reactivation of Sensory Information.
Waldhauser, Gerd T; Braun, Verena; Hanslmayr, Simon
2016-01-06
Episodic memory retrieval is assumed to rely on the rapid reactivation of sensory information that was present during encoding, a process termed "ecphory." We investigated the functional relevance of this scarcely understood process in two experiments in human participants. We presented stimuli to the left or right of fixation at encoding, followed by an episodic memory test with centrally presented retrieval cues. This allowed us to track the reactivation of lateralized sensory memory traces during retrieval. Successful episodic retrieval led to a very early (∼100-200 ms) reactivation of lateralized alpha/beta (10-25 Hz) electroencephalographic (EEG) power decreases in the visual cortex contralateral to the visual field at encoding. Applying rhythmic transcranial magnetic stimulation to interfere with early retrieval processing in the visual cortex led to decreased episodic memory performance specifically for items encoded in the visual field contralateral to the site of stimulation. These results demonstrate, for the first time, that episodic memory functionally relies on very rapid reactivation of sensory information. Remembering personal experiences requires a "mental time travel" to revisit sensory information perceived in the past. This process is typically described as a controlled, relatively slow process. However, by using electroencephalography to measure neural activity with high time resolution, we show that such episodic retrieval entails a very rapid reactivation of sensory brain areas. Using transcranial magnetic stimulation to alter brain function during retrieval revealed that this early sensory reactivation is causally relevant for conscious remembering. These results provide the first neural evidence for a functional, preconscious component of episodic remembering. This provides new insight into the nature of human memory and may help in the understanding of psychiatric conditions that involve the automatic intrusion of unwanted memories.
Caspers, Julian; Zilles, Karl; Amunts, Katrin; Laird, Angela R.; Fox, Peter T.; Eickhoff, Simon B.
2016-01-01
The ventral stream of the human extrastriate visual cortex shows considerable functional heterogeneity from early visual processing (posterior) to higher, domain-specific processing (anterior). The fusiform gyrus hosts several of those “high-level” functional areas. We recently found a subdivision of the posterior fusiform gyrus on the microstructural level, that is, two distinct cytoarchitectonic areas, FG1 and FG2 (Caspers et al., Brain Structure & Function, 2013). To gain a first insight into the function of these two areas, here we studied their behavioral involvement and coactivation patterns by means of meta-analytic connectivity modeling based on the BrainMap database (www.brainmap.org), using probabilistic maps of these areas as seed regions. The coactivation patterns of the areas support the concept of a common involvement in a core network subserving different cognitive tasks, that is, object recognition, visual language perception, or visual attention. In addition, the analysis supports the previous cytoarchitectonic parcellation, indicating that FG1 appears as a transitional area between early and higher visual cortex and FG2 as a higher-order one. The latter area is furthermore lateralized, as it shows strong relations to the visual language processing system in the left hemisphere, while its right side is more strongly associated with face-selective regions. These findings indicate that functional lateralization of area FG2 relies on a different pattern of connectivity rather than side-specific cytoarchitectonic features. PMID:24038902
Dynamic reweighting of three modalities for sensor fusion.
Hwang, Sungjae; Agada, Peter; Kiemel, Tim; Jeka, John J
2014-01-01
We simultaneously perturbed visual, vestibular and proprioceptive modalities to understand how sensory feedback is re-weighted so that overall feedback remains suited to stabilizing upright stance. Ten healthy young subjects received an 80 Hz vibratory stimulus to their bilateral Achilles tendons (stimulus turns on-off at 0.28 Hz), a ± 1 mA binaural monopolar galvanic vestibular stimulus at 0.36 Hz, and a visual stimulus at 0.2 Hz during standing. The visual stimulus was presented at different amplitudes (0.2, 0.8 deg rotation about ankle axis) to measure: the change in gain (weighting) to vision, an intramodal effect; and a change in gain to vibration and galvanic vestibular stimulation, both intermodal effects. The results showed a clear intramodal visual effect, indicating a de-emphasis on vision when the amplitude of visual stimulus increased. At the same time, an intermodal visual-proprioceptive reweighting effect was observed with the addition of vibration, which is thought to change proprioceptive inputs at the ankles, forcing the nervous system to rely more on vision and vestibular modalities. Similar intermodal effects for visual-vestibular reweighting were observed, suggesting that vestibular information is not a "fixed" reference, but is dynamically adjusted in the sensor fusion process. This is the first time, to our knowledge, that the interplay between the three primary modalities for postural control has been clearly delineated, illustrating a central process that fuses these modalities for accurate estimates of self-motion.
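In such a frequency-tagged design, each modality's contribution can be read out from the Fourier component of body sway at its stimulus frequency. The sketch below illustrates this readout on a synthetic sway signal using the tag frequencies from the abstract; actual gain estimates would relate these response amplitudes to the stimulus amplitudes.

```python
# Read out modality-specific sway responses at the tagged frequencies.
import numpy as np

fs, dur = 100.0, 250.0                     # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
freqs = {"proprioceptive": 0.28, "vestibular": 0.36, "visual": 0.2}

# Synthetic sway containing a response at each tagged frequency plus noise.
rng = np.random.default_rng(4)
sway = sum(a * np.sin(2 * np.pi * f * t)
           for a, f in zip([0.5, 0.3, 0.8], freqs.values()))
sway += 0.2 * rng.standard_normal(t.size)

spectrum = np.fft.rfft(sway) / (t.size / 2)     # amplitude-normalized FFT
fft_freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, f in freqs.items():
    amp = np.abs(spectrum[np.argmin(np.abs(fft_freqs - f))])
    print(f"{name} response amplitude: {amp:.2f}")
```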
Cadieu, Charles F.; Hong, Ha; Yamins, Daniel L. K.; Pinto, Nicolas; Ardila, Diego; Solomon, Ethan A.; Majaj, Najib J.; DiCarlo, James J.
2014-01-01
The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds. PMID:25521294
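The comparison logic can be sketched as scoring each representation by the cross-validated accuracy of a linear readout of fixed complexity. The feature matrices below are random placeholders for IT multi-unit responses and DNN layer activations on a common image set, so the printed scores will sit at chance; the point is the shared evaluation pipeline, not the numbers.

```python
# Score two representations by linear-readout generalization accuracy.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def representational_score(features, labels):
    """Cross-validated accuracy of a regularized linear readout."""
    clf = LinearSVC(C=1.0, max_iter=10_000)
    return cross_val_score(clf, features, labels, cv=5).mean()

rng = np.random.default_rng(0)
labels = rng.integers(0, 8, size=400)              # 8 object categories
it_features = rng.standard_normal((400, 168))      # placeholder IT multi-units
dnn_features = rng.standard_normal((400, 4096))    # placeholder DNN layer

print("IT score: ", representational_score(it_features, labels))
print("DNN score:", representational_score(dnn_features, labels))
```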
ERIC Educational Resources Information Center
Hirose, Nobuyuki; Kihara, Ken; Mima, Tatsuya; Ueki, Yoshino; Fukuyama, Hidenao; Osaka, Naoyuki
2007-01-01
Object substitution masking is a form of visual backward masking in which a briefly presented target is rendered invisible by a lingering mask that is too sparse to produce lower image-level interference. Recent studies suggested the importance of an updating process in a higher object-level representation, which should rely on the processing of…
ERIC Educational Resources Information Center
Zenkov, Kristien; Ewaida, Marriam; Lynch, Megan R.; Bell, Athene; Harmon, James; Pellegrino, Anthony; Sell, Corey
2014-01-01
Relying on a critical pedagogy framework and youth participatory action research (YPAR) and visual sociology methods, the authors of this article--teachers, teacher educators, and community activists--have worked with photo elicitation methods and young adults in the USA and Haiti to document youths' impressions of the purposes of, supports for,…
ERIC Educational Resources Information Center
Gross, M. Melissa; Wright, Mary C.; Anderson, Olivia S.
2017-01-01
Research on the benefits of visual learning has relied primarily on lecture-based pedagogy, but the potential benefits of combining active learning strategies with visual and verbal materials on learning anatomy have not yet been explored. In this study, the differential effects of text-based and image-based active learning exercises on examination…
Eye Contact Is Crucial for Referential Communication in Pet Dogs.
Savalli, Carine; Resende, Briseida; Gaunet, Florence
2016-01-01
Dogs discriminate human direction-of-attention cues, such as body, gaze, head, and eye orientation, in several circumstances. Eye contact in particular seems to provide information on human readiness to communicate; when such an ostensive cue is present, dogs tend to follow human communicative gestures more often. However, little is known about how such cues influence the production of communicative signals (e.g. gaze alternation and sustained gaze) in dogs. In the current study, in order to obtain unreachable food, dogs needed to communicate with their owners in several conditions that differed according to the direction of the owners' visual cues, namely gaze, head, eyes, and availability to make eye contact. Results provided evidence that pet dogs did not rely on details of the owners' direction of visual attention. Instead, they relied on the whole combination of visual cues and especially on the owners' availability to make eye contact. Dogs increased visual communicative behaviors when they established eye contact with their owners, a different strategy from that of apes and baboons, which intensify vocalizations and gestures when the human is not visually attending. The difference in strategy is possibly due to their distinct status: domesticated vs. wild. Results are discussed taking into account the ecological relevance of the task, since pet dogs live in human environments and face similar situations on a daily basis throughout their lives.
Perceptual Averaging in Individuals with Autism Spectrum Disorder.
Corbett, Jennifer E; Venuti, Paola; Melcher, David
2016-01-01
There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies in individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above-chance accuracy in recalling the mean size of a set of circles (mean task) despite poor accuracy in recalling individual circle sizes (member task). In Experiment 2, their judgments of single circle size were biased by mean size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment.
Size Constancy in Bat Biosonar? Perceptual Interaction of Object Aperture and Distance
Heinrich, Melina; Wiegrebe, Lutz
2013-01-01
Perception and encoding of object size is an important feature of sensory systems. In the visual system object size is encoded by the visual angle (visual aperture) on the retina, but the aperture depends on the distance of the object. As object distance is not unambiguously encoded in the visual system, higher computational mechanisms are needed. This phenomenon is termed “size constancy”. It is assumed to reflect an automatic re-scaling of visual aperture with perceived object distance. Recently, it was found that in echolocating bats, the ‘sonar aperture’, i.e., the range of angles from which sound is reflected from an object back to the bat, is unambiguously perceived and neurally encoded. Moreover, it is well known that object distance is accurately perceived and explicitly encoded in bat sonar. Here, we addressed size constancy in bat biosonar, recruiting virtual-object techniques. Bats of the species Phyllostomus discolor learned to discriminate two simple virtual objects that only differed in sonar aperture. Upon successful discrimination, test trials were randomly interspersed using virtual objects that differed in both aperture and distance. It was tested whether the bats spontaneously assigned absolute width information to these objects by combining distance and aperture. The results showed that while the isolated perceptual cues encoding object width, aperture, and distance were all perceptually well resolved by the bats, the animals did not assign absolute width information to the test objects. This lack of sonar size constancy may result from the bats relying on different modalities to extract size information at different distances. Alternatively, it is conceivable that familiarity with a behaviorally relevant, conspicuous object is required for sonar size constancy, as it has been argued for visual size constancy. Based on the current data, it appears that size constancy is not necessarily an essential feature of sonar perception in bats. PMID:23630598
A Methodology to Analyze Photovoltaic Tracker Uptime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, Matthew T; Ruth, Dan
A metric is developed to analyze the daily performance of single-axis photovoltaic (PV) trackers. The metric relies on comparing correlations between the daily time series of the PV power output and an array of simulated plane-of-array irradiances for the given day. Mathematical thresholds and a logic sequence are presented, so the daily tracking metric can be applied in an automated fashion on large-scale PV systems. The results of applying the metric are visually examined against the time series of the power output data for a large number of days and for various systems. The visual inspection results suggest that, overall, the algorithm is accurate in identifying stuck or functioning trackers on clear-sky days. Visual inspection also shows that there are days not classified by the metric where the power output data may be sufficient to identify a stuck tracker. Based on the daily tracking metric, uptime results are calculated for 83 different inverters at 34 PV sites. The mean tracker uptime is calculated at 99% based on 2 different calculation methods. The daily tracking metric clearly has limitations, but as there are no existing metrics in the literature, it provides a valuable tool for flagging stuck trackers.
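A simplified version of the daily metric might look like the sketch below: correlate the measured daily power profile with simulated plane-of-array irradiance for a tracking and a stuck geometry, and classify the day by whichever correlates better. The margin threshold and the synthetic clear-sky profiles are illustrative, not the paper's actual values.

```python
# Simplified daily tracking metric via profile correlations.
import numpy as np

def classify_day(power, poa_tracking, poa_stuck, margin=0.05):
    r_track = np.corrcoef(power, poa_tracking)[0, 1]
    r_stuck = np.corrcoef(power, poa_stuck)[0, 1]
    if r_track - r_stuck > margin:
        return "tracking"
    if r_stuck - r_track > margin:
        return "stuck"
    return "unclassified"   # e.g. cloudy days with ambiguous profiles

# Example with synthetic clear-sky-like daylight profiles.
t = np.linspace(-1, 1, 200)
poa_tracking = np.clip(1 - t ** 4, 0, None)        # broad, flat-topped profile
poa_stuck = np.clip(1 - (t + 0.4) ** 2, 0, None)   # skewed fixed-tilt profile
power = poa_tracking + 0.02 * np.random.default_rng(2).standard_normal(t.size)
print(classify_day(power, poa_tracking, poa_stuck))
```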
Updating visual memory across eye movements for ocular and arm motor control.
Thompson, Aidan A; Henriques, Denise Y P
2008-11-01
Remembered object locations are stored in an eye-fixed reference frame, so that every time the eyes move, spatial representations must be updated for the arm-motor system to reflect the target's new relative position. To date, studies have not investigated how the brain updates these spatial representations during other types of eye movements, such as smooth-pursuit. Further, it is unclear what information is used in spatial updating. To address these questions we investigated whether remembered locations of pointing targets are updated following smooth-pursuit eye movements, as they are following saccades, and also investigated the role of visual information in estimating eye-movement amplitude for updating spatial memory. Misestimates of eye-movement amplitude were induced when participants visually tracked stimuli presented with a background that moved in either the same or opposite direction of the eye before pointing or looking back to the remembered target location. We found that gaze-dependent pointing errors were similar following saccades and smooth-pursuit and that incongruent background motion did result in a misestimate of eye-movement amplitude. However, the background motion had no effect on spatial updating for pointing, but did when subjects made a return saccade, suggesting that the oculomotor and arm-motor systems may rely on different sources of information for spatial updating.
Cytoskeleton and Cytoskeleton-Bound RNA Visualization in Frog and Insect Oocytes.
Kloc, Malgorzata; Bilinski, Szczepan; Kubiak, Jacek Z
2016-01-01
The majority of oocyte functions involves and depends on the cytoskeletal elements, which include microtubules and actin and cytokeratin filaments. Various structures and molecules are temporarily or permanently bound to the cytoskeletal elements and their functions rely on cytoskeleton integrity and its timely assembly. Thus the accurate visualization of cytoskeleton is often crucial for studies and analyses of oocyte structure and functions. Here we describe several reliable methods for microtubule and/or microfilaments preservation and visualization in Xenopus oocyte extracts, and in situ in live and fixed insect and frog (Xenopus) oocytes. In addition, we describe visualization of cytoskeleton-bound RNAs using molecular beacons in live Xenopus oocytes.
Visual salience metrics for image inpainting
NASA Astrophysics Data System (ADS)
Ardis, Paul A.; Singhal, Amit
2009-01-01
Quantitative metrics for successful image inpainting currently do not exist, with researchers instead relying upon qualitative human comparisons to evaluate their methodologies and techniques. In an attempt to rectify this situation, we propose two new metrics to capture the notions of noticeability and visual intent in order to evaluate inpainting results. The proposed metrics use a quantitative measure of visual salience based upon a computational model of human visual attention. We demonstrate how these two metrics repeatably correlate with qualitative opinion in a human observer study, correctly identify the optimum uses for exemplar-based inpainting (as specified in the original publication), and match qualitative opinion in published examples.
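One way to realize the "noticeability" notion is to weight pixel-wise differences between the original and inpainted images by a computational saliency map. The sketch below uses the spectral residual saliency implementation from opencv-contrib as a stand-in for the visual attention model the paper builds on, and assumes 3-channel images.

```python
# Saliency-weighted inpainting error: salient differences count more.
import cv2
import numpy as np

def saliency_weighted_error(original, inpainted):
    sal_model = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, saliency = sal_model.computeSaliency(inpainted)
    diff = np.abs(original.astype(float) - inpainted.astype(float)).mean(axis=-1)
    return float((saliency * diff).sum() / saliency.sum())

# Toy example: a gray image "inpainted" with a slightly off patch.
original = np.full((128, 128, 3), 128, np.uint8)
inpainted = original.copy()
inpainted[40:60, 40:60] = 150
print(saliency_weighted_error(original, inpainted))
```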
Bateman, J; Proctor, M; Buchnev, O; Podoliak, N; D'Alessandro, G; Kaczmarek, M
2014-07-01
The voltage transfer function is a rapid and visually effective method to determine the electrical response of liquid crystal (LC) systems using optical measurements. This method relies on cross-polarized intensity measurements as a function of the frequency and amplitude of the voltage applied to the device. Coupled with a mathematical model of the device, it can be used to determine the device time constants and electrical properties. We validate the method using photorefractive LC cells and determine the main time constants and the voltage dropped across the layers using a simple nonlinear filter model.
Olsson, Pontus; Nysjö, Fredrik; Hirsch, Jan-Michaél; Carlbom, Ingrid B
2013-11-01
Cranio-maxillofacial (CMF) surgery to restore normal skeletal anatomy in patients with serious trauma to the face can be both complex and time-consuming, but it is generally accepted that careful pre-operative planning leads to a better outcome with a higher degree of function and reduced morbidity, in addition to reduced time in the operating room. However, today's surgery planning systems are primitive, relying mostly on the user's ability to plan complex tasks with a two-dimensional graphical interface. We present a system for planning the restoration of skeletal anatomy in facial trauma patients using a virtual model derived from patient-specific CT data. The system combines stereo visualization with six-degrees-of-freedom, high-fidelity haptic feedback that enables analysis, planning, and preoperative testing of alternative solutions for restoring bone fragments to their proper positions. The stereo display provides accurate visual spatial perception, and the haptics system provides intuitive haptic feedback when bone fragments are in contact, as well as six-degrees-of-freedom attraction forces for precise bone fragment alignment. A senior surgeon without prior experience of the system received 45 min of training, after which he completed, in 22 min, a virtual reconstruction of a complex mandibular fracture with an adequately reduced result. This preliminary test with one surgeon indicates that our surgery planning system, which combines stereo visualization with sophisticated haptics, has the potential to become a powerful tool for CMF surgery planning. With little training, it allows a surgeon to complete a complex plan in a short amount of time.
Adding statistical regularity results in a global slowdown in visual search.
Vaskevich, Anna; Luria, Roy
2018-05-01
Current statistical learning theories predict that embedding implicit regularities within a task should further improve online performance, beyond general practice. We challenged this assumption by contrasting performance in a visual search task containing either a consistent-mapping (regularity) condition, a random-mapping condition, or both conditions, mixed. Surprisingly, performance in a random visual search, without any regularity, was better than performance in a mixed design search that contained a beneficial regularity. This result was replicated using different stimuli and different regularities, suggesting that mixing consistent and random conditions leads to an overall slowing down of performance. Relying on the predictive-processing framework, we suggest that this global detrimental effect depends on the validity of the regularity: when its predictive value is low, as it is in the case of a mixed design, reliance on all prior information is reduced, resulting in a general slowdown. Our results suggest that our cognitive system does not maximize speed, but rather continues to gather and implement statistical information at the expense of a possible slowdown in performance. Copyright © 2018 Elsevier B.V. All rights reserved.
Canessa, Andrea; Gibaldi, Agostino; Chessa, Manuela; Fato, Marco; Solari, Fabio; Sabatini, Silvio P.
2017-01-01
Binocular stereopsis is the ability of a visual system, belonging to a live being or a machine, to interpret the different visual information deriving from two eyes/cameras for depth perception. From this perspective, the ground-truth information about three-dimensional visual space, which is hardly available, is an ideal tool both for evaluating human performance and for benchmarking machine vision algorithms. In the present work, we implemented a rendering methodology in which the camera pose mimics realistic eye pose for a fixating observer, thus including convergent eye geometry and cyclotorsion. The virtual environment we developed relies on highly accurate 3D virtual models, and its full controllability allows us to obtain the stereoscopic pairs together with the ground-truth depth and camera pose information. We thus created a stereoscopic dataset: GENUA PESTO—GENoa hUman Active fixation database: PEripersonal space STereoscopic images and grOund truth disparity. The dataset aims to provide a unified framework useful for a number of problems relevant to human and computer vision, from scene exploration and eye movement studies to 3D scene reconstruction. PMID:28350382
Military readiness: an exploration of the relationship between marksmanship and visual acuity.
Wells, Kenney H; Wagner, Heidi; Reich, Lewis N; Hardigan, Patrick C
2009-04-01
The United States military relies on visual acuity standards to assess enlistment induction and military occupational specialty eligibility, as well as to monitor soldiers' combat vision readiness. However, these vision standards are not evidence based and may not accurately reflect appropriate standards for military readiness or reflect a correlation between visual acuity and occupational performance. The aim of this study was to investigate the relationship between visual acuity and marksmanship performance using a single blind trial with the Engagement Skills Trainer 2000. Marksmanship performance was evaluated in 28 subjects under simulated day and night conditions with habitual spectacle prescription and contact lenses that created visual blur. Panel Poisson regression using an independent correlation structure revealed significant differences (p < 0.001) as visual acuity decreased from 20/25 to 20/50. We conclude that marksmanship performance decreases as visual acuity decreases. We believe that this relationship supports the use of a visual acuity requirement.
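The reported analysis, a panel Poisson regression with an independent working correlation structure, corresponds to a generalized estimating equations (GEE) fit; a sketch using statsmodels follows. The column names and synthetic data are hypothetical stand-ins for the study's repeated-measures design.

```python
# Panel Poisson regression (GEE with independence working correlation).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical panel data: repeated day/night sessions per subject.
rng = np.random.default_rng(0)
n, sessions = 28, 6
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), sessions),
    "acuity": np.tile([25, 25, 30, 30, 50, 50], n),  # denominator of 20/x
    "night": np.tile([0, 1, 0, 1, 0, 1], n),
})
df["hits"] = rng.poisson(np.exp(2.0 - 0.02 * df["acuity"] - 0.3 * df["night"]))

model = smf.gee("hits ~ acuity + night", groups="subject", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Independence())
print(model.fit().summary())
```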
Creating accessible science museums with user-activated environmental audio beacons (ping!).
Landau, Steven; Wiener, William; Naghshineh, Koorosh; Giusti, Ellen
2005-01-01
In 2003, Touch Graphics Company carried out research on a new invention that promises to improve accessibility to science museums for visitors who are visually impaired. The system, nicknamed Ping!, allows users to navigate an exhibit area, listen to audio descriptions, and interact with exhibits using a cell phone-based interface. The system relies on computer telephony, and it incorporates a network of wireless environmental audio beacons that can be triggered by users wishing to travel to destinations they choose. User testing indicates that the system is effective, both as a way-finding tool and as a means of providing accessible information on museum content. Follow-up development projects will determine if this approach can be successfully implemented in other settings and for other user populations.
Salimi, Zohreh; Ferguson-Pell, Martin
2018-06-01
Although wheelchair ergometers provide a safe and controlled environment for studying or training wheelchair users, until recently they had a major disadvantage in only being capable of simulating straight-line wheelchair propulsion. Virtual reality has helped overcome this problem and broaden the usability of wheelchair ergometers. However, for a wheelchair ergometer to be validly used in research studies, it needs to be able to simulate the biomechanics of real-world wheelchair propulsion. In this paper, three versions of a wheelchair simulator were developed. They provide a sophisticated wheelchair ergometer in an immersive virtual reality environment. They are intended for manual wheelchair propulsion and all are able to simulate simple translational inertia. In addition, each of the systems reported uses a different approach to simulate wheelchair rotation and accommodate rotational inertial effects. The first system does not provide extra resistance against rotation and relies merely on linear inertia, hypothesizing that it can provide an acceptable replication of the biomechanics of wheelchair maneuvers. The second and third systems, however, are designed to simulate rotational inertia. System II uses mechanical compensation, and System III uses visual compensation, simulating the influence that rotational inertia has on the visual perception of wheelchair movement in response to rotation at different speeds.
Photoacoustic characterization of radiofrequency ablation lesions
NASA Astrophysics Data System (ADS)
Bouchard, Richard; Dana, Nicholas; Di Biase, Luigi; Natale, Andrea; Emelianov, Stanislav
2012-02-01
Radiofrequency ablation (RFA) procedures are used to destroy abnormal electrical pathways in the heart that can cause cardiac arrhythmias. Current methods relying on fluoroscopy, echocardiography and electrical conduction mapping are unable to accurately assess ablation lesion size. In an effort to better visualize RFA lesions, photoacoustic (PA) and ultrasonic (US) imaging were utilized to obtain co-registered images of ablated porcine cardiac tissue. The left ventricular free wall of fresh (i.e., never frozen) porcine hearts was harvested within 24 hours of the animals' sacrifice. A THERMOCOOL® Ablation System (Biosense Webster, Inc.) operating at 40 W for 30-60 s was used to induce lesions through the endocardial and epicardial walls of the cardiac samples. Following lesion creation, the ablated tissue samples were placed in 25 °C saline to allow for multi-wavelength PA imaging. Samples were imaged with a Vevo® 2100 ultrasound system (VisualSonics, Inc.) using a modified 20-MHz array that could provide laser irradiation to the sample from a pulsed tunable laser (Newport Corp.) to allow for co-registered photoacoustic-ultrasound (PAUS) imaging. PA imaging was conducted from 750-1064 nm, with a surface fluence of approximately 15 mJ/cm2 maintained during imaging. In this preliminary study with PA imaging, the ablated region could be well visualized on the surface of the sample, with contrasts of 6-10 dB achieved at 750 nm. Although imaging penetration depth is a concern, PA imaging shows promise in being able to reliably visualize RF ablation lesions.
Audiovisual speech perception development at varying levels of perceptual processing
Lalonde, Kaylah; Holt, Rachael Frush
2016-01-01
This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children. PMID:27106318
Rösler, Lara; Rolfs, Martin; van der Stigchel, Stefan; Neggers, Sebastiaan F. W.; Cahn, Wiepke; Kahn, René S.
2015-01-01
Corollary discharge (CD) refers to “copies” of motor signals sent to sensory areas, allowing prediction of future sensory states. They enable the putative mechanisms supporting the distinction between self-generated and externally generated sensations. Accordingly, many authors have suggested that disturbed CD engenders psychotic symptoms of schizophrenia, which are characterized by agency distortions. CD also supports perceived visual stability across saccadic eye movements and is used to predict the postsaccadic retinal coordinates of visual stimuli, a process called remapping. We tested whether schizophrenia patients (SZP) show remapping disturbances as evidenced by systematic transsaccadic mislocalizations of visual targets. SZP and healthy controls (HC) performed a task in which a saccadic target disappeared upon saccade initiation and, after a brief delay, reappeared at a horizontally displaced position. HC judged the direction of this displacement accurately, despite spatial errors in saccade landing site, indicating that their comparison of the actual to predicted postsaccadic target location relied on accurate CD. SZP performed worse and relied more on saccade landing site as a proxy for the presaccadic target, consistent with disturbed CD. This remapping failure was strongest in patients with more severe psychotic symptoms, consistent with the theoretical link between disturbed CD and phenomenological experiences in schizophrenia. PMID:26108951
Proprioceptive feedback determines visuomotor gain in Drosophila
Bartussek, Jan; Lehmann, Fritz-Olaf
2016-01-01
Multisensory integration is a prerequisite for effective locomotor control in most animals. In particular, the impressive aerial performance of insects relies on rapid and precise integration of multiple sensory modalities that provide feedback on different time scales. In flies, continuous visual signalling from the compound eyes is fused with phasic proprioceptive feedback to ensure precise neural activation of wing steering muscles (WSM) within narrow temporal phase bands of the stroke cycle. This phase-locked activation relies on mechanoreceptors distributed over the wings and the gyroscopic halteres. Here we investigate the visual steering performance of tethered flying fruit flies with reduced haltere and wing feedback signalling. Using a flight simulator, we evaluated visual object fixation behaviour, optomotor altitude control and saccadic escape reflexes. The behavioural assays show an antagonistic effect of wing and haltere signalling on visuomotor gain during flight. Compared with controls, suppression of haltere feedback attenuates while suppression of wing feedback enhances the animal’s wing steering range. Our results suggest that the generation of motor commands driven by visual perception is dynamically controlled by proprioception. We outline a potential physiological mechanism based on the biomechanical properties of WSM and sensory integration processes at the level of motoneurons. Collectively, the findings contribute to our general understanding of how moving animals integrate sensory information with dynamically changing temporal structure. PMID:26909184
The internal representation of head orientation differs for conscious perception and balance control
Dalton, Brian H.; Rasman, Brandon G.; Inglis, J. Timothy
2017-01-01
Key points: We tested perceived head-on-feet orientation and the direction of vestibular-evoked balance responses in passively and actively held head-turned postures. The direction of vestibular-evoked balance responses was not aligned with perceived head-on-feet orientation while maintaining prolonged passively held head-turned postures; furthermore, static visual cues of head-on-feet orientation did not update the estimate of head posture for the balance controller. A prolonged actively held head-turned posture did not elicit a rotation in the direction of the vestibular-evoked balance response despite a significant rotation in perceived angular head posture. It is proposed that conscious perception of head posture and the transformation of vestibular signals for standing balance relying on this head posture are not dependent on the same internal representation; rather, the balance system may operate under its own sensorimotor principles, which are partly independent from perception. Abstract: Vestibular signals used for balance control must be integrated with other sensorimotor cues to allow transformation of descending signals according to an internal representation of body configuration. We explored two alternative models of sensorimotor integration that propose (1) a single internal representation of head-on-feet orientation is responsible for perceived postural orientation and standing balance or (2) conscious perception and balance control are driven by separate internal representations. During three experiments, participants stood quietly while passively or actively maintaining a prolonged head-turned posture (>10 min). Throughout the trials, participants intermittently reported their perceived head angular position, and subsequently electrical vestibular stimuli were delivered to elicit whole-body balance responses. Visual recalibration of head-on-feet posture was used to determine whether static visual cues are used to update the internal representation of body configuration for perceived orientation and standing balance. All three experiments involved situations in which the vestibular-evoked balance response was not orthogonal to perceived head-on-feet orientation, regardless of the visual information provided. For prolonged head-turned postures, balance responses consistent with actual head-on-feet posture occurred only during the active condition. Our results indicate that conscious perception of head-on-feet posture and vestibular control of balance do not rely on the same internal representation, but instead treat sensorimotor cues in parallel and may arrive at different conclusions regarding head-on-feet posture. The balance system appears to bypass static visual cues of postural orientation and mainly use other sensorimotor signals of head-on-feet position to transform vestibular signals of head motion, a mechanism appropriate for most daily activities. PMID:28035656
Disruption of functional networks in dyslexia: A whole-brain, data-driven analysis of connectivity
Finn, Emily S.; Shen, Xilin; Holahan, John M.; Scheinost, Dustin; Lacadie, Cheryl; Papademetris, Xenophon; Shaywitz, Sally E.; Shaywitz, Bennett A.; Constable, R. Todd
2013-01-01
Background: Functional connectivity analyses of fMRI data are a powerful tool for characterizing brain networks and how they are disrupted in neural disorders. However, many such analyses examine only one or a small number of a priori seed regions. Studies that consider the whole brain frequently rely on anatomic atlases to define network nodes, which may result in mixing distinct activation timecourses within a single node. Here, we improve upon previous methods by using a data-driven brain parcellation to compare connectivity profiles of dyslexic (DYS) versus non-impaired (NI) readers in the first whole-brain functional connectivity analysis of dyslexia. Methods: Whole-brain connectivity was assessed in children (n = 75; 43 NI, 32 DYS) and adult (n = 104; 64 NI, 40 DYS) readers. Results: Compared to NI readers, DYS readers showed divergent connectivity within the visual pathway and between visual association areas and prefrontal attention areas; increased right-hemisphere connectivity; reduced connectivity in the visual word-form area (part of the left fusiform gyrus specialized for printed words); and persistent connectivity to anterior language regions around the inferior frontal gyrus. Conclusions: Together, findings suggest that NI readers are better able to integrate visual information and modulate their attention to visual stimuli, allowing them to recognize words based on their visual properties, while DYS readers recruit altered reading circuits and rely on laborious phonology-based “sounding out” strategies into adulthood. These results deepen our understanding of the neural basis of dyslexia and highlight the importance of synchrony between diverse brain regions for successful reading. PMID:24124929
Augmented reality user interface for mobile ground robots with manipulator arms
NASA Astrophysics Data System (ADS)
Vozar, Steven; Tilbury, Dawn M.
2011-01-01
Augmented Reality (AR) is a technology in which real-world visual data is combined with an overlay of computer graphics, enhancing the original feed. AR is an attractive tool for teleoperated UGV UIs as it can improve communication between robots and users via an intuitive spatial and visual dialogue, thereby increasing operator situational awareness. The successful operation of UGVs often relies upon both chassis navigation and manipulator arm control, and since existing literature usually focuses on one task or the other, there is a gap in mobile robot UIs that take advantage of AR for both applications. This work describes the development and analysis of an AR UI system for a UGV with an attached manipulator arm. The system supplements a video feed shown to an operator with information about geometric relationships within the robot task space to improve the operator's situational awareness. Previous studies on AR systems and preliminary analyses indicate that such an implementation of AR for a mobile robot with a manipulator arm is anticipated to improve operator performance. A full user-study can determine if this hypothesis is supported by performing an analysis of variance on common test metrics associated with UGV teleoperation.
Natural image sequences constrain dynamic receptive fields and imply a sparse code.
Häusler, Chris; Susemihl, Alex; Nawrot, Martin P
2013-11-06
In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multivariate benchmark dataset, this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
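For readers unfamiliar with the sparse-coding constraint mentioned above, a minimal numpy sketch follows (a generic ISTA solver over a hypothetical pre-learned dictionary D; the paper's temporal restricted Boltzmann machine and autoencoding procedure are not reproduced):

    import numpy as np

    def ista_sparse_code(D, x, lam=0.1, n_iter=100):
        # D: (n_pixels, n_features) dictionary; x: (n_pixels,) image patch
        L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)         # gradient of 0.5 * ||x - D a||^2
            a = a - grad / L
            a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
        return a                             # sparse activation vector

The L1 penalty weight lam controls how few dictionary features activate for a given patch, mirroring the spatially sparse activation discussed in the abstract.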
On the role of spatial phase and phase correlation in vision, illusion, and cognition
Gladilin, Evgeny; Eils, Roland
2015-01-01
Numerous findings indicate that spatial phase bears important cognitive information. Distortion of phase affects the topology of edge structures and makes images unrecognizable. In turn, appropriately phase-structured patterns give rise to various illusions of virtual image content and apparent motion. Despite a large body of phenomenological evidence, not much is yet known about the role of phase information in the neural mechanisms of visual perception and cognition. Here, we are concerned with an analysis of the role of spatial phase in computational and biological vision, the emergence of visual illusions, and pattern recognition. We hypothesize that the fundamental importance of phase information for invariant retrieval of structural image features and motion detection promoted the development of phase-based mechanisms of neural image processing in the course of evolution of biological vision. Using an extension of the Fourier phase correlation technique, we show that core functions of the visual system such as motion detection and pattern recognition can be facilitated by the same basic mechanism. Our analysis suggests that the emergence of visual illusions can be attributed to the presence of coherently phase-shifted repetitive patterns as well as the effects of acuity compensation by saccadic eye movements. We speculate that biological vision relies on perceptual mechanisms effectively similar to phase correlation, and predict neural features of visual pattern (dis)similarity that can be used for experimental validation of our hypothesis of “cognition by phase correlation.” PMID:25954190
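The phase correlation technique the authors extend can be stated in a few lines of numpy; this is the textbook translation-recovery version, not their extension:

    import numpy as np

    def phase_correlation(img1, img2):
        # normalized cross-power spectrum keeps phase, discards amplitude
        F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
        cross_power = F1 * np.conj(F2)
        cross_power /= np.abs(cross_power) + 1e-12
        corr = np.fft.ifft2(cross_power).real
        # peak location gives the (row, col) shift between images, modulo size
        return np.unravel_index(np.argmax(corr), corr.shape)

Because only phase is retained, the shift estimate is largely invariant to global contrast and illumination, which is what makes this mechanism attractive as a model for motion detection and pattern matching.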
Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine
NASA Astrophysics Data System (ADS)
Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.
2017-12-01
Spatio-temporal dynamic visualization is more vivid than static visualization, and dynamic visualization techniques are important for revealing the variation process and trend of a geographical phenomenon vividly and comprehensively. The challenges posed by dynamic visualization of both 2D and 3D spatial dynamic targets, especially across different spatial data types, require a high-performance GIS dynamic objects rendering engine. The main approach for improving a rendering engine that handles vast numbers of dynamic targets relies on key technologies of high-performance GIS, including memory computing, parallel computing, GPU computing and high-performance algorithms. In this study, a high-performance GIS dynamic objects rendering engine is designed and implemented based on hybrid accelerative techniques. The engine combines GPU computing, OpenGL technology, and high-performance algorithms with the advantage of 64-bit memory computing. It processes 2D and 3D dynamic target data efficiently and runs smoothly even with vast numbers of dynamic targets. A prototype system was developed based on SuperMap GIS iObjects, and experiments were designed for large-scale spatial data visualization. The results showed that the engine achieves high performance, rendering two-dimensional and three-dimensional dynamic objects about 20 times faster on the GPU than on the CPU.
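A toy illustration of the batching principle behind such engines follows (a sketch, not SuperMap's implementation): all dynamic targets are updated in one vectorized operation so the data stays contiguous in memory and can be handed to the GPU as a single buffer rather than drawn object by object.

    import numpy as np

    n_targets = 1_000_000
    positions = np.random.rand(n_targets, 3).astype(np.float32)   # x, y, z
    velocities = np.random.randn(n_targets, 3).astype(np.float32)

    def step(positions, velocities, dt=0.016):
        # one fused update for every target per frame; the whole positions
        # buffer would then be uploaded to a GPU vertex buffer for rendering
        return positions + velocities * dt

    positions = step(positions, velocities)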
Acute exercise and aerobic fitness influence selective attention during visual search
Bullock, Tom; Giesbrecht, Barry
2014-01-01
Successful goal directed behavior relies on a human attention system that is flexible and able to adapt to different conditions of physiological stress. However, the effects of physical activity on multiple aspects of selective attention, and whether these effects are mediated by aerobic capacity, remain unclear. The aim of the present study was to investigate the effects of a prolonged bout of physical activity on visual search performance and perceptual distraction. Two groups of participants completed a hybrid visual search flanker/response competition task in an initial baseline session and then at 17-min intervals over a 2 h 16 min test period. Participants assigned to the exercise group engaged in steady-state aerobic exercise between completing blocks of the visual task, whereas participants assigned to the control group rested in between blocks. The key result was a correlation between individual differences in aerobic capacity and visual search performance, such that those individuals that were more fit performed the search task more quickly. Critically, this relationship only emerged in the exercise group after the physical activity had begun. The relationship was not present in either group at baseline and never emerged in the control group during the test period, suggesting that under these task demands, aerobic capacity may be an important determinant of visual search performance under physical stress. The results enhance current understanding about the relationship between exercise and cognition, and also inform current models of selective attention. PMID:25426094
Collision Detection for Underwater ROV Manipulator Systems.
Sivčev, Satja; Rossi, Matija; Coleman, Joseph; Omerdić, Edin; Dooly, Gerard; Toal, Daniel
2018-04-06
Work-class ROVs equipped with robotic manipulators are extensively used for subsea intervention operations. Manipulators are teleoperated by human pilots relying on visual feedback from the worksite. Operating in a remote environment, with limited pilot perception and poor visibility, manipulator collisions which may cause significant damage are likely to happen. This paper presents a real-time collision detection algorithm for marine robotic manipulation. The proposed collision detection mechanism is developed, integrated into a commercial ROV manipulator control system, and successfully evaluated in simulations and an experimental setup using a real industry-standard underwater manipulator. The presented collision sensing solution has the potential to be a useful pilot-assisting tool that can reduce the task load, operational time, and costs of subsea inspection, repair, and maintenance operations. PMID:29642396
Vision in semi-aquatic snakes: Intraocular morphology, accommodation, and eye: Body allometry
NASA Astrophysics Data System (ADS)
Plylar, Helen Bond
Vision in vertebrates generally relies on the refractive power of the cornea and crystalline lens. Light from the environment enters the eye and is refracted by the cornea and lens onto the retina for production of an image. When an animal with an optical system designed for air submerges underwater, the refractive power of the cornea is lost. Semi-aquatic animals (e.g., water snakes, turtles, aquatic mammals) must overcome this loss of corneal refractive power through visual accommodation. Accommodation relies on a change in the position or shape of the lens to change the focal length of the optical system. Intraocular muscles and fibers facilitate lenticular displacement and deformation. Snakes, in general, are largely unstudied in terms of visual acuity and intraocular morphology. I used light microscopy and scanning electron microscopy to examine differences in eye anatomy between five sympatric colubrid snake species (Nerodia cyclopion, N. fasciata, N. rhombifer, Pantherophis obsoletus, and Thamnophis proximus) from Southeast Louisiana. I discovered previously undescribed structures associated with the lens in semi-aquatic species. Photorefractive methods were used to assess refractive error. While all species overcame the expected hyperopia imposed by submergence, there was interspecific variation in refractive error. To assess scaling of eye size with body size, I measured eye size, head size, and body size in Nerodia cyclopion and N. fasciata from the SLU Vertebrate Museum. In both species, body size increases at a significantly faster rate than head size and eye size (negative allometry). Small snakes have large eyes relative to body size, and large snakes have relatively small eyes. There were interspecific differences in scaling of eye size with body size, where N. fasciata had larger eye diameter, but N. cyclopion had longer eyes (axial length).
Ngo, Kathy T.; Andrade, Ingrid; Hartenstein, Volker
2018-01-01
Visual information processing in animals with large image forming eyes is carried out in highly structured retinotopically ordered neuropils. Visual neuropils in Drosophila form the optic lobe, which consists of four serially arranged major subdivisions: the lamina, medulla, lobula and lobula plate; the latter three of these are further subdivided into multiple layers. The visual neuropils are formed by more than 100 different cell types, distributed and interconnected in an invariant, highly regular pattern. This pattern relies on a protracted sequence of developmental steps, whereby different cell types are born at specific time points and nerve connections are formed in a tightly controlled sequence that has to be coordinated among the different visual neuropils. The developing fly visual system has become a highly regarded and widely studied paradigm to investigate the genetic mechanisms that control the formation of neural circuits. However, these studies are often made difficult by the complex and shifting patterns in which different types of neurons and their connections are distributed throughout development. In the present paper we have reconstructed the three-dimensional architecture of the Drosophila optic lobe from the early larva to the adult. Based on specific markers, we were able to distinguish the populations of progenitors of the four optic neuropils and map the neurons and their connections. Our paper presents sets of annotated confocal z-projections and animated 3D digital models of these structures for representative stages. The data reveal the temporally coordinated growth of the optic neuropils, and clarify how the position and orientation of the neuropils and interconnecting tracts (inner and outer optic chiasm) changes over time. Finally, we have analyzed the emergence of the discrete layers of the medulla and lobula complex using the same markers (DN-cadherin, Brp) employed to systematically explore the structure and development of the central brain neuropil. Our work will facilitate experimental studies of the molecular mechanisms regulating neuronal fate and connectivity in the fly visual system, which bears many fundamental similarities with the retina of vertebrates. PMID:28533086
A Data-Driven Approach to Interactive Visualization of Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Jun
Driven by emerging industry standards, electric utilities and grid coordination organizations are eager to seek advanced tools to assist grid operators to perform mission-critical tasks and enable them to make quick and accurate decisions. The emerging field of visual analytics holds tremendous promise for improving the business practices in today's electric power industry. The conducted investigation, however, has revealed that the existing commercial power grid visualization tools heavily rely on human designers, hindering users' ability to discover. Additionally, for a large grid, it is very labor-intensive and costly to build and maintain the pre-designed visual displays. This project proposes a data-driven approach to overcome the common challenges. The proposed approach relies on developing powerful data manipulation algorithms to create visualizations based on the characteristics of empirically or mathematically derived data. The resulting visual presentations emphasize what the data is rather than how the data should be presented, thus fostering comprehension and discovery. Furthermore, the data-driven approach formulates visualizations on-the-fly. It does not require a visualization design stage, completely eliminating or significantly reducing the cost of building and maintaining visual displays. The research and development (R&D) conducted in this project is divided into two phases. The first phase (Phase I & II) focuses on developing data-driven techniques for visualization of the power grid and its operation. Various data-driven visualization techniques were investigated, including pattern recognition for auto-generation of one-line diagrams and fuzzy-model-based rich data visualization for situational awareness. The R&D conducted during the second phase (Phase IIB) focuses on enhancing the prototyped data-driven visualization tool based on the gathered requirements and use cases. The goal is to evolve the prototyped tool developed during the first phase into a commercial-grade product. We use one of the identified application areas as an example to demonstrate how research results achieved in this project are successfully utilized to address an emerging industry need. In summary, the data-driven visualization approach developed in this project has proven to be promising for building the next-generation power grid visualization tools. Application of this approach has resulted in a state-of-the-art commercial tool currently being leveraged by more than 60 utility organizations in North America and Europe.
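As a toy illustration of the auto-generation idea mentioned above (hypothetical topology and library choice, not the project's algorithm), node placement for a one-line diagram can be derived from grid topology alone with a force-directed layout, so no display has to be hand-designed:

    import networkx as nx

    # hypothetical bus-branch topology
    branches = [("Bus1", "Bus2"), ("Bus2", "Bus3"), ("Bus2", "Bus4"), ("Bus4", "Bus5")]
    grid = nx.Graph(branches)

    # positions emerge from the data itself rather than from a designer
    layout = nx.spring_layout(grid, seed=42)
    for bus, (x, y) in layout.items():
        print(f"{bus}: ({x:.2f}, {y:.2f})")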
Garaizar, Pablo; Reips, Ulf-Dietrich
2015-09-01
DMDX is a software package for the experimental control and timing of stimulus display for Microsoft Windows systems. DMDX is reliable, flexible, millisecond accurate, and can be downloaded free of charge; it has therefore become very popular among experimental researchers. However, setting up a DMDX-based experiment is burdensome because of its command-based interface. Further, DMDX relies on RTF files in which parts of the stimuli, design, and procedure of an experiment are defined in a complicated (DMASTR-compatible) syntax. Other experiment software, such as E-Prime, Psychopy, and WEXTOR, became successful as a result of integrated visual authoring tools. Such an intuitive interface was lacking for DMDX. We therefore created and present here Visual DMDX (http://visualdmdx.com/), an HTML5-based web interface to set up experiments and export them to the DMDX item-file format in RTF. Visual DMDX offers most of the features available from the rich DMDX/DMASTR syntax, and it is a useful tool to support researchers who are new to DMDX. Both old and modern versions of DMDX syntax are supported. Further, with Visual DMDX, we go beyond DMDX by having added export to JSON (a versatile web format), easy backup, and a preview option for experiments. In two examples, one experiment each on lexical decision making and affective priming, we explain in a step-by-step fashion how to create experiments using Visual DMDX. We release Visual DMDX under an open-source license to foster collaboration in its continuous improvement.
Hayashi, Ryusuke; Watanabe, Osamu; Yokoyama, Hiroki; Nishida, Shin'ya
2017-06-01
Characterization of the functional relationship between sensory inputs and neuronal or observers' perceptual responses is one of the fundamental goals of systems neuroscience and psychophysics. Conventional methods, such as reverse correlation and spike-triggered data analyses, are limited in their ability to resolve complex and inherently nonlinear neuronal/perceptual processes because these methods require input stimuli to be Gaussian with a zero mean. Recent studies have shown that analyses based on a generalized linear model (GLM) do not require such specific input characteristics and have advantages over conventional methods. GLM, however, relies on iterative optimization algorithms, and its calculation costs become very expensive when estimating the nonlinear parameters of a large-scale system using large volumes of data. In this paper, we introduce a new analytical method for identifying a nonlinear system without relying on iterative calculations and yet also not requiring any specific stimulus distribution. We demonstrate the results of numerical simulations, showing that our noniterative method is as accurate as GLM in estimating nonlinear parameters in many cases and outperforms conventional, spike-triggered data analyses. As an example of the application of our method to actual psychophysical data, we investigated how different spatiotemporal frequency channels interact in assessments of motion direction. The nonlinear interaction estimated by our method was consistent with findings from previous vision studies and supports the validity of our method for nonlinear system identification.
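The two baselines discussed above are easy to sketch on synthetic data (the paper's own noniterative estimator is not reproduced here): a spike-triggered average, which is only unbiased for zero-mean Gaussian stimuli, and a GLM filter estimate obtained by iterative optimization.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    stimuli = rng.standard_normal((5000, 20))      # trials x stimulus dimensions
    true_filter = rng.standard_normal(20)
    p_spike = 1.0 / (1.0 + np.exp(-(stimuli @ true_filter)))
    spikes = rng.random(5000) < p_spike

    sta = stimuli[spikes].mean(axis=0)             # spike-triggered average

    glm = LogisticRegression().fit(stimuli, spikes)  # iterative optimization
    glm_filter = glm.coef_.ravel()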
Ovis: A Framework for Visual Analysis of Ocean Forecast Ensembles.
Höllt, Thomas; Magdy, Ahmed; Zhan, Peng; Chen, Guoning; Gopalakrishnan, Ganesh; Hoteit, Ibrahim; Hansen, Charles D; Hadwiger, Markus
2014-08-01
We present a novel integrated visualization system that enables interactive visual analysis of ensemble simulations of the sea surface height that is used in ocean forecasting. The position of eddies can be derived directly from the sea surface height and our visualization approach enables their interactive exploration and analysis. The behavior of eddies is important in different application settings of which we present two in this paper. First, we show an application for interactive planning of placement as well as operation of off-shore structures using real-world ensemble simulation data of the Gulf of Mexico. Off-shore structures, such as those used for oil exploration, are vulnerable to hazards caused by eddies, and the oil and gas industry relies on ocean forecasts for efficient operations. We enable analysis of the spatial domain, as well as the temporal evolution, for planning the placement and operation of structures. Eddies are also important for marine life. They transport water over large distances and with it also heat and other physical properties as well as biological organisms. In the second application we present the usefulness of our tool, which could be used for planning the paths of autonomous underwater vehicles, so called gliders, for marine scientists to study simulation data of the largely unexplored Red Sea.
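A minimal sketch of the kind of ensemble statistics such a tool visualizes (hypothetical data layout, not the Ovis implementation): per-gridpoint mean and spread of sea surface height, plus a crude eddy-candidate flag at local maxima of the mean field.

    import numpy as np
    from scipy.ndimage import maximum_filter

    ssh = np.random.randn(50, 128, 128)    # ensemble members x lat x lon
    mean_ssh = ssh.mean(axis=0)            # consensus forecast field
    spread = ssh.std(axis=0)               # per-gridpoint forecast uncertainty

    # flag local maxima of the mean field as (very rough) eddy candidates
    local_max = maximum_filter(mean_ssh, size=9) == mean_ssh
    eddy_candidates = np.argwhere(local_max & (mean_ssh > mean_ssh.mean()))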
Charpentier, Corie L; Cohen, Jonathan H
2015-11-01
Several predator avoidance strategies in zooplankton rely on the use of light to control vertical position in the water column. Although light is the primary cue for such photobehavior, predator chemical cues or kairomones increase swimming responses to light. We currently lack a mechanistic understanding for how zooplankton integrate visual and chemical cues to mediate phenotypic plasticity in defensive photobehavior. In marine systems, kairomones are thought to be amino sugar degradation products of fish body mucus. Here, we demonstrate that increasing concentrations of fish kairomones heightened sensitivity of light-mediated swimming behavior for two larval crab species (Rhithropanopeus harrisii and Hemigrapsus sanguineus). Consistent with these behavioral results, we report increased visual sensitivity at the retinal level in larval crab eyes directly following acute (1-3 h) kairomone exposure, as evidenced electrophysiologically from V-log I curves and morphologically from wider, shorter rhabdoms. The observed increases in visual sensitivity do not correspond with a decline in temporal resolution, because latency in electrophysiological responses actually increased after kairomone exposure. Collectively, these data suggest that phenotypic plasticity in larval crab photobehavior is achieved, at least in part, through rapid changes in photoreceptor structure and function. © 2015. Published by The Company of Biologists Ltd.
The correlation dimension: a useful objective measure of the transient visual evoked potential?
Boon, Mei Ying; Henry, Bruce I; Suttle, Catherine M; Dain, Stephen J
2008-01-14
Visual evoked potentials (VEPs) may be analyzed by examination of the morphology of their components, such as negative (N) and positive (P) peaks. However, methods that rely on component identification may be unreliable when dealing with responses of complex and variable morphology; therefore, objective methods are also useful. One potentially useful measure of the VEP is the correlation dimension. Its relevance to the visual system was investigated by examining its behavior when applied to the transient VEP in response to a range of chromatic contrasts (42%, two times psychophysical threshold, at psychophysical threshold) and to the visually unevoked response (zero contrast). Tests of nonlinearity (e.g., surrogate testing) were conducted. The correlation dimension was found to be negatively correlated with a stimulus property (chromatic contrast) and a known linear measure (the Fourier-derived VEP amplitude). It was also found to be related to visibility and perception of the stimulus such that the dimension reached a maximum for most of the participants at psychophysical threshold. The latter suggests that the correlation dimension may be useful as a diagnostic parameter to estimate psychophysical threshold and may find application in the objective screening and monitoring of congenital and acquired color vision deficiencies, with or without associated disease processes.
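A minimal Grassberger-Procaccia-style sketch of the correlation dimension (illustrative; not necessarily the authors' exact estimator): delay-embed the VEP trace, count point pairs closer than a radius r, and read the dimension off the log-log slope of the correlation sum.

    import numpy as np
    from scipy.spatial.distance import pdist

    def correlation_dimension(signal, emb_dim=5, lag=2):
        # signal: 1-D numpy array (e.g., a single-channel VEP trace)
        n = len(signal) - (emb_dim - 1) * lag
        embedded = np.column_stack([signal[i * lag:i * lag + n] for i in range(emb_dim)])
        dists = pdist(embedded)
        radii = np.logspace(np.log10(dists.min() + 1e-12), np.log10(dists.max()), 20)
        corr_sum = np.array([(dists < r).mean() for r in radii])
        valid = corr_sum > 0
        slope, _ = np.polyfit(np.log(radii[valid]), np.log(corr_sum[valid]), 1)
        return slope    # estimate of the correlation dimension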
Slow changing postural cues cancel visual field dependence on self-tilt detection.
Scotto Di Cesare, C; Macaluso, T; Mestre, D R; Bringoux, L
2015-01-01
Interindividual differences influence the multisensory integration process involved in spatial perception. Here, we assessed the effect of visual field dependence on self-tilt detection relative to upright, as a function of static vs. slow-changing visual or postural cues. To that aim, we manipulated slow rotations (i.e., 0.05° s−1) of the body and/or the visual scene in pitch. Participants had to indicate whether they felt being tilted forward at successive angles. Results show that thresholds for self-tilt detection substantially differed between visual field dependent/independent subjects, when only the visual scene was rotated. This difference was no longer present when the body was actually rotated, whatever the visual scene condition (i.e., absent, static or rotated relative to the observer). These results suggest that the cancellation of visual field dependence by dynamic postural cues may rely on a multisensory reweighting process, where slow-changing vestibular/somatosensory inputs may prevail over visual inputs. Copyright © 2014 Elsevier B.V. All rights reserved.
Sensitivity to timing and order in human visual cortex
Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.
2014-01-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116
Clonal selection versus clonal cooperation: the integrated perception of immune objects
Nataf, Serge
2016-01-01
Analogies between the immune and nervous systems were first envisioned by the immunologist Niels Jerne, who introduced the concepts of antigen "recognition" and immune "memory". However, since then, it appears that only the cognitive immunology paradigm proposed by Irun Cohen attempted to further theorize the immune system functions through the prism of neurosciences. The present paper is aimed at revisiting this analogy-based reasoning. In particular, a parallel is drawn between the brain pathways of visual perception and the processes allowing the global perception of an "immune object". Thus, in the visual system, distinct features of a visual object (shape, color, motion) are perceived separately by distinct neuronal populations during a primary perception task. The output signals generated during this first step then instruct an integrated perception task performed by other neuronal networks. Such a higher-order perception step is in essence a cooperative task that is mandatory for the global perception of visual objects. Based on a re-interpretation of recent experimental data, it is suggested that similar general principles drive the integrated perception of immune objects in secondary lymphoid organs (SLOs). In this scheme, the four main categories of signals characterizing an immune object (antigenic, contextual, temporal and localization signals) are first perceived separately by distinct networks of immunocompetent cells. Then, in a multitude of SLO niches, the output signals generated during this primary perception step are integrated by TH-cells at the single-cell level. This process eventually generates a multitude of T-cell and B-cell clones that perform, at the scale of SLOs, an integrated perception of immune objects. Overall, this new framework proposes that integrated immune perception and, consequently, integrated immune responses rely essentially on clonal cooperation rather than clonal selection. PMID:27830060
Programming (Tips) for Physicists & Engineers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozcan, Erkcan
2010-07-13
Programming for today's physicists and engineers. Work environment: today's astroparticle, accelerator experiments and information industry rely on large collaborations. Need more than ever: code sharing/reuse, code building--framework integration, documentation and good visualization, working remotely, not reinventing the wheel.
Airflow and optic flow mediate antennal positioning in flying honeybees
Roy Khurana, Taruni; Sane, Sanjay P
2016-01-01
To maintain their speeds during navigation, insects rely on feedback from their visual and mechanosensory modalities. Although optic flow plays an essential role in speed determination, it is less reliable under conditions of low light or sparse landmarks. Under such conditions, insects rely on feedback from antennal mechanosensors but it is not clear how these inputs combine to elicit flight-related antennal behaviours. We here show that antennal movements of the honeybee, Apis mellifera, are governed by combined visual and antennal mechanosensory inputs. Frontal airflow, as experienced during forward flight, causes antennae to actively move forward as a sigmoidal function of absolute airspeed values. However, corresponding front-to-back optic flow causes antennae to move backward, as a linear function of relative optic flow, opposite the airspeed response. When combined, these inputs maintain antennal position in a state of dynamic equilibrium. DOI: http://dx.doi.org/10.7554/eLife.14449.001 PMID:27097104
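The reported combination rule can be captured in a toy model (coefficients invented for illustration; the paper reports a sigmoidal airspeed response and a linear, oppositely signed optic-flow response):

    import numpy as np

    def antennal_position(airspeed, optic_flow, k=1.0, v0=1.5, gain=0.2):
        # sigmoidal forward push from frontal airflow
        forward = 1.0 / (1.0 + np.exp(-k * (airspeed - v0)))
        # linear backward pull from front-to-back optic flow
        return forward - gain * optic_flow

At matched airspeed and optic flow the two terms counteract each other, yielding the dynamic equilibrium described above.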
Building University Capacity to Visualize Solutions to Complex Problems in the Arctic
NASA Astrophysics Data System (ADS)
Broderson, D.; Veazey, P.; Raymond, V. L.; Kowalski, K.; Prakash, A.; Signor, B.
2016-12-01
Rapidly changing environments are creating complex problems across the globe, which are particularly magnified in the Arctic. These worldwide challenges can best be addressed through diverse and interdisciplinary research teams. It is incumbent on such teams to promote co-production of knowledge and data-driven decision-making by identifying effective methods to communicate their findings and to engage with the public. Decision Theater North (DTN) is a new semi-immersive visualization system that provides a space for teams to collaborate and develop solutions to complex problems, relying on diverse sets of skills and knowledge. It provides a venue to synthesize the talents of scientists, who gather information (data); modelers, who create models of complex systems; artists, who develop visualizations; communicators, who connect and bridge populations; and policymakers, who can use the visualizations to develop sustainable solutions to pressing problems. The mission of Decision Theater North is to provide a cutting-edge visual environment to facilitate dialogue and decision-making by stakeholders including government, industry, communities and academia. We achieve this mission by adopting a multi-faceted approach reflected in the theater's design, technology, networking capabilities, user support, community relationship building, and strategic partnerships. DTN is a joint project of Alaska's National Science Foundation Experimental Program to Stimulate Competitive Research (NSF EPSCoR) and the University of Alaska Fairbanks (UAF), who have brought the facility up to full operational status and are now expanding its development space to support larger team science efforts. Based in Fairbanks, Alaska, DTN is uniquely poised to address changes taking place in the Arctic and subarctic, and is connected with a larger network of decision theaters that includes the Arizona State University Decision Theater Network and the McCain Institute in Washington, DC.
Visualizing multiple inter-organelle contact sites using the organelle-targeted split-GFP system.
Kakimoto, Yuriko; Tashiro, Shinya; Kojima, Rieko; Morozumi, Yuki; Endo, Toshiya; Tamura, Yasushi
2018-04-18
Functional integrity of eukaryotic organelles relies on direct physical contacts between distinct organelles. However, the identity of organelle-tethering factors is not well understood due to the lack of means to analyze inter-organelle interactions in living cells. Here we evaluate the split-GFP system for visualizing organelle contact sites in vivo and show its advantages and disadvantages. We observed punctate GFP signals from the split-GFP fragments targeted to any pairs of organelles among the ER, mitochondria, peroxisomes, vacuole and lipid droplets in yeast cells, which suggests that these organelles form contact sites with multiple organelles simultaneously, although it is difficult to rule out the possibility that these organelle contact sites are artificially formed by the irreversible associations of the split-GFP probes. Importantly, split-GFP signals in the overlapped regions of the ER and mitochondria were mainly co-localized with ERMES, an authentic ER-mitochondria tethering structure, suggesting that split-GFP assembly depends on the preexisting inter-organelle contact sites. We also confirmed that the split-GFP system can be applied to detection of the ER-mitochondria contact sites in HeLa cells. We thus propose that the split-GFP system is a potential tool to observe and analyze inter-organelle contact sites in living yeast and mammalian cells.
Brain-Computer Interfaces With Multi-Sensory Feedback for Stroke Rehabilitation: A Case Study.
Irimia, Danut C; Cho, Woosang; Ortner, Rupert; Allison, Brendan Z; Ignat, Bogdan E; Edlinger, Guenter; Guger, Christoph
2017-11-01
Conventional therapies do not provide paralyzed patients with closed-loop sensorimotor integration for motor rehabilitation. This work presents the recoveriX system, a hardware and software platform that combines a motor imagery (MI)-based brain-computer interface (BCI), functional electrical stimulation (FES), and visual feedback technologies for a complete sensorimotor closed-loop therapy system for poststroke rehabilitation. The proposed system was tested on two chronic stroke patients in a clinical environment. The patients were instructed to imagine the movement of either the left or right hand in random order. During these two MI tasks, two types of feedback were provided: a bar extending to the left or right side of a monitor as visual feedback and passive hand opening stimulated from FES as proprioceptive feedback. Both types of feedback relied on the BCI classification result achieved using common spatial patterns and a linear discriminant analysis classifier. After 10 sessions of recoveriX training, one patient partially regained control of wrist extension in her paretic wrist and the other patient increased the range of middle finger movement by 1 cm. A controlled group study is planned with a new version of the recoveriX system, which will have several improvements. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Sensor supported pilot assistance for helicopter flight in DVE
NASA Astrophysics Data System (ADS)
Waanders, Tim; Münsterer, T.; Kress, M.
2013-05-01
Helicopter operations at low altitude are to this day only performed under VFR conditions in which safe piloting of the aircraft relies on the pilot's visual perception of the outside environment. However, there are situations in which a deterioration of visibility conditions may cause the pilot to lose important visual cues thereby increasing workload and compromising flight safety and mission effectiveness. This paper reports on a pilot assistance system for all phases of flight which is intended to: • Provide navigational support and mission management • Support landings/take-offs in unknown environment and in DVE • Enhance situational awareness in DVE • Provide obstacle and terrain surface detection and warning • Provide upload, sensor based update and download of database information for debriefing and later missions. The system comprises a digital terrain and obstacle database, tactical information, flight plan management combined with an active 3D sensor enabling the above mentioned functionalities. To support pilots during operations in DVE, an intuitive 3D/2D cueing through both head-up and head-down means is proposed to retain situational awareness. This paper further describes the system concept and will elaborate on results of simulator trials in which the functionality was evaluated by operational pilots in realistic and demanding scenarios such as a SAR mission to be performed in mountainous area under different visual conditions. The objective of the simulator trials was to evaluate the functional integration and HMI definition for the NH90 Tactical Transport Helicopter.
NASA Astrophysics Data System (ADS)
Potter, Michael; Bensch, Alexander; Dawson-Elli, Alexander; Linte, Cristian A.
2015-03-01
In minimally invasive surgical interventions, direct visualization of the target area is often not available. Instead, clinicians rely on images from various sources, along with surgical navigation systems for guidance. These spatial localization and tracking systems function much like the Global Positioning System (GPS) that we are all familiar with. In this work we demonstrate how the video feed from a typical camera, which could mimic a laparoscopic or endoscopic camera used during an interventional procedure, can be used to identify the pose of the camera with respect to the viewed scene and augment the video feed with computer-generated information, such as a rendering of internal anatomy not visible beyond the imaged surface, resulting in a simple augmented reality environment. This paper describes the software and hardware environment and methodology for augmenting the real world with virtual models extracted from medical images to provide enhanced visualization beyond the surface view achieved using traditional imaging. Following intrinsic and extrinsic camera calibration, the technique was implemented and demonstrated using a LEGO structure phantom, as well as a 3D-printed patient-specific left atrial phantom. We assessed the quality of the overlay according to fiducial localization, fiducial registration, and target registration errors, as well as the overlay offset error. Using the software extensions we developed in conjunction with common webcams, it is possible to achieve tracking accuracy comparable to that seen with significantly more expensive hardware, leading to target registration errors on the order of 2 mm.
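The pose-estimation step such systems rely on is standard; a sketch with OpenCV follows (fiducial coordinates and intrinsics invented for illustration; the paper's calibration and registration details are not reproduced):

    import numpy as np
    import cv2

    # hypothetical 3D fiducial coordinates, their 2D detections in the frame,
    # and intrinsics K from a prior cv2.calibrateCamera run
    object_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                           [1, 1, 0], [0, 0, 1], [1, 0, 1]], dtype=np.float32)
    image_pts = np.array([[320, 240], [420, 236], [318, 140],
                          [424, 134], [310, 230], [408, 228]], dtype=np.float32)
    K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
    dist = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)

    # project a virtual-model point (e.g., hidden anatomy) into the video frame
    model_pts = np.array([[0.5, 0.5, 0.5]], dtype=np.float32)
    overlay_pts, _ = cv2.projectPoints(model_pts, rvec, tvec, K, dist)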
Volumetric 3D display using a DLP projection engine
NASA Astrophysics Data System (ADS)
Geng, Jason
2012-03-01
In this article, we describe a volumetric 3D display system based on the high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.
Lightweight computational steering of very large scale molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beazley, D.M.; Lomdahl, P.S.
1996-09-01
We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.
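A toy sketch of the steering idea (names invented; the original system exposed C simulation internals to scripting languages rather than being written in Python): the simulation loop periodically drains a queue of script commands that read or modify live state.

    import queue

    commands = queue.Queue()   # filled by an interactive front end or GUI
    state = {"temperature": 300.0, "step": 0}

    def md_step(state):
        state["step"] += 1     # stand-in for the real force/integration work

    def run(n_steps):
        for _ in range(n_steps):
            md_step(state)
            while not commands.empty():
                # steer the live simulation from a script command
                exec(commands.get(), {"state": state})

    commands.put("state['temperature'] = 350.0")   # e.g. sent from a Tcl/Tk GUI
    run(100)
    print(state)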
Reddy, Gadi V P; Raman, A
2011-04-01
Trap designs for the banana root borer, Cosmopolites sordidus (Germar) (Coleoptera: Curculionidae), have been based essentially on the understanding that C. sordidus relies primarily on chemical cues. Our present results indicate that these borers also rely on visual cues. Previous studies have demonstrated that among the eight differently colored traps tested in the field, brown traps were the most effective compared with yellow, red, gray, blue, black, white, and green traps; mahogany-brown was more effective than other shades of brown. In the current study, the efficiency of ground traps with different colors was evaluated in the laboratory for the capture of C. sordidus. Responses of C. sordidus to pheromone-baited ground traps of several different colors (used either individually or as 1:1 mixtures of two different colors) were compared with the standardized mahogany-brown traps. Traps with mahogany-brown mixed with different colors had no significant effect. In contrast, laboratory color-choice tests indicated C. sordidus preferred black traps over traps of other colors, with no specific preferences for different shades of black. Here again, traps with black mixed with other colors (1:1) had no influence on the catches. Therefore, any other color mixed with mahogany-brown or black does not cause color-specific dilution of attractiveness. By exploiting these results, it may be possible to produce efficacious trapping systems that could be used in a behavioral approach to banana root borer control.
Visual and proprioceptive interaction in patients with bilateral vestibular loss
Cutfield, Nicholas J.; Scott, Gregory; Waldman, Adam D.; Sharp, David J.; Bronstein, Adolfo M.
2014-01-01
Following bilateral vestibular loss (BVL), patients gradually adapt to the loss of vestibular input and rely more on other sensory inputs. Here we examine changes in the way proprioceptive and visual inputs interact. We used functional magnetic resonance imaging (fMRI) to investigate visual responses in the context of varying levels of proprioceptive input in 12 BVL subjects and 15 normal controls. A novel metal-free vibrator was developed to allow vibrotactile neck proprioceptive input to be delivered in the MRI system. A high level (100 Hz) and low level (30 Hz) control stimulus was applied over the left splenius capitis; only the high frequency stimulus generates a significant proprioceptive stimulus. The neck stimulus was applied in combination with static and moving (optokinetic) visual stimuli, in a factorial fMRI experimental design. We found that high level neck proprioceptive input had more cortical effect on brain activity in the BVL patients. This included a reduction in visual motion responses during high levels of proprioceptive input and differential activation in the midline cerebellum. In early visual cortical areas, the effect of high proprioceptive input was present for both visual conditions but in lateral visual areas, including V5/MT, the effect was only seen in the context of visual motion stimulation. The finding of a cortical visuo-proprioceptive interaction in BVL patients is consistent with behavioural data indicating that, in BVL patients, neck afferents partly replace vestibular input during the CNS-mediated compensatory process. An fMRI cervico-visual interaction may thus substitute the known visuo-vestibular interaction reported in normal subject fMRI studies. The results provide evidence for a cortical mechanism of adaptation to vestibular failure, in the form of an enhanced proprioceptive influence on visual processing. The results may provide the basis for a cortical mechanism involved in proprioceptive substitution of vestibular function in BVL patients. PMID:25061564
3D endoscopic imaging using structured illumination technique (Conference Presentation)
NASA Astrophysics Data System (ADS)
Le, Hanh N. D.; Nguyen, Hieu; Wang, Zhaoyang; Kang, Jin U.
2017-02-01
Surgeons have been increasingly relying on minimally invasive surgical guidance techniques not only to reduce surgical trauma but also to achieve accurate and objective surgical risk evaluations. A typical minimally invasive surgical guidance system provides visual assistance with the two-dimensional anatomy and pathology of an internal organ within a limited field of view. In this work, we propose and implement a structured-illumination endoscope that provides simple, inexpensive 3D endoscopic imaging and produces high-resolution 3D imagery for use in a surgical guidance system. The system is calibrated and validated for quantitative depth measurement on both a calibration target and a human subject. It exhibits a depth of field of 20 mm, a depth resolution of 0.2 mm, and a relative accuracy of 0.1%. The demonstrated setup affirms the feasibility of using the structured-illumination endoscope for depth quantization and for assisting medical diagnostic assessments.
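The fringe-projection principle behind structured-illumination depth recovery can be sketched with a generic three-step phase-shifting recipe (not necessarily the authors' exact algorithm): recover wrapped phase from three fringe images shifted by 120 degrees, then unwrap and scale by a calibration factor.

    import numpy as np

    def wrapped_phase(I1, I2, I3):
        # I1, I2, I3: images under fringes shifted by -120, 0, +120 degrees
        return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

    def depth_map(I1, I2, I3, scale=1.0):
        # unwrap along rows; scale converts phase to depth after calibration
        phi = np.unwrap(wrapped_phase(I1, I2, I3), axis=1)
        return scale * phi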
Visual search for motion-form conjunctions: is form discriminated within the motion system?
von Mühlenen, A; Müller, H J
2001-06-01
Motion-form conjunction search can be more efficient when the target is moving (a moving 45 degrees tilted line among moving vertical and stationary 45 degrees tilted lines) rather than stationary. This asymmetry may be due to aspects of form being discriminated within a motion system representing only moving items, whereas discrimination of stationary items relies on a static form system (J. Driver & P. McLeod, 1992). Alternatively, it may be due to search exploiting differential motion velocity and direction signals generated by the moving-target and distractor lines. To decide between these alternatives, 4 experiments systematically varied the motion-signal information conveyed by the moving target and distractors while keeping their form difference salient. Moving-target search was found to be facilitated only when differential motion-signal information was available. Thus, there is no need to assume that form is discriminated within the motion system.
Microscale Symmetrical Electroporator Array as a Versatile Molecular Delivery System
NASA Astrophysics Data System (ADS)
Ouyang, Mengxing; Hill, Winfield; Lee, Jung Hyun; Hur, Soojung Claire
2017-03-01
Successful development of new therapeutic strategies often relies on the ability to deliver exogenous molecules into the cytosol. We have developed a versatile on-chip vortex-assisted electroporation system, engineered to conduct sequential intracellular delivery of multiple molecules into various cell types at low voltage in a dosage-controlled manner. Micro-patterned planar electrodes permit substantial reduction in operational voltages and seamless integration with an existing microfluidic technology. Equipped with real-time process visualization functionality, the system enables on-chip optimization of electroporation parameters for cells with varying properties. Moreover, the system's dosage control and multi-molecular delivery capabilities facilitate intracellular delivery of various molecules as single agents or in combination, and its utility in biological research has been demonstrated by conducting RNA interference assays. We envision the system to be a powerful tool aiding a wide range of applications that require single-cell-level co-administration of multiple molecules with controlled dosages.
Assembly of the cnidarian camera-type eye from vertebrate-like components.
Kozmik, Zbynek; Ruzickova, Jana; Jonasova, Kristyna; Matsumoto, Yoshifumi; Vopalensky, Pavel; Kozmikova, Iryna; Strnad, Hynek; Kawamura, Shoji; Piatigorsky, Joram; Paces, Vaclav; Vlcek, Cestmir
2008-07-01
Animal eyes are morphologically diverse. Their assembly, however, always relies on the same basic principle, i.e., photoreceptors located in the vicinity of dark shielding pigment. Cnidaria as the likely sister group to the Bilateria are the earliest branching phylum with a well developed visual system. Here, we show that camera-type eyes of the cubozoan jellyfish, Tripedalia cystophora, use genetic building blocks typical of vertebrate eyes, namely, a ciliary phototransduction cascade and melanogenic pathway. Our findings indicative of parallelism provide an insight into eye evolution. Combined, the available data favor the possibility that vertebrate and cubozoan eyes arose by independent recruitment of orthologous genes during evolution.
Automatic identification of bacterial types using statistical imaging methods
NASA Astrophysics Data System (ADS)
Trattner, Sigal; Greenspan, Hayit; Tepper, Gapi; Abboud, Shimon
2003-05-01
The objective of the current study is to develop an automatic tool to identify bacterial types using computer-vision and statistical modeling techniques. Bacteriophage (phage)-typing methods are used to identify and extract representative profiles of bacterial types, such as Staphylococcus aureus. Current systems rely on the subjective reading of plaque profiles by a human expert. This process is time-consuming and prone to errors, especially as technology enables an increase in the number of phages used for typing. The statistical methodology presented in this work provides for an automated, objective and robust analysis of visual data, along with the ability to cope with increasing data volumes.
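A minimal sketch of the classification idea (crude invented features and synthetic labels, not the study's model): extract simple statistics from a plate image and train a standard classifier on them.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def plaque_features(img):
        # img: 2-D grayscale plate image; global statistics as toy features
        return [img.mean(), img.std(), (img < img.mean()).mean()]

    X = np.array([plaque_features(np.random.rand(64, 64)) for _ in range(100)])
    y = np.random.randint(0, 3, 100)    # stand-in bacterial type labels
    clf = RandomForestClassifier(n_estimators=100).fit(X, y)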
DOT National Transportation Integrated Search
2015-06-01
Bridge managers have historically relied on visual inspection reports and field observation, including photographs, to assess bridge health. The inclusion of instrumentation, including strain gauges, along with a structural model can enhance brid...
Terrestrial laser scanning-based bridge structural condition assessment : InTrans project reports.
DOT National Transportation Integrated Search
2016-05-01
Objective, accurate, and fast assessment of a bridge's structural condition is critical to the timely assessment of safety risks. Current practices for bridge condition assessment rely on visual observations and manual interpretation of reports a...
A task-dependent causal role for low-level visual processes in spoken word comprehension.
Ostarek, Markus; Huettig, Falk
2017-08-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Milner-Bolotin, Marina; Nashon, Samson Madera
2012-02-01
Science, engineering, and mathematics-related disciplines have relied heavily on a researcher's ability to visualize the phenomena under study and to link and superimpose various abstract and concrete representations, including visual, spatial, and temporal ones. Spatial representations are especially important in all branches of biology (in developmental biology, time becomes an important additional dimension), where 3D and often 4D representations are crucial for understanding the phenomena. By the time biology students reach undergraduate education, they are supposed to have acquired visual-spatial thinking skills, yet it has been documented that very few undergraduates and only a small percentage of graduate students have had a chance to develop these skills to a sufficient degree. The current paper discusses the literature that highlights the essence of visual-spatial thinking and the development of visual-spatial literacy, considers the application of visual-spatial thinking to biology education, and proposes how modern technology can help promote visual-spatial literacy and higher-order thinking among undergraduate students of biology.
Disruption of functional networks in dyslexia: a whole-brain, data-driven analysis of connectivity.
Finn, Emily S; Shen, Xilin; Holahan, John M; Scheinost, Dustin; Lacadie, Cheryl; Papademetris, Xenophon; Shaywitz, Sally E; Shaywitz, Bennett A; Constable, R Todd
2014-09-01
Functional connectivity analyses of functional magnetic resonance imaging data are a powerful tool for characterizing brain networks and how they are disrupted in neural disorders. However, many such analyses examine only one or a small number of a priori seed regions. Studies that consider the whole brain frequently rely on anatomic atlases to define network nodes, which might result in mixing distinct activation time-courses within a single node. Here, we improve upon previous methods by using a data-driven brain parcellation to compare connectivity profiles of dyslexic (DYS) versus non-impaired (NI) readers in the first whole-brain functional connectivity analysis of dyslexia. Whole-brain connectivity was assessed in children (n = 75; 43 NI, 32 DYS) and adult (n = 104; 64 NI, 40 DYS) readers. Compared to NI readers, DYS readers showed divergent connectivity within the visual pathway and between visual association areas and prefrontal attention areas; increased right-hemisphere connectivity; reduced connectivity in the visual word-form area (part of the left fusiform gyrus specialized for printed words); and persistent connectivity to anterior language regions around the inferior frontal gyrus. Together, findings suggest that NI readers are better able to integrate visual information and modulate their attention to visual stimuli, allowing them to recognize words on the basis of their visual properties, whereas DYS readers recruit altered reading circuits and rely on laborious phonology-based "sounding out" strategies into adulthood. These results deepen our understanding of the neural basis of dyslexia and highlight the importance of synchrony between diverse brain regions for successful reading. © 2013 Society of Biological Psychiatry Published by Society of Biological Psychiatry All rights reserved.
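As background for the kind of analysis described above, a minimal Python sketch of whole-brain functional connectivity follows: node time courses from a parcellation are reduced to a per-subject correlation matrix, Fisher z-transformed, and compared edge-wise between groups. The data shapes, node definitions, and the simple t-test are illustrative assumptions, not the authors' pipeline.

# Hedged sketch: group comparison of whole-brain functional connectivity.
import numpy as np
from scipy import stats

def fisher_z(ts):
    """ts: (n_timepoints, n_nodes) array of node time courses."""
    c = np.corrcoef(ts, rowvar=False)   # (n_nodes, n_nodes) Pearson r
    np.fill_diagonal(c, 0.0)            # drop self-connections (r = 1)
    return np.arctanh(c)                # Fisher z-transform

def group_edge_comparison(group_a, group_b):
    """Compare each edge between groups (e.g., DYS vs NI readers).
    group_a, group_b: lists of per-subject (time, nodes) arrays."""
    za = np.array([fisher_z(ts) for ts in group_a])
    zb = np.array([fisher_z(ts) for ts in group_b])
    return stats.ttest_ind(za, zb, axis=0)   # t and p per edge

In practice such edge-wise tests are followed by multiple-comparison correction, which is omitted here for brevity.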
Web-Based Geographic Information System Tool for Accessing Hanford Site Environmental Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Triplett, Mark B.; Seiple, Timothy E.; Watson, David J.
Data volume, complexity, and access issues pose severe challenges for analysts, regulators and stakeholders attempting to efficiently use legacy data to support decision making at the U.S. Department of Energy’s (DOE) Hanford Site. DOE has partnered with the Pacific Northwest National Laboratory (PNNL) on the PHOENIX (PNNL-Hanford Online Environmental Information System) project, which seeks to address data access, transparency, and integration challenges at Hanford to provide effective decision support. PHOENIX is a family of spatially-enabled web applications providing quick access to decades of valuable scientific data and insight through intuitive query, visualization, and analysis tools. PHOENIX realizes broad, public accessibility by relying only on ubiquitous web browsers, eliminating the need for specialized software. It accommodates a wide range of users with intuitive user interfaces that require little or no training to quickly obtain and visualize data. Currently, PHOENIX is actively hosting three applications focused on groundwater monitoring, groundwater clean-up performance reporting, and in-tank monitoring. PHOENIX-based applications are being used to streamline investigative and analytical processes at Hanford, saving time and money. But more importantly, by integrating previously isolated datasets and developing relevant visualization and analysis tools, PHOENIX applications are enabling DOE to discover new correlations hidden in legacy data, allowing them to more effectively address complex issues at Hanford.
2012-01-01
Background: Prosthetic hand users have to rely extensively on visual feedback, which seems to lead to a high conscious burden for the users, in order to manipulate their prosthetic devices. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at the performance results, not taking into account measurements of the user’s mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without the usage of a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. Methods: 10 male subjects (26+/-years old) participated in this study and were asked to come for 2 consecutive days. On the first day the experiment objective, tasks, and experimental setting were explained. Then, they completed a 30-minute guided training. On the second day each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test the subject’s EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. Results: The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance were obtained in the audiovisual modality. Conclusions: The performance improvements when using auditory cues along with vision (multimodal feedback) can be attributed to a reduced attentional demand during the task, possibly reflecting a visual “pop-out” or enhancement effect. Also, the NASA TLX, the EEG’s Alpha and Beta bands, and the heart rate could be used to further evaluate sensory feedback systems in prosthetic applications. PMID:22682425
NASA Astrophysics Data System (ADS)
Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel
2017-03-01
Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost-effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential-frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game that allows users to explore the anatomy of various organs and systems. The system can thus present medical imaging data in three dimensions and allow direct physical interaction and manipulation by the viewer, providing numerous benefits over traditional 2D display and interaction modalities; in our analysis, we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.
OnSight: Multi-platform Visualization of the Surface of Mars
NASA Astrophysics Data System (ADS)
Abercrombie, S. P.; Menzies, A.; Winter, A.; Clausen, M.; Duran, B.; Jorritsma, M.; Goddard, C.; Lidawer, A.
2017-12-01
A key challenge of planetary geology is to develop an understanding of an environment that humans cannot (yet) visit. Instead, scientists rely on visualizations created from images sent back by robotic explorers, such as the Curiosity Mars rover. OnSight is a multi-platform visualization tool that helps scientists and engineers to visualize the surface of Mars. Terrain visualization allows scientists to understand the scale and geometric relationships of the environment around the Curiosity rover, both for scientific understanding and for tactical consideration in safely operating the rover. OnSight includes a web-based 2D/3D visualization tool, as well as an immersive mixed reality visualization. In addition, OnSight offers a novel feature for communication among the science team. Using the multiuser feature of OnSight, scientists can meet virtually on Mars, to discuss geology in a shared spatial context. Combining web-based visualization with immersive visualization allows OnSight to leverage strengths of both platforms. This project demonstrates how 3D visualization can be adapted to either an immersive environment or a computer screen, and will discuss advantages and disadvantages of both platforms.
Influence of Immersive Human Scale Architectural Representation on Design Judgment
NASA Astrophysics Data System (ADS)
Elder, Rebecca L.
Unrealistic visual representations of architecture within our existing environments have lost all reference to the human senses. As a design tool, visual and auditory stimuli can be utilized to determine humans' perception of design. This experiment renders varying building inputs within different sites, simulated with corresponding immersive visual and audio sensory cues. Introducing audio has been proven to influence the way a person perceives a space, yet most inhabitants rely strictly on their sense of vision to make design judgments. Though not as apparent, users prefer spaces that have a better quality of sound and comfort. Through a series of questions, we can begin to analyze whether a design is fit for both the acoustic and the visual environment.
A new spherical scanning system for infrared reflectography of paintings
NASA Astrophysics Data System (ADS)
Gargano, M.; Cavaliere, F.; Viganò, D.; Galli, A.; Ludwig, N.
2017-03-01
Infrared reflectography is an imaging technique used to visualize the underdrawings of ancient paintings; it relies on the fact that most pigment layers are quite transparent to infrared radiation in the spectral band between 0.8 μm and 2.5 μm. InGaAs sensor cameras are nowadays the most used devices to visualize the underdrawings, but due to the small size of the detectors, these cameras are usually mounted on scanning systems to record high-resolution reflectograms. This work describes a portable scanning system prototype based on a spherical scanning geometry, built around a lightweight, low-cost motorized head. The motorized head was built with the purpose of allowing the refocusing adjustment needed to compensate for the variable camera-painting distance during the rotation of the camera. The prototype was tested first in the laboratory and then in situ on the Giotto panel "God the Father with Angels" at a resolution of 256 pixels per inch. The system performance is comparable with that of other reflectographic devices, with the advantage of extending the scanned area up to 1 m × 1 m with a 40 min scanning time. The present configuration can be easily modified to increase the resolution up to 560 pixels per inch or to extend the scanned area up to 2 m × 2 m.
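The refocusing requirement mentioned above follows from simple geometry: as the head pans or tilts in front of a flat painting, the camera-to-painting distance grows with the off-axis angle. A minimal Python sketch, assuming rotation about a fixed point at perpendicular distance d0 from a flat painting (the values are illustrative, not the prototype's parameters):

# Sketch: object distance vs. pan/tilt angle for a flat painting.
import math

def object_distance(d0, pan_deg, tilt_deg=0.0):
    """Camera-to-painting distance along the optical axis; it grows as
    1/cos in each rotation axis, which is what forces refocusing."""
    return d0 / (math.cos(math.radians(pan_deg)) *
                 math.cos(math.radians(tilt_deg)))

# At d0 = 1.0 m, a 30-degree pan moves the target plane to ~1.155 m,
# enough to defocus a long-focal-length InGaAs setup.
print(object_distance(1.0, 30.0))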
2010-03-01
The United States Air Force relies heavily on computer networks to transmit vast amounts of information throughout its organizations and with agencies... and concepts are presented and explored. Chapter II provides background information on the current technologies that...
Visualizing Host-Nation Sentiment at the Tactical Edge
2014-06-01
Human intelligence (HUMINT) and open source intelligence (OSINT) become prioritized above more traditional intelligence based on signals (SIGINT) and electronic sources... reliance on HUMINT and OSINT. Soldiers as peacekeepers must manage multiple information assets and resources, often relying on local and international...
Automatic summarization of soccer highlights using audio-visual descriptors.
Raventós, A; Quijada, R; Torres, Luis; Tarrés, Francesc
2015-01-01
Automatic summary generation of sports video content has been an object of great interest for many years. Although semantic description techniques have been proposed, many approaches still rely on low-level video descriptors that render quite limited results due to the complexity of the problem and to the low capability of the descriptors to represent semantic content. In this paper, a new approach for automatic highlights summarization of soccer videos using audio-visual descriptors is presented. The approach is based on the segmentation of the video sequence into shots that are further analyzed to determine their relevance and interest. Of special interest in the approach is the use of audio information, which provides additional robustness to the overall performance of the summarization system. For every video shot, a set of low- and mid-level audio-visual descriptors is computed and subsequently combined to obtain different relevance measures based on empirical knowledge rules. The final summary is generated by selecting those shots with the highest interest according to the specifications of the user and the results of the relevance measures. A variety of results are presented with real soccer video sequences that prove the validity of the approach.
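To make the selection step concrete, here is a hedged Python sketch of the general scheme described above: each shot receives a relevance score from a weighted combination of audio-visual descriptors, and the highest-scoring shots fill the user-specified summary length. The descriptor names and weights are hypothetical, not the paper's empirical rules.

# Sketch: score shots from descriptors, keep the top-ranked ones.
from dataclasses import dataclass

@dataclass
class Shot:
    start_s: float
    end_s: float
    audio_energy: float      # proxy for crowd/commentator excitement
    motion_activity: float   # e.g., average motion-vector magnitude
    closeup_score: float     # fraction of frames classified as close-up

def relevance(shot: Shot) -> float:
    # Illustrative empirical combination rule.
    return (0.5 * shot.audio_energy + 0.3 * shot.motion_activity
            + 0.2 * shot.closeup_score)

def summarize(shots, target_duration_s):
    """Greedily pick the highest-relevance shots until the requested
    summary length is filled, then restore chronological order."""
    summary, used = [], 0.0
    for s in sorted(shots, key=relevance, reverse=True):
        dur = s.end_s - s.start_s
        if used + dur <= target_duration_s:
            summary.append(s)
            used += dur
    return sorted(summary, key=lambda s: s.start_s)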
Application of computer-generated models using low-bandwidth vehicle data
NASA Astrophysics Data System (ADS)
Heyes, Neil J.
2002-05-01
One of the main issues with remote teleoperation of vehicles is that during visual operation the operator relies on fixed camera positions that ultimately constrain his or her view of the real world. The paper describes a solution developed at QinetiQ in which the operator is given a unique virtual perspective of the vehicle and the surrounding terrain as the vehicle operates. This system helps to solve problems generic to remote systems, such as reducing high data transmission rates and providing 360-degree, three-dimensional operator viewpoints regardless of terrain features and light levels, in near real time. A summary of technologies is listed that could be applied to different types of vehicles and placed in many different situations in order to enhance operator spatial awareness.
Born, Jannis; Galeazzi, Juan M; Stringer, Simon M
2017-01-01
A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet.
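A minimal sketch of the CT-learning idea described above may help: a purely Hebbian, winner-take-all update in which the spatial overlap between successive shifted inputs keeps the same output neuron active, binding the transformed views together. The one-layer simplification, layer sizes, and learning rate are assumptions for illustration; this is not the VisNet implementation itself.

# Sketch: continuous transformation (CT) learning with a pure Hebb rule.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, eta = 100, 20, 0.1
W = rng.random((n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)      # normalize weights

def hebbian_step(x):
    """One CT-learning update for input pattern x (no memory trace)."""
    y = W @ x
    winner = np.argmax(y)                          # competition (WTA)
    W[winner] += eta * y[winner] * x               # Hebb: dw = eta * y * x
    W[winner] /= np.linalg.norm(W[winner])         # renormalize

# Train on a pattern that shifts gradually across the "retina"; the
# overlap between successive shifts keeps the same winner active.
base = np.zeros(n_in)
base[:10] = 1.0
for shift in range(0, 60, 2):
    hebbian_step(np.roll(base, shift))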
Born, Jannis; Stringer, Simon M.
2017-01-01
A subset of neurons in the posterior parietal and premotor areas of the primate brain respond to the locations of visual targets in a hand-centred frame of reference. Such hand-centred visual representations are thought to play an important role in visually-guided reaching to target locations in space. In this paper we show how a biologically plausible, Hebbian learning mechanism may account for the development of localized hand-centred representations in a hierarchical neural network model of the primate visual system, VisNet. The hand-centered neurons developed in the model use an invariance learning mechanism known as continuous transformation (CT) learning. In contrast to previous theoretical proposals for the development of hand-centered visual representations, CT learning does not need a memory trace of recent neuronal activity to be incorporated in the synaptic learning rule. Instead, CT learning relies solely on a Hebbian learning rule, which is able to exploit the spatial overlap that naturally occurs between successive images of a hand-object configuration as it is shifted across different retinal locations due to saccades. Our simulations show how individual neurons in the network model can learn to respond selectively to target objects in particular locations with respect to the hand, irrespective of where the hand-object configuration occurs on the retina. The response properties of these hand-centred neurons further generalise to localised receptive fields in the hand-centred space when tested on novel hand-object configurations that have not been explored during training. Indeed, even when the network is trained with target objects presented across a near continuum of locations around the hand during training, the model continues to develop hand-centred neurons with localised receptive fields in hand-centred space. With the help of principal component analysis, we provide the first theoretical framework that explains the behavior of Hebbian learning in VisNet. PMID:28562618
Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi
2013-12-01
Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian and causal Bayesian inferences for a single cause (source) and two causes (for two senses, such as the visual and auditory systems), respectively. In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding, which is able to mimic the multisensory integration performed by neural centers in the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process of visual and proprioceptive information in humans. The training process in multisensory integration has received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioception training and eight subjects with visual training. Results of the experiments show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) the means of visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrement is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors in the main test stage, because in the main test the subject has to focus on two senses. The results of the experiments in this paper are in agreement with the results of the neural model simulation.
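For readers unfamiliar with the benchmark, single-cause Bayesian integration with Gaussian likelihoods reduces to inverse-variance weighting of the cues. The Python sketch below shows that standard rule; the numbers are illustrative, and this is not the authors' recurrent model.

# Sketch: maximum-likelihood fusion of visual and proprioceptive cues.
def fuse(mu_vis, var_vis, mu_prop, var_prop):
    """Weight each cue by its reliability (inverse variance)."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_prop
    var = 1 / (1 / var_vis + 1 / var_prop)   # fused variance is reduced
    return mu, var

# A sharper visual cue dominates the fused hand-position estimate:
print(fuse(mu_vis=10.0, var_vis=1.0, mu_prop=14.0, var_prop=4.0))
# -> (10.8, 0.8)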
Hanley, M E; Cruickshanks, K L; Dunn, D; Stewart-Jones, A; Goulson, D
2009-03-01
Houseflies (Musca domestica L.) are a major pest species of livestock units and landfill sites. Insecticide resistance has resulted in an increased emphasis on lure-and-kill control methods, but the success of this approach relies on the effective attraction of houseflies with olfactory or visual stimuli. This study examined the efficacy of olfactory (cuticular hydrocarbons) or visual (colours and groups of flies) attractants in a commercial poultry unit. Despite simulating the cuticular hydrocarbon profiles of male and female houseflies, we found no significant increase in the number of individuals lured to traps and no sex-specific responses were evident. The use of target colours selected to match the three peaks in housefly visual spectral sensitivity yielded no significant increase in the catch rate of traps to which they were applied. This study also demonstrated that male and female flies possess significantly different spectral reflectance (males are brighter at 320-470 nm; females are brighter at 470-670 nm). An experiment incorporating groups of recently killed flies from which cuticular hydrocarbons were either removed by solvent or left intact also failed to show any evidence of olfactory or visual attraction for houseflies of either sex. This study concluded that variations of the most commonly applied methods of luring houseflies to traps in commercial livestock units fail to significantly increase capture rates. These results support commonly observed inconsistencies associated with using olfactory or visual stimuli in lure-and-kill systems, possibly because field conditions lessen the attractant properties observed in laboratory experiments.
Ceponiene, R; Westerfield, M; Torki, M; Townsend, J
2008-06-18
Major accounts of aging implicate changes in processing external stimulus information. Little is known about differential effects of auditory and visual sensory aging, and the mechanisms of sensory aging are still poorly understood. Using event-related potentials (ERPs) elicited by unattended stimuli in younger (M=25.5 yrs) and older (M=71.3 yrs) subjects, this study examined mechanisms of sensory aging under minimized attention conditions. Auditory and visual modalities were examined to address modality-specificity vs. generality of sensory aging. Between-modality differences were robust. The earlier-latency responses (P1, N1) were unaffected in the auditory modality but were diminished in the visual modality. The auditory N2 and early visual N2 were diminished. Two similarities between the modalities were age-related enhancements in the late P2 range and positive behavior-early N2 correlation, the latter suggesting that N2 may reflect long-latency inhibition of irrelevant stimuli. Since there is no evidence for salient differences in neuro-biological aging between the two sensory regions, the observed between-modality differences are best explained by the differential reliance of auditory and visual systems on attention. Visual sensory processing relies on facilitation by visuo-spatial attention, withdrawal of which appears to be more disadvantageous in older populations. In contrast, auditory processing is equipped with powerful inhibitory capacities. However, when the whole auditory modality is unattended, thalamo-cortical gating deficits may not manifest in the elderly. In contrast, ERP indices of longer-latency, stimulus-level inhibitory modulation appear to diminish with age.
Does It Really Matter Where You Look When Walking on Stairs? Insights from a Dual-Task Study
Miyasike-daSilva, Veronica; McIlroy, William E.
2012-01-01
Although the visual system is known to provide relevant information to guide stair locomotion, there is less understanding of the specific contributions of foveal and peripheral visual field information. The present study investigated the specific role of foveal vision during stair locomotion and ground-stairs transitions by using a dual-task paradigm to influence the ability to rely on foveal vision. Fifteen healthy adults (26.9±3.3 years; 8 females) ascended a 7-step staircase under four conditions: no secondary tasks (CONTROL); gaze fixation on a fixed target located at the end of the pathway (TARGET); visual reaction time task (VRT); and auditory reaction time task (ART). Gaze fixations towards stair features were significantly reduced in TARGET and VRT compared to CONTROL and ART. Despite the reduced fixations, participants were able to successfully ascend stairs and rarely used the handrail. Step time was increased during VRT compared to CONTROL in most stair steps. Navigating on the transition steps did not require more gaze fixations than the middle steps. However, reaction time tended to increase during locomotion on transitions suggesting additional executive demands during this phase. These findings suggest that foveal vision may not be an essential source of visual information regarding stair features to guide stair walking, despite the unique control challenges at transition phases as highlighted by phase-specific challenges in dual-tasking. Instead, the tendency to look at the steps in usual conditions likely provides a stable reference frame for extraction of visual information regarding step features from the entire visual field. PMID:22970297
Turbidity interferes with foraging success of visual but not chemosensory predators
Smee, Delbert L.
2015-01-01
Predation can significantly affect prey populations and communities, but predator effects can be attenuated when abiotic conditions interfere with foraging activities. In estuarine communities, turbidity can affect species richness and abundance and is changing in many areas because of coastal development. Many fish species are less efficient foragers in turbid waters, and previous research revealed that in elevated turbidity, fish are less abundant whereas crabs and shrimp are more abundant. We hypothesized that turbidity altered predatory interactions in estuaries by interfering with visually-foraging predators and prey but not with organisms relying on chemoreception. We measured the effects of turbidity on the predation rates of two model predators: a visual predator (pinfish, Lagodon rhomboides) and a chemosensory predator (blue crabs, Callinectes sapidus) in clear and turbid water (0 and ∼100 nephelometric turbidity units). Feeding assays were conducted with two prey items, mud crabs (Panopeus spp.) that rely heavily on chemoreception to detect predators, and brown shrimp (Farfantepenaus aztecus) that use both chemical and visual cues for predator detection. Because turbidity reduced pinfish foraging on both mud crabs and shrimp, the changes in predation rates are likely driven by turbidity attenuating fish foraging ability and not by affecting prey vulnerability to fish consumers. Blue crab foraging was unaffected by turbidity, and blue crabs were able to successfully consume nearly all mud crab and shrimp prey. Turbidity can influence predator–prey interactions by reducing the feeding efficiency of visual predators, providing a competitive advantage to chemosensory predators, and altering top-down control in food webs. PMID:26401444
Turbidity interferes with foraging success of visual but not chemosensory predators.
Lunt, Jessica; Smee, Delbert L
2015-01-01
Predation can significantly affect prey populations and communities, but predator effects can be attenuated when abiotic conditions interfere with foraging activities. In estuarine communities, turbidity can affect species richness and abundance and is changing in many areas because of coastal development. Many fish species are less efficient foragers in turbid waters, and previous research revealed that in elevated turbidity, fish are less abundant whereas crabs and shrimp are more abundant. We hypothesized that turbidity altered predatory interactions in estuaries by interfering with visually-foraging predators and prey but not with organisms relying on chemoreception. We measured the effects of turbidity on the predation rates of two model predators: a visual predator (pinfish, Lagodon rhomboides) and a chemosensory predator (blue crabs, Callinectes sapidus) in clear and turbid water (0 and ∼100 nephelometric turbidity units). Feeding assays were conducted with two prey items, mud crabs (Panopeus spp.) that rely heavily on chemoreception to detect predators, and brown shrimp (Farfantepenaus aztecus) that use both chemical and visual cues for predator detection. Because turbidity reduced pinfish foraging on both mud crabs and shrimp, the changes in predation rates are likely driven by turbidity attenuating fish foraging ability and not by affecting prey vulnerability to fish consumers. Blue crab foraging was unaffected by turbidity, and blue crabs were able to successfully consume nearly all mud crab and shrimp prey. Turbidity can influence predator-prey interactions by reducing the feeding efficiency of visual predators, providing a competitive advantage to chemosensory predators, and altering top-down control in food webs.
Predicting Airport Screening Officers' Visual Search Competency With a Rapid Assessment.
Mitroff, Stephen R; Ericson, Justin M; Sharpe, Benjamin
2018-03-01
Objective: The study's objective was to assess a new personnel selection and assessment tool for aviation security screeners. A mobile app was modified to create the tool, and the question was whether it could predict professional screeners' on-job performance. Background: A variety of professions (airport security, radiology, the military, etc.) rely on visual search performance, that is, being able to detect targets. Given the importance of such professions, it is necessary to maximize performance, and one means to do so is to select individuals who excel at visual search. A critical question is whether it is possible to predict search competency within a professional search environment. Method: Professional searchers from the U.S. Transportation Security Administration (TSA) completed a rapid assessment on a tablet-based X-ray simulator (XRAY Screener, derived from the mobile technology app Airport Scanner; Kedlin Company). The assessment contained 72 trials that were simulated X-ray images of bags. Participants searched for prohibited items and tapped on them with their finger. Results: Performance on the assessment significantly related to on-job performance measures for the TSA officers, such that better XRAY Screener performers were both more accurate and faster at the actual airport checkpoint. Conclusion: XRAY Screener successfully predicted on-job performance for professional aviation security officers. While questions remain about the underlying cognitive mechanisms, this quick assessment was found to significantly predict on-job success for a task that relies on visual search performance. Application: It may be possible to quickly assess an individual's visual search competency, which could help organizations select new hires and assess their current workforce.
Normal form from biological motion despite impaired ventral stream function.
Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P
2011-04-01
We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernable structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+ which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies we used point light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms, but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion. Copyright © 2011 Elsevier Ltd. All rights reserved.
Kapeller, Christoph; Kamada, Kyousuke; Ogawa, Hiroshi; Prueckl, Robert; Scharinger, Josef; Guger, Christoph
2014-01-01
A brain-computer interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with a LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and leads to more dominant evoked potentials due to visual stimulation. This work is focused on BCIs based on visual evoked potentials (VEP) and their capability as a continuous control interface for the augmentation of video applications. One 35-year-old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, while each was flickering with a code sequence. After a calibration run including 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target based on the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run in combination with visual feedback of the current selection. Additionally, an algorithm was implemented that allowed false positive selections to be suppressed, which allowed the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed. PMID:25147509
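The paper reports a linear classifier trained on calibration data; as an accessible stand-in, the Python sketch below classifies by correlating an incoming ECoG buffer against per-target average templates, a common baseline for code-based VEP decoding. The data shapes, four-target setup, and the template-correlation substitution are assumptions for illustration.

# Sketch: template-correlation decoding of code-based VEPs.
import numpy as np

def build_templates(calib_epochs, labels, n_targets=4):
    """calib_epochs: (n_trials, n_samples) responses; labels: integer
    array in 0..n_targets-1. Returns one average template per target."""
    return np.array([calib_epochs[labels == t].mean(axis=0)
                     for t in range(n_targets)])

def classify(buffer, templates):
    """Pick the flickering target whose template best matches the buffer."""
    r = [np.corrcoef(buffer, tpl)[0, 1] for tpl in templates]
    return int(np.argmax(r)), max(r)   # (selected target, confidence)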
Astronomical Data and Information Visualization
NASA Astrophysics Data System (ADS)
Goodman, Alyssa A.
2010-01-01
As the size and complexity of data sets increase, the need to "see" them more clearly increases as well. In the past, many scientists saw "fancy" data and information visualization as necessary for "outreach," but not for research. In this talk, I will demonstrate, using specific examples, why more and more scientists--not just astronomers--are coming to rely upon the development of new visualization strategies not just to present their data, but to understand it. Principal examples will be drawn from the "Astronomical Medicine" project at Harvard's Initiative in Innovative Computing, and from the "Seamless Astronomy" effort, which is co-sponsored by the VAO (NASA/NSF) and Microsoft Research.
NASA Astrophysics Data System (ADS)
Liu, Yan; Shen, Yuecheng; Ruan, Haowen; Brodie, Frank L.; Wong, Terence T. W.; Yang, Changhuei; Wang, Lihong V.
2018-01-01
Normal development of the visual system in infants relies on clear images being projected onto the retina, which can be disrupted by lens opacity caused by congenital cataract. This disruption, if uncorrected in early life, results in amblyopia (permanently decreased vision even after removal of the cataract). Doctors are able to prevent amblyopia by removing the cataract during the first several weeks of life, but this surgery risks a host of complications, which can be equally visually disabling. Here, we investigated the feasibility of focusing light noninvasively through highly scattering cataractous lenses to stimulate the retina, thereby preventing amblyopia. This approach would allow the cataractous lens removal surgery to be delayed and hence greatly reduce the risk of complications from early surgery. Employing a wavefront shaping technique named time-reversed ultrasonically encoded optical focusing in reflection mode, we focused 532-nm light through a highly scattering ex vivo adult human cataractous lens. This work demonstrates a potential clinical application of wavefront shaping techniques.
Swain, Carol-Ann; Sawicki, Steven; Addison, Diane; Katz, Benjamin; Piersanti, Kelly; Baim-Lance, Abigail; Gordon, Daniel; Anderson, Bridget J; Nash, Denis; Steinbock, Clemens; Agins, Bruce
2018-04-02
Existing data dissemination structures primarily rely on top-down approaches. Unless designed with the end user in mind, this may impair data-driven clinical improvements to Human Immunodeficiency Virus (HIV) prevention and care. In this study, we implemented a data visualization activity to create region-specific data presentations collaboratively with HIV providers, consumers of HIV care, and New York State (NYS) Department of Health AIDS Institute staff for use in local HIV care decision-making. Data from the NYS HIV Surveillance Registry (2009-2013) and HIV care facilities (2010-2015) participating in a Health Resources and Services Administration (HRSA) Systems Linkages and Access to Care project were used. Each data package incorporated visuals for: linkage to HIV care, retention in care and HIV viral suppression. End-users were vocal about their data needs and their capacity to interpret public health data. This experience suggests that data dissemination strategies should incorporate input from the end user to improve comprehension and optimize HIV care.
The Active Side of Stereopsis: Fixation Strategy and Adaptation to Natural Environments.
Gibaldi, Agostino; Canessa, Andrea; Sabatini, Silvio P
2017-03-20
Depth perception in near viewing relies strongly on the interpretation of binocular retinal disparity to obtain stereopsis. Statistical regularities of retinal disparities have been claimed to greatly impact the neural mechanisms that underlie binocular vision, both to facilitate perceptual decisions and to reduce computational load. In this paper, we designed a novel and unconventional approach to assess the role of fixation strategy in conditioning the statistics of retinal disparity. We integrated accurate, realistic three-dimensional models of natural scenes with binocular eye movement recording to obtain accurate ground-truth statistics of the retinal disparity experienced by a subject in near viewing. Our results show how the organization of the human binocular visual system is finely adapted to the disparity statistics characterizing actual fixations, revealing a novel role of active fixation strategy in binocular visual function. This suggests an ecological explanation for the intrinsic preference of stereopsis for a close central object surrounded by a far background, as an early binocular aspect of the figure-ground segregation process.
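For concreteness, the ground-truth quantity at stake, horizontal retinal disparity under binocular fixation, can be approximated with elementary geometry. The small-angle Python sketch below uses an illustrative interocular distance and viewing distances; it is not the study's full 3D pipeline.

# Sketch: horizontal disparity of a point given binocular fixation.
import math

IOD = 0.063  # interocular distance in metres (typical adult value)

def horizontal_disparity(fix_dist, point_dist, iod=IOD):
    """Disparity in radians: vergence demanded by the point minus the
    vergence at fixation. Positive = nearer than fixation (crossed)."""
    return iod * (1.0 / point_dist - 1.0 / fix_dist)

# A point 5 cm nearer than a 40 cm fixation in near viewing:
d = horizontal_disparity(0.40, 0.35)
print(math.degrees(d) * 60, "arcmin")   # ~77 arcmin, crossed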
Biophysics of object segmentation in a collision-detecting neuron
Dewell, Richard Burkett
2018-01-01
Collision avoidance is critical for survival, including in humans, and many species possess visual neurons exquisitely sensitive to objects approaching on a collision course. Here, we demonstrate that a collision-detecting neuron can detect the spatial coherence of a simulated impending object, thereby carrying out a computation akin to object segmentation critical for proper escape behavior. At the cellular level, object segmentation relies on a precise selection of the spatiotemporal pattern of synaptic inputs by dendritic membrane potential-activated channels. One channel type linked to dendritic computations in many neural systems, the hyperpolarization-activated cation channel, HCN, plays a central role in this computation. Pharmacological block of HCN channels abolishes the neuron's spatial selectivity and impairs the generation of visually guided escape behaviors, making it directly relevant to survival. Additionally, our results suggest that the interaction of HCN and inactivating K+ channels within active dendrites produces neuronal and behavioral object specificity by discriminating between complex spatiotemporal synaptic activation patterns. PMID:29667927
NASA Astrophysics Data System (ADS)
Van De Ven, C. J. C.; Mumford, Kevin G.
2018-05-01
The study of gas-water mass transfer in porous media is important in many applications, including unconventional resource extraction, carbon storage, deep geological waste storage, and remediation of contaminated groundwater, all of which rely on an understanding of the fate and transport of free and dissolved gas. The novel visual technique developed in this study provided both quantitative and qualitative observations of gas-water mass transfer. Findings included the interaction between free-gas architecture and dissolved plume migration, plume geometry, and plume longevity. The technique was applied to the injection of CO2 in source patterns expected for stray gas originating from oil and gas operations to measure dissolved-phase concentrations of CO2 at high spatial and temporal resolutions. The data set is the first of its kind to provide high-resolution quantification of gas-water dissolution, and will facilitate an improved understanding of the fundamental processes of gas movement and fate in these complex systems.
2017-01-01
This technical report details the results of an uncontrolled study of EyeGuide Focus, a 10-second concussion management tool that relies on eye tracking to determine potential impairment of visual attention, often an indicator of mild traumatic brain injury (mTBI). Essentially, people who can keep steady and accurate visual attention on a moving object in their environment likely suffer from no impairment. However, if after a potential mTBI event subjects cannot keep attention on a moving object in the normal way demonstrated in their previous healthy baseline tests, this may indicate possible neurological impairment. Now deployed at multiple locations across the United States, Focus (EyeGuide, Lubbock, Texas, United States) has to date recorded more than 4,000 test scores. Our analysis of these results shows the promise of Focus as a low-cost, ocular-based impairment test for assessing potential neurological impairment caused by mTBI in subjects ages eight and older. PMID:28630809
Kelly, Michael
2017-05-15
This technical report details the results of an uncontrolled study of EyeGuide Focus, a 10-second concussion management tool that relies on eye tracking to determine potential impairment of visual attention, often an indicator of mild traumatic brain injury (mTBI). Essentially, people who can keep steady and accurate visual attention on a moving object in their environment likely suffer from no impairment. However, if after a potential mTBI event subjects cannot keep attention on a moving object in the normal way demonstrated in their previous healthy baseline tests, this may indicate possible neurological impairment. Now deployed at multiple locations across the United States, Focus (EyeGuide, Lubbock, Texas, United States) has to date recorded more than 4,000 test scores. Our analysis of these results shows the promise of Focus as a low-cost, ocular-based impairment test for assessing potential neurological impairment caused by mTBI in subjects ages eight and older.
Human infrared vision is triggered by two-photon chromophore isomerization
Palczewska, Grazyna; Vinberg, Frans; Stremplewski, Patrycjusz; Bircher, Martin P.; Salom, David; Komar, Katarzyna; Zhang, Jianye; Cascella, Michele; Wojtkowski, Maciej; Kefalov, Vladimir J.; Palczewski, Krzysztof
2014-01-01
Vision relies on photoactivation of visual pigments in rod and cone photoreceptor cells of the retina. The human eye structure and the absorption spectra of pigments limit our visual perception of light. Our visual perception is most responsive to stimulating light in the 400- to 720-nm (visible) range. First, we demonstrate by psychophysical experiments that humans can perceive infrared laser emission as visible light. Moreover, we show that mammalian photoreceptors can be directly activated by near infrared light with a sensitivity that paradoxically increases at wavelengths above 900 nm, and display quadratic dependence on laser power, indicating a nonlinear optical process. Biochemical experiments with rhodopsin, cone visual pigments, and a chromophore model compound 11-cis-retinyl-propylamine Schiff base demonstrate the direct isomerization of visual chromophore by a two-photon chromophore isomerization. Indeed, quantum mechanics modeling indicates the feasibility of this mechanism. Together, these findings clearly show that human visual perception of near infrared light occurs by two-photon isomerization of visual pigments. PMID:25453064
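The inference from quadratic power dependence to a two-photon mechanism can be made explicit with a standard textbook relation (not taken from the paper itself):

R_{1p} \propto \sigma_1 I, \qquad R_{2p} \propto \sigma_2 I^2 \quad\Longrightarrow\quad \frac{R_{2p}(2I)}{R_{2p}(I)} = 4,

where $I$ is the instantaneous light intensity and $\sigma_1$, $\sigma_2$ are the one- and two-photon absorption cross-sections. Doubling the laser power thus quadruples the two-photon isomerization rate, whereas a one-photon response would merely double, matching the quadratic dependence reported above.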
NASA Astrophysics Data System (ADS)
Lempe, B.; Taudt, Ch.; Maschke, R.; Gruening, J.; Ernstberger, M.; Basan, F.; Baselt, T.; Grunert, R.; Hartmann, P.
2013-02-01
Minimally invasive surgery methods have received growing attention in recent years. In vitally important areas, it is crucial for the surgeon to have precise knowledge of the tissue structure. Visualization of arteries is especially desirable, as their destruction can be lethal to the patient. To meet this requirement, the study presents a novel assistance system for endoscopic surgery. While state-of-the-art systems rely on pre-operative data such as computed-tomography maps and require the use of radiation, the goal of the presented approach is to clarify subjacent blood vessels on live images from the endoscope camera system. Based on the transmission and reflection spectra of various human tissues, a prototype system with a NIR illumination unit working at 808 nm was established. Several image filtering, processing, and enhancement techniques were investigated and evaluated on the raw pictures in order to obtain high-quality results. The most important were contrast enhancement and thresholding by the difference-of-Gaussians method. Based on that, it is possible to rectify a fragmented artery pattern and extract geometrical information about the structure in terms of position and orientation. By superposing the original image and the extracted segment, the surgeon is assisted with valuable live pictures of the region of interest. The whole system has been tested at laboratory scale. An outlook on the integration of such a system into a clinical environment and its obvious benefits are discussed.
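As a concrete illustration of the enhancement steps named above, the following is a minimal Python/OpenCV sketch of contrast stretching followed by difference-of-Gaussians thresholding. The kernel sizes and threshold value are illustrative assumptions, not the prototype's published parameters.

# Sketch: vessel enhancement on a raw NIR frame via DoG thresholding.
import cv2
import numpy as np

def enhance_vessels(frame_gray):
    """frame_gray: 8-bit single-channel NIR endoscope image."""
    stretched = cv2.normalize(frame_gray, None, 0, 255, cv2.NORM_MINMAX)
    blur_small = cv2.GaussianBlur(stretched, (5, 5), 0)
    blur_large = cv2.GaussianBlur(stretched, (21, 21), 0)
    dog = cv2.subtract(blur_small, blur_large)   # band-pass: vessel-scale edges
    _, mask = cv2.threshold(dog, 10, 255, cv2.THRESH_BINARY)
    return mask                                  # candidate artery pattern

# The mask can then be cleaned up and superimposed on the live camera image.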
Sensitivity to timing and order in human visual cortex.
Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel
2015-03-01
Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.
Wade, Nicholas J
2008-01-01
The art of visual communication is not restricted to the fine arts. Scientists also apply art in communicating their ideas graphically. Diagrams of anatomical structures, like the eye and visual pathways, and figures displaying specific visual phenomena have assisted in the communication of visual ideas for centuries. It is often the case that the development of a discipline can be traced through graphical representations and this is explored here in the context of concepts of visual science. As with any science, vision can be subdivided in a variety of ways. The classification adopted is in terms of optics, anatomy, and visual phenomena; each of these can in turn be further subdivided. Optics can be considered in terms of the nature of light and its transmission through the eye. Understanding of the gross anatomy of the eye and visual pathways was initially dependent upon the skills of the anatomist whereas microanatomy relied to a large extent on the instruments that could resolve cellular detail, allied to the observational skills of the microscopist. Visual phenomena could often be displayed on the printed page, although novel instruments expanded the scope of seeing, particularly in the nineteenth century.
Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception
Su, Yi-Huang; Salazar-López, Elvira
2016-01-01
Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900
Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception.
Su, Yi-Huang; Salazar-López, Elvira
2016-01-01
Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.
Bremner, J Gavin; Hatton, Fran; Foster, Kirsty A; Mason, Uschi
2011-09-01
Although there is much research on infants' ability to orient in space, little is known regarding the information they use to do so. This research uses a rotating room to evaluate the relative contribution of visual and vestibular information to location of a target following bodily rotation. Adults responded precisely on the basis of visual flow information. Seven-month-olds responded mostly on the basis of visual flow, whereas 9-month-olds responded mostly on the basis of vestibular information, and 12-month-olds responded mostly on the basis of visual information. Unlike adults, infants of all ages showed partial influence by both modalities. Additionally, 7-month-olds were capable of using vestibular information when there was no visual information for movement or stability, and 9-month-olds still relied on vestibular information when visual information was enhanced. These results are discussed in the context of neuroscientific evidence regarding visual-vestibular interaction, and in relation to possible changes in reliance on visual and vestibular information following acquisition of locomotion. © 2011 Blackwell Publishing Ltd.
28 CFR 36.303 - Auxiliary aids and services.
Code of Federal Regulations, 2011 CFR
2011-07-01
...; videotext displays; accessible electronic and information technology; or other effective methods of making... electronic and information technology; or other effective methods of making visually delivered materials... circumstances. (4) A public accommodation shall not rely on a minor child to interpret or facilitate...
An Experimental Evaluation of a Field Sobriety Test Battery in the Marine Environment
DOT National Transportation Integrated Search
1990-06-01
This report describes an investigation of the accuracy of an FST (Field Sobriety Test) battery used in the marine environment. FSTs rely on the observation and measurement of the effect of alcohol intoxication on coordination, visual tracking and ...
Feature saliency and feedback information interactively impact visual category learning
Hammer, Rubi; Sloutsky, Vladimir; Grill-Spector, Kalanit
2015-01-01
Visual category learning (VCL) involves detecting which features are most relevant for categorization. VCL relies on attentional learning, which enables effectively redirecting attention to an object's features most relevant for categorization while 'filtering out' irrelevant features. When features relevant for categorization are not salient, VCL also relies on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks in which they learned to categorize novel stimuli by detecting the feature dimension relevant for categorization. Tasks varied both in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks) and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load, vs. tasks with high-information, non-ambiguous feedback). We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that an increased attentional load, associated with the processing of moderately ambiguous feedback, has little effect on VCL when features are salient. In low-saliency tasks, VCL relied on slower perceptual learning; but when the feedback was highly informative, participants were able to ultimately attain the same performance as in the high-saliency VCL tasks. However, VCL was significantly compromised in the low-saliency, mid-information feedback task. We suggest that such low-saliency, mid-information learning scenarios are characterized by a 'cognitive loop paradox' in which two interdependent learning processes have to take place simultaneously. PMID:25745404
Dalton, Brian H; Rasman, Brandon G; Inglis, J Timothy; Blouin, Jean-Sébastien
2017-04-15
We tested perceived head-on-feet orientation and the direction of vestibular-evoked balance responses in passively and actively held head-turned postures. The direction of vestibular-evoked balance responses was not aligned with perceived head-on-feet orientation while maintaining prolonged passively held head-turned postures. Furthermore, static visual cues of head-on-feet orientation did not update the estimate of head posture for the balance controller. A prolonged actively held head-turned posture did not elicit a rotation in the direction of the vestibular-evoked balance response despite a significant rotation in perceived angular head posture. It is proposed that conscious perception of head posture and the transformation of vestibular signals for standing balance relying on this head posture are not dependent on the same internal representation. Rather, the balance system may operate under its own sensorimotor principles, which are partly independent from perception. Vestibular signals used for balance control must be integrated with other sensorimotor cues to allow transformation of descending signals according to an internal representation of body configuration. We explored two alternative models of sensorimotor integration that propose (1) a single internal representation of head-on-feet orientation is responsible for perceived postural orientation and standing balance or (2) conscious perception and balance control are driven by separate internal representations. During three experiments, participants stood quietly while passively or actively maintaining a prolonged head-turned posture (>10 min). Throughout the trials, participants intermittently reported their perceived head angular position, and subsequently electrical vestibular stimuli were delivered to elicit whole-body balance responses. Visual recalibration of head-on-feet posture was used to determine whether static visual cues are used to update the internal representation of body configuration for perceived orientation and standing balance. All three experiments involved situations in which the vestibular-evoked balance response was not orthogonal to perceived head-on-feet orientation, regardless of the visual information provided. For prolonged head-turned postures, balance responses consistent with actual head-on-feet posture occurred only during the active condition. Our results indicate that conscious perception of head-on-feet posture and vestibular control of balance do not rely on the same internal representation, but instead treat sensorimotor cues in parallel and may arrive at different conclusions regarding head-on-feet posture. The balance system appears to bypass static visual cues of postural orientation and mainly use other sensorimotor signals of head-on-feet position to transform vestibular signals of head motion, a mechanism appropriate for most daily activities. © 2016 The Authors. The Journal of Physiology © 2016 The Physiological Society.
Remote operation: a selective review of research into visual depth perception.
Reinhardt-Rutland, A H
1996-07-01
Some perceptual-motor operations are performed remotely; examples include the handling of life-threatening materials and surgical procedures. A camera conveys the site of operation to a TV monitor, so depth perception relies mainly on pictorial information, perhaps with enhancement of the occlusion cue by motion. However, motion information such as motion parallax is not likely to be important. The effectiveness of pictorial information is diminished by monocular and binocular information conveying the flatness of the screen and by difficulties in scaling: only a degree of relative depth can be conveyed. Furthermore, pictorial information can mislead. Depth perception is probably adequate in remote operation if target objects are well separated, with well-defined edges and familiar shapes. Stereoscopic viewing systems are being developed to introduce binocular information to remote operation. However, stereoscopic viewing is problematic because binocular disparity conflicts with convergence and monocular information. An alternative strategy to improve precision in remote operation may be to rely on individuals who lack binocular function: there is redundancy in depth information, and such individuals seem to compensate for the lack of binocular function.
MacLin, Otto H; Meissner, Christian A; Zimmerman, Laura A
2005-05-01
Eyewitness identification evidence is an important aspect of our legal system. Society relies on witnesses to identify suspects whom they have observed during the commission of a crime. Because a witness has only a mental representation of the individual he or she observed, law enforcement must rely on verbal descriptions and identification procedures to document eyewitness evidence. The present article introduces and details a computer program, referred to as PC_Eyewitness (PCE), which can be used in laboratories to conduct research on eyewitness memory. PCE is a modular program written in Visual Basic 6.0 that allows a researcher to present stimuli to a participant, to conduct distractor tasks, to elicit verbal descriptors regarding a target individual, and to present a lineup for the participant to provide an identification response. To illustrate the versatility of the program, several classic studies in the eyewitness literature are recreated in the context of PCE. The program is also shown to have applications in the conduct of field research and for use by law enforcement to administer lineups in everyday practice. PCE is distributed at no cost.
Cognitive aspects of haptic form recognition by blind and sighted subjects.
Bailes, S M; Lambert, R M
1986-11-01
Studies using haptic form recognition tasks have generally concluded that the adventitiously blind perform better than the congenitally blind, implicating the importance of early visual experience in improved spatial functioning. The hypothesis was tested that the adventitiously blind have retained some ability to encode successive information obtained haptically in terms of a global visual representation, while the congenitally blind use a coding system based on successive inputs. Eighteen blind (adventitiously and congenitally) and 18 sighted (blindfolded and performing with vision) subjects were tested on their recognition of raised line patterns when the standard was presented in segments: in immediate succession, or with unfilled intersegmental delays of 5, 10, or 15 seconds. The results did not support the above hypothesis. Three main findings were obtained: normally sighted subjects were both faster and more accurate than the other groups; all groups improved in accuracy of recognition as a function of length of interstimulus interval; sighted subjects tended to report using strategies with a strong verbal component while the blind tended to rely on imagery coding. These results are explained in terms of information-processing theory consistent with dual encoding systems in working memory.
Ouellet, Marc; Santiago, Julio; Israeli, Ziv; Gabay, Shai
2010-01-01
Spanish and English speakers tend to conceptualize time as running from left to right along a mental line. Previous research suggests that this representational strategy arises from the participants' exposure to a left-to-right writing system. However, direct evidence supporting this assertion suffers from several limitations and relies only on the visual modality. This study put the reading hypothesis to a direct test using an auditory task. Participants from two groups (Spanish and Hebrew) differing in the directionality of their orthographic system had to discriminate the temporal reference (past or future) of verbs and adverbs presented auditorily to either the left or right ear by pressing a left or a right key. Spanish participants were faster responding to past words with the left hand and to future words with the right hand, whereas Hebrew participants showed the opposite pattern. Our results demonstrate that the left-right mapping of time is not restricted to the visual modality and that the direction of reading accounts for the preferred directionality of the mental time line. These results are discussed in the context of a possible mechanism underlying the effects of reading direction on highly abstract conceptual representations.
Visualizing period fluctuations in strained-layer superlattices with scanning tunneling microscopy
NASA Astrophysics Data System (ADS)
Kanedy, K.; Lopez, F.; Wood, M. R.; Gmachl, C. F.; Weimer, M.; Klem, J. F.; Hawkins, S. D.; Shaner, E. A.; Kim, J. K.
2018-01-01
We show how cross-sectional scanning tunneling microscopy (STM) may be used to accurately map the period fluctuations throughout epitaxial, strained-layer superlattices based on the InAs/InAsSb and InGaAs/InAlAs material systems. The concept, analogous to Bragg's law in high-resolution x-ray diffraction, relies on an analysis of the [001]-convolved reciprocal-space satellite peaks obtained from discrete Fourier transforms of individual STM images. Properly implemented, the technique enables local period measurements that reliably discriminate vertical fluctuations localized to within ~5 superlattice repeats along the [001] growth direction and orthogonal, lateral fluctuations localized to within ~40 nm along <110> directions in the growth plane. While not as accurate as x-ray diffraction, the inherent, single-image measurement error associated with the method may be made as small as 0.1%, allowing the vertical or lateral period fluctuations contributing to inhomogeneous energy broadening and carrier localization in these structures to be pinpointed and quantified. The direct visualization of unexpectedly large, lateral period fluctuations on nanometer length scales in both strain-balanced systems supports a common understanding in terms of correlated interface roughness.
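The satellite-peak analysis reduces, in its simplest form, to reading the first non-DC peak of a Fourier transform taken along the growth direction. A minimal sketch in Python under that assumption (the authors' actual pipeline is more elaborate; the column-averaging and peak-picking here are illustrative):

```python
import numpy as np

def local_period(image, pixel_size_nm):
    """Estimate the local superlattice period from a single STM image.

    Collapses the image to a 1D profile along the [001] growth axis,
    takes its discrete Fourier transform, and reads the period off the
    first (strongest non-DC) satellite peak.
    """
    profile = image.mean(axis=1)                  # average across the in-plane axis
    profile = profile - profile.mean()            # suppress the DC term
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size, d=pixel_size_nm)  # cycles per nm
    k = np.argmax(spectrum[1:]) + 1               # skip the zero-frequency bin
    return 1.0 / freqs[k]                         # period in nm
```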
Extracting semantics from audio-visual content: the final frontier in multimedia retrieval.
Naphade, M R; Huang, T S
2002-01-01
Multimedia understanding is a fast emerging interdisciplinary research area. There is tremendous potential for effective use of multimedia content through intelligent analysis. Diverse application areas are increasingly relying on multimedia understanding systems. Advances in multimedia understanding are related directly to advances in signal processing, computer vision, pattern recognition, multimedia databases, and smart sensors. We review the state-of-the-art techniques in multimedia retrieval. In particular, we discuss how multimedia retrieval can be viewed as a pattern recognition problem. We discuss how reliance on powerful pattern recognition and machine learning techniques is increasing in the field of multimedia retrieval. We review the state-of-the-art multimedia understanding systems with particular emphasis on a system for semantic video indexing centered around multijects and multinets. We discuss how semantic retrieval is centered around concepts and context and the various mechanisms for modeling concepts and context.
Prognostic Physiology: Modeling Patient Severity in Intensive Care Units Using Radial Domain Folding
Joshi, Rohit; Szolovits, Peter
2012-01-01
Real-time scalable predictive algorithms that can mine big health data as the care is happening can become the new “medical tests” in critical care. This work describes a new unsupervised learning approach, radial domain folding, to scale and summarize the enormous amount of data collected and to visualize the degradations or improvements in multiple organ systems in real time. Our proposed system is based on learning multi-layer, lower-dimensional abstractions from routinely generated patient data in modern Intensive Care Units (ICUs), and is dramatically different from most current work in ICU data mining, which relies on building supervised predictive models using commonly measured clinical observations. We demonstrate that our system discovers abstract patient states that summarize a patient’s physiology. Further, we show that a logistic regression model trained exclusively on our learned layer outperforms a customized SAPS II score on the mortality prediction task. PMID:23304406
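The reported comparison is straightforward to reproduce in outline: a plain logistic model fit only to the learned abstractions, scored on mortality prediction. A hedged sketch with placeholder data (X_folded, y and the feature count are stand-ins, not the paper's cohort):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data: X_folded stands in for the low-dimensional patient-state
# abstractions produced by the unsupervised layer; y for mortality labels.
rng = np.random.default_rng(0)
X_folded = rng.normal(size=(500, 8))   # 500 ICU stays, 8 learned features
y = rng.integers(0, 2, size=500)       # 0 = survived, 1 = died (random here)

# The paper's comparison in miniature: a plain logistic model trained only
# on the learned layer, scored by cross-validated AUC.
clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, X_folded, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.3f}")  # ~0.5 on random placeholder data
```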
AllAboard: Visual Exploration of Cellphone Mobility Data to Optimise Public Transport.
Di Lorenzo, G; Sbodio, M; Calabrese, F; Berlingerio, M; Pinelli, F; Nair, R
2016-02-01
The deep penetration of mobile phones offers cities the ability to opportunistically monitor citizens' mobility and use data-driven insights to better plan and manage services. With large scale data on mobility patterns, operators can move away from the costly, mostly survey-based transportation planning processes to a more data-centric view that places the instrumented user at the center of development. In this framework, using mobile phone data to perform transit analysis and optimization represents a new frontier with significant societal impact, especially in developing countries. In this paper we present AllAboard, an intelligent tool that analyses cellphone data to help city authorities in visually exploring urban mobility and optimizing public transport. This is performed within a self-contained tool, as opposed to the current solutions which rely on a combination of several distinct tools for analysis, reporting, optimisation and planning. An interactive user interface allows transit operators to visually explore the travel demand in both space and time, correlate it with the transit network, and evaluate the quality of service that a transit network provides to the citizens at very fine grain. Operators can visually test scenarios for transit network improvements, and compare the expected impact on the travellers' experience. The system has been tested using real telecommunication data for the city of Abidjan, Ivory Coast, and evaluated from a data mining, optimisation and user perspective.
Out-of-Core Streamline Visualization on Large Unstructured Meshes
NASA Technical Reports Server (NTRS)
Ueng, Shyh-Kuang; Sikorski, K.; Ma, Kwan-Liu
1997-01-01
It's advantageous for computational scientists to have the capability to perform interactive visualization on their desktop workstations. For data on large unstructured meshes, this capability is not generally available. In particular, particle tracing on unstructured grids can result in a high percentage of non-contiguous memory accesses and therefore may perform very poorly with virtual memory paging schemes. The alternative of visualizing a lower resolution of the data degrades the original high-resolution calculations. This paper presents an out-of-core approach for interactive streamline construction on large unstructured tetrahedral meshes containing millions of elements. The out-of-core algorithm uses an octree to partition and restructure the raw data into subsets stored into disk files for fast data retrieval. A memory management policy tailored to the streamline calculations is used such that during the streamline construction only a very small amount of data are brought into the main memory on demand. By carefully scheduling computation and data fetching, the overhead of reading data from the disk is significantly reduced and good memory performance results. This out-of-core algorithm makes possible interactive streamline visualization of large unstructured-grid data sets on a single mid-range workstation with relatively low main-memory capacity: 5-20 megabytes. Our test results also show that this approach is much more efficient than relying on virtual memory and operating system's paging algorithms.
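The core memory-management policy can be sketched compactly: keep only the octree blocks a streamline currently traverses in memory, and evict in least-recently-used order under a fixed budget. The file layout below (one array per block) is an assumption for illustration, not the paper's format:

```python
from collections import OrderedDict
import numpy as np

class OctreeBlockCache:
    """Hold only the octree blocks a streamline currently needs in memory,
    evicting in least-recently-used order under a fixed block budget.
    Assumes a hypothetical layout of one .npy cell array per block."""

    def __init__(self, block_dir, max_blocks=32):
        self.block_dir = block_dir
        self.max_blocks = max_blocks
        self.cache = OrderedDict()                  # block_id -> cell array

    def get(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)        # mark as recently used
            return self.cache[block_id]
        # On-demand fetch from disk; this is the only I/O the tracer incurs.
        cells = np.load(f"{self.block_dir}/block_{block_id}.npy")
        self.cache[block_id] = cells
        if len(self.cache) > self.max_blocks:
            self.cache.popitem(last=False)          # evict the oldest block
        return cells
```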
Development of internal models and predictive abilities for visual tracking during childhood.
Ego, Caroline; Yüksel, Demet; Orban de Xivry, Jean-Jacques; Lefèvre, Philippe
2016-01-01
The prediction of the consequences of our own actions through internal models is an essential component of motor control. Previous studies showed improvement of anticipatory behaviors with age for grasping, drawing, and postural control. Since these actions require visual and proprioceptive feedback, these improvements might reflect both the development of internal models and the feedback control. In contrast, visual tracking of a temporarily invisible target gives specific markers of prediction and internal models for eye movements. Therefore, we recorded eye movements in 50 children (aged 5-19 yr) and in 10 adults, who were asked to pursue a visual target that is temporarily blanked. Results show that the youngest children (5-7 yr) have a general oculomotor behavior in this task, qualitatively similar to the one observed in adults. However, the overall performance of older subjects in terms of accuracy at target reappearance and variability in their behavior was much better than the youngest children. This late maturation of predictive mechanisms with age was reflected into the development of the accuracy of the internal models governing the synergy between the saccadic and pursuit systems with age. Altogether, we hypothesize that the maturation of the interaction between smooth pursuit and saccades that relies on internal models of the eye and target displacement is related to the continuous maturation of the cerebellum. Copyright © 2016 the American Physiological Society.
Bagust, Jeff; Docherty, Sharon; Haynes, Wayne; Telford, Richard; Isableu, Brice
2013-01-01
The Rod and Frame Test has been used to assess the degree to which subjects rely on the visual frame of reference to perceive vertical (visual field dependence-independence perceptual style). Early investigations found children exhibited a wide range of alignment errors, which reduced as they matured. These studies used a mechanical Rod and Frame system, and presented only mean values of grouped data. The current study also considered changes in individual performance. Changes in rod alignment accuracy in 419 school children were measured using a computer-based Rod and Frame test. Each child was tested at school Grade 2 and retested in Grades 4 and 6. The results confirmed that children displayed a wide range of alignment errors, which decreased with age but did not reach the expected adult values. Although most children showed a decrease in frame dependency over the 4 years of the study, almost 20% had increased alignment errors suggesting that they were becoming more frame-dependent. Plots of individual variation (SD) against mean error allowed the sample to be divided into 4 groups; the majority with small errors and SDs; a group with small SDs, but alignments clustering around the frame angle of 18°; a group showing large errors in the opposite direction to the frame tilt; and a small number with large SDs whose alignment appeared to be random. The errors in the last 3 groups could largely be explained by alignment of the rod to different aspects of the frame. At corresponding ages females exhibited larger alignment errors than males although this did not reach statistical significance. This study confirms that children rely more heavily on the visual frame of reference for processing spatial orientation cues. Most become less frame-dependent as they mature, but there are considerable individual differences. PMID:23724139
Troyer, Melissa; Curley, Lauren B.; Miller, Luke E.; Saygin, Ayse P.; Bergen, Benjamin K.
2014-01-01
Language comprehension requires rapid and flexible access to information stored in long-term memory, likely influenced by activation of rich world knowledge and by brain systems that support the processing of sensorimotor content. We hypothesized that while literal language about biological motion might rely on neurocognitive representations of biological motion specific to the details of the actions described, metaphors rely on more generic representations of motion. In a priming and self-paced reading paradigm, participants saw video clips or images of (a) an intact point-light walker or (b) a scrambled control and read sentences containing literal or metaphoric uses of biological motion verbs either closely or distantly related to the depicted action (walking). We predicted that reading times for literal and metaphorical sentences would show differential sensitivity to the match between the verb and the visual prime. In Experiment 1, we observed interactions between the prime type (walker or scrambled video) and the verb type (close or distant match) for both literal and metaphorical sentences, but with strikingly different patterns. We found no difference in the verb region of literal sentences for Close-Match verbs after walker or scrambled motion primes, but Distant-Match verbs were read more quickly following walker primes. For metaphorical sentences, the results were roughly reversed, with Distant-Match verbs being read more slowly following a walker compared to scrambled motion. In Experiment 2, we observed a similar pattern following still image primes, though critical interactions emerged later in the sentence. We interpret these findings as evidence for shared recruitment of cognitive and neural mechanisms for processing visual and verbal biological motion information. Metaphoric language using biological motion verbs may recruit neurocognitive mechanisms similar to those used in processing literal language but be represented in a less-specific way. PMID:25538604
Horizontal tuning for faces originates in high-level Fusiform Face Area.
Goffaux, Valerie; Duecker, Felix; Hausfeld, Lars; Schiltz, Christine; Goebel, Rainer
2016-01-29
Recent work indicates that the specialization of face visual perception relies on the privileged processing of horizontal angles of facial information. This suggests that stimulus properties assumed to be fully resolved in primary visual cortex (V1; e.g., orientation) in fact determine human vision until high-level stages of processing. To address this hypothesis, the present fMRI study explored the orientation sensitivity of V1 and high-level face-specialized ventral regions such as the Occipital Face Area (OFA) and Fusiform Face Area (FFA) to different angles of face information. Participants viewed face images filtered to retain information at horizontal, vertical or oblique angles. Filtered images were viewed upright, inverted and (phase-)scrambled. FFA responded most strongly to the horizontal range of upright face information; its activation pattern reliably separated horizontal from oblique ranges, but only when faces were upright. Moreover, activation patterns induced in the right FFA and the OFA by upright and inverted faces could only be separated based on horizontal information. This indicates that the specialized processing of upright face information in the OFA and FFA essentially relies on the encoding of horizontal facial cues. This pattern was not passively inherited from V1, which was found to respond less strongly to horizontal than other orientations, likely due to adaptive whitening. Moreover, we found that orientation decoding accuracy in V1 was impaired for stimuli containing no meaningful shape. By showing that primary coding in V1 is influenced by high-order stimulus structure and that high-level processing is tuned to selective ranges of primary information, the present work suggests that primary and high-level levels of the visual system interact in order to modulate the processing of certain ranges of primary information depending on their relevance with respect to the stimulus and task at hand. Copyright © 2015 Elsevier Ltd. All rights reserved.
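The stimulus manipulation, retaining only a chosen range of orientations, is conventionally done with a wedge filter in the Fourier domain. A minimal sketch, assuming a simple hard-edged wedge (the study's exact filter shape and bandwidth may differ):

```python
import numpy as np

def orientation_filter(img, center_deg, bandwidth_deg=20.0):
    """Retain a wedge of orientations in the image's Fourier spectrum.

    center_deg=0 keeps one orientation band of information; the wedge
    shape and bandwidth here are assumptions, not the study's filter.
    Note: energy at Fourier angle theta corresponds to image structure
    oriented 90 degrees away; adjust the convention as needed.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    y, x = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    theta = np.degrees(np.arctan2(y, x)) % 180.0     # angle of each frequency
    # Angular distance to the wedge center, folded into [0, 90].
    d = np.abs((theta - center_deg + 90.0) % 180.0 - 90.0)
    mask = d <= bandwidth_deg / 2.0
    mask[(y == 0) & (x == 0)] = True                 # always keep the DC term
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```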
Visualizing the quality of partially accruing data for use in decision making
Eaton, Julia; Painter, Ian; Olson, Don; Lober, William B
2015-01-01
Secondary use of clinical health data for near real-time public health surveillance presents challenges surrounding its utility due to data quality issues. Data used for real-time surveillance must be timely, accurate and complete if it is to be useful; if incomplete data are used for surveillance, understanding the structure of the incompleteness is necessary. Such data are commonly aggregated due to privacy concerns. The Distribute project was a near real-time influenza-like-illness (ILI) surveillance system that relied on aggregated secondary clinical health data. The goal of this work is to disseminate the data quality tools developed to gain insight into the data quality problems associated with these data. These tools apply in general to any system where aggregate data are accrued over time and were created through the end-user-as-developer paradigm. Each tool was developed during the exploratory analysis to gain insight into structural aspects of data quality. Our key finding is that data quality of partially accruing data must be studied in the context of accrual lag—the difference between the time an event occurs and the time data for that event are received, i.e. the time at which data become available to the surveillance system. Our visualization methods therefore revolve around visualizing dimensions of data quality affected by accrual lag, in particular the tradeoff between timeliness and completion, and the effects of accrual lag on accuracy. Accounting for accrual lag in partially accruing data is necessary to avoid misleading or biased conclusions about trends in indicator values and data quality. PMID:27252794
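Accrual lag and the timeliness/completeness tradeoff it induces can be made concrete with a few lines of pandas; the feed below is hypothetical:

```python
import pandas as pd

# Hypothetical aggregate feed: one row per report, with the date the visit
# occurred and the date the report reached the surveillance system.
df = pd.DataFrame({
    "event_date":   pd.to_datetime(["2015-03-01", "2015-03-01",
                                    "2015-03-01", "2015-03-02"]),
    "receipt_date": pd.to_datetime(["2015-03-02", "2015-03-03",
                                    "2015-03-06", "2015-03-03"]),
})

# Accrual lag: time from event to availability in the system.
df["lag_days"] = (df["receipt_date"] - df["event_date"]).dt.days

# Completion curve for one event date: the fraction of its reports that
# have arrived k days later (the timeliness/completeness tradeoff).
one_day = df[df["event_date"] == "2015-03-01"]
completion = one_day["lag_days"].value_counts().sort_index().cumsum() / len(one_day)
print(completion)   # lag 1 -> 0.33, lag 2 -> 0.67, lag 5 -> 1.0
```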
Using Flow Charts to Visualize the Decision-Making Process in Space Weather Forecasting
NASA Astrophysics Data System (ADS)
Aung, M. T. Y.; Myat, T.; Zheng, Y.; Mays, M. L.; Ngwira, C.; Damas, M. C.
2016-12-01
Our society today relies heavily on technological systems such as satellites, navigation systems, power grids and aviation. These systems are very sensitive to space weather disturbances. When Earth-directed space weather driven by the Sun arrives at the Earth, it causes changes to the Earth's radiation environment and the magnetosphere. Strong disturbances in the Earth's magnetosphere are responsible for geomagnetic storms that can last from hours to days, depending on the strength of the storm. Geomagnetic storms can severely impact critical infrastructure on Earth, such as the electric power grid, while solar energetic particles can endanger life in outer space. How can we lessen these adverse effects? They can be lessened through early warning signals sent by space weather forecasters before a CME or high-speed stream arrives. A space weather forecaster's duty is to send predicted notifications to high-tech industries and NASA missions so that they can take extra measures for protection. NASA space weather forecasters make prediction decisions by following certain steps and processes from the time an event occurs at the Sun all the way to the impact locations. However, until now there has never been a tool that helps these forecasters visualize the decision process. A flow chart was created to help forecasters visualize the decision process. This flow chart provides basic knowledge of space weather and can be used to train future space weather forecasters. It also helps to cut down the training period and increase consistency in forecasting. The flow chart is also a great reference for people who are already familiar with space weather.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sola, M.; Haakon Nordby, L.; Dailey, D.V.
High resolution 3-D visualization of horizon interpretation and seismic attributes from large 3-D seismic surveys in deepwater Nigeria has greatly enhanced the exploration team's ability to quickly recognize prospective segments of subregional and prospect-specific scale areas. Integrated workstation-generated structure, isopach and extracted horizon-consistent, interval and windowed attributes are particularly useful in illustrating the complex structural and stratigraphic prospectivity of deepwater Nigeria. Large 3-D seismic volumes acquired over 750 square kilometers can be manipulated within the visualization system with attribute tracking capability that allows for real-time data interrogation and interpretation. As in classical seismic stratigraphic studies, pattern recognition is fundamental to effective depositional facies interpretation and reservoir model construction. The 3-D perspective enhances the data interpretation through clear representation of relative scale, spatial distribution and magnitude of attributes. In deepwater Nigeria, many prospective traps rely on an interplay between syndepositional structure and slope turbidite depositional systems. Reservoir systems in many prospects appear to be dominated by unconfined to moderately focused slope feeder channel facies. These units have spatially complex facies architecture, with feeder channel axes separated by extensive interchannel areas. Structural culminations generally have a history of initial compressional folding with late-stage extensional collapse and accommodation faulting. The resulting complex trap configurations often have stacked reservoirs over intervals as thick as 1500 meters. Exploration, appraisal and development scenarios in these settings can be optimized by taking full advantage of integrating high resolution 3-D visualization and seismic workstation interpretation.
Nitzsche, Björn; Lobsien, Donald; Seeger, Johannes; Schneider, Holm; Boltze, Johannes
2014-01-01
Cerebrovascular diseases are significant causes of death and disability in humans. Improvements in diagnostic and therapeutic approaches strongly rely on adequate gyrencephalic large animal models, which are in demand for translational research. Ovine stroke models may represent a promising approach but are currently limited by insufficient knowledge regarding the venous system of the cerebral angioarchitecture. The present study was intended to provide a comprehensive anatomical analysis of the intracranial venous system in sheep as a reliable basis for the interpretation of experimental results in such ovine models. We used corrosion casts as well as contrast-enhanced magnetic resonance venography to scrutinize blood drainage from the brain. This combined approach yielded detailed and, to some extent, novel findings. In particular, we provide evidence for chordae Willisii and lateral venous lacunae, and report on connections between the dorsal and ventral sinuses in this species. For the first time, we also describe venous confluences in the deep cerebral venous system and an ‘anterior condylar confluent’ as seen in humans. This report provides a detailed reference for the interpretation of venous diagnostic imaging findings in sheep, including an assessment of structure detectability by in vivo (imaging) versus ex vivo (corrosion cast) visualization methods. Moreover, it features a comprehensive interspecies comparison of the venous cerebral angioarchitecture in man, rodents, canines and sheep as a relevant large animal model species, and describes possible implications for translational cerebrovascular research. PMID:24736654
Comprehension of Spacecraft Telemetry Using Hierarchical Specifications of Behavior
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Joshi, Rajeev
2014-01-01
A key challenge in operating remote spacecraft is that ground operators must rely on the limited visibility available through spacecraft telemetry in order to assess spacecraft health and operational status. We describe a tool for processing spacecraft telemetry that allows ground operators to impose structure on received telemetry in order to achieve a better comprehension of system state. A key element of our approach is the design of a domain-specific language that allows operators to express models of expected system behavior using partial specifications. The language allows behavior specifications with data fields, similar to other recent runtime verification systems. What is notable about our approach is the ability to develop hierarchical specifications of behavior. The language is implemented as an internal DSL in the Scala programming language that synthesizes rules from patterns of specification behavior. The rules are automatically applied to received telemetry and the inferred behaviors are available to ground operators using a visualization interface that makes it easier to understand and track spacecraft state. We describe initial results from applying our tool to telemetry received from the Curiosity rover currently roving the surface of Mars, where the visualizations are being used to trend subsystem behaviors, in order to identify potential problems before they happen. However, the technology is completely general and can be applied to any system that generates telemetry such as event logs.
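The flavor of such behavior rules can be conveyed with a generic sketch (the actual tool is an internal Scala DSL; the event names and fields below are hypothetical, not the mission's telemetry schema):

```python
# Illustrative stand-in for a declarative telemetry rule: every dispatched
# command must eventually complete, and a failure arriving before the
# completion is flagged against the original dispatch event.
def check_dispatch_complete(telemetry):
    pending = {}                                  # cmd_id -> dispatch event
    violations = []
    for ev in telemetry:                          # ev: {"name": ..., "cmd_id": ...}
        if ev["name"] == "CMD_DISPATCH":
            pending[ev["cmd_id"]] = ev
        elif ev["name"] == "CMD_COMPLETE":
            pending.pop(ev["cmd_id"], None)       # obligation discharged
        elif ev["name"] == "CMD_FAIL" and ev["cmd_id"] in pending:
            violations.append(pending.pop(ev["cmd_id"]))
    return violations + list(pending.values())    # failed or never completed
```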
Accurate Initial State Estimation in a Monocular Visual–Inertial SLAM System
Chen, Jing; Zhou, Zixiang; Leng, Zhen; Fan, Lei
2018-01-01
The fusion of monocular visual and inertial cues has become popular in robotics, unmanned vehicles and augmented reality fields. Recent results have shown that optimization-based fusion strategies outperform filtering strategies. Robust state estimation is the core capability for optimization-based visual–inertial Simultaneous Localization and Mapping (SLAM) systems. As a result of the nonlinearity of visual–inertial systems, the performance heavily relies on the accuracy of initial values (visual scale, gravity, velocity and Inertial Measurement Unit (IMU) biases). Therefore, this paper aims to propose a more accurate initial state estimation method. On the basis of the known gravity magnitude, we propose an approach to refine the estimated gravity vector by optimizing the two-dimensional (2D) error state on its tangent space, then estimate the accelerometer bias separately, which is difficult to distinguish under small rotation. Additionally, we propose an automatic termination criterion to determine when the initialization is successful. Once the initial state estimation converges, the initial estimated values are used to launch the nonlinear tightly coupled visual–inertial SLAM system. We have tested our approaches with the public EuRoC dataset. Experimental results show that the proposed methods can achieve good initial state estimation, the gravity refinement approach is able to efficiently speed up the convergence process of the estimated gravity vector, and the termination criterion performs well. PMID:29419751
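Because the gravity magnitude is known, the gravity vector carries only two degrees of freedom, which is what makes a 2D tangent-space error state sufficient. A sketch of that parameterization, with the basis construction chosen for illustration rather than taken from the paper:

```python
import numpy as np

G = 9.81  # known gravity magnitude (m/s^2)

def refine_gravity(g0, delta):
    """Apply a 2D error-state update to a gravity estimate.

    The optimizer perturbs the estimate within the tangent plane of the
    sphere |g| = G and re-normalizes, so it only ever adjusts two
    degrees of freedom. g0: current 3-vector estimate; delta: 2D error state.
    """
    g_hat = g0 / np.linalg.norm(g0)
    helper = np.array([1.0, 0.0, 0.0])
    if abs(g_hat[0]) > 0.9:                  # avoid a near-parallel helper axis
        helper = np.array([0.0, 1.0, 0.0])
    b1 = np.cross(g_hat, helper)
    b1 /= np.linalg.norm(b1)
    b2 = np.cross(g_hat, b1)                 # (b1, b2) spans the tangent plane
    g = g0 + delta[0] * b1 + delta[1] * b2   # perturb within the plane
    return G * g / np.linalg.norm(g)         # project back onto |g| = G
```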
New frontiers for intelligent content-based retrieval
NASA Astrophysics Data System (ADS)
Benitez, Ana B.; Smith, John R.
2001-01-01
In this paper, we examine emerging frontiers in the evolution of content-based retrieval systems that rely on an intelligent infrastructure. Here, we refer to intelligence as the capabilities of the systems to build and maintain situational or world models, utilize dynamic knowledge representation, exploit context, and leverage advanced reasoning and learning capabilities. We argue that these elements are essential to producing effective systems for retrieving audio-visual content at semantic levels matching those of human perception and cognition. We review relevant research on the understanding of human intelligence and the construction of intelligent systems in the fields of cognitive psychology, artificial intelligence, semiotics, and computer vision. We also discuss how some of the principal ideas from these fields lead to new opportunities and capabilities for content-based retrieval systems. Finally, we describe some of our efforts in these directions. In particular, we present MediaNet, a multimedia knowledge representation framework, and some MPEG-7 description tools that facilitate and enable intelligent content-based retrieval.
Low-cost telepresence for collaborative virtual environments.
Rhee, Seon-Min; Ziegler, Remo; Park, Jiyoung; Naef, Martin; Gross, Markus; Kim, Myoung-Hee
2007-01-01
We present a novel low-cost method for visual communication and telepresence in a CAVE -like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. The system was designed to extend existing projection systems with acquisition capabilities requiring minimal hardware modifications and cost. The system uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background. The system consists of two color cameras and two additional b/w cameras used for segmentation in the near-IR spectrum. There is no need for special optics as the mask and color image are merged using image-warping based on a depth estimation. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.
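The near-IR segmentation step admits a very compact illustration: under IR illumination the user appears bright in the IR band regardless of the projected imagery, so a threshold yields the avatar mask. The normalization and threshold value below are assumptions:

```python
import numpy as np

def ir_foreground_mask(ir_img, threshold=0.5):
    """Threshold a near-IR camera frame into a foreground mask.

    Under IR illumination the user is bright in the IR band no matter
    what the projectors display, which is what lets acquisition and
    projection run concurrently. Normalization and threshold are assumptions."""
    ir = ir_img.astype(np.float64)
    ir = (ir - ir.min()) / (np.ptp(ir) + 1e-9)   # rescale to [0, 1]
    return ir > threshold                         # True = user pixels

def masked_color(color_img, mask):
    """Zero out background pixels before warping/streaming the avatar."""
    out = color_img.copy()
    out[~mask] = 0
    return out
```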
Tele-healthcare for diabetes management: A low cost automatic approach.
Benaissa, M; Malik, B; Kanakis, A; Wright, N P
2012-01-01
In this paper, a telemedicine system for better management of diabetic patients is presented. The system is an end-to-end solution which relies on the integration of a front end (patient unit) and a back-end web server. A key feature of the system is its very low-cost, automated approach. The front end is capable of reading glucose measurements from any glucose meter and sending them automatically via existing networks to the back-end server. The back end is designed and developed using an n-tier web client architecture based on the model-view-controller design pattern using open source technology, a cost-effective solution. The back end helps the health-care provider with data analysis, data visualization and decision support, and allows them to send feedback and therapeutic advice to patients from anywhere using a browser-enabled device. This system will be evaluated during trials which will be conducted in collaboration with a local hospital in a phased manner.
Visual rehabilitation: visual scanning, multisensory stimulation and vision restoration trainings
Dundon, Neil M.; Bertini, Caterina; Làdavas, Elisabetta; Sabel, Bernhard A.; Gall, Carolin
2015-01-01
Neuropsychological training methods of visual rehabilitation for homonymous vision loss caused by postchiasmatic damage fall into two fundamental paradigms: “compensation” and “restoration”. Existing methods can be classified into three groups: Visual Scanning Training (VST), Audio-Visual Scanning Training (AViST) and Vision Restoration Training (VRT). VST and AViST aim at compensating vision loss by training eye scanning movements, whereas VRT aims at improving lost vision by activating residual visual functions by training light detection and discrimination of visual stimuli. This review discusses the rationale underlying these paradigms and summarizes the available evidence with respect to treatment efficacy. The issues raised in our review should help guide clinical care and stimulate new ideas for future research uncovering the underlying neural correlates of the different treatment paradigms. We propose that both local “within-system” interactions (i.e., relying on plasticity within peri-lesional spared tissue) and changes in more global “between-system” networks (i.e., recruiting alternative visual pathways) contribute to both vision restoration and compensatory rehabilitation, which ultimately have implications for the rehabilitation of cognitive functions. PMID:26283935
Assembly of the cnidarian camera-type eye from vertebrate-like components
Kozmik, Zbynek; Ruzickova, Jana; Jonasova, Kristyna; Matsumoto, Yoshifumi; Vopalensky, Pavel; Kozmikova, Iryna; Strnad, Hynek; Kawamura, Shoji; Piatigorsky, Joram; Paces, Vaclav; Vlcek, Cestmir
2008-01-01
Animal eyes are morphologically diverse. Their assembly, however, always relies on the same basic principle, i.e., photoreceptors located in the vicinity of dark shielding pigment. Cnidaria as the likely sister group to the Bilateria are the earliest branching phylum with a well developed visual system. Here, we show that camera-type eyes of the cubozoan jellyfish, Tripedalia cystophora, use genetic building blocks typical of vertebrate eyes, namely, a ciliary phototransduction cascade and melanogenic pathway. Our findings indicative of parallelism provide an insight into eye evolution. Combined, the available data favor the possibility that vertebrate and cubozoan eyes arose by independent recruitment of orthologous genes during evolution. PMID:18577593
The contributions of visual and central attention to visual working memory.
Souza, Alessandra S; Oberauer, Klaus
2017-10-01
We investigated the role of two kinds of attention, visual and central, in the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention, visual or central, was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.
Flipping a Switch "Down" When Not Aligned with the Gravitational Vertical.
Bock, Otmar; Bury, Nils
To flip a switch "down," our motor system can normally rely on concordant visual, gravitational, and egocentric cues about the vertical. However, divers must sometimes perform this task while visual cues are limited and gravitational cues are misaligned with egocentric cues. Astronauts must also flip switches "down" in absence of gravitational cues. Our study evaluates this ability using a laboratory simulation. The subjects were 24 healthy volunteers who were blindfolded, tilted into different angles of roll, and asked to silence an alarm by flipping a switch "down." The switch was constructed such that it could be flipped in any direction in the subjects' frontal plane. Two subjects deflected the switch in accordance with the direction of gravity, irrespective of their body orientation. Twenty subjects deflected it in accordance with their body orientation, irrespective of the direction of gravity. The remaining two persons could not be classified unequivocally. Notably, some egocentric responders deflected the rod consistently toward their feet, but others deflected it consistently toward other parts of their body. Since our findings disagree with perceptual studies where gravitational rather than egocentric cues predominated in the absence of vision, we posit that perception and action may access distinct internal representations of the vertical. On the practical side, our findings indicate that designers of spaceflight and underwater equipment should not rely on divers' intuitive knowledge on how to flip a switch "down." Bock O, Bury N. Flipping a switch "down" when not aligned with the gravitational vertical. Aerosp Med Hum Perform. 2016; 87(10):838-843.
Taylor, Kirsten I.; Devereux, Barry J.; Acres, Kadia; Randall, Billi; Tyler, Lorraine K.
2013-01-01
Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model the performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system. PMID:22137770
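The feature statistics at issue, distinctiveness and correlational strength, can be computed directly from a concept-by-feature matrix. A toy sketch with placeholder norms (the matrix values are illustrative, not the study's property-norm data):

```python
import numpy as np

# Hypothetical concept-by-feature matrix: F[i, j] = 1 if concept i has
# feature j (a stand-in for property-norm data).
F = np.array([
    [1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0],
    [0, 1, 1, 0, 1],
    [1, 1, 1, 0, 0],
])

# Feature distinctiveness: the fewer concepts share a feature, the more
# distinctive it is (1.0 = unique to a single concept).
distinctiveness = 1.0 / F.sum(axis=0)

# Each concept's mean distinctiveness: does its meaning rest mostly on
# distinctive features (high values) or on widely shared ones (low values)?
mean_distinctiveness = (F * distinctiveness).sum(axis=1) / F.sum(axis=1)

# Feature co-occurrence, the raw ingredient of correlational strength:
# the proportion of concepts in which each pair of features appears together.
cooccurrence = (F.T @ F) / F.shape[0]

print(mean_distinctiveness)   # one value per concept
```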
Feature-based and spatial attentional selection in visual working memory.
Heuer, Anna; Schubö, Anna
2016-05-01
The contents of visual working memory (VWM) can be modulated by spatial cues presented during the maintenance interval ("retrocues"). Here, we examined whether attentional selection of representations in VWM can also be based on features. In addition, we investigated whether the mechanisms of feature-based and spatial attention in VWM differ with respect to parallel access to noncontiguous locations. In two experiments, we tested the efficacy of valid retrocues relying on different kinds of information. Specifically, participants were presented with a typical spatial retrocue pointing to two locations, a symbolic spatial retrocue (numbers mapping onto two locations), and two feature-based retrocues: a color retrocue (a blob of the same color as two of the items) and a shape retrocue (an outline of the shape of two of the items). The two cued items were presented at either contiguous or noncontiguous locations. Overall retrocueing benefits, as compared to a neutral condition, were observed for all retrocue types. Whereas feature-based retrocues yielded benefits for cued items presented at both contiguous and noncontiguous locations, spatial retrocues were only effective when the cued items had been presented at contiguous locations. These findings demonstrate that attentional selection and updating in VWM can operate on different kinds of information, allowing for a flexible and efficient use of this limited system. The observation that the representations of items presented at noncontiguous locations could only be reliably selected with feature-based retrocues suggests that feature-based and spatial attentional selection in VWM rely on different mechanisms, as has been shown for attentional orienting in the external world.
Acquired Codes of Meaning in Data Visualization and Infographics: Beyond Perceptual Primitives.
Byrne, Lydia; Angus, Daniel; Wiles, Janet
2016-01-01
While information visualization frameworks and heuristics have traditionally been reluctant to include acquired codes of meaning, designers are making use of them in a wide variety of ways. Acquired codes leverage a user's experience to understand the meaning of a visualization. They range from figurative visualizations which rely on the reader's recognition of shapes, to conventional arrangements of graphic elements which represent particular subjects. In this study, we used content analysis to codify acquired meaning in visualization. We applied the content analysis to a set of infographics and data visualizations which are exemplars of innovative and effective design. 88% of the infographics and 71% of data visualizations in the sample contain at least one use of figurative visualization. Conventions on the arrangement of graphics are also widespread in the sample. In particular, a comparison of representations of time and other quantitative data showed that conventions can be specific to a subject. These results suggest that there is a need for information visualization research to expand its scope beyond perceptual channels, to include social and culturally constructed meaning. Our paper demonstrates a viable method for identifying figurative techniques and graphic conventions and integrating them into heuristics for visualization design.
Whitwell, Robert L; Goodale, Melvyn A; Merritt, Kate E; Enns, James T
2018-01-01
The two visual systems hypothesis proposes that human vision is supported by an occipito-temporal network for the conscious visual perception of the world and a fronto-parietal network for visually-guided, object-directed actions. Two specific claims about the fronto-parietal network's role in sensorimotor control have generated much data and controversy: (1) the network relies primarily on the absolute metrics of target objects, which it rapidly transforms into effector-specific frames of reference to guide the fingers, hands, and limbs, and (2) the network is largely unaffected by scene-based information extracted by the occipito-temporal network for those same targets. These two claims lead to the counter-intuitive prediction that in-flight anticipatory configuration of the fingers during object-directed grasping will resist the influence of pictorial illusions. The research confirming this prediction has been criticized for confounding the difference between grasping and explicit estimates of object size with differences in attention, sensory feedback, obstacle avoidance, metric sensitivity, and priming. Here, we address and eliminate each of these confounds. We asked participants to reach out and pick up 3D target bars resting on a picture of the Sander Parallelogram illusion and to make explicit estimates of the length of those bars. Participants performed their grasps without visual feedback, and were permitted to grasp the targets after making their size-estimates to afford them an opportunity to reduce illusory error with haptic feedback. The results show unequivocally that the effect of the illusion is stronger on perceptual judgments than on grasping. Our findings from the normally-sighted population provide strong support for the proposal that human vision comprises functionally and anatomically dissociable systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
Memory-guided reaching in a patient with visual hemiagnosia.
Cornelsen, Sonja; Rennig, Johannes; Himmelbach, Marc
2016-06-01
The two-visual-systems hypothesis (TVSH) postulates that memory-guided movements rely on intact functions of the ventral stream. Its particular importance for memory-guided actions was initially inferred from behavioral dissociations in the well-known patient DF. Despite rather accurate reaching and grasping movements to visible targets, she demonstrated grossly impaired memory-guided grasping as well as impaired memory-guided reaching. These dissociations were later complemented by apparently reversed dissociations in patients with dorsal damage and optic ataxia. However, grasping studies in DF and optic ataxia patients differed with respect to the retinotopic position of target objects, questioning the interpretation of the respective findings as a double dissociation. In contrast, the findings for reaching errors in both types of patients came from similar peripheral target presentations. However, new data on brain structural changes and visuomotor deficits in DF also questioned the validity of a double dissociation in reaching. A severe visuospatial short-term memory deficit in DF further questioned the specificity of her memory-guided reaching deficit. Therefore, we compared movement accuracy in visually-guided and memory-guided reaching in a new patient (HWS) who suffered confined unilateral damage to the ventral visual system due to stroke. Our results indeed support previous descriptions of inaccurate memory-guided movements in DF. Furthermore, our data suggest that the recently discovered optic-ataxia-like misreaching in DF is most likely caused by her parieto-occipital and not by her ventral stream damage. Finally, multiple visuospatial memory measurements in HWS suggest that inaccuracies in memory-guided reaching tasks in patients with ventral damage cannot be explained by visuospatial short-term memory or perceptual deficits, but by a specific deficit in visuomotor processing. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mann, David L; Abernethy, Bruce; Farrow, Damian
2010-07-01
Coupled interceptive actions are understood to be the result of neural processing, and of visual information, distinct from that used for uncoupled perceptual responses. To examine the visual information used for action and perception, skilled cricket batters anticipated the direction of balls bowled toward them using either a coupled movement (an interceptive action that preserved the natural coupling between perception and action) or an uncoupled (verbal) response, in each of four visual blur conditions (plano, +1.00, +2.00, +3.00 D). Coupled responses were better than uncoupled ones, and blurring of vision had different effects in the coupled and uncoupled response conditions. Low levels of visual blur did not affect coupled anticipation, a finding consistent with the comparatively coarser visual information on which online interceptive actions are proposed to rely. In contrast, some evidence suggested that low levels of blur may enhance the uncoupled verbal perception of movement.
Righi, Giulia; Tenenbaum, Elena J; McCormick, Carolyn; Blossom, Megan; Amso, Dima; Sheinkopf, Stephen J
2018-04-01
Autism Spectrum Disorder (ASD) is often accompanied by deficits in speech and language processing. Speech processing relies heavily on the integration of auditory and visual information, and it has been suggested that the ability to detect correspondence between auditory and visual signals helps to lay the foundation for successful language development. The goal of the present study was to examine whether young children with ASD show reduced sensitivity to temporal asynchronies in a speech processing task when compared to typically developing controls, and to examine how this sensitivity might relate to language proficiency. Using automated eye tracking methods, we found that children with ASD failed to demonstrate sensitivity to asynchronies of 0.3 s, 0.6 s, or 1.0 s between a video of a woman speaking and the corresponding audio track. In contrast, typically developing children who were language-matched to the ASD group were sensitive to both 0.6-s and 1.0-s asynchronies. We also demonstrated that individual differences in sensitivity to audiovisual asynchronies and individual differences in orientation to relevant facial features were both correlated with scores on a standardized measure of language abilities. Results are discussed in the context of attention to visual language and audio-visual processing as potential precursors to language impairment in ASD. Autism Res 2018, 11: 645-653. Lay summary: the study explored whether children with ASD process audio-visual synchrony in ways comparable to their typically developing peers, and the relationship between preference for synchrony and language ability. Results showed that there are differences in attention to audiovisual synchrony between typically developing children and children with ASD, and that preference for synchrony was related to the language abilities of children across groups. © 2018 International Society for Autism Research, Wiley Periodicals, Inc.
Multispectral tissue characterization for intestinal anastomosis optimization.
Cha, Jaepyeong; Shademan, Azad; Le, Hanh N D; Decker, Ryan; Kim, Peter C W; Kang, Jin U; Krieger, Axel
2015-10-01
Intestinal anastomosis is a surgical procedure that restores bowel continuity after surgical resection to treat intestinal malignancy, inflammation, or obstruction. Despite the routine nature of intestinal anastomosis procedures, the rate of complications is high. Standard visual inspection cannot distinguish the tissue subsurface and small changes in spectral characteristics of the tissue, so existing tissue anastomosis techniques that rely on human vision to guide suturing could lead to problems such as bleeding and leakage from suturing sites. We present a proof-of-concept study using a portable multispectral imaging (MSI) platform for tissue characterization and preoperative surgical planning in intestinal anastomosis. The platform is composed of a fiber ring light-guided MSI system coupled with polarizers and image analysis software. The system is tested on ex vivo porcine intestine tissue, and we demonstrate the feasibility of identifying optimal regions for suture placement.
Multispectral tissue characterization for intestinal anastomosis optimization
Cha, Jaepyeong; Shademan, Azad; Le, Hanh N. D.; Decker, Ryan; Kim, Peter C. W.; Kang, Jin U.; Krieger, Axel
2015-01-01
Intestinal anastomosis is a surgical procedure that restores bowel continuity after surgical resection to treat intestinal malignancy, inflammation, or obstruction. Despite the routine nature of intestinal anastomosis procedures, the rate of complications is high. Standard visual inspection cannot distinguish the tissue subsurface and small changes in spectral characteristics of the tissue, so existing tissue anastomosis techniques that rely on human vision to guide suturing could lead to problems such as bleeding and leakage from suturing sites. We present a proof-of-concept study using a portable multispectral imaging (MSI) platform for tissue characterization and preoperative surgical planning in intestinal anastomosis. The platform is composed of a fiber ring light-guided MSI system coupled with polarizers and image analysis software. The system is tested on ex vivo porcine intestine tissue, and we demonstrate the feasibility of identifying optimal regions for suture placement. PMID:26440616
Sex-Linked Characteristics of Brain Functioning: Why Jimmy Reads Differently.
ERIC Educational Resources Information Center
Helfeldt, John P.
1983-01-01
Presents evidence to support the premise that boys reflect a predilection to process information visually, while girls reflect a preference to process information auditorily. Cautions against relying on isolated components such as hemispheric dominance or laterality during the identification and correction of reading problems. (FL)
2005-11-01
visible and fluorescent inspection techniques, while radiography relies on the individual's ability to detect subtle differences in contrast either...binocular measurement of visual acuity may better predict a person's functional capability in the workplace. However, measurement of monocular acuities
Investigating Students' Similarity Judgments in Organic Chemistry
ERIC Educational Resources Information Center
Graulich, N.; Bhattacharyya, G.
2017-01-01
Organic chemistry is possibly the most visual science of all chemistry disciplines. The process of scientific inquiry in organic chemistry relies on external representations, such as Lewis structures, mechanisms, and electron arrows. Information about chemical properties or driving forces of mechanistic steps is not available through direct…
Theta Phase Synchronization Is the Glue that Binds Human Associative Memory.
Clouter, Andrew; Shapiro, Kimron L; Hanslmayr, Simon
2017-10-23
Episodic memories are information-rich, often multisensory events that rely on binding different elements [1]. The elements that will constitute a memory episode are processed in specialized but distinct brain modules. The binding of these elements is most likely mediated by fast-acting long-term potentiation (LTP), which relies on the precise timing of neural activity [2]. Theta oscillations in the hippocampus orchestrate such timing as demonstrated by animal studies in vitro [3, 4] and in vivo [5, 6], suggesting a causal role of theta activity for the formation of complex memory episodes, but direct evidence from humans is missing. Here, we show that human episodic memory formation depends on phase synchrony between different sensory cortices at the theta frequency. By modulating the luminance of visual stimuli and the amplitude of auditory stimuli, we directly manipulated the degree of phase synchrony between visual and auditory cortices. Memory for sound-movie associations was significantly better when the stimuli were presented in phase compared to out of phase. This effect was specific to theta (4 Hz) and did not occur in slower (1.7 Hz) or faster (10.5 Hz) frequencies. These findings provide the first direct evidence that episodic memory formation in humans relies on a theta-specific synchronization mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.
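A minimal sketch of the stimulus manipulation described above (our illustration; the carrier choice, duration, and sampling rate are assumptions): both sensory streams are amplitude-modulated at the theta frequency, and the phase offset between the two envelopes is the experimental variable.

```python
import numpy as np

fs = 1000.0                        # sample rate (Hz), assumed
f_mod = 4.0                        # theta modulation frequency (Hz)
t = np.arange(0.0, 3.0, 1.0 / fs)

def envelope(phase_deg):
    """0..1 sinusoidal envelope at the theta frequency with a given phase."""
    return 0.5 * (1.0 + np.sin(2.0 * np.pi * f_mod * t + np.deg2rad(phase_deg)))

vis_env = envelope(0.0)            # drives movie luminance
aud_env = envelope(180.0)          # out-of-phase audio; use 0.0 for in-phase

carrier = np.random.randn(t.size)  # stand-in for the soundtrack waveform
sound = carrier * aud_env          # amplitude-modulated sound
```

Changing f_mod to 1.7 or 10.5 reproduces the control frequencies; the memory benefit reported above was specific to the 4 Hz, in-phase condition.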
Mackrous, I; Simoneau, M
2011-11-10
Following body rotation, optimal updating of the position of a memorized target is attained when retinal error is perceived and a corrective saccade is performed. Thus, these processes may enable calibration of the vestibular system by facilitating the sharing of information between reference frames. Here, we assessed whether having sensory information regarding body rotation in the target reference frame could enhance an individual's learning rate in predicting the position of an earth-fixed target. During rotation, participants had to respond when they felt their body midline had crossed the position of the target, and received knowledge of results. During practice blocks, for two groups, visual cues were displayed in the same reference frame as the target, whereas a third group relied on vestibular information alone (vestibular-only group) to predict the location of the target. Participants unaware of the role of the visual cues (visual cues group) learned to predict the location of the target, and spatial error decreased from 16.2 to 2.0°, reflecting a learning rate of 34.08 trials (determined by fitting a falling exponential model). In contrast, the group aware of the role of the visual cues (explicit visual cues group) showed a faster learning rate (2.66 trials) but a similar final spatial error of 2.9°. The vestibular-only group achieved similar accuracy (final spatial error of 2.3°), but its learning rate was much slower (43.29 trials). Transfer to the post-test (no visual cues and no knowledge of results) increased the spatial error of the explicit visual cues group (9.5°) but did not change the performance of the vestibular group (1.2°). Overall, these results imply that cognition assists the brain in processing sensory information within the target reference frame. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.
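The learning-rate estimates quoted above come from fitting a falling exponential to trial-by-trial spatial error. A minimal sketch of such a fit (our illustration; the synthetic data below merely mimic the reported 16.2° to 2.0° decline):

```python
import numpy as np
from scipy.optimize import curve_fit

def falling_exp(trial, a, tau, c):
    """Error decays from about a + c toward the asymptote c with time constant tau."""
    return a * np.exp(-trial / tau) + c

trials = np.arange(1, 61)
errors = 14.0 * np.exp(-trials / 34.0) + 2.0 + np.random.normal(0, 0.5, trials.size)

(a, tau, c), _ = curve_fit(falling_exp, trials, errors, p0=(15.0, 20.0, 2.0))
print(f"learning rate (time constant): {tau:.1f} trials, asymptote: {c:.1f} deg")
```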
Owsley, Cynthia; McGwin, Gerald
2010-01-01
Driving is the primary means of personal travel in many countries and relies heavily on vision for its successful execution. Research over the past few decades has addressed the role of vision in driver safety (motor vehicle collision involvement) and in driver performance (both on-road and using interactive simulators in the laboratory). Here we critically review what is currently known about the role of various aspects of visual function in driving. We also discuss translational research issues on vision screening for licensure and re-licensure and on the rehabilitation of visually impaired persons who want to drive. PMID:20580907
Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E
2010-05-01
The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
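For reference, the "optimal cue integration" these studies test is standardly formalized as reliability-weighted averaging of the single-cue heading estimates (the textbook maximum-likelihood form; our addition, not a formula quoted from this review):

$$\hat{h} = w_{vis}\,\hat{h}_{vis} + w_{vest}\,\hat{h}_{vest}, \qquad w_{vis} = \frac{1/\sigma_{vis}^{2}}{1/\sigma_{vis}^{2} + 1/\sigma_{vest}^{2}}, \qquad \sigma_{comb}^{2} = \frac{\sigma_{vis}^{2}\,\sigma_{vest}^{2}}{\sigma_{vis}^{2} + \sigma_{vest}^{2}}.$$

Because $\sigma_{comb}^{2}$ is never larger than the smaller single-cue variance, the model predicts that bimodal heading discrimination should be at least as precise as the better unimodal condition, which is the behavioral signature the reviewed experiments test.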
Crottaz-Herbette, Sonia; Fornari, Eleonora; Notter, Michael P; Bindschaedler, Claire; Manzoni, Laura; Clarke, Stephanie
2017-09-01
Prismatic adaptation has been repeatedly reported to alleviate neglect symptoms; in normal subjects, it was shown to enhance the representation of the left visual space within the left inferior parietal cortex. Our study aimed to determine in humans whether similar compensatory mechanisms underlie the beneficial effect of prismatic adaptation in neglect. Fifteen patients with right hemispheric lesions and 11 age-matched controls underwent a prismatic adaptation session which was preceded and followed by fMRI using a visual detection task. In patients, the prismatic adaptation session improved the accuracy of target detection in the left and central space and enhanced the representation of this visual space within the left hemisphere in parts of the temporal convexity, inferior parietal lobule and prefrontal cortex. Across patients, the increase in neuronal activation within the temporal regions correlated with performance improvements in this visual space. In control subjects, prismatic adaptation enhanced the representation of the left visual space within the left inferior parietal lobule and decreased it within the left temporal cortex. Thus, a brief exposure to prismatic adaptation enhances, both in patients and in control subjects, the competence of the left hemisphere for the left space, but the regions extended beyond the inferior parietal lobule to the temporal convexity in patients. These results suggest that the left hemisphere provides compensatory mechanisms in neglect by assuming the representation of the whole space within the ventral attentional system. The rapidity of the change suggests that the underlying mechanism relies on uncovering pre-existing synaptic connections. Copyright © 2017 Elsevier Ltd. All rights reserved.
Visual imagery and functional connectivity in blindness: a single-case study
Boucard, Christine C.; Rauschecker, Josef P.; Neufang, Susanne; Berthele, Achim; Doll, Anselm; Manoliu, Andrej; Riedl, Valentin; Sorg, Christian; Wohlschläger, Afra; Mühlau, Mark
2016-01-01
We present a case report on visual brain plasticity after total blindness acquired in adulthood. SH lost her sight when she was 27. Despite having been totally blind for 43 years, she reported relying strongly on her vivid visual imagery. Three-Tesla magnetic resonance imaging (MRI) of SH and age-matched controls was performed. The MRI sequence included anatomical MRI, resting-state functional MRI, and task-related functional MRI where SH was instructed to imagine colours, faces, and motion. Compared to controls, voxel-based analysis revealed white matter loss along SH's visual pathway as well as grey matter atrophy in the calcarine sulci. Yet we demonstrated activation in visual areas, including V1, using functional MRI. Of the four identified visual resting-state networks, none showed alterations in spatial extent; hence, SH's preserved visual imagery seems to be mediated by intrinsic brain networks of normal extent. Time courses of two of these networks showed increased correlation with that of the inferior posterior default mode network, which may reflect adaptive changes supporting SH's strong internal visual representations. Overall, our findings demonstrate that conscious visual experience is possible even after years of absence of extrinsic input. PMID:25690326
Visual imagery and functional connectivity in blindness: a single-case study.
Boucard, Christine C; Rauschecker, Josef P; Neufang, Susanne; Berthele, Achim; Doll, Anselm; Manoliu, Andrej; Riedl, Valentin; Sorg, Christian; Wohlschläger, Afra; Mühlau, Mark
2016-05-01
We present a case report on visual brain plasticity after total blindness acquired in adulthood. SH lost her sight when she was 27. Despite having been totally blind for 43 years, she reported relying strongly on her vivid visual imagery. Three-Tesla magnetic resonance imaging (MRI) of SH and age-matched controls was performed. The MRI sequence included anatomical MRI, resting-state functional MRI, and task-related functional MRI where SH was instructed to imagine colours, faces, and motion. Compared to controls, voxel-based analysis revealed white matter loss along SH's visual pathway as well as grey matter atrophy in the calcarine sulci. Yet we demonstrated activation in visual areas, including V1, using functional MRI. Of the four identified visual resting-state networks, none showed alterations in spatial extent; hence, SH's preserved visual imagery seems to be mediated by intrinsic brain networks of normal extent. Time courses of two of these networks showed increased correlation with that of the inferior posterior default mode network, which may reflect adaptive changes supporting SH's strong internal visual representations. Overall, our findings demonstrate that conscious visual experience is possible even after years of absence of extrinsic input.
Freud, Erez; Avidan, Galia; Ganel, Tzvi
2015-02-01
Holistic processing, the decoding of a stimulus as a unified whole, is a basic characteristic of object perception. Recent research using Garner's speeded classification task has shown that this processing style is utilized even for impossible objects that contain an inherent spatial ambiguity. In particular, similar Garner interference effects were found for possible and impossible objects, indicating similar holistic processing styles for the two object categories. In the present study, we further investigated the perceptual mechanisms that mediate such holistic representation of impossible objects. We relied on the notion that, whereas information embedded in the high-spatial-frequency (HSF) content supports fine-detailed processing of object features, the information conveyed by low spatial frequencies (LSF) is more crucial for the emergence of a holistic shape representation. To test the effects of image frequency on the holistic processing of impossible objects, participants performed the Garner speeded classification task on images of possible and impossible cubes filtered for their LSF and HSF information. For images containing only LSF, similar interference effects were observed for possible and impossible objects, indicating that the two object categories were processed in a holistic manner. In contrast, for the HSF images, Garner interference was obtained only for possible, but not for impossible objects. Importantly, we provided evidence to show that this effect could not be attributed to a lack of sensitivity to object possibility in the LSF images. Particularly, even for full-spectrum images, Garner interference was still observed for both possible and impossible objects. Additionally, performance in an object classification task revealed high sensitivity to object possibility, even for LSF images. Taken together, these findings suggest that the visual system can tolerate the spatial ambiguity typical to impossible objects by relying on information embedded in LSF, whereas HSF information may underlie the visual system's susceptibility to distortions in objects' spatial layouts.
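A minimal sketch of the LSF/HSF decomposition described above (our illustration; Gaussian low-pass filtering and the cutoff value are assumptions, since the abstract does not give the filter parameters):

```python
import numpy as np
from scipy import ndimage

def split_spatial_frequencies(image, sigma=8.0):
    """Split a grayscale image into low- and high-spatial-frequency content."""
    lsf = ndimage.gaussian_filter(image.astype(float), sigma=sigma)  # low-pass
    hsf = image.astype(float) - lsf                                  # residual high-pass
    return lsf, hsf

img = np.random.rand(256, 256)   # stand-in for a possible/impossible cube image
lsf_img, hsf_img = split_spatial_frequencies(img)
```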
Predicting Lameness in Sheep Activity Using Tri-Axial Acceleration Signals
Barwick, Jamie; Lamb, David; Dobos, Robin; Schneider, Derek; Welch, Mitchell; Trotter, Mark
2018-01-01
Simple Summary Monitoring livestock farmed under extensive conditions is challenging, particularly when observing animal behaviour at an individual level. Lameness is a disease symptom whose detection has traditionally relied on visual inspection to identify animals with an abnormal walking pattern. More recently, accelerometer sensors have been used in other livestock industries to detect lame animals. These devices record changes in activity intensity, allowing a grazing, walking, or resting animal to be differentiated. Using these on-animal sensors, grazing, standing, walking, and lame walking could be accurately detected from an ear-attached sensor. With further development, this classification algorithm could be linked with an automatic livestock monitoring system to provide real-time information on individual health status, something that is not practical under current extensive livestock production systems. Abstract Lameness is a clinical symptom associated with a number of sheep diseases around the world, having adverse effects on weight gain, fertility, and lamb birth weight, and increasing the risk of secondary diseases. Current methods to identify lame animals rely on labour-intensive visual inspection. The aim of the current study was to determine the ability of a collar-, leg-, and ear-attached tri-axial accelerometer to discriminate between sound and lame gait movement in sheep. Data were separated into mutually exclusive 10-s behaviour epochs and subjected to Quadratic Discriminant Analysis (QDA). Initial analysis showed high misclassification of lame grazing events with sound grazing and standing for all deployment modes. The final classification model, which included lame walking and all sound activity classes, yielded a prediction accuracy for lame locomotion of 82%, 35%, and 87% for the ear, collar, and leg deployments, respectively. Misclassification of sound walking with lame walking within the leg accelerometer dataset highlights the superiority of an ear mode of attachment for the classification of lame gait characteristics based on time-series accelerometer data. PMID:29324700
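A minimal sketch of the epoching-plus-QDA pipeline the abstract describes (our illustration: the sampling rate, the summary features, and the synthetic data are all assumptions):

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def epoch_features(acc_xyz, fs=12, epoch_s=10):
    """Summarize a (n_samples, 3) tri-axial trace into per-epoch features."""
    n = int(fs * epoch_s)                      # samples per 10-s epoch
    feats = []
    for i in range(0, len(acc_xyz) - n + 1, n):
        w = acc_xyz[i:i + n]
        mag = np.linalg.norm(w, axis=1)        # movement intensity
        feats.append([mag.mean(), mag.std(), *w.mean(axis=0), *w.std(axis=0)])
    return np.array(feats)

X = epoch_features(np.random.randn(12 * 600, 3))   # stand-in signal (10 min)
y = np.random.choice(["graze", "stand", "walk", "lame_walk"], len(X))
print(cross_val_score(QuadraticDiscriminantAnalysis(), X, y, cv=5).mean())
```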
PechaKucha Presentations: Teaching Storytelling, Visual Design, and Conciseness
ERIC Educational Resources Information Center
Lucas, Kristen; Rawlins, Jacob D.
2015-01-01
When speakers rely too heavily on presentation software templates, they often end up stultifying audiences with a triple-whammy of bullet points. In this article, Lucas and Rawlins present an alternative method--PechaKucha (the Japanese word for "chit chat")--a presentation style driven by a carefully planned, automatically timed…
Number versus Extent in Newborns' Spontaneous Preference for Collections of Dots
ERIC Educational Resources Information Center
Turati, Chiara; Gava, Lucia; Valenza, Eloisa; Ghirardi, Valentina
2013-01-01
This study investigated processing of number and extent in newborns. Using visual preference, we showed that newborns discriminated between small sets of dot collections relying solely on implicit numerical information when non-numerical continuous variables were strictly controlled (Experiment 1), and solely on continuous information when…
Cognitive Biases and Nonverbal Cue Availability in Detecting Deception
ERIC Educational Resources Information Center
Burgoon, Judee K.; Blair, J. Pete; Strom, Renee E.
2008-01-01
In potentially deceptive situations, people rely on mental shortcuts to help process information. These heuristic judgments are often biased and result in inaccurate assessments of sender veracity. Four such biases--truth bias, visual bias, demeanor bias, and expectancy violation bias--were examined in a judgment experiment that varied nonverbal…
Edutainment: Is Learning at Risk?
ERIC Educational Resources Information Center
Okan, Zuhal
2003-01-01
This article begins with a definition of "edutainment," a hybrid genre that relies heavily on visual material, on narrative or game-like formats, and on more informal, less didactic styles of address. It examines what technology and education entail. Discussion then focuses on a critique of problems with edutainment, drawing on the…
Sources of Information for Stress Assignment in Reading Greek
ERIC Educational Resources Information Center
Protopapas, Athanassios; Gerakaki, Svetlana; Alexandri, Stella
2007-01-01
To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual-orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on…
ERIC Educational Resources Information Center
Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina
2013-01-01
Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the intersensory redundancy hypothesis (IRH), that face discrimination, which relies on detection of visual…
Objective Mobility Documentation Using Emerging Technologies
ERIC Educational Resources Information Center
Williams, Michael D.; Ray, Christopher T.; Wolf, Jean; Blasch, Bruce B.
2006-01-01
Historically, rehabilitation clinicians who work with people who are visually impaired (that is, are blind or have low vision) have relied on subjective checklists and clinical assessments to document the capacity of individuals to perform various tasks, including mobility, and to assess the impact of rehabilitation. Numerous instruments have been…
A Coordinated Control Architecture for Disaster Response Robots
2016-01-01
to use these same algorithms to provide navigation odometry for the vehicle motions when the robot is driving. Visual Odometry: The YouTube link... depressed the accelerator pedal. We relied on the fact that the vehicle quickly comes to rest when the accelerator pedal is not being pressed. ...
Deaf Epistemology: Deafhood and Deafness
ERIC Educational Resources Information Center
Hauser, Peter C.; O'Hearn, Amanda; McKee, Michael; Steider, Anne; Thew, Denise
2010-01-01
Deaf epistemology constitutes the nature and extent of the knowledge that deaf individuals acquire growing up in a society that relies primarily on audition to navigate life. Deafness creates beings who are more visually oriented compared to their auditorily oriented peers. How hearing individuals interact with deaf individuals shapes how deaf…
Accelerated Colorimetric Micro-assay for Screening Mold Inhibitors
Carol A. Clausen; Vina W. Yang
2014-01-01
Rapid quantitative laboratory test methods are needed to screen potential antifungal agents. Existing laboratory test methods are relatively time consuming, may require specialized test equipment and rely on subjective visual ratings. A quantitative, colorimetric micro-assay has been developed that uses XTT tetrazolium salt to metabolically assess mold spore...
Development of Sensorial Experiments and Their Implementation into Undergraduate Laboratories
ERIC Educational Resources Information Center
Bromfield Lee, Deborah Christina
2009-01-01
"Visualization" of chemical phenomena often has been limited in the teaching laboratories to the sense of sight. We have developed chemistry experiments that rely on senses other than eyesight to investigate chemical concepts, make quantitative determinations, and familiarize students with chemical techniques traditionally designed using only…
Towards Autonomous Inspection of Space Systems Using Mobile Robotic Sensor Platforms
NASA Technical Reports Server (NTRS)
Wong, Edmond; Saad, Ashraf; Litt, Jonathan S.
2007-01-01
The space transportation systems required to support NASA's Exploration Initiative will demand a high degree of reliability to ensure mission success. This reliability can be realized through autonomous fault/damage detection and repair capabilities. It is crucial that such capabilities be incorporated into these systems, since it will be impractical to rely upon Extra-Vehicular Activity (EVA), visual inspection, or tele-operation due to the costly, labor-intensive and time-consuming nature of these methods. One approach to achieving this capability is through the use of an autonomous inspection system composed of miniature mobile sensor platforms that will cooperatively perform high-confidence inspection of space vehicles and habitats. This paper will discuss the efforts to develop a small-scale demonstration test-bed to investigate the feasibility of using autonomous mobile sensor platforms to perform inspection operations. Progress will be discussed in technology areas including the hardware implementation and demonstration of robotic sensor platforms, the implementation of a hardware test-bed facility, and the investigation of collaborative control algorithms.
A Hole in the Weather Warning System.
NASA Astrophysics Data System (ADS)
Wood, Vincent T.; Weisman, Robert A.
2003-02-01
lack of text information. These problems had forced deaf and hard of hearing people to rely on looking at the sky or having hearing people alert them as their primary methods of receiving emergency information. These problems are documented through the use of a survey of 277 deaf and hard of hearing people in Minnesota and Oklahoma, as well as specific examples. During the last two years, some progress has been made to "close this hole" in the weather warning system. The Federal Communications Commission has approved new rules requiring that all audio emergency information provided by television stations, satellite, and cable operators also be provided visually. In addition, the use of new technology such as pager systems, weather radios adapted for use by those with special needs, the Internet, and satellite warning systems has allowed deaf and hard of hearing people to have more access to emergency information. In this article, these improvements are documented, and continuing problems and possible solutions are also listed.
Catching What We Can't See: Manual Interception of Occluded Fly-Ball Trajectories
Bosco, Gianfranco; Delle Monache, Sergio; Lacquaniti, Francesco
2012-01-01
Control of interceptive actions may involve fine interplay between feedback-based and predictive mechanisms. These processes rely heavily on target motion information available when the target is visible. However, short-term visual memory signals as well as implicit knowledge about the environment may also contribute to elaborating a predictive representation of the target trajectory, especially when visual feedback is partially unavailable because other objects occlude the visual target. To determine how different processes and information sources are integrated in the control of the interceptive action, we manipulated a computer-generated visual environment representing a baseball game. Twenty-four subjects intercepted fly-ball trajectories by moving a mouse cursor and by indicating the interception with a button press. In two separate sessions, fly-ball trajectories were either fully visible or occluded for 750, 1000 or 1250 ms before ball landing. Natural ball motion was perturbed during the descending trajectory with effects of either weightlessness (0 g) or increased gravity (2 g), at times such that, for occluded trajectories, 500 ms of perturbed motion were visible before ball disappearance. To examine the contribution of previous visual experience with the perturbed trajectories to the interception of invisible targets, the order of visible and occluded sessions was permuted among subjects. Under these experimental conditions, we showed that, with fully visible targets, subjects combined servo-control and predictive strategies. Instead, when intercepting occluded targets, subjects relied mostly on predictive mechanisms based, however, on different types of information depending on previous visual experience. In fact, subjects without prior experience of the perturbed trajectories showed interceptive errors consistent with predictive estimates of the ball trajectory based on a priori knowledge of gravity. Conversely, the interceptive responses of subjects previously exposed to fully visible trajectories were compatible with the fact that implicit knowledge of the perturbed motion was also taken into account for the extrapolation of occluded trajectories. PMID:23166653
Catching what we can't see: manual interception of occluded fly-ball trajectories.
Bosco, Gianfranco; Delle Monache, Sergio; Lacquaniti, Francesco
2012-01-01
Control of interceptive actions may involve fine interplay between feedback-based and predictive mechanisms. These processes rely heavily on target motion information available when the target is visible. However, short-term visual memory signals as well as implicit knowledge about the environment may also contribute to elaborating a predictive representation of the target trajectory, especially when visual feedback is partially unavailable because other objects occlude the visual target. To determine how different processes and information sources are integrated in the control of the interceptive action, we manipulated a computer-generated visual environment representing a baseball game. Twenty-four subjects intercepted fly-ball trajectories by moving a mouse cursor and by indicating the interception with a button press. In two separate sessions, fly-ball trajectories were either fully visible or occluded for 750, 1000 or 1250 ms before ball landing. Natural ball motion was perturbed during the descending trajectory with effects of either weightlessness (0 g) or increased gravity (2 g), at times such that, for occluded trajectories, 500 ms of perturbed motion were visible before ball disappearance. To examine the contribution of previous visual experience with the perturbed trajectories to the interception of invisible targets, the order of visible and occluded sessions was permuted among subjects. Under these experimental conditions, we showed that, with fully visible targets, subjects combined servo-control and predictive strategies. Instead, when intercepting occluded targets, subjects relied mostly on predictive mechanisms based, however, on different types of information depending on previous visual experience. In fact, subjects without prior experience of the perturbed trajectories showed interceptive errors consistent with predictive estimates of the ball trajectory based on a priori knowledge of gravity. Conversely, the interceptive responses of subjects previously exposed to fully visible trajectories were compatible with the fact that implicit knowledge of the perturbed motion was also taken into account for the extrapolation of occluded trajectories.
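As a worked example of the extrapolation at issue (standard kinematics under constant acceleration; our addition, not the authors' model): if the ball disappears at height $y_0$ with vertical velocity $v_0$ and descends with acceleration $a$, the occluded trajectory is

$$y(t) = y_0 + v_0 t - \tfrac{1}{2}\,a\,t^{2}, \qquad a \in \{0,\; g,\; 2g\}.$$

An observer who extrapolates with the Earth-gravity prior $a = g$ while the display actually uses $a = 2g$ accumulates a position error of $\Delta y(t) = \tfrac{1}{2}\,g\,t^{2}$ during the occlusion, roughly $4.9$ m of simulated height after a 1-s occlusion (and an error of the same magnitude but opposite sign for $0\,g$). Errors with this signature are what distinguish an internal model of gravity from extrapolation of the recently seen perturbed motion.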
iview: an interactive WebGL visualizer for protein-ligand complex.
Li, Hongjian; Leung, Kwong-Sak; Nakane, Takanori; Wong, Man-Hon
2014-02-25
Visualization of protein-ligand complexes plays an important role in elaborating protein-ligand interactions and aiding novel drug design. Most existing web visualizers either rely on slow software rendering or lack virtual reality support, and the vital feature of macromolecular surface construction is often unavailable. We have developed iview, an easy-to-use interactive WebGL visualizer of protein-ligand complexes. It exploits hardware acceleration rather than software rendering. It features three special effects in virtual reality settings, namely anaglyph, parallax barrier, and Oculus Rift, resulting in visually appealing identification of intermolecular interactions. It supports four surface representations: van der Waals surface, solvent-excluded surface, solvent-accessible surface, and molecular surface. Moreover, based on the feature-rich version of iview, we have also developed a neat, tailor-made version specifically for our istar web platform for protein-ligand docking purposes. This demonstrates the excellent portability of iview. Using innovative 3D techniques, we provide a user-friendly visualizer that is not intended to compete with professional visualizers but to enable easy accessibility and platform independence.
Conscious visual memory with minimal attention.
Pinto, Yair; Vandenbroucke, Annelinde R; Otten, Marte; Sligte, Ilja G; Seth, Anil K; Lamme, Victor A F
2017-02-01
Is conscious visual perception limited to the locations that a person attends? The remarkable phenomenon of change blindness, which shows that people miss nearly all unattended changes in a visual scene, suggests the answer is yes. However, change blindness is found after visual interference (a mask or a new scene), so that subjects have to rely on working memory (WM), which has limited capacity, to detect the change. Before such interference, however, a much larger capacity store, called fragile memory (FM), which is easily overwritten by newly presented visual information, is present. Whether these different stores depend equally on spatial attention is central to the debate on the role of attention in conscious vision. In 2 experiments, we found that minimizing spatial attention almost entirely erases visual WM, as expected. Critically, FM remains largely intact. Moreover, minimally attended FM responses yield accurate metacognition, suggesting that conscious memory persists with limited spatial attention. Together, our findings help resolve the fundamental issue of how attention affects perception: Both visual consciousness and memory can be supported by only minimal attention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
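For context, capacity in change-detection paradigms of this kind is conventionally summarized with Cowan's $K$ (our addition; the abstract does not say which estimator the authors used):

$$K = N \times (H - FA),$$

where $N$ is the number of items in the display, $H$ the hit rate, and $FA$ the false-alarm rate. On this measure, the WM versus FM contrast above corresponds to a substantially larger estimated $K$ when the probe arrives before interfering visual input.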
Visualizing diurnal population change in urban areas for emergency management.
Kobayashi, Tetsuo; Medina, Richard M; Cova, Thomas J
2011-01-01
There is an increasing need for a quick, simple method to represent diurnal population change in metropolitan areas for effective emergency management and risk analysis. Many geographic studies rely on decennial U.S. Census data that assume that urban populations are static in space and time. This has obvious limitations in the context of dynamic geographic problems. The U.S. Department of Transportation publishes population data at the transportation analysis zone level in fifteen-minute increments. This level of spatial and temporal detail allows for improved dynamic population modeling. This article presents a methodology for visualizing and analyzing diurnal population change for metropolitan areas based on this readily available data. Areal interpolation within a geographic information system is used to create twenty-four (one per hour) population surfaces for the larger metropolitan area of Salt Lake County, Utah. The resulting surfaces represent diurnal population change for an average workday and are easily combined to produce an animation that illustrates population dynamics throughout the day. A case study of using the method to visualize population distributions in an emergency management context is provided using two scenarios: a chemical release and a dirty bomb in Salt Lake County. This methodology can be used to address a wide variety of problems in emergency management.
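A minimal sketch (our illustration, not the paper's code) of area-weighted interpolation from transportation analysis zones (TAZs) to grid cells, the operation underlying the hourly surfaces described above. Geometries and column names are hypothetical; in practice one column per fifteen-minute or hourly interval would be processed.

```python
import geopandas as gpd
from shapely.geometry import box

taz = gpd.GeoDataFrame({
    "pop_08": [1200, 300],       # 8 a.m. population per TAZ (hypothetical)
    "geometry": [box(0, 0, 2, 2), box(2, 0, 4, 2)],
})
grid = gpd.GeoDataFrame({
    "cell_id": [1, 2, 3, 4],     # target surface cells
    "geometry": [box(0, 0, 1, 2), box(1, 0, 2, 2), box(2, 0, 3, 2), box(3, 0, 4, 2)],
})

taz["taz_area"] = taz.geometry.area
pieces = gpd.overlay(grid, taz, how="intersection")
pieces["w"] = pieces.geometry.area / pieces["taz_area"]    # areal weight
pieces["est_08"] = pieces["pop_08"] * pieces["w"]          # repeat for each hourly column

print(pieces.groupby("cell_id")["est_08"].sum())           # one population surface per hour
```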
The Role of Teleophthalmology in the Management of Diabetic Retinopathy.
Salongcay, Recivall P; Silva, Paolo S
2018-01-01
The emergence of diabetes as a global epidemic is accompanied by the rise in diabetes-related retinal complications. Diabetic retinopathy, if left undetected and untreated, can lead to severe visual impairment and affect an individual's productivity and quality of life. Globally, diabetic retinopathy remains one of the leading causes of visual loss in the working-age population. Teleophthalmology for diabetic retinopathy is an innovative means of retinal evaluation that allows identification of eyes at risk for visual loss, thereby preserving vision and decreasing the overall burden to the health care system. Numerous studies worldwide have found teleophthalmology to be a reliable and cost-efficient alternative to traditional clinical examinations. It has reduced barriers to access to specialized eye care in both rural and urban communities. In teleophthalmology applications for diabetic retinopathy, it is critical that standardized protocols in image acquisition and evaluation are used to ensure low image ungradable rates and maintain the quality of images taken. Innovative imaging technology such as ultrawide field imaging has the potential to provide significant benefit with integration into teleophthalmology programs. Teleophthalmology programs for diabetic retinopathy rely on a comprehensive and multidisciplinary approach with partnerships across specialties and health care professionals to attain wider acceptability and allow evidence-based eye care to reach a much broader population. Copyright 2017 Asia-Pacific Academy of Ophthalmology.
Neural Integration in Body Perception.
Ramsey, Richard
2018-06-19
The perception of other people is instrumental in guiding social interactions. For example, the appearance of the human body cues a wide range of inferences regarding sex, age, health, and personality, as well as emotional state and intentions, which influence social behavior. To date, most neuroscience research on body perception has aimed to characterize the functional contribution of segregated patches of cortex in the ventral visual stream. In light of the growing prominence of network architectures in neuroscience, the current article reviews neuroimaging studies that measure functional integration between different brain regions during body perception. The review demonstrates that body perception is not restricted to processing in the ventral visual stream but instead reflects a functional alliance between the ventral visual stream and extended neural systems associated with action perception, executive functions, and theory of mind. Overall, these findings demonstrate how body percepts are constructed through interactions in distributed brain networks and underscore that functional segregation and integration should be considered together when formulating neurocognitive theories of body perception. Insight from such an updated model of body perception generalizes to inform the organizational structure of social perception and cognition more generally and also informs disorders of body image, such as anorexia nervosa, which may rely on atypical integration of body-related information.
2017-01-01
Selective visual attention enables organisms to enhance the representation of behaviorally relevant stimuli by altering the encoding properties of single receptive fields (RFs). Yet we know little about how the attentional modulations of single RFs contribute to the encoding of an entire visual scene. Addressing this issue requires (1) measuring a group of RFs that tile a continuous portion of visual space, (2) constructing a population-level measurement of spatial representations based on these RFs, and (3) linking how different types of RF attentional modulations change the population-level representation. To accomplish these aims, we used fMRI to characterize the responses of thousands of voxels in retinotopically organized human cortex. First, we found that the response modulations of voxel RFs (vRFs) depend on the spatial relationship between the RF center and the visual location of the attended target. Second, we used two analyses to assess the spatial encoding quality of a population of voxels. We found that attention increased fine spatial discriminability and representational fidelity near the attended target. Third, we linked these findings by manipulating the observed vRF attentional modulations and recomputing our measures of the fidelity of population codes. Surprisingly, we discovered that attentional enhancements of population-level representations largely depend on position shifts of vRFs, rather than changes in size or gain. Our data suggest that position shifts of single RFs are a principal mechanism by which attention enhances population-level representations in visual cortex. SIGNIFICANCE STATEMENT Although changes in the gain and size of RFs have dominated our view of how attention modulates visual information codes, such hypotheses have largely relied on the extrapolation of single-cell responses to population responses. Here we use fMRI to relate changes in single voxel receptive fields (vRFs) to changes in population-level representations. We find that vRF position shifts contribute more to population-level enhancements of visual information than changes in vRF size or gain. This finding suggests that position shifts are a principal mechanism by which spatial attention enhances population codes for relevant visual information. This poses challenges for labeled line theories of information processing, suggesting that downstream regions likely rely on distributed inputs rather than single neuron-to-neuron mappings. PMID:28242794
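One common way to characterize a voxel receptive field (vRF) of the kind analyzed above is to fit a 2-D Gaussian over visual space to the voxel's responses at each stimulus position. The sketch below is our own minimal illustration with synthetic data; the paper's actual estimation procedure may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def vrf(xy, x0, y0, size, amp, base):
    """2-D isotropic Gaussian receptive field model."""
    x, y = xy
    return base + amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * size ** 2))

# stimulus positions (deg) and one voxel's mean response at each position
xs, ys = np.meshgrid(np.linspace(-8, 8, 9), np.linspace(-8, 8, 9))
xy = (xs.ravel(), ys.ravel())
resp = vrf(xy, 2.0, -1.0, 2.5, 1.0, 0.1) + np.random.normal(0, 0.05, xs.size)

params, _ = curve_fit(vrf, xy, resp, p0=(0, 0, 3, 1, 0))
print("center = ({:.1f}, {:.1f}) deg, size = {:.1f} deg, gain = {:.2f}".format(*params[:4]))
```

Attentional modulations can then be quantified as changes in the fitted center (position shifts), size, or gain between attention conditions, the three modulation types the study compares.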
Detection of Ballast Damage by In-Situ Vibration Measurement of Sleepers
NASA Astrophysics Data System (ADS)
Lam, H. F.; Wong, M. T.; Keefe, R. M.
2010-05-01
Ballasted track is one of the most important elements of railway transportation systems worldwide. Owing to its importance in railway safety, many monitoring and evaluation methods have been developed. Current railway track monitoring systems are comprehensive, fast, and efficient in testing railway track level and alignment, rail gauge, rail corrugation, etc. However, the monitoring of ballast condition still relies very much on visual inspection and core tests. Although extensive research has been carried out on non-destructive methods for ballast condition evaluation, a commonly accepted and cost-effective method is still in demand. In Hong Kong practice, if abnormal train vibration is reported by the train operator or passengers, permanent way inspectors locate the problem area by track geometry measurement. It must be pointed out that visual inspection can only identify ballast damage on the track surface; track geometry deficiencies and rail twists can be detected using a track gauge. Ballast damage under the sleeper loading area and the ballast shoulder, which are the main factors affecting track stability and ride quality, is extremely difficult if not impossible to detect by visual inspection. The core test is destructive, expensive, time-consuming, and potentially disruptive to traffic. A fast, real-time ballast damage detection method that can be implemented by permanent way inspectors with simple equipment could therefore provide valuable information for engineers in assessing the safety and riding quality of ballasted track systems. The main objective of this paper is to study the feasibility of using the vibration characteristics of sleepers to quantify the ballast condition under the sleepers, and thereby to explore the possibility of developing a handy method for the detection of ballast damage based on the measured vibration of sleepers.
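The study's premise, that ballast support changes the dynamic response of the sleeper, lends itself to a simple measurement sketch. The following is a minimal illustration under our own assumptions (sampling rate, search band, and the synthetic ring-down signal are all invented), not the authors' procedure:

```python
import numpy as np

fs = 2000.0                                  # sample rate (Hz), assumed
t = np.arange(0.0, 2.0, 1.0 / fs)
# synthetic impact ring-down in place of a measured sleeper record (140 Hz resonance)
acc = np.exp(-5 * t) * np.sin(2 * np.pi * 140 * t) + 0.05 * np.random.randn(t.size)

spec = np.abs(np.fft.rfft(acc * np.hanning(acc.size)))   # windowed amplitude spectrum
freqs = np.fft.rfftfreq(acc.size, d=1.0 / fs)
band = (freqs > 20) & (freqs < 800)          # physically plausible search band, assumed
print(f"dominant sleeper frequency: {freqs[band][spec[band].argmax()]:.1f} Hz")
```

A systematic shift of this dominant frequency between sleepers, or over time for one sleeper, is the kind of feature one might relate to degraded ballast support.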
Foot placement relies on state estimation during visually guided walking.
Maeda, Rodrigo S; O'Connor, Shawn M; Donelan, J Maxwell; Marigold, Daniel S
2017-02-01
As we walk, we must accurately place our feet to stabilize our motion and to navigate our environment. We must achieve this accuracy despite imperfect sensory feedback and unexpected disturbances. In this study we tested whether the nervous system uses state estimation to beneficially combine sensory feedback with forward model predictions to compensate for these challenges. Specifically, subjects wore prism lenses during a visually guided walking task, and we used trial-by-trial variation in the prism lenses to add uncertainty to visual feedback and induce a reweighting of this input. To expose the altered weighting, we added a consistent prism shift that required subjects to adapt their estimate of the visuomotor mapping between a perceived target location and the motor command necessary to step to that position. With added prism noise, subjects responded to the consistent prism shift with smaller initial foot placement error but took longer to adapt, compatible with our mathematical model of the walking task, which leverages state estimation to compensate for noise. Much as when we perform voluntary and discrete movements with our arms, it appears our nervous system uses state estimation during walking to accurately bring the foot to the ground. Accurate foot placement is essential for safe walking. We used computational models and human walking experiments to test how our nervous system achieves this accuracy. We find that our control of foot placement beneficially combines sensory feedback with internal forward model predictions to accurately estimate the body's state. Our results match recent computational neuroscience findings for reaching movements, suggesting that state estimation is a general mechanism of human motor control. Copyright © 2017 the American Physiological Society.
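To make the state-estimation idea concrete, here is a minimal scalar Kalman filter that fuses a forward-model prediction with noisy visual feedback about target position. This is our illustrative sketch, not the paper's model; all parameter values are invented.

```python
import numpy as np

def kalman_step(x_hat, p, z, q=0.01, r=0.25):
    """One predict-update cycle for a scalar state with identity dynamics."""
    x_pred, p_pred = x_hat, p + q            # forward model carries the estimate ahead
    k = p_pred / (p_pred + r)                # gain: reliability of vision vs. prediction
    return x_pred + k * (z - x_pred), (1.0 - k) * p_pred

x_hat, p = 0.0, 1.0                          # initial belief about target position (m)
target = 0.3                                 # prism-shifted target position (m)
for _ in range(20):
    z = target + np.random.normal(0.0, np.sqrt(0.25))   # noisy visual sample
    x_hat, p = kalman_step(x_hat, p, z)
print(f"estimated target position: {x_hat:.2f} m")
```

Raising the measurement variance r lowers the gain, so each visual sample moves the estimate less: the filter responds to a sudden consistent shift with a smaller initial jump but takes more samples to converge, qualitatively matching the smaller-initial-error, slower-adaptation pattern reported above.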
Three-dimensional analysis of alveolar bone resorption by image processing of 3-D dental CT images
NASA Astrophysics Data System (ADS)
Nagao, Jiro; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Yamada, Shohzoh; Naitoh, Munetaka
2006-03-01
We have developed a novel system that provides total support for the assessment of alveolar bone resorption, caused by periodontitis, based on three-dimensional (3-D) dental CT images. In spite of the difficulty of perceiving the complex 3-D shape of resorption, dentists assessing resorption location and severity have been relying on two-dimensional radiography and probing, which merely provides one-dimensional information (depth) about resorption shape. However, there has been little work on assisting assessment of the disease with 3-D image processing and visualization techniques. This work provides quantitative evaluation results and figures for our system, which measures the three-dimensional shape and spread of resorption. It has the following functions: (1) it measures the depth of resorption by virtually simulating probing in the 3-D CT images, taking advantage of image processing, which does not suffer obstruction by teeth on the inter-proximal sides and allows much smaller measurement intervals than the conventional examination; (2) it visualizes the disposition of the depth with movies and graphs; (3) it produces a quantitative index and an intuitive visual representation of the spread of resorption in the inter-radicular region in terms of area; and (4) it calculates the volume of resorption as another severity index in the inter-radicular region and the region outside it. Experimental results on two cases of 3-D dental CT images, and a comparison of the results with the clinical examination results and experts' measurements of the corresponding patients, confirmed that the proposed system gives satisfying results, including 0.1 to 0.6 mm of resorption measurement (probing) error and fairly intuitive presentation of measurement and calculation results.
Efficient Delivery and Visualization of Long Time-Series Datasets Using Das2 Tools
NASA Astrophysics Data System (ADS)
Piker, C.; Granroth, L.; Faden, J.; Kurth, W. S.
2017-12-01
For over 14 years the University of Iowa Radio and Plasma Wave Group has utilized a network-transparent data streaming and visualization system for most daily data review and collaboration activities. This system, called Das2, was originally designed in support of the Cassini Radio and Plasma Wave Science (RPWS) investigation, but is now relied on for daily review and analysis of Voyager, Polar, Cluster, Mars Express, Juno and other mission results. In light of current efforts to promote automatic data distribution in space physics, it seems prudent to provide an overview of our open source Das2 programs and interface definitions to the wider community and to recount lessons learned. This submission provides an overview of the interfaces that define the system, describes the relationship between the Das2 effort and Autoplot, and examines the handling of Cassini RPWS Wideband waveforms and dynamic spectra as examples of dealing with long time-series datasets. In addition, the advantages and limitations of the current Das2 tool set are discussed, as well as lessons learned that are applicable to other data sharing initiatives. Finally, plans for future developments are outlined, including improved catalogs to support 'no-software' data sources and redundant multi-server failover, as well as new adapters for CSV (Comma Separated Values) and JSON (JavaScript Object Notation) output to support Cassini closeout and the HAPI (Heliophysics Application Programming Interface) initiative.
Thermometer use among Mexican immigrant mothers in California.
Schwartz, N; Guendelman, S; English, P
1997-11-01
A community-based household survey was used to assess the relationship between thermometer use, home treatment and utilization of health care services. Using a cross-sectional design, the study surveyed 688 low-income Mexican-origin mothers of children between the ages of 8 and 16 months in San Diego County. Mothers were asked how they determine that their child has a fever and how often they use a thermometer. Nearly 40% of the low-income Mexican mothers interviewed in San Diego County never used a thermometer to determine childhood fever. Approximately two-thirds (64.7%) relied either primarily or exclusively on embodied methods such as visual observation or touch to determine fever in their child. A multivariate logistic regression analysis determined that low education and a separated or divorced marital status decreased the odds of thermometer use, whereas regular contact with the health care system doubled the likelihood of thermometer use. Mothers who relied on embodied methods were more likely to use over-the-counter medications than those who relied on thermometers; however, no significant differences were found between groups in other methods of home treatment. Fever determination modalities can be used to screen for lack of access to care and to provide for other health care needs in a culturally appropriate manner. While clinicians' expectations may include parental experience with temperature taking, current pediatric literature questions the need for home-based thermometer use. Possible alternatives to the traditional rectal thermometer might include digital thermometers and color-coded thermometer strips.
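The multivariate analysis reported here is a standard logistic regression; a minimal sketch with synthetic data (the variable names and effect sizes below are invented, not the survey's) shows how such odds ratios are estimated.

    import numpy as np
    import statsmodels.api as sm

    # Synthetic stand-ins for the survey variables (illustrative only).
    rng = np.random.default_rng(1)
    n = 688
    low_education = rng.integers(0, 2, n)
    separated     = rng.integers(0, 2, n)
    regular_care  = rng.integers(0, 2, n)
    logit = -0.2 - 0.6 * low_education - 0.5 * separated + 0.7 * regular_care
    uses_thermometer = rng.random(n) < 1 / (1 + np.exp(-logit))

    X = sm.add_constant(np.column_stack([low_education, separated, regular_care]))
    result = sm.Logit(uses_thermometer.astype(int), X).fit(disp=False)
    # Exponentiated coefficients are odds ratios; a value near 2 for the
    # regular-care column would mirror the "doubled likelihood" finding.
    print(np.exp(result.params))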
Ivanova, Elena; Yee, Christopher W; Baldoni, Robert; Sagdullaev, Botir T
2016-09-01
In retinal degenerative disease (RD), the diminished light signal from dying photoreceptors has been considered the sole cause of visual impairment. Recent studies show a 10-fold increase in spontaneous activity in the RD network, challenging this paradigm. This aberrant activity forms a new barrier for the light signal, and not only exacerbates the loss of vision but also may stand in the way of visual restoration. This activity originates in AII amacrine cells and relies on excessive activation of gap junctions. However, it remains unclear whether aberrant activity affects central visual processing and what mechanisms lead to this excessive activation of gap junctions. By combining genetic manipulation with electrophysiological recordings of light-induced activity in both living mice and isolated wholemount retina, we demonstrate that aberrant activity extends along retinotectal projections to alter activity in higher brain centers. Next, to selectively eliminate Cx36-containing gap junctions, which are the primary type expressed by AII amacrine cells, we crossed rd10 mice, a slow-degenerating model of RD, with Cx36 knockout mice. We found that retinal aberrant activity was reduced in the rd10/Cx36KO mice compared with rd10 controls, providing direct evidence that Cx36-containing gap junctions are involved in generating aberrant activity in RD. These data provide essential support for future experiments to determine whether selectively targeting these gap junctions could be a valid strategy for reducing aberrant activity and restoring light responses in RD. Copyright © 2015 Elsevier Ltd. All rights reserved.
Visuomotor Dissociation in Cerebral Scaling of Size.
Potgieser, Adriaan R E; de Jong, Bauke M
2016-01-01
Estimating size and distance is crucial in effective visuomotor control. The concept of an internal coordinate system implies that visual and motor size parameters are scaled onto a common template. To dissociate perceptual and motor components in such scaling, we performed an fMRI experiment in which 16 right-handed subjects copied geometric figures while the result of drawing remained out of sight. Either the size of the example figure varied while the size of the drawing remained constant (visual incongruity), or the size of the examples remained constant while subjects were instructed to make changes in size (motor incongruity). These incongruent conditions were compared to congruent conditions. Statistical Parametric Mapping (SPM8) revealed brain activations related to size incongruity in the dorsolateral prefrontal and inferior parietal cortex, pre-SMA / anterior cingulate and anterior insula, dominant in the right hemisphere. This pattern represented simultaneous use of a 'resized' virtual template and actual picture information, requiring spatial working memory, early-stage attention shifting and inhibitory control. Activations were strongest for motor incongruity, while right pre-dorsal premotor activation specifically occurred in this condition. Visual incongruity additionally relied on a ventral visual pathway. Left ventral premotor activation occurred in all variably sized drawing conditions, while constant visuomotor size, compared to congruent size variation, uniquely activated the lateral occipital cortex in addition to superior parietal regions. These results highlight size as a fundamental parameter in both general hand movement and movement guided by objects perceived in the context of surrounding 3D space.
Aicardi, Christine
2014-01-01
Taking up the view that semi-institutional gatherings such as clubs, societies and research schools have been instrumental in creating sheltered spaces from which many a 20th-century project-driven interdisciplinary research programme could develop and become established within the institutions of science, the paper explores the history of one such gathering from its inception in the early 1980s into the 2000s: the Helmholtz Club, which brought together scientists from research fields as varied as neuroanatomy, neurophysiology, psychophysics, computer science and engineering, who all shared an interest in the study of the visual system and of higher cognitive functions relying on visual perception, such as visual consciousness. It argues that British molecular biologist turned South Californian neuroscientist Francis Crick had an early and lasting influence over the Helmholtz Club, of which he was a founding pillar, and that from its inception the club served as a constitutive element in his long-term plans for a neuroscience of vision and of cognition. Further, it argues that in this role the Helmholtz Club served many purposes, the primary one being a social forum for interdisciplinary discussion, where 'discussion' was not mere talk but was imbued with an epistemic value and, as such, carefully cultivated. Finally, it questions what counts as 'doing science' and, in turn, definitions of success and failure, and provides some material evidence towards re-appraising the successfulness of Crick's contribution to the neurosciences. PMID:24384229
Eye movements reveal epistemic curiosity in human observers.
Baranes, Adrien; Oudeyer, Pierre-Yves; Gottlieb, Jacqueline
2015-12-01
Saccadic (rapid) eye movements are a primary means by which humans and non-human primates sample visual information. However, while saccadic decisions are intensively investigated in instrumental contexts where saccades guide subsequent actions, it is largely unknown how they may be influenced by curiosity - the intrinsic desire to learn. While saccades are sensitive to visual novelty and visual surprise, no study has examined their relation to epistemic curiosity - interest in symbolic, semantic information. To investigate this question, we tracked the eye movements of human observers while they read trivia questions and, after a brief delay, were visually given the answer. We show that higher curiosity was associated with earlier anticipatory orienting of gaze toward the answer location without changes in other metrics of saccades or fixations, and that these influences were distinct from those produced by variations in confidence and surprise. Across subjects, the enhancement of anticipatory gaze was correlated with measures of trait curiosity from personality questionnaires. Finally, a machine learning algorithm could predict curiosity in a cross-subject manner, relying primarily on statistical features of the gaze position before the answer onset and independently of covariations in confidence or surprise, suggesting potential practical applications for educational technologies, recommender systems and research in cognitive sciences. With this article, we provide full access to the annotated database, allowing readers to reproduce the results. Epistemic curiosity produces specific effects on oculomotor anticipation that can be used to read out curiosity states. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
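The cross-subject prediction can be illustrated with a leave-one-subject-out scheme: train on all subjects but one, test on the held-out subject. The gaze features and curiosity labels below are synthetic stand-ins, not the study's data or algorithm.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    # Synthetic per-trial gaze features (e.g., anticipatory gaze position
    # before answer onset) and a binary high/low curiosity label.
    rng = np.random.default_rng(0)
    n_subjects, trials = 20, 30
    groups = np.repeat(np.arange(n_subjects), trials)
    y = rng.integers(0, 2, n_subjects * trials)
    X = rng.normal(size=(n_subjects * trials, 4)) + 0.8 * y[:, None]

    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                             groups=groups, cv=LeaveOneGroupOut())
    print(scores.mean())   # above 0.5 means cross-subject generalization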
Pruett, Jake A; Zúñiga-Vega, J Jaime; Campos, Stephanie M; Soini, Helena A; Novotny, Milos V; Vital-García, Cuauhcihuatl; Martins, Emília P; Hews, Diana K
2016-11-01
Animals rely on multimodal signals to obtain information from conspecifics through alternative sensory systems, and the evolutionary loss of a signal in one modality may lead to compensation through increased use of signals in an alternative modality. We investigated associations between chemical signaling and the evolutionary loss of abdominal color patches in males of four species (two plain-bellied and two colorful-bellied) of Sceloporus lizards. We conducted field trials to compare behavioral responses of male lizards to swabs with femoral gland (FG) secretions from conspecific males and control swabs (clean paper). We also analyzed the volatile organic compound (VOC) composition of male FG secretions by stir bar extraction and gas chromatography-mass spectrometry (GC-MS) to test the hypothesis that loss of the visual signal is associated with elaboration of the chemical signal. Males of plain-bellied, but not colorful-bellied, species exhibited different rates of visual displays when exposed to swabs of conspecific FG secretions relative to control swabs. The VOC composition of male Sceloporus FG secretions was similar across all four species, and no clear association between relative abundances of VOCs and the evolutionary loss of abdominal color patches was observed. The emerging pattern is that behavioral responses to conspecific chemical signals are species- and context-specific in male Sceloporus, and that compensatory changes in receivers, but not signalers, may be involved in mediating increased responsiveness to chemical signals in males of plain-bellied species.
Evidence for discrete landmark use by pigeons during homing.
Mora, Cordula V; Ross, Jeremy D; Gorsevski, Peter V; Chowdhury, Budhaditya; Bingman, Verner P
2012-10-01
Considerable efforts have been made to investigate how homing pigeons (Columba livia f. domestica) are able to return to their loft from distant, unfamiliar sites while the mechanisms underlying navigation in familiar territory have received less attention. With the recent advent of global positioning system (GPS) data loggers small enough to be carried by pigeons, the role of visual environmental features in guiding navigation over familiar areas is beginning to be understood, yet, surprisingly, we still know very little about whether homing pigeons can rely on discrete, visual landmarks to guide navigation. To assess a possible role of discrete, visual landmarks in navigation, homing pigeons were first trained to home from a site with four wind turbines as salient landmarks as well as from a control site without any distinctive, discrete landmark features. The GPS-recorded flight paths of the pigeons on the last training release were straighter and more similar among birds from the turbine site compared with those from the control site. The pigeons were then released from both sites following a clock-shift manipulation. Vanishing bearings from the turbine site continued to be homeward oriented as 13 of 14 pigeons returned home. By contrast, at the control site the vanishing bearings were deflected in the expected clock-shift direction and only 5 of 13 pigeons returned home. Taken together, our results offer the first strong evidence that discrete, visual landmarks are one source of spatial information homing pigeons can utilize to navigate when flying over a familiar area.
Identifying solutions to medication adherence in the visually impaired elderly.
Smith, Miranda; Bailey, Trista
2014-02-01
Adults older than 65 years of age with vision impairment are more likely to have difficulty managing medications compared with people having normal vision. This patient population has difficulty reading medication information and may take the wrong medication or incorrect doses of medication, resulting in serious consequences, including overdose or inadequate treatment of health problems. Visually impaired patients report increased anxiety related to medication management and must rely on others to obtain necessary drug information. Pharmacists have a unique opportunity to promote accurate medication adherence in this special population. This article reviews literature illustrating how severe medication mismanagement can occur in the visually impaired elderly and presents resources and solutions for pharmacists to take a larger role in adherence management in this population.
Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil
2011-01-01
Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934
'When is VISION asked too much?'
van der Wildt, G J; den Brinker, B P; Wertheim, A H
1995-01-01
Over the last two decades, a shift has taken place from substitutional/compensatory training to the utilisation of residual vision in the rehabilitation of the visually impaired. Some visually impaired people are able to use their visual perception nearly as completely as normally sighted people in spite of a severe visual disability. On the other hand, people with nearly normal visual functions can be severely visually handicapped. To illustrate this, two cases are presented. The first case is a man, aged 47 years, with juvenile macular degeneration in both eyes. In spite of a very low visual acuity of less than 0.05, he finished a university education and maintains himself very well in a leading position in a scientific environment by using adequate low vision devices. For his leisure activities, such as photography and speed skating, he also relies upon visual perception. The second case is a woman, aged 30 years, with nearly normal visual functions, who is not able to read for longer periods because of conflicting information from body and eye movements and from the visual input, which causes sickness during reading. She is unable to use books for her study and works with recordings on tape. The results of a comprehensive visual assessment will be related to specific low vision devices and their use.
NASA Astrophysics Data System (ADS)
Buck, Z.
2013-04-01
As we turn more and more to high-end computing to understand the Universe at cosmological scales, visualizations of simulations will take on a vital role as perceptual and cognitive tools. In collaboration with the Adler Planetarium and the University of California High-Performance AstroComputing Center (UC-HiPACC), I am interested in better understanding the use of visualizations to mediate astronomy learning across formal and informal settings. The aspect of my research presented here uses quantitative methods to investigate how learners rely on color to interpret dark matter in a cosmology visualization. The concept of dark matter is vital to our current understanding of the Universe, and yet we do not know how to effectively present dark matter visually to support learning. I employ an alternative-treatment, post-test-only experimental design, in which members of an equivalent sample are randomly assigned to one of three treatment groups, followed by treatment and a post-test. Results indicate a significant correlation (p < .05) between the color of dark matter in the visualization and survey responses, implying that aesthetic variations like color can have a profound effect on audience interpretation of a cosmology visualization.
Fong, Justin; Klaic, Marlena; Nair, Siddharth; Vetere, Frank; Cofré Lizama, L. Eduardo; Galea, Mary Pauline
2016-01-01
Background Stroke is a leading cause of disability worldwide, with upper limb deficits affecting an estimated 30% to 60% of survivors. The effectiveness of upper limb rehabilitation relies on numerous factors, particularly patient compliance to home programs and exercises set by therapists. However, therapists lack objective information about their patients’ adherence to rehabilitation exercises as well as other uses of the affected arm and hand in everyday life outside the clinic. We developed a system that consists of wearable sensor technology to monitor a patient’s arm movement and a Web-based dashboard to visualize this information for therapists. Objective The aim of our study was to evaluate how therapists use upper limb movement information visualized on a dashboard to support the rehabilitation process. Methods An interactive dashboard prototype with simulated movement information was created and evaluated through a user-centered design process with therapists (N=8) at a rehabilitation clinic. Data were collected through observations of therapists interacting with an interactive dashboard prototype, think-aloud data, and interviews. Data were analyzed qualitatively through thematic analysis. Results Therapists use visualizations of upper limb information in the following ways: (1) to obtain objective data of patients’ activity levels, exercise, and neglect outside the clinic, (2) to engage patients in the rehabilitation process through education, motivation, and discussion of experiences with activities of daily living, and (3) to engage with other clinicians and researchers based on objective data. A major limitation is the lack of contextual data, which is needed by therapists to discern how movement data visualized on the dashboard relate to activities of daily living. Conclusions Upper limb information captured through wearable devices provides novel insights for therapists and helps to engage patients and other clinicians in therapy. Consideration needs to be given to the collection and visualization of contextual information to provide meaningful insights into patient engagement in activities of daily living. These findings open the door for further work to develop a fully functioning system and to trial it with patients and clinicians during therapy. PMID:28582257
On the coding and reporting of race and ethnicity in New Hampshire for purposes of cancer reporting.
Riddle, Bruce L
2005-01-01
The objective was to investigate how data on race and ethnicity are collected by hospitals reporting to the New Hampshire State Cancer Registry (NHSCR). NHSCR surveyed hospitals asking how information on race and ethnicity was collected, and a review of relevant legal mandates and national guidelines was undertaken. Many hospitals lack policies on collection, computer systems fail to support national guidelines, and staff rely on visual inspection. Hospital staff are not currently culturally equipped to collect race and ethnicity data in a meaningful way. The numerator in cancer incidence rates is therefore most likely inaccurate and, for some smaller populations, very biased. A new framework is needed that takes into account the needs of the democracy.
Plaisant, Catherine; Lam, Stanley; Shneiderman, Ben; Smith, Mark S.; Roseman, David; Marchand, Greg; Gillam, Michael; Feied, Craig; Handler, Jonathan; Rappaport, Hank
2008-01-01
As electronic health records (EHR) become more widespread, they enable clinicians and researchers to pose complex queries that can benefit immediate patient care and deepen understanding of medical treatment and outcomes. However, current query tools make complex temporal queries difficult to pose, and physicians have to rely on computer professionals to specify the queries for them. This paper describes our efforts to develop a novel query tool implemented in a large operational system at the Washington Hospital Center (Microsoft Amalga, formerly known as Azyxxi). We describe our design of the interface to specify temporal patterns and the visual presentation of results, and report on a pilot user study looking for adverse reactions following radiology studies using contrast. PMID:18999158
Motion based parsing for video from observational psychology
NASA Astrophysics Data System (ADS)
Kokaram, Anil; Doyle, Erika; Lennon, Daire; Joyeux, Laurent; Fuller, Ray
2006-01-01
In psychology it is common to conduct studies involving the observation of humans undertaking some task. The sessions are typically recorded on video and used for subjective visual analysis. The subjective analysis is tedious and time consuming, not only because much useless video material is recorded but also because subjective measures of human behaviour are not necessarily repeatable. This paper presents tools using content-based video analysis that allow automated parsing of video from one such study involving dyslexia. The tools rely on implicit measures of human motion that can be generalised to other applications in the domain of human observation. Results comparing quantitative assessment of human motion with subjective assessment are also presented, illustrating that the system is a useful scientific tool.
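A minimal sketch of an implicit motion measure of this general kind, using simple frame differencing to separate active from inactive stretches of video (illustrative only; the paper's tools are more sophisticated).

    import numpy as np

    def motion_energy(frames):
        """Mean absolute inter-frame difference: a simple implicit measure
        of how much the scene (e.g., a person) is moving."""
        frames = frames.astype(np.float32)
        return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

    def parse_active_segments(energy, threshold):
        """Indices of frame pairs whose motion exceeds the threshold,
        usable to discard uneventful stretches of the recording."""
        return np.flatnonzero(energy > threshold)

    # Illustrative: 100 grayscale frames of 64x64 pixels.
    rng = np.random.default_rng(0)
    video = rng.integers(0, 255, size=(100, 64, 64))
    e = motion_energy(video)
    print(parse_active_segments(e, threshold=e.mean()).size)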
NASA Astrophysics Data System (ADS)
Mustari, Afrina; Nakamura, Naoki; Nishidate, Izumi; Kawauchi, Satoko; Sato, Shunichi; Sato, Manabu; Kokobo, Yasuaki
2017-04-01
The nervous system relies on a continuous and adequate supply of blood flow, which brings the nutrients it needs and removes the waste products of metabolism. Failure of these mechanisms is found in a number of devastating cerebral diseases, including stroke, vascular dementia, brain injury and trauma. Vasomotion, the spontaneous low-frequency oscillation produced by the contraction and relaxation of arterioles, appears to be an intrinsic property of the cerebral vasculature and is important for monitoring cerebral flow, tissue metabolism and the health status of brain tissue. In the present study, we investigated a method to visualize the spontaneous low-frequency oscillation of cerebral blood volume based on sequential RGB images of the exposed brain.
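One plausible reading of such a method, sketched under stated assumptions: band-pass filter each pixel's time course in a low-frequency band and map the resulting power. The frame rate, band limits and the use of the green channel (a common choice for blood-volume signals) are assumptions for illustration, not the authors' exact pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def vasomotion_map(frames, fs, band=(0.05, 0.15)):
        """Per-pixel band-pass power of an RGB image sequence in a
        low-frequency band, as a crude map of blood-volume oscillation.
        frames: (time, height, width, 3); fs: frame rate in Hz."""
        green = frames[..., 1].astype(float)   # assumed blood-volume proxy
        b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, green - green.mean(axis=0), axis=0)
        return filtered.var(axis=0)            # oscillation power per pixel

    # Illustrative: 60 s of 8 Hz video with a 0.1 Hz oscillation in one patch.
    fs = 8.0
    t = np.arange(0, 60, 1 / fs)
    frames = np.random.rand(t.size, 32, 32, 3) * 0.1
    frames[:, 10:20, 10:20, 1] += 0.5 * np.sin(2 * np.pi * 0.1 * t)[:, None, None]
    vm = vasomotion_map(frames, fs)
    print(vm[15, 15] > vm[0, 0])   # True: the oscillating patch stands out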
Integrated Computational System for Aerodynamic Steering and Visualization
NASA Technical Reports Server (NTRS)
Hesselink, Lambertus
1999-01-01
In February of 1994, an effort by the Fluid Dynamics and Information Sciences Divisions at NASA Ames Research Center with McDonnell Douglas Aerospace Company and Stanford University was initiated to develop, demonstrate, validate and disseminate automated software for numerical aerodynamic simulation. The goal of the initiative was to develop a tri-discipline approach encompassing CFD, Intelligent Systems, and Automated Flow Feature Recognition to improve the utility of CFD in the design cycle. This approach would then be represented through an intelligent computational system which could accept an engineer's definition of a problem and construct an optimal and reliable CFD solution. Stanford University's role focused on developing technologies that advance visualization capabilities for analysis of CFD data, extract specific flow features useful for the design process, and compare CFD data with experimental data. During the years 1995-1997, Stanford University focused on developing techniques in the area of tensor visualization and flow feature extraction. Software libraries were created enabling feature extraction and exploration of tensor fields. As a proof of concept, a prototype system called the Integrated Computational System (ICS) was developed to demonstrate the CFD design cycle. The current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment are not needed in the comparison; this is often a problem with many data comparison techniques. In addition, since only topology-based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will (1) briefly review the technologies developed during 1995-1997, (2) describe current technologies in the area of comparison techniques, (3) describe the theory of our new method researched during the grant year, (4) summarize a few of the results, and finally (5) discuss work within the last 6 months that is a direct extension of the grant.
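As a toy version of a topology-based signature (not the ICS implementation), the sketch below locates critical points of a 2-D vector field via sign changes of its components; counting and comparing such points requires no grid matching or vector alignment, which is the property the report emphasizes.

    import numpy as np

    def critical_cells(u, v):
        """Grid cells of a 2-D vector field (u, v) that contain a critical
        point, detected where both components change sign across the cell.
        The count and locations form a crude topological signature."""
        def sign_change(a):
            corners = np.stack([a[:-1, :-1], a[1:, :-1], a[:-1, 1:], a[1:, 1:]])
            return (corners.min(axis=0) < 0) & (corners.max(axis=0) > 0)
        return np.argwhere(sign_change(u) & sign_change(v))

    # Illustrative field with a single vortex at the origin: u = -y, v = x.
    y, x = np.mgrid[-1:1:32j, -1:1:32j]
    print(critical_cells(-y, x))   # one cell at the center of the grid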
Luminance, Colour, Viewpoint and Border Enhanced Disparity Energy Model
Martins, Jaime A.; Rodrigues, João M. F.; du Buf, Hans
2015-01-01
The visual cortex is able to extract disparity information through the use of binocular cells. This process is reflected by the Disparity Energy Model, which describes the role and functioning of simple and complex binocular neuron populations and how they are able to extract disparity. This model uses explicit cell parameters, such as spatial frequencies, orientations, binocular phases and receptive field positions, to mathematically determine preferred cell disparities. However, the brain cannot access such explicit cell parameters; it must rely on cell responses. In this article, we implemented a trained binocular neuronal population which encodes disparity information implicitly. This allows the population to learn how to decode disparities, in a similar way to how our visual system could have developed this ability during evolution. At the same time, responses of monocular simple and complex cells can also encode line and edge information, which is useful for refining disparities at object borders. The brain should then be able, starting from a low-level disparity draft, to integrate all information, including colour and viewpoint perspective, in order to propagate better estimates to higher cortical areas. PMID:26107954
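The classical disparity-energy computation the article builds on can be sketched in a few lines. The 1-D stimulus and Gabor parameters below are illustrative; the trained population described in the article replaces this explicit-parameter decoding with learned responses.

    import numpy as np

    def gabor(x, sigma, freq, phase):
        return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

    def binocular_energy(left, right, x, sigma=2.0, freq=0.25, pos_shift=0):
        """Disparity-energy response: quadrature Gabor pair applied to each
        eye, with the right eye's receptive field shifted by pos_shift."""
        le = left  @ gabor(x, sigma, freq, 0)
        lo = left  @ gabor(x, sigma, freq, np.pi / 2)
        re = right @ gabor(x - pos_shift, sigma, freq, 0)
        ro = right @ gabor(x - pos_shift, sigma, freq, np.pi / 2)
        return (le + re) ** 2 + (lo + ro) ** 2

    # A 1-D random pattern seen by both eyes with a true disparity of 2 samples.
    rng = np.random.default_rng(0)
    x = np.arange(-16, 17, dtype=float)
    pattern = rng.normal(size=x.size + 4)
    left, right = pattern[2:2 + x.size], pattern[0:x.size]
    responses = {d: binocular_energy(left, right, x, pos_shift=d) for d in range(-4, 5)}
    print(max(responses, key=responses.get))   # peaks near the true disparity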
A Scalable Distributed Approach to Mobile Robot Vision
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin; Browning, Robert L.; Gribble, William S.
1997-01-01
This paper documents our progress during the first year of work on our original proposal entitled 'A Scalable Distributed Approach to Mobile Robot Vision'. We are pursuing a strategy for real-time visual identification and tracking of complex objects which does not rely on specialized image-processing hardware. In this system perceptual schemas represent objects as a graph of primitive features. Distributed software agents identify and track these features, using variable-geometry image subwindows of limited size. Active control of imaging parameters and selective processing makes simultaneous real-time tracking of many primitive features tractable. Perceptual schemas operate independently from the tracking of primitive features, so that real-time tracking of a set of image features is not hurt by latency in recognition of the object that those features make up. The architecture allows semantically significant features to be tracked with limited expenditure of computational resources, and allows the visual computation to be distributed across a network of processors. Early experiments are described which demonstrate the usefulness of this formulation, followed by a brief overview of our more recent progress (after the first year).
NASA Astrophysics Data System (ADS)
Zellmann, Stefan; Percan, Yvonne; Lang, Ulrich
2015-01-01
Reconstruction of 2-d image primitives or of 3-d volumetric primitives is one of the most common operations performed by the rendering components of modern visualization systems. Because this operation is often aided by GPUs, reconstruction is typically restricted to first-order interpolation. With the advent of in situ visualization, however, the assumption that rendering algorithms are in general executed on GPUs is no longer adequate. We thus propose a framework that provides versatile texture filtering capabilities: up to third-order reconstruction using various types of cubic filtering and interpolation primitives; cache-optimized algorithms that integrate seamlessly with GPGPU rendering or with software rendering that was optimized for cache-friendly "Structure of Array" (SoA) access patterns; and a memory management layer (MML) that gracefully hides the complexities of extra data copies necessary for memory access optimizations such as swizzling, for rendering on GPGPUs, or for reconstruction schemes that rely on pre-filtered data arrays. We demonstrate the effectiveness of our software architecture by integrating it into, and validating it with, the open source direct volume rendering (DVR) software DeskVOX.
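A scalar Python sketch of third-order reconstruction using the Catmull-Rom kernel, one of the cubic interpolation primitives such a framework provides (the real implementation is cache-optimized SoA/GPU code; this only shows the arithmetic).

    import numpy as np

    def catmull_rom(samples, t):
        """Third-order (cubic) reconstruction of a uniformly sampled 1-D
        signal at fractional position t, via the Catmull-Rom kernel."""
        i = int(np.floor(t))
        f = t - i
        p0, p1, p2, p3 = samples[i - 1], samples[i], samples[i + 1], samples[i + 2]
        return 0.5 * (
            (2 * p1)
            + (-p0 + p2) * f
            + (2 * p0 - 5 * p1 + 4 * p2 - p3) * f**2
            + (-p0 + 3 * p1 - 3 * p2 + p3) * f**3
        )

    signal = np.sin(np.linspace(0, 2 * np.pi, 16))
    print(catmull_rom(signal, 7.5))   # smoother than linear interpolation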
Li, Zan; Yan, Shi-Hai; Chen, Chen; Geng, Zhi-Rong; Chang, Jia-Yin; Chen, Chun-Xia; Huang, Bing-Huan; Wang, Zhi-Lin
2017-04-15
Reactions of peroxynitrite (ONOO⁻) with biomolecules can lead to cytotoxic and cytoprotective events. Due to the difficulty of directly and unambiguously measuring its levels, most of the beneficial effects associated with ONOO⁻ in vivo remain controversial or poorly characterized. Recently, optical imaging has served as a powerful noninvasive approach to studying ONOO⁻ in living systems. However, ratiometric probes for ONOO⁻ are currently lacking. Herein, we report the design, synthesis, and biological evaluation of F482, a novel fluorescence indicator that relies on ONOO⁻-induced diene oxidation. The remarkable sensitivity, selectivity, and photostability of F482 enabled us to visualize basal ONOO⁻ in immune-stimulated phagocyte cells and quantify its generation in phagosomes by high-throughput flow cytometry analysis. With the aid of in vivo ONOO⁻ imaging in a mouse inflammation model assisted by F482, we envision that F482 will find widespread applications in the study of ONOO⁻ biology associated with physiological and pathological processes in vitro and in vivo. Copyright © 2016 Elsevier B.V. All rights reserved.
Stereopsis, vertical disparity and relief transformations.
Gårding, J; Porrill, J; Mayhew, J E; Frisby, J P
1995-03-01
The pattern of retinal binocular disparities acquired by a fixating visual system depends on both the depth structure of the scene and the viewing geometry. This paper treats the problem of interpreting the disparity pattern in terms of scene structure without relying on estimates of fixation position from eye movement control and proprioception mechanisms. We propose a sequential decomposition of this interpretation process into disparity correction, which is used to compute three-dimensional structure up to a relief transformation, and disparity normalization, which is used to resolve the relief ambiguity to obtain metric structure. We point out that the disparity normalization stage can often be omitted, since relief transformations preserve important properties such as depth ordering and coplanarity. Based on this framework we analyse three previously proposed computational models of disparity processing: the Mayhew and Longuet-Higgins model, the deformation model and the polar angle disparity model. We show how these models are related, and argue that none of them can account satisfactorily for available psychophysical data. We therefore propose an alternative model, regional disparity correction. Using this model we derive predictions for a number of experiments based on vertical disparity manipulations, and compare them to available experimental data. The paper is concluded with a summary and a discussion of the possible architectures and mechanisms underlying stereopsis in the human visual system.
Aging disrupts the neural transformations that link facial identity across views.
Habak, Claudine; Wilkinson, Frances; Wilson, Hugh R
2008-01-01
Healthy human aging can have adverse effects on cortical function and on the brain's ability to integrate visual information to form complex representations. Facial identification is crucial to successful social discourse, and yet, it remains unclear whether the neuronal mechanisms underlying face perception per se, and the speed with which they process information, change with age. We present face images whose discrimination relies strictly on the shape and geometry of a face at various stimulus durations. Interestingly, we demonstrate that facial identity matching is maintained with age when faces are shown in the same view (e.g., front-front or side-side), regardless of exposure duration, but degrades when faces are shown in different views (e.g., front and turned 20 degrees to the side) and does not improve at longer durations. Our results indicate that perceptual processing speed for complex representations and the mechanisms underlying same-view facial identity discrimination are maintained with age. In contrast, information is degraded in the neural transformations that represent facial identity across views. We suggest that the accumulation of useful information over time to refine a representation within a population of neurons saturates earlier in the aging visual system than it does in the younger system and contributes to the age-related deterioration of face discrimination across views.
NASA Astrophysics Data System (ADS)
Leonardi, Lorenzo; Sowa, Michael G.; Hewko, Mark D.; Schattka, Bernhard J.; Payette, Jeri R.; Hastings, Michelle; Posthumus, Trevor B.; Mantsch, Henry H.
2003-07-01
The present and accepted standard for determining the status of tissue relies on visual inspection of the tissue. Based on the surface appearance of the tissue, medical personnel will make an assessment and proceed to a course of action or treatment. Visual inspection of tissue is central to many areas of clinical medicine and remains a cornerstone of dermatology, reconstructive plastic surgery, and the management of chronic wounds and burn injuries. Near infrared spectroscopic imaging holds the promise of being able to monitor the dynamics of tissue physiology in real time and detect pathology in living tissue. The continuous measurement of metabolic, physiological, or structural changes in tissue is of primary concern in many clinical and biomedical domains. A near infrared hyperspectral imaging system was constructed for the assessment of burn injuries and skin flaps or skin grafts. This project merged basic science with engineering and integrated manufacturing to develop a device suitable for detecting ischemic tissue. The device has the potential to provide measures of tissue physiology, oxygen delivery and tissue hydration during patient screening, in the operating room, or during therapy and post-operative/treatment monitoring. Results from a pre-clinical burn injury study will be presented.
Mirror neurons as a model for the science and treatment of stuttering.
Snyder, Gregory J; Waddell, Dwight E; Blanchet, Paul
2016-01-06
Persistent developmental stuttering is generally considered a speech disorder and affects ∼1% of the global population. While mainstream treatments continue to rely on unreliable behavioral speech motor targets, an emerging research perspective utilizes the mirror neuron system hypothesis as a neural substrate in the science and treatment of stuttering. The purpose of this exploratory study is to test the viability of the mirror neuron system hypothesis in the fluency enhancement of those who stutter. Participants were asked to speak while producing self-generated manual gestures, while producing and visually perceiving self-generated manual gestures, and while visually perceiving manual gestures, relative to a nonmanual-gesture control speaking condition. Data reveal that all experimental speaking conditions enhanced fluent speech in all research participants, and the simultaneous perception and production of manual gesturing trended toward more efficacious fluency enhancement. Coupled with existing research, we interpret these data as suggestive of fluency enhancement through subcortical involvement within multiple levels of an action-understanding mirror neuron network. In addition, incidental findings report that stuttering moments were observed to occur simultaneously both orally and manually. Consequently, these data suggest that stuttering behaviors are compensatory, distal manifestations over multiple expressive modalities of an underlying centralized genetic neural substrate of the disorder.
A GIS-Enabled, Michigan-Specific, Hierarchical Groundwater Modeling and Visualization System
NASA Astrophysics Data System (ADS)
Liu, Q.; Li, S.; Mandle, R.; Simard, A.; Fisher, B.; Brown, E.; Ross, S.
2005-12-01
Efficient management of groundwater resources relies on a comprehensive database that represents the characteristics of the natural groundwater system as well as analysis and modeling tools to describe the impacts of decision alternatives. Many agencies in Michigan have spent several years compiling expensive and comprehensive surface water and groundwater inventories and other related spatial data that describe their respective areas of responsibility. However, most often this wealth of descriptive data has only been utilized for basic mapping purposes. The benefits from analyzing these data, using GIS analysis functions or externally developed analysis models or programs, has yet to be systematically realized. In this talk, we present a comprehensive software environment that allows Michigan groundwater resources managers and frontline professionals to make more effective use of the available data and improve their ability to manage and protect groundwater resources, address potential conflicts, design cleanup schemes, and prioritize investigation activities. In particular, we take advantage of the Interactive Ground Water (IGW) modeling system and convert it to a customized software environment specifically for analyzing, modeling, and visualizing the Michigan statewide groundwater database. The resulting Michigan IGW modeling system (IGW-M) is completely window-based, fully interactive, and seamlessly integrated with a GIS mapping engine. The system operates in real-time (on the fly) providing dynamic, hierarchical mapping, modeling, spatial analysis, and visualization. Specifically, IGW-M allows water resources and environmental professionals in Michigan to: * Access and utilize the extensive data from the statewide groundwater database, interactively manipulate GIS objects, and display and query the associated data and attributes; * Analyze and model the statewide groundwater database, interactively convert GIS objects into numerical model features, automatically extract data and attributes, and simulate unsteady groundwater flow and contaminant transport in response to water and land management decisions; * Visualize and map model simulations and predictions with data from the statewide groundwater database in a seamless interactive environment. IGW-M has the potential to significantly improve the productivity of Michigan groundwater management investigations. It changes the role of engineers and scientists in modeling and analyzing the statewide groundwater database from heavily physical to cognitive problem-solving and decision-making tasks. The seamless real-time integration, real-time visual interaction, and real-time processing capability allows a user to focus on critical management issues, conflicts, and constraints, to quickly and iteratively examine conceptual approximations, management and planning scenarios, and site characterization assumptions, to identify dominant processes, to evaluate data worth and sensitivity, and to guide further data-collection activities. We illustrate the power and effectiveness of the IGW-M modeling and visualization system with a real case study and a real-time, live demonstration.
Data Visualization Saliency Model: A Tool for Evaluating Abstract Data Visualizations
Matzen, Laura E.; Haass, Michael J.; Divis, Kristin M.; ...
2017-08-29
Evaluating the effectiveness of data visualizations is a challenging undertaking and often relies on one-off studies that test a visualization in the context of one specific task. Researchers across the fields of data science, visualization, and human-computer interaction are calling for foundational tools and principles that could be applied to assessing the effectiveness of data visualizations in a more rapid and generalizable manner. One possibility for such a tool is a model of visual saliency for data visualizations. Visual saliency models are typically based on the properties of the human visual cortex and predict which areas of a scene have visual features (e.g. color, luminance, edges) that are likely to draw a viewer's attention. While these models can accurately predict where viewers will look in a natural scene, they typically do not perform well for abstract data visualizations. In this paper, we discuss the reasons for the poor performance of existing saliency models when applied to data visualizations. We introduce the Data Visualization Saliency (DVS) model, a saliency model tailored to address some of these weaknesses, and we test the performance of the DVS model and existing saliency models by comparing the saliency maps produced by the models to eye tracking data obtained from human viewers. In conclusion, we describe how modified saliency models could be used as general tools for assessing the effectiveness of visualizations, including the strengths and weaknesses of this approach.
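Saliency models of this kind are commonly scored against eye tracking with an ROC-style measure; a minimal sketch with synthetic maps follows (an illustration of the evaluation idea, not the DVS model or the paper's exact metric).

    import numpy as np

    def saliency_auc(saliency, fixations):
        """ROC-style score: how well saliency values separate fixated pixels
        from a random sample of non-fixated pixels (0.5 = chance)."""
        sal = saliency.ravel()
        fix = fixations.ravel().astype(bool)
        pos = sal[fix]
        neg = np.random.default_rng(0).choice(sal[~fix], size=pos.size, replace=False)
        # P(random positive > random negative): a standard AUC estimate.
        return ((pos[:, None] > neg[None, :]).mean()
                + 0.5 * (pos[:, None] == neg[None, :]).mean())

    # Synthetic stand-ins for a model's saliency map and a binary fixation
    # map derived from eye tracking.
    rng = np.random.default_rng(1)
    saliency = rng.random((64, 64))
    fixations = np.zeros((64, 64), dtype=bool)
    fixations[20:24, 30:34] = True
    saliency[20:24, 30:34] += 0.5        # make the model "right" for the demo
    print(saliency_auc(saliency, fixations))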
Bahrick, Lorraine E.; Lickliter, Robert; Castellanos, Irina
2014-01-01
Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the Intersensory Redundancy Hypothesis (IRH), that face discrimination, which relies on detection of visual featural information, would be impaired in the context of intersensory redundancy provided by audiovisual speech, and enhanced in the absence of intersensory redundancy (unimodal visual and asynchronous audiovisual speech) in early development. Later in development, following improvements in attention, faces should be discriminated in both redundant audiovisual and nonredundant stimulation. Results supported these predictions. Two-month-old infants discriminated a novel face in unimodal visual and asynchronous audiovisual speech but not in synchronous audiovisual speech. By 3 months, face discrimination was evident even during synchronous audiovisual speech. These findings indicate that infant face perception is enhanced and emerges developmentally earlier following unimodal visual than synchronous audiovisual exposure and that intersensory redundancy generated by naturalistic audiovisual speech can interfere with face processing. PMID:23244407
Pycortex: an interactive surface visualizer for fMRI
Gao, James S.; Huth, Alexander G.; Lescroart, Mark D.; Gallant, Jack L.
2015-01-01
Surface visualizations of fMRI provide a comprehensive view of cortical activity. However, surface visualizations are difficult to generate, and most common visualization techniques rely on unnecessary interpolation, which limits the fidelity of the resulting maps. Furthermore, it is difficult to understand the relationship between flattened cortical surfaces and the underlying 3D anatomy using currently available tools. To address these problems we have developed pycortex, a Python toolbox for interactive surface mapping and visualization. Pycortex exploits the power of modern graphics cards to sample volumetric data on a per-pixel basis, allowing dense and accurate mapping of the voxel grid across the surface. Anatomical and functional information can be projected onto the cortical surface. The surface can be inflated and flattened interactively, aiding interpretation of the correspondence between the anatomical surface and the flattened cortical sheet. The output of pycortex can be viewed using WebGL, a technology compatible with modern web browsers. This allows complex fMRI surface maps to be distributed broadly online without requiring installation of complex software. PMID:26483666
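A minimal usage sketch, assuming pycortex and its example subject data are installed; the S1 subject and fullhead transform come from the project's documentation, and exact dataset names may vary by version.

    import cortex  # pycortex

    # A Volume ties a voxel grid to a subject's anatomy and to the
    # functional-to-anatomical transform used for per-pixel sampling.
    volume = cortex.Volume.random(subject="S1", xfmname="fullhead")

    cortex.quickshow(volume)   # static flat-map figure
    cortex.webshow(volume)     # interactive WebGL viewer in the browser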
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arendt, Dustin L.; Volkova, Svitlana
Analyzing and visualizing large amounts of social media communications and contrasting short-term conversation changes over time and geo-locations is extremely important for commercial and government applications. Earlier approaches for large-scale text stream summarization used dynamic topic models and trending words. Instead, we rely on text embeddings – low-dimensional word representations in a continuous vector space where similar words are embedded nearby each other. This paper presents ESTEEM, a novel tool for visualizing and evaluating spatiotemporal embeddings learned from streaming social media texts. Our tool allows users to monitor and analyze query words and their closest neighbors with an interactive interface. We used state-of-the-art techniques to learn embeddings and developed a visualization to represent dynamically changing relations between words in social media over time and other dimensions. This is the first interactive visualization of streaming text representations learned from social media texts that also allows users to contrast differences across multiple dimensions of the data.
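The core query operation, finding a word's nearest neighbors by cosine similarity within one time slice's embedding space, can be sketched as follows. The vocabulary and vectors are toy stand-ins; ESTEEM's actual interface and training pipeline are not shown.

    import numpy as np

    def nearest_neighbors(query, vocab, vectors, k=5):
        """k closest words to `query` by cosine similarity in one embedding
        space (e.g., embeddings trained on one time slice of a stream)."""
        v = vectors[vocab.index(query)]
        sims = vectors @ v / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(v))
        order = np.argsort(-sims)
        return [vocab[i] for i in order if vocab[i] != query][:k]

    # Toy vocabulary and vectors; a real tool would load embeddings learned
    # for each time window and compare the neighbor lists across windows to
    # surface shifting word associations.
    vocab = ["storm", "rain", "wind", "election", "vote"]
    rng = np.random.default_rng(0)
    vectors = rng.normal(size=(len(vocab), 50))
    print(nearest_neighbors("storm", vocab, vectors, k=3))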
Natural image statistics mediate brightness 'filling in'.
Dakin, Steven C; Bex, Peter J
2003-11-22
Although the human visual system can accurately estimate the reflectance (or lightness) of surfaces under enormous variations in illumination, two equiluminant grey regions can be induced to appear quite different simply by placing a light-dark luminance transition between them. This illusion, the Craik-Cornsweet-O'Brien (CCOB) effect, has been taken as evidence for a low-level 'filling-in' mechanism subserving lightness perception. Here, we present evidence that the mechanism responsible for the CCOB effect operates not via propagation of a neural signal across space but by amplification of the low spatial frequency (SF) structure of the image. We develop a simple computational model that relies on the statistics of natural scenes to actively reconstruct the image that is most likely to have caused an observed series of responses across SF channels. This principle is tested psychophysically by deriving classification images (CIs) for subjects' discrimination of the contrast polarity of CCOB stimuli masked with noise. CIs resemble 'filled-in' stimuli; i.e. observers rely on portions of the stimuli that contain no information per se but that correspond closely to the reported perceptual completion. As predicted by the model, the filling-in process is contingent on the presence of appropriate low SF structure.
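The amplification idea can be sketched as a 1/f reweighting of the amplitude spectrum, a simplification of the authors' probabilistic channel model; the edge profile and exponent below are illustrative.

    import numpy as np

    def amplify_low_sf(image, exponent=1.0, eps=1e-3):
        """Reweight an image's amplitude spectrum toward 1/f, boosting the
        low spatial frequencies whose attenuation produces the CCOB effect."""
        F = np.fft.fft2(image)
        fy = np.fft.fftfreq(image.shape[0])[:, None]
        fx = np.fft.fftfreq(image.shape[1])[None, :]
        f = np.sqrt(fx**2 + fy**2)
        weight = 1.0 / (f + eps) ** exponent   # natural-scene-like 1/f emphasis
        return np.real(np.fft.ifft2(F * weight))

    # Two physically equal plateaus separated by a Cornsweet-style edge;
    # after low-SF amplification the plateaus differ, like the perceived
    # 'filled-in' brightness.
    x = np.linspace(-1, 1, 256)
    edge = np.exp(-np.abs(x) * 8) * np.sign(x)
    image = np.tile(edge, (64, 1))
    out = amplify_low_sf(image)
    print(out[:, 32].mean(), out[:, -32].mean())   # left vs right levels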
The Emerging Genre of Data Comics.
Bach, Benjamin; Riche, Nathalie Henry; Carpendale, Sheelagh; Pfister, Hanspeter
2017-01-01
As we increasingly rely on data to understand our world, and as problems require global solutions, we need to effectively communicate that data to help people make informed decisions. The special Art on Graphics article explores the potential of data comics and their unique ability to communicate both data and context via compelling visual storytelling.
Comic Strips as a Text Structure for Learning to Read
ERIC Educational Resources Information Center
McVicker, Claudia J.
2007-01-01
Teachers can use comics for reading instruction by capitalizing on their colorful graphic representation. Technology and reading are wed during the use of the Internet, and readers must rely on their visual literacy skills--a group of vision competencies people can hone for comprehension. This article reports on strategies for developing visual…
Simple Heuristic Approach to Introduction of the Black-Scholes Model
ERIC Educational Resources Information Center
Yalamova, Rossitsa
2010-01-01
A heuristic approach to explaining the Black-Scholes option pricing model in undergraduate classes is described. The approach draws upon the method of protocol analysis to encourage students to "think aloud" so that their mental models can be surfaced. It also relies upon extensive visualizations to communicate relationships that are…
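For reference, the closed-form European call price that such a heuristic introduction builds toward, as a short Python function in standard Black-Scholes notation.

    import numpy as np
    from scipy.stats import norm

    def black_scholes_call(S, K, T, r, sigma):
        """Black-Scholes price of a European call: spot S, strike K, time
        to maturity T (years), risk-free rate r, volatility sigma."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        d2 = d1 - sigma * np.sqrt(T)
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

    # A common classroom example: an at-the-money one-year option.
    print(black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2))  # ~10.45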
Memory for Recently Accessed Visual Attributes
ERIC Educational Resources Information Center
Jiang, Yuhong V.; Shupe, Joshua M.; Swallow, Khena M.; Tan, Deborah H.
2016-01-01
Recent reports have suggested that the attended features of an item may be rapidly forgotten once they are no longer relevant for an ongoing task (attribute amnesia). This finding relies on a surprise memory procedure that places high demands on declarative memory. We used intertrial priming to examine whether the representation of an item's…
Science Visual Literacy: Learners' Perceptions and Knowledge of Diagrams
ERIC Educational Resources Information Center
McTigue, Erin M.; Flowers, Amanda C.
2011-01-01
Constructing meaning from science texts relies not only on comprehending the words but also the diagrams and other graphics. The goal of this study was to explore elementary students' perceptions of science diagrams and their skills related to diagram interpretation. 30 students, ranging from second grade through middle school, completed a diagram…
Colorimetric micro-assay for accelerated screening of mould inhibitors
Carol A. Clausen; Vina W. Yang
2013-01-01
Since current standard laboratory methods are time-consuming macro-assays that rely on subjective visual ratings of mould growth, rapid and quantitative laboratory methods are needed to screen potential mould inhibitors for use in and on cellulose-based products. A colorimetric micro-assay has been developed that uses XTT tetrazolium salt to enzymatically assess...
The Generation and Maintenance of Visual Mental Images: Evidence from Image Type and Aging
ERIC Educational Resources Information Center
De Beni, Rossana; Pazzaglia, Francesca; Gardini, Simona
2007-01-01
Imagery is a multi-componential process involving different mental operations. This paper addresses whether separate processes underlie the generation, maintenance and transformation of mental images or whether these cognitive processes rely on the same mental functions. We also examine the influence of age on these mental operations for…
Ambiguity in Speaking Chemistry and Other STEM Content: Educational Implications
ERIC Educational Resources Information Center
Isaacson, Mick D.; Michaels, Michelle
2015-01-01
Ambiguity in speech is a possible barrier to the acquisition of knowledge for students who have print disabilities (such as blindness, visual impairments, and some specific learning disabilities) and rely on auditory input for learning. Chemistry appears to have considerable potential for being spoken ambiguously and may be a barrier to accessing…
ERIC Educational Resources Information Center
Lee, Hyunju; Schneider, Stephen E.
2015-01-01
Many topics in introductory astronomy at the college or high-school level rely implicitly on using astronomical photographs and visual data in class. However, students bring many preconceptions to their understanding of these materials that ultimately lead to misconceptions, and the research about students' interpretation of astronomical images…
The Influence of Attentional Focus Instructions and Vision on Jump Height Performance
ERIC Educational Resources Information Center
Abdollahipour, Reza; Psotta, Rudolf; Land, William M.
2016-01-01
Purpose: Studies have suggested that the use of visual information may underlie the benefit associated with an external focus of attention. Recent studies exploring this connection have primarily relied on motor tasks that involve manipulation of an object (object projection). The present study examined whether vision influences the effect of…
Optimizations and Applications in Head-Mounted Video-Based Eye Tracking
ERIC Educational Resources Information Center
Li, Feng
2011-01-01
Video-based eye tracking techniques have become increasingly attractive in many research fields, such as visual perception and human-computer interface design. The technique primarily relies on the positional difference between the center of the eye's pupil and the first-surface reflection at the cornea, the corneal reflection (CR). This…
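In practice, the pupil-minus-CR vector this abstract describes is mapped to screen coordinates through a calibration fit. The sketch below illustrates that standard idea with a hypothetical four-point calibration; it is not code from this thesis, and all numbers are invented.

```python
# Minimal sketch of the pupil-minus-CR idea: the gaze estimate comes from the
# vector between pupil center and corneal reflection, mapped to screen
# coordinates by a calibration fit (here a linear least-squares fit).
import numpy as np

def fit_calibration(pcr_vectors, screen_points):
    """pcr_vectors: (N,2) pupil-minus-CR vectors; screen_points: (N,2) known targets."""
    A = np.hstack([pcr_vectors, np.ones((len(pcr_vectors), 1))])  # affine terms
    coef, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coef                                                   # (3,2) mapping

def gaze_point(pupil_xy, cr_xy, coef):
    v = np.asarray(pupil_xy) - np.asarray(cr_xy)
    return np.append(v, 1.0) @ coef

# Hypothetical 4-point calibration, then one gaze estimate.
pcr = np.array([[-5, -4], [5, -4], [-5, 4], [5, 4]], dtype=float)
targets = np.array([[0, 0], [1024, 0], [0, 768], [1024, 768]], dtype=float)
coef = fit_calibration(pcr, targets)
print(gaze_point((102.0, 80.0), (100.0, 82.0), coef))  # screen point for a P-CR offset of (2, -2)
```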
Redesigning the Human-Machine Interface for Computer-Mediated Visual Technologies.
ERIC Educational Resources Information Center
Acker, Stephen R.
1986-01-01
This study examined an application of a human-machine interface that relies on optical bar codes incorporated in a computer-based module to teach radio production. The sequencing procedure used establishes the user rather than the computer as the locus of control for the mediated instruction. (Author/MBR)
Computer-Based Learning of Spelling Skills in Children with and without Dyslexia
ERIC Educational Resources Information Center
Kast, Monika; Baschera, Gian-Marco; Gross, Markus; Jancke, Lutz; Meyer, Martin
2011-01-01
Our spelling training software recodes words into multisensory representations comprising visual and auditory codes. These codes represent information about letters and syllables of a word. An enhanced version, developed for this study, contains an additional phonological code and an improved word selection controller relying on a phoneme-based…
Effects of Camera Arrangement on Perceptual-Motor Performance in Minimally Invasive Surgery
ERIC Educational Resources Information Center
Delucia, Patricia R.; Griswold, John A.
2011-01-01
Minimally invasive surgery (MIS) is performed for a growing number of treatments. Whereas open surgery requires large incisions, MIS relies on small incisions through which instruments are inserted and tissues are visualized with a camera. MIS results in benefits for patients compared with open surgery, but degrades the surgeon's perceptual-motor…
A Critical Review of Line Graphs in Behavior Analytic Journals
ERIC Educational Resources Information Center
Kubina, Richard M., Jr.; Kostewicz, Douglas E.; Brennan, Kaitlyn M.; King, Seth A.
2017-01-01
Visual displays such as graphs have played an instrumental role in psychology. One discipline, behavior analysis, relies almost exclusively on graphs in both applied and basic settings. The most common graphic used in behavior analysis falls under the category of time series, and the line graph represents the most frequently used display for visual…
ERIC Educational Resources Information Center
Chaumon, Maximilien; Schwartz, Denis; Tallon-Baudry, Catherine
2009-01-01
Oscillatory synchrony in the gamma band (30-120 Hz) has been implicated in various cognitive functions, including conscious perception and learning. Explicit memory encoding, in particular, relies on enhanced gamma oscillations. Does this finding extend to unconscious memory encoding? Can we dissociate gamma oscillations related to unconscious…
Triggerfish uses chromaticity and lightness for object segregation
2017-01-01
Humans group components of visual patterns according to their colour, and perceive colours separately from shape. This property of human visual perception is the basis behind the Ishihara test for colour deficiency, where an observer is asked to detect a pattern made up of dots of similar colour with variable lightness against a background of dots made from different colour(s) and lightness. To find out if fish use colour for object segregation in a similar manner to humans, we used stimuli inspired by the Ishihara test. Triggerfish (Rhinecanthus aculeatus) were trained to detect a cross constructed from similarly coloured dots against various backgrounds. Fish detected this cross even when it was camouflaged using either achromatic or chromatic noise, but fish relied more on chromatic cues for shape segregation. It remains unknown whether fish may switch to rely primarily on achromatic cues in scenarios where target objects have higher achromatic contrast and lower chromatic contrast. Fish were also able to generalize between stimuli of different colours, suggesting that colour and shape are processed by fish independently. PMID:29308267
Task demands determine comparison strategy in whole probe change detection.
Udale, Rob; Farrell, Simon; Kent, Chris
2018-05-01
Detecting a change in our visual world requires a process that compares the external environment (test display) with the contents of memory (study display). We addressed the question of whether people strategically adapt the comparison process in response to different decision loads. Study displays of 3 colored items were presented, followed by 'whole-display' probes containing 3 colored shapes. Participants were asked to decide whether any probed items contained a new feature. In Experiments 1-4, irrelevant changes to the probed item's locations or feature bindings influenced memory performance, suggesting that participants employed a comparison process that relied on spatial locations. This finding occurred irrespective of whether participants were asked to decide about the whole display, or only a single cued item within the display. In Experiment 5, when the base-rate of changes in the nonprobed items increased (increasing the incentive to use the cue effectively), participants were not influenced by irrelevant changes in location or feature bindings. In addition, we observed individual differences in the use of spatial cues. These results suggest that participants can flexibly switch between spatial and nonspatial comparison strategies, depending on interactions between individual differences and task demand factors. These findings have implications for models of visual working memory that assume that the comparison between study and test obligatorily relies on accessing visual features via their binding to location.
Kress, Daniel; Egelhaaf, Martin
2014-01-01
During locomotion, animals rely heavily on visual cues gained from the environment to guide their behavior. Examples are basic behaviors like collision avoidance or the approach to a goal. The saccadic gaze strategy of flying flies, which separates translational from rotational phases of locomotion, has been suggested to facilitate the extraction of environmental information, because only image flow evoked by translational self-motion contains relevant distance information about the surrounding world. In contrast to the translational phases of flight during which gaze direction is kept largely constant, walking flies experience continuous rotational image flow that is coupled to their stride-cycle. The consequences of these self-produced image shifts for the extraction of environmental information are still unclear. To assess the impact of stride-coupled image shifts on visual information processing, we performed electrophysiological recordings from the HSE cell, a motion sensitive wide-field neuron in the blowfly visual system. This cell is thought to play a key role in mediating optomotor behavior, self-motion estimation and spatial information processing. We used visual stimuli that were based on the visual input experienced by walking blowflies while approaching a black vertical bar. The response of HSE to these stimuli was dominated by periodic membrane potential fluctuations evoked by stride-coupled image shifts. Nevertheless, during the approach the cell’s response contained information about the bar and its background. The response components evoked by the bar were larger than the responses to its background, especially during the last phase of the approach. However, as revealed by targeted modifications of the visual input during walking, the extraction of distance information on the basis of HSE responses is greatly impaired by stride-coupled retinal image shifts. Possible mechanisms that may cope with these stride-coupled responses are discussed. PMID:25309362
Parafoveal magnification: visual acuity does not modulate the perceptual span in reading.
Miellet, Sébastien; O'Donnell, Patrick J; Sereno, Sara C
2009-06-01
Models of eye guidance in reading rely on the concept of the perceptual span-the amount of information perceived during a single eye fixation, which is considered to be a consequence of visual and attentional constraints. To directly investigate attentional mechanisms underlying the perceptual span, we implemented a new reading paradigm-parafoveal magnification (PM)-that compensates for how visual acuity drops off as a function of retinal eccentricity. On each fixation and in real time, parafoveal text is magnified to equalize its perceptual impact with that of concurrent foveal text. Experiment 1 demonstrated that PM does not increase the amount of text that is processed, supporting an attentional-based account of eye movements in reading. Experiment 2 explored a contentious issue that differentiates competing models of eye movement control and showed that, even when parafoveal information is enlarged, visual attention in reading is allocated in a serial fashion from word to word.
Chylothorax diagnosis: can the clinical chemistry laboratory do more?
Gibbons, Stephen M; Ahmed, Farhan
2015-01-01
Chylothorax is a rare anatomical disruption of the thoracic duct associated with a significant degree of morbidity and mortality. Diagnosis usually relies upon lipid analysis and visual inspection of the pleural fluid. However, this may be subject to incorrect interpretation. The aim of this study was to compare pleural fluid lipid analysis and visual inspection against lipoprotein electrophoresis. Nine pleural effusion samples suspected of being chylothorax were analysed. A combination of fluid lipid analysis and visual inspection was compared with lipoprotein electrophoresis for the detection of chylothorax. There was 89% concordance between the two methods. Using lipoprotein electrophoresis as gold standard, calculated sensitivity, specificity, negative predictive value and positive predictive value for lipid analysis/visual inspection were 83%, 100%, 100% and 75%, respectively. Examination of pleural effusion samples by lipoprotein electrophoresis may provide important additional information in the diagnosis of chylothorax.
Taylor, Kirsten I; Devereux, Barry J; Acres, Kadia; Randall, Billi; Tyler, Lorraine K
2012-03-01
Conceptual representations are at the heart of our mental lives, involved in every aspect of cognitive functioning. Despite their centrality, a long-standing debate persists as to how the meanings of concepts are represented and processed. Many accounts agree that the meanings of concrete concepts are represented by their individual features, but disagree about the importance of different feature-based variables: some views stress the importance of the information carried by distinctive features in conceptual processing, others the features which are shared over many concepts, and still others the extent to which features co-occur. We suggest that previously disparate theoretical positions and experimental findings can be unified by an account which claims that task demands determine how concepts are processed in addition to the effects of feature distinctiveness and co-occurrence. We tested these predictions in a basic-level naming task which relies on distinctive feature information (Experiment 1) and a domain decision task which relies on shared feature information (Experiment 2). Both used large-scale regression designs with the same visual objects, and mixed-effects models incorporating participant, session, stimulus-related and feature statistic variables to model the performance. We found that concepts with relatively more distinctive and more highly correlated distinctive relative to shared features facilitated basic-level naming latencies, while concepts with relatively more shared and more highly correlated shared relative to distinctive features speeded domain decisions. These findings demonstrate that the feature statistics of distinctiveness (shared vs. distinctive) and correlational strength, as well as the task demands, determine how concept meaning is processed in the conceptual system.
Cheng, Kenneth C; McKay, Sandra M; King, Emily C; Maki, Brian E
2012-11-01
Rapid reach-to-grasp reactions are a prevalent response to sudden loss of balance and play an important role in preventing falls. A previous study indicated that young adults are able to guide functionally effective grasping reactions using visuospatial information (VSI) stored in working memory. The present study addressed whether healthy older adults are also able to use "stored" VSI in this manner or are more dependent on "online" visual control. Liquid-crystal goggles were used to force reliance on either stored or online VSI while reaching to grasp a small handhold in response to unpredictable platform perturbations. A motor-driven device varied the handhold location unpredictably for each trial. Twelve healthy older adults (65-79 years) were compared with 12 young adults (19-29 years) tested in a previous study. Reach-to-grasp reactions were slower and more variable in older adults, regardless of the nature of the available VSI. When forced to rely on stored VSI, both age groups showed a reduction in reach accuracy; however, a tendency to undershoot the handhold was exacerbated in the older adults. Forced reliance on online VSI led to similar delays in both age groups; however, the older adults were more likely to reach with the "wrong" limb (contralateral to the handhold) and/or raise both arms initially (possibly to "buy" more time for final limb selection). Situations that force the central nervous system to rely on either stored or online VSI tend to exacerbate age-related reductions in speed and accuracy of reach-to-grasp balance-recovery reactions. Further work is needed to determine if this increases risk of falling in daily life.
Palmiero, Massimiliano; Di Matteo, Rosalia; Belardinelli, Marta Olivetti
2014-05-01
Two experiments comparing imaginative processing in different modalities and semantic processing were carried out to investigate whether conceptual knowledge can be represented in different formats. Participants were asked to judge the similarity between visual images, auditory images, and olfactory images in the imaginative block, and whether two items belonged to the same category in the semantic block. Items were verbally cued in both experiments. The degree of similarity between the imaginative and semantic items was changed across experiments. Experiment 1 showed that the semantic processing was faster than the visual and the auditory imaginative processing, whereas no differentiation was possible between the semantic processing and the olfactory imaginative processing. Experiment 2 revealed that only the visual imaginative processing could be differentiated from the semantic processing in terms of accuracy. These results show that visual and auditory imaginative processing can be differentiated from semantic processing, although both visual and auditory images strongly rely on semantic representations. By contrast, no differentiation is possible within the olfactory domain. Results are discussed in the framework of the imagery debate.
Self-motivated visual scanning predicts flexible navigation in a virtual environment.
Ploran, Elisabeth J; Bevitt, Jacob; Oshiro, Jaris; Parasuraman, Raja; Thompson, James C
2014-01-01
The ability to navigate flexibly (e.g., reorienting oneself based on distal landmarks to reach a learned target from a new position) may rely on visual scanning during both initial experiences with the environment and subsequent test trials. Reliance on visual scanning during navigation harkens back to the concept of vicarious trial and error, a description of the side-to-side head movements made by rats as they explore previously traversed sections of a maze in an attempt to find a reward. In the current study, we examined if visual scanning predicted the extent to which participants would navigate to a learned location in a virtual environment defined by its position relative to distal landmarks. Our results demonstrated a significant positive relationship between the amount of visual scanning and participant accuracy in identifying the trained target location from a new starting position as long as the landmarks within the environment remain consistent with the period of original learning. Our findings indicate that active visual scanning of the environment is a deliberative attentional strategy that supports the formation of spatial representations for flexible navigation.
Virtual reality stimuli for force platform posturography.
Tossavainen, Timo; Juhola, Martti; Ilmari, Pyykö; Aalto, Heikki; Toppila, Esko
2002-01-01
People who rely heavily on vision to control posture are known to have an elevated risk of falling. Dependence on visual control is an important parameter in the diagnosis of balance disorders. We have previously shown that virtual reality methods can be used to produce visual stimuli that affect balance, but suitable stimuli need to be found. In this study, the effect of six different virtual reality stimuli on the balance of 22 healthy test subjects was evaluated using force platform posturography. Two of the stimuli had a significant effect on balance.
Manipulations of attention dissociate fragile visual short-term memory from visual working memory.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Lamme, Victor A F
2011-05-01
People often rely on information that is no longer in view, but maintained in visual short-term memory (VSTM). Traditionally, VSTM is thought to operate on either a short time-scale with high capacity - iconic memory - or a long time scale with small capacity - visual working memory. Recent research suggests that in addition, an intermediate stage of memory in between iconic memory and visual working memory exists. This intermediate stage has a large capacity and a lifetime of several seconds, but is easily overwritten by new stimulation. We therefore termed it fragile VSTM. In previous studies, fragile VSTM has been dissociated from iconic memory by the characteristics of the memory trace. In the present study, we dissociated fragile VSTM from visual working memory by showing a differentiation in their dependency on attention. A decrease in attention during presentation of the stimulus array greatly reduced the capacity of visual working memory, while this had only a small effect on the capacity of fragile VSTM. We conclude that fragile VSTM is a separate memory store from visual working memory. Thus, a tripartite division of VSTM appears to be in place, comprising iconic memory, fragile VSTM and visual working memory.
Tapia, Evelina; Beck, Diane M
2014-01-01
A number of influential theories posit that visual awareness relies not only on the initial, stimulus-driven (i.e., feedforward) sweep of activation but also on recurrent feedback activity within and between brain regions. These theories of awareness draw heavily on data from masking paradigms in which visibility of one stimulus is reduced due to the presence of another stimulus. More recently transcranial magnetic stimulation (TMS) has been used to study the temporal dynamics of visual awareness. TMS over occipital cortex affects performance on visual tasks at distinct time points and in a manner that is comparable to visual masking. We draw parallels between these two methods and examine evidence for the neural mechanisms by which visual masking and TMS suppress stimulus visibility. Specifically, both methods have been proposed to affect feedforward as well as feedback signals when applied at distinct time windows relative to stimulus onset and as a result modify visual awareness. Most recent empirical evidence, moreover, suggests that while visual masking and TMS impact stimulus visibility comparably, the processes these methods affect may not be as similar as previously thought. In addition to reviewing both masking and TMS studies that examine feedforward and feedback processes in vision, we raise questions to guide future studies and further probe the necessary conditions for visual awareness.
Visual Navigation during Colony Emigration by the Ant Temnothorax rugatulus
Bowens, Sean R.; Glatt, Daniel P.; Pratt, Stephen C.
2013-01-01
Many ants rely on both visual cues and self-generated chemical signals for navigation, but their relative importance varies across species and context. We evaluated the roles of both modalities during colony emigration by Temnothorax rugatulus. Colonies were induced to move from an old nest in the center of an arena to a new nest at the arena edge. In the midst of the emigration the arena floor was rotated 60° around the old nest entrance, thus displacing any substrate-bound odor cues while leaving visual cues unchanged. This manipulation had no effect on orientation, suggesting little influence of substrate cues on navigation. When this rotation was accompanied by the blocking of most visual cues, the ants became highly disoriented, suggesting that they did not fall back on substrate cues even when deprived of visual information. Finally, when the substrate was left in place but the visual surround was rotated, the ants' subsequent headings were strongly rotated in the same direction, showing a clear role for visual navigation. Combined with earlier studies, these results suggest that chemical signals deposited by Temnothorax ants serve more for marking of familiar territory than for orientation. The ants instead navigate visually, showing the importance of this modality even for species with small eyes and coarse visual acuity. PMID:23671713
Dyslexia and reasoning: the importance of visual processes.
Bacon, Alison M; Handley, Simon J
2010-08-01
Recent research has suggested that individuals with dyslexia rely on explicit visuospatial representations for syllogistic reasoning while most non-dyslexics opt for an abstract verbal strategy. This paper investigates the role of visual processes in relational reasoning amongst dyslexic reasoners. Expt 1 presents written and verbal protocol evidence to suggest that reasoners with dyslexia generate detailed representations of relational properties and use these to make a visual comparison of objects. Non-dyslexics use a linear array of objects to make a simple transitive inference. Expt 2 examined evidence for the visual-impedance effect which suggests that visual information detracts from reasoning leading to longer latencies and reduced accuracy. While non-dyslexics showed the impedance effects predicted, dyslexics showed only reduced accuracy on problems designed specifically to elicit imagery. Expt 3 presented problems with less semantically and visually rich content. The non-dyslexic group again showed impedance effects, but dyslexics did not. Furthermore, in both studies, visual memory predicted reasoning accuracy for dyslexic participants, but not for non-dyslexics, particularly on problems with highly visual content. The findings are discussed in terms of the importance of visual and semantic processes in reasoning for individuals with dyslexia, and we argue that these processes play a compensatory role, offsetting phonological and verbal memory deficits.
Data augmentation-assisted deep learning of hand-drawn partially colored sketches for visual search
Muhammad, Khan; Baik, Sung Wook
2017-01-01
In recent years, image databases are growing at exponential rates, making their management, indexing, and retrieval, very challenging. Typical image retrieval systems rely on sample images as queries. However, in the absence of sample query images, hand-drawn sketches are also used. The recent adoption of touch screen input devices makes it very convenient to quickly draw shaded sketches of objects to be used for querying image databases. This paper presents a mechanism to provide access to visual information based on users’ hand-drawn partially colored sketches using touch screen devices. A key challenge for sketch-based image retrieval systems is to cope with the inherent ambiguity in sketches due to the lack of colors, textures, shading, and drawing imperfections. To cope with these issues, we propose to fine-tune a deep convolutional neural network (CNN) using augmented dataset to extract features from partially colored hand-drawn sketches for query specification in a sketch-based image retrieval framework. The large augmented dataset contains natural images, edge maps, hand-drawn sketches, de-colorized, and de-texturized images which allow CNN to effectively model visual contents presented to it in a variety of forms. The deep features extracted from CNN allow retrieval of images using both sketches and full color images as queries. We also evaluated the role of partial coloring or shading in sketches to improve the retrieval performance. The proposed method is tested on two large datasets for sketch recognition and sketch-based image retrieval and achieved better classification and retrieval performance than many existing methods. PMID:28859140
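A minimal sketch of the fine-tuning step this abstract describes, written with torchvision. The backbone choice, dataset path, and class count are placeholders, and the augmented variants (edge maps, de-colorized and de-texturized images) are assumed to have been generated offline into an ImageFolder layout:

```python
# Illustrative CNN fine-tuning sketch (not the paper's exact network or data).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 250  # hypothetical number of sketch categories

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("augmented_dataset/train", transform=tfm)  # placeholder path
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.alexnet(weights="IMAGENET1K_V1")      # pretrained backbone
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)   # replace the final layer

opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:                        # one fine-tuning pass
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```

After fine-tuning, activations from a late layer would serve as the shared feature space in which both sketches and full-color query images are matched against the database.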
A Computationally Efficient Visual Saliency Algorithm Suitable for an Analog CMOS Implementation.
D'Angelo, Robert; Wood, Richard; Lowry, Nathan; Freifeld, Geremy; Huang, Haiyao; Salthouse, Christopher D; Hollosi, Brent; Muresan, Matthew; Uy, Wes; Tran, Nhut; Chery, Armand; Poppe, Dorothy C; Sonkusale, Sameer
2018-06-27
Computer vision algorithms are often limited in their application by the large amount of data that must be processed. Mammalian vision systems mitigate this high bandwidth requirement by prioritizing certain regions of the visual field with neural circuits that select the most salient regions. This work introduces a novel and computationally efficient visual saliency algorithm for performing this neuromorphic attention-based data reduction. The proposed algorithm has the added advantage that it is compatible with an analog CMOS design while still achieving comparable performance to existing state-of-the-art saliency algorithms. This compatibility allows for direct integration with the analog-to-digital conversion circuitry present in CMOS image sensors. This integration leads to power savings in the converter by quantizing only the salient pixels. Further system-level power savings are gained by reducing the amount of data that must be transmitted and processed in the digital domain. The analog CMOS compatible formulation relies on a pulse width (i.e., time mode) encoding of the pixel data that is compatible with pulse-mode imagers and slope based converters often used in imager designs. This letter begins by discussing this time-mode encoding for implementing neuromorphic architectures. Next, the proposed algorithm is derived. Hardware-oriented optimizations and modifications to this algorithm are proposed and discussed. Next, a metric for quantifying saliency accuracy is proposed, and simulation results of this metric are presented. Finally, an analog synthesis approach for a time-mode architecture is outlined, and postsynthesis transistor-level simulations that demonstrate functionality of an implementation in a modern CMOS process are discussed.
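The abstract does not spell out the algorithm itself, so as a purely digital point of reference, here is a classic lightweight saliency baseline (the spectral-residual method of Hou and Zhang, 2007) together with the salient-pixel thresholding that would gate quantization. This is explicitly not the authors' analog-compatible formulation:

```python
# Classic lightweight saliency baseline (spectral residual), shown only as a
# digital point of reference -- not the authors' analog-compatible algorithm.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray):
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)   # remove the smooth spectrum
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma=2.5)                 # smooth the map

gray = np.random.rand(128, 128)                # stand-in image
salmap = spectral_residual_saliency(gray)
top_1pct = salmap > np.quantile(salmap, 0.99)  # e.g., quantize only salient pixels
```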
Spatial awareness in immersive virtual environments revealed in open-loop walking
NASA Astrophysics Data System (ADS)
Turano, Kathleen A.; Chaudhury, Sidhartha
2005-03-01
People are able to walk without vision to previously viewed targets in the real world. This ability to update one's position in space has been attributed to a path integration system that uses internally generated self-motion signals together with the perceived object-to-self distance of the target. In a previous study using an immersive virtual environment (VE), we found that many subjects were unable to walk without vision to a previously viewed target located 4 m away. Their walking paths were influenced by the room structure that varied trial to trial. In this study we investigated whether the phenomenon is specific to a VE by testing subjects in a real world and a VE. The real world was viewed with field restricting goggles and via cameras using the same head-mounted display as in the VE. The results showed that only in the VE were walking paths influenced by the room structure. Women were more affected than men, and the effect decreased over trials and after subjects performed the task in the real world. The results also showed that a brief (<0.5 s) exposure to the visual scene during self-motion was sufficient to reduce the influence of the room structure on walking paths. The results are consistent with the idea that without visual experience within the VE, the path integration system is unable to effectively update one's spatial position. As a result, people rely on other cues to define their position in space. Women, unlike men, choose to use visual cues about environmental structure to reorient.
van den Hurk, Job; Van Baelen, Marc; Op de Beeck, Hans P.
2017-01-01
To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived from all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience. PMID:28507127
NASA Astrophysics Data System (ADS)
Pieper, Steven D.; McKenna, Michael; Chen, David; McDowall, Ian E.
1994-04-01
We are interested in the application of computer animation to surgery. Our current project, a navigation and visualization tool for knee arthroscopy, relies on real-time computer graphics and the human interface technologies associated with virtual reality. We believe that this new combination of techniques will lead to improved surgical outcomes and decreased health care costs. To meet these expectations in the medical field, the system must be safe, usable, and cost-effective. In this paper, we outline some of the most important hardware and software specifications in the areas of video input and output, spatial tracking, stereoscopic displays, computer graphics models and libraries, mass storage and network interfaces, and operating systems. Since this is a fairly new combination of technologies and a new application, the justification for our specifications is drawn from the current generation of surgical technology and by analogy to other fields where virtual reality technology has been more extensively applied and studied.
Postupalenko, Viktoriia; Desplancq, Dominique; Orlov, Igor; Arntz, Youri; Spehner, Danièle; Mely, Yves; Klaholz, Bruno P; Schultz, Patrick; Weiss, Etienne; Zuber, Guy
2015-09-01
Recombinant proteins with cytosolic or nuclear activities are emerging as tools for interfering with cellular functions. Because such tools rely on vehicles for crossing the plasma membrane, we developed a protein delivery system consisting of the assembly of pyridylthiourea-grafted polyethylenimine (πPEI) with affinity-purified His-tagged proteins pre-organized onto a nickel-immobilized polymeric guide. The guide was prepared by functionalization of an ornithine polymer with nitrilotriacetic acid groups and shown to bind several His-tagged proteins. Superstructures were visualized by electron and atomic force microscopy using 2 nm His-tagged gold nanoparticles as probes. The whole system efficiently carried the green fluorescent protein, single-chain antibodies or caspase 3, into the cytosol of living cells. Transduction of the protease caspase 3 induced apoptosis in two cancer cell lines, demonstrating that this new protein delivery method could be used to interfere with cellular functions.
Optic flow-based collision-free strategies: From insects to robots.
Serres, Julien R; Ruffier, Franck
2017-09-01
Flying insects are able to fly smartly in an unpredictable environment. It has been found that flying insects have smart neurons inside their tiny brains that are sensitive to visual motion, also called optic flow. Consequently, flying insects rely mainly on visual motion during flight maneuvers, such as takeoff or landing, terrain following, tunnel crossing, lateral and frontal obstacle avoidance, and adjusting flight speed in a cluttered environment. Optic flow can be defined as the vector field of the apparent motion of objects, surfaces, and edges in a visual scene generated by the relative motion between an observer (an eye or a camera) and the scene. Translational optic flow is particularly interesting for short-range navigation because it depends on the ratio between (i) the relative linear speed of the visual scene with respect to the observer and (ii) the distance of the observer from obstacles in the surrounding environment, without any direct measurement of either speed or distance. In flying insects, the roll stabilization reflex and yaw saccades attenuate any rotation at the eye level in roll and yaw respectively (i.e. to cancel any rotational optic flow) in order to ensure pure translational optic flow between two successive saccades. Our survey focuses on feedback loops which use the translational optic flow that insects employ for collision-free navigation. Optic flow is likely, over the next decade, to be one of the most important visual cues that can explain flying insects' behaviors for short-range navigation maneuvers in complex tunnels. Conversely, the biorobotic approach can help to develop innovative flight control systems for flying robots with the aim of mimicking flying insects' abilities and better understanding their flight.
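The ratio property described in this survey can be written down directly: under pure translation at speed v, a contrast element at distance D and azimuth theta from the direction of travel generates optic flow of magnitude omega = (v / D) * sin(theta). A one-function sketch:

```python
# Translational optic flow magnitude: only the speed-to-distance ratio matters.
import math

def translational_optic_flow(v, D, theta_deg):
    """Optic flow magnitude (rad/s) for speed v (m/s), distance D (m), bearing theta (deg)."""
    return (v / D) * math.sin(math.radians(theta_deg))

# Doubling speed while doubling distance leaves the flow unchanged.
print(translational_optic_flow(v=2.0, D=1.0, theta_deg=90))  # 2.0 rad/s
print(translational_optic_flow(v=4.0, D=2.0, theta_deg=90))  # also 2.0 rad/s
```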
The Dorsal Visual System Predicts Future and Remembers Past Eye Position
Morris, Adam P.; Bremmer, Frank; Krekelberg, Bart
2016-01-01
Eye movements are essential to primate vision but introduce potentially disruptive displacements of the retinal image. To maintain stable vision, the brain is thought to rely on neurons that carry both visual signals and information about the current direction of gaze in their firing rates. We have shown previously that these neurons provide an accurate representation of eye position during fixation, but whether they are updated fast enough during saccadic eye movements to support real-time vision remains controversial. Here we show that not only do these neurons carry a fast and accurate eye-position signal, but also that they support in parallel a range of time-lagged variants, including predictive and postdictive signals. We recorded extracellular activity in four areas of the macaque dorsal visual cortex during a saccade task, including the lateral and ventral intraparietal areas (LIP, VIP), and the middle temporal (MT) and medial superior temporal (MST) areas. As reported previously, neurons showed tonic eye-position-related activity during fixation. In addition, they showed a variety of transient changes in activity around the time of saccades, including relative suppression, enhancement, and pre-saccadic bursts for one saccade direction over another. We show that a hypothetical neuron that pools this rich population activity through a weighted sum can produce an output that mimics the true spatiotemporal dynamics of the eye. Further, with different pooling weights, this downstream eye position signal (EPS) could be updated long before (<100 ms) or after (<200 ms) an eye movement. The results suggest a flexible coding scheme in which downstream computations have access to past, current, and future eye positions simultaneously, providing a basis for visual stability and delay-free visually-guided behavior. PMID:26941617
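The weighted-sum readout proposed here is easy to sketch with simulated data: fit pooling weights by least squares so that population firing rates reproduce the eye trace. This is illustrative only, not the authors' analysis code; fitting against a time-shifted copy of the trace would give the predictive and postdictive variants.

```python
# Sketch of the "weighted sum" readout idea on simulated data (illustrative
# of the pooling scheme, not the authors' analysis code).
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 40                                    # time points, neurons
eye_pos = np.cumsum(rng.normal(0, 0.1, T))        # simulated eye-position trace
gains = rng.normal(0, 1, N)                       # each neuron's eye-position gain
rates = np.outer(eye_pos, gains) + rng.normal(0, 0.5, (T, N))  # tonic rates + noise

w, *_ = np.linalg.lstsq(rates, eye_pos, rcond=None)   # pooling weights
eps_readout = rates @ w                               # downstream eye-position signal
print(np.corrcoef(eps_readout, eye_pos)[0, 1])        # close to 1 with these settings
```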
DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool
Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary
2008-01-01
Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OSX, RedHat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444
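The pattern the abstract describes, Python driving VTK's rendering pipeline directly, reduces to a few lines. A generic minimal example (not DV3D's own code; the cone source stands in for imaging data):

```python
# Generic minimal VTK pipeline in Python (the pattern DV3D builds on,
# not DV3D's own code).
import vtk

cone = vtk.vtkConeSource()                       # stand-in geometry source
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(cone.GetOutputPort())  # connect source to mapper
actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()                               # hand control to the event loop
```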
A space systems perspective of graphics simulation integration
NASA Technical Reports Server (NTRS)
Brown, R.; Gott, C.; Sabionski, G.; Bochsler, D.
1987-01-01
Creation of an interactive display environment can expose issues in system design and operation not apparent from nongraphics development approaches. Large amounts of information can be presented in a short period of time. Processes can be simulated and observed before committing resources. In addition, changes in the economics of computing have enabled broader graphics usage beyond traditional engineering and design into integrated telerobotics and Artificial Intelligence (AI) applications. The highly integrated nature of space operations tends to rely upon visually intensive man-machine communication to ensure success. Graphics simulation activities at the Mission Planning and Analysis Division (MPAD) of NASA's Johnson Space Center are focusing on the evaluation of a wide variety of graphical analyses within the context of present and future space operations. Several telerobotics and AI applications studies utilizing graphical simulation are described. The presentation includes portions of videotape illustrating technology developments involving: (1) coordinated manned maneuvering unit and remote manipulator system operations, (2) a helmet mounted display system, and (3) an automated rendezvous application utilizing expert system and voice input/output technology.
LOD map--A visual interface for navigating multiresolution volume visualization.
Wang, Chaoli; Shen, Han-Wei
2006-01-01
In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for us to examine, compare, and validate different LOD selection algorithms. While traditional methods rely on the final rendered images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure for LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. A LOD map is generated through the mapping of key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually-striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make the LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
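A hedged sketch of an entropy-based quality score of this kind: normalize each block's contribution-weighted distortion into a distribution and take its entropy. The paper's exact weighting is not given in the abstract, so this combination is an assumption:

```python
# Hedged sketch of an entropy-style LOD quality score (the paper's exact
# formulation is not reproduced from the abstract; the weighting is assumed).
import numpy as np

def lod_entropy(contribution, distortion):
    """Both arrays hold one value per multiresolution block in the current LOD."""
    weights = contribution * distortion
    p = weights / weights.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

contribution = np.array([0.5, 0.3, 0.15, 0.05])  # e.g., screen coverage per block
distortion = np.array([0.1, 0.4, 0.7, 0.9])      # e.g., error vs. full resolution
print(lod_entropy(contribution, distortion))
```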
Roberts, Daniel J; Woollams, Anna M; Kim, Esther; Beeson, Pelagie M; Rapcsak, Steven Z; Lambon Ralph, Matthew A
2013-11-01
Recent visual neuroscience investigations suggest that ventral occipito-temporal cortex is retinotopically organized, with high acuity foveal input projecting primarily to the posterior fusiform gyrus (pFG), making this region crucial for coding high spatial frequency information. Because high spatial frequencies are critical for fine-grained visual discrimination, we hypothesized that damage to the left pFG should have an adverse effect not only on efficient reading, as observed in pure alexia, but also on the processing of complex non-orthographic visual stimuli. Consistent with this hypothesis, we obtained evidence that a large case series (n = 20) of patients with lesions centered on left pFG: 1) Exhibited reduced sensitivity to high spatial frequencies; 2) demonstrated prolonged response latencies both in reading (pure alexia) and object naming; and 3) were especially sensitive to visual complexity and similarity when discriminating between novel visual patterns. These results suggest that the patients' dual reading and non-orthographic recognition impairments have a common underlying mechanism and reflect the loss of high spatial frequency visual information normally coded in the left pFG.
How does visual manipulation affect obstacle avoidance strategies used by athletes?
Bijman, M P; Fisher, J J; Vallis, L A
2016-01-01
Research examining our ability to avoid obstacles in our path has stressed the importance of visual input. The aim of this study was to determine if athletes playing varsity-level field sports, who rely on visual input to guide motor behaviour, are more able to guide their foot over obstacles compared to recreational individuals. While wearing kinematic markers, eight varsity athletes and eight age-matched controls (aged 18-25) walked along a walkway and stepped over stationary obstacles (180° motion arc). Visual input was manipulated using PLATO visual goggles three or two steps pre-obstacle crossing and compared to trials where vision was given throughout. A main effect between groups for peak trail toe elevation was shown with greater values generated by the controls for all crossing conditions during full vision trials only. This may be interpreted as athletes not perceiving this obstacle as an increased threat to their postural stability. Collectively, findings suggest the athletic group is able to transfer their abilities to non-specific conditions during full vision trials; however, varsity-level athletes were equally reliant on visual cues for these visually guided stepping tasks as their performance was similar to the controls when vision is removed.
Barone, Pascal; Chambaudie, Laure; Strelnikov, Kuzma; Fraysse, Bernard; Marx, Mathieu; Belin, Pascal; Deguine, Olivier
2016-10-01
Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analyse and compare visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality from a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, CI deaf patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximum recovery of auditory speech. No such effect is observed in NHS, even in situations of CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction; this is important for patient internal supramodal representation.
Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution
Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin
2016-01-01
The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied for numerous complex visual analyses in wild environments, such as visual surveillance, object recognition, etc. However, the captured images/videos are often low resolution with noise. Such visual data cannot be directly delivered to the advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using an expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit prior and implicit prior by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM, and visual perception. PMID:26927114
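A skeleton of the alternating scheme described above, with generic stand-ins (spline upscaling for the E-step, Gaussian smoothing plus a back-projection correction in place of JPISR's learned explicit and implicit priors):

```python
# EM-style super-resolution skeleton (generic stand-ins, not JPISR's priors).
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def em_super_resolve(lr_image, scale=2, iters=5):
    hr = zoom(lr_image, scale, order=3)                    # E-step seed: spline upscale
    for _ in range(iters):
        denoised = gaussian_filter(hr, sigma=0.7)          # M-step stand-in: denoise
        # E-step stand-in: keep the estimate consistent with the observed
        # low-resolution data via a back-projected residual.
        residual = lr_image - zoom(denoised, 1.0 / scale, order=3)
        hr = denoised + zoom(residual, scale, order=3)
    return hr

lr = np.random.rand(32, 32)        # noisy low-resolution stand-in
print(em_super_resolve(lr).shape)  # (64, 64)
```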
Stoecklein, Veit M; Faber, Florian; Koch, Mandy; Mattmüller, Rudi; Schaper, Anika; Rudolph, Frank; Tonn, Joerg C; Schichor, Christian
2015-11-01
The use of intraoperative neurophysiological monitoring (IONM) in neurosurgery has improved patient safety and outcomes. However, a pitfall in the use of IONM remains unsolved. Currently, there is no feasible way for surgeons to interpret IONM waves themselves during operations. Instead, they have to rely on verbal feedback from a neurophysiologist. This method is prone to communication failures, which can lead to delayed or false interpretation of the data. Direct visualization of IONM waves is a way to alleviate this problem and make IONM more effective. Microscope-integrated IONM (MI-IONM) was used in 163 cranial and spinal cases. We evaluated the feasibility, system stability and how well the system integrated into the surgical workflow. We used an IONM system that was connected to a surgical microscope. All IONM modalities used at our institution could be visualized as required, superimposed on the surgical field in the eyepiece of the microscope without obstructing the surgeon's field of vision. Use of MI-IONM was safe and reliable. It furthermore provided valuable intraoperative information. The system merely required a short learning curve. Only minor system problems without impact on surgical workflow occurred. MI-IONM proved to be especially useful in surgical cases where careful monitoring of nerve function is required, e.g., cerebellopontine angle surgery. Here, direct assessment of surgical action and IONM wave change was provided to the surgeon, if necessary (on-off control). MI-IONM is a useful extension of conventional IONM that provides optional real-time functional information to the surgeon on demand.
Visual and vestibular induced eye movements in verbal children and adults with autism
Furman, Joseph M.; Osorio, Maria Joana; Minshew, Nancy J.
2016-01-01
This study investigated several types of eye movements that rely on the function of brainstem-cerebellar pathways specifically (vestibulo-ocular reflexes) or on widely distributed pathways of the brain (horizontal pursuit and saccade eye movements). Although eye movements that rely on higher brain regions have been studied fairly extensively in autism, eye movements dependent on the brainstem and cerebellum have not. Objective: This study assessed the functionality of vestibular, pursuit and saccade circuitry in autism across a wide age range. Methods: Subjects were 79 individuals with autism (AUT) and 62 controls (CON) aged 5 to 52 years with IQ scores > 70. For vestibular testing, earth-vertical axis rotation was performed in darkness and in a lighted visual surround with a fixation target. Ocular motor testing included assessment of horizontal saccades and horizontal smooth pursuit. Results: No between-group differences were found in vestibular reflexes or in mean saccade velocity or accuracy. Saccade latency was increased in the AUT group, with significant age-related effects in the 8-18 year old subgroups. There was a trend toward decreased pursuit gain without age effects. Conclusions: Normal vestibular-induced eye movements and normal saccade accuracy and velocity provide the most substantial evidence to date of the functional integrity of brainstem and cerebellar pathways in autism, suggesting that the histopathological abnormalities described in these structures may not be associated with intrinsic dysfunction but rather reflect developmental alterations related to forebrain cortical systems formation. Increased saccade latency, with age effects, adds to the extensive existing evidence of altered function and maturation of cortical systems in autism. PMID:25846907