Sample records for the query "integrate visual information"

  1. Thinking graphically: Connecting vision and cognition during graph comprehension.

    PubMed

    Ratwani, Raj M; Trafton, J Gregory; Boehm-Davis, Deborah A

    2008-03-01

    Task analytic theories of graph comprehension account for the perceptual and conceptual processes required to extract specific information from graphs. Comparatively, the processes underlying information integration have received less attention. We propose a new framework for information integration that highlights visual integration and cognitive integration. During visual integration, pattern recognition processes are used to form visual clusters of information; these visual clusters are then used to reason about the graph during cognitive integration. In 3 experiments, the processes required to extract specific information and to integrate information were examined by collecting verbal protocol and eye movement data. Results supported the task analytic theories for specific information extraction and the processes of visual and cognitive integration for integrative questions. Further, the integrative processes scaled up as graph complexity increased, highlighting the importance of these processes for integration in more complex graphs. Finally, based on this framework, design principles to improve both visual and cognitive integration are described.

  2. Neural correlates of audiovisual integration in music reading.

    PubMed

    Nichols, Emily S; Grahn, Jessica A

    2016-10-01

    Integration of auditory and visual information is important to both language and music. In the linguistic domain, audiovisual integration alters event-related potentials (ERPs) at early stages of processing (the mismatch negativity (MMN)) as well as later stages (P300; Andres et al., 2011). However, the role of experience in audiovisual integration is unclear, as reading experience is generally confounded with developmental stage. Here we tested whether audiovisual integration of music appears similar to reading, and how musical experience altered integration. We compared brain responses in musicians and non-musicians on an auditory pitch-interval oddball task that evoked the MMN and P300, while manipulating whether visual pitch-interval information was congruent or incongruent with the auditory information. We predicted that the MMN and P300 would be largest when both auditory and visual stimuli deviated, because audiovisual integration would increase the neural response when the deviants were congruent. The results indicated that scalp topography differed between musicians and non-musicians for both the MMN and P300 response to deviants. Interestingly, musicians' musical training modulated integration of congruent deviants at both early and late stages of processing. We propose that early in the processing stream, visual information may guide interpretation of auditory information, leading to a larger MMN when auditory and visual information mismatch. At later attentional stages, integration of the auditory and visual stimuli leads to a larger P300 amplitude. Thus, experience with musical visual notation shapes the way the brain integrates abstract sound-symbol pairings, suggesting that musicians can indeed inform us about the role of experience in audiovisual integration.

  3. Visual and haptic integration in the estimation of softness of deformable objects

    PubMed Central

    Cellini, Cristiano; Kaim, Lukas; Drewing, Knut

    2013-01-01

    Softness perception intrinsically relies on haptic information. However, through everyday experiences we learn correspondences between felt softness and the visual effects of exploratory movements that are executed to feel softness. Here, we studied how visual and haptic information is integrated to assess the softness of deformable objects. Participants discriminated between the softness of two softer or two harder objects using only-visual, only-haptic or both visual and haptic information. We assessed the reliabilities of the softness judgments using the method of constant stimuli. In visuo-haptic trials, discrepancies between the two senses' information allowed us to measure the contribution of the individual senses to the judgments. Visual information (finger movement and object deformation) was simulated using computer graphics; input in visual trials was taken from previous visuo-haptic trials. Participants were able to infer softness from vision alone, and vision considerably contributed to bisensory judgments (∼35%). The visual contribution was higher than predicted from models of optimal integration (senses are weighted according to their reliabilities). Bisensory judgments were less reliable than predicted from optimal integration. We conclude that the visuo-haptic integration of softness information is biased toward vision, rather than being optimal, and might even be guided by a fixed weighting scheme. PMID:25165510
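
    A note on the benchmark invoked above: the "optimal integration" prediction is the standard maximum-likelihood cue-combination rule, in which each cue is weighted by its reliability (the inverse of its variance). The Python sketch below uses invented noise values, not the study's data, to show how the predicted visual weight and the predicted bisensory reliability are computed.

        import math

        # Maximum-likelihood (reliability-weighted) cue combination.
        sigma_v, sigma_h = 1.2, 0.8                  # hypothetical visual / haptic noise
        r_v, r_h = 1 / sigma_v**2, 1 / sigma_h**2    # reliabilities (inverse variances)

        w_v = r_v / (r_v + r_h)                      # predicted visual weight
        w_h = 1.0 - w_v                              # predicted haptic weight

        est_v, est_h = 5.4, 5.0                      # hypothetical unimodal softness estimates
        est_vh = w_v * est_v + w_h * est_h           # predicted bisensory estimate
        sigma_vh = math.sqrt(1 / (r_v + r_h))        # predicted bisensory noise
        print(f"visual weight {w_v:.2f}, bisensory sigma {sigma_vh:.2f}")

    Under this rule the bisensory variance can never exceed that of the more reliable cue, so the study's two observations, a visual weight above the predicted w_v and bisensory judgments less reliable than the prediction, are what mark the reported integration as vision-biased and suboptimal.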

  4. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 4 : web-based bridge information database--visualization analytics and distributed sensing.

    DOT National Transportation Integrated Search

    2012-03-01

    This report introduces the design and implementation of a Web-based bridge information visual analytics system. This project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result combines a GIS ...

  5. Behavior Selection of Mobile Robot Based on Integration of Multimodal Information

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kaneko, Masahide

    Recently, biologically inspired robots have been developed to acquire the capacity for directing visual attention to salient stimuli generated from the audiovisual environment. To realize this behavior, a general method is to calculate saliency maps representing how much the external information attracts the robot's visual attention, where the audiovisual information and the robot's motion status should be involved. In this paper, we present a visual attention model in which three modalities, that is, audio information, visual information, and the robot's motor status, are considered, whereas previous research has not considered all of them. Firstly, we introduce a 2-D density map, on which the value denotes how much the robot pays attention to each spatial location. We then model the attention density using a Bayesian network in which the robot's motion statuses are involved. Secondly, the information from both the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons. The robot can direct its attention to the locations where the integrate-and-fire neurons are fired. Finally, the visual attention model is applied to make the robot select visual information from the environment and react to the selected content. Experimental results show that it is possible for robots to acquire the visual information related to their behaviors by using the attention model considering motion statuses. The robot can select its behaviors to adapt to the dynamic environment as well as switch to another task according to the recognition results of visual attention.
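
    The selection mechanism this abstract describes lends itself to a compact illustration. The Python sketch below is a hypothetical rendering of the pipeline, with random maps standing in for the Bayesian-network attention density and the audio/visual saliency maps, and with an invented threshold and leak factor; it is not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        H, W = 8, 8                              # coarse spatial grid over the scene
        attention_density = rng.random((H, W))   # stand-in for the Bayesian-network output
        membrane = np.zeros((H, W))              # integrate-and-fire membrane potentials
        THRESHOLD, LEAK = 5.0, 0.9               # invented firing threshold and leak

        for t in range(100):
            audio_sal = rng.random((H, W))       # stand-ins for real saliency maps
            visual_sal = rng.random((H, W))
            drive = attention_density * (audio_sal + visual_sal)
            membrane = LEAK * membrane + drive   # leaky accumulation of weighted input
            fired = np.argwhere(membrane >= THRESHOLD)
            if fired.size:                       # attend to the first location that fires
                y, x = fired[0]
                print(f"t={t}: direct attention to location ({y}, {x})")
                break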

  6. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390

  7. An exploratory study of temporal integration in the peripheral retina of myopes

    NASA Astrophysics Data System (ADS)

    Macedo, Antonio F.; Encarnação, Tito J.; Vilarinho, Daniel; Baptista, António M. G.

    2017-08-01

    The visual system takes time to respond to visual stimuli; neurons need to accumulate information over a time span in order to fire. Visual information perceived by the peripheral retina might be impaired by imperfect peripheral optics, leading to myopia development. This study explored the effect of eccentricity, moderate myopia, and peripheral refraction on temporal visual integration. Myopes and emmetropes showed similar performance at detecting briefly flashed stimuli in different retinal locations. Our results show evidence that moderate myopes have normal visual integration when refractive errors are corrected with contact lenses; however, the tendency toward increased temporal integration thresholds observed in myopes deserves further investigation.

  8. Integration and binding in rehabilitative sensory substitution: Increasing resolution using a new Zooming-in approach

    PubMed Central

    Buchs, Galit; Maidenbaum, Shachar; Levy-Tzedek, Shelly; Amedi, Amir

    2015-01-01

    Purpose: To visually perceive our surroundings we constantly move our eyes and focus on particular details, and then integrate them into a combined whole. Current visual rehabilitation methods, both invasive, like bionic eyes, and non-invasive, like Sensory Substitution Devices (SSDs), down-sample visual stimuli into low-resolution images. Zooming in to sub-parts of the scene could potentially improve detail perception. Can congenitally blind individuals integrate a 'visual' scene when offered this information via different sensory modalities, such as audition? Can they integrate visual information, perceived in parts, into larger percepts despite never having had any visual experience? Methods: We explored these questions using a zooming-in functionality embedded in the EyeMusic visual-to-auditory SSD. Eight blind participants were tasked with identifying cartoon faces by integrating their individual components recognized via the EyeMusic's zooming mechanism. Results: After specialized training of just 6-10 hours, blind participants successfully and actively integrated facial features into cartooned identities in 79±18% of the trials, a highly significant result (chance level 10%; rank-sum P < 1.55E-04). Conclusions: These findings show that even users who lacked any previous visual experience whatsoever can indeed integrate this visual information with increased resolution. This potentially has important practical visual rehabilitation implications for both invasive and non-invasive methods. PMID:26518671
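
    For a rough sense of the reported effect size: the authors used a rank-sum test across participants, but a simple binomial check against the 10% chance level makes the point; the trial count below is invented for illustration.

        from scipy.stats import binomtest

        n_trials = 50                       # hypothetical number of trials per participant
        k_correct = round(0.79 * n_trials)  # the reported ~79% success rate
        result = binomtest(k_correct, n_trials, p=0.10, alternative="greater")
        print(f"{k_correct}/{n_trials} correct vs. 10% chance: p = {result.pvalue:.2e}")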

  9. Influences of selective adaptation on perception of audiovisual speech

    PubMed Central

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  10. Temporal windows in visual processing: "prestimulus brain state" and "poststimulus phase reset" segregate visual transients on different temporal scales.

    PubMed

    Wutz, Andreas; Weisz, Nathan; Braun, Christoph; Melcher, David

    2014-01-22

    Dynamic vision requires both stability of the current perceptual representation and sensitivity to the accumulation of sensory evidence over time. Here we study the electrophysiological signatures of this intricate balance between temporal segregation and integration in vision. Within a forward masking paradigm with short and long stimulus onset asynchronies (SOA), we manipulated the temporal overlap of the visual persistence of two successive transients. Human observers enumerated the items presented in the second target display as a measure of the informational capacity read-out from this partly temporally integrated visual percept. We observed higher β-power immediately before mask display onset in incorrect trials, in which enumeration failed due to stronger integration of mask and target visual information. This effect was timescale specific, distinguishing between segregation and integration of visual transients that were distant in time (long SOA). Conversely, for short SOA trials, mask onset evoked a stronger visual response when mask and targets were correctly segregated in time. Examination of the target-related response profile revealed the importance of an evoked α-phase reset for the segregation of those rapid visual transients. Investigating this precise mapping of the temporal relationships of visual signals onto electrophysiological responses highlights how the stream of visual information is carved up into discrete temporal windows that mediate between segregated and integrated percepts. Fragmenting the stream of visual information provides a means to stabilize perceptual events within one instant in time.

  11. Immediate integration of prosodic information from speech and visual information from pictures in the absence of focused attention: a mismatch negativity study.

    PubMed

    Li, X; Yang, Y; Ren, G

    2009-06-16

    Language is often perceived together with visual information. Recent experimental evidence indicates that, during spoken language comprehension, the brain can immediately integrate visual information with semantic or syntactic information from speech. Here we used the mismatch negativity to further investigate whether prosodic information from speech can be immediately integrated into a visual scene context, and in particular the time course and automaticity of this integration process. Sixteen Chinese native speakers participated in the study. The materials included pairs of Chinese spoken sentences and pictures. In the audiovisual situation, relative to the concomitant pictures, the spoken sentence was appropriately accented in the standard stimuli but inappropriately accented in the two kinds of deviant stimuli. In the purely auditory situation, the spoken sentences were presented without pictures. It was found that the deviants evoked mismatch responses in both the audiovisual and purely auditory situations; the mismatch negativity in the purely auditory situation peaked at the same time as, but was weaker than, that evoked by the same deviant speech sounds in the audiovisual situation. This pattern of results suggests immediate integration of prosodic information from speech and visual information from pictures in the absence of focused attention.

  12. Semantic integration of differently asynchronous audio-visual information in videos of real-world events in cognitive processing: an ERP study.

    PubMed

    Liu, Baolin; Wu, Guangning; Wang, Zhongning; Ji, Xiang

    2011-07-01

    In the real world, some of the auditory and visual information received by the human brain is temporally asynchronous. How is such information integrated in cognitive processing in the brain? In this paper, we aimed to study the semantic integration of differently asynchronous audio-visual information in cognitive processing using the event-related potential (ERP) method. Subjects were presented with videos of real-world events in which the auditory and visual information were temporally asynchronous. When the critical action preceded the sound, sounds incongruous with the preceding critical actions elicited an N400 effect compared to the congruous condition. This result demonstrates that the semantic contextual integration indexed by the N400 also applies to the cognitive processing of multisensory information. In addition, the N400 effect was early in latency when contrasted with other visually induced N400 studies, showing that cross-modal information is facilitated in time relative to visual information in isolation. When the sound preceded the critical action, a larger late positive wave was observed under the incongruous condition compared to the congruous condition. The P600 might represent a reanalysis process in which the mismatch between the critical action and the preceding sound was evaluated, showing that environmental sound may affect the cognitive processing of a visual event.

  13. The integration processing of the visual and auditory information in videos of real-world events: an ERP study.

    PubMed

    Liu, Baolin; Wang, Zhongning; Jin, Zhixing

    2009-09-11

    In real life, the human brain usually receives information through visual and auditory channels and processes this multisensory information, but studies on the integration processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which common-scenario, real-world videos with matched and mismatched actions (images) and sounds were presented as stimuli, aiming to study the integration processing of synchronized visual and auditory information in videos of real-world events in the human brain, using event-related potential (ERP) methods. Experimental results showed that videos of mismatched actions (images) and sounds elicited a larger P400 compared to videos of matched actions (images) and sounds. We believe that the P400 waveform might be related to the cognitive integration processing of mismatched multisensory information in the human brain. The results also indicated that synchronized multisensory inputs can interfere with one another, which would influence the results of the cognitive integration processing.

  14. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms.

  15. The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.

    PubMed

    Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard

    2016-10-01

    The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and this integration improves facial emotion recognition.

  16. The influence of selective attention to auditory and visual speech on the integration of audiovisual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2011-01-01

    Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

  17. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex.

    PubMed

    Sugihara, Tadashi; Diltz, Mark D; Averbeck, Bruno B; Romanski, Lizabeth M

    2006-10-25

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O'Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication.

  18. Integration of Auditory and Visual Communication Information in the Primate Ventrolateral Prefrontal Cortex

    PubMed Central

    Sugihara, Tadashi; Diltz, Mark D.; Averbeck, Bruno B.; Romanski, Lizabeth M.

    2009-01-01

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The multisensory interactions included both enhancement and suppression of a predominantly auditory or a predominantly visual response, although multisensory suppression was the more common mode of response. The multisensory neurons were distributed across the VLPFC and within previously identified unimodal auditory and visual regions (O’Scalaidhe et al., 1997; Romanski and Goldman-Rakic, 2002). Thus, our study demonstrates, for the first time, that single prefrontal neurons integrate communication information from the auditory and visual domains, suggesting that these neurons are an important node in the cortical network responsible for communication. PMID:17065454

  19. Numerosity estimation benefits from transsaccadic information integration

    PubMed Central

    Hübner, Carolin; Schütz, Alexander C.

    2017-01-01

    Humans achieve a stable and homogeneous representation of their visual environment, although visual processing varies across the visual field. Here we investigated the circumstances under which peripheral and foveal information is integrated for numerosity estimation across saccades. We asked our participants to judge the number of black and white dots on a screen. Information was presented either in the periphery before a saccade, in the fovea after a saccade, or in both areas consecutively to measure transsaccadic integration. In contrast to previous findings, we found an underestimation of numerosity for foveal presentation and an overestimation for peripheral presentation. We used a maximum-likelihood model to predict accuracy and reliability in the transsaccadic condition based on peripheral and foveal values. We found near-optimal integration of peripheral and foveal information, consistent with previous findings about orientation integration. In three consecutive experiments, we disrupted object continuity between the peripheral and foveal presentations to probe the limits of transsaccadic integration. Even for global changes on our numerosity stimuli, no influence of object discontinuity was observed. Overall, our results suggest that transsaccadic integration is a robust mechanism that also works for complex visual features such as numerosity and is operative despite internal or external mismatches between foveal and peripheral information. Transsaccadic integration facilitates an accurate and reliable perception of our environment. PMID:29149766

  20. A Notation for Rapid Specification of Information Visualization

    ERIC Educational Resources Information Center

    Lee, Sang Yun

    2013-01-01

    This thesis describes a notation for rapid specification of information visualization, which can be used as a theoretical framework of integrating various types of information visualization, and its applications at a conceptual level. The notation is devised to codify the major characteristics of data/visual structures in conventionally-used…

  21. Audio-visual affective expression recognition

    NASA Astrophysics Data System (ADS)

    Huang, Thomas S.; Zeng, Zhihong

    2007-11-01

    Automatic affective expression recognition has attracted more and more attention from researchers in different disciplines; it will contribute significantly to a new paradigm for human-computer interaction (affect-sensitive interfaces, socially intelligent environments) and advance research in affect-related fields including psychology, psychiatry, and education. Multimodal information integration is a process that enables humans to assess affective states robustly and flexibly. In order to understand the richness and subtleness of human emotion behavior, the computer should be able to integrate information from multiple sensors. We introduce in this paper our efforts toward machine understanding of audio-visual affective behavior, based on both deliberate and spontaneous displays. Some promising methods are presented to integrate information from both audio and visual modalities. Our experiments show the advantage of audio-visual fusion in affective expression recognition over audio-only or visual-only approaches.
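
    The audio-visual fusion the paper advocates is often realized at the decision level. The Python sketch below shows one common scheme, a weighted log-linear combination of per-modality classifier posteriors; the emotion labels, posteriors, and weights are all made up for illustration and are not necessarily the authors' method.

        import numpy as np

        EMOTIONS = ["neutral", "happy", "angry", "sad"]

        def late_fusion(p_audio, p_visual, w_audio=0.4, w_visual=0.6):
            """Weighted log-linear combination of per-modality posteriors."""
            fused = np.asarray(p_audio) ** w_audio * np.asarray(p_visual) ** w_visual
            return fused / fused.sum()              # renormalize to a distribution

        # Hypothetical classifier outputs for one audio segment and video frame.
        p_audio = [0.10, 0.20, 0.60, 0.10]          # audio model favors "angry"
        p_visual = [0.05, 0.70, 0.20, 0.05]         # visual model favors "happy"
        fused = late_fusion(p_audio, p_visual)
        print(dict(zip(EMOTIONS, fused.round(3))), "->", EMOTIONS[int(fused.argmax())])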

  22. When vision is not an option: children's integration of auditory and haptic information is suboptimal.

    PubMed

    Petrini, Karin; Remark, Alicia; Smith, Louise; Nardini, Marko

    2014-05-01

    When visual information is available, human adults, but not children, have been shown to reduce sensory uncertainty by taking a weighted average of sensory cues. In the absence of reliable visual information (e.g. an extremely dark environment, visual disorders), the use of other information is vital. Here we ask how humans combine haptic and auditory information from childhood. In the first experiment, adults and children aged 5 to 11 years judged the relative sizes of two objects in auditory, haptic, and non-conflicting bimodal conditions. In the second experiment, different groups of adults and children were tested in non-conflicting and conflicting bimodal conditions. In the first experiment, adults reduced sensory uncertainty by integrating the cues optimally, while children did not. In the second experiment, adults and children used similar weighting strategies to solve audio-haptic conflict. These results suggest that, in the absence of visual information, optimal integration of cues for discrimination of object size develops late in childhood.

  23. Integration of Visual and Proprioceptive Limb Position Information in Human Posterior Parietal, Premotor, and Extrastriate Cortex.

    PubMed

    Limanowski, Jakub; Blankenburg, Felix

    2016-03-02

    The brain constructs a flexible representation of the body from multisensory information. Previous work on monkeys suggests that the posterior parietal cortex (PPC) and ventral premotor cortex (PMv) represent the position of the upper limbs based on visual and proprioceptive information. Human experiments on the rubber hand illusion implicate similar regions, but since such experiments rely on additional visuo-tactile interactions, they cannot isolate visuo-proprioceptive integration. Here, we independently manipulated the position (palm or back facing) of passive human participants' unseen arm and of a photorealistic virtual 3D arm. Functional magnetic resonance imaging (fMRI) revealed that matching visual and proprioceptive information about arm position engaged the PPC, PMv, and the body-selective extrastriate body area (EBA); activity in the PMv moreover reflected interindividual differences in congruent arm ownership. Further, the PPC, PMv, and EBA increased their coupling with the primary visual cortex during congruent visuo-proprioceptive position information. These results suggest that human PPC, PMv, and EBA evaluate visual and proprioceptive position information and, under sufficient cross-modal congruence, integrate it into a multisensory representation of the upper limb in space. The position of our limbs in space constantly changes, yet the brain manages to represent limb position accurately by combining information from vision and proprioception. Electrophysiological recordings in monkeys have revealed neurons in the posterior parietal and premotor cortices that seem to implement and update such a multisensory limb representation, but this has been difficult to demonstrate in humans. Our fMRI experiment shows that human posterior parietal, premotor, and body-selective visual brain areas respond preferentially to a virtual arm seen in a position corresponding to one's unseen hidden arm, while increasing their communication with regions conveying visual information. These brain areas thus likely integrate visual and proprioceptive information into a flexible multisensory body representation.

  24. On the effects of multimodal information integration in multitasking.

    PubMed

    Stock, Ann-Kathrin; Gohil, Krutika; Huster, René J; Beste, Christian

    2017-07-07

    There have recently been considerable advances in our understanding of the neuronal mechanisms underlying multitasking, but the role of multimodal integration for this faculty has remained rather unclear. We examined this issue by comparing different modality combinations in a multitasking (stop-change) paradigm. In-depth neurophysiological analyses of event-related potentials (ERPs) were conducted to complement the obtained behavioral data. Specifically, we applied signal decomposition using second order blind identification (SOBI) to the multi-subject ERP data and source localization. We found that both general multimodal information integration and modality-specific aspects (potentially related to task difficulty) modulate behavioral performance and associated neurophysiological correlates. Simultaneous multimodal input generally increased early attentional processing of visual stimuli (i.e. P1 and N1 amplitudes) as well as measures of cognitive effort and conflict (i.e. central P3 amplitudes). Yet, tactile-visual input caused larger impairments in multitasking than audio-visual input. General aspects of multimodal information integration modulated the activity in the premotor cortex (BA 6) as well as different visual association areas concerned with the integration of visual information with input from other modalities (BA 19, BA 21, BA 37). On top of this, differences in the specific combination of modalities also affected performance and measures of conflict/effort originating in prefrontal regions (BA 6).

  25. The Perceptual Root of Object-Based Storage: An Interactive Model of Perception and Visual Working Memory

    ERIC Educational Resources Information Center

    Gao, Tao; Gao, Zaifeng; Li, Jie; Sun, Zhongqiang; Shen, Mowei

    2011-01-01

    Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual…

  26. The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure

    PubMed Central

    Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.

    2017-01-01

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797

  27. Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.

    PubMed

    Putzar, Lisa; Gondan, Matthias; Röder, Brigitte

    2012-01-01

    People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on visual and/or crossmodal input since birth.

  28. Assessment of visual communication by information theory

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.

    1994-01-01

    This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.

  29. Weighted integration of short-term memory and sensory signals in the oculomotor system.

    PubMed

    Deravet, Nicolas; Blohm, Gunnar; Orban de Xivry, Jean-Jacques; Lefèvre, Philippe

    2018-05-01

    Oculomotor behaviors integrate sensory and prior information to overcome sensory-motor delays and noise. After much debate about this process, reliability-based integration has recently been proposed and several models of smooth pursuit now include recurrent Bayesian integration or Kalman filtering. However, there is a lack of behavioral evidence in humans supporting these theoretical predictions. Here, we independently manipulated the reliability of visual and prior information in a smooth pursuit task. Our results show that both smooth pursuit eye velocity and catch-up saccade amplitude were modulated by visual and prior information reliability. We interpret these findings as the continuous reliability-based integration of a short-term memory of target motion with visual information, which supports modeling work. Furthermore, we suggest that saccadic and pursuit systems share this short-term memory. We propose that this short-term memory of target motion is quickly built and continuously updated, and constitutes a general building block present in all sensorimotor systems.
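
    In one dimension, the recurrent Bayesian/Kalman-filtering account referenced here reduces to a reliability-weighted update of a remembered target velocity by each new visual measurement. The numbers in the Python sketch below are invented for illustration.

        # A short-term memory of target velocity (the prior) is merged with a
        # noisy visual measurement, each weighted by its reliability (inverse variance).
        prior_v, prior_var = 12.0, 4.0   # remembered velocity (deg/s) and its uncertainty
        meas_v, meas_var = 16.0, 2.0     # current visual estimate and its uncertainty

        gain = prior_var / (prior_var + meas_var)     # Kalman gain
        post_v = prior_v + gain * (meas_v - prior_v)  # updated velocity estimate
        post_var = (1.0 - gain) * prior_var           # reduced uncertainty
        print(f"updated velocity: {post_v:.1f} deg/s (variance {post_var:.1f})")
        # Degrading the visual input (a larger meas_var) shifts the weight back
        # toward memory, the behavioral modulation reported in the record above.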

  30. Shape Recognition in Infancy: Visual Integration of Sequential Information.

    ERIC Educational Resources Information Center

    Rose, Susan A

    1988-01-01

    Investigated infants' integration of visual information across space and time. In four experiments, infants aged 12 months and 6 months viewed objects after watching a light trace similar and dissimilar shapes. Infants looked longer at novel shapes, although six-month-olds did not recognize figures taking more than 10 seconds to trace. One-year-old…

  31. Semantic integration of audio-visual information of polyphonic characters in a sentence context: an event-related potential study.

    PubMed

    Liu, Hong; Zhang, Gaoyan; Liu, Baolin

    2017-04-01

    In the Chinese language, a polyphone is a kind of special character that has more than one pronunciation, with each pronunciation corresponding to a different meaning. Here, we aimed to reveal the cognitive processing of audio-visual information integration of polyphones in a sentence context using the event-related potential (ERP) method. Sentences ending with polyphones were presented to subjects simultaneously in both an auditory and a visual modality. Four experimental conditions were set in which the visual presentations were the same, but the pronunciations of the polyphones were: the correct pronunciation; another pronunciation of the polyphone; a semantically appropriate pronunciation but not the pronunciation of the polyphone; or a semantically inappropriate pronunciation but also not the pronunciation of the polyphone. The behavioral results demonstrated significant differences in response accuracies when judging the semantic meanings of the audio-visual sentences, which reflected the different demands on cognitive resources. The ERP results showed that in the early stage, abnormal pronunciations were represented by the amplitude of the P200 component. Interestingly, because the phonological information mediated access to the lexical semantics, the amplitude and latency of the N400 component changed linearly across conditions, which may reflect the gradually increased semantic mismatch in the four conditions when integrating the auditory pronunciation with the visual information. Moreover, the amplitude of the late positive shift (LPS) showed a significant correlation with the behavioral response accuracies, demonstrating that the LPS component reveals the demand of cognitive resources for monitoring and resolving semantic conflicts when integrating the audio-visual information.

  32. The neural representation of objects formed through the spatiotemporal integration of visual transients

    PubMed Central

    Erlikhman, Gennady; Gurariy, Gennadiy; Mruczek, Ryan E.B.; Caplovitz, Gideon P.

    2016-01-01

    Oftentimes, objects are only partially and transiently visible as parts of them become occluded during observer or object motion. The visual system can integrate such object fragments across space and time into perceptual wholes or spatiotemporal objects. This integrative and dynamic process may involve both ventral and dorsal visual processing pathways, along which shape and spatial representations are thought to arise. We measured fMRI BOLD response to spatiotemporal objects and used multi-voxel pattern analysis (MVPA) to decode shape information across 20 topographic regions of visual cortex. Object identity could be decoded throughout visual cortex, including intermediate (V3A, V3B, hV4, LO1-2) and dorsal (TO1-2 and IPS0-1) visual areas. Shape-specific information, therefore, may not be limited to early and ventral visual areas, particularly when it is dynamic and must be integrated. Contrary to the classic view that the representation of objects is the purview of the ventral stream, intermediate and dorsal areas may play a distinct and critical role in the construction of object representations across space and time. PMID:27033688

  33. Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration.

    PubMed

    Wahn, Basil; König, Peter

    2015-01-01

    Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.

  34. Alpha and Beta Oscillations Index Semantic Congruency between Speech and Gestures in Clear and Degraded Speech.

    PubMed

    Drijvers, Linda; Özyürek, Asli; Jensen, Ole

    2018-06-19

    Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech-gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + "mixing") or mismatching (drinking gesture + "walking") gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.

  35. On the assessment of visual communication by information theory

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1993-01-01

    This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.

  36. Visual influence on path integration in darkness indicates a multimodal representation of large-scale space

    PubMed Central

    Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil

    2011-01-01

    Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934

  37. A biologically inspired neural model for visual and proprioceptive integration including sensory training.

    PubMed

    Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi

    2013-12-01

    Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian inference for a single cause (source) and causal Bayesian inference for two causes (for two senses such as the visual and auditory systems). In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding, which is able to mimic the multisensory integration performed by neural centers in the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process of visual and proprioceptive information in humans. The training process in multisensory integration has received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioception training and eight subjects with visual training. Results of the experiments show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) the means of visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrement is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, are much less than errors in the main test stage, because in the main test the subject has to focus on two senses. The results of the experiments in this paper are in agreement with the results of the neural model simulation.
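
    Population-coding integration of this kind can be sketched compactly: each modality activates a bank of Gaussian-tuned units, the population activities are pooled, and the hand position is decoded from the combined population. All tuning parameters in the Python sketch below are invented and do not reproduce the authors' recurrent network.

        import numpy as np

        angles = np.linspace(0, 360, 72, endpoint=False)  # preferred hand angles (deg)

        def population(theta, sigma, gain):
            """Gaussian tuning-curve responses of the whole population."""
            d = np.abs(angles - theta)
            d = np.minimum(d, 360 - d)                    # circular distance
            return gain * np.exp(-d**2 / (2 * sigma**2))

        vis = population(theta=90.0, sigma=15.0, gain=1.0)    # sharp visual code
        prop = population(theta=100.0, sigma=30.0, gain=0.4)  # broad proprioceptive code
        combined = vis + prop                                 # simple additive pooling

        rad = np.deg2rad(angles)                              # population-vector readout
        est = np.rad2deg(np.arctan2((combined * np.sin(rad)).sum(),
                                    (combined * np.cos(rad)).sum())) % 360
        print(f"decoded hand position: {est:.1f} deg, between the two cues")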

  38. Research on robot mobile obstacle avoidance control based on visual information

    NASA Astrophysics Data System (ADS)

    Jin, Jiang

    2018-03-01

    Detecting obstacles and controlling a robot to avoid them has been a key research topic in robot control. In this paper, a scheme for visual information acquisition is proposed: by interpreting the visual information, it is transformed into the information source for path processing. When obstacles are encountered along the established route, the algorithm adjusts the trajectory in real time so that the mobile robot is controlled intelligently. Simulation results show that, through the integration of visual sensing information, obstacle information is fully obtained while the real-time performance and accuracy of the robot's movement control are guaranteed.

  39. Performance on the Developmental Test of Visual-Motor Integration and Its Supplementary Tests: Comparing Chinese and U.S. Kindergarten Children

    ERIC Educational Resources Information Center

    Tse, Linda F. L.; Siu, Andrew M. H.; Li-Tsang, Cecilia W. P.

    2017-01-01

    Visual-motor integration (VMI) is the ability to coordinate visual perception and motor skills. Although Chinese children show superior VMI performance relative to U.S. norms, there is limited information regarding the basic components of VMI, namely its visual and motor aspects. This study aimed to examine the differences in…

  20. Ontology-driven data integration and visualization for exploring regional geologic time and paleontological information

    NASA Astrophysics Data System (ADS)

    Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo

    2018-06-01

    Initiatives of open data promote the online publication and sharing of large amounts of geologic data. How to retrieve information and discover knowledge from such big data is an ongoing challenge. In this paper, we developed an ontology-driven data integration and visualization pilot system for exploring information on regional geologic time, paleontology, and fundamental geology. The pilot system (http://www2.cs.uidaho.edu/%7Emax/gts/)

  1. The quality of visual information about the lower extremities influences visuomotor coordination during virtual obstacle negotiation.

    PubMed

    Kim, Aram; Kretch, Kari S; Zhou, Zixuan; Finley, James M

    2018-05-09

    Successful negotiation of obstacles during walking relies on the integration of visual information about the environment with ongoing locomotor commands. When information about the body and environment are removed through occlusion of the lower visual field, individuals increase downward head pitch angle, reduce foot placement precision, and increase safety margins during crossing. However, whether these effects are mediated by loss of visual information about the lower extremities, the obstacle, or both remains to be seen. Here, we used a fully immersive, virtual obstacle negotiation task to investigate how visual information about the lower extremities is integrated with information about the environment to facilitate skillful obstacle negotiation. Participants stepped over virtual obstacles while walking on a treadmill with one of three types of visual feedback about the lower extremities: no feedback, end-point feedback, or a link-segment model. We found that absence of visual information about the lower extremities led to an increase in the variability of leading foot placement after crossing. The presence of a visual representation of the lower extremities promoted greater downward head pitch angle during the approach to and subsequent crossing of an obstacle. In addition, having greater downward head pitch was associated with closer placement of the trailing foot to the obstacle, further placement of the leading foot after the obstacle, and higher trailing foot clearance. These results demonstrate that the fidelity of visual information about the lower extremities influences both feed-forward and feedback aspects of visuomotor coordination during obstacle negotiation.

  2. An ERP study on whether semantic integration exists in processing ecologically unrelated audio-visual information.

    PubMed

    Liu, Baolin; Meng, Xianyao; Wang, Zhongning; Wu, Guangning

    2011-11-14

    In the present study, we used event-related potentials (ERPs) to examine whether semantic integration occurs for ecologically unrelated audio-visual information. Videos with synchronous audio-visual information were used as stimuli, where the auditory stimuli were sine-wave sounds at different sound levels and the visual stimuli were simple geometric figures with different areas. In the experiment, participants were shown an initial display containing a single shape (drawn from a set of 6 shapes) with a fixed size (14 cm²) simultaneously with a 3500 Hz tone of a fixed intensity (80 dB). Following a short delay, another shape/tone pair was presented, and the relationship between the size of the shape and the intensity of the tone varied across trials: in the V+A- condition, a large shape was paired with a soft tone; in the V+A+ condition, a large shape was paired with a loud tone, and so forth. The ERP results revealed that an N400 effect was elicited in the incongruent VA- conditions (V+A- and V-A+) as compared with the congruent VA+ conditions (V+A+ and V-A-). This shows that semantic integration occurs when simultaneous, ecologically unrelated auditory and visual stimuli enter the human brain. We propose that this semantic integration rests on a semantic constraint between the audio-visual information, which may derive from long-term learned associations stored in the brain and from short-term experience of the incoming information. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  3. Path integration: effect of curved path complexity and sensory system on blindfolded walking.

    PubMed

    Koutakis, Panagiotis; Mukherjee, Mukul; Vallabhajosula, Srikant; Blanke, Daniel J; Stergiou, Nicholas

    2013-02-01

    Path integration refers to the ability to integrate continuous information about the direction and distance traveled by the system relative to the origin. Previous studies have investigated path integration through blindfolded walking along simple paths such as straight lines and triangles, but little is known about the role of path complexity, or about how information from different sensory systems (such as vision and proprioception) contributes to accurate path integration. The purpose of the current study was to investigate how sensory information and curved-path complexity affect path integration. Forty blindfolded participants had to accurately reproduce a curved path and return to the origin. They were divided into four groups that differed in the curved path, circle (simple) or figure-eight (complex), and received either visual (previously seen) or proprioceptive (previously guided) information about the path before they reproduced it. The dependent variables were average trajectory error, walking speed, and distance traveled. The results indicated that both the groups that walked a circular path and the groups that received visual information reproduced the path more accurately. Moreover, the group that received proprioceptive information and later walked a figure-eight path was less accurate than the corresponding circular group. The groups that had visual information also walked faster than the groups that had proprioceptive information. These results highlight the roles of different sensory inputs in blindfolded walking for path integration. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Teachers' Methodologies and Sources of Information on HIV/AIDS for Students with Visual Impairments in Selected Residential and Integrated Schools in Ghana

    ERIC Educational Resources Information Center

    Hayford, Samuel K.; Ocansey, Frederick

    2017-01-01

    This study reports part of a national survey on sources of information, education and communication materials on HIV/AIDS available to students with visual impairments in residential, segregated, and integrated schools in Ghana. A multi-staged stratified random sampling procedure and a purposive and simple random sampling approach, where…

  5. Models Extracted from Text for System-Software Safety Analyses

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2010-01-01

    This presentation describes extraction and integration of requirements information and safety information in visualizations to support early review of completeness, correctness, and consistency of lengthy and diverse system safety analyses. Software tools have been developed and extended to perform the following tasks: 1) extract model parts and safety information from text in interface requirements documents, failure modes and effects analyses and hazard reports; 2) map and integrate the information to develop system architecture models and visualizations for safety analysts; and 3) provide model output to support virtual system integration testing. This presentation illustrates the methods and products with a rocket motor initiation case.

  6. Widespread correlation patterns of fMRI signal across visual cortex reflect eccentricity organization.

    PubMed

    Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan E B; Kastner, Sabine; Hasson, Uri

    2015-02-19

    The human visual system can be divided into over two-dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas.

  7. Thinking Graphically: Connecting Vision and Cognition during Graph Comprehension

    ERIC Educational Resources Information Center

    Ratwani, Raj M.; Trafton, J. Gregory; Boehm-Davis, Deborah A.

    2008-01-01

    Task analytic theories of graph comprehension account for the perceptual and conceptual processes required to extract specific information from graphs. Comparatively, the processes underlying information integration have received less attention. We propose a new framework for information integration that highlights visual integration and cognitive…

  8. Visual search, visual streams, and visual architectures.

    PubMed

    Green, M

    1991-10-01

    Most psychological, physiological, and computational models of early vision suggest that retinal information is divided into a parallel set of feature modules. The dominant theories of visual search assume that these modules form a "blackboard" architecture: a set of independent representations that communicate only through a central processor. A review of research shows that blackboard-based theories, such as feature-integration theory, cannot easily explain the existing data. The experimental evidence is more consistent with a "network" architecture, which stresses that: (1) feature modules are directly connected to one another, (2) features and their locations are represented together, (3) feature detection and integration are not distinct processing stages, and (4) no executive control process, such as focal attention, is needed to integrate features. Attention is not a spotlight that synthesizes objects from raw features. Instead, it is better to conceptualize attention as an aperture which masks irrelevant visual information.

  9. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss

    PubMed Central

    Brooks, Cassandra J.; Chan, Yu Man; Anderson, Andrew J.; McKendrick, Allison M.

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information. PMID:29867415

  10. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss.

    PubMed

    Brooks, Cassandra J; Chan, Yu Man; Anderson, Andrew J; McKendrick, Allison M

    2018-01-01

    Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information.

  11. Modeling and visualizing borehole information on virtual globes using KML

    NASA Astrophysics Data System (ADS)

    Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing

    2014-01-01

    Advances in virtual globes and Keyhole Markup Language (KML) are providing Earth scientists with universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts, and tube models representing strata. Subsequently, a level-of-detail-based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, is developed to automatically convert borehole data into KML documents. A case study of using Borehole2KML to create borehole models in Shanghai shows that the modeling method is applicable to visualize, integrate and disseminate borehole information on the Internet. The method we have developed has potential use in providing geological information as a public service.
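
    Borehole2KML itself is described only at a high level here, so the sketch below shows just the general pattern for the simplest level of detail, point placemarks at drilling locations. The record schema and sample data are assumptions.

        # Emit a KML document with one <Placemark> per borehole record so that a
        # virtual globe (e.g. Google Earth) can display the drilling locations.
        from xml.sax.saxutils import escape

        def boreholes_to_kml(records):
            marks = []
            for rec in records:
                marks.append(
                    "  <Placemark>\n"
                    "    <name>{0}</name>\n"
                    "    <description>depth: {1} m</description>\n"
                    "    <Point><coordinates>{2},{3},0</coordinates></Point>\n"
                    "  </Placemark>".format(
                        escape(rec["id"]), rec["depth"], rec["lon"], rec["lat"]))
            return ('<?xml version="1.0" encoding="UTF-8"?>\n'
                    '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
                    + "\n".join(marks) + "\n</Document>\n</kml>")

        # A fictitious Shanghai borehole, for illustration only.
        print(boreholes_to_kml([{"id": "SH-001", "lon": 121.47, "lat": 31.23,
                                 "depth": 85.0}]))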

  12. Protein-Protein Interaction Network and Gene Ontology

    NASA Astrophysics Data System (ADS)

    Choi, Yunkyu; Kim, Seok; Yi, Gwan-Su; Park, Jinah

    Advances in computer technology make it possible to access large amounts and various kinds of biological data via the Internet, such as DNA sequences, proteomics data, and information discovered about them. The combination of such heterogeneous data can help researchers find further knowledge about them. The role of a visualization system is to invoke the human abilities to integrate information and to recognize patterns in the data; thus, when various kinds of data are examined and analyzed manually, an effective visualization system is essential. One instance of such integrated visualization is the combination of protein-protein interaction (PPI) data and the Gene Ontology (GO), which can enhance the analysis of PPI networks. We introduce a simple but comprehensive visualization system that integrates GO and PPI data, visualizing the GO and PPI graphs side-by-side and supporting quick cross-reference functions between them. Furthermore, the proposed system provides several interactive visualization methods for efficiently analyzing the PPI network and the GO directed acyclic graph, such as context-based browsing and common-ancestor finding.
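
    Of the interactive operations just mentioned, common-ancestor finding is the easiest to make concrete. The sketch below walks up a toy GO directed acyclic graph from two terms and intersects their ancestor sets; the miniature ontology is invented for illustration, not real GO data.

        # Collect all ancestors of a GO term from a child -> parents mapping
        # (a GO term may have several parents, hence a DAG rather than a tree).
        def ancestors(term, parents):
            seen, stack = set(), [term]
            while stack:
                for p in parents.get(stack.pop(), ()):
                    if p not in seen:
                        seen.add(p)
                        stack.append(p)
            return seen

        toy_go = {"GO:B": ["GO:A"], "GO:C": ["GO:A"],
                  "GO:D": ["GO:B", "GO:C"], "GO:E": ["GO:C"]}
        print(ancestors("GO:D", toy_go) & ancestors("GO:E", toy_go))
        # -> {'GO:A', 'GO:C'} (set order may vary)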

  13. The working memory Ponzo illusion: Involuntary integration of visuospatial information stored in visual working memory.

    PubMed

    Shen, Mowei; Xu, Haokui; Zhang, Haihang; Shui, Rende; Zhang, Meng; Zhou, Jifan

    2015-08-01

    Visual working memory (VWM) has been traditionally viewed as a mental structure subsequent to visual perception that stores the final output of perceptual processing. However, VWM has recently been emphasized as a critical component of online perception, providing storage for the intermediate perceptual representations produced during visual processing. This interactive view holds the core assumption that VWM is not the terminus of perceptual processing; rather, the stored visual information continues to undergo perceptual processing when necessary. The current study tests this assumption by demonstrating involuntary integration of VWM content, creating the Ponzo illusion in VWM: when the Ponzo illusion figure was divided into its individual components and these were sequentially encoded into VWM, the temporally separated components were involuntarily integrated, leading to distorted length perception of the two horizontal lines. This VWM Ponzo illusion was replicated when the figure components were presented in different combinations and presentation orders. The magnitude of the illusion was significantly correlated between the VWM and perceptual versions of the Ponzo illusion. These results suggest that the information integration underlying the VWM Ponzo illusion is constrained by the laws of visual perception and similarly affected by the common individual factors that govern perception. Thus, our findings provide compelling evidence that VWM functions as a buffer serving perceptual processes at early stages. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Visual Information for the Desktop, version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2006-03-29

    VZIN integrates visual analytics capabilities into popular desktop tools to aid a user in searching and understanding an information space. VZIN allows users to Drag-Drop-Visualize-Explore-Organize information within tools such as Microsoft Office, Windows Explorer, Excel, and Outlook. VZIN is tailorable to specific client or industry requirements. VZIN follows the desktop metaphors so that advanced analytical capabilities are available with minimal user training.

  15. View Combination: A Generalization Mechanism for Visual Recognition

    ERIC Educational Resources Information Center

    Friedman, Alinda; Waller, David; Thrash, Tyler; Greenauer, Nathan; Hodgson, Eric

    2011-01-01

    We examined whether view combination mechanisms shown to underlie object and scene recognition can integrate visual information across views that have little or no three-dimensional information at either the object or scene level. In three experiments, people learned four "views" of a two-dimensional visual array derived from a three-dimensional…

  16. Neural mechanisms of human perceptual choice under focused and divided attention.

    PubMed

    Wyart, Valentin; Myers, Nicholas E; Summerfield, Christopher

    2015-02-25

    Perceptual decisions occur after the evaluation and integration of momentary sensory inputs, and dividing attention between spatially disparate sources of information impairs decision performance. However, it remains unknown whether dividing attention degrades the precision of sensory signals, precludes their conversion into decision signals, or dampens the integration of decision information toward an appropriate response. Here we recorded human electroencephalographic (EEG) activity while participants categorized one of two simultaneous and independent streams of visual gratings according to their average tilt. By analyzing trial-by-trial correlations between EEG activity and the information offered by each sample, we obtained converging behavioral and neural evidence that dividing attention between left and right visual fields does not dampen the encoding of sensory or decision information. Under divided attention, momentary decision information from both visual streams was encoded in slow parietal signals without interference but was lost downstream during their integration as reflected in motor mu- and beta-band (10-30 Hz) signals, resulting in a "leaky" accumulation process that conferred greater behavioral influence to more recent samples. By contrast, sensory inputs that were explicitly cued as irrelevant were not converted into decision signals. These findings reveal that a late cognitive bottleneck on information integration limits decision performance under divided attention, and places new capacity constraints on decision-theoretic models of information integration under cognitive load. Copyright © 2015 the authors.
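
    The "leaky" accumulation reported here has a compact form: with a leak factor below one, each new sample enters at full weight while earlier evidence decays geometrically, so later samples dominate the choice. The sketch below is a generic leaky integrator with an arbitrary leak value, not the authors' fitted model.

        # Leaky evidence accumulation: acc <- leak * acc + sample.
        # A sample arriving k steps before the decision is weighted by leak**k,
        # producing the recency effect described above.
        def accumulate(samples, leak=0.8):
            acc = 0.0
            for s in samples:        # momentary decision updates (signed tilt)
                acc = leak * acc + s
            return acc               # the choice could be taken as sign(acc)

        print(accumulate([1.0, 0.0, 0.0, 0.0]))  # 0.512: early evidence decays
        print(accumulate([0.0, 0.0, 0.0, 1.0]))  # 1.0: late evidence, full weight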

  17. Neural mechanisms of human perceptual choice under focused and divided attention

    PubMed Central

    Wyart, Valentin; Myers, Nicholas E.; Summerfield, Christopher

    2015-01-01

    Perceptual decisions occur after evaluation and integration of momentary sensory inputs, and dividing attention between spatially disparate sources of information impairs decision performance. However, it remains unknown whether dividing attention degrades the precision of sensory signals, precludes their conversion into decision signals, or dampens the integration of decision information towards an appropriate response. Here we recorded human electroencephalographic (EEG) activity whilst participants categorised one of two simultaneous and independent streams of visual gratings according to their average tilt. By analyzing trial-by-trial correlations between EEG activity and the information offered by each sample, we obtained converging behavioural and neural evidence that dividing attention between left and right visual fields does not dampen the encoding of sensory or decision information. Under divided attention, momentary decision information from both visual streams was encoded in slow parietal signals without interference but was lost downstream during their integration as reflected in motor mu- and beta-band (10–30 Hz) signals, resulting in a ‘leaky’ accumulation process which conferred greater behavioural influence to more recent samples. By contrast, sensory inputs that were explicitly cued as irrelevant were not converted into decision signals. These findings reveal that a late cognitive bottleneck on information integration limits decision performance under divided attention, and place new capacity constraints on decision-theoretic models of information integration under cognitive load. PMID:25716848

  18. Audiovisual integration of emotional signals from music improvisation does not depend on temporal correspondence.

    PubMed

    Petrini, Karin; McAleer, Phil; Pollick, Frank

    2010-04-06

    In the present study we applied a paradigm often used in face-voice affect perception to solo music improvisation to examine how the emotional valence of sound and gesture are integrated when perceiving an emotion. Three brief excerpts expressing emotion produced by a drummer and three by a saxophonist were selected. From these bimodal congruent displays the audio-only, visual-only, and audiovisually incongruent conditions (obtained by combining the two signals both within and between instruments) were derived. In Experiment 1 twenty musical novices judged the perceived emotion and rated the strength of each emotion. The results indicate that sound dominated the visual signal in the perception of affective expression, though this was more evident for the saxophone. In Experiment 2 a further sixteen musical novices were asked to either pay attention to the musicians' movements or to the sound when judging the perceived emotions. The results showed no effect of visual information when judging the sound. On the contrary, when judging the emotional content of the visual information, a worsening in performance was obtained for the incongruent condition that combined different emotional auditory and visual information for the same instrument. The effect of emotionally discordant information thus became evident only when the auditory and visual signals belonged to the same categorical event despite their temporal mismatch. This suggests that the integration of emotional information may be reinforced by its semantic attributes but might be independent from temporal features. Copyright 2010 Elsevier B.V. All rights reserved.

  19. Effects of aging on audio-visual speech integration.

    PubMed

    Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric

    2014-10-01

    This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group.

  20. User-centered evaluation of Arizona BioPathway: an information extraction, integration, and visualization system.

    PubMed

    Quiñones, Karin D; Su, Hua; Marshall, Byron; Eggers, Shauna; Chen, Hsinchun

    2007-09-01

    Explosive growth in biomedical research has made automated information extraction, knowledge integration, and visualization increasingly important and critically needed. The Arizona BioPathway (ABP) system extracts and displays biological regulatory pathway information from the abstracts of journal articles. This study uses relations extracted from more than 200 PubMed abstracts presented in a tabular and graphical user interface with built-in search and aggregation functionality. This paper presents a task-centered assessment of the usefulness and usability of the ABP system focusing on its relation aggregation and visualization functionalities. Results suggest that our graph-based visualization is more efficient in supporting pathway analysis tasks and is perceived as more useful and easier to use as compared to a text-based literature-viewing method. Relation aggregation significantly contributes to knowledge-acquisition efficiency. Together, the graphic and tabular views in the ABP Visualizer provide a flexible and effective interface for pathway relation browsing and analysis. Our study contributes to pathway-related research and biological information extraction by assessing the value of a multiview, relation-based interface that supports user-controlled exploration of pathway information across multiple granularities.

  1. Information processing in the primate visual system - An integrated systems perspective

    NASA Technical Reports Server (NTRS)

    Van Essen, David C.; Anderson, Charles H.; Felleman, Daniel J.

    1992-01-01

    The primate visual system contains dozens of distinct areas in the cerebral cortex and several major subcortical structures. These subdivisions are extensively interconnected in a distributed hierarchical network that contains several intertwined processing streams. A number of strategies are used for efficient information processing within this hierarchy. These include linear and nonlinear filtering, passage through information bottlenecks, and coordinated use of multiple types of information. In addition, dynamic regulation of information flow within and between visual areas may provide the computational flexibility needed for the visual system to perform a broad spectrum of tasks accurately and at high resolution.

  2. Audiovisual integration of emotional signals in voice and face: an event-related fMRI study.

    PubMed

    Kreifelts, Benjamin; Ethofer, Thomas; Grodd, Wolfgang; Erb, Michael; Wildgruber, Dirk

    2007-10-01

    In a natural environment, non-verbal emotional communication is multimodal (i.e. speech melody, facial expression) and multifaceted concerning the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual and auditory stimulation. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus, when contrasting audiovisual to auditory and visual conditions. Further, a characteristic of these brain regions, substantiating their role in the emotional integration process, is a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.

  3. Integrated data visualisation: an approach to capture older adults’ wellness

    PubMed Central

    Wilamowska, Katarzyna; Demiris, George; Thompson, Hilaire

    2013-01-01

    Informatics tools can help support the health and independence of older adults. In this paper, we present an approach towards integrating health-monitoring data and describe several techniques for the assessment and visualisation of integrated health and well-being of older adults. We present three different visualisation techniques to provide distinct alternatives towards display of the same information, focusing on reducing the cognitive load of data interpretation. We demonstrate the feasibility of integrating health-monitoring information into a comprehensive measure of wellness, while also highlighting the challenges of designing visual displays targeted at multiple user groups. These visual displays of wellness can be incorporated into personal health records and can be an effective support for informed decision-making. PMID:23079025

  4. Widespread correlation patterns of fMRI signal across visual cortex reflect eccentricity organization

    PubMed Central

    Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan EB; Kastner, Sabine; Hasson, Uri

    2015-01-01

    The human visual system can be divided into over two-dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas. DOI: http://dx.doi.org/10.7554/eLife.03952.001 PMID:25695154

  5. Behavioural evidence for separate mechanisms of audiovisual temporal binding as a function of leading sensory modality.

    PubMed

    Cecere, Roberto; Gross, Joachim; Thut, Gregor

    2016-06-01

    The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even if trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs are independent, possibly tapping into different circuits for audiovisual integration due to engagement of different multisensory sampling mechanisms depending on leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  6. Short temporal asynchrony disrupts visual object recognition

    PubMed Central

    Singer, Jedediah M.; Kreiman, Gabriel

    2014-01-01

    Humans can recognize objects and scenes in a small fraction of a second. The cascade of signals underlying rapid recognition might be disrupted by temporally jittering different parts of complex objects. Here we investigated the time course over which shape information can be integrated to allow for recognition of complex objects. We presented fragments of object images in an asynchronous fashion and behaviorally evaluated categorization performance. We observed that visual recognition was significantly disrupted by asynchronies of approximately 30 ms, suggesting that spatiotemporal integration begins to break down with even small deviations from simultaneity. However, moderate temporal asynchrony did not completely obliterate recognition; in fact, integration of visual shape information persisted even with an asynchrony of 100 ms. We describe the data with a concise model based on the dynamic reduction of uncertainty about what image was presented. These results emphasize the importance of timing in visual processing and provide strong constraints for the development of dynamical models of visual shape recognition. PMID:24819738

  7. Cortical Neuroprosthesis Merges Visible and Invisible Light Without Impairing Native Sensory Function

    PubMed Central

    Thomson, Eric E.; Zea, Ivan; França, Wendy

    2017-01-01

    Adult rats equipped with a sensory prosthesis, which transduced infrared (IR) signals into electrical signals delivered to somatosensory cortex (S1), took approximately 4 d to learn a four-choice IR discrimination task. Here, we show that when such IR signals are projected to the primary visual cortex (V1), rats that are pretrained in a visual-discrimination task typically learn the same IR discrimination task on their first day of training. However, without prior training on a visual discrimination task, the learning rates for S1- and V1-implanted animals converged, suggesting there is no intrinsic difference in learning rate between the two areas. We also discovered that animals were able to integrate IR information into the ongoing visual processing stream in V1, performing a visual-IR integration task in which they had to combine IR and visual information. Furthermore, when the IR prosthesis was implanted in S1, rats showed no impairment in their ability to use their whiskers to perform a tactile discrimination task. Instead, in some rats, this ability was actually enhanced. Cumulatively, these findings suggest that cortical sensory neuroprostheses can rapidly augment the representational scope of primary sensory areas, integrating novel sources of information into ongoing processing while incurring minimal loss of native function. PMID:29279860

  8. Location perception: the X-Files parable.

    PubMed

    Prinzmetal, William

    2005-01-01

    Three aspects of visual object location were investigated: (1) how the visual system integrates information for locating objects, (2) how attention operates to affect location perception, and (3) how the visual system deals with locating an object when multiple objects are present. The theories were described in terms of a parable (the X-Files parable). Then, computer simulations were developed. Finally, predictions derived from the simulations were tested. In the scenario described in the parable, we ask how a system of detectors might locate an alien spaceship, how attention might be implemented in such a spaceship detection system, and how the presence of one spaceship might influence the location perception of another alien spaceship. Experiment 1 demonstrated that location information is integrated with a spatial average rule. In Experiment 2, this rule was applied to a more-samples theory of attention. Experiment 3 demonstrated how the integration rule could account for various visual illusions.
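
    The spatial-average rule supported by Experiment 1 can be stated in one line: the perceived location is the activation-weighted mean of the responding detectors' positions. A minimal sketch, with invented detector positions and activations:

        # Spatial-average rule: perceived location = weighted mean of detector
        # positions, weighted by each detector's activation. Values are made up.
        def perceived_location(positions, activations):
            return (sum(p * a for p, a in zip(positions, activations))
                    / sum(activations))

        # Detectors at 1.0, 2.0 and 3.0 deg; the middle detector responds most.
        print(perceived_location([1.0, 2.0, 3.0], [0.2, 1.0, 0.4]))  # 2.125 deg

    On a more-samples account of attention, attending to a location simply contributes additional samples to this average, reducing the variability of the perceived position.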

  9. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment

    PubMed Central

    Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J.

    2014-01-01

    Speech perception involves the integration of auditory and visual articulatory information and, thus, requires the perception of temporal synchrony between this information. There is evidence that children with Specific Language Impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component followed the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception. PMID:22874648

  10. Perception of audio-visual speech synchrony in Spanish-speaking children with and without specific language impairment.

    PubMed

    Pons, Ferran; Andreu, Llorenç; Sanz-Torrent, Monica; Buil-Legaz, Lucía; Lewkowicz, David J

    2013-06-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the integration of auditory and visual speech. Twenty Spanish-speaking children with SLI, twenty typically developing age-matched Spanish-speaking children, and twenty Spanish-speaking children matched for MLU-w participated in an eye-tracking study to investigate the perception of audiovisual speech synchrony. Results revealed that children with typical language development perceived an audiovisual asynchrony of 666 ms regardless of whether the auditory or visual speech attribute led the other one. Children with SLI only detected the 666 ms asynchrony when the auditory component preceded [corrected] the visual component. None of the groups perceived an audiovisual asynchrony of 366 ms. These results suggest that the difficulty of speech processing by children with SLI would also involve difficulties in integrating auditory and visual aspects of speech perception.

  11. Visual stimuli induced by self-motion and object-motion modify odour-guided flight of male moths (Manduca sexta L.).

    PubMed

    Verspui, Remko; Gray, John R

    2009-10-01

    Animals rely on multimodal sensory integration for proper orientation within their environment. For example, odour-guided behaviours often require appropriate integration of concurrent visual cues. To gain a further understanding of mechanisms underlying sensory integration in odour-guided behaviour, our study examined the effects of visual stimuli induced by self-motion and object-motion on odour-guided flight in male M. sexta. By placing stationary objects (pillars) on either side of a female pheromone plume, moths produced self-induced visual motion during odour-guided flight. These flights showed a reduction in both ground and flight speeds and inter-turn interval when compared with flight tracks without stationary objects. Presentation of an approaching 20 cm disc, to simulate object-motion, resulted in interrupted odour-guided flight and changes in flight direction away from the pheromone source. Modifications of odour-guided flight behaviour in the presence of stationary objects suggest that visual information, in conjunction with olfactory cues, can be used to control the rate of counter-turning. We suggest that the behavioural responses to visual stimuli induced by object-motion indicate the presence of a neural circuit that relays visual information to initiate escape responses. These behavioural responses also suggest the presence of a sensory conflict requiring a trade-off between olfactory and visually driven behaviours. The mechanisms underlying olfactory and visual integration are discussed in the context of these behavioural responses.

  12. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs

    PubMed Central

    ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110

  13. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    PubMed

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  14. Unconscious Cross-Modal Priming of Auditory Sound Localization by Visual Words

    ERIC Educational Resources Information Center

    Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard

    2016-01-01

    Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as "up", influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious…

  15. Ocelli contribute to the encoding of celestial compass information in the Australian desert ant Melophorus bagoti.

    PubMed

    Schwarz, Sebastian; Albert, Laurence; Wystrach, Antoine; Cheng, Ken

    2011-03-15

    Many animal species, including some social hymenoptera, use the visual system for navigation. Although the insect compound eyes have been well studied, less is known about the second visual system in some insects, the ocelli. Here we demonstrate navigational functions of the ocelli in the visually guided Australian desert ant Melophorus bagoti. These ants are known to rely on both visual landmark learning and path integration. We conducted experiments to reveal the role of ocelli in the perception and use of celestial compass information and landmark guidance. Ants with directional information from their path integration system were tested with covered compound eyes and open ocelli on an unfamiliar test field where only celestial compass cues were available for homing. These full-vector ants, using only their ocelli for visual information, oriented significantly towards the fictive nest on the test field, indicating the use of celestial compass information that is presumably based on polarised skylight, the sun's position or the colour gradient of the sky. Ants without any directional information from their path-integration system (zero-vector) were tested, also with covered compound eyes and open ocelli, on a familiar training field where they have to use the surrounding panorama to home. These ants failed to orient significantly in the homeward direction. Together, our results demonstrated that M. bagoti could perceive and process celestial compass information for directional orientation with their ocelli. In contrast, the ocelli do not seem to contribute to terrestrial landmark-based navigation in M. bagoti.

  16. Storage of features, conjunctions and objects in visual working memory.

    PubMed

    Vogel, E K; Woodman, G F; Luck, S J

    2001-02-01

    Working memory can be divided into separate subsystems for verbal and visual information. Although the verbal system has been well characterized, the storage capacity of visual working memory has not yet been established for simple features or for conjunctions of features. The authors demonstrate that it is possible to retain information about only 3-4 colors or orientations in visual working memory at one time. Observers are also able to retain both the color and the orientation of 3-4 objects, indicating that visual working memory stores integrated objects rather than individual features. Indeed, objects defined by a conjunction of four features can be retained in working memory just as well as single-feature objects, allowing many individual features to be retained when distributed across a small number of objects. Thus, the capacity of visual working memory must be understood in terms of integrated objects rather than individual features.

  17. Late maturation of visual spatial integration in humans

    PubMed Central

    Kovács, Ilona; Kozma, Petra; Fehér, Ákos; Benedek, György

    1999-01-01

    Visual development is thought to be completed at an early age. We suggest that the maturation of the visual brain is not homogeneous: functions with greater need for early availability, such as visuomotor control, mature earlier, and the development of other visual functions may extend well into childhood. We found significant improvement in children between 5 and 14 years in visual spatial integration by using a contour-detection task. The data show that long-range spatial interactions—subserving the integration of orientational information across the visual field—span a shorter spatial range in children than in adults. Performance in the task improves in a cue-specific manner with practice, which indicates the participation of fairly low-level perceptual mechanisms. We interpret our findings in terms of a protracted development of ventral visual-stream function in humans. PMID:10518600

  18. Information, entropy, and fidelity in visual communication

    NASA Astrophysics Data System (ADS)

    Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-ur

    1992-10-01

    This paper presents an assessment of visual communication that integrates the critical limiting factors of image gathering and display with the digital processing that is used to code and restore images. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image.
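
    The abstract names the two criteria without giving their formulas; for orientation, the textbook forms are the Shannon entropy of the encoded data and a mean-squared-error-based fidelity of the restored image. The sketch below uses those standard definitions, which may differ in detail from the authors' formulation.

        import math

        # Shannon entropy of an encoded symbol stream, in bits per symbol.
        def entropy(symbols):
            n = len(symbols)
            counts = {}
            for s in symbols:
                counts[s] = counts.get(s, 0) + 1
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        # One common fidelity measure: peak signal-to-noise ratio (dB) between
        # the original and restored images, based on mean squared error.
        def psnr(original, restored, peak=255.0):
            mse = (sum((o - r) ** 2 for o, r in zip(original, restored))
                   / len(original))
            return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

        print(entropy([0, 0, 1, 2]))             # 1.5 bits/symbol
        print(psnr([10, 20, 30], [12, 19, 29]))  # ~45.1 dB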

  19. Information, entropy and fidelity in visual communication

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1992-01-01

    This paper presents an assessment of visual communication that integrates the critical limiting factors of image gathering and display with the digital processing that is used to code and restore images. The approach focuses on two mathematical criteria, information and fidelity, and on their relationships to the entropy of the encoded data and to the visual quality of the restored image.

  20. Decentralized Multisensory Information Integration in Neural Systems.

    PubMed

    Zhang, Wen-Hao; Chen, Aihua; Rasch, Malte J; Wu, Si

    2016-01-13

    How multiple sensory cues are integrated in neural circuitry remains a challenge. The common hypothesis is that information integration might be accomplished in a dedicated multisensory integration area receiving feedforward inputs from the modalities. However, recent experimental evidence suggests that it is not a single multisensory brain area, but rather many multisensory brain areas that are simultaneously involved in the integration of information. Why many mutually connected areas should be needed for information integration is puzzling. Here, we investigated theoretically how information integration could be achieved in a distributed fashion within a network of interconnected multisensory areas. Using biologically realistic neural network models, we developed a decentralized information integration system that comprises multiple interconnected integration areas. Studying an example of combining visual and vestibular cues to infer heading direction, we show that such a decentralized system is in good agreement with anatomical evidence and experimental observations. In particular, we show that this decentralized system can integrate information optimally. The decentralized system predicts that optimally integrated information should emerge locally from the dynamics of the communication between brain areas and sheds new light on the interpretation of the connectivity between multisensory brain areas. To extract information reliably from ambiguous environments, the brain integrates multiple sensory cues, which provide different aspects of information about the same entity of interest. Here, we propose a decentralized architecture for multisensory integration. In such a system, no processor is in the center of the network topology and information integration is achieved in a distributed manner through reciprocally connected local processors. Through studying the inference of heading direction with visual and vestibular cues, we show that the decentralized system can integrate information optimally, with the reciprocal connections between processors determining the extent of cue integration. Our model reproduces known multisensory integration behaviors observed in experiments and sheds new light on our understanding of how information is integrated in the brain. Copyright © 2016 Zhang et al.
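
    To make the "no central processor" idea concrete, the toy dynamics below lets two reciprocally connected processors exchange messages until they agree; the noisier processor lets itself be pulled more, so the consensus lands at the reliability-weighted optimum. This is far simpler than the authors' biologically realistic network models, and all parameter values are invented.

        # Two areas (e.g. visual and vestibular) each hold a heading estimate
        # and repeatedly nudge it toward the other's message. The step size
        # scales with the area's own variance, so the consensus equals the
        # optimal inverse-variance-weighted combination of the two cues.
        def decentralized_fusion(cue_a, var_a, cue_b, var_b,
                                 rate=0.05, steps=500):
            est_a, est_b = cue_a, cue_b
            for _ in range(steps):
                step_a = rate * var_a * (est_b - est_a)
                step_b = rate * var_b * (est_a - est_b)
                est_a, est_b = est_a + step_a, est_b + step_b
            return est_a, est_b

        # Visual cue 30 deg (var 1.0), vestibular cue 40 deg (var 4.0):
        print(decentralized_fusion(30.0, 1.0, 40.0, 4.0))  # both -> ~32.0 deg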

  1. Decentralized Multisensory Information Integration in Neural Systems

    PubMed Central

    Zhang, Wen-hao; Chen, Aihua; Rasch, Malte J.; Wu, Si

    2016-01-01

    How multiple sensory cues are integrated in neural circuitry remains a challenge. The common hypothesis is that information integration might be accomplished in a dedicated multisensory integration area receiving feedforward inputs from the modalities. However, recent experimental evidence suggests that it is not a single multisensory brain area, but rather many multisensory brain areas that are simultaneously involved in the integration of information. Why many mutually connected areas should be needed for information integration is puzzling. Here, we investigated theoretically how information integration could be achieved in a distributed fashion within a network of interconnected multisensory areas. Using biologically realistic neural network models, we developed a decentralized information integration system that comprises multiple interconnected integration areas. Studying an example of combining visual and vestibular cues to infer heading direction, we show that such a decentralized system is in good agreement with anatomical evidence and experimental observations. In particular, we show that this decentralized system can integrate information optimally. The decentralized system predicts that optimally integrated information should emerge locally from the dynamics of the communication between brain areas and sheds new light on the interpretation of the connectivity between multisensory brain areas. SIGNIFICANCE STATEMENT To extract information reliably from ambiguous environments, the brain integrates multiple sensory cues, which provide different aspects of information about the same entity of interest. Here, we propose a decentralized architecture for multisensory integration. In such a system, no processor is in the center of the network topology and information integration is achieved in a distributed manner through reciprocally connected local processors. Through studying the inference of heading direction with visual and vestibular cues, we show that the decentralized system can integrate information optimally, with the reciprocal connections between processors determining the extent of cue integration. Our model reproduces known multisensory integration behaviors observed in experiments and sheds new light on our understanding of how information is integrated in the brain. PMID:26758843
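
    The "optimal integration" benchmark invoked in these records is the standard maximum-likelihood cue-combination rule. As a reference (these are the textbook equations, with \hat{s}_{vis}, \hat{s}_{vest} the single-cue heading estimates and \sigma^2 their variances; the notation is ours, not the paper's), the fused estimate weights each cue by its inverse variance, and the fused variance is never larger than that of the better single cue:

        \hat{s}_{comb} = \frac{\sigma_{vest}^{2}\,\hat{s}_{vis} + \sigma_{vis}^{2}\,\hat{s}_{vest}}{\sigma_{vis}^{2} + \sigma_{vest}^{2}},
        \qquad
        \sigma_{comb}^{2} = \frac{\sigma_{vis}^{2}\,\sigma_{vest}^{2}}{\sigma_{vis}^{2} + \sigma_{vest}^{2}}.

    A decentralized network "integrates optimally" in this sense when the estimate read out from any single area matches these values.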

  2. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams

    PubMed Central

    Rouinfar, Amy; Agra, Elise; Larson, Adam M.; Rebello, N. Sanjay; Loschky, Lester C.

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in the organizing and integrating of it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. PMID:25324804

  3. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    PubMed

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information and aid in the organizing and integrating of it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  4. Visual-auditory integration during speech imitation in autism.

    PubMed

    Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas

    2004-01-01

    Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.

  5. Comprehensive Modeling and Visualization of Cardiac Anatomy and Physiology from CT Imaging and Computer Simulations

    PubMed Central

    Sun, Peng; Zhou, Haoyin; Ha, Seongmin; Hartaigh, Bríain ó; Truong, Quynh A.; Min, James K.

    2016-01-01

    In clinical cardiology, both anatomy and physiology are needed to diagnose cardiac pathologies. CT imaging and computer simulations provide valuable and complementary data for this purpose. However, it remains challenging to gain useful information from the large amount of high-dimensional diverse data. The current tools are not adequately integrated to visualize anatomic and physiologic data from a complete yet focused perspective. We introduce a new computer-aided diagnosis framework, which allows for comprehensive modeling and visualization of cardiac anatomy and physiology from CT imaging data and computer simulations, with a primary focus on ischemic heart disease. The following visual information is presented: (1) Anatomy from CT imaging: geometric modeling and visualization of cardiac anatomy, including four heart chambers, left and right ventricular outflow tracts, and coronary arteries; (2) Function from CT imaging: motion modeling, strain calculation, and visualization of four heart chambers; (3) Physiology from CT imaging: quantification and visualization of myocardial perfusion and contextual integration with coronary artery anatomy; (4) Physiology from computer simulation: computation and visualization of hemodynamics (e.g., coronary blood velocity, pressure, shear stress, and fluid forces on the vessel wall). Substantially, feedback from cardiologists has confirmed the practical utility of integrating these features for the purpose of computer-aided diagnosis of ischemic heart disease. PMID:26863663

  6. Neurophysiology underlying influence of stimulus reliability on audiovisual integration.

    PubMed

    Shatzer, Hannah; Shen, Stanley; Kerlin, Jess R; Pitt, Mark A; Shahin, Antoine J

    2018-01-24

    We tested the predictions of the dynamic reweighting model (DRM) of audiovisual (AV) speech integration, which posits that spectrotemporally reliable (informative) AV speech stimuli induce a reweighting of processing from low-level to high-level auditory networks. This reweighting decreases sensitivity to acoustic onsets and in turn increases tolerance to AV onset asynchronies (AVOA). EEG was recorded while subjects watched videos of a speaker uttering trisyllabic nonwords that varied in spectrotemporal reliability and asynchrony of the visual and auditory inputs. Subjects judged the stimuli as in-sync or out-of-sync. Results showed that subjects exhibited greater AVOA tolerance for non-blurred than for blurred visual speech and for less degraded than for more degraded acoustic speech. Increased AVOA tolerance was reflected in reduced amplitude of the P1-P2 auditory evoked potentials, a neurophysiological indication of reduced sensitivity to acoustic onsets and successful AV integration. There was also sustained visual alpha band (8-14 Hz) suppression (desynchronization) following acoustic speech onsets for non-blurred vs. blurred visual speech, consistent with continuous engagement of the visual system as the speech unfolds. The current findings suggest that increased spectrotemporal reliability of acoustic and visual speech promotes robust AV integration, partly by suppressing sensitivity to acoustic onsets, in support of the DRM's reweighting mechanism. Increased visual signal reliability also sustains the engagement of the visual system with the auditory system to maintain alignment of information across modalities. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  7. Late development of cue integration is linked to sensory fusion in cortex.

    PubMed

    Dekker, Tessa M; Ban, Hiroshi; van der Velde, Bauke; Sereno, Martin I; Welchman, Andrew E; Nardini, Marko

    2015-11-02

    Adults optimize perceptual judgements by integrating different types of sensory information [1, 2]. This engages specialized neural circuits that fuse signals from the same [3-5] or different [6] modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years [7-9]. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this (1) reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or (2) depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information) [10]. We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6-12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain [3-5]. By contrast, in younger children (<10.5 years), there was no evidence for sensory fusion in any visual area. This significant age difference was paired with a shift in perceptual performance around ages 10 to 11 years and could not be explained by motion artifacts, visual attention, or signal quality differences. Thus, whereas many basic visual processes mature early in childhood [11, 12], the brain circuits that fuse cues take a very long time to develop. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  8. Late Development of Cue Integration Is Linked to Sensory Fusion in Cortex

    PubMed Central

    Dekker, Tessa M.; Ban, Hiroshi; van der Velde, Bauke; Sereno, Martin I.; Welchman, Andrew E.; Nardini, Marko

    2015-01-01

    Summary Adults optimize perceptual judgements by integrating different types of sensory information [1, 2]. This engages specialized neural circuits that fuse signals from the same [3, 4, 5] or different [6] modalities. Whereas young children can use sensory cues independently, adult-like precision gains from cue combination only emerge around ages 10 to 11 years [7, 8, 9]. Why does it take so long to make best use of sensory information? Existing data cannot distinguish whether this (1) reflects surprisingly late changes in sensory processing (sensory integration mechanisms in the brain are still developing) or (2) depends on post-perceptual changes (integration in sensory cortex is adult-like, but higher-level decision processes do not access the information) [10]. We tested visual depth cue integration in the developing brain to distinguish these possibilities. We presented children aged 6–12 years with displays depicting depth from binocular disparity and relative motion and made measurements using psychophysics, retinotopic mapping, and pattern classification fMRI. Older children (>10.5 years) showed clear evidence for sensory fusion in V3B, a visual area thought to integrate depth cues in the adult brain [3, 4, 5]. By contrast, in younger children (<10.5 years), there was no evidence for sensory fusion in any visual area. This significant age difference was paired with a shift in perceptual performance around ages 10 to 11 years and could not be explained by motion artifacts, visual attention, or signal quality differences. Thus, whereas many basic visual processes mature early in childhood [11, 12], the brain circuits that fuse cues take a very long time to develop. PMID:26480841

  9. The answer is blowing in the wind: free-flying honeybees can integrate visual and mechano-sensory inputs for making complex foraging decisions.

    PubMed

    Ravi, Sridhar; Garcia, Jair E; Wang, Chun; Dyer, Adrian G

    2016-11-01

    Bees navigate in complex environments using visual, olfactory and mechano-sensorial cues. In the lowest region of the atmosphere, the wind environment can be highly unsteady and bees employ fine motor-skills to enhance flight control. Recent work reveals sophisticated multi-modal processing of visual and olfactory channels by the bee brain to enhance foraging efficiency, but it currently remains unclear whether wind-induced mechano-sensory inputs are also integrated with visual information to facilitate decision making. Individual honeybees were trained in a linear flight arena with appetitive-aversive differential conditioning to use a context-setting cue of 3 m s⁻¹ cross-wind direction to enable decisions about either a 'blue' or 'yellow' star stimulus being the correct alternative. Colour stimuli properties were mapped in bee-specific opponent-colour spaces to validate saliency, and to thus enable rapid reverse learning. Bees were able to integrate mechano-sensory and visual information to facilitate decisions that were significantly different to chance expectation after 35 learning trials. An independent group of bees were trained to find a single rewarding colour that was unrelated to the wind direction. In these trials, wind was not used as a context-setting cue and served only as a potential distracter in identifying the relevant rewarding visual stimuli. Comparison between respective groups shows that bees can learn to integrate visual and mechano-sensory information in a non-elemental fashion, revealing an unsuspected level of sensory processing in honeybees, and adding to the growing body of knowledge on the capacity of insect brains to use multi-modal sensory inputs in mediating foraging behaviour. © 2016. Published by The Company of Biologists Ltd.

  10. A Visual Profile of Queensland Indigenous Children.

    PubMed

    Hopkins, Shelley; Sampson, Geoff P; Hendicott, Peter L; Wood, Joanne M

    2016-03-01

    Little is known about the prevalence of refractive error, binocular vision, and other visual conditions in Australian Indigenous children. This is important given the association of these visual conditions with reduced reading performance in the wider population, which may also contribute to the suboptimal reading performance reported in this population. The aim of this study was to develop a visual profile of Queensland Indigenous children. Vision testing was performed on 595 primary schoolchildren in Queensland, Australia. Vision parameters measured included visual acuity, refractive error, color vision, nearpoint of convergence, horizontal heterophoria, fusional vergence range, accommodative facility, AC/A ratio, visual motor integration, and rapid automatized naming. Near heterophoria, nearpoint of convergence, and near fusional vergence range were used to classify convergence insufficiency (CI). Although refractive error (Indigenous, 10%; non-Indigenous, 16%; p = 0.04) and strabismus (Indigenous, 0%; non-Indigenous, 3%; p = 0.03) were significantly less common in Indigenous children, CI was twice as prevalent (Indigenous, 10%; non-Indigenous, 5%; p = 0.04). Reduced visual information processing skills were more common in Indigenous children (reduced visual motor integration [Indigenous, 28%; non-Indigenous, 16%; p < 0.01] and slower rapid automatized naming [Indigenous, 67%; non-Indigenous, 59%; p = 0.04]). The prevalence of visual impairment (reduced visual acuity) and color vision deficiency was similar between groups. Indigenous children have less refractive error and strabismus than their non-Indigenous peers. However, CI and reduced visual information processing skills were more common in this group. Given that vision screenings primarily target visual acuity assessment and strabismus detection, this is an important finding as many Indigenous children with CI and reduced visual information processing may be missed. Emphasis should be placed on identifying children with CI and reduced visual information processing given the potential effect of these conditions on school performance.

  11. Attention modulates trans-saccadic integration.

    PubMed

    Stewart, Emma E M; Schütz, Alexander C

    2018-01-01

    With every saccade, humans must reconcile the low resolution peripheral information available before a saccade, with the high resolution foveal information acquired after the saccade. While research has shown that we are able to integrate peripheral and foveal vision in a near-optimal manner, it is still unclear which mechanisms may underpin this important perceptual process. One potential mechanism that may moderate this integration process is visual attention. Pre-saccadic attention is a well documented phenomenon, whereby visual attention shifts to the location of an upcoming saccade before the saccade is executed. While it plays an important role in other peri-saccadic processes such as predictive remapping, the role of attention in the integration process is as yet unknown. This study aimed to determine whether the presentation of an attentional distractor during a saccade impaired trans-saccadic integration, and to measure the time-course of this impairment. Results showed that presenting an attentional distractor impaired integration performance both before saccade onset, and during the saccade, in selected subjects who showed integration in the absence of a distractor. This suggests that visual attention may be a mechanism that facilitates trans-saccadic integration. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  12. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF

    PubMed Central

    Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan

    2016-01-01

    With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
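
    To make the integration step concrete, the sketch below builds a bag-of-visual-words image signature by concatenating separate SIFT and SURF histograms. It is a minimal illustration of the general technique, not the paper's implementation: the vocabulary sizes, file names, and OpenCV detectors are assumptions, and SURF additionally requires an opencv-contrib build with non-free modules enabled.

        # Minimal bag-of-visual-words sketch combining SIFT and SURF signatures.
        import cv2
        import numpy as np
        from sklearn.cluster import KMeans

        def stacked_descriptors(paths, detector):
            """Collect local descriptors from a set of grayscale training images."""
            feats = []
            for p in paths:
                img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
                _, des = detector.detectAndCompute(img, None)
                if des is not None:
                    feats.append(des)
            return np.vstack(feats)

        def bovw_histogram(img, detector, vocab):
            """Quantize an image's descriptors against a learned vocabulary."""
            _, des = detector.detectAndCompute(img, None)
            words = vocab.predict(des)
            hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
            return hist / hist.sum()

        sift = cv2.SIFT_create()
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # contrib/non-free build

        train = ["img1.jpg", "img2.jpg"]  # placeholder training set
        sift_vocab = KMeans(n_clusters=200, random_state=0).fit(stacked_descriptors(train, sift))
        surf_vocab = KMeans(n_clusters=200, random_state=0).fit(stacked_descriptors(train, surf))

        query = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
        # Integrated representation: concatenating the two normalized histograms lets
        # the index inherit SIFT's scale/rotation robustness and SURF's illumination
        # robustness, in the spirit of the visual words integration described above.
        signature = np.concatenate([bovw_histogram(query, sift, sift_vocab),
                                    bovw_histogram(query, surf, surf_vocab)])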

  13. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    PubMed

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  14. Sharpening of Hierarchical Visual Feature Representations of Blurred Images.

    PubMed

    Abdelhack, Mohamed; Kamitani, Yukiyasu

    2018-01-01

    The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.
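
    The core measurement logic here (do features decoded from brain activity sit closer to the sharp image than to the blurred stimulus?) can be stated in a few lines. The following is an illustrative sketch under assumed inputs, not the study's decoding pipeline:

        # Given DNN features decoded from brain activity, plus DNN features of the
        # sharp original and the blurred stimulus, a positive index indicates the
        # neural representation has "sharpened" toward the original image.
        import numpy as np

        def cosine(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        def sharpening_index(decoded, sharp_feat, blurred_feat):
            return cosine(decoded, sharp_feat) - cosine(decoded, blurred_feat)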

  15. Some Behavioral and Neurobiological Constraints on Theories of Audiovisual Speech Integration: A Review and Suggestions for New Directions

    PubMed Central

    Altieri, Nicholas; Pisoni, David B.; Townsend, James T.

    2012-01-01

    Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield’s feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration. PMID:21968081

  16. Some behavioral and neurobiological constraints on theories of audiovisual speech integration: a review and suggestions for new directions.

    PubMed

    Altieri, Nicholas; Pisoni, David B; Townsend, James T

    2011-01-01

    Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield's feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration.

  17. Is cross-modal integration of emotional expressions independent of attentional resources?

    PubMed

    Vroomen, J; Driver, J; de Gelder, B

    2001-12-01

    In this study, we examined whether integration of visual and auditory information about emotions requires limited attentional resources. Subjects judged whether a voice expressed happiness or fear, while trying to ignore a concurrently presented static facial expression. As an additional task, the subjects had to add two numbers together rapidly (Experiment 1), count the occurrences of a target digit in a rapid serial visual presentation (Experiment 2), or judge the pitch of a tone as high or low (Experiment 3). The visible face had an impact on judgments of the emotion of the heard voice in all the experiments. This cross-modal effect was independent of whether or not the subjects performed a demanding additional task. This suggests that integration of visual and auditory information about emotions may be a mandatory process, unconstrained by attentional resources.

  18. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558

  19. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
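
    As a concrete illustration of the statistical-learning approach described above, the sketch below fits an unsupervised Gaussian mixture to joint audiovisual cues and reads out integrated category posteriors. All distributions and numbers are invented for demonstration; the paper's actual GMM simulations are more elaborate:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Two hypothetical phonological categories, each emitting an auditory cue
        # (e.g., voice onset time, ms) and a correlated visual cue (e.g., lip aperture).
        cat_b = rng.multivariate_normal([0.0, 1.0], [[25.0, 0.5], [0.5, 0.1]], 500)
        cat_p = rng.multivariate_normal([50.0, 2.0], [[100.0, 4.0], [4.0, 0.2]], 500)
        X = np.vstack([cat_b, cat_p])

        # Distributional learning: recover the two categories without labels.
        gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)

        # The posterior for a new audiovisual token integrates both cues, weighted
        # implicitly by the variances the model has learned for each dimension.
        token = np.array([[25.0, 1.8]])
        print(gmm.predict_proba(token))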

  20. How Object-Specific Are Object Files? Evidence for Integration by Location

    ERIC Educational Resources Information Center

    van Dam, Wessel O.; Hommel, Bernhard

    2010-01-01

    Given the distributed representation of visual features in the human brain, binding mechanisms are necessary to integrate visual information about the same perceptual event. It has been assumed that feature codes are bound into object files--pointers to the neural codes of the features of a given event. The present study investigated the…

  1. ICT Integration in Mathematics Initial Teacher Training and Its Impact on Visualization: The Case of GeoGebra

    ERIC Educational Resources Information Center

    Dockendorff, Monika; Solar, Horacio

    2018-01-01

    This case study investigates the integration of information and communications technology (ICT) into mathematics initial teacher education programmes and its impact on visualization skills. It reports on the influence GeoGebra dynamic software use has on promoting mathematical learning at secondary school and on its impact on teachers' conceptions…

  2. Learning about Probability from Text and Tables: Do Color Coding and Labeling through an Interactive-User Interface Help?

    ERIC Educational Resources Information Center

    Clinton, Virginia; Morsanyi, Kinga; Alibali, Martha W.; Nathan, Mitchell J.

    2016-01-01

    Learning from visual representations is enhanced when learners appropriately integrate corresponding visual and verbal information. This study examined the effects of two methods of promoting integration, color coding and labeling, on learning about probabilistic reasoning from a table and text. Undergraduate students (N = 98) were randomly…

  3. Effects of Audio-Visual Integration on the Detection of Masked Speech and Non-Speech Sounds

    ERIC Educational Resources Information Center

    Eramudugolla, Ranmalee; Henderson, Rachel; Mattingley, Jason B.

    2011-01-01

    Integration of simultaneous auditory and visual information about an event can enhance our ability to detect that event. This is particularly evident in the perception of speech, where the articulatory gestures of the speaker's lips and face can significantly improve the listener's detection and identification of the message, especially when that…

  4. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data

    PubMed Central

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-01-01

    Background Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. Results We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. Conclusion MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine. PMID:17937818

  5. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data.

    PubMed

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-10-15

    Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.

  6. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA.

    PubMed

    Wilbiks, Jonathan M P; Dyson, Benjamin J

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus.

  7. The Dynamics and Neural Correlates of Audio-Visual Integration Capacity as Determined by Temporal Unpredictability, Proactive Interference, and SOA

    PubMed Central

    Wilbiks, Jonathan M. P.; Dyson, Benjamin J.

    2016-01-01

    Over 5 experiments, we challenge the idea that the capacity of audio-visual integration need be fixed at 1 item. We observe that the conditions under which audio-visual integration is most likely to exceed 1 occur when stimulus change operates at a slow rather than fast rate of presentation and when the task is of intermediate difficulty such as when low levels of proactive interference (3 rather than 8 interfering visual presentations) are combined with the temporal unpredictability of the critical frame (Experiment 2), or, high levels of proactive interference are combined with the temporal predictability of the critical frame (Experiment 4). Neural data suggest that capacity might also be determined by the quality of perceptual information entering working memory. Experiment 5 supported the proposition that audio-visual integration was at play during the previous experiments. The data are consistent with the dynamic nature usually associated with cross-modal binding, and while audio-visual integration capacity likely cannot exceed uni-modal capacity estimates, performance may be better than being able to associate only one visual stimulus with one auditory stimulus. PMID:27977790

  8. Integration trumps selection in object recognition.

    PubMed

    Saarela, Toni P; Landy, Michael S

    2015-03-30

    Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several "cues" (color, luminance, texture, etc.), and humans can integrate sensory cues to improve detection and recognition [1-3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue invariance by responding to a given shape independent of the visual cue defining it [5-8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10, 11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11, 12], imaging [13-16], and single-cell and neural population recordings [17, 18]. Besides single features, attention can select whole objects [19-21]. Objects are among the suggested "units" of attention because attention to a single feature of an object causes the selection of all of its features [19-21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Integration trumps selection in object recognition

    PubMed Central

    Saarela, Toni P.; Landy, Michael S.

    2015-01-01

    Summary Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several “cues” (color, luminance, texture etc.), and humans can integrate sensory cues to improve detection and recognition [1–3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue-invariance by responding to a given shape independent of the visual cue defining it [5–8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10,11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11,12], imaging [13–16], and single-cell and neural population recordings [17,18]. Besides single features, attention can select whole objects [19–21]. Objects are among the suggested “units” of attention because attention to a single feature of an object causes the selection of all of its features [19–21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near-optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. PMID:25802154

  10. Treelink: data integration, clustering and visualization of phylogenetic trees.

    PubMed

    Allende, Christian; Sohn, Erik; Little, Cedric

    2015-12-29

    Phylogenetic trees are central to a wide range of biological studies. In many of these studies, tree nodes need to be associated with a variety of attributes. For example, in studies concerned with viral relationships, tree nodes are associated with epidemiological information, such as location, age and subtype. Gene trees used in comparative genomics are usually linked with taxonomic information, such as functional annotations and events. A wide variety of tree visualization and annotation tools have been developed in the past, however none of them are intended for an integrative and comparative analysis. Treelink is platform-independent software for linking datasets and sequence files to phylogenetic trees. The application allows an automated integration of datasets to trees for operations such as classifying a tree based on a field or showing the distribution of selected data attributes in branches and leaves. Genomic and proteomic sequences can also be linked to the tree and extracted from internal and external nodes. A novel clustering algorithm to simplify trees and display the most divergent clades was also developed, where validation can be achieved using the data integration and classification function. Integrated geographical information allows ancestral character reconstruction for phylogeographic plotting based on parsimony and likelihood algorithms. Our software can successfully integrate phylogenetic trees with different data sources, and perform operations to differentiate and visualize those differences within a tree. File support includes the most popular formats such as newick and csv. Exporting visualizations as images, cluster outputs and genomic sequences is supported. Treelink is available as a web and desktop application at http://www.treelinkapp.com.
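
    The data-linking operation at the heart of such tools is simple to sketch. The snippet below joins a CSV of node attributes to a Newick tree using Biopython; Biopython and the file and column names are stand-ins chosen for illustration, not part of Treelink itself:

        import csv
        from Bio import Phylo

        tree = Phylo.read("tree.nwk", "newick")

        # Expect a 'name' column matching leaf names, plus attribute columns
        # such as location, age, or subtype.
        with open("attributes.csv", newline="") as f:
            attrs = {row["name"]: row for row in csv.DictReader(f)}

        # Annotate each leaf label with a selected attribute, e.g. 'location'.
        for leaf in tree.get_terminals():
            row = attrs.get(leaf.name)
            if row and row.get("location"):
                leaf.name = f"{leaf.name}|{row['location']}"

        Phylo.write(tree, "annotated.nwk", "newick")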

  11. Feature integration and object representations along the dorsal stream visual hierarchy

    PubMed Central

    Perry, Carolyn Jeane; Fallah, Mazyar

    2014-01-01

    The visual system is split into two processing streams: a ventral stream that receives color and form information and a dorsal stream that receives motion information. Each stream processes that information hierarchically, with each stage building upon the previous. In the ventral stream this leads to the formation of object representations that ultimately allow for object recognition regardless of changes in the surrounding environment. In the dorsal stream, this hierarchical processing has classically been thought to lead to the computation of complex motion in three dimensions. However, there is evidence to suggest that there is integration of both dorsal and ventral stream information into motion computation processes, giving rise to intermediate object representations, which facilitate object selection and decision making mechanisms in the dorsal stream. First we review the hierarchical processing of motion along the dorsal stream and the building up of object representations along the ventral stream. Then we discuss recent work on the integration of ventral and dorsal stream features that lead to intermediate object representations in the dorsal stream. Finally we propose a framework describing how and at what stage different features are integrated into dorsal visual stream object representations. Determining the integration of features along the dorsal stream is necessary to understand not only how the dorsal stream builds up an object representation but also which computations are performed on object representations instead of local features. PMID:25140147

  12. Electrophysiological evidence for Audio-visuo-lingual speech integration.

    PubMed

    Treille, Avril; Vilain, Coriandre; Schwartz, Jean-Luc; Hueber, Thomas; Sato, Marc

    2018-01-31

    Recent neurophysiological studies demonstrate that audio-visual speech integration partly operates through temporal expectations and speech-specific predictions. From these results, one common view is that the binding of auditory and visual, lipread, speech cues relies on their joint probability and prior associative audio-visual experience. The present EEG study examined whether visual tongue movements integrate with relevant speech sounds, despite little associative audio-visual experience between the two modalities. A second objective was to determine possible similarities and differences of audio-visual speech integration between unusual audio-visuo-lingual and classical audio-visuo-labial modalities. To this aim, participants were presented with auditory, visual, and audio-visual isolated syllables, with the visual presentation related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, with lingual and facial movements previously recorded by an ultrasound imaging system and a video camera. In line with previous EEG studies, our results revealed an amplitude decrease and a latency facilitation of P2 auditory evoked potentials in both audio-visuo-lingual and audio-visuo-labial conditions compared to the sum of unimodal conditions. These results argue against the view that auditory and visual speech cues solely integrate based on prior associative audio-visual perceptual experience. Rather, they suggest that dynamic and phonetic informational cues are sharable across sensory modalities, possibly through a cross-modal transfer of implicit articulatory motor knowledge. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Mobile medical visual information retrieval.

    PubMed

    Depeursinge, Adrien; Duc, Samuel; Eggel, Ivan; Müller, Henning

    2012-01-01

    In this paper, we propose mobile access to peer-reviewed medical information based on textual search and content-based visual image retrieval. Web-based interfaces designed for limited screen space were developed to query via web services a medical information retrieval engine optimizing the amount of data to be transferred in wireless form. Visual and textual retrieval engines with state-of-the-art performance were integrated. Results obtained show a good usability of the software. Future use in clinical environments has the potential of increasing quality of patient care through bedside access to the medical literature in context.

  14. Invertebrate neurobiology: visual direction of arm movements in an octopus.

    PubMed

    Niven, Jeremy E

    2011-03-22

    An operant task in which octopuses learn to locate food by a visual cue in a three-choice maze shows that they are capable of integrating visual and mechanosensory information to direct their arm movements to a goal. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Attention affects visual perceptual processing near the hand.

    PubMed

    Cosman, Joshua D; Vecera, Shaun P

    2010-09-01

    Specialized, bimodal neural systems integrate visual and tactile information in the space near the hand. Here, we show that visuo-tactile representations allow attention to influence early perceptual processing, namely, figure-ground assignment. Regions that were reached toward were more likely than other regions to be assigned as foreground figures, and hand position competed with image-based information to bias figure-ground assignment. Our findings suggest that hand position allows attention to influence visual perceptual processing and that visual processes typically viewed as unimodal can be influenced by bimodal visuo-tactile representations.

  16. Cognitive and psychological science insights to improve climate change data visualization

    NASA Astrophysics Data System (ADS)

    Harold, Jordan; Lorenzoni, Irene; Shipley, Thomas F.; Coventry, Kenny R.

    2016-12-01

    Visualization of climate data plays an integral role in the communication of climate change findings to both expert and non-expert audiences. The cognitive and psychological sciences can provide valuable insights into how to improve visualization of climate data based on knowledge of how the human brain processes visual and linguistic information. We review four key research areas to demonstrate their potential to make data more accessible to diverse audiences: directing visual attention, visual complexity, making inferences from visuals, and the mapping between visuals and language. We present evidence-informed guidelines to help climate scientists increase the accessibility of graphics to non-experts, and illustrate how the guidelines can work in practice in the context of Intergovernmental Panel on Climate Change graphics.

  17. Skill dependent audiovisual integration in the fusiform induces repetition suppression.

    PubMed

    McNorgan, Chris; Booth, James R

    2015-02-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    PubMed Central

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience-driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest that learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  19. Model-based analysis of pattern motion processing in mouse primary visual cortex

    PubMed Central

    Muir, Dylan R.; Roth, Morgane M.; Helmchen, Fritjof; Kampa, Björn M.

    2015-01-01

    Neurons in sensory areas of neocortex exhibit responses tuned to specific features of the environment. In visual cortex, information about features such as edges or textures with particular orientations must be integrated to recognize a visual scene or object. Connectivity studies in rodent cortex have revealed that neurons make specific connections within sub-networks sharing common input tuning. In principle, this sub-network architecture enables local cortical circuits to integrate sensory information. However, whether feature integration indeed occurs locally in rodent primary sensory areas has not been examined directly. We studied local integration of sensory features in primary visual cortex (V1) of the mouse by presenting drifting grating and plaid stimuli, while recording the activity of neuronal populations with two-photon calcium imaging. Using a Bayesian model-based analysis framework, we classified single-cell responses as being selective for either individual grating components or for moving plaid patterns. Rather than relying on trial-averaged responses, our model-based framework takes into account single-trial responses and can easily be extended to consider any number of arbitrary predictive models. Our analysis method was able to successfully classify significantly more responses than traditional partial correlation (PC) analysis, and provides a rigorous statistical framework to rank any number of models and reject poorly performing models. We also found a large proportion of cells that respond strongly to only one stimulus class. In addition, a quarter of selectively responding neurons had more complex responses that could not be explained by any simple integration model. Our results show that a broad range of pattern integration processes already take place at the level of V1. This diversity of integration is consistent with processing of visual inputs by local sub-networks within V1 that are tuned to combinations of sensory features. PMID:26300738
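    For orientation, the traditional partial-correlation (PC) analysis that the authors use as a baseline has a compact closed form: correlate the measured tuning curve with the component and pattern predictions, partial out the shared variance of the two predictions, and compare the Fisher-transformed coefficients. The sketch below is a generic implementation of that classic test (in the style of Movshon and colleagues), not the paper's Bayesian framework; the function names and the 1.28 criterion (one conventional 90% confidence cut-off) are our choices.

    ```python
    import numpy as np

    def partial_corr(r_xy, r_xz, r_yz):
        """Correlation of x and y with z partialled out."""
        return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

    def classify_pc(resp, pred_component, pred_pattern):
        """Classify one cell's plaid response as component- or pattern-selective.

        resp           -- measured direction tuning curve (one value per direction)
        pred_component -- tuning predicted if the cell responds to the gratings
        pred_pattern   -- tuning predicted if the cell tracks plaid direction
        """
        n = len(resp)
        r_c = np.corrcoef(resp, pred_component)[0, 1]
        r_p = np.corrcoef(resp, pred_pattern)[0, 1]
        r_pc = np.corrcoef(pred_pattern, pred_component)[0, 1]

        # Partial correlations remove the variance shared by the two predictions.
        R_c = partial_corr(r_c, r_p, r_pc)
        R_p = partial_corr(r_p, r_c, r_pc)

        # Fisher z-transform so the two coefficients can be compared statistically.
        z_c = np.arctanh(R_c) * np.sqrt(n - 3)
        z_p = np.arctanh(R_p) * np.sqrt(n - 3)

        if z_p - z_c > 1.28 and z_p > 1.28:
            return "pattern"
        if z_c - z_p > 1.28 and z_c > 1.28:
            return "component"
        return "unclassified"
    ```

    As the abstract notes, a cell whose responses fit neither prediction well simply ends up unclassified here, whereas a model-ranking framework can reject both models explicitly and score arbitrary alternatives.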

  20. Fuels planning: science synthesis and integration; forest structure and fire hazard fact sheet 03: visualizing forest structure and fuels

    Treesearch

    Rocky Mountain Research Station USDA Forest Service

    2004-01-01

    The software described in this fact sheet provides managers with tools for visualizing forest and fuels information. Computer-based landscape simulations can help visualize stand and landscape conditions and the effects of different management treatments and fuel changes over time. These visualizations can assist forest planning by considering a range of management...

  1. A Dynamic Three-Dimensional Network Visualization Program for Integration into CyberCIEGE and Other Network Visualization Scenarios

    DTIC Science & Technology

    2007-06-01

    information flow involved in network attacks. This kind of information can be invaluable in learning how to best set up and defend computer networks...administrators, and those interested in learning about securing networks a way to conceptualize this complex system of computing. NTAV3D will provide a three...teaching with visual and other components can make learning more effective” (Baxley et al, 2006). A hyperbox (Alpern and Carter, 1991) is

  2. A Digital Mixed Methods Research Design: Integrating Multimodal Analysis with Data Mining and Information Visualization for Big Data Analytics

    ERIC Educational Resources Information Center

    O'Halloran, Kay L.; Tan, Sabine; Pham, Duc-Son; Bateman, John; Vande Moere, Andrew

    2018-01-01

    This article demonstrates how a digital environment offers new opportunities for transforming qualitative data into quantitative data in order to use data mining and information visualization for mixed methods research. The digital approach to mixed methods research is illustrated by a framework which combines qualitative methods of multimodal…

  3. Perception of Audio-Visual Speech Synchrony in Spanish-Speaking Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Pons, Ferran; Andreu, Llorenc; Sanz-Torrent, Monica; Buil-Legaz, Lucia; Lewkowicz, David J.

    2013-01-01

    Speech perception involves the integration of auditory and visual articulatory information, and thus requires the perception of temporal synchrony between this information. There is evidence that children with specific language impairment (SLI) have difficulty with auditory speech perception but it is not known if this is also true for the…

  4. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.

    PubMed

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-13

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.

  5. An integrated ball projection technology for the study of dynamic interceptive actions.

    PubMed

    Stone, J A; Panchuk, D; Davids, K; North, J S; Fairweather, I; Maynard, I W

    2014-12-01

    Dynamic interceptive actions, such as catching or hitting a ball, are important task vehicles for investigating the complex relationship between cognition, perception, and action in performance environments. Representative experimental designs have become more important recently, highlighting the need for research methods to ensure that the coupling of information and movement is faithfully maintained. However, retaining representative design while ensuring systematic control of experimental variables is challenging, due to the traditional tendency to employ reductionist motor responses such as button-pressing or micromovements. Here, we outline the methodology behind a custom-built, integrated ball projection technology that allows images of advanced visual information to be synchronized with ball projection. This integrated technology supports the controlled presentation of visual information to participants while they perform dynamic interceptive actions. We discuss theoretical ideas behind the integration of hardware and software, along with practical issues resolved in technological design, and emphasize how the system can be integrated with emerging developments such as mixed-reality environments. We conclude by considering future developments and applications of the integrated projection technology for research in human movement behaviors.

  6. Lexical interference effects in sentence processing: Evidence from the visual world paradigm and self-organizing models

    PubMed Central

    Kukona, Anuenue; Cho, Pyeong Whan; Magnuson, James S.; Tabor, Whitney

    2014-01-01

    Psycholinguistic research spanning a number of decades has produced diverging results with regard to the nature of constraint integration in online sentence processing. For example, evidence that language users anticipatorily fixate likely upcoming referents in advance of evidence in the speech signal supports rapid context integration. By contrast, evidence that language users activate representations that conflict with contextual constraints, or only indirectly satisfy them, supports non-integration or late integration. Here, we report on a self-organizing neural network framework that addresses one aspect of constraint integration: the integration of incoming lexical information (i.e., an incoming word) with sentence context information (i.e., from preceding words in an unfolding utterance). In two simulations, we show that the framework predicts both classic results concerned with lexical ambiguity resolution (Swinney, 1979; Tanenhaus, Leiman, & Seidenberg, 1979), which suggest late context integration, and results demonstrating anticipatory eye movements (e.g., Altmann & Kamide, 1999), which support rapid context integration. We also report two experiments using the visual world paradigm that confirm a new prediction of the framework. Listeners heard sentences like “The boy will eat the white…,” while viewing visual displays with objects like a white cake (i.e., a predictable direct object of “eat”), white car (i.e., an object not predicted by “eat,” but consistent with “white”), and distractors. Consistent with our simulation predictions, we found that while listeners fixated white cake most, they also fixated white car more than unrelated distractors in this highly constraining sentence (and visual) context. PMID:24245535

  7. Preprocessing of emotional visual information in the human piriform cortex.

    PubMed

    Schulze, Patrick; Bestgen, Anne-Kathrin; Lech, Robert K; Kuchinke, Lars; Suchan, Boris

    2017-08-23

    This study examines the processing of visual information by the olfactory system in humans. Recent data point to the processing of visual stimuli by the piriform cortex, a region mainly known as part of the primary olfactory cortex. Moreover, the piriform cortex generates predictive templates of olfactory stimuli to facilitate olfactory processing. This study addresses the open question of whether this region is also capable of preprocessing emotional visual information. To gain insight into the preprocessing and transfer of emotional visual information into olfactory processing, we recorded hemodynamic responses during affective priming using functional magnetic resonance imaging (fMRI). Odors of different valence (pleasant, neutral and unpleasant) were primed by images of emotional facial expressions (happy, neutral and disgust). Our findings are the first to demonstrate that the piriform cortex preprocesses emotional visual information prior to any olfactory stimulation and that the emotional connotation of this preprocessing is subsequently transferred and integrated into an extended olfactory network for olfactory processing.

  8. The informativity of sound modulates crossmodal facilitation of visual discrimination: an fMRI study.

    PubMed

    Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin

    2017-01-18

    Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced uncertainty about the timing of the visual display and improved perceptual responses (informative sound). However, the neural mechanism by which the informativity of sound affects crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of sound informativity in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between auditory and visual stimuli. The functional MRI results showed informativity-induced activation enhancement in regions including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that activity in the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within the audiovisual stimuli learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.

  9. Manipulating the fidelity of lower extremity visual feedback to identify obstacle negotiation strategies in immersive virtual reality.

    PubMed

    Kim, Aram; Zhou, Zixuan; Kretch, Kari S; Finley, James M

    2017-07-01

    The ability to successfully navigate obstacles in our environment requires integration of visual information about the environment with estimates of our body's state. Previous studies have used partial occlusion of the visual field to explore how information about the body and impending obstacles is integrated to mediate a successful clearance strategy. However, because these manipulations often remove information about both the body and the obstacle, it remains to be seen how information about the lower extremities alone is utilized during obstacle crossing. Here, we used an immersive virtual reality (VR) interface to explore how visual feedback of the lower extremities influences obstacle crossing performance. Participants wore a head-mounted display while walking on a treadmill and were instructed to step over obstacles in a virtual corridor in four different feedback trials. The trials involved: (1) no visual feedback of the lower extremities, (2) an endpoint-only model, (3) a link-segment model, and (4) a volumetric multi-segment model. We found that, compared to no model, the volumetric model improved the success rate, led participants to place their trailing foot before crossing and their leading foot after crossing more consistently, and resulted in leading-foot placement closer to the obstacle after crossing. This knowledge is critical for the design of obstacle negotiation tasks in immersive virtual environments, as it may indicate the fidelity necessary to reproduce ecologically valid practice environments.

  10. Combining computational analyses and interactive visualization for document exploration and sensemaking in jigsaw.

    PubMed

    Görg, Carsten; Liu, Zhicheng; Kihm, Jaeyeon; Choo, Jaegul; Park, Haesun; Stasko, John

    2013-10-01

    Investigators across many disciplines and organizations must sift through large collections of text documents to understand and piece together information. Whether they are fighting crime, curing diseases, deciding what car to buy, or researching a new field, inevitably investigators will encounter text documents. Taking a visual analytics approach, we integrate multiple text analysis algorithms with a suite of interactive visualizations to provide a flexible and powerful environment that allows analysts to explore collections of documents while sensemaking. Our particular focus is on the process of integrating automated analyses with interactive visualizations in a smooth and fluid manner. We illustrate this integration through two example scenarios: an academic researcher examining InfoVis and VAST conference papers and a consumer exploring car reviews while pondering a purchase decision. Finally, we provide lessons learned toward the design and implementation of visual analytics systems for document exploration and understanding.

  11. The Integration of Multi-State Clarus Data into Data Visualization Tools

    DOT National Transportation Integrated Search

    2011-12-20

    This project focused on the integration of all Clarus Data into the Regional Integrated Transportation Information System (RITIS) for real-time situational awareness and historical safety data analysis. The initial outcomes of this project are the fu...

  12. Neural correlates of visuospatial consciousness in 3D default space: insights from contralateral neglect syndrome.

    PubMed

    Jerath, Ravinder; Crawford, Molly W

    2014-08-01

    One of the most compelling questions still unanswered in neuroscience is how consciousness arises. In this article, we examine visual processing, the parietal lobe, and contralateral neglect syndrome as a window into consciousness and into how the brain functions as the mind, and we introduce a mechanism for the processing of visual information and its role in consciousness. We propose that consciousness arises from integration of information from throughout the body and brain by the thalamus, and that the thalamus reimages visual and other sensory information from throughout the cortex in a default three-dimensional space in the mind. We further suggest that the thalamus generates a dynamic default three-dimensional space by integrating processed information from corticothalamic feedback loops, creating an infrastructure that may form the basis of our consciousness. Further experimental evidence is needed to examine and support this hypothesis and the role of the thalamus, and to further elucidate the mechanism of consciousness. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Combining path integration and remembered landmarks when navigating without vision.

    PubMed

    Kalia, Amy A; Schrater, Paul R; Legge, Gordon E

    2013-01-01

    This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information.

  14. Combining Path Integration and Remembered Landmarks When Navigating without Vision

    PubMed Central

    Kalia, Amy A.; Schrater, Paul R.; Legge, Gordon E.

    2013-01-01

    This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information. PMID:24039742
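    The gated averaging described in the two records above can be caricatured in a few lines: combine the two position estimates with reliability-based weights only when they are judged congruent, and otherwise fall back on path integration alone. This is our illustration of the idea, not the authors' model; the variances and the congruence threshold are invented parameters.

    ```python
    def gated_combination(landmark, path_int, var_l, var_p, threshold):
        """Reliability-weighted average of two location cues, gated on congruency.

        landmark, path_int -- position estimates from the two cues (metres)
        var_l, var_p       -- their variances (lower variance = more reliable)
        threshold          -- largest discrepancy still treated as congruent
        """
        if abs(landmark - path_int) > threshold:
            # Conflict judged too large: the remembered landmark is ignored.
            return path_int, var_p
        w = (1.0 / var_l) / (1.0 / var_l + 1.0 / var_p)
        estimate = w * landmark + (1.0 - w) * path_int
        variance = (var_l * var_p) / (var_l + var_p)  # more precise than either cue
        return estimate, variance

    # Congruent cues: the fused estimate is more precise than path integration alone.
    print(gated_combination(landmark=4.8, path_int=5.4, var_l=0.25, var_p=1.0, threshold=1.5))
    # Large conflict: the gate shuts and only path integration is used.
    print(gated_combination(landmark=2.0, path_int=5.4, var_l=0.25, var_p=1.0, threshold=1.5))
    ```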

  15. Hearing gestures, seeing music: vision influences perceived tone duration.

    PubMed

    Schutz, Michael; Lipscomb, Scott

    2007-01-01

    Percussionists inadvertently use visual information to strategically manipulate audience perception of note duration. Videos of long (L) and short (S) notes performed by a world-renowned percussionist were separated into visual (Lv, Sv) and auditory (La, Sa) components. Visual components contained only the gesture used to perform the note, auditory components the acoustic note itself. Audio and visual components were then crossed to create realistic musical stimuli. Participants were informed of the mismatch, and asked to rate note duration of these audio-visual pairs based on sound alone. Ratings varied based on visual (Lv versus Sv), but not auditory (La versus Sa) components. Therefore while longer gestures do not make longer notes, longer gestures make longer sounding notes through the integration of sensory information. This finding contradicts previous research showing that audition dominates temporal tasks such as duration judgment.

  16. Development of a geographic visualization and communications systems (GVCS) for monitoring remote vehicles

    DOT National Transportation Integrated Search

    1998-03-30

    The purpose of this project is to integrate a variety of geographic information systems : capabilities and telecommunication technologies for potential use in geographic network and : visualization applications. The specific technical goals of the pr...

  17. Optimal visual-haptic integration with articulated tools.

    PubMed

    Takahashi, Chie; Watt, Simon J

    2017-05-01

    When we feel and see an object, the nervous system integrates visual and haptic information optimally, exploiting the redundancy in multiple signals to estimate properties more precisely than is possible from either signal alone. We examined whether optimal integration is similarly achieved when using articulated tools. Such tools (tongs, pliers, etc.) are a defining characteristic of human hand function, but complicate the classical sensory 'correspondence problem' underlying multisensory integration. Optimal integration requires establishing the relationship between signals acquired by different sensors (hand and eye), and therefore in fundamentally unrelated units. The system must also determine when signals refer to the same property of the world (seeing and feeling the same thing) and only integrate those that do. This could be achieved by comparing the pattern of current visual and haptic input to known statistics of their normal relationship. Articulated tools disrupt this relationship, however, by altering the geometrical relationship between object properties and hand posture (the haptic signal). We examined whether different tool configurations are taken into account in visual-haptic integration. We indexed integration by measuring the precision of size estimates, and compared our results to optimal predictions from a maximum-likelihood integrator. Integration was near optimal, independent of tool configuration/hand posture, provided that visual and haptic signals referred to the same object in the world. Thus, sensory correspondence was determined correctly (trial-by-trial), taking tool configuration into account. This reveals highly flexible multisensory integration underlying tool use, consistent with the brain constructing internal models of tools' properties.
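    For reference, the optimal predictions being tested take the standard maximum-likelihood (reliability-weighted) form; the notation below is ours rather than quoted from the paper:

    ```latex
    \hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H,
    \qquad
    w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_H^2},
    \qquad
    w_H = 1 - w_V,
    \qquad
    \sigma_{VH}^2 = \frac{\sigma_V^2\,\sigma_H^2}{\sigma_V^2 + \sigma_H^2}
    ```

    Because the combined variance is smaller than either single-cue variance, the precision of size estimates gives a direct behavioral index of whether integration occurred, which is how the study measures it.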

  18. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows context-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.

  19. Postural Control Deficits in Autism Spectrum Disorder: The Role of Sensory Integration

    ERIC Educational Resources Information Center

    Doumas, Michail; McKenna, Roisin; Murphy, Blain

    2016-01-01

    We investigated the nature of sensory integration deficits in postural control of young adults with ASD. Postural control was assessed in a fixed environment, and in three environments in which sensory information about body sway from visual, proprioceptive or both channels was inaccurate. Furthermore, two levels of inaccurate information were…

  20. Emergency Response Virtual Environment for Safe Schools

    NASA Technical Reports Server (NTRS)

    Wasfy, Ayman; Walker, Teresa

    2008-01-01

    An intelligent emergency response virtual environment (ERVE) that provides emergency first responders, response planners, and managers with situational awareness as well as training and support for safe schools is presented. ERVE incorporates an intelligent agent facility for guiding and assisting the user in the context of emergency response operations. Response information folders capture key information about the school. The system enables interactive 3D visualization of schools and academic campuses, including the terrain and the buildings' exteriors and interiors, in an easy-to-use Web-based interface. ERVE incorporates live camera and sensor feeds and can be integrated with other simulations such as chemical plume simulation. The system is integrated with a Geographical Information System (GIS) to enable situational awareness of emergency events and assessment of their effects on schools in a geographic area. ERVE can also be integrated with emergency text messaging notification systems. Using ERVE, it is now possible to address safe schools' emergency management needs with a scalable, seamlessly integrated, fully interactive, and visually compelling solution.

  1. Neural Integration in Body Perception.

    PubMed

    Ramsey, Richard

    2018-06-19

    The perception of other people is instrumental in guiding social interactions. For example, the appearance of the human body cues a wide range of inferences regarding sex, age, health, and personality, as well as emotional state and intentions, which influence social behavior. To date, most neuroscience research on body perception has aimed to characterize the functional contribution of segregated patches of cortex in the ventral visual stream. In light of the growing prominence of network architectures in neuroscience, the current article reviews neuroimaging studies that measure functional integration between different brain regions during body perception. The review demonstrates that body perception is not restricted to processing in the ventral visual stream but instead reflects a functional alliance between the ventral visual stream and extended neural systems associated with action perception, executive functions, and theory of mind. Overall, these findings demonstrate how body percepts are constructed through interactions in distributed brain networks and underscore that functional segregation and integration should be considered together when formulating neurocognitive theories of body perception. Insight from such an updated model of body perception generalizes to inform the organizational structure of social perception and cognition more generally and also informs disorders of body image, such as anorexia nervosa, which may rely on atypical integration of body-related information.

  2. Visual Messages: Integrating Imagery into Instruction. A Media Literacy Resource for Teachers. Second Edition.

    ERIC Educational Resources Information Center

    Considine, David M.; Haley, Gail E.

    Connecting the curriculum of the K-12 classroom with the "curriculum of the living room," this book helps teachers and library media specialists maintain a viable program of visual (or media) literacy by presenting background information on the visual literacy movement and dozens of effective strategies and classroom activities that are ready to…

  3. Visual Analytics in Public Safety: Example Capabilities for Example Government Agencies

    DTIC Science & Technology

    2011-10-01

    is not limited to: the Police Records Information Management Environment for British Columbia (PRIME-BC), the Police Reporting and Occurrence System...and filtering for rapid identification of relevant documents - Graphical environment for visual evidence marshaling - Interactive linking and...analytical reasoning facilitated by interactive visual interfaces and integration with computational analytics. Indeed, a wide variety of technologies

  4. A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech.

    PubMed

    Magnotti, John F; Beauchamp, Michael S

    2017-02-01

    Audiovisual speech integration combines information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba). We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
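    The core computation in a model of this kind can be sketched generically: compute the likelihood of the paired measurements under a common cause and under separate causes, then form the posterior probability of a common cause. The code below implements the standard two-cue Bayesian causal-inference posterior (in the style of Körding et al., 2007), not the authors' CIMS model itself; all parameter values are illustrative.

    ```python
    import math

    def prob_common_cause(x_a, x_v, sig_a, sig_v, sig_p, mu_p=0.0, p_common=0.5):
        """Posterior probability that auditory and visual cues share one cause.

        x_a, x_v     -- noisy auditory and visual measurements
        sig_a, sig_v -- measurement noise standard deviations
        sig_p, mu_p  -- Gaussian prior over the underlying stimulus
        p_common     -- prior probability of a common cause
        """
        va, vv, vp = sig_a**2, sig_v**2, sig_p**2

        # Likelihood of both measurements under one shared cause (C = 1).
        denom = va * vv + va * vp + vv * vp
        like_c1 = math.exp(-0.5 * ((x_a - x_v)**2 * vp
                                   + (x_a - mu_p)**2 * vv
                                   + (x_v - mu_p)**2 * va) / denom) \
            / (2 * math.pi * math.sqrt(denom))

        # Likelihood under independent causes (C = 2): two separate marginals.
        like_c2 = math.exp(-0.5 * ((x_a - mu_p)**2 / (va + vp)
                                   + (x_v - mu_p)**2 / (vv + vp))) \
            / (2 * math.pi * math.sqrt((va + vp) * (vv + vp)))

        return like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

    # Nearly matching cues favour integration (fusion, as in the McGurk percept)...
    print(prob_common_cause(x_a=1.0, x_v=1.2, sig_a=1.0, sig_v=0.5, sig_p=10.0))
    # ...while discrepant cues favour segregation, so no fused percept is predicted.
    print(prob_common_cause(x_a=1.0, x_v=8.0, sig_a=1.0, sig_v=0.5, sig_p=10.0))
    ```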

  5. The evolution of meaning: spatio-temporal dynamics of visual object recognition.

    PubMed

    Clarke, Alex; Taylor, Kirsten I; Tyler, Lorraine K

    2011-08-01

    Research on the spatio-temporal dynamics of visual object recognition suggests a recurrent, interactive model whereby an initial feedforward sweep through the ventral stream to prefrontal cortex is followed by recurrent interactions. However, critical questions remain regarding the factors that mediate the degree of recurrent interactions necessary for meaningful object recognition. The novel prediction we test here is that recurrent interactivity is driven by increasing semantic integration demands as defined by the complexity of semantic information required by the task and driven by the stimuli. To test this prediction, we recorded magnetoencephalography data while participants named living and nonliving objects during two naming tasks. We found that the spatio-temporal dynamics of neural activity were modulated by the level of semantic integration required. Specifically, source reconstructed time courses and phase synchronization measures showed increased recurrent interactions as a function of semantic integration demands. These findings demonstrate that the cortical dynamics of object processing are modulated by the complexity of semantic information required from the visual input.

  6. Features and functions of nonlinear spatial integration by retinal ganglion cells.

    PubMed

    Gollisch, Tim

    2013-11-01

    Ganglion cells in the vertebrate retina integrate visual information over their receptive fields. They do so by pooling presynaptic excitatory inputs from typically many bipolar cells, which themselves collect inputs from several photoreceptors. In addition, inhibitory interactions mediated by horizontal cells and amacrine cells modulate the structure of the receptive field. In many models, this spatial integration is assumed to occur in a linear fashion. Yet, it has long been known that spatial integration by retinal ganglion cells also incurs nonlinear phenomena. Moreover, several recent examples have shown that nonlinear spatial integration is tightly connected to specific visual functions performed by different types of retinal ganglion cells. This work discusses these advances in understanding the role of nonlinear spatial integration and reviews recent efforts to quantitatively study the nature and mechanisms underlying spatial nonlinearities. These new insights point towards a critical role of nonlinearities within ganglion cell receptive fields for capturing responses of the cells to natural and behaviorally relevant visual stimuli. In the long run, nonlinear phenomena of spatial integration may also prove important for implementing the actual neural code of retinal neurons when designing visual prostheses for the eye. Copyright © 2012 Elsevier Ltd. All rights reserved.
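    One way to make the linear/nonlinear distinction concrete is the classic subunit picture: instead of summing contrast linearly across its receptive field, the ganglion cell sums rectified bipolar-cell subunit signals. The sketch below is our generic illustration of that idea, not a model taken from the review. A linear pooler gives no response to a fine grating whose light and dark bars cancel within the field, whereas rectified subunits respond at both grating phases (the classic frequency-doubled response).

    ```python
    import numpy as np

    def rf_response(stimulus, weights, rectify_subunits):
        """Pool subunit signals across a 1-D receptive field.

        stimulus         -- contrast at each subunit location
        weights          -- spatial weight of each subunit
        rectify_subunits -- if True, half-rectify each subunit before pooling
        """
        signals = stimulus * weights
        if rectify_subunits:
            signals = np.maximum(signals, 0.0)  # bipolar-output nonlinearity
        return float(signals.sum())

    weights = np.ones(8)
    grating = np.array([1.0, -1.0] * 4)  # fine grating: light and dark bars cancel

    print(rf_response(grating, weights, rectify_subunits=False))   # 0.0 (linear: null)
    print(rf_response(-grating, weights, rectify_subunits=False))  # 0.0 (linear: null)
    print(rf_response(grating, weights, rectify_subunits=True))    # 4.0 (nonlinear)
    print(rf_response(-grating, weights, rectify_subunits=True))   # 4.0 (nonlinear)
    ```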

  7. The LifeWatch approach to the exploration of distributed species information

    PubMed Central

    Fuentes, Daniel; Fiore, Nicola

    2014-01-01

    This paper introduces a new method of automatically extracting, integrating and presenting information regarding species from the most relevant online taxonomic resources. First, the information is extracted and joined using data wrappers and integration solutions. Then, an analytical tool is used to provide a visual representation of the data. The information is then integrated into a user-friendly content management system. The proposal has been implemented using data from the Global Biodiversity Information Facility (GBIF), the Catalogue of Life (CoL), the World Register of Marine Species (WoRMS), the Integrated Taxonomic Information System (ITIS) and the Global Names Index (GNI). The approach improves data quality, avoiding taxonomic and nomenclature errors whilst increasing the availability and accessibility of the information. PMID:25589865

  8. Clustering of samples and variables with mixed-type data

    PubMed Central

    Edelmann, Dominic; Kopp-Schneider, Annette

    2017-01-01

    Analysis of data measured on different scales is a relevant challenge. Biomedical studies often focus on high-throughput datasets of, e.g., quantitative measurements. However, the need for integration of other features possibly measured on different scales, e.g. clinical or cytogenetic factors, becomes increasingly important. The analysis results (e.g. a selection of relevant genes) are then visualized, while adding further information, like clinical factors, on top. However, a more integrative approach is desirable, where all available data are analyzed jointly, and where also in the visualization different data sources are combined in a more natural way. Here we specifically target integrative visualization and present a heatmap-style graphic display. To this end, we develop and explore methods for clustering mixed-type data, with special focus on clustering variables. Clustering of variables does not receive as much attention in the literature as does clustering of samples. We extend the variables clustering methodology by two new approaches, one based on the combination of different association measures and the other on distance correlation. With simulation studies we evaluate and compare different clustering strategies. Applying specific methods for mixed-type data proves to be comparable and in many cases beneficial as compared to standard approaches applied to corresponding quantitative or binarized data. Our two novel approaches for mixed-type variables show similar or better performance than the existing methods ClustOfVar and bias-corrected mutual information. Further, in contrast to ClustOfVar, our methods provide dissimilarity matrices, which is an advantage, especially for the purpose of visualization. Real data examples aim to give an impression of various kinds of potential applications for the integrative heatmap and other graphical displays based on dissimilarity matrices. We demonstrate that the presented integrative heatmap provides more information than common data displays about the relationship among variables and samples. The described clustering and visualization methods are implemented in our R package CluMix available from https://cran.r-project.org/web/packages/CluMix. PMID:29182671
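    The variable-clustering route via distance correlation can be prototyped compactly: give each variable its own within-variable distance matrix (absolute differences for numeric variables, a 0/1 metric for categorical ones), compute pairwise dissimilarities 1 - dCor between variables, and feed the resulting matrix to hierarchical clustering. This is our schematic reconstruction of the general approach, not the CluMix implementation, and the toy data are invented.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    def dcor(dx, dy):
        """Sample distance correlation from two pairwise-distance matrices."""
        def center(d):  # double-centering of a distance matrix
            return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()
        A, B = center(dx), center(dy)
        dcov2 = (A * B).mean()
        dvar = np.sqrt((A * A).mean() * (B * B).mean())
        return np.sqrt(dcov2 / dvar) if dvar > 0 else 0.0

    def dist_matrix(x):
        """Within-variable distances: numeric -> |xi - xj|, categorical -> 0/1."""
        x = np.asarray(x)
        if x.dtype.kind in "if":
            return np.abs(x[:, None] - x[None, :])
        return (x[:, None] != x[None, :]).astype(float)

    # Toy mixed-type data: two associated numeric variables and one categorical.
    rng = np.random.default_rng(0)
    age = rng.normal(50, 10, 40)
    marker = 0.5 * age + rng.normal(0, 2, 40)
    sex = rng.choice(["f", "m"], 40)

    mats = [dist_matrix(v) for v in (age, marker, sex)]
    D = np.zeros((3, 3))
    for i in range(3):
        for j in range(i + 1, 3):
            D[i, j] = D[j, i] = 1.0 - dcor(mats[i], mats[j])  # dissimilarity

    # The dissimilarity matrix feeds directly into dendrogram/heatmap ordering.
    print(fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust"))
    ```

    Having an explicit dissimilarity matrix is exactly the advantage the abstract points out: it plugs directly into standard heatmap and dendrogram machinery.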

  9. Interoceptive signals impact visual processing: Cardiac modulation of visual body perception.

    PubMed

    Ronchi, Roberta; Bernasconi, Fosco; Pfeiffer, Christian; Bello-Ruiz, Javier; Kaliuzhna, Mariia; Blanke, Olaf

    2017-09-01

    Multisensory perception research has largely focused on exteroceptive signals, but recent evidence has revealed the integration of interoceptive signals with exteroceptive information. Such research revealed that heartbeat signals affect sensory (e.g., visual) processing; however, it is unknown how they impact the perception of body images. Here we linked our participants' heartbeat to visual stimuli and investigated the spatio-temporal brain dynamics of cardio-visual stimulation on the processing of human body images. We recorded visual evoked potentials with 64-channel electroencephalography while showing a body or a scrambled body (control) that appeared either at the frequency of the participants' heartbeat, recorded on-line, or not (not synchronous, control). Extending earlier studies, we found a body-independent effect, with cardiac signals enhancing visual processing during two time periods (77-130 ms and 145-246 ms). Within the second (later) time window we detected a second effect, characterised by enhanced activity in parietal, temporo-occipital, inferior frontal, and right basal ganglia-insula regions, but only when non-scrambled body images were flashed synchronously with the heartbeat (208-224 ms). In conclusion, our results highlight the role of interoceptive information for the visual processing of human body pictures within a network integrating cardio-visual signals of relevance for perceptual and cognitive aspects of visual body processing. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. How cortical neurons help us see: visual recognition in the human brain

    PubMed Central

    Blumberg, Julie; Kreiman, Gabriel

    2010-01-01

    Through a series of complex transformations, the pixel-like input to the retina is converted into rich visual perceptions that constitute an integral part of visual recognition. Multiple visual problems arise due to damage or developmental abnormalities in the cortex of the brain. Here, we provide an overview of how visual information is processed along the ventral visual cortex in the human brain. We discuss how neurophysiological recordings in macaque monkeys and in humans can help us understand the computations performed by visual cortex. PMID:20811161

  11. Geovisualization for Smart Video Surveillance

    NASA Astrophysics Data System (ADS)

    Oves García, R.; Valentín, L.; Serrano, S. A.; Palacios-Alonso, M. A.; Sucar, L. Enrique

    2017-09-01

    Nowadays, with the emergence of smart cities and the creation of new sensors capable of connecting to the network, it is possible not only to monitor the entire infrastructure of a city, including roads, bridges, rail/subways, airports, communications, water, and power, but also to optimize its resources, plan its preventive maintenance, and monitor security aspects while maximizing services for its citizens. In particular, security is one of the most important issues due to the need to ensure the safety of people. However, a good security system must also take into account the way the information is presented. In order to show the amount of information generated by sensing devices in real time in an understandable way, several visualization techniques are proposed, both for local visualization (individual sensing devices) and for global visualization (the set of sensing devices as a whole). Taking into consideration that the information is produced and transmitted from a geographic location, the integration of a Geographic Information System to manage and visualize the behavior of the data becomes very relevant. With the purpose of facilitating the decision-making process in a security system, we have integrated the visualization techniques and the Geographic Information System to produce a smart security system, based on a cloud computing architecture, that shows relevant information about a set of areas monitored with video cameras.

  12. Superadditive responses in superior temporal sulcus predict audiovisual benefits in object categorization.

    PubMed

    Werner, Sebastian; Noppeney, Uta

    2010-08-01

    Merging information from multiple senses provides a more reliable percept of our environment. Yet, little is known about where and how various sensory features are combined within the cortical hierarchy. Combining functional magnetic resonance imaging and psychophysics, we investigated the neural mechanisms underlying integration of audiovisual object features. Subjects categorized or passively perceived audiovisual object stimuli with the informativeness (i.e., degradation) of the auditory and visual modalities being manipulated factorially. Controlling for low-level integration processes, we show higher level audiovisual integration selectively in the superior temporal sulci (STS) bilaterally. The multisensory interactions were primarily subadditive and even suppressive for intact stimuli but turned into additive effects for degraded stimuli. Consistent with the inverse effectiveness principle, auditory and visual informativeness determine the profile of audiovisual integration in STS similarly to the influence of physical stimulus intensity in the superior colliculus. Importantly, when holding stimulus degradation constant, subjects' audiovisual behavioral benefit predicts their multisensory integration profile in STS: only subjects that benefit from multisensory integration exhibit superadditive interactions, while those that do not benefit show suppressive interactions. In conclusion, superadditive and subadditive integration profiles in STS are functionally relevant and related to behavioral indices of multisensory integration with superadditive interactions mediating successful audiovisual object categorization.
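    The super-/sub-additivity distinction the abstract turns on is usually operationalized as a simple contrast on response amplitudes: compare the audiovisual response with the sum of the two unimodal responses. A minimal sketch (our illustration; the numbers are invented):

    ```python
    def additivity_index(av, a, v):
        """Positive -> superadditive (AV > A + V); negative -> subadditive."""
        return av - (a + v)

    # Degraded stimuli in a hypothetical STS voxel: AV exceeds the unimodal sum.
    print(additivity_index(av=1.9, a=0.8, v=0.7))   # 0.4  -> superadditive
    # Intact stimuli: AV falls below the unimodal sum.
    print(additivity_index(av=1.2, a=0.9, v=0.8))   # -0.5 -> subadditive
    ```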

  13. Integrating visual learning within a model-based ATR system

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark; Nebrich, Mark

    2017-05-01

    Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.

  14. Envision: An interactive system for the management and visualization of large geophysical data sets

    NASA Technical Reports Server (NTRS)

    Searight, K. R.; Wojtowicz, D. P.; Walsh, J. E.; Pathi, S.; Bowman, K. P.; Wilhelmson, R. B.

    1995-01-01

    Envision is a software project at the University of Illinois and Texas A&M, funded by NASA's Applied Information Systems Research Project. It provides researchers in the geophysical sciences convenient ways to manage, browse, and visualize large observed or model data sets. Envision integrates data management, analysis, and visualization of geophysical data in an interactive environment. It employs commonly used standards in data formats, operating systems, networking, and graphics. It also attempts, wherever possible, to integrate with existing scientific visualization and analysis software. Envision has an easy-to-use graphical interface, distributed process components, and an extensible design. It is a public domain package, freely available to the scientific community.

  15. Comparison of a Visual and Head Tactile Display for Soldier Navigation

    DTIC Science & Technology

    2013-12-01

    environments for nuclear power plant operators, air traffic controllers, and pilots are information intensive. These environments usually involve the indirect...queue, correcting aircraft conflicts, giving instruction, clearance and advice to pilots, and assigning aircraft to other work queues and airports...these dynamic, complex, and multitask environments (1) collect and integrate a plethora of visual information into decisions that are critical for

  16. Making sense of personal health information: challenges for information visualization.

    PubMed

    Faisal, Sarah; Blandford, Ann; Potts, Henry W W

    2013-09-01

    This article presents a systematic review of the literature on information visualization for making sense of personal health information. Based on this review, five application themes were identified: treatment planning, examination of patients' medical records, representation of pedigrees and family history, communication and shared decision making, and life management and health monitoring. While there are recognized design challenges associated with each of these themes, such as how best to represent data visually and integrate qualitative and quantitative information, other challenges and opportunities have received little attention to date. In this article, we highlight, in particular, the opportunities for supporting people in better understanding their own illnesses and making sense of their health conditions in order to manage them more effectively.

  17. Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention

    PubMed Central

    Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei

    2016-01-01

    An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193

  18. Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.

    PubMed

    Morrill, Ryan J; Hasenstaub, Andrea R

    2018-03-14

    The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on how, and what types of, cross-modal information is integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors.

  19. Visual information mining in remote sensing image archives

    NASA Astrophysics Data System (ADS)

    Pelizzari, Andrea; Descargues, Vincent; Datcu, Mihai P.

    2002-01-01

    The present article focuses on the development of interactive exploratory tools for visually mining the image content in large remote sensing archives. Two aspects are treated: the iconic visualization of the global information in the archive and the progressive visualization of image details. The proposed methods are integrated in the Image Information Mining (I2M) system. The images and image structure in the I2M system are indexed based on a probabilistic approach, and the resulting links are managed by a relational database. Both the intrinsic complexity of the observed images and the diversity of user requests result in a great number of associations in the database. Thus, new tools have been designed to visualize, in iconic representation, the relationships created during a query or information mining operation: visualization of the query results positioned on the geographical map, a quick-looks gallery, visualization of the measure of goodness of the query, and visualization of the image space for statistical evaluation purposes. Additionally, the I2M system is enhanced with progressive detail visualization in order to allow better access for operator inspection. I2M is a three-tier Java architecture and is optimized for the Internet.

  20. Audiovisual sentence recognition not predicted by susceptibility to the McGurk effect.

    PubMed

    Van Engen, Kristin J; Xie, Zilong; Chandrasekaran, Bharath

    2017-02-01

    In noisy situations, visual information plays a critical role in the success of speech communication: listeners are better able to understand speech when they can see the speaker. Visual influence on auditory speech perception is also observed in the McGurk effect, in which discrepant visual information alters listeners' auditory perception of a spoken syllable. When hearing /ba/ while seeing a person saying /ga/, for example, listeners may report hearing /da/. Because these two phenomena have been assumed to arise from a common integration mechanism, the McGurk effect has often been used as a measure of audiovisual integration in speech perception. In this study, we test whether this assumed relationship exists within individual listeners. We measured participants' susceptibility to the McGurk illusion as well as their ability to identify sentences in noise across a range of signal-to-noise ratios in audio-only and audiovisual modalities. Our results do not show a relationship between listeners' McGurk susceptibility and their ability to use visual cues to understand spoken sentences in noise, suggesting that McGurk susceptibility may not be a valid measure of audiovisual integration in everyday speech processing.

  1. Feature integration in visual working memory: parietal gamma activity is related to cognitive coordination

    PubMed Central

    Muthukumaraswamy, Suresh D.; Hibbs, Carina S.; Shapiro, Kimron L.; Bracewell, R. Martyn; Singh, Krish D.; Linden, David E. J.

    2011-01-01

    The mechanism by which distinct subprocesses in the brain are coordinated is a central conundrum of systems neuroscience. The parietal lobe is thought to play a key role in visual feature integration, and oscillatory activity in the gamma frequency range has been associated with perception of coherent objects and other tasks requiring neural coordination. Here, we examined the neural correlates of integrating mental representations in working memory and hypothesized that parietal gamma activity would be related to the success of cognitive coordination. Working memory is a classic example of a cognitive operation that requires the coordinated processing of different types of information and the contribution of multiple cognitive domains. Using magnetoencephalography (MEG), we report parietal activity in the high gamma (80–100 Hz) range during manipulation of visual and spatial information (colors and angles) in working memory. This parietal gamma activity was significantly higher during manipulation of visual-spatial conjunctions compared with single features. Furthermore, gamma activity correlated with successful performance during the conjunction task but not during the component tasks. Cortical gamma activity in parietal cortex may therefore play a role in cognitive coordination. PMID:21940605

  2. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores the graphical design and spatial alignment of visual information and graphical elements in stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or that have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge their effect and comfort, and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work should help improve current editing systems and identifies a need for future 3D-TV editing systems, e.g., live editing and real-time alignment of visual information into 3D footage.

  3. [Artificial sight: recent developments].

    PubMed

    Zeitz, O; Keserü, M; Hornig, R; Richard, G

    2009-03-01

    The implantation of electronic retina stimulators appears to be a future possibility to restore vision, at least partially, in patients with retinal degeneration. The idea of such visual prostheses is not new, but general technical progress has made it more likely that a functioning implant will be on the market soon. Visual prostheses may be integrated into the visual system at various places. Thus there are subretinal and epiretinal implants, as well as implants connected directly to the optic nerve or the visual cortex. The epiretinal approach is currently the most promising, but the problem of appropriately modulating the image information is so far unsolved; such modulation will be necessary to provide interpretable visual information to the brain. The present article summarises the concepts and includes some of the latest information from recent conferences.

  4. Walking through doorways causes forgetting: environmental integration.

    PubMed

    Radvansky, Gabriel A; Tamplin, Andrea K; Krawietz, Sabine A

    2010-12-01

    Memory for objects declines when people move from one location to another (the location updating effect). However, it is unclear whether this is attributable to event model updating or to task demands. The focus here was on the degree of integration for probed-for information with the experienced environment. In prior research, the probes were verbal labels of visual objects. Experiment 1 assessed whether this was a consequence of an item-probe mismatch, as with transfer-appropriate processing. Visual probes were used to better coordinate what was seen with the nature of the memory probe. In Experiment 2, people received additional word pairs to remember, which were less well integrated with the environment, to assess whether the probed-for information needed to be well integrated. The results showed location updating effects in both cases. These data are consistent with an event cognition view that mental updating of a dynamic event disrupts memory.

  5. Integrating a geographic information system, a scientific visualization system and an orographic precipitation model

    USGS Publications Warehouse

    Hay, L.; Knapp, L.

    1996-01-01

    Investigating natural, potential, and man-induced impacts on hydrological systems commonly requires complex modelling with overlapping data requirements, and massive amounts of one- to four-dimensional data at multiple scales and formats. Given the complexity of most hydrological studies, the requisite software infrastructure must incorporate many components including simulation modelling, spatial analysis and flexible, intuitive displays. There is a general requirement for a set of capabilities to support scientific analysis which, at this time, can only come from an integration of several software components. Integration of geographic information systems (GISs) and scientific visualization systems (SVSs) is a powerful technique for developing and analysing complex models. This paper describes the integration of an orographic precipitation model, a GIS, and an SVS. The combination of these individual components provides a robust infrastructure which allows the scientist to work with the full dimensionality of the data and to examine the data in a more intuitive manner.

  6. Audiovisual Modulation in Mouse Primary Visual Cortex Depends on Cross-Modal Stimulus Configuration and Congruency.

    PubMed

    Meijer, Guido T; Montijn, Jorrit S; Pennartz, Cyriel M A; Lansink, Carien S

    2017-09-06

    The sensory neocortex is a highly connected associative network that integrates information from multiple senses, even at the level of the primary sensory areas. Although a growing body of empirical evidence supports this view, the neural mechanisms of cross-modal integration in primary sensory areas, such as the primary visual cortex (V1), are still largely unknown. Using two-photon calcium imaging in awake mice, we show that the encoding of audiovisual stimuli in V1 neuronal populations is highly dependent on the features of the stimulus constituents. When the visual and auditory stimulus features were modulated at the same rate (i.e., temporally congruent), neurons responded with either an enhancement or suppression compared with unisensory visual stimuli, and their prevalence was balanced. Temporally incongruent tones or white-noise bursts included in audiovisual stimulus pairs resulted in predominant response suppression across the neuronal population. Visual contrast did not influence multisensory processing when the audiovisual stimulus pairs were congruent; however, when white-noise bursts were used, neurons generally showed response suppression when the visual stimulus contrast was high whereas this effect was absent when the visual contrast was low. Furthermore, a small fraction of V1 neurons, predominantly those located near the lateral border of V1, responded to sound alone. These results show that V1 is involved in the encoding of cross-modal interactions in a more versatile way than previously thought. SIGNIFICANCE STATEMENT The neural substrate of cross-modal integration is not limited to specialized cortical association areas but extends to primary sensory areas. Using two-photon imaging of large groups of neurons, we show that multisensory modulation of V1 populations is strongly determined by the individual and shared features of cross-modal stimulus constituents, such as contrast, frequency, congruency, and temporal structure. Congruent audiovisual stimulation resulted in a balanced pattern of response enhancement and suppression compared with unisensory visual stimuli, whereas incongruent or dissimilar stimuli at full contrast gave rise to a population dominated by response-suppressing neurons. Our results indicate that V1 dynamically integrates nonvisual sources of information while still attributing most of its resources to coding visual information. Copyright © 2017 the authors 0270-6474/17/378783-14$15.00/0.

  7. Developmental visual perception deficits with no indications of prosopagnosia in a child with abnormal eye movements.

    PubMed

    Gilaie-Dotan, Sharon; Doron, Ravid

    2017-06-01

    Visual categories are associated with eccentricity biases in high-order visual cortex: faces and reading with foveally-biased regions, while common objects and space with mid- and peripherally-biased regions. As face perception and reading are among the most challenging human visual skills, and are often regarded as the peak achievements of a distributed neural network supporting common object perception, it is unclear why objects, which also rely on foveal vision to be processed, are associated with a mid-peripheral rather than a foveal bias. Here, we studied BN, a 9-year-old boy who has normal basic-level vision and abnormal (limited) oculomotor pursuit and saccades, and who shows developmental object and contour integration deficits but no indication of prosopagnosia. Although we cannot infer causation from the data presented here, we suggest that normal pursuit and saccades could be critical for the development of contour integration and object perception. While faces, and perhaps reading, when fixated upon, take up a small portion of the central visual field and require only small eye movements to be properly processed, common objects typically prevail in the mid-peripheral visual field and rely on longer-distance voluntary eye movements, such as saccades, to be brought to fixation. While retinal information feeds into early visual cortex in an orderly, eccentricity-based manner, we hypothesize that the propagation of non-foveal information to mid- and high-order visual cortex critically relies on circuitry involving eye movements. Limited or atypical eye movements, as in the case of BN, may hinder normal information flow to mid-eccentricity-biased high-order visual cortex, adversely affecting its development and consequently inducing visual perceptual deficits predominantly for categories associated with these regions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information.

    PubMed

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  9. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information

    PubMed Central

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information. PMID:28634444

  10. Object representation in the bottlenose dolphin (Tursiops truncatus): integration of visual and echoic information.

    PubMed

    Harley, H E; Roitblat, H L; Nachtigall, P E

    1996-04-01

    A dolphin performed a 3-alternative matching-to-sample task in different modality conditions (visual/echoic: both vision and echolocation; visual: vision only; echoic: echolocation only). In Experiment 1, training occurred in the dual-modality (visual/echoic) condition. Choice accuracy in tests of all conditions was above chance without further training. In Experiment 2, unfamiliar objects with complementary similarity relations in vision and echolocation were presented in single-modality conditions until accuracy was about 70%. When tested in the visual/echoic condition, accuracy immediately rose (95%), suggesting integration across modalities. In Experiment 3, conditions varied between presentation of the sample and the alternatives. The dolphin successfully matched familiar objects in the cross-modal conditions. These data suggest that the dolphin has an object-based representational system.

  11. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment.

    PubMed

    Rosemann, Stephanie; Thiel, Christiane M

    2018-07-15

    Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss, leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. Here we investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. Our results indicate that already mild to moderate hearing loss impacts audio-visual speech processing, accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Patient-tailored multimodal neuroimaging, visualization and quantification of human intra-cerebral hemorrhage

    NASA Astrophysics Data System (ADS)

    Goh, Sheng-Yang M.; Irimia, Andrei; Vespa, Paul M.; Van Horn, John D.

    2016-03-01

    In traumatic brain injury (TBI) and intracerebral hemorrhage (ICH), the heterogeneity of lesion sizes and types necessitates a variety of imaging modalities to acquire a comprehensive perspective on injury extent. Although it is advantageous to combine imaging modalities and to leverage their complementary benefits, there are difficulties in integrating information across imaging types. Thus, it is important that efforts be dedicated to the creation and sustained refinement of resources for multimodal data integration. Here, we propose a novel approach to the integration of neuroimaging data acquired from human patients with TBI/ICH using various modalities; we also demonstrate the integrated use of multimodal magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) data for TBI analysis based on both visual observations and quantitative metrics. 3D models of healthy-appearing tissues and TBI-related pathology are generated, both of which are derived from multimodal imaging data. MRI volumes acquired using FLAIR, SWI, and T2 GRE are used to segment pathology. Healthy tissues are segmented using user-supervised tools, and results are visualized using a novel graphical approach called a 'connectogram', where brain connectivity information is depicted within a circle of radially aligned elements. Inter-region connectivity and its strength are represented by links of variable opacities drawn between regions, where opacity reflects the percentage longitudinal change in brain connectivity density. Our method for integrating, analyzing and visualizing structural brain changes due to TBI and ICH can promote knowledge extraction and enhance the understanding of mechanisms underlying recovery.
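
    As a rough illustration of the opacity encoding described above, the sketch below maps a percentage longitudinal change in connectivity density onto a link opacity in [0, 1]; the function name and the saturation point are assumptions for this example, not the authors' implementation.

      def link_opacity(density_t0: float, density_t1: float,
                       max_abs_change: float = 100.0) -> float:
          """Map percent longitudinal change in connectivity density to opacity."""
          pct_change = 100.0 * (density_t1 - density_t0) / density_t0
          return min(abs(pct_change) / max_abs_change, 1.0)

      # A connection whose density dropped 40% between scans draws at opacity 0.4.
      print(link_opacity(0.25, 0.15))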

  13. SnapShot: Visualization to Propel Ice Hockey Analytics.

    PubMed

    Pileggi, H; Stolper, C D; Boyle, J M; Stasko, J T

    2012-12-01

    Sports analysts live in a world of dynamic games flattened into tables of numbers, divorced from the rinks, pitches, and courts where they were generated. Currently, these professional analysts use R, Stata, SAS, and other statistical software packages for uncovering insights from game data. Quantitative sports consultants seek a competitive advantage both for their clients and for themselves as analytics becomes increasingly valued by teams, clubs, and squads. In order for the information visualization community to support the members of this blossoming industry, it must recognize where and how visualization can enhance the existing analytical workflow. In this paper, we identify three primary stages of today's sports analyst's routine where visualization can be beneficially integrated: 1) exploring a dataspace; 2) sharing hypotheses with internal colleagues; and 3) communicating findings to stakeholders. Working closely with professional ice hockey analysts, we designed and built SnapShot, a system to integrate visualization into the hockey intelligence gathering process. SnapShot employs a variety of information visualization techniques to display shot data and, given the importance of a specific hockey statistic, shot length, introduces a new technique, the radial heat map. Through a user study, we received encouraging feedback from several professional analysts, both independent consultants and professional team personnel.
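
    The abstract names the radial heat map but not its construction, so the following is only a plausible sketch: shots are binned by direction and distance (shot length) around the goal. The coordinates, goal position, and bin counts are invented for illustration and are not taken from SnapShot.

      import numpy as np

      def radial_heat_map(shot_xy, goal_xy=(89.0, 0.0), n_angle=12, n_radius=6,
                          r_max=60.0):
          """Count shots in (angle, radius) cells centered on the goal; the
          radial coordinate is the shot-length statistic the paper highlights."""
          sx, sy = np.asarray(shot_xy, float).T
          dx, dy = sx - goal_xy[0], sy - goal_xy[1]
          r = np.hypot(dx, dy)            # shot length
          theta = np.arctan2(dy, dx)      # shot direction relative to the goal
          counts, _, _ = np.histogram2d(theta, r, bins=[n_angle, n_radius],
                                        range=[[-np.pi, np.pi], [0.0, r_max]])
          return counts

      rng = np.random.default_rng(0)
      shots = rng.uniform([60.0, -40.0], [89.0, 40.0], size=(500, 2))  # toy shots
      print(radial_heat_map(shots).sum())  # all 500 shots fall inside the grid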

  14. The ventriloquist in periphery: impact of eccentricity-related reliability on audio-visual localization.

    PubMed

    Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier

    2013-10-28

    The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in the reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, and surprisingly, we found that despite a strong decrease in auditory and visual unisensory localization abilities in the periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimulus eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (the ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in the periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. Altogether, these results demonstrate that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eye or head position) influence how this information is merged, and therefore determine the perceptual outcome.
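
    The "optimal combination according to spatial reliability" invoked here is the standard maximum-likelihood cue-combination rule, in which each cue is weighted by its inverse variance. A minimal sketch with made-up numbers rather than the study's data:

      import numpy as np

      def fuse(audio_loc, sigma_a, visual_loc, sigma_v):
          """Reliability-weighted (maximum-likelihood) fusion of two locations."""
          r_a, r_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2  # reliability = 1/variance
          w_v = r_v / (r_a + r_v)                        # visual weight
          fused = w_v * visual_loc + (1.0 - w_v) * audio_loc
          return fused, np.sqrt(1.0 / (r_a + r_v))       # fused estimate and SD

      # Central vision: the highly reliable visual cue "captures" the sound.
      print(fuse(audio_loc=10.0, sigma_a=6.0, visual_loc=0.0, sigma_v=1.0))
      # Periphery: visual reliability drops, and the predicted capture weakens.
      print(fuse(audio_loc=10.0, sigma_a=6.0, visual_loc=0.0, sigma_v=5.0))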

  15. Family genome browser: visualizing genomes with pedigree information.

    PubMed

    Juan, Liran; Liu, Yongzhuang; Wang, Yongtian; Teng, Mingxiang; Zang, Tianyi; Wang, Yadong

    2015-07-15

    Families with inherited diseases are widely used in Mendelian/complex disease studies. Owing to advances in high-throughput sequencing technologies, family genome sequencing is becoming increasingly prevalent. Visualizing family genomes can greatly facilitate human genetics studies and personalized medicine. However, due to the complex genetic relationships and high similarities among the genomes of consanguineous family members, family genomes are difficult to visualize in traditional genome visualization frameworks. How to visualize family genome variants and their functions with integrated pedigree information remains a critical challenge. We developed the Family Genome Browser (FGB) to provide comprehensive analysis and visualization for family genomes. The FGB can visualize family genomes effectively at both the individual level and the variant level by integrating genome data with pedigree information. Family genome analysis, including determination of the parental origin of variants, detection of de novo mutations, and identification of potential recombination events and identical-by-descent segments, can be performed flexibly. Diverse annotations for the family genome variants, such as dbSNP memberships, linkage disequilibrium, genes, variant effects, and potential phenotypes, are illustrated as well. Moreover, the FGB can automatically search de novo mutations and compound heterozygous variants for a selected individual, and guide investigators to find high-risk genes with flexible navigation options. These features enable users to investigate and understand family genomes intuitively and systematically. The FGB is available at http://mlg.hit.edu.cn/FGB/. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
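
    As a toy illustration of the de novo search the FGB automates, the sketch below flags a child variant absent from both parental genotypes; the allele-set representation is a simplification invented for this example, not the FGB's data model.

      def is_candidate_de_novo(child: set, mother: set, father: set) -> bool:
          """True if the child carries an allele observed in neither parent."""
          return bool(child - (mother | father))

      print(is_candidate_de_novo({"A", "T"}, {"A"}, {"A"}))       # True: 'T' is new
      print(is_candidate_de_novo({"A", "T"}, {"A", "T"}, {"A"}))  # False: inherited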

  16. What You See Isn’t Always What You Get: Auditory Word Signals Trump Consciously Perceived Words in Lexical Access

    PubMed Central

    Ostrand, Rachel; Blumstein, Sheila E.; Ferreira, Victor S.; Morgan, James L.

    2016-01-01

    Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another. PMID:27011021

  17. The effects of aging on the working memory processes of multimodal information.

    PubMed

    Solesio-Jofre, Elena; López-Frutos, José María; Cashdollar, Nathan; Aurtenetxe, Sara; de Ramón, Ignacio; Maestú, Fernando

    2017-05-01

    Normal aging is associated with deficits in working memory processes. However, the majority of research has focused on storage or inhibitory processes using unimodal paradigms, without addressing their relationships across different sensory modalities. Hence, we pursued two objectives: first, to examine the effects of aging on storage and inhibitory processes; second, to evaluate aging effects on the multisensory integration of visual and auditory stimuli. To this end, young and older participants performed a multimodal task for visual and auditory pairs of stimuli with increasing memory load at encoding and interference during retention. Our results showed an age-related increase in vulnerability to interrupting and distracting interference, reflecting inhibitory deficits related to the off-line reactivation and on-line suppression of relevant and irrelevant information, respectively. Storage capacity was impaired with increasing task demands in both age groups. Additionally, older adults showed a deficit in multisensory integration, with poorer performance for new visual compared to new auditory information.

  18. ICT integration in mathematics initial teacher training and its impact on visualization: the case of GeoGebra

    NASA Astrophysics Data System (ADS)

    Dockendorff, Monika; Solar, Horacio

    2018-01-01

    This case study investigates the impact of the integration of information and communications technology (ICT) on mathematics visualization skills and on initial teacher education programmes. It reports on the influence GeoGebra dynamic software use has on promoting mathematical learning at secondary school and on its impact on teachers' conceptions about teaching and learning mathematics. This paper describes how GeoGebra-based dynamic applets - designed and used in an exploratory manner - promote mathematical processes such as conjecturing. It also describes the changes prospective teachers experience in the relevance they assign to dynamic visual representations in teaching mathematics. This study observes a shift in school routines when technology is incorporated into the mathematics classroom. Visualization appears as a basic competence associated with key mathematical processes. Implications of an early integration of ICT in mathematics initial teacher training and its impact on developing technological pedagogical content knowledge (TPCK) are drawn.

  19. McGurk stimuli for the investigation of multisensory integration in cochlear implant users: The Oldenburg Audio Visual Speech Stimuli (OLAVS).

    PubMed

    Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan

    2017-06-01

    The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.
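
    The disparity model referenced here (Magnotti & Beauchamp, 2015) treats the illusion as arising when a noisily encoded audiovisual disparity falls below a perceiver-specific threshold. A minimal sketch in that spirit, with hypothetical parameter names and values rather than the published fits:

      from math import erf, sqrt

      def p_mcgurk(stim_disparity, threshold, noise):
          """P(illusion): probability that the encoded disparity of a stimulus
          falls below the perceiver's threshold (Gaussian encoding noise)."""
          z = (threshold - stim_disparity) / (noise * sqrt(2.0))
          return 0.5 * (1.0 + erf(z))

      # Hypothetical perceiver (threshold 0.6, noise 0.2) on two stimuli.
      print(p_mcgurk(0.3, threshold=0.6, noise=0.2))  # low disparity: likely fused
      print(p_mcgurk(0.9, threshold=0.6, noise=0.2))  # high disparity: rarely fused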

  20. The spatial reliability of task-irrelevant sounds modulates bimodal audiovisual integration: An event-related potential study.

    PubMed

    Li, Qi; Yu, Hongtao; Wu, Yan; Gao, Ning

    2016-08-26

    The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140–200 ms over a wide central area, the second at 280–320 ms over the fronto-central area, and a third at 380–440 ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. A comparison of visuomotor cue integration strategies for object placement and prehension.

    PubMed

    Greenwald, Hal S; Knill, David C

    2009-01-01

    Visual cue integration strategies are known to depend on cue reliability and how rapidly the visual system processes incoming information. We investigated whether these strategies also depend on differences in the information demands for different natural tasks. Using two common goal-oriented tasks, prehension and object placement, we determined whether monocular and binocular information influence estimates of three-dimensional (3D) orientation differently depending on task demands. Both tasks rely on accurate 3D orientation estimates, but 3D position is potentially more important for grasping. Subjects placed an object on or picked up a disc in a virtual environment. On some trials, the monocular cues (aspect ratio and texture compression) and binocular cues (e.g., binocular disparity) suggested slightly different 3D orientations for the disc; these conflicts either were present upon initial stimulus presentation or were introduced after movement initiation, which allowed us to quantify how information from the cues accumulated over time. We analyzed the time-varying orientations of subjects' fingers in the grasping task and those of the object in the object placement task to quantify how different visual cues influenced motor control. In the first experiment, different subjects performed each task, and those performing the grasping task relied on binocular information more when orienting their hands than those performing the object placement task. When subjects in the second experiment performed both tasks in interleaved sessions, binocular cues were still more influential during grasping than object placement, and the different cue integration strategies observed for each task in isolation were maintained. In both experiments, the temporal analyses showed that subjects processed binocular information faster than monocular information, but task demands did not affect the time course of cue processing. How one uses visual cues for motor control depends on the task being performed, although how quickly the information is processed appears to be task invariant.

  2. Integrating Clinical Neuropsychology into the Undergraduate Curriculum.

    ERIC Educational Resources Information Center

    Puente, Antonio E.; And Others

    1991-01-01

    Claims little information exists in undergraduate education about clinical neuropsychology. Outlines an undergraduate neuropsychology course and proposes ways to integrate the subject into existing undergraduate psychology courses. Suggests developing specialized audio-visual materials for telecourses or existing courses. (NL)

  3. Disentangling brain activity related to the processing of emotional visual information and emotional arousal.

    PubMed

    Kuniecki, Michał; Wołoszyn, Kinga; Domagalik, Aleksandra; Pilarczyk, Joanna

    2018-05-01

    Processing of emotional visual information engages cognitive functions and induces arousal. We aimed to examine the modulatory role of emotional valence on brain activations linked to the processing of visual information and those linked to arousal. Participants were scanned and their pupil size was measured while viewing negative and neutral images. Visual noise was added to the images in various proportions to parametrically manipulate the amount of visual information. Pupil size was used as an index of physiological arousal. We show that arousal induced by the negative images, as compared to the neutral ones, is primarily related to greater amygdala activity, while increasing the visibility of negative content is related to enhanced activity in the lateral occipital complex (LOC). We argue that more intense visual processing of negative scenes can occur irrespective of the level of arousal. This may suggest that higher areas of the visual stream are fine-tuned to process emotionally relevant objects. Both arousal and the processing of emotional visual information modulated activity within the ventromedial prefrontal cortex (vmPFC). Overlapping activations within the vmPFC may reflect the integration of these aspects of emotional processing. Additionally, we show that emotionally-evoked pupil dilations are related to activations in the amygdala, vmPFC, and LOC.
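
    As an illustration of the parametric noise manipulation, the sketch below blends a normalized grayscale image with uniform pixel noise in a given proportion; the blending rule and all values are assumptions for this example, not the study's exact stimulus-generation procedure.

      import numpy as np

      def add_visual_noise(image, noise_prop, seed=None):
          """Blend an image with uniform noise; noise_prop in [0, 1] sets the
          proportion of noise, parametrically degrading visual information."""
          rng = np.random.default_rng(seed)
          noise = rng.uniform(0.0, 1.0, size=image.shape)
          return (1.0 - noise_prop) * image + noise_prop * noise

      image = np.full((64, 64), 0.5)        # stand-in for a normalized image
      for p in (0.0, 0.25, 0.5, 0.75):      # parametric noise levels
          print(p, round(float(add_visual_noise(image, p, seed=0).std()), 3))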

  4. MPI CyberMotion Simulator: implementation of a novel motion simulator to investigate multisensory path integration in three dimensions.

    PubMed

    Barnett-Cowan, Michael; Meilinger, Tobias; Vidal, Manuel; Teufel, Harald; Bülthoff, Heinrich H

    2012-05-10

    Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can do path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between the horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes. In the frontal plane, observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone. Human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing reflects underestimation of the angle moved through in the horizontal plane and overestimation in the vertical planes suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.

  5. RE Data Explorer: Informing Variable Renewable Energy Grid Integration for Low Emission Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cox, Sarah L

    The RE Data Explorer, developed by the National Renewable Energy Laboratory, is an innovative web-based analysis tool that uses geospatial and spatiotemporal renewable energy data to visualize and analyze renewable energy potential under various user-defined scenarios. Such analysis can inform high-level prospecting, integrated planning, and policymaking to enable low-emission development.

  6. Temporal characteristics of audiovisual information processing.

    PubMed

    Fuhrmann Alpert, Galit; Hein, Grit; Tsai, Nancy; Naumer, Marcus J; Knight, Robert T

    2008-05-14

    In complex natural environments, auditory and visual information often have to be processed simultaneously. Previous functional magnetic resonance imaging (fMRI) studies focused on the spatial localization of brain areas involved in audiovisual (AV) information processing, but the temporal characteristics of AV information flow in these regions remained unclear. In this study, we used fMRI and a novel information-theoretic approach to study the flow of AV sensory information. Subjects passively perceived sounds and images of objects presented either alone or simultaneously. Applying the measure of mutual information, we computed for each voxel the latency at which the blood oxygenation level-dependent signal had the highest information content about the preceding stimulus. The results indicate that, after AV stimulation, the earliest informative activity occurs in right Heschl's gyrus, left primary visual cortex, and the posterior portion of the superior temporal gyrus, which is known as a region involved in object-related AV integration. Informative activity in the anterior portion of the superior temporal gyrus, middle temporal gyrus, right occipital cortex, and inferior frontal cortex was found at a later latency. Moreover, AV presentation resulted in shorter latencies in multiple cortical areas compared with isolated auditory or visual presentation. The results provide evidence for bottom-up processing from primary sensory areas into higher association areas during AV integration in humans and suggest that AV presentation shortens processing time in early sensory cortices.
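
    A minimal sketch of this latency analysis, assuming discrete stimulus labels and a binned BOLD time series; the toy data and the simple lag scan below are illustrative assumptions, not the authors' estimator.

      import numpy as np
      from sklearn.metrics import mutual_info_score

      def peak_information_latency(stimulus, bold, max_lag=8, n_bins=8):
          """Return the lag (in samples) at which the discretized BOLD series
          carries the most mutual information about the preceding stimulus."""
          mis = []
          for lag in range(1, max_lag + 1):
              s, b = stimulus[:-lag], bold[lag:]
              edges = np.histogram_bin_edges(b, bins=n_bins)[1:-1]
              mis.append(mutual_info_score(s, np.digitize(b, edges)))
          return 1 + int(np.argmax(mis))

      rng = np.random.default_rng(0)
      stim = rng.integers(0, 3, 200)            # three conditions: A, V, AV
      bold = np.roll(stim.astype(float), 4) + rng.normal(0.0, 0.5, 200)
      print(peak_information_latency(stim, bold))  # recovers the 4-sample delay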

  7. Integration of genomic and medical data into a 3D atlas of human anatomy.

    PubMed

    Turinsky, Andrei L; Fanea, Elena; Trinh, Quang; Dong, Xiaoli; Stromer, Julie N; Shu, Xueling; Wat, Stephen; Hallgrímsson, Benedikt; Hill, Jonathan W; Edwards, Carol; Grosenick, Brenda; Yajima, Masumi; Sensen, Christoph W

    2008-01-01

    We have developed a framework for the visual integration and exploration of multi-scale biomedical data, which includes anatomical and molecular components. We have also created a Java-based software system that integrates molecular information, such as gene expression data, into a three-dimensional digital atlas of the male adult human anatomy. Our atlas is structured according to the Terminologia Anatomica. The underlying data-indexing mechanism uses open standards and semantic ontology-processing tools to establish the associations between heterogeneous data types. The software system makes an extensive use of virtual reality visualization.

  8. Sensory processing patterns predict the integration of information held in visual working memory.

    PubMed

    Lowe, Matthew X; Stevenson, Ryan A; Wilson, Kristin E; Ouslis, Natasha E; Barense, Morgan D; Cant, Jonathan S; Ferber, Susanne

    2016-02-01

    Given the limited resources of visual working memory, multiple items may be remembered as an averaged group or ensemble. As a result, local information may be ill-defined, but these ensemble representations provide accurate diagnostics of the natural world by combining gist information with item-level information held in visual working memory. Some neurodevelopmental disorders are characterized by sensory processing profiles that predispose individuals to avoid or seek out sensory stimulation, fundamentally altering their perceptual experience. Here, we report that such processing styles affect the computation of ensemble statistics in the general population. We identified stable adult sensory processing patterns to demonstrate that individuals with low sensory thresholds, who show a greater proclivity to engage in active response strategies to prevent sensory overstimulation, are less likely to integrate mean size information across a set of similar items and are therefore more likely to be biased away from the mean size representation of an ensemble display. We therefore propose that the study of ensemble processing should extend beyond the statistics of the display and should also consider the statistics of the observer. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
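
    One common way to express such integration is as a weighted mixture of the item itself and the ensemble mean; the sketch below is an illustrative model of that kind, with invented weights, not the analysis used in the study.

      import numpy as np

      def predicted_report(item_size, item_sizes, w_item):
          """Mixture account of ensemble coding: the remembered size is a
          weighted average of the item and the ensemble mean; w_item = 1 means
          no integration of mean size information."""
          return w_item * item_size + (1.0 - w_item) * float(np.mean(item_sizes))

      display = [1.8, 2.2, 2.0, 2.6, 1.4]                # item sizes, mean = 2.0
      print(predicted_report(2.6, display, w_item=0.5))  # integrator: pulled to 2.3
      print(predicted_report(2.6, display, w_item=0.9))  # weak integrator: ~2.54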

  9. Visual feature integration theory: past, present, and future.

    PubMed

    Quinlan, Philip T

    2003-09-01

    Visual feature integration theory was one of the most influential theories of visual information processing in the last quarter of the 20th century. This article provides an exposition of the theory and a review of the associated data. In the past much emphasis has been placed on how the theory explains performance in various visual search tasks. The relevant literature is discussed and alternative accounts are described. Amendments to the theory are also set out. Many other issues concerning internal processes and representations implicated by the theory are reviewed. The article closes with a synopsis of what has been learned from consideration of the theory, and it is concluded that some of the issues may remain intractable unless appropriate neuroscientific investigations are carried out.

  10. Auditory-visual fusion in speech perception in children with cochlear implants

    PubMed Central

    Schorr, Efrat A.; Fox, Nathan A.; van Wassenhove, Virginie; Knudsen, Eric I.

    2005-01-01

    Speech, for most of us, is a bimodal percept whenever we both hear the voice and see the lip movements of a speaker. Children who are born deaf never have this bimodal experience. We tested children who had been deaf from birth and who subsequently received cochlear implants for their ability to fuse the auditory information provided by their implants with visual information about lip movements for speech perception. For most of the children with implants (92%), perception was dominated by vision when visual and auditory speech information conflicted. For some, bimodal fusion was strong and consistent, demonstrating a remarkable plasticity in their ability to form auditory-visual associations despite the atypical stimulation provided by implants. The likelihood of consistent auditory-visual fusion declined with age at implant beyond 2.5 years, suggesting a sensitive period for bimodal integration in speech perception. PMID:16339316

  11. A spatially collocated sound thrusts a flash into awareness

    PubMed Central

    Aller, Máté; Giani, Anette; Conrad, Verena; Watanabe, Masataka; Noppeney, Uta

    2015-01-01

    To interact effectively with the environment the brain integrates signals from multiple senses. It is currently unclear to what extent spatial information can be integrated across different senses in the absence of awareness. Combining dynamic continuous flash suppression (CFS) and spatial audiovisual stimulation, the current study investigated whether a sound facilitates a concurrent visual flash to elude flash suppression and enter perceptual awareness depending on audiovisual spatial congruency. Our results demonstrate that a concurrent sound boosts unaware visual signals into perceptual awareness. Critically, this process depended on the spatial congruency of the auditory and visual signals pointing towards low level mechanisms of audiovisual integration. Moreover, the concurrent sound biased the reported location of the flash as a function of flash visibility. The spatial bias of sounds on reported flash location was strongest for flashes that were judged invisible. Our results suggest that multisensory integration is a critical mechanism that enables signals to enter conscious perception. PMID:25774126

  12. Integrated web visualizations for protein-protein interaction databases.

    PubMed

    Jeanquartier, Fleur; Jean-Quartier, Claire; Holzinger, Andreas

    2015-06-16

    Understanding living systems is crucial for curing diseases. To achieve this task we have to understand biological networks based on protein-protein interactions. Bioinformatics has produced a great number of databases and tools that support analysts in exploring protein-protein interactions on an integrated level for knowledge discovery. They provide predictions and correlations, indicate possibilities for future experimental research and fill the gaps to complete the picture of biochemical processes. Numerous, very large databases of protein-protein interactions are used to gain insights into some of the many questions of systems biology. Many computational resources integrate interaction data with additional information on molecular background. However, the vast number of diverse bioinformatics resources poses an obstacle to the goal of understanding. We present a survey of databases that enable the visual analysis of protein networks. We selected M=10 out of N=53 resources supporting visualization and tested them against the following set of criteria: interoperability, data integration, quantity of possible interactions, data visualization quality and data coverage. The study reveals differences in usability, visualization features and quality, as well as in the quantity of interactions. StringDB is the recommended first choice. CPDB presents a comprehensive dataset and IntAct lets the user change the network layout. A comprehensive comparison table is available via the web. The supplementary table can be accessed at http://tinyurl.com/PPI-DB-Comparison-2015. Only some web resources featuring graph visualization can be successfully applied to interactive visual analysis of protein-protein interactions. The study results underline the necessity for further enhancement of visualization integration in biochemical analysis tools. Identified challenges are data comprehensiveness, confidence, interactive features and visualization maturity.

  13. Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.

    PubMed

    Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T

    2017-07-01

    Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017, 10: 1280-1290. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  14. PHYLOGEOrec: A QGIS plugin for spatial phylogeographic reconstruction from phylogenetic tree and geographical information data

    NASA Astrophysics Data System (ADS)

    Nashrulloh, Maulana Malik; Kurniawan, Nia; Rahardi, Brian

    2017-11-01

    The increasing availability of genetic sequence data associated with explicit geographic and environmental information (including biotic and abiotic components) offers new opportunities to study the processes that shape biodiversity and its patterns. Phylogeographic reconstruction, which integrates phylogenetic and biogeographic knowledge, provides richer and deeper visualization of and information on diversification events than ever before. Geographical information systems such as QGIS provide an environment for spatial modeling, analysis, and dissemination in which phylogenetic models can be explicitly linked with their associated spatial data and subsequently integrated with other related georeferenced datasets describing the biotic and abiotic environment. We introduce PHYLOGEOrec, a QGIS plugin for building spatial phylogeographic reconstructions from phylogenetic tree and geographical information data, based on QGIS2threejs. Using PHYLOGEOrec, researchers can integrate existing phylogeny and geographical information data into three-dimensional geographic visualizations of phylogenetic trees in the Keyhole Markup Language (KML) format. These files can be overlaid on a map and viewed spatially in QGIS via the QGIS2threejs engine for further analysis; KML can also be viewed in geobrowsers with KML support (e.g., Google Earth).
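
    A hypothetical sketch of the core output step, writing one KML placemark per georeferenced tree tip; the tip names, coordinates, and use of altitude to carry a tree-depth value are invented for illustration and are not PHYLOGEOrec's actual code.

      # Each tip: (longitude, latitude, altitude used here to encode tree depth).
      tips = {"sample_A": (112.53, -7.95, 120.0),
              "sample_B": (113.72, -8.10, 300.0)}

      placemarks = "\n".join(
          f"  <Placemark><name>{name}</name>"
          f"<Point><altitudeMode>relativeToGround</altitudeMode>"
          f"<coordinates>{lon},{lat},{alt}</coordinates></Point></Placemark>"
          for name, (lon, lat, alt) in tips.items())

      kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
             '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
             f"{placemarks}\n</Document>\n</kml>")
      with open("phylogeo.kml", "w") as fh:
          fh.write(kml)  # openable in QGIS or any KML-capable geobrowser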

  15. Unisensory processing and multisensory integration in schizophrenia: A high-density electrical mapping study

    PubMed Central

    Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.

    2011-01-01

    In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011
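
    The facilitation test described here contrasts the audiovisual evoked response with the sum of the unisensory responses. A minimal sketch on toy ERP-like waveforms; the peak-amplitude criterion and all numbers are illustrative assumptions, not the study's analysis.

      import numpy as np

      def multisensory_facilitation(av, a, v):
          """Positive values indicate a larger absolute-magnitude AV response
          than the summed unisensory responses (superadditive facilitation)."""
          av, a, v = (np.asarray(x, float) for x in (av, a, v))
          return float(np.abs(av).max() - np.abs(a + v).max())

      t = np.linspace(0.0, 0.5, 250)                   # 500 ms epoch
      a = 2.0 * np.exp(-((t - 0.10) / 0.02) ** 2)      # toy auditory ERP
      v = 1.5 * np.exp(-((t - 0.15) / 0.03) ** 2)      # toy visual ERP
      av = 1.2 * (a + v)                               # 20% superadditive response
      print(multisensory_facilitation(av, a, v))       # > 0 -> facilitation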

  16. Projection-specific visual feature encoding by layer 5 cortical subnetworks

    PubMed Central

    Lur, Gyorgy; Vinck, Martin A.; Tang, Lan; Cardin, Jessica A.; Higley, Michael J.

    2016-01-01

    Primary neocortical sensory areas act as central hubs, distributing afferent information to numerous cortical and subcortical structures. However, it remains unclear whether each downstream target receives distinct versions of sensory information. We used in vivo calcium imaging combined with retrograde tracing to monitor visual response properties of three distinct subpopulations of projection neurons in primary visual cortex. While there is overlap across the groups, on average corticotectal (CT) cells exhibit lower contrast thresholds and broader tuning for orientation and spatial frequency in comparison to corticostriatal (CS) cells, while corticocortical (CC) cells have intermediate properties. Noise correlational analyses support the hypothesis that CT cells integrate information across diverse layer 5 populations, whereas CS and CC cells form more selectively interconnected groups. Overall, our findings demonstrate the existence of functional subnetworks within layer 5 that may differentially route visual information to behaviorally relevant downstream targets. PMID:26972011

  17. Feature Integration Theory Revisited: Dissociating Feature Detection and Attentional Guidance in Visual Search

    ERIC Educational Resources Information Center

    Chan, Louis K. H.; Hayward, William G.

    2009-01-01

    In feature integration theory (FIT; A. Treisman & S. Sato, 1990), feature detection is driven by independent dimensional modules, and other searches are driven by a master map of locations that integrates dimensional information into salience signals. Although recent theoretical models have largely abandoned this distinction, some observed…

  18. The Integration of Occlusion and Disparity Information for Judging Depth in Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Smith, Danielle; Ropar, Danielle; Allen, Harriet A.

    2017-01-01

    In autism spectrum disorder (ASD), atypical integration of visual depth cues may be due to flattened perceptual priors or selective fusion. The current study attempts to disentangle these explanations by psychophysically assessing within-modality integration of ordinal (occlusion) and metric (disparity) depth cues while accounting for sensitivity…

  19. Learning STEM through Integrative Visual Representations

    ERIC Educational Resources Information Center

    Virk, Satyugjit Singh

    2013-01-01

    Previous cognitive models of memory have not comprehensively taken into account the internal cognitive load of chunking isolated information and have emphasized the external cognitive load of visual presentation only. Under the Virk Long Term Working Memory Multimedia Model of cognitive load, drawing from the Cowan model, students presented with…

  20. Information Technology Infusion Case Study: Integrating Google Earth(Trademark) into the A-Train Data Depot

    NASA Technical Reports Server (NTRS)

    Smith, Peter; Kempler, Steven; Leptoukh, Gregory; Chen, Aijun

    2010-01-01

    This poster paper describes a NASA-funded project that employed the latest three-dimensional visualization technology to explore and provide direct data access to heterogeneous A-Train datasets. Google Earth (tm) provides a foundation for organizing, visualizing, publishing and synergizing Earth science data.

  1. Concept Mapping: A Graphical System for Understanding the Relationship between Concepts. ERIC Digest.

    ERIC Educational Resources Information Center

    Plotnick, Eric

    This ERIC Digest discusses concept mapping, a technique for representing the structure of information visually. Concept mapping can be used to brainstorm, design complex structures, communicate complex ideas, aid learning by explicitly integrating new and old knowledge, and assess understanding or diagnose misunderstanding. Visual representation…

  2. Intracranial Cortical Responses during Visual–Tactile Integration in Humans

    PubMed Central

    Quinn, Brian T.; Carlson, Chad; Doyle, Werner; Cash, Sydney S.; Devinsky, Orrin; Spence, Charles; Halgren, Eric

    2014-01-01

    Sensory integration of touch and sight is crucial to perceiving and navigating the environment. While recent evidence from other sensory modality combinations suggests that low-level sensory areas integrate multisensory information at early processing stages, little is known about how the brain combines visual and tactile information. We investigated the dynamics of multisensory integration between vision and touch using the high spatial and temporal resolution of intracranial electrocorticography in humans. We present a novel, two-step metric for defining multisensory integration. The first step compares the sum of the unisensory responses to the bimodal response. The second step eliminates the possibility that double addition of sensory responses could be misinterpreted as multisensory interactions. Using these criteria, averaged local field potentials and high-gamma-band power demonstrate a functional processing cascade whereby sensory integration occurs late, both anatomically and temporally, in the temporo–parieto–occipital junction (TPOJ) and dorsolateral prefrontal cortex. Results further suggest two neurophysiologically distinct and temporally separated integration mechanisms in TPOJ, while providing direct evidence for local suppression as a dominant mechanism for synthesizing visual and tactile input. These results tend to support earlier concepts of multisensory integration as relatively late and centered in tertiary multimodal association cortices. PMID:24381279
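
    The additivity test at the heart of this two-step metric is straightforward to state computationally. Below is a minimal Python sketch of the first step, assuming trial-averaged evoked responses on a common time base; the function name, the noise-based threshold, and the step-two comment are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def multisensory_interaction(a_resp, v_resp, av_resp, noise_sd, k=2.0):
        """Step 1: compare the summed unisensory responses (A + V) with
        the bimodal response (AV) at each time point.

        a_resp, v_resp, av_resp : 1-D arrays of trial-averaged evoked
            responses (e.g., local field potential or high-gamma power).
        noise_sd : estimated baseline noise SD; k * noise_sd is an
            illustrative significance threshold.
        Returns a boolean mask of time points where AV differs from A + V
        (negative differences indicate suppression, positive ones
        facilitation).
        """
        interaction = av_resp - (a_resp + v_resp)
        return np.abs(interaction) > k * noise_sd

    # Step 2 (guarding against "double addition" of activity common to
    # all conditions, e.g., anticipatory components) could be
    # approximated by subtracting a condition-independent baseline from
    # each response before applying step 1.
    ```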

  3. Self-motion perception in autism is compromised by visual noise but integrated optimally across multiple senses

    PubMed Central

    Zaidel, Adam; Goin-Kochel, Robin P.; Angelaki, Dora E.

    2015-01-01

    Perceptual processing in autism spectrum disorder (ASD) is marked by superior low-level task performance and inferior complex-task performance. This observation has led to theories of defective integration in ASD of local parts into a global percept. Despite mixed experimental results, this notion maintains widespread influence and has also motivated recent theories of defective multisensory integration in ASD. Impaired ASD performance in tasks involving classic random dot visual motion stimuli, corrupted by noise as a means to manipulate task difficulty, is frequently interpreted to support this notion of global integration deficits. By manipulating task difficulty independently of visual stimulus noise, here we test the hypothesis that heightened sensitivity to noise, rather than integration deficits, may characterize ASD. We found that although perception of visual motion through a cloud of dots was unimpaired without noise, the addition of stimulus noise significantly affected adolescents with ASD, more than controls. Strikingly, individuals with ASD demonstrated intact multisensory (visual–vestibular) integration, even in the presence of noise. Additionally, when vestibular motion was paired with pure visual noise, individuals with ASD demonstrated a different strategy than controls, marked by reduced flexibility. This result could be simulated by using attenuated (less reliable) and inflexible (not experience-dependent) Bayesian priors in ASD. These findings question widespread theories of impaired global and multisensory integration in ASD. Rather, they implicate increased sensitivity to sensory noise and less use of prior knowledge in ASD, suggesting increased reliance on incoming sensory information. PMID:25941373

  4. Using Visualization in Cockpit Decision Support Systems

    NASA Technical Reports Server (NTRS)

    Aragon, Cecilia R.

    2005-01-01

    In order to safely operate their aircraft, pilots must make rapid decisions based on integrating and processing large amounts of heterogeneous information. Visual displays are often the most efficient method of presenting safety-critical data to pilots in real time. However, care must be taken to ensure the pilot is provided with the appropriate amount of information to make effective decisions and not become cognitively overloaded. The results of two usability studies of a prototype airflow hazard visualization cockpit decision support system are summarized. The studies demonstrate that such a system significantly improves the performance of helicopter pilots landing under turbulent conditions. Based on these results, design principles and implications for cockpit decision support systems using visualization are presented.

  5. Gauging Possibilities for Action Based on Friction Underfoot

    ERIC Educational Resources Information Center

    Joh, Amy S.; Adolph, Karen E.; Narayanan, Priya J.; Dietz, Victoria A.

    2007-01-01

    Standing and walking generate information about friction underfoot. Five experiments examined whether walkers use such perceptual information for prospective control of locomotion. In particular, do walkers integrate information about friction underfoot with visual cues for sloping ground ahead to make adaptive locomotor decisions? Participants…

  6. The Importance of Data Visualization: Incorporating Storytelling into the Scientific Presentation

    NASA Technical Reports Server (NTRS)

    Babiak-Vazquez, A.; Cornett, A. N.; Wear, M. L.; Sams, C.

    2014-01-01

    From its inception in 2000, one of the primary tasks of the Biomedical Data Reduction Analysis (BDRA) group has been translation of large amounts of data into information that is relevant to the audience receiving it. BDRA helps translate data into an integrated model that supports both operational and research activities. This integrated data model and the subsequent visual data presentations have contributed to BDRA's success in delivering the message (i.e., the story) that its customers have needed to communicate. This success has led to additional collaborations among groups that had previously not felt they had much in common until they worked together to develop solutions in an integrated fashion. As more emphasis is placed on working with "big data" and on showing how NASA's efforts contribute to the greater good of the American people and of the world, it becomes imperative to visualize the story of our data to communicate the greater message we need to share. METHODS: To create and expand its integrated data model, BDRA has incorporated data from many different collaborating partner labs and other sources. Data are compiled from the repositories of the Lifetime Surveillance of Astronaut Health and the Life Sciences Data Archive, and from the individual laboratories at Johnson Space Center that support collection of data from medical testing, environmental monitoring, and countermeasures, as designated in the Medical Requirements Integration Documents. Ongoing communication with the participating collaborators is maintained to ensure that the message and story of the data are retained as data are translated into information and visual data presentations are delivered in different venues and to different audiences. RESULTS: We will describe the importance of storytelling through an integrated model and of subsequent data visualizations in today's scientific presentations and discuss the collaborative methods used. We will illustrate the discussion with examples of graphs from BDRA's past work supporting operations and/or research efforts.

  7. SCSODC: Integrating Ocean Data for Visualization Sharing and Application

    NASA Astrophysics Data System (ADS)

    Xu, C.; Li, S.; Wang, D.; Xie, Q.

    2014-02-01

    The South China Sea Ocean Data Center (SCSODC) was founded in 2010 in order to improve the collection and management of ocean data at the South China Sea Institute of Oceanology (SCSIO). The mission of SCSODC is to ensure the long-term scientific stewardship of ocean data, information and products - collected through research groups, monitoring stations and observation cruises - and to facilitate their efficient use and distribution to possible users. However, data sharing and applications were limited due to the characteristics of distribution and heterogeneity that made it difficult to integrate the data. To surmount those difficulties, the Data Sharing System has been developed by the SCSODC using the most appropriate information management and information technology. The Data Sharing System uses open standards and tools to promote the capability to integrate ocean data and to interact with other data portals or users, and includes a full range of processes such as data discovery, evaluation and access, combining C/S (client/server) and B/S (browser/server) modes. It provides a visualized management interface for the data managers and a transparent and seamless data access and application environment for users. Users are allowed to access data using the client software and to access the interactive visualization application interface via a web browser. The architecture, key technologies and functionality of the system are discussed briefly in this paper. It is shown that the SCSODC system is able to implement web-based visualization sharing and seamless access to ocean data in a distributed and heterogeneous environment.

  8. Cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events: an event-related potential study.

    PubMed

    Liu, B; Wang, Z; Wu, G; Meng, X

    2011-04-28

    In this paper, we aim to study the cognitive integration of asynchronous natural or non-natural auditory and visual information in videos of real-world events. Videos with asynchronous, semantically consistent or inconsistent natural sound or speech were used as stimuli in order to compare the differences and similarities between multisensory integration of videos with asynchronous natural sound and speech. The event-related potential (ERP) results showed that N1 and P250 components were elicited irrespective of whether natural sounds were consistent or inconsistent with critical actions in the videos. Videos with inconsistent natural sound elicited N400-P600 effects compared to videos with consistent natural sound, which was similar to the results from unisensory visual studies. Videos with semantically consistent or inconsistent speech could both elicit N1 components. Meanwhile, videos with inconsistent speech elicited N400-LPN effects in comparison with videos with consistent speech, which showed that this semantic processing was probably related to recognition memory. Moreover, the N400 effect elicited by videos with semantically inconsistent speech was larger and later than that elicited by videos with semantically inconsistent natural sound. Overall, multisensory integration of videos with natural sound or speech can be roughly divided into two stages. For the videos with natural sound, the first stage might reflect the connection between the received information and the stored information in memory, and the second might reflect the evaluation of inconsistent semantic information. For the videos with speech, the first stage was similar to that for videos with natural sound, while the second might be related to a recognition memory process. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.

  9. Audio-visual speech perception in prelingually deafened Japanese children following sequential bilateral cochlear implantation.

    PubMed

    Yamamoto, Ryosuke; Naito, Yasushi; Tona, Risa; Moroto, Saburo; Tamaya, Rinko; Fujiwara, Keizo; Shinohara, Shogo; Takebayashi, Shinji; Kikuchi, Masahiro; Michida, Tetsuhiko

    2017-11-01

    An effect of audio-visual (AV) integration is observed when the auditory and visual stimuli are incongruent (the McGurk effect). In general, AV integration is especially helpful in subjects wearing hearing aids or cochlear implants (CIs). However, the influence of AV integration on spoken word recognition in individuals with bilateral CIs (Bi-CIs) has not yet been fully investigated. In this study, we investigated AV integration in children with Bi-CIs. The study sample included thirty-one prelingually deafened children who underwent sequential bilateral cochlear implantation. We assessed their responses to congruent and incongruent AV stimuli with three CI-listening modes: only the 1st CI, only the 2nd CI, and Bi-CIs. The responses were assessed in the whole group as well as in two sub-groups: a proficient group (syllable intelligibility ≥80% with the 1st CI) and a non-proficient group (syllable intelligibility <80% with the 1st CI). We found evidence of the McGurk effect in each of the three CI-listening modes. AV integration responses were observed in a subset of incongruent AV stimuli, and the patterns observed with the 1st CI and with Bi-CIs were similar. In the proficient group, the responses with the 2nd CI were not significantly different from those with the 1st CI, whereas in the non-proficient group the responses with the 2nd CI were driven by visual stimuli more than those with the 1st CI. Our results suggested that prelingually deafened Japanese children who underwent sequential bilateral cochlear implantation exhibit AV integration abilities, in monaural as well as binaural listening. We also observed a higher influence of visual stimuli on speech perception with the 2nd CI in the non-proficient group, suggesting that Bi-CI listeners with poorer speech recognition rely on visual information more than proficient subjects do, to compensate for poorer auditory input. Nevertheless, poorer-quality auditory input with the 2nd CI did not interfere with AV integration in binaural listening (with Bi-CIs). Overall, the findings of this study might be used to inform future research to identify the best strategies for speech training using AV integration effectively in prelingually deafened children. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Computer interfaces for the visually impaired

    NASA Technical Reports Server (NTRS)

    Higgins, Gerry

    1991-01-01

    Information access via computer terminals extends to blind and low-vision persons employed in many technical and nontechnical disciplines. Two aspects of providing computer technology to persons with a vision-related handicap are detailed. First, research was conducted into the most effective means of integrating existing adaptive technologies into information systems, with the goal of combining off-the-shelf products and adaptive equipment into cohesive, integrated information processing systems. Details are included that describe the type of functionality required in software to facilitate its incorporation into a speech and/or braille system. The second aspect is research into providing audible and tactile interfaces to graphics-based user interfaces. Parameters are included for the design and development of the Mercator Project, which will develop a prototype system for audible access to graphics-based interfaces. The system is being built within the public domain architecture of X Windows to show that it is possible to provide access to text-based applications within a graphical environment. This information will be valuable to suppliers of ADP equipment, since new legislation requires manufacturers to provide electronic access for the visually impaired.

  11. Mapping the navigational knowledge of individually foraging ants, Myrmecia croslandi

    PubMed Central

    Narendra, Ajay; Gourmaud, Sarah; Zeil, Jochen

    2013-01-01

    Ants are efficient navigators, guided by path integration and visual landmarks. Path integration is the primary strategy in landmark-poor habitats, but landmarks are readily used when available. The landmark panorama provides reliable information about heading direction, routes and specific location. Visual memories for guidance are often acquired along routes or near to significant places. Over what area can such locally acquired memories provide information for reaching a place? This question is unusually approachable in the solitary foraging Australian jack jumper ant, since individual foragers typically travel to one or two nest-specific foraging trees. We find that within 10 m from the nest, ants both with and without home vector information available from path integration return directly to the nest from all compass directions, after briefly scanning the panorama. By reconstructing panoramic views within the successful homing range, we show that in the open woodland habitat of these ants, snapshot memories acquired close to the nest provide sufficient navigational information to determine nest-directed heading direction over a surprisingly large area, including areas that animals may have not visited previously. PMID:23804615

  12. Dogs account for body orientation but not visual barriers when responding to pointing gestures

    PubMed Central

    MacLean, Evan L.; Krupenye, Christopher; Hare, Brian

    2014-01-01

    In a series of 4 experiments we investigated whether dogs use information about a human’s visual perspective when responding to pointing gestures. While there is evidence that dogs may know what humans can and cannot see, and that they flexibly use human communicative gestures, it is unknown if they can integrate these two skills. In Experiment 1 we first determined that dogs were capable of using basic information about a human’s body orientation (indicative of her visual perspective) in a point following context. Subjects were familiarized with experimenters who either faced the dog and accurately indicated the location of hidden food, or faced away from the dog and (falsely) indicated the un-baited container. In test trials these cues were pitted against one another and dogs tended to follow the gesture from the individual who faced them while pointing. In Experiments 2–4 the experimenter pointed ambiguously toward two possible locations where food could be hidden. On test trials a visual barrier occluded the pointer’s view of one container, while dogs could always see both containers. We predicted that if dogs could take the pointer’s visual perspective they should search in the only container visible to the pointer. This hypothesis was supported only in Experiment 2. We conclude that while dogs are skilled both at following human gestures, and exploiting information about others’ visual perspectives, they may not integrate these skills in the manner characteristic of human children. PMID:24611643

  13. Neural networks supporting audiovisual integration for speech: A large-scale lesion study.

    PubMed

    Hickok, Gregory; Rogalsky, Corianne; Matchin, William; Basilakos, Alexandra; Cai, Julia; Pillay, Sara; Ferrill, Michelle; Mickelsen, Soren; Anderson, Steven W; Love, Tracy; Binder, Jeffrey; Fridriksson, Julius

    2018-06-01

    Auditory and visual speech information are often strongly integrated, resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched (the McGurk-MacDonald effect). Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure (auditory vs. visual capture) can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor-related frontal regions do not appear to play a role in AV speech integration. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    PubMed

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al., Journal of Neuroscience, 20(17), 6594-6611, 2000; Qiu et al., Nature Neuroscience, 10(11), 1492-1499, 2007; Chen et al., Neuron, 82(3), 682-694, 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.

  15. ToxPi Graphical User Interface 2.0: Dynamic exploration, visualization, and sharing of integrated data models.

    PubMed

    Marvel, Skylar W; To, Kimberly; Grimm, Fabian A; Wright, Fred A; Rusyn, Ivan; Reif, David M

    2018-03-05

    Drawing integrated conclusions from diverse source data requires synthesis across multiple types of information. The ToxPi (Toxicological Prioritization Index) is an analytical framework that was developed to enable integration of multiple sources of evidence by transforming data into integrated, visual profiles. Methodological improvements have advanced ToxPi and expanded its applicability, necessitating a new, consolidated software platform to provide functionality, while preserving flexibility for future updates. We detail the implementation of a new graphical user interface for ToxPi that provides interactive visualization, analysis, reporting, and portability. The interface is deployed as a stand-alone, platform-independent Java application, with a modular design to accommodate inclusion of future analytics. The new ToxPi interface introduces several features, from flexible data import formats (including legacy formats that permit backward compatibility) to similarity-based clustering to options for high-resolution graphical output. We present the new ToxPi interface for dynamic exploration, visualization, and sharing of integrated data models. The ToxPi interface is freely available as a single compressed download that includes the main Java executable, all libraries, example data files, and a complete user manual from http://toxpi.org.
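
    As context for what such "integrated, visual profiles" compute, the following is a hedged Python sketch of a ToxPi-style score: each evidence source is scaled to the unit interval, and sources are combined as a weighted sum of slices. It illustrates the general scoring idea only; the tool itself is a Java application whose internals are not reproduced here.

    ```python
    import numpy as np

    def toxpi_scores(data, slice_weights):
        """Illustrative ToxPi-style prioritization: rows of `data` are
        compounds, columns are evidence sources. Each source is scaled
        to [0, 1], then combined as a weighted sum of slices."""
        lo, hi = data.min(axis=0), data.max(axis=0)
        scaled = (data - lo) / np.where(hi > lo, hi - lo, 1.0)
        w = np.asarray(slice_weights, dtype=float)
        return scaled @ (w / w.sum())   # one overall score per compound

    # Example: three compounds, two evidence sources weighted 2:1.
    data = np.array([[0.1, 30.0], [0.5, 10.0], [0.9, 20.0]])
    print(toxpi_scores(data, [2.0, 1.0]))
    ```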

  16. Visual integration enhances associative memory equally for young and older adults without reducing hippocampal encoding activation.

    PubMed

    Memel, Molly; Ryan, Lee

    2017-06-01

    The ability to remember associations between previously unrelated pieces of information is often impaired in older adults (Naveh-Benjamin, 2000). Unitization, the process of creating a perceptually or semantically integrated representation that includes both items in an associative pair, attenuates age-related associative deficits (Bastin et al., 2013; Ahmad et al., 2015; Zheng et al., 2015). Compared to non-unitized pairs, unitized pairs may rely less on hippocampally-mediated binding associated with recollection, and more on familiarity-based processes mediated by perirhinal cortex (PRC) and parahippocampal cortex (PHC). While unitization of verbal materials improves associative memory in older adults, less is known about the impact of visual integration. The present study determined whether visual integration improves associative memory in older adults by minimizing the need for hippocampal (HC) recruitment and shifting encoding to non-hippocampal medial temporal structures, such as the PRC and PHC. Young and older adults were presented with a series of objects paired with naturalistic scenes while undergoing fMRI scanning, and were later given an associative memory test. Visual integration was varied by presenting the object either next to the scene (Separated condition) or visually integrated within the scene (Combined condition). Visual integration improved associative memory among young and older adults to a similar degree by increasing the hit rate for intact pairs, but without increasing false alarms for recombined pairs, suggesting enhanced recollection rather than increased reliance on familiarity. Also contrary to expectations, visual integration resulted in increased hippocampal activation in both age groups, along with increases in PRC and PHC activation. Activation in all three MTL regions predicted discrimination performance during the Separated condition in young adults, while only a marginal relationship between PRC activation and performance was observed during the Combined condition. Older adults showed less overall activation in MTL regions compared to young adults, and associative memory performance was most strongly predicted by prefrontal, rather than MTL, activation. We suggest that visual integration benefits both young and older adults similarly, and provides a special case of unitization that may be mediated by recollective, rather than familiarity-based encoding processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    PubMed

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
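
    The "optimal cue integration" tested in these studies is the standard reliability-weighted combination rule for independent Gaussian cues; a minimal sketch, not tied to any one study's code:

    ```python
    def integrate_cues(mu_vis, var_vis, mu_vest, var_vest):
        """Reliability-weighted combination: each cue is weighted by its
        inverse variance, the statistically optimal rule for two
        independent Gaussian estimates of heading."""
        w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
        mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
        var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)  # below both unisensory variances
        return mu, var
    ```

    The reduced combined variance is the behavioral signature these experiments look for: bimodal heading discrimination thresholds should fall below either unisensory threshold.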

  18. Long-term Recurrent Convolutional Networks for Visual Recognition and Description

    DTIC Science & Technology

    2014-11-17

    Models which are also recurrent, or “temporally deep”, are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning. A limitation of simple RNN models which strictly integrate state information over time is known as the “vanishing gradient” effect: the ability to…

  19. Integrating mechanisms of visual guidance in naturalistic language production.

    PubMed

    Coco, Moreno I; Keller, Frank

    2015-05-01

    Situated language production requires the integration of visual attention and linguistic processing. Previous work has not conclusively disentangled the role of perceptual scene information and structural sentence information in guiding visual attention. In this paper, we present an eye-tracking study that demonstrates that three types of guidance, perceptual, conceptual, and structural, interact to control visual attention. In a cued language production experiment, we manipulate perceptual (scene clutter) and conceptual guidance (cue animacy) and measure structural guidance (syntactic complexity of the utterance). Analysis of the time course of language production, before and during speech, reveals that all three forms of guidance affect the complexity of visual responses, quantified in terms of the entropy of attentional landscapes and the turbulence of scan patterns, especially during speech. We find that perceptual and conceptual guidance mediate the distribution of attention in the scene, whereas structural guidance closely relates to scan pattern complexity. Furthermore, the eye-voice spans of the cued object and its perceptual competitor are similar, with latency mediated by both perceptual and structural guidance. These results rule out a strict interpretation of structural guidance as the single dominant form of visual guidance in situated language production. Rather, the phase of the task and the associated demands of cross-modal cognitive processing determine the mechanisms that guide attention.

  20. Joint Attention Enhances Visual Working Memory

    ERIC Educational Resources Information Center

    Gregory, Samantha E. A.; Jackson, Margaret C.

    2017-01-01

    Joint attention--the mutual focus of 2 individuals on an item--speeds detection and discrimination of target information. However, what happens to that information beyond the initial perceptual episode? To fully comprehend and engage with our immediate environment also requires working memory (WM), which integrates information from second to…

  1. Effective visualization of integrated knowledge and data to enable informed decisions in drug development and translational medicine

    PubMed Central

    2013-01-01

    Integrative understanding of preclinical and clinical data is imperative to enable informed decisions and reduce the attrition rate during drug development. The volume and variety of data generated during drug development have increased tremendously. A new information model and visualization tool was developed to effectively utilize all available data and current knowledge. The Knowledge Plot integrates preclinical, clinical, efficacy and safety data by adding two concepts: knowledge from the different disciplines and protein binding. Internal and publicly available data were gathered and processed to allow flexible and interactive visualizations. The exposure was expressed as the unbound concentration of the compound, and the treatment effect was normalized and scaled by including expert opinion on what a biologically meaningful treatment effect would be. The Knowledge Plot has been applied both retrospectively and prospectively in project teams in a number of different therapeutic areas, resulting in closer collaboration between multiple disciplines discussing both preclinical and clinical data. The Plot allows head-to-head comparisons of compounds and was used to support candidate drug selections and differentiation from comparators and competitors, back-translation of clinical data, understanding the predictability of preclinical models and assays, reviewing drift in primary endpoints over the years, and evaluating or benchmarking compounds in due diligence by comparing multiple attributes. The Knowledge Plot concept allows flexible integration and visualization of relevant data for interpretation in order to enable scientific and informed decision-making in various stages of drug development. The concept can be used for communication, decision-making, knowledge management, and as a forward and back translational tool, which will result in an improved understanding of the competitive edge for a particular project or disease area portfolio. In addition, it also builds up a knowledge and translational continuum, which in turn will reduce the attrition rate and costs of clinical development by identifying poor candidates early. PMID:24098919
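
    The two normalizations described above are simple transforms; a minimal sketch with hypothetical names (the Knowledge Plot's actual interface is not specified in this abstract):

    ```python
    def knowledge_plot_point(total_conc, fraction_unbound, effect, meaningful_effect):
        """Exposure axis: unbound concentration, i.e., total plasma
        concentration corrected for protein binding. Effect axis:
        treatment effect scaled so that 1.0 equals the expert-defined
        biologically meaningful effect."""
        unbound_conc = total_conc * fraction_unbound
        scaled_effect = effect / meaningful_effect
        return unbound_conc, scaled_effect
    ```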

  2. Superior haptic-to-visual shape matching in autism spectrum disorders.

    PubMed

    Nakano, Tamami; Kato, Nobumasa; Kitazawa, Shigeru

    2012-04-01

    A weak central coherence theory in autism spectrum disorder (ASD) proposes that a cognitive bias toward local processing in ASD derives from a weakness in integrating local elements into a coherent whole. Using this theory, we hypothesized that shape perception through active touch, which requires sequential integration of sensorimotor traces of exploratory finger movements into a shape representation, would be impaired in ASD. Contrary to our expectation, adults with ASD showed superior performance in a haptic-to-visual delayed shape-matching task compared to adults without ASD. Accuracy in discriminating haptic lengths or haptic orientations, which lies within the somatosensory modality, did not differ between adults with ASD and adults without ASD. Moreover, this superior ability in inter-modal haptic-to-visual shape matching was not explained by the score in a unimodal visuospatial rotation task. These results suggest that individuals with ASD are not impaired in integrating sensorimotor traces into a global visual shape and that their multimodal shape representations and haptic-to-visual information transfer are more accurate than those of individuals without ASD. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Degradation of labial information modifies audiovisual speech perception in cochlear-implanted children.

    PubMed

    Huyse, Aurélie; Berthommier, Frédéric; Leybaert, Jacqueline

    2013-01-01

    The aim of the present study was to examine audiovisual speech integration in cochlear-implanted children and in normally hearing children exposed to degraded auditory stimuli. Previous studies have shown that speech perception in cochlear-implant users is biased toward the visual modality when audition and vision provide conflicting information. Our main question was whether an experimentally designed degradation of the visual speech cue would increase the importance of audition in the response pattern. The impact of auditory proficiency was also investigated. A group of 31 children with cochlear implants and a group of 31 normally hearing children matched for chronological age were recruited. All children with cochlear implants had profound congenital deafness and had used their implants for at least 2 years. Participants had to perform an /aCa/ consonant-identification task in which stimuli were presented randomly in three conditions: auditory only, visual only, and audiovisual (congruent and incongruent McGurk stimuli). In half of the experiment, the visual speech cue was normal; in the other half (visual reduction), a degraded visual signal was presented, aimed at preventing high-quality lipreading. The normally hearing children received a spectrally reduced speech signal (simulating the input delivered by the cochlear implant). First, performance in the visual-only and congruent audiovisual modalities was decreased, showing that the visual reduction technique used here was efficient at degrading lipreading. Second, in the incongruent audiovisual trials, visual reduction led to a major increase in the number of auditory-based responses in both groups. Differences between proficient and nonproficient children were found in both groups, with nonproficient children's responses being more visual and less auditory than those of proficient children. Further analysis revealed that differences between visually clear and visually reduced conditions and between groups were not only because of differences in unisensory perception but also because of differences in the process of audiovisual integration per se. Visual reduction led to an increase in the weight of audition, even in cochlear-implanted children, whose perception is generally dominated by vision. This result suggests that the natural bias in favor of vision is not immutable. Audiovisual speech integration partly depends on the experimental situation, which modulates the informational content of the sensory channels and the weight that is awarded to each of them. Consequently, participants, whether deaf with cochlear implants or having normal hearing, not only base their perception on the most reliable modality but also award it an additional weight.

  4. Optimal cue integration in ants.

    PubMed

    Wystrach, Antoine; Mangan, Michael; Webb, Barbara

    2015-10-07

    In situations with redundant or competing sensory information, humans have been shown to perform cue integration, weighting different cues according to their certainty in a quantifiably optimal manner. Ants have been shown to merge the directional information available from their path integration (PI) and visual memory, but as yet it is not clear that they do so in a way that reflects the relative certainty of the cues. In this study, we manipulate the variance of the PI home vector by allowing ants (Cataglyphis velox) to run different distances and testing their directional choice when the PI vector direction is put in competition with visual memory. Ants show progressively stronger weighting of their PI direction as PI length increases. The weighting is quantitatively predicted by modelling the expected directional variance of home vectors of different lengths and assuming optimal cue integration. However, a subsequent experiment suggests ants may not actually compute an internal estimate of the PI certainty, but are using the PI home vector length as a proxy. © 2015 The Author(s).
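
    The modelling step described here, predicting the directional variance of home vectors of different lengths and weighting cues accordingly, can be sketched as below. The inverse-square noise model is an illustrative stand-in for the paper's analysis, chosen only to capture the idea that a fixed positional error subtends a smaller angle at the end of a longer vector.

    ```python
    def pi_direction_weight(vector_length, var_visual, noise_per_unit=1.0):
        """Weight given to the path-integration (PI) direction under
        inverse-variance weighting, with angular PI variance assumed to
        shrink as the home vector grows."""
        var_pi = (noise_per_unit / vector_length) ** 2
        return (1.0 / var_pi) / (1.0 / var_pi + 1.0 / var_visual)

    # Longer homing vectors -> stronger weighting of the PI direction:
    for length in (1.0, 4.0, 16.0):
        print(length, round(pi_direction_weight(length, var_visual=1.0), 3))
    ```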

  5. Kinesthetic information disambiguates visual motion signals.

    PubMed

    Hu, Bo; Knill, David C

    2010-05-25

    Numerous studies have shown that extra-retinal signals can disambiguate motion information created by movements of the eye or head. We report a new form of cross-modal sensory integration in which the kinesthetic information generated by active hand movements essentially captures ambiguous visual motion information. Several previous studies have shown that active movement can bias observers' percepts of bi-stable stimuli; however, these effects seem to be best explained by attentional mechanisms. We show that kinesthetic information can change an otherwise stable perception of motion, providing evidence of genuine fusion between visual and kinesthetic information. The experiments take advantage of the aperture problem, in which the motion of a one-dimensional grating pattern behind an aperture, while geometrically ambiguous, appears to move stably in the grating normal direction. When actively moving the pattern, however, the observer sees the motion to be in the hand movement direction. Copyright 2010 Elsevier Ltd. All rights reserved.

  6. Bayesian integration of position and orientation cues in perception of biological and non-biological forms.

    PubMed

    Thurman, Steven M; Lu, Hongjing

    2014-01-01

    Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased, and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least with respect to form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic form analysis.

  7. The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2012-01-01

    Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and that this effect is relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task. Gaze was less centralized on the face, less time was spent looking at the mouth, and more time was spent looking at the eyes when a concurrent cognitive load task was added to the speech task.

  8. Bimanual Coordination Learning with Different Augmented Feedback Modalities and Information Types

    PubMed Central

    Chiou, Shiau-Chuen; Chang, Erik Chihhung

    2016-01-01

    Previous studies have shown that bimanual coordination learning is more resistant to the removal of augmented feedback when acquired with auditory than with visual channel. However, it is unclear whether this differential “guidance effect” between feedback modalities is due to enhanced sensorimotor integration via the non-dominant auditory channel or strengthened linkage to kinesthetic information under rhythmic input. The current study aimed to examine how modalities (visual vs. auditory) and information types (continuous visuospatial vs. discrete rhythmic) of concurrent augmented feedback influence bimanual coordination learning. Participants either learned a 90°-out-of-phase pattern for three consecutive days with Lissajous feedback indicating the integrated position of both arms, or with visual or auditory rhythmic feedback reflecting the relative timing of the movement. The results showed diverse performance change after practice when the feedback was removed between Lissajous and the other two rhythmic groups, indicating that the guidance effect may be modulated by the type of information provided during practice. Moreover, significant performance improvement in the dual-task condition where the irregular rhythm counting task was applied as a secondary task also suggested that lower involvement of conscious control may result in better performance in bimanual coordination. PMID:26895286

  9. The Impact of Integrating Visuals in an Elementary Creative Writing Process.

    ERIC Educational Resources Information Center

    Bailey, Margaret; And Others

    Most children's books are filled with pictures, yet when schools design curricula to teach writing, they often ignore the role of visual images in the writing process. Historically, methods for teaching writing have focused on text. Even relatively recent techniques like brainstorming and story webbing still focus on verbal information. In some…

  10. Sensory Symptoms and Processing of Nonverbal Auditory and Visual Stimuli in Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Stewart, Claire R.; Sanchez, Sandra S.; Grenesko, Emily L.; Brown, Christine M.; Chen, Colleen P.; Keehn, Brandon; Velasquez, Francisco; Lincoln, Alan J.; Müller, Ralph-Axel

    2016-01-01

    Atypical sensory responses are common in autism spectrum disorder (ASD). While evidence suggests impaired auditory-visual integration for verbal information, findings for nonverbal stimuli are inconsistent. We tested for sensory symptoms in children with ASD (using the Adolescent/Adult Sensory Profile) and examined unisensory and bisensory…

  11. Monitoring an Online Course with the GISMO Tool: A Case Study

    ERIC Educational Resources Information Center

    Mazza, Riccardo; Botturi, Luca

    2007-01-01

    This article presents GISMO, a novel, open source, graphic student-tracking tool integrated into Moodle. GISMO represents a further step in information visualization applied to education, and also a novelty in the field of learning management systems applications. The visualizations of the tool, its uses and the benefits it can bring are…

  12. Multilevel depth and image fusion for human activity detection.

    PubMed

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods only rely on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. In the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. In the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for an activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to the previous methods.
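
    The first fusion level described, applying depth-based filters to detected human/object rectangles, reduces to a simple plausibility test; a hedged sketch in which the box format, depth units, and thresholds are assumptions rather than the paper's values:

    ```python
    import numpy as np

    def filter_detections_by_depth(boxes, depth_map, min_m=0.5, max_m=4.5):
        """Prune 2-D detection rectangles whose depth statistics are
        implausible for an indoor subject or object.

        boxes : iterable of integer (x0, y0, x1, y1) pixel rectangles.
        depth_map : 2-D array of per-pixel depth in meters (e.g., Kinect).
        """
        kept = []
        for (x0, y0, x1, y1) in boxes:
            patch = depth_map[y0:y1, x0:x1]
            if patch.size and min_m <= np.median(patch) <= max_m:
                kept.append((x0, y0, x1, y1))   # plausible depth: keep
        return kept
    ```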

  13. Category identification of changed land-use polygons in an integrated image processing/geographic information system

    NASA Technical Reports Server (NTRS)

    Westmoreland, Sally; Stow, Douglas A.

    1992-01-01

    A framework is proposed for analyzing ancillary data and developing procedures for incorporating ancillary data to aid interactive identification of land-use categories in land-use updates. The procedures were developed for use within an integrated image processing/geographic information system (GIS) that permits simultaneous display of digital image data with the vector land-use data to be updated. With such systems and procedures, automated techniques are integrated with visual-based manual interpretation to exploit the capabilities of both. The procedural framework developed was applied as part of a case study to update a portion of the land-use layer in a regional-scale GIS. About 75 percent of the area in the study site that experienced a change in land use was correctly labeled into 19 categories using the combination of automated and visual interpretation procedures developed in the study.

  14. Geoinformatics in the public service: building a cyberinfrastructure across the geological surveys

    USGS Publications Warehouse

    Allison, M. Lee; Gundersen, Linda C.; Richard, Stephen M.; Keller, G. Randy; Baru, Chaitanya

    2011-01-01

    Advanced information technology infrastructure is increasingly being employed in the Earth sciences to provide researchers with efficient access to massive central databases and to integrate diversely formatted information from a variety of sources. These geoinformatics initiatives enable manipulation, modeling and visualization of data in a consistent way, and are helping to develop integrated Earth models at various scales, and from the near surface to the deep interior. This book uses a series of case studies to demonstrate computer and database use across the geosciences. Chapters are thematically grouped into sections that cover data collection and management; modeling and community computational codes; visualization and data representation; knowledge management and data integration; and web services and scientific workflows. Geoinformatics is a fascinating and accessible introduction to this emerging field for readers across the solid Earth sciences and an invaluable reference for researchers interested in initiating new cyberinfrastructure projects of their own.

  15. An integrated measure of display clutter based on feature content, user knowledge and attention allocation factors.

    PubMed

    Pankok, Carl; Kaber, David B

    2018-05-01

    Existing measures of display clutter in the literature generally exhibit weak correlations with task performance, which limits their utility in safety-critical domains. A literature review led to formulation of an integrated display data- and user knowledge-driven measure of display clutter. A driving simulation experiment was conducted in which participants were asked to search 'high' and 'low' clutter displays for navigation information. Data-driven measures and subjective perceptions of clutter were collected along with patterns of visual attention allocation and driving performance responses during time periods in which participants searched the navigation display for information. The new integrated measure was more strongly correlated with driving performance than other, previously developed measures of clutter, particularly in the case of low-clutter displays. Integrating display data and user knowledge factors with patterns of visual attention allocation shows promise for measuring display clutter and correlation with task performance, particularly for low-clutter displays. Practitioner Summary: A novel measure of display clutter was formulated, accounting for display data content, user knowledge states and patterns of visual attention allocation. The measure was evaluated in terms of correlations with driver performance in a safety-critical driving simulation study. The measure exhibited stronger correlations with task performance than previously defined measures.

  17. Acquired prior knowledge modulates audiovisual integration.

    PubMed

    Van Wanrooij, Marc M; Bremen, Peter; John Van Opstal, A

    2010-05-01

    Orienting responses to audiovisual events in the environment can benefit markedly from the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials repeatedly favours aligned audiovisual inputs, the internal state might assume alignment for the next trial as well, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. The results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.

  18. Congruent and Opposite Neurons as Partners in Multisensory Integration and Segregation

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Hao; Wong, K. Y. Michael; Wang, He; Wu, Si

    Experiments have revealed that, in brain areas where visual and vestibular cues are integrated to infer heading direction, two types of neurons exist in roughly equal numbers: congruent cells respond similarly to visual and vestibular cues, whereas opposite cells respond to the two cues in opposite ways. Congruent neurons are known to be responsible for cue integration, but the computational role of opposite neurons remains largely unknown. We propose that opposite neurons may serve to encode the disparity information between cues that is necessary for multisensory segregation. We build a computational model composed of two reciprocally coupled modules, each consisting of groups of congruent and opposite neurons. Our model reproduces the characteristics of congruent and opposite neurons, and demonstrates that in each module, congruent and opposite neurons can jointly achieve optimal multisensory information integration and segregation. This study sheds light on our understanding of how the brain implements optimal multisensory integration and segregation concurrently in a distributed manner. This work is supported by the Research Grants Council of Hong Kong (N _HKUST606/12, 605813, and 16322616), the National Basic Research Program of China (2014CB846101) and the Natural Science Foundation of China (31261160495).
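
    The contrast between the two cell types can be illustrated with a toy tuning-curve computation. The following is a minimal sketch, not the authors' coupled-module model: each cell gets von Mises tuning to heading, a congruent cell prefers the same heading for both cues, and an opposite cell's vestibular preference is shifted by 180 degrees, so its response grows with cue disparity. All parameter values are hypothetical.

        import numpy as np

        def tuning(theta, pref, kappa=2.0):
            # Von Mises tuning curve centred on the preferred heading (radians).
            return np.exp(kappa * (np.cos(theta - pref) - 1.0))

        pref = 0.0                                   # preferred heading of this cell pair
        vis, vest = np.deg2rad(10), np.deg2rad(190)  # hypothetical conflicting cue headings

        # Congruent cell: identical preference for both cues, so it responds most
        # when the cues agree (supports integration).
        r_congruent = tuning(vis, pref) + tuning(vest, pref)

        # Opposite cell: vestibular preference shifted by pi, so it responds most
        # when the cues disagree (encodes the disparity needed for segregation).
        r_opposite = tuning(vis, pref) + tuning(vest, pref + np.pi)

        print(f"congruent: {r_congruent:.2f}, opposite: {r_opposite:.2f}")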

  19. An Integrated Tone Mapping for High Dynamic Range Image Visualization

    NASA Astrophysics Data System (ADS)

    Liang, Lei; Pan, Jeng-Shyang; Zhuang, Yongjun

    2018-01-01

    There are two types of tone mapping operators for high dynamic range (HDR) image visualization. HDR images mapped by perceptual operators have a strong sense of realism but lose local detail. Empirical operators can maximize the local detail information of an HDR image, but the resulting realism is weaker. A common tone mapping operator suitable for all applications is not available. This paper proposes a novel integrated tone mapping framework that can achieve conversion between empirical and perceptual operators. In this framework, the empirical operator is rendered based on an improved saliency map, which simulates the visual attention mechanism of the human eye in natural scenes. The results of objective evaluation demonstrate the effectiveness of the proposed solution.
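
    Since the abstract names only the ingredients (a perceptual operator, an empirical operator, and a saliency map that converts between them), a minimal sketch under those assumptions might look as follows; the log compression, the blur-based detail layer, and all constants are illustrative choices, not the paper's algorithm.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def integrated_tonemap(hdr, saliency):
            """Blend a global (perceptual) and a local-detail (empirical) operator.

            hdr      -- float array of scene luminance, values > 0
            saliency -- weight map in [0, 1]; 1 favours the detail-preserving side
            """
            # Perceptual side: global logarithmic compression, strong realism.
            perceptual = np.log1p(hdr) / np.log1p(hdr.max())

            # Empirical side: boost local detail relative to a blurred base layer.
            base = gaussian_filter(hdr, sigma=5.0)
            empirical = np.clip(perceptual * hdr / (base + 1e-6), 0.0, 1.0)

            # Saliency-weighted conversion between the two operators.
            return (1.0 - saliency) * perceptual + saliency * empirical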

  20. Situational analysis of communication of HIV and AIDS information to persons with visual impairment: a case of Kang'onga Production Centre in Ndola, Zambia.

    PubMed

    Chintende, Grace Nsangwe; Sitali, Doreen; Michelo, Charles; Mweemba, Oliver

    2017-04-04

    Despite increases in health promotion and educational programs on HIV and AIDS, a lack of information and communication on HIV and AIDS for visually impaired persons persists. The underlying factors that create these information and communication gaps have not been fully explored in Zambia. A situational analysis of HIV and AIDS information dissemination to persons with visual impairment at Kang'onga Production Centre in Ndola was therefore conducted. The study ran from December 2014 to May 2015. A qualitative case study design was employed. The study used two focus group discussions, one with males and one with females; each group comprised twelve participants. Eight in-depth interviews with visually impaired persons and five with key informants working with visually impaired persons were conducted. Data were analysed thematically using NVIVO 8 software. Ethical clearance was obtained from Excellency in Research Ethics and Science (reference number 2014-May-030). It was established that most visually impaired people lacked knowledge of the cause, transmission and treatment of HIV and AIDS, resulting in misconceptions. It was revealed that health promoters and people working with the visually impaired did not have specific HIV and AIDS information programs in Zambia. Further, the media, information education communication and health education were the channels through which the visually impaired accessed HIV and AIDS information. Discrimination, stigma, lack of employment opportunities, funding shortfalls and poverty were among the many challenges the visually impaired persons faced in accessing HIV and AIDS information. Integration of the visually impaired in HIV and AIDS programs would increase funding for economic empowerment and health promotion in order to improve communication of HIV and AIDS information. The study showed that visually impaired persons in Zambia are not catered for in the dissemination of HIV and AIDS information. Available information is not user-friendly because it is in unreadable formats, increasing the potential for misinformation and limiting access. This calls for innovations in the communication of HIV and AIDS health promotion information to the target groups.

  1. Visual Feature Integration Indicated by Phase-Locked Frontal-Parietal EEG Signals

    PubMed Central

    Phillips, Steven; Takeda, Yuji; Singh, Archana

    2012-01-01

    The capacity to integrate multiple sources of information is a prerequisite for complex cognitive ability, such as finding a target uniquely identifiable by the conjunction of two or more features. Recent studies identified greater frontal-parietal synchrony during conjunctive than non-conjunctive (feature) search. Whether this difference also reflects greater information integration, rather than just differences in cognitive strategy (e.g., top-down versus bottom-up control of attention), or task difficulty is uncertain. Here, we examine the first possibility by parametrically varying the number of integrated sources from one to three and measuring phase-locking values (PLV) of frontal-parietal EEG electrode signals, as indicators of synchrony. Linear regressions, under hierarchical false-discovery rate control, indicated significant positive slopes for number of sources on PLV in the 30–38 Hz, 175–250 ms post-stimulus frequency-time band for pairs in the sagittal plane (i.e., F3-P3, Fz-Pz, F4-P4), after equating conditions for behavioural performance (to exclude effects due to task difficulty). No such effects were observed for pairs in the transverse plane (i.e., F3-F4, C3-C4, P3-P4). These results provide support for the idea that anterior-posterior phase-locking in the lower gamma-band mediates integration of visual information. They also provide a potential window into cognitive development, seen as developing the capacity to integrate more sources of information. PMID:22427847
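
    For readers unfamiliar with the dependent measure, the phase-locking value between two electrode signals is computed from band-limited instantaneous phases. PLV is conventionally computed across trials at each frequency-time bin; the sketch below is a simplified variant that instead averages phasors over time for one pair of signals. The 30-38 Hz band follows the abstract, while the filter order is an arbitrary choice.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def plv(x, y, fs, band=(30.0, 38.0)):
            """Phase-locking value between two channels within a frequency band.

            Band-pass both signals, take instantaneous phases via the Hilbert
            transform, then average the unit phasors of the phase difference:
            PLV = |mean(exp(1j * (phi_x - phi_y)))|, which lies in [0, 1].
            """
            b, a = butter(4, band, btype="band", fs=fs)
            phi_x = np.angle(hilbert(filtfilt(b, a, x)))
            phi_y = np.angle(hilbert(filtfilt(b, a, y)))
            return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))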

  2. Visual feature integration indicated by phase-locked frontal-parietal EEG signals.

    PubMed

    Phillips, Steven; Takeda, Yuji; Singh, Archana

    2012-01-01

    The capacity to integrate multiple sources of information is a prerequisite for complex cognitive ability, such as finding a target uniquely identifiable by the conjunction of two or more features. Recent studies identified greater frontal-parietal synchrony during conjunctive than non-conjunctive (feature) search. Whether this difference also reflects greater information integration, rather than just differences in cognitive strategy (e.g., top-down versus bottom-up control of attention), or task difficulty is uncertain. Here, we examine the first possibility by parametrically varying the number of integrated sources from one to three and measuring phase-locking values (PLV) of frontal-parietal EEG electrode signals, as indicators of synchrony. Linear regressions, under hierarchical false-discovery rate control, indicated significant positive slopes for number of sources on PLV in the 30-38 Hz, 175-250 ms post-stimulus frequency-time band for pairs in the sagittal plane (i.e., F3-P3, Fz-Pz, F4-P4), after equating conditions for behavioural performance (to exclude effects due to task difficulty). No such effects were observed for pairs in the transverse plane (i.e., F3-F4, C3-C4, P3-P4). These results provide support for the idea that anterior-posterior phase-locking in the lower gamma-band mediates integration of visual information. They also provide a potential window into cognitive development, seen as developing the capacity to integrate more sources of information.

  3. Unisensory processing and multisensory integration in schizophrenia: a high-density electrical mapping study.

    PubMed

    Stone, David B; Urrea, Laura J; Aine, Cheryl J; Bustillo, Juan R; Clark, Vincent P; Stephen, Julia M

    2011-10-01

    In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. Copyright © 2011 Elsevier Ltd. All rights reserved.
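
    The facilitation test described here compares the audiovisual evoked response against the sum of the unisensory responses. A minimal sketch of that additive-model comparison, assuming trial-averaged ERP arrays stored in hypothetical .npy files:

        import numpy as np

        # Trial-averaged ERPs, shape (channels, timepoints); file names are hypothetical.
        erp_a = np.load("erp_auditory.npy")
        erp_v = np.load("erp_visual.npy")
        erp_av = np.load("erp_audiovisual.npy")

        # Additive model: a nonzero residual indicates multisensory interaction
        # beyond the simple sum of the unisensory evoked responses.
        interaction = erp_av - (erp_a + erp_v)
        print("peak |interaction| per channel:", np.abs(interaction).max(axis=1))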

  4. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    PubMed

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot provide a satisfactory quality of experience for radiologists. This paper developed a medical system that can get medical images from the picture archiving and communication system on the mobile device over the wireless network. In the proposed application, the mobile device got patient information and medical images through a proxy server connecting to the PACS server. Meanwhile, the proxy server integrated a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to providing shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of the medical images over the wireless network of the proposed application were also discussed. The results demonstrated that this proposed medical application could provide a smooth interactive experience in the WLAN and 3G networks.

  5. Audiovisual Asynchrony Detection in Human Speech

    ERIC Educational Resources Information Center

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  6. Development of a novel visuomotor integration paradigm by integrating a virtual environment with mobile eye-tracking and motion-capture systems

    PubMed Central

    Miller, Haylie L.; Bugnariu, Nicoleta; Patterson, Rita M.; Wijayasinghe, Indika; Popa, Dan O.

    2018-01-01

    Visuomotor integration (VMI), the use of visual information to guide motor planning, execution, and modification, is necessary for a wide range of functional tasks. To comprehensively, quantitatively assess VMI, we developed a paradigm integrating virtual environments, motion-capture, and mobile eye-tracking. Virtual environments enable tasks to be repeatable, naturalistic, and varied in complexity. Mobile eye-tracking and minimally-restricted movement enable observation of natural strategies for interacting with the environment. This paradigm yields a rich dataset that may inform our understanding of VMI in typical and atypical development. PMID:29876370

  7. Dissociable Effects of Aging and Mild Cognitive Impairment on Bottom-Up Audiovisual Integration.

    PubMed

    Festa, Elena K; Katz, Andrew P; Ott, Brian R; Tremont, Geoffrey; Heindel, William C

    2017-01-01

    Effective audiovisual sensory integration involves dynamic changes in functional connectivity between superior temporal sulcus and primary sensory areas. This study examined whether disrupted connectivity in early Alzheimer's disease (AD) produces impaired audiovisual integration under conditions requiring greater corticocortical interactions. Audiovisual speech integration was examined in healthy young adult controls (YC), healthy elderly controls (EC), and patients with amnestic mild cognitive impairment (MCI) using McGurk-type stimuli (providing either congruent or incongruent audiovisual speech information) under conditions differing in the strength of bottom-up support and the degree of top-down lexical asymmetry. All groups accurately identified auditory speech under congruent audiovisual conditions, and displayed high levels of visual bias under strong bottom-up incongruent conditions. Under weak bottom-up incongruent conditions, however, EC and amnestic MCI groups displayed opposite patterns of performance, with enhanced visual bias in the EC group and reduced visual bias in the MCI group relative to the YC group. Moreover, there was no overlap between the EC and MCI groups in individual visual bias scores reflecting the change in audiovisual integration from the strong to the weak stimulus conditions. Top-down lexicality influences on visual biasing were observed only in the MCI patients under weaker bottom-up conditions. Results support a deficit in bottom-up audiovisual integration in early AD attributable to disruptions in corticocortical connectivity. Given that this deficit is not simply an exacerbation of changes associated with healthy aging, tests of audiovisual speech integration may serve as sensitive and specific markers of the earliest cognitive change associated with AD.

  8. Object Representations in Human Visual Cortex Formed Through Temporal Integration of Dynamic Partial Shape Views.

    PubMed

    Orlov, Tanya; Zohary, Ehud

    2018-01-17

    We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. These findings provide strong evidence for a global encoding of shape in the LOC regardless of integration processes required to generate the shape percept. SIGNIFICANCE STATEMENT Visual objects are recognized through spatial integration of features available simultaneously on the retina. The lateral occipital complex (LOC) represents shape faithfully in such conditions even if the object is partially occluded. However, shape must sometimes be reconstructed over both space and time. Such is the case in anorthoscopic perception, when an object is moving behind a narrow slit. In this scenario, spatial information is limited at any moment so the whole-shape percept can only be inferred by integration of successive shape views over time. We find that LOC carries shape-specific information recovered using such temporal integration processes. The shape representation is invariant to slit orientation and is similar to that evoked by a fully viewed image. Existing models of object recognition lack such capabilities. Copyright © 2018 the authors 0270-6474/18/380659-20$15.00/0.

  9. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067

  10. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    PubMed

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network reproduces the audiovisual phenomena observed in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contributions of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of the visual cortices in the two phenomena: auditory enhancement of conscious visual detection is conditional on surviving V1 islands, while visual enhancement of auditory localization persists even after complete V1 damage. The present study may contribute to advancing understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. a Web-Based Platform for Visualizing Spatiotemporal Dynamics of Big Taxi Data

    NASA Astrophysics Data System (ADS)

    Xiong, H.; Chen, L.; Gui, Z.

    2017-09-01

    With more and more vehicles equipped with the Global Positioning System (GPS), access to large-scale taxi trajectory data has become increasingly easy. Taxis are valuable sensors, and the information associated with taxi trajectories can provide unprecedented insight into many aspects of city life. But analysing these data presents many challenges. Visualization of taxi data is an efficient way to represent its distributions and structures and to reveal hidden patterns in the data. However, most existing visualization systems have shortcomings. On the one hand, passenger loading status and speed information cannot be expressed. On the other hand, a single visualization form limits the information presentation. In view of these problems, this paper designs and implements a visualization system in which colour and shape indicate passenger loading status and speed information, and various forms of taxi visualization are integrated. The main work is as follows: 1. Pre-processing and storing the taxi data in a MongoDB database. 2. Visualizing hotspots of taxi pickup points: using the DBSCAN clustering algorithm, we cluster the extracted passenger pickup locations to produce passenger hotspots (see the sketch below). 3. Visualizing the dynamics of taxi moving trajectories with interactive animation: we use a thinning algorithm to reduce the amount of data and design a preloading strategy to load the data smoothly. Colour and shape are used to visualize the taxi trajectory data.
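
    Step 2 of the pipeline clusters pickup locations with DBSCAN, for which scikit-learn offers a standard implementation. In this sketch the input file name, the eps radius, and the minimum cluster size are illustrative stand-ins, not the paper's settings:

        import numpy as np
        from sklearn.cluster import DBSCAN

        # pickups: (n, 2) array of [longitude, latitude] taxi pickup events.
        pickups = np.load("pickups.npy")          # hypothetical input file

        # eps of ~0.002 degrees is roughly 200 m; a hotspot needs >= 50 pickups.
        labels = DBSCAN(eps=0.002, min_samples=50).fit_predict(pickups)

        # Label -1 marks noise; every other label is one pickup hotspot.
        for k in sorted(set(labels) - {-1}):
            cluster = pickups[labels == k]
            print(f"hotspot {k}: {len(cluster)} pickups, centre {cluster.mean(axis=0)}")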

  12. Spatial integration and cortical dynamics.

    PubMed

    Gilbert, C D; Das, A; Ito, M; Kapadia, M; Westheimer, G

    1996-01-23

    Cells in adult primary visual cortex are capable of integrating information over much larger portions of the visual field than was originally thought. Moreover, their receptive field properties can be altered by the context within which local features are presented and by changes in visual experience. The substrate for both spatial integration and cortical plasticity is likely to be found in a plexus of long-range horizontal connections, formed by cortical pyramidal cells, which link cells within each cortical area over distances of 6-8 mm. The relationship between horizontal connections and cortical functional architecture suggests a role in visual segmentation and spatial integration. The distribution of lateral interactions within striate cortex was visualized with optical recording, and their functional consequences were explored by using comparable stimuli in human psychophysical experiments and in recordings from alert monkeys. They may represent the substrate for perceptual phenomena such as illusory contours, surface fill-in, and contour saliency. The dynamic nature of receptive field properties and cortical architecture has been seen over time scales ranging from seconds to months. One can induce a remapping of the topography of visual cortex by making focal binocular retinal lesions. Shorter-term plasticity of cortical receptive fields was observed following brief periods of visual stimulation. The mechanisms involved entailed, for the short-term changes, altering the effectiveness of existing cortical connections, and for the long-term changes, sprouting of axon collaterals and synaptogenesis. The mutability of cortical function implies a continual process of calibration and normalization of the perception of visual attributes that is dependent on sensory experience throughout adulthood and might further represent the mechanism of perceptual learning.

  13. Single cell integration of animate form, motion and location in the superior temporal cortex of the macaque monkey.

    PubMed

    Jellema, Tjeerd; Maassen, Gerard; Perrett, David I

    2004-07-01

    This study investigated the cellular mechanisms in the anterior part of the superior temporal sulcus (STSa) that underlie the integration of different features of the same visually perceived animate object. Three visual features were systematically manipulated: form, motion and location. In 58% of a population of cells selectively responsive to the sight of a walking agent, the location of the agent significantly influenced the cell's response. The influence of position was often evident in intricate two- and three-way interactions with the factors form and/or motion. For only one of the 31 cells tested, the response could be explained by just a single factor. For all other cells at least two factors, and for half of the cells (52%) all three factors, played a significant role in controlling responses. Our findings support a reformulation of the Ungerleider and Mishkin model, which envisages a subdivision of the visual processing into a ventral 'what' and a dorsal 'where' stream. We demonstrated that at least part of the temporal cortex ('what' stream) makes ample use of visual spatial information. Our findings open up the prospect of a much more elaborate integration of visual properties of animate objects at the single cell level. Such integration may support the comprehension of animals and their actions.

  14. Integrating Spherical Panoramas and Maps for Visualization of Cultural Heritage Objects Using Virtual Reality Technology.

    PubMed

    Koeva, Mila; Luleva, Mila; Maldjanski, Plamen

    2017-04-11

    Development and virtual representation of 3D models of Cultural Heritage (CH) objects has triggered great interest over the past decade. The main reason for this is the rapid development in the fields of photogrammetry and remote sensing, laser scanning, and computer vision. The advantages of using 3D models for restoration, preservation, and documentation of valuable historical and architectural objects have been demonstrated numerous times by scientists in the field. Moreover, 3D model visualization in virtual reality has been recognized as an efficient, fast, and easy way of representing a variety of objects worldwide for present-day users, who have stringent requirements and high expectations. However, the main focus of recent research is the visual, geometric, and textural characteristics of a single concrete object, while the integration of large numbers of models with additional information, such as historical overview, detailed description, and location, is missing. Such integrated information can be beneficial not only for tourism but also for accurate documentation. For that reason, we demonstrate in this paper an integration of high-resolution spherical panoramas, a variety of maps, GNSS, sound, video, and text information for the representation of numerous cultural heritage objects. These are then displayed in a web-based portal with an intuitive interface. Users have the opportunity to choose freely from the provided information and decide for themselves what is interesting to visit. Based on the created web application, we provide suggestions and guidelines for similar studies. We selected objects located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. The methods used in this research are applicable to any type of spherical or cylindrical images and can easily be followed and applied in various domains. After a visual and metric assessment of the panoramas and an evaluation of the web portal, we conclude that this novel approach is a very effective, fast, informative, and accurate way to present, disseminate, and document cultural heritage objects.

  15. Visualization portal for genetic variation (VizGVar): a tool for interactive visualization of SNPs and somatic mutations in exons, genes and protein domains.

    PubMed

    Solano-Román, Antonio; Alfaro-Arias, Verónica; Cruz-Castillo, Carlos; Orozco-Solano, Allan

    2018-03-15

    VizGVar was designed to meet the growing need of the research community for improved genomic and proteomic data viewers that benefit from better information visualization. We implemented a new information architecture and applied user centered design principles to provide a new improved way of visualizing genetic information and protein data related to human disease. VizGVar connects the entire database of Ensembl protein motifs, domains, genes and exons with annotated SNPs and somatic variations from PharmGKB and COSMIC. VizGVar precisely represents genetic variations and their respective location by colored curves to designate different types of variations. The structured hierarchy of biological data is reflected in aggregated patterns through different levels, integrating several layers of information at once. VizGVar provides a new interactive, web-based JavaScript visualization of somatic mutations and protein variation, enabling fast and easy discovery of clinically relevant variation patterns. VizGVar is accessible at http://vizport.io/vizgvar; http://vizport.io/vizgvar/doc/. asolano@broadinstitute.org or allan.orozcosolano@ucr.ac.cr.

  16. [Automated anesthesia record system].

    PubMed

    Zhu, Tao; Liu, Jin

    2005-12-01

    Based on a client/server architecture, automated anesthesia record software running under the Windows operating system on a network has been developed and programmed with Microsoft Visual C++ 6.0, Visual Basic 6.0 and SQL Server. The system can handle the patient's information throughout anesthesia. It can collect and integrate data from several kinds of medical equipment, such as monitors, infusion pumps and anesthesia machines, automatically and in real time. The system then generates the anesthesia sheets automatically. The record system makes the anesthesia record more accurate and complete and can raise the anesthesiologist's working efficiency.

  17. Visual-Cerebellar Pathways and Their Roles in the Control of Avian Flight.

    PubMed

    Wylie, Douglas R; Gutiérrez-Ibáñez, Cristián; Gaede, Andrea H; Altshuler, Douglas L; Iwaniuk, Andrew N

    2018-01-01

    In this paper, we review the connections and physiology of visual pathways to the cerebellum in birds and consider their role in flight. We emphasize that there are two visual pathways to the cerebellum. One is to the vestibulocerebellum (folia IXcd and X) that originates from two retinal-recipient nuclei that process optic flow: the nucleus of the basal optic root (nBOR) and the pretectal nucleus lentiformis mesencephali (LM). The second is to the oculomotor cerebellum (folia VI-VIII), which receives optic flow information, mainly from LM, but also local visual motion information from the optic tectum, and other visual information from the ventral lateral geniculate nucleus (Glv). The tectum, LM and Glv are all intimately connected with the pontine nuclei, which also project to the oculomotor cerebellum. We believe this rich integration of visual information in the cerebellum is important for analyzing motion parallax that occurs during flight. Finally, we extend upon a suggestion by Ibbotson (2017) that the hypertrophy that is observed in LM in hummingbirds might be due to an increase in the processing demands associated with the pathway to the oculomotor cerebellum as they fly through a cluttered environment while feeding.

  18. Encoding color information for visual tracking: Algorithms and benchmark.

    PubMed

    Liang, Pengpeng; Blasch, Erik; Ling, Haibin

    2015-12-01

    While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
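
    The chromatic models encoded into trackers in studies like this are typically standard color-space conversions. A minimal sketch with OpenCV; the four spaces shown are common examples, since the abstract does not enumerate the paper's ten models:

        import cv2

        frame = cv2.imread("frame.png")   # hypothetical file; OpenCV loads BGR

        # A few chromatic models a grayscale tracker could be re-encoded with.
        color_models = {
            "gray":  cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY),
            "hsv":   cv2.cvtColor(frame, cv2.COLOR_BGR2HSV),
            "lab":   cv2.cvtColor(frame, cv2.COLOR_BGR2LAB),
            "ycrcb": cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb),
        }
        for name, img in color_models.items():
            print(name, img.shape)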

  19. CasCADe: A Novel 4D Visualization System for Virtual Construction Planning.

    PubMed

    Ivson, Paulo; Nascimento, Daniel; Celes, Waldemar; Barbosa, Simone Dj

    2018-01-01

    Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.

  20. Understanding pictorial information in biology: students' cognitive activities and visual reading strategies

    NASA Astrophysics Data System (ADS)

    Brandstetter, Miriam; Sandmann, Angela; Florian, Christine

    2017-06-01

    In the classroom, scientific content is increasingly communicated through visual forms of representation. Students' learning outcomes rely on their ability to read and understand pictorial information. Understanding pictorial information in biology requires cognitive effort and can be challenging for students. Yet evidence-based knowledge about students' visual reading strategies during the process of understanding pictorial information is lacking. Therefore, 42 students aged 14-15 were asked to think aloud while trying to understand visual representations of the blood circulatory system and the patellar reflex. A category system was developed differentiating 16 categories of cognitive activities. A Principal Component Analysis revealed two underlying patterns of activities that can be interpreted as visual reading strategies: 1. inferences predominated by use of a problem-solving schema; 2. inferences predominated by recall of prior content knowledge. Each pattern consists of a specific set of cognitive activities that reflect selection, organisation and integration of pictorial information as well as different levels of expertise. The results give detailed insights into the cognitive activities of students who were required to understand pictorial information about complex organ systems. They provide an evidence-based foundation for deriving instructional aids that can promote students' pictorial-information-based learning at different levels of expertise.

  1. Driver Vision Based Perception-Response Time Prediction and Assistance Model on Mountain Highway Curve.

    PubMed

    Li, Yi; Chen, Yuren

    2016-12-30

    To make driving assistance systems more humanized, this study focused on the prediction and assistance of drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in the driver's vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and mechanical status (last second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements had an important influence on drivers' perception-response time. Compared with roadside passive road safety infrastructure, proper visual geometry design, timely visual guidance, and the completeness of a curve's visual information are significant factors for drivers' perception-response time.
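
    A multinomial log-linear model of a categorical response can be fitted as multinomial logistic regression. A minimal sketch with scikit-learn, using hypothetical feature and label files in place of the field-test data:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical predictors: curve geometry in the driver's visual field,
        # traffic/road environment codes, and last-second mechanical status.
        X = np.load("curve_features.npy")
        # Perception-response time discretised into classes (e.g. 0=fast ... 2=slow).
        y = np.load("prt_class.npy")

        model = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, y)
        print(model.predict_proba(X[:1]))   # class probabilities for one new curve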

  2. Deficient multisensory integration in schizophrenia: an event-related potential study.

    PubMed

    Stekelenburg, Jeroen J; Maes, Jan Pieter; Van Gool, Arthur R; Sitskoorn, Margriet; Vroomen, Jean

    2013-07-01

    In many natural audiovisual events (e.g., the sight of a face articulating the syllable /ba/), the visual signal precedes the sound and thus allows observers to predict the onset and the content of the sound. In healthy adults, the N1 component of the event-related brain potential (ERP), reflecting neural activity associated with basic sound processing, is suppressed if a sound is accompanied by a video that reliably predicts sound onset. If the sound does not match the content of the video (e.g., hearing /ba/ while lipreading /fu/), the later occurring P2 component is affected. Here, we examined whether these visual information sources affect auditory processing in patients with schizophrenia. The electroencephalography (EEG) was recorded in 18 patients with schizophrenia and compared with that of 18 healthy volunteers. As stimuli we used video recordings of natural actions in which visual information preceded and predicted the onset of the sound that was either congruent or incongruent with the video. For the healthy control group, visual information reduced the auditory-evoked N1 if compared to a sound-only condition, and stimulus-congruency affected the P2. This reduction in N1 was absent in patients with schizophrenia, and the congruency effect on the P2 was diminished. Distributed source estimations revealed deficits in the network subserving audiovisual integration in patients with schizophrenia. The results show a deficit in multisensory processing in patients with schizophrenia and suggest that multisensory integration dysfunction may be an important and, to date, under-researched aspect of schizophrenia. Copyright © 2013. Published by Elsevier B.V.

  3. Novel method of extracting motion from natural movies.

    PubMed

    Suzuki, Wataru; Ichinohe, Noritaka; Tani, Toshiki; Hayami, Taku; Miyakawa, Naohisa; Watanabe, Satoshi; Takeichi, Hiroshige

    2017-11-01

    The visual system in primates can be segregated into motion and shape pathways. Interaction occurs at multiple stages along these pathways. Processing of shape-from-motion and biological motion is considered to be a higher-order integration process involving motion and shape information. However, relatively limited types of stimuli have been used in previous studies on these integration processes. We propose a new algorithm to extract object motion information from natural movies and to move random dots in accordance with the information. The object motion information is extracted by estimating the dynamics of local normal vectors of the image intensity projected onto the x-y plane of the movie. An electrophysiological experiment on two adult common marmoset monkeys (Callithrix jacchus) showed that the natural and random dot movies generated with this new algorithm yielded comparable neural responses in the middle temporal visual area. In principle, this algorithm provided random dot motion stimuli containing shape information for arbitrary natural movies. This new method is expected to expand the neurophysiological and psychophysical experimental protocols to elucidate the integration processing of motion and shape information in biological systems. The novel algorithm proposed here was effective in extracting object motion information from natural movies and provided new motion stimuli to investigate higher-order motion information processing. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
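
    Locally, only the motion component along the intensity gradient (the "normal" direction) is recoverable from two frames, which is the idea behind tracking the dynamics of local normal vectors. The following is a minimal normal-flow sketch under that reading, not the authors' exact algorithm:

        import numpy as np

        def normal_flow(prev, curr, eps=1e-3):
            """Per-pixel normal flow between two grayscale frames (float arrays)."""
            Iy, Ix = np.gradient(curr)   # spatial intensity gradients
            It = curr - prev             # temporal derivative
            mag2 = Ix**2 + Iy**2
            # Motion component along the gradient: u_n = -It * grad(I) / |grad(I)|^2.
            u = np.where(mag2 > eps, -It * Ix / (mag2 + eps), 0.0)
            v = np.where(mag2 > eps, -It * Iy / (mag2 + eps), 0.0)
            return u, v

        # Random dots sampled at image positions could then be displaced by the
        # (u, v) field to build a motion-matched random-dot movie.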

  4. Cross-frequency synchronization connects networks of fast and slow oscillations during visual working memory maintenance.

    PubMed

    Siebenhühner, Felix; Wang, Sheng H; Palva, J Matias; Palva, Satu

    2016-09-26

    Neuronal activity in sensory and fronto-parietal (FP) areas underlies the representation and attentional control, respectively, of sensory information maintained in visual working memory (VWM). Within these regions, beta/gamma phase-synchronization supports the integration of sensory functions, while synchronization in theta/alpha bands supports the regulation of attentional functions. A key challenge is to understand which mechanisms integrate neuronal processing across these distinct frequencies and thereby the sensory and attentional functions. We investigated whether such integration could be achieved by cross-frequency phase synchrony (CFS). Using concurrent magneto- and electroencephalography, we found that CFS was load-dependently enhanced between theta and alpha-gamma and between alpha and beta-gamma oscillations during VWM maintenance among visual, FP, and dorsal attention (DA) systems. CFS also connected the hubs of within-frequency-synchronized networks and its strength predicted individual VWM capacity. We propose that CFS integrates processing among synchronized neuronal networks from theta to gamma frequencies to link sensory and attentional functions.
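
    Cross-frequency phase synchrony generalizes within-frequency phase locking to an n:m ratio between band-limited phases. A minimal sketch for a 1:10 theta-gamma example; the band edges, filter order, and ratio are illustrative, not the study's analysis settings:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def band_phase(x, fs, lo, hi):
            # Instantaneous phase of x within the [lo, hi] Hz band.
            b, a = butter(4, [lo, hi], btype="band", fs=fs)
            return np.angle(hilbert(filtfilt(b, a, x)))

        def cfs(x, y, fs, f_slow=(4.0, 8.0), f_fast=(40.0, 80.0), n=10, m=1):
            """n:m cross-frequency phase synchrony (here theta:gamma at 1:10).

            The phases are n:m locked when n * phi_slow - m * phi_fast stays
            stable over time; the value lies in [0, 1].
            """
            phi_slow = band_phase(x, fs, *f_slow)
            phi_fast = band_phase(y, fs, *f_fast)
            return np.abs(np.mean(np.exp(1j * (n * phi_slow - m * phi_fast))))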

  5. The role of emotion in dynamic audiovisual integration of faces and voices

    PubMed Central

    Kotz, Sonja A.; Tavano, Alessandro; Schröger, Erich

    2015-01-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration. PMID:25147273

  6. Visual and tactile information in double bass intonation control.

    PubMed

    Lage, Guilherme Menezes; Borém, Fausto; Vieira, Maurílio Nunes; Barreiros, João Pardal

    2007-04-01

    Traditionally, the teaching of intonation on the non-tempered orchestral strings (violin, viola, cello, and double bass) has resorted to the auditory and proprioceptive senses only. This study aims at understanding the role of visual and tactile information in the control of the non-tempered intonation of the acoustic double bass. Eight musicians played 11 trials of an atonal sequence of musical notes on two double basses of different sizes under different sensorial constraints. The accuracy of the played notes was analyzed by measuring their frequencies and comparing them with the respective target values. The main finding was that performance integrating visual and tactile information was superior to the other conditions in the control of double bass intonation. This contradicts the traditional belief that proprioception and hearing are the most effective sources of feedback information in the performance of stringed instruments.

  7. Integration of Open Educational Resources in Undergraduate Chemistry Teaching--A Mapping Tool and Lecturers' Considerations

    ERIC Educational Resources Information Center

    Feldman-Maggor, Yael; Rom, Amira; Tuvi-Arad, Inbal

    2016-01-01

    This study examines chemistry lecturers' considerations for using open educational resources (OER) in their teaching. Recent technological developments provide innovative approaches for teaching chemistry and visualizing chemical phenomena. End users' improved ability to upload information online enables integration of various pedagogical models…

  8. Cognitive neuroscience: integration of sight and sound outside of awareness?

    PubMed

    Noel, Jean-Paul; Wallace, Mark; Blake, Randolph

    2015-02-16

    A recent study found that auditory and visual information can be integrated even when you are completely unaware of hearing or seeing the paired stimuli--but only if you have received prior, conscious exposure to the paired stimuli. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. ToxPi GUI: An interactive visualization tool for transparent integration of data from diverse sources of evidence

    EPA Science Inventory

    Motivation: Scientists and regulators are often faced with complex decisions, where use of scarce resources must be prioritized using collections of diverse information. The Toxicological Prioritization Index (ToxPi™) was developed to enable integration of multiple sources of evi...

  10. Effects of auditory stimuli in the horizontal plane on audiovisual integration: an event-related potential study.

    PubMed

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than to information arriving from either side.

  11. Effects of Auditory Stimuli in the Horizontal Plane on Audiovisual Integration: An Event-Related Potential Study

    PubMed Central

    Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong

    2013-01-01

    This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160–200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360–400 milliseconds. Our results confirmed that audiovisual integration was also elicited, even though auditory stimuli were presented behind the participant, but no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than to information arriving from either side. PMID:23799097

  12. Effects of Temporal Integration on the Shape of Visual Backward Masking Functions

    ERIC Educational Resources Information Center

    Francis, Gregory; Cho, Yang Seok

    2008-01-01

    Many studies of cognition and perception use a visual mask to explore the dynamics of information processing of a target. Especially important in these applications is the time between the target and mask stimuli. A plot of some measure of target visibility against stimulus onset asynchrony is called a masking function, which can sometimes be…

  13. Getting from Here to There and Knowing Where: Teaching Global Positioning Systems to Students with Visual Impairments

    ERIC Educational Resources Information Center

    Phillips, Craig L.

    2011-01-01

    Global Positioning Systems' (GPS) technology is available for individuals with visual impairments to use in wayfinding and address Lowenfeld's "three limitations of blindness." The considerations and methodologies for teaching GPS usage have developed over time as GPS information and devices have been integrated into orientation and mobility…

  14. The effect of visualizing healthy eaters and mortality reminders on nutritious grocery purchases: an integrative terror management and prototype willingness analysis.

    PubMed

    McCabe, Simon; Arndt, Jamie; Goldenberg, Jamie L; Vess, Matthew; Vail, Kenneth E; Gibbons, Frederick X; Rogers, Ross

    2015-03-01

    To use insights from an integration of the terror management health model and the prototype willingness model to inform and improve nutrition-related behavior using an ecologically valid outcome. Prior to shopping, grocery shoppers were exposed to a reminder of mortality (or pain) and then visualized a healthy (vs. neutral) prototype. Receipts were collected postshopping and food items purchased were coded using a nutrition database. Compared with those in the control conditions, participants who received the mortality reminder and who were led to visualize a healthy eater prototype purchased more nutritious foods. The integration of the terror management health model and the prototype willingness model has the potential for both basic and applied advances and offers a generative ground for future research. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  15. Video content parsing based on combined audio and visual information

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-08-01

    While previous research on audiovisual data segmentation and indexing primarily focuses on the pictorial part, significant clues contained in the accompanying audio flow are often ignored. A fully functional system for video content parsing can be achieved more successfully through a proper combination of audio and visual information. By investigating the data structure of different video types, we present tools for both audio and visual content analysis and a scheme for video segmentation and annotation in this research. In the proposed system, video data are segmented into audio scenes and visual shots by detecting abrupt changes in audio and visual features, respectively. Then, each audio scene is categorized and indexed as one of the basic audio types, while a visual shot is represented by keyframes and associated image features. An index table is then generated automatically for each video clip based on the integration of outputs from audio and visual analysis. It is shown that the proposed system provides satisfactory video indexing results.
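
    The abrupt-change detection at the heart of this segmentation scheme can be sketched compactly. The following is a minimal illustration of the idea, not the authors' implementation: thresholds, window sizes, and function names are assumptions.

    ```python
    import numpy as np

    def shot_boundaries(frames, threshold=0.4):
        """Detect visual shot cuts via normalized histogram differences."""
        boundaries, prev_hist = [], None
        for i, frame in enumerate(frames):
            hist, _ = np.histogram(frame, bins=64, range=(0, 255))
            hist = hist / max(hist.sum(), 1)
            # A large jump between consecutive frame histograms is
            # treated as an abrupt change, i.e. the start of a new shot.
            if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
                boundaries.append(i)
            prev_hist = hist
        return boundaries

    def audio_scene_changes(samples, rate, win=0.5, factor=3.0):
        """Flag abrupt changes in short-time energy as audio scene breaks."""
        n_chunks = max(len(samples) // int(win * rate), 1)
        energies = [float(np.mean(c ** 2)) for c in np.array_split(samples, n_chunks)]
        return [i for i in range(1, len(energies))
                if energies[i] > factor * energies[i - 1] + 1e-12
                or factor * energies[i] < energies[i - 1]]
    ```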

  16. Systematic data integration of clinical trial recruitment locations for geographic-based query and visualization

    PubMed Central

    Luo, Jake; Chen, Weiheng; Wu, Min; Weng, Chunhua

    2018-01-01

    Background: Prior studies of clinical trial planning indicate that it is crucial to search and screen recruitment sites before starting to enroll participants. However, there is currently no systematic method to support clinical investigators in searching for candidate recruitment sites according to the clinical trial factors they are interested in. Objective: In this study, we aim at developing a new approach to integrating the location data of over one million heterogeneous recruitment sites that are stored in clinical trial documents. The integrated recruitment location data can be searched and visualized using a map-based information retrieval method. The method enables systematic search and analysis of recruitment sites across a large number of clinical trials. Methods: The location data of more than 1.4 million recruitment sites of over 183,000 clinical trials was normalized and integrated using a geocoding method. The integrated data can be used to support geographic information retrieval of recruitment sites. Additionally, the information of over 6000 clinical trial target disease conditions and close to 4000 interventions was also integrated into the system and linked to the recruitment locations. Such data integration enabled the construction of a novel map-based query system. The system will allow clinical investigators to search and visualize candidate recruitment sites for clinical trials based on target conditions and interventions. Results: The evaluation results showed that the coverage of the geographic location mapping for the 1.4 million recruitment sites was 99.8%. The evaluation of 200 randomly retrieved recruitment sites showed that the correctness of geographic information mapping was 96.5%. The recruitment intensities of the top 30 countries were also retrieved and analyzed. The data analysis results indicated that the recruitment intensity varied significantly across different countries and geographic areas. Conclusion: This study contributed a new data processing framework to extract and integrate the location data of heterogeneous recruitment sites from clinical trial documents. The developed system can support effective retrieval and analysis of potential recruitment sites using target clinical trial factors. PMID:29132636
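
    The geocoding-based normalization and the coverage figure reported above can be illustrated in a few lines of Python. This is a toy sketch: the in-memory gazetteer stands in for whatever geocoding service the authors actually used, and all names are assumptions.

    ```python
    # Toy gazetteer in place of a real geocoding service (illustrative).
    GAZETTEER = {
        "new york, united states": (40.71, -74.01),
        "toronto, canada": (43.65, -79.38),
        "london, united kingdom": (51.51, -0.13),
    }

    def normalize(site):
        """Collapse whitespace and case so textual variants match."""
        return " ".join(site.lower().split())

    def geocode_sites(sites):
        """Map each site to coordinates; report the unmapped ones and coverage."""
        mapped, unmapped = {}, []
        for site in sites:
            coords = GAZETTEER.get(normalize(site))
            if coords:
                mapped[site] = coords
            else:
                unmapped.append(site)
        coverage = len(mapped) / len(sites) if sites else 0.0
        return mapped, unmapped, coverage

    sites = ["New York,  United States", "Toronto, Canada", "Springfield, Oz"]
    mapped, unmapped, coverage = geocode_sites(sites)
    print(f"coverage: {coverage:.1%}")  # 2 of 3 sites resolve here
    ```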

  17. Systematic data integration of clinical trial recruitment locations for geographic-based query and visualization.

    PubMed

    Luo, Jake; Chen, Weiheng; Wu, Min; Weng, Chunhua

    2017-12-01

    Prior studies of clinical trial planning indicate that it is crucial to search and screen recruitment sites before starting to enroll participants. However, there is currently no systematic method to support clinical investigators in searching for candidate recruitment sites according to the clinical trial factors they are interested in. In this study, we aim at developing a new approach to integrating the location data of over one million heterogeneous recruitment sites that are stored in clinical trial documents. The integrated recruitment location data can be searched and visualized using a map-based information retrieval method. The method enables systematic search and analysis of recruitment sites across a large number of clinical trials. The location data of more than 1.4 million recruitment sites of over 183,000 clinical trials was normalized and integrated using a geocoding method. The integrated data can be used to support geographic information retrieval of recruitment sites. Additionally, the information of over 6000 clinical trial target disease conditions and close to 4000 interventions was also integrated into the system and linked to the recruitment locations. Such data integration enabled the construction of a novel map-based query system. The system will allow clinical investigators to search and visualize candidate recruitment sites for clinical trials based on target conditions and interventions. The evaluation results showed that the coverage of the geographic location mapping for the 1.4 million recruitment sites was 99.8%. The evaluation of 200 randomly retrieved recruitment sites showed that the correctness of geographic information mapping was 96.5%. The recruitment intensities of the top 30 countries were also retrieved and analyzed. The data analysis results indicated that the recruitment intensity varied significantly across different countries and geographic areas. This study contributed a new data processing framework to extract and integrate the location data of heterogeneous recruitment sites from clinical trial documents. The developed system can support effective retrieval and analysis of potential recruitment sites using target clinical trial factors. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Visual Cortical Representation of Whole Words and Hemifield-split Word Parts.

    PubMed

    Strother, Lars; Coros, Alexandra M; Vilis, Tutis

    2016-02-01

    Reading requires the neural integration of visual word form information that is split between our retinal hemifields. We examined multiple visual cortical areas involved in this process by measuring fMRI responses while observers viewed words that changed or repeated in one or both hemifields. We were specifically interested in identifying brain areas that exhibit decreased fMRI responses as a result of repeated versus changing visual word form information in each visual hemifield. Our method yielded highly significant effects of word repetition in a previously reported visual word form area (VWFA) in occipitotemporal cortex, which represents hemifield-split words as whole units. We also identified a more posterior occipital word form area (OWFA), which represents word form information in the right and left hemifields independently and is thus both functionally and anatomically distinct from the VWFA. Both the VWFA and the OWFA were left-lateralized in our study and strikingly symmetric in anatomical location relative to known face-selective visual cortical areas in the right hemisphere. Our findings are consistent with the observation that category-selective visual areas come in pairs and support the view that neural mechanisms in left visual cortex--especially those that evolved to support the visual processing of faces--are developmentally malleable and become incorporated into a left-lateralized visual word form network that supports rapid word recognition and reading.

  19. A Method to Quantify Visual Information Processing in Children Using Eye Tracking

    PubMed Central

    Kooiker, Marlou J.G.; Pel, Johan J.M.; van der Steen-Kant, Sanny P.; van der Steen, Johannes

    2016-01-01

    Visual problems that occur early in life can have a major impact on a child's development. Without verbal communication and only based on observational methods, it is difficult to make a quantitative assessment of a child's visual problems. This limits accurate diagnostics in children under the age of 4 years and in children with intellectual disabilities. Here we describe a quantitative method that overcomes these problems. The method uses a remote eye tracker and a four choice preferential looking paradigm to measure eye movement responses to different visual stimuli. The child sits without head support in front of a monitor with integrated infrared cameras. In one of four monitor quadrants a visual stimulus is presented. Each stimulus has a specific visual modality with respect to the background, e.g., form, motion, contrast or color. From the reflexive eye movement responses to these specific visual modalities, output parameters such as reaction times, fixation accuracy and fixation duration are calculated to quantify a child's viewing behavior. With this approach, the quality of visual information processing can be assessed without the use of communication. By comparing results with reference values obtained in typically developing children from 0-12 years, the method provides a characterization of visual information processing in visually impaired children. The quantitative information provided by this method can be advantageous for the field of clinical visual assessment and rehabilitation in multiple ways. The parameter values provide a good basis to: (i) characterize early visual capacities and consequently to enable early interventions; (ii) compare risk groups and follow visual development over time; and (iii) construct an individual visual profile for each child. PMID:27500922

  20. A Method to Quantify Visual Information Processing in Children Using Eye Tracking.

    PubMed

    Kooiker, Marlou J G; Pel, Johan J M; van der Steen-Kant, Sanny P; van der Steen, Johannes

    2016-07-09

    Visual problems that occur early in life can have a major impact on a child's development. Without verbal communication and only based on observational methods, it is difficult to make a quantitative assessment of a child's visual problems. This limits accurate diagnostics in children under the age of 4 years and in children with intellectual disabilities. Here we describe a quantitative method that overcomes these problems. The method uses a remote eye tracker and a four choice preferential looking paradigm to measure eye movement responses to different visual stimuli. The child sits without head support in front of a monitor with integrated infrared cameras. In one of four monitor quadrants a visual stimulus is presented. Each stimulus has a specific visual modality with respect to the background, e.g., form, motion, contrast or color. From the reflexive eye movement responses to these specific visual modalities, output parameters such as reaction times, fixation accuracy and fixation duration are calculated to quantify a child's viewing behavior. With this approach, the quality of visual information processing can be assessed without the use of communication. By comparing results with reference values obtained in typically developing children from 0-12 years, the method provides a characterization of visual information processing in visually impaired children. The quantitative information provided by this method can be advantageous for the field of clinical visual assessment and rehabilitation in multiple ways. The parameter values provide a good basis to: (i) characterize early visual capacities and consequently to enable early interventions; (ii) compare risk groups and follow visual development over time; and (iii) construct an individual visual profile for each child.
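
    As a rough illustration of how such viewing-behavior parameters can be derived from raw gaze samples, the sketch below computes a reaction time, a fixation duration, and an on-target fraction for one trial of a four-quadrant preferential looking task. It is an assumption-laden toy, not the authors' pipeline.

    ```python
    import numpy as np

    def quadrant(x, y):
        """Map a gaze sample on a unit screen (0..1) to a quadrant index 0-3."""
        return (1 if x >= 0.5 else 0) + (2 if y >= 0.5 else 0)

    def gaze_metrics(times, xs, ys, target_quadrant):
        """Per-trial parameters; stimulus onset is at t = 0 seconds."""
        on_target = np.array([quadrant(x, y) == target_quadrant
                              for x, y in zip(xs, ys)])
        if not on_target.any():
            return None                                # no orienting response
        reaction_time = times[int(np.argmax(on_target))]  # first on-target sample
        sample_dt = float(np.median(np.diff(times)))
        fixation_duration = float(on_target.sum()) * sample_dt
        on_target_fraction = float(on_target.mean())
        return reaction_time, fixation_duration, on_target_fraction

    t = np.arange(0.0, 2.0, 0.02)             # 50 Hz eye tracker (assumed)
    xs = np.where(t < 0.4, 0.2, 0.9)          # gaze shifts right after 400 ms
    ys = np.full_like(t, 0.25)
    print(gaze_metrics(t, xs, ys, target_quadrant=1))
    ```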

  1. The effect of visual salience on memory-based choices.

    PubMed

    Pooresmaeili, Arezoo; Bach, Dominik R; Dolan, Raymond J

    2014-02-01

    Deciding whether a stimulus is the "same" or "different" from a previously presented one involves integration among incoming sensory information, working memory, and perceptual decision making. Visual selective attention plays a crucial role in selecting the relevant information that informs a subsequent course of action. Previous studies have mainly investigated the role of visual attention during the encoding phase of working memory tasks. In this study, we investigate whether manipulation of bottom-up attention by changing stimulus visual salience impacts on later stages of memory-based decisions. In two experiments, we asked subjects to identify whether a stimulus had either the same or a different feature to that of a memorized sample. We manipulated the visual salience of the test stimuli by varying a task-irrelevant feature contrast. Subjects chose a visually salient item more often when they looked for matching features and less often when they looked for a nonmatch. This pattern of results indicates that salient items are more likely to be identified as a match. We interpret the findings in terms of capacity limitations at a comparison stage where a visually salient item is more likely to exhaust resources, leading it to be prematurely parsed as a match.

  2. Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis.

    PubMed

    Stein, Manuel; Janetzko, Halldor; Lamprecht, Andreas; Breitkreutz, Thorsten; Zimmermann, Philipp; Goldlucke, Bastian; Schreck, Tobias; Andrienko, Gennady; Grossniklaus, Michael; Keim, Daniel A

    2018-01-01

    Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identification of weaknesses of opposing teams, or assessing performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization, to depict e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is not directly linked to the observed movement context anymore. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.
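
    A key step named above is extracting trajectory data from video and linking it back to the pitch. One standard ingredient is mapping tracked image positions into pitch coordinates with a planar homography estimated from known field landmarks; the sketch below shows only that projection step, with an illustrative matrix, and is not the paper's implementation.

    ```python
    import numpy as np

    def to_pitch(points_px, H):
        """Apply a 3x3 homography to Nx2 image points (perspective divide)."""
        pts = np.hstack([points_px, np.ones((len(points_px), 1))])
        mapped = pts @ H.T
        return mapped[:, :2] / mapped[:, 2:3]

    # Illustrative homography: scales pixels to meters and shifts the origin.
    H = np.array([[0.1, 0.0, -5.0],
                  [0.0, 0.1, -3.0],
                  [0.0, 0.0,  1.0]])
    track_px = np.array([[120.0, 80.0], [140.0, 90.0]])  # one tracked player
    print(to_pitch(track_px, H))  # pitch-space positions, ready to visualize
    ```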

  3. Multisensory integration across the senses in young and old adults

    PubMed Central

    Mahoney, Jeannette R.; Li, Po Ching Clara; Oh-Park, Mooyeon; Verghese, Joe; Holtzer, Roee

    2011-01-01

    Stimuli are processed concurrently and across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M=19.17 yrs) and eighteen old (M=76.44 yrs) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS) auditory-visual (AV) and visual-somatosensory (VS) stimuli. Results revealed that reaction time (RT) to all multisensory pairings was significantly faster than those elicited to the constituent unisensory conditions across age groups; findings that could not be accounted for by simple probability summation. Both young and old participants responded the fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults. PMID:22024545
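
    The claim that multisensory speedups "could not be accounted for by simple probability summation" is conventionally assessed with Miller's race-model inequality, which bounds the multisensory RT distribution by the sum of the unisensory ones. The sketch below shows that test in its simplest form, under the assumption that something like it was used; the paper's exact statistics may differ.

    ```python
    import numpy as np

    def ecdf(rts, t):
        """Empirical cumulative distribution of reaction times at time t."""
        rts = np.sort(np.asarray(rts))
        return np.searchsorted(rts, t, side="right") / len(rts)

    def race_model_violation(rt_a, rt_v, rt_av, n_grid=200):
        """Max amount by which P(RT_av <= t) exceeds F_A(t) + F_V(t).

        Positive values indicate facilitation beyond probability
        summation, i.e. co-activation / multisensory integration.
        """
        lo = min(min(rt_a), min(rt_v), min(rt_av))
        hi = max(max(rt_a), max(rt_v), max(rt_av))
        return max(ecdf(rt_av, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
                   for t in np.linspace(lo, hi, n_grid))

    rng = np.random.default_rng(0)
    rt_a = rng.normal(0.350, 0.040, 500)   # illustrative unisensory RTs (s)
    rt_v = rng.normal(0.340, 0.040, 500)
    rt_av = rng.normal(0.280, 0.030, 500)  # faster than either alone
    print(f"violation: {race_model_violation(rt_a, rt_v, rt_av):.3f}")
    ```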

  4. When seeing outweighs feeling: a role for prefrontal cortex in passive control of negative affect in blindsight.

    PubMed

    Anders, Silke; Eippert, Falk; Wiens, Stefan; Birbaumer, Niels; Lotze, Martin; Wildgruber, Dirk

    2009-11-01

    Affective neuroscience has been strongly influenced by the view that a 'feeling' is the perception of somatic changes and has consequently often neglected the neural mechanisms that underlie the integration of somatic and other information in affective experience. Here, we investigate affective processing by means of functional magnetic resonance imaging in nine cortically blind patients. In these patients, unilateral postgeniculate lesions prevent primary cortical visual processing in part of the visual field which, as a result, becomes subjectively blind. Residual subcortical processing of visual information, however, is assumed to occur in the entire visual field. As we have reported earlier, these patients show significant startle reflex potentiation when a threat-related visual stimulus is shown in their blind visual field. Critically, this was associated with an increase of brain activity in somatosensory-related areas, and an increase in experienced negative affect. Here, we investigated the patients' response when the visual stimulus was shown in the sighted visual field, that is, when it was visible and cortically processed. Despite the fact that startle reflex potentiation was similar in the blind and sighted visual field, patients reported significantly less negative affect during stimulation of the sighted visual field. In other words, when the visual stimulus was visible and received full cortical processing, the patients' phenomenal experience of affect did not closely reflect somatic changes. This decoupling of phenomenal affective experience and somatic changes was associated with an increase of activity in the left ventrolateral prefrontal cortex and a decrease of affect-related somatosensory activity. Moreover, patients who showed stronger left ventrolateral prefrontal cortex activity tended to show a stronger decrease of affect-related somatosensory activity. Our findings show that similar affective somatic changes can be associated with different phenomenal experiences of affect, depending on the depth of cortical processing. They are in line with a model in which the left ventrolateral prefrontal cortex is a relay station that integrates information about subcortically triggered somatic responses and information resulting from in-depth cortical stimulus processing. Tentatively, we suggest that the observed decoupling of somatic responses and experienced affect, and the reduction of negative phenomenal experience, can be explained by a left ventrolateral prefrontal cortex-mediated inhibition of affect-related somatosensory activity.

  5. When seeing outweighs feeling: a role for prefrontal cortex in passive control of negative affect in blindsight

    PubMed Central

    Eippert, Falk; Wiens, Stefan; Birbaumer, Niels; Lotze, Martin; Wildgruber, Dirk

    2009-01-01

    Affective neuroscience has been strongly influenced by the view that a ‘feeling’ is the perception of somatic changes and has consequently often neglected the neural mechanisms that underlie the integration of somatic and other information in affective experience. Here, we investigate affective processing by means of functional magnetic resonance imaging in nine cortically blind patients. In these patients, unilateral postgeniculate lesions prevent primary cortical visual processing in part of the visual field which, as a result, becomes subjectively blind. Residual subcortical processing of visual information, however, is assumed to occur in the entire visual field. As we have reported earlier, these patients show significant startle reflex potentiation when a threat-related visual stimulus is shown in their blind visual field. Critically, this was associated with an increase of brain activity in somatosensory-related areas, and an increase in experienced negative affect. Here, we investigated the patients’ response when the visual stimulus was shown in the sighted visual field, that is, when it was visible and cortically processed. Despite the fact that startle reflex potentiation was similar in the blind and sighted visual field, patients reported significantly less negative affect during stimulation of the sighted visual field. In other words, when the visual stimulus was visible and received full cortical processing, the patients’ phenomenal experience of affect did not closely reflect somatic changes. This decoupling of phenomenal affective experience and somatic changes was associated with an increase of activity in the left ventrolateral prefrontal cortex and a decrease of affect-related somatosensory activity. Moreover, patients who showed stronger left ventrolateral prefrontal cortex activity tended to show a stronger decrease of affect-related somatosensory activity. Our findings show that similar affective somatic changes can be associated with different phenomenal experiences of affect, depending on the depth of cortical processing. They are in line with a model in which the left ventrolateral prefrontal cortex is a relay station that integrates information about subcortically triggered somatic responses and information resulting from in-depth cortical stimulus processing. Tentatively, we suggest that the observed decoupling of somatic responses and experienced affect, and the reduction of negative phenomenal experience, can be explained by a left ventrolateral prefrontal cortex-mediated inhibition of affect-related somatosensory activity. PMID:19767414

  6. Development of driver’s assistant system of additional visual information of blind areas for Gazelle Next

    NASA Astrophysics Data System (ADS)

    Makarov, V.; Korelin, O.; Koblyakov, D.; Kostin, S.; Komandirov, A.

    2018-02-01

    The article is devoted to the development of an Advanced Driver Assistance System (ADAS) for the GAZelle NEXT car. The project is aimed at developing a visual information system for the driver, integrated into the windshield pillars. The developed system implements the following functions: assistance in maneuvering and parking; recognition of road signs; warning the driver about the possibility of a frontal collision; monitoring of blind zones; "transparent" vision through the windshield pillars, widening the field of view behind them; visual and audible information about the traffic situation; lane departure warning and lane keeping; monitoring of the driver's condition; navigation; and an all-round view. The arrangement of the sensors of the developed driver visual information system is presented, the deployment of the system on a prototype vehicle is considered, and possible changes to the interior and dashboard of the car are described. The implementation is aimed at better informing the driver about the surrounding environment and at developing an ergonomic interior for this system within the new functional cabin of the GAZelle NEXT vehicle equipped with the driver visual information system.

  7. Speed of feedforward and recurrent processing in multilayer networks of integrate-and-fire neurons.

    PubMed

    Panzeri, S; Rolls, E T; Battaglia, F; Lavis, R

    2001-11-01

    The speed of processing in the visual cortical areas can be fast, with for example the latency of neuronal responses increasing by only approximately 10 ms per area in the ventral visual system sequence V1 to V2 to V4 to inferior temporal visual cortex. This has led to the suggestion that rapid visual processing can only be based on the feedforward connections between cortical areas. To test this idea, we investigated the dynamics of information retrieval in multiple layer networks using a four-stage feedforward network modelled with continuous dynamics with integrate-and-fire neurons, and associative synaptic connections between stages with a synaptic time constant of 10 ms. Through the implementation of continuous dynamics, we found latency differences in information retrieval of only 5 ms per layer when local excitation was absent and processing was purely feedforward. However, information latency differences increased significantly when non-associative local excitation was included. We also found that local recurrent excitation through associatively modified synapses can contribute significantly to processing in as little as 15 ms per layer, including the feedforward and local feedback processing. Moreover, and in contrast to purely feed-forward processing, the contribution of local recurrent feedback was useful and approximately this rapid even when retrieval was made difficult by noise. These findings suggest that cortical information processing can benefit from recurrent circuits when the allowed processing time per cortical area is at least 15 ms long.
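
    A toy leaky integrate-and-fire cascade makes the latency argument concrete: with a 10 ms time constant, each purely feedforward stage adds on the order of 10 ms before it reaches threshold. The parameters below are assumptions chosen for illustration, not the authors' model (which included associative synapses, recurrence, and noise).

    ```python
    def lif_first_spike(current, tau=0.010, threshold=1.0, dt=0.0001):
        """Time for a leaky integrate-and-fire unit to first cross threshold.

        Integrates dv/dt = -v/tau + current with forward Euler from v = 0.
        """
        v, t = 0.0, 0.0
        while v < threshold and t < 0.5:          # 0.5 s safety cap
            v += dt * (-v / tau + current)
            t += dt
        return t

    # Four feedforward stages, loosely mirroring V1 -> V2 -> V4 -> IT:
    # each layer starts integrating once the previous one has fired.
    cumulative = 0.0
    for layer in range(1, 5):
        stage = lif_first_spike(current=150.0)    # steady state 1.5 > threshold
        cumulative += stage
        print(f"layer {layer}: +{stage * 1e3:.1f} ms, "
              f"cumulative {cumulative * 1e3:.1f} ms")
    ```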

  8. Possible role for fundus autofluorescence as a predictive factor for visual acuity recovery after epiretinal membrane surgery.

    PubMed

    Brito, Pedro N; Gomes, Nuno L; Vieira, Marco P; Faria, Pedro A; Fernandes, Augusto V; Rocha-Sousa, Amândio; Falcão-Reis, Fernando

    2014-02-01

    To study the potential association between fundus autofluorescence, spectral-domain optical coherence tomography, and visual acuity in patients undergoing surgery because of epiretinal membranes. Prospective, interventional case series including 26 patients who underwent vitrectomy because of symptomatic epiretinal membranes. Preoperative evaluation consisted of a complete ophthalmologic examination, autofluorescence, and spectral-domain optical coherence tomography. Studied variables included foveal autofluorescence (fov.AF), photoreceptor inner segment/outer segment (IS/OS) junction line integrity, external limiting membrane integrity, central foveal thickness, and foveal morphology. All examinations were repeated at the first, third, and sixth postoperative months. The main outcome measures were logarithm of the minimum angle of resolution (logMAR) visual acuity, fov.AF integrity, and IS/OS integrity. All cases showing a continuous IS/OS line had an intact fov.AF, whereas patients with IS/OS disruption could have either an increased area of foveal hypoautofluorescence or an intact fov.AF, with the latter being associated with IS/OS integrity recovery on follow-up spectral-domain optical coherence tomography imaging. The only preoperative variables presenting a significant correlation with final visual acuity were baseline visual acuity (P = 0.047) and fov.AF grade (P = 0.023). Recovery of IS/OS line integrity after surgery, in patients with preoperative IS/OS disruption and normal fov.AF, can be explained by the presence of a functional retinal pigment epithelium-photoreceptor complex supporting normal photoreceptor activity. Autofluorescence imaging provides a functional component to the study of epiretinal membranes, complementing the structural information obtained with optical coherence tomography.

  9. Multisensory speech perception without the left superior temporal sulcus.

    PubMed

    Baum, Sarah H; Martin, Randi C; Hamilton, A Cris; Beauchamp, Michael S

    2012-09-01

    Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS but in the right STS, more cortex was active in SJ than in any of the normal controls. Further, the amplitude of the BOLD response to McGurk stimuli in the right STS was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Multisensory Speech Perception Without the Left Superior Temporal Sulcus

    PubMed Central

    Baum, Sarah H.; Martin, Randi C.; Hamilton, A. Cris; Beauchamp, Michael S.

    2012-01-01

    Converging evidence suggests that the left superior temporal sulcus (STS) is a critical site for multisensory integration of auditory and visual information during speech perception. We report a patient, SJ, who suffered a stroke that damaged the left temporo-parietal area, resulting in mild anomic aphasia. Structural MRI showed complete destruction of the left middle and posterior STS, as well as damage to adjacent areas in the temporal and parietal lobes. Surprisingly, SJ demonstrated preserved multisensory integration measured with two independent tests. First, she perceived the McGurk effect, an illusion that requires integration of auditory and visual speech. Second, her perception of morphed audiovisual speech with ambiguous auditory or visual information was significantly influenced by the opposing modality. To understand the neural basis for this preserved multisensory integration, blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) was used to examine brain responses to audiovisual speech in SJ and 23 healthy age-matched controls. In controls, bilateral STS activity was observed. In SJ, no activity was observed in the damaged left STS but in the right STS, more cortex was active in SJ than in any of the normal controls. Further, the amplitude of the BOLD response to McGurk stimuli in the right STS was significantly greater in SJ than in controls. The simplest explanation of these results is a reorganization of SJ's cortical language networks such that the right STS now subserves multisensory integration of speech. PMID:22634292

  11. Optimal Audiovisual Integration in the Ventriloquism Effect But Pervasive Deficits in Unisensory Spatial Localization in Amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-01-01

    Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
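
    The MLE model referenced here has a standard closed form: each cue is weighted by its inverse variance, and the combined estimate is more precise than either cue alone. A minimal sketch, with made-up numbers:

    ```python
    import numpy as np

    def mle_integration(mu_a, sigma_a, mu_v, sigma_v):
        """Inverse-variance (MLE) combination of auditory and visual cues.

        With discrepant cue locations this reproduces the ventriloquism
        effect: the bimodal estimate is pulled toward the more reliable cue.
        """
        w_v = sigma_a ** 2 / (sigma_a ** 2 + sigma_v ** 2)   # weight on vision
        mu_av = w_v * mu_v + (1.0 - w_v) * mu_a
        sigma_av = np.sqrt(sigma_a ** 2 * sigma_v ** 2 /
                           (sigma_a ** 2 + sigma_v ** 2))
        return mu_av, sigma_av

    # Vision twice as precise as audition; a 4 deg audiovisual conflict
    # is resolved mostly in favor of the visual location.
    print(mle_integration(mu_a=4.0, sigma_a=2.0, mu_v=0.0, sigma_v=1.0))
    # -> (0.8, ~0.894): estimate near vision, precision better than either cue
    ```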

  12. Crossmodal adaptation in right posterior superior temporal sulcus during face-voice emotional integration.

    PubMed

    Watson, Rebecca; Latinus, Marianne; Noguchi, Takao; Garrod, Oliver; Crabbe, Frances; Belin, Pascal

    2014-05-14

    The integration of emotional information from the face and voice of other persons is known to be mediated by a number of "multisensory" cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, whether multimodal integration in these regions is attributable to interleaved populations of unisensory neurons responding to face or voice or rather by multimodal neurons receiving input from the two modalities is not fully clear. Here, we examine this question using functional magnetic resonance adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy human adult subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli included in a fast event-related, continuous carryover design. Subjects integrated both face and voice information when categorizing emotion-although there was a greater weighting of face information-and showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed for the first time a crossmodal adaptation effect. Specifically, fMRI signal in the right pSTS was reduced in response to a stimulus in which facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from visual and auditory cortices. Copyright © 2014 the authors 0270-6474/14/346813-09$15.00/0.

  13. How does Interhemispheric Communication in Visual Word Recognition Work? Deciding between Early and Late Integration Accounts of the Split Fovea Theory

    ERIC Educational Resources Information Center

    Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J.

    2009-01-01

    It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision…

  14. Space Shuttle Payload Information Source

    NASA Technical Reports Server (NTRS)

    Griswold, Tom

    2000-01-01

    The Space Shuttle Payload Information Source Compact Disk (CD) is a joint NASA and USA project to introduce Space Shuttle capabilities, payload services and accommodations, and the payload integration process. The CD will be given to new payload customers or to organizations outside of NASA considering using the Space Shuttle as a launch vehicle. The information is high-level in a visually attractive format with a voice over. The format is in a presentation style plus 360 degree views, videos, and animation. Hyperlinks are provided to connect to the Internet for updates and more detailed information on how payloads are integrated into the Space Shuttle.

  15. Management of Information Technology Access Controls

    DTIC Science & Technology

    1991-01-01

    Only citation fragments survive in this excerpt: Management Information Systems (New York: American Elsevier Publishing Company, 1968); Webster's Third New International Dictionary, Unabridged; Gerald M. Ward and Jonathan D. Harris, "Data…"; …Controls: A Visual Approach Through Integrated Management Information Systems (New York: American Elsevier Publishing Company, 1968); Brancheau, James C…

  16. Organic light emitting board for dynamic interactive display

    PubMed Central

    Kim, Eui Hyuk; Cho, Sung Hwan; Lee, Ju Han; Jeong, Beomjin; Kim, Richard Hahnkee; Yu, Seunggun; Lee, Tae-Woo; Shim, Wooyoung; Park, Cheolmin

    2017-01-01

    Interactive displays involve the interfacing of a stimuli-responsive sensor with a visual human-readable response. Here, we describe a polymeric electroluminescence-based stimuli-responsive display method that simultaneously detects external stimuli and visualizes the stimulant object. This organic light-emitting board is capable of both sensing and direct visualization of a variety of conductive information. Simultaneous sensing and visualization of the conductive substance is achieved when the conductive object is coupled with the light emissive material layer on application of alternating current. A variety of conductive materials can be detected regardless of their work functions, and thus information written by a conductive pen is clearly visualized, as is a human fingerprint with natural conductivity. Furthermore, we demonstrate that integration of the organic light-emitting board with a fluidic channel readily allows for dynamic monitoring of metallic liquid flow through the channel, which may be suitable for biological detection and imaging applications. PMID:28406151

  17. Organic light emitting board for dynamic interactive display

    NASA Astrophysics Data System (ADS)

    Kim, Eui Hyuk; Cho, Sung Hwan; Lee, Ju Han; Jeong, Beomjin; Kim, Richard Hahnkee; Yu, Seunggun; Lee, Tae-Woo; Shim, Wooyoung; Park, Cheolmin

    2017-04-01

    Interactive displays involve the interfacing of a stimuli-responsive sensor with a visual human-readable response. Here, we describe a polymeric electroluminescence-based stimuli-responsive display method that simultaneously detects external stimuli and visualizes the stimulant object. This organic light-emitting board is capable of both sensing and direct visualization of a variety of conductive information. Simultaneous sensing and visualization of the conductive substance is achieved when the conductive object is coupled with the light emissive material layer on application of alternating current. A variety of conductive materials can be detected regardless of their work functions, and thus information written by a conductive pen is clearly visualized, as is a human fingerprint with natural conductivity. Furthermore, we demonstrate that integration of the organic light-emitting board with a fluidic channel readily allows for dynamic monitoring of metallic liquid flow through the channel, which may be suitable for biological detection and imaging applications.

  18. Restoring Fort Frontenac in 3D: Effective Usage of 3D Technology for Heritage Visualization

    NASA Astrophysics Data System (ADS)

    Yabe, M.; Goins, E.; Jackson, C.; Halbstein, D.; Foster, S.; Bazely, S.

    2015-02-01

    This paper is composed of three elements: 3D modeling, web design, and heritage visualization. The aim is to use computer graphics design to inform and create an interest in historical visualization by rebuilding Fort Frontenac using 3D modeling and interactive design. The final model will be integrated into an interactive website to learn more about the fort's historic importance. It is apparent that using computer graphics can save time and money when it comes to historical visualization. Visitors do not have to travel to the actual archaeological buildings. They can simply use the Web in their own home to learn about this information virtually. Meticulously following historical records to create a sophisticated restoration of archaeological buildings will draw viewers into visualizations, such as the historical world of Fort Frontenac. As a result, it allows the viewers to effectively understand the fort's social system, habits, and historical events.

  19. Proprioceptive Distance Cues Restore Perfect Size Constancy in Grasping, but Not Perception, When Vision Is Limited.

    PubMed

    Chen, Juan; Sperandio, Irene; Goodale, Melvyn Alan

    2018-03-19

    Our brain integrates information from multiple modalities in the control of behavior. When information from one sensory source is compromised, information from another source can compensate for the loss. What is not clear is whether the nature of this multisensory integration and the re-weighting of different sources of sensory information are the same across different control systems. Here, we investigated whether proprioceptive distance information (position sense of body parts) can compensate for the loss of visual distance cues that support size constancy in perception (mediated by the ventral visual stream) [1, 2] versus size constancy in grasping (mediated by the dorsal visual stream) [3-6], in which the real-world size of an object is computed despite changes in viewing distance. We found that there was perfect size constancy in both perception and grasping in a full-viewing condition (lights on, binocular viewing) and that size constancy in both tasks was dramatically disrupted in the restricted-viewing condition (lights off; monocular viewing of the same but luminescent object through a 1-mm pinhole). Importantly, in the restricted-viewing condition, proprioceptive cues about viewing distance originating from the non-grasping limb (experiment 1) or the inclination of the torso and/or the elbow angle of the grasping limb (experiment 2) compensated for the loss of visual distance cues to enable a complete restoration of size constancy in grasping but only a modest improvement of size constancy in perception. This suggests that the weighting of different sources of sensory information varies as a function of the control system being used. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Development of Integrated Information System for Travel Bureau Company

    NASA Astrophysics Data System (ADS)

    Karma, I. G. M.; Susanti, J.

    2018-01-01

    Information that supports decision-making by the management of a travel bureau company, especially by managers, is frequently delayed or incomplete. Although operations are already computer-assisted, each existing application handles only one particular activity, and the applications are not integrated. This research is intended to produce an integrated information system that handles the overall operational activities of the company. By applying an object-oriented system development approach, the system is built with the Visual Basic .NET programming language and the MySQL database package. The result is a system that consists of four separate program packages: a Reservation System, an AR System, an AP System, and an Accounting System. Based on the output, we conclude that this system is able to produce integrated information on reservations, operations, and finances, yielding up-to-date information that supports operational activities and decision-making by related parties.
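
    As a loose illustration of what "integrated" means here, the sketch below puts the four packages' core records behind one shared schema so that operational and financial data can be joined in a single query. The real system uses Visual Basic .NET with MySQL; the tables, columns, and sample rows below are invented for illustration.

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE reservation (id INTEGER PRIMARY KEY, customer TEXT,
                              tour TEXT, price REAL);          -- Reservation System
    CREATE TABLE receivable  (id INTEGER PRIMARY KEY,
                              reservation_id INTEGER REFERENCES reservation(id),
                              amount REAL, paid INTEGER);      -- AR System
    CREATE TABLE payable     (id INTEGER PRIMARY KEY, supplier TEXT,
                              amount REAL);                    -- AP System
    CREATE TABLE ledger      (id INTEGER PRIMARY KEY, account TEXT,
                              debit REAL, credit REAL);        -- Accounting System
    """)
    con.execute("INSERT INTO reservation VALUES (1, 'A. Putri', 'Ubud tour', 120.0)")
    con.execute("INSERT INTO receivable VALUES (1, 1, 120.0, 0)")

    # One shared schema lets management pull up-to-date, integrated reports.
    print(con.execute("""
        SELECT r.customer, r.tour, a.amount, a.paid
        FROM reservation r JOIN receivable a ON a.reservation_id = r.id
    """).fetchone())
    ```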

  1. Development of a User-Defined Stressor in the Improved Performance Research Integration Tool (IMPRINT) for Conducting Tasks in a Moving Vehicle

    DTIC Science & Technology

    2007-03-01

    Only fragments survive in this excerpt. They concern visual recognition and information processing stressors for tasks performed in a moving vehicle, an accuracy computation of the form Δ = (0.37 × 0.6463) + (0.63 × 0.11…), and the following figure captions: Figure 2. User-defined stressor interface. Figure 3. Stressor levels in IMPRINT. Figure 4. Accuracy stressor definition.

  2. Playing the electric light orchestra—how electrical stimulation of visual cortex elucidates the neural basis of perception

    PubMed Central

    Cicmil, Nela; Krug, Kristine

    2015-01-01

    Vision research has the potential to reveal fundamental mechanisms underlying sensory experience. Causal experimental approaches, such as electrical microstimulation, provide a unique opportunity to test the direct contributions of visual cortical neurons to perception and behaviour. But in spite of their importance, causal methods constitute a minority of the experiments used to investigate the visual cortex to date. We reconsider the function and organization of visual cortex according to results obtained from stimulation techniques, with a special emphasis on electrical stimulation of small groups of cells in awake subjects who can report their visual experience. We compare findings from humans and monkeys, striate and extrastriate cortex, and superficial versus deep cortical layers, and identify a number of revealing gaps in the 'causal map' of visual cortex. Integrating results from different methods and species, we provide a critical overview of the ways in which causal approaches have been used to further our understanding of circuitry, plasticity and information integration in visual cortex. Electrical stimulation not only elucidates the contributions of different visual areas to perception, but also contributes to our understanding of neuronal mechanisms underlying memory, attention and decision-making. PMID:26240421

  3. Rapid recalibration of speech perception after experiencing the McGurk illusion.

    PubMed

    Lüttke, Claudia S; Pérez-Bellido, Alexis; de Lange, Floris P

    2018-03-01

    The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the visual speech and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as 'ada'). We found that only one trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as 'ada'. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to 'ada' (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, signal detection theoretical analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration is dependent on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization.
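
    The signal detection quantities mentioned here, a criterion shift and a sensitivity reduction, have a standard computation from response counts. A minimal sketch with invented counts; here an 'ada' response to an /ada/ sound counts as a hit and an 'ada' response to an /aba/ sound as a false alarm, so recalibration toward /ada/ appears as a more negative criterion c.

    ```python
    from scipy.stats import norm

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        """Sensitivity d' and criterion c for a two-choice categorization."""
        # Log-linear correction keeps rates strictly between 0 and 1.
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
        criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
        return d_prime, criterion

    print(sdt_measures(hits=45, misses=5, false_alarms=15, correct_rejections=35))
    # A lower d' and more negative c than baseline would reflect the reduced
    # sensitivity and the /ada/-ward criterion shift described above.
    ```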

  4. NED-2: A decision support system for integrated forest ecosystem management

    Treesearch

    Mark J. Twery; Peter D. Knopp; Scott A. Thomasma; H. Michael Rauscher; Donald E. Nute; Walter D. Potter; Frederick Maier; Jin Wang; Mayukh Dass; Hajime Uchiyama; Astrid Glende; Robin E. Hoffman

    2005-01-01

    NED-2 is a Windows-based system designed to improve project-level planning and decision making by providing useful and scientifically sound information to natural resource managers. Resources currently addressed include visual quality, ecology, forest health, timber, water, and wildlife. NED-2 expands on previous versions of NED applications by integrating treatment...

  5. NED-2: a decision support system for integrated forest ecosystem management

    Treesearch

    Mark J. Twery; Peter D. Knopp; Scott A. Thomasma; H. Michael Rauscher; Donald E. Nute; Walter D. Potter; Frederick Maier; Jin Wang; Mayukh Dass; Hajime Uchiyama; Astrid Glende; Robin E. Hoffman

    2005-01-01

    NED-2 is a Windows-based system designed to improve project-level planning and decision making by providing useful and scientifically sound information to natural resource managers. Resources currently addressed include visual quality, ecology, forest health, timber, water, and wildlife. NED-2 expands on previous versions of NED applications by integrating treatment...

  6. Integrating management strategies for the mountain pine beetle with multiple-resource management of lodgepole pine forests

    Treesearch

    Mark D. McGregor; Dennis M. Cole

    1985-01-01

    Provides guidelines for integrating practices for managing mountain pine beetle populations with silvicultural practices for enhancing multiple resource values of lodgepole pine forests. Summarizes published and unpublished technical information and recent research on the ecology of pest and host and presents visual and classification criteria for recognizing...

  7. An integrated system for interactive continuous learning of categorical knowledge

    NASA Astrophysics Data System (ADS)

    Skočaj, Danijel; Vrečko, Alen; Mahnič, Marko; Janíček, Miroslav; Kruijff, Geert-Jan M.; Hanheide, Marc; Hawes, Nick; Wyatt, Jeremy L.; Keller, Thomas; Zhou, Kai; Zillich, Michael; Kristan, Matej

    2016-09-01

    This article presents an integrated robot system capable of interactive learning in dialogue with a human. Such a system needs to have several competencies and must be able to process different types of representations. In this article, we describe a collection of mechanisms that enable integration of heterogeneous competencies in a principled way. Central to our design is the creation of beliefs from visual and linguistic information, and the use of these beliefs for planning system behaviour to satisfy internal drives. The system is able to detect gaps in its knowledge and to plan and execute actions that provide information needed to fill these gaps. We propose a hierarchy of mechanisms which are capable of engaging in different kinds of learning interactions, e.g. those initiated by a tutor or by the system itself. We present the theory these mechanisms are built upon and an instantiation of this theory in the form of an integrated robot system. We demonstrate the operation of the system in the case of learning conceptual models of objects and their visual properties.
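
    The gap-detection-and-planning loop described here can be caricatured in a few lines: a belief fuses what has been seen and said about an object, missing attributes are treated as knowledge gaps, and each gap is turned into a clarification action. All names are hypothetical; this is a sketch of the control flow, not the system's actual representations.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Belief:
        """Fused visual + linguistic attributions about one object."""
        object_id: str
        attributes: dict = field(default_factory=dict)  # e.g. {"color": "red"}

    def knowledge_gaps(belief, required=("color", "shape")):
        """Attributes the system cannot yet ground for this object."""
        return [a for a in required if a not in belief.attributes]

    def plan_actions(belief):
        """Turn each gap into a learning interaction (tutor- or self-initiated)."""
        return [f"ask tutor: what is the {gap} of {belief.object_id}?"
                for gap in knowledge_gaps(belief)]

    belief = Belief("mug-1", {"color": "red"})
    for action in plan_actions(belief):
        print(action)   # the system itself initiates learning about "shape"
    ```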

  8. Gunslinger Effect and Müller-Lyer Illusion: Examining Early Visual Information Processing for Late Limb-Target Control.

    PubMed

    Roberts, James W; Lyons, James; Garcia, Daniel B L; Burgess, Raquel; Elliott, Digby

    2017-07-01

    The multiple process model contends that there are two forms of online control for manual aiming: impulse regulation and limb-target control. This study examined the impact of visual information processing for limb-target control. We amalgamated the Gunslinger protocol (i.e., faster movements following a reaction to an external trigger compared with the spontaneous initiation of movement) and Müller-Lyer target configurations into the same aiming protocol. The results showed the Gunslinger effect was isolated at the early portions of the movement (peak acceleration and peak velocity). Reacted aims reached a longer displacement at peak deceleration, but no differences for movement termination. The target configurations manifested terminal biases consistent with the illusion. We suggest the visual information processing demands imposed by reacted aims can be adapted by integrating early feedforward information for limb-target control.

  9. Investigation Of Integrating Three-Dimensional (3-D) Geometry Into The Visual Anatomical Injury Descriptor (Visual AID) Using WebGL

    DTIC Science & Technology

    2011-08-01

    Only fragments survive in this excerpt: the visualizations were generated using the Zygote Human Anatomy 3-D model; use of a reference anatomy independent of personal identification, such as Zygote, allows Visual… Citation fragments: Zygote Human Anatomy 3D Model, 2010. http://www.zygote.com/ (accessed July 26, 2011); Khronos Group Web site, Khronos to Create New Open Standard for…

  10. MRIVIEW: An interactive computational tool for investigation of brain structure and function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranken, D.; George, J.

    MRIVIEW is a software system which uses image processing and visualization to provide neuroscience researchers with an integrated environment for combining functional and anatomical information. Key features of the software include semi-automated segmentation of volumetric head data and an interactive coordinate reconciliation method which utilizes surface visualization. The current system is a precursor to a computational brain atlas. We describe features this atlas will incorporate, including methods under development for visualizing brain functional data obtained from several different research modalities.

  11. In situ visualization and data analysis for turbidity currents simulation

    NASA Astrophysics Data System (ADS)

    Camata, Jose J.; Silva, Vítor; Valduriez, Patrick; Mattoso, Marta; Coutinho, Alvaro L. G. A.

    2018-01-01

    Turbidity currents are underflows responsible for sediment deposits that generate geological formations of interest for the oil and gas industry. libMesh-sedimentation is an application built upon the libMesh library to simulate turbidity currents. In this work, we present the integration of libMesh-sedimentation with in situ visualization and in transit data analysis tools. DfAnalyzer is a provenance-based solution that extracts and relates strategic simulation data in transit from multiple sources for online queries. We integrate libMesh-sedimentation and ParaView Catalyst to perform in situ data analysis and visualization. We present a parallel performance analysis for two turbidity currents simulations showing that the overhead for both in situ visualization and in transit data analysis is negligible. We show that our tools enable monitoring the appearance of sediments at runtime and steering the simulation based on the solver convergence and on visual information about the sediment deposits, thus enhancing the analytical power of turbidity currents simulations.
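
    The coupling pattern, a solver that hands each time step to an analysis hook instead of writing files, can be sketched generically. The hook name, state layout, and steering rule below are invented for illustration and are not the Catalyst or DfAnalyzer APIs.

    ```python
    def solver_loop(n_steps, coprocess, state):
        """Time loop with an in situ hook called once per step (no file I/O)."""
        for step in range(n_steps):
            state["residual"] *= 0.1          # stand-in for one solver iteration
            coprocess(step, state)            # in situ / in transit analysis

    def monitor(step, state):
        """Publish a reduced quantity for online queries and steer on it."""
        if state["residual"] < 1e-6 and state["steer"] is None:
            state["steer"] = "refine sediment-deposit output"
        print(f"step {step}: residual={state['residual']:.1e}, "
              f"steer={state['steer']}")

    solver_loop(8, monitor, {"residual": 1.0, "steer": None})
    ```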

  12. Synthesis maps: visual knowledge translation for the CanIMPACT clinical system and patient cancer journeys.

    PubMed

    Jones, P H; Shakdher, S; Singh, P

    2017-04-01

    Salient findings and interpretations from the CanIMPACT clinical cancer research study are visually represented in two synthesis maps for the purpose of communicating an integrated presentation of the study to clinical cancer researchers and policymakers. Synthesis maps integrate evidence and expertise into a visual narrative for knowledge translation and communication. A clinical system synthesis map represents the current Canadian primary care and cancer practice systems, proposed as a visual knowledge translation from the mixed-methods CanIMPACT study to inform Canadian clinical research, policy, and practice discourses. Two synthesis maps, drawn together from multiple CanIMPACT investigations and sources, were required to articulate critical differences between the clinical system and patient perspectives. The synthesis map of Canada-wide clinical cancer systems illustrates the relationships between primary care and the full cancer continuum. A patient-centred map was developed to represent the cancer (and primary care) journeys as experienced by breast and colorectal cancer patients.

  13. Tools for visually exploring biological networks.

    PubMed

    Suderman, Matthew; Hallett, Michael

    2007-10-15

    Many tools exist for visually exploring biological networks including well-known examples such as Cytoscape, VisANT, Pathway Studio and Patika. These systems play a key role in the development of integrative biology, systems biology and integrative bioinformatics. The trend in the development of these tools is to go beyond 'static' representations of cellular state, towards a more dynamic model of cellular processes through the incorporation of gene expression data, subcellular localization information and time-dependent behavior. We provide a comprehensive review of the relative advantages and disadvantages of existing systems with two goals in mind: to aid researchers in efficiently identifying the appropriate existing tools for data visualization; to describe the necessary and realistic goals for the next generation of visualization tools. In view of the first goal, we provide in the Supplementary Material a systematic comparison of more than 35 existing tools in terms of over 25 different features. Supplementary data are available at Bioinformatics online.

  14. Managing Spatial Selections With Contextual Snapshots

    PubMed Central

    Mindek, P; Gröller, M E; Bruckner, S

    2014-01-01

    Spatial selections are a ubiquitous concept in visualization. By localizing particular features, they can be analysed and compared in different views. However, the semantics of such selections often depend on specific parameter settings and it can be difficult to reconstruct them without additional information. In this paper, we present the concept of contextual snapshots as an effective means for managing spatial selections in visualized data. The selections are automatically associated with the context in which they have been created. Contextual snapshots can also be used as the basis for interactive integrated and linked views, which enable in-place investigation and comparison of multiple visual representations of data. Our approach is implemented as a flexible toolkit with well-defined interfaces for integration into existing systems. We demonstrate the power and generality of our techniques by applying them to several distinct scenarios such as the visualization of simulation data, the analysis of historical documents and the display of anatomical data. PMID:25821284
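
    As a rough illustration of the core idea, a contextual snapshot can be thought of as a selection stored together with the parameter context in which it was made, so it can be reinstated later. The sketch below is a hypothetical Python rendering of that data structure, not the authors' toolkit API; all field names are invented.

        # Minimal data-structure sketch of the contextual-snapshot idea:
        # a spatial selection plus the parameter values under which it
        # was created. Illustrative only.
        from dataclasses import dataclass

        @dataclass
        class ContextualSnapshot:
            selection_mask: list     # e.g. selected voxel/element ids
            view_parameters: dict    # camera, transfer function, time step...
            annotation: str = ""

            def matches(self, current_parameters: dict) -> bool:
                """Is the stored context compatible with the current view state?"""
                return all(current_parameters.get(k) == v
                           for k, v in self.view_parameters.items())

        snap = ContextualSnapshot(selection_mask=[101, 102, 240],
                                  view_parameters={"time_step": 12, "iso": 0.5},
                                  annotation="anomaly near inlet")
        print(snap.matches({"time_step": 12, "iso": 0.5, "camera": "front"}))  # True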

  15. From Physical Campus Space to a Full-view Figure: University Atlas Compiling Based on `Information Design' Concept

    NASA Astrophysics Data System (ADS)

    Song, Ge; Tang, Xi; Zhu, Feng

    2018-05-01

    Traditional university maps, which take the campus as their principal subject, mainly provide spatial localization and navigation. They do not take full advantage of the map medium, such as multi-scale representation and thematic geographical information visualization, and their inherent propaganda function has not been fully developed. We therefore took East China Normal University (ECNU), located in Shanghai, as an example and integrated various kinds of information related to university propaganda needs (such as spatial patterns, history and culture, landscape ecology, disciplinary construction, cooperation, social services, and development plans). We adopted the frontier knowledge of `information design' as well as various information graphics and visualization solutions. As a result, we designed and compiled a prototype atlas, `ECNU Impression', that provides a series of views of ECNU and practices a new model of `narrative campus map'. This innovative propaganda product serves as a supplement to typical official presentations, offering authority, data maturity, scientific rigour, dimensional diversity, and temporal integrity. Such a university atlas can become a usable medium for shaping a university's overall image.

  16. Multistage audiovisual integration of speech: dissociating identification and detection.

    PubMed

    Eskelund, Kasper; Tuomainen, Jyrki; Andersen, Tobias S

    2011-02-01

    Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.

  17. Outcome of the First wwPDB Hybrid/Integrative Methods Task Force Workshop

    PubMed Central

    Sali, Andrej; Berman, Helen M.; Schwede, Torsten; Trewhella, Jill; Kleywegt, Gerard; Burley, Stephen K.; Markley, John; Nakamura, Haruki; Adams, Paul; Bonvin, Alexandre M.J.J.; Chiu, Wah; Dal Peraro, Matteo; Di Maio, Frank; Ferrin, Thomas E.; Grünewald, Kay; Gutmanas, Aleksandras; Henderson, Richard; Hummer, Gerhard; Iwasaki, Kenji; Johnson, Graham; Lawson, Catherine L.; Meiler, Jens; Marti-Renom, Marc A.; Montelione, Gaetano T.; Nilges, Michael; Nussinov, Ruth; Patwardhan, Ardan; Rappsilber, Juri; Read, Randy J.; Saibil, Helen; Schröder, Gunnar F.; Schwieters, Charles D.; Seidel, Claus A. M.; Svergun, Dmitri; Topf, Maya; Ulrich, Eldon L.; Velankar, Sameer; Westbrook, John D.

    2016-01-01

    Structures of biomolecular systems are increasingly computed by integrative modeling that relies on varied types of experimental data and theoretical information. We describe here the proceedings and conclusions from the first wwPDB Hybrid/Integrative Methods Task Force Workshop held at the European Bioinformatics Institute in Hinxton, UK, October 6 and 7, 2014. At the workshop, experts in various experimental fields of structural biology, experts in integrative modeling and visualization, and experts in data archiving addressed a series of questions central to the future of structural biology. How should integrative models be represented? How should the data and integrative models be validated? What data should be archived? How should the data and models be archived? What information should accompany the publication of integrative models? PMID:26095030

  18. The Temporal Pole Top-Down Modulates the Ventral Visual Stream During Social Cognition.

    PubMed

    Pehrs, Corinna; Zaki, Jamil; Schlochtermeier, Lorna H; Jacobs, Arthur M; Kuchinke, Lars; Koelsch, Stefan

    2017-01-01

    The temporal pole (TP) has been associated with diverse functions of social cognition and emotion processing. Although the underlying mechanism remains elusive, one possibility is that TP acts as domain-general hub integrating socioemotional information. To test this, 26 participants were presented with 60 empathy-evoking film clips during fMRI scanning. The film clips were preceded by a linguistic sad or neutral context and half of the clips were accompanied by sad music. In line with its hypothesized role, TP was involved in the processing of sad context and furthermore tracked participants' empathic concern. To examine the neuromodulatory impact of TP, we applied nonlinear dynamic causal modeling to a multisensory integration network from previous work consisting of superior temporal gyrus (STG), fusiform gyrus (FG), and amygdala, which was extended by an additional node in the TP. Bayesian model comparison revealed a gating of STG and TP on fusiform-amygdalar coupling and an increase of TP to FG connectivity during the integration of contextual information. Moreover, these backward projections were strengthened by emotional music. The findings indicate that during social cognition, TP integrates information from different modalities and top-down modulates lower-level perceptual areas in the ventral visual stream as a function of integration demands. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Perceived object stability depends on multisensory estimates of gravity.

    PubMed

    Barnett-Cowan, Michael; Fleming, Roland W; Singh, Manish; Bülthoff, Heinrich H

    2011-04-27

    How does the brain estimate object stability? Objects fall over when the gravity-projected centre-of-mass lies outside the point or area of support. To estimate an object's stability visually, the brain must integrate information across the shape and compare its orientation to gravity. When observers lie on their sides, gravity is perceived as tilted toward body orientation, consistent with a representation of gravity derived from multisensory information. We exploited this to test whether vestibular and kinesthetic information affect this visual task or whether the brain estimates object stability solely from visual information. In three body orientations, participants viewed images of objects close to a table edge. We measured the critical angle at which each object appeared equally likely to fall over or right itself. Perceived gravity was measured using the subjective visual vertical. The results show that the perceived critical angle was significantly biased in the same direction as the subjective visual vertical (i.e., towards the multisensory estimate of gravity). Our results rule out a general explanation that the brain depends solely on visual heuristics and assumptions about object stability. Instead, they suggest that multisensory estimates of gravity govern the perceived stability of objects, resulting in objects appearing more stable than they are when the head is tilted in the same direction in which they fall.
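
    As a worked instance of the centre-of-mass rule this study builds on (my notation, not the authors' formulation): a homogeneous box of width w and height h tipped about a bottom edge reaches its critical angle when the centre of mass passes over that edge, and a bias delta in perceived gravity shifts the perceived critical angle by roughly the same amount:

        \theta_{\mathrm{crit}} = \arctan\!\left(\frac{w}{2h}\right),
        \qquad
        \hat{\theta}_{\mathrm{crit}} \approx \theta_{\mathrm{crit}} + \delta .

    For example, a box twice as tall as it is wide (w/h = 1/2) has a critical angle of arctan(1/4), about 14 degrees, so even a few degrees of bias in perceived gravity is a substantial fraction of the stability margin.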

  1. Design of smart home sensor visualizations for older adults.

    PubMed

    Le, Thai; Reeder, Blaine; Chung, Jane; Thompson, Hilaire; Demiris, George

    2014-07-24

    Smart home sensor systems provide a valuable opportunity to continuously and unobtrusively monitor older adult wellness. However, the density of sensor data can be challenging to visualize, especially for an older adult consumer with distinct user needs. We describe the design of sensor visualizations informed by interviews with older adults. The goal of the visualizations is to present sensor activity data to an older adult consumer audience that supports both longitudinal detection of trends and on-demand display of activity details for any chosen day. The design process is grounded through participatory design with older adult interviews during a six-month pilot sensor study. Through a secondary analysis of interviews, we identified the visualization needs of older adults. We incorporated these needs with cognitive perceptual visualization guidelines and the emotional design principles of Norman to develop sensor visualizations. We present a design of sensor visualization that integrates both temporal and spatial components of information. The visualization supports longitudinal detection of trends while allowing the viewer to view activity within a specific date. CONCLUSIONS: Appropriately designed visualizations for older adults not only provide insight into health and wellness, but also are a valuable resource to promote engagement within care.

  2. Acquired Codes of Meaning in Data Visualization and Infographics: Beyond Perceptual Primitives.

    PubMed

    Byrne, Lydia; Angus, Daniel; Wiles, Janet

    2016-01-01

    While information visualization frameworks and heuristics have traditionally been reluctant to include acquired codes of meaning, designers are making use of them in a wide variety of ways. Acquired codes leverage a user's experience to understand the meaning of a visualization. They range from figurative visualizations which rely on the reader's recognition of shapes, to conventional arrangements of graphic elements which represent particular subjects. In this study, we used content analysis to codify acquired meaning in visualization. We applied the content analysis to a set of infographics and data visualizations which are exemplars of innovative and effective design. 88% of the infographics and 71% of data visualizations in the sample contain at least one use of figurative visualization. Conventions on the arrangement of graphics are also widespread in the sample. In particular, a comparison of representations of time and other quantitative data showed that conventions can be specific to a subject. These results suggest that there is a need for information visualization research to expand its scope beyond perceptual channels, to include social and culturally constructed meaning. Our paper demonstrates a viable method for identifying figurative techniques and graphic conventions and integrating them into heuristics for visualization design.

  3. Optimal visuotactile integration for velocity discrimination of self-hand movements

    PubMed Central

    Chancel, M.; Blanchard, C.; Guerraz, M.; Montagnini, A.

    2016-01-01

    Illusory hand movements can be elicited by a textured disk or a visual pattern rotating under one's hand, while proprioceptive inputs convey immobility information (Blanchard C, Roll R, Roll JP, Kavounoudias A. PLoS One 8: e62475, 2013). Here, we investigated whether visuotactile integration can optimize velocity discrimination of illusory hand movements in line with Bayesian predictions. We induced illusory movements in 15 volunteers by visual and/or tactile stimulation delivered at six angular velocities. Participants had to compare hand illusion velocities with a 5°/s hand reference movement in an alternative forced choice paradigm. Results showed that the discrimination threshold decreased in the visuotactile condition compared with unimodal (visual or tactile) conditions, reflecting better bimodal discrimination. The perceptual strength (gain) of the illusions also increased: the stimulation required to give rise to a 5°/s illusory movement was slower in the visuotactile condition compared with each of the two unimodal conditions. The maximum likelihood estimation model satisfactorily predicted the improved discrimination threshold but not the increase in gain. When we added a zero-centered prior, reflecting immobility information, the Bayesian model did actually predict the gain increase but systematically overestimated it. Interestingly, the predicted gains better fit the visuotactile performances when a proprioceptive noise was generated by covibrating antagonist wrist muscles. These findings show that kinesthetic information of visual and tactile origins is optimally integrated to improve velocity discrimination of self-hand movements. However, a Bayesian model alone could not fully describe the illusory phenomenon pointing to the crucial importance of the omnipresent muscle proprioceptive cues with respect to other sensory cues for kinesthesia. PMID:27385802
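
    The maximum likelihood estimation model referred to here is the standard cue-combination rule; in the usual notation (mine, not the authors'), the bimodal velocity estimate and its variance are

        \hat{v}_{VT} = w_V\,\hat{v}_V + w_T\,\hat{v}_T,
        \qquad
        w_V = \frac{\sigma_T^2}{\sigma_V^2 + \sigma_T^2},
        \quad
        w_T = \frac{\sigma_V^2}{\sigma_V^2 + \sigma_T^2},
        \qquad
        \sigma_{VT}^2 = \frac{\sigma_V^2\,\sigma_T^2}{\sigma_V^2 + \sigma_T^2}
        \le \min(\sigma_V^2, \sigma_T^2),

    which is why the bimodal discrimination threshold is predicted to fall below either unimodal threshold; the zero-centered prior mentioned in the abstract adds a further immobility-favouring term in the same weighted-average form.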

  4. Integrating Spherical Panoramas and Maps for Visualization of Cultural Heritage Objects Using Virtual Reality Technology

    PubMed Central

    Koeva, Mila; Luleva, Mila; Maldjanski, Plamen

    2017-01-01

    Development and virtual representation of 3D models of Cultural Heritage (CH) objects has triggered great interest over the past decade. The main reason for this is the rapid development in the fields of photogrammetry and remote sensing, laser scanning, and computer vision. The advantages of using 3D models for restoration, preservation, and documentation of valuable historical and architectural objects have been numerously demonstrated by scientists in the field. Moreover, 3D model visualization in virtual reality has been recognized as an efficient, fast, and easy way of representing a variety of objects worldwide for present-day users, who have stringent requirements and high expectations. However, the main focus of recent research is the visual, geometric, and textural characteristics of a single concrete object, while integration of large numbers of models with additional information—such as historical overview, detailed description, and location—are missing. Such integrated information can be beneficial, not only for tourism but also for accurate documentation. For that reason, we demonstrate in this paper an integration of high-resolution spherical panoramas, a variety of maps, GNSS, sound, video, and text information for representation of numerous cultural heritage objects. These are then displayed in a web-based portal with an intuitive interface. The users have the opportunity to choose freely from the provided information, and decide for themselves what is interesting to visit. Based on the created web application, we provide suggestions and guidelines for similar studies. We selected objects, which are located in Bulgaria—a country with thousands of years of history and cultural heritage dating back to ancient civilizations. The methods used in this research are applicable for any type of spherical or cylindrical images and can be easily followed and applied in various domains. After a visual and metric assessment of the panoramas and the evaluation of the web-portal, we conclude that this novel approach is a very effective, fast, informative, and accurate way to present, disseminate, and document cultural heritage objects. PMID:28398230

  5. Figure-ground activity in V1 and guidance of saccadic eye movements.

    PubMed

    Supèr, Hans

    2006-01-01

    Every day we shift our gaze about 150,000 times, mostly without noticing it. The directions of these gaze shifts are not random but are directed by sensory information and internal factors. After each movement the eyes hold still for a brief moment so that visual information at the center of our gaze can be processed in detail. This means that visual information at the saccade target location is sufficient to accurately guide the gaze shift, yet is not sufficiently processed to be fully perceived. In this paper I discuss the possible role of activity in the primary visual cortex (V1), in particular figure-ground activity, in oculomotor behavior. Figure-ground activity occurs during the late response period of V1 neurons and correlates with perception. The strength of figure-ground responses predicts the direction and moment of saccadic eye movements. The superior colliculus, a gaze control center that integrates visual and motor signals, receives direct anatomical connections from V1. These projections may convey the perceptual information that is required for appropriate gaze shifts. In conclusion, figure-ground activity in V1 may act as an intermediate component linking visual and motor signals.

  6. Octopus vulgaris uses visual information to determine the location of its arm.

    PubMed

    Gutnick, Tamar; Byrne, Ruth A; Hochner, Binyamin; Kuba, Michael

    2011-03-22

    Octopuses are intelligent, soft-bodied animals with keen senses that perform reliably in a variety of visual and tactile learning tasks. However, researchers have found them disappointing in that they consistently fail in operant tasks that require them to combine central nervous system reward information with visual and peripheral knowledge of the location of their arms. Wells claimed that in order to filter and integrate an abundance of multisensory inputs that might inform the animal of the position of a single arm, octopuses would need an exceptional computing mechanism, and "There is no evidence that such a system exists in Octopus, or in any other soft bodied animal." Recent electrophysiological experiments, which found no clear somatotopic organization in the higher motor centers, support this claim. We developed a three-choice maze that required an octopus to use a single arm to reach a visually marked goal compartment. Using this operant task, we show for the first time that Octopus vulgaris is capable of guiding a single arm in a complex movement to a location. Thus, we claim that octopuses can combine peripheral arm location information with visual input to control goal-directed complex movements. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Bridging views in cinema: a review of the art and science of view integration.

    PubMed

    Levin, Daniel T; Baker, Lewis J

    2017-09-01

    Recently, there has been a surge of interest in the relationship between film and cognitive science. This is reflected in a new science of cinema that can help us both to understand this art form, and to produce new insights about cognition and perception. In this review, we begin by describing how the initial development of cinema involved close observation of audience response. This allowed filmmakers to develop an informal theory of visual cognition that helped them to isolate and creatively recombine fundamental elements of visual experience. We review research exploring naturalistic forms of visual perception and cognition that have opened the door to a productive convergence between the dynamic visual art of cinema and science of visual cognition that can enrich both. In particular, we discuss how parallel understandings of view integration in cinema and in cognitive science have been converging to support a new understanding of meaningful visual experience. WIREs Cogn Sci 2017, 8:e1436. doi: 10.1002/wcs.1436 For further resources related to this article, please visit the WIREs website. © 2017 Wiley Periodicals, Inc.

  8. Incidence of Group Awareness Information on Students' Collaborative Learning Processes

    ERIC Educational Resources Information Center

    Pifarré, M.; Cobos, R.; Argelagós, E.

    2014-01-01

    This paper studies how the integration of group awareness tools in the knowledge management system called KnowCat (Knowledge Catalyser), which promotes collaborative knowledge construction, may both foster the students' perception about the meaningfulness of visualization of group awareness information and promote better collaborative…

  9. A Visual Aid to Decision-Making for People with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Bailey, Rebecca; Willner, Paul; Dymond, Simon

    2011-01-01

    Previous studies have shown that people with mild intellectual disabilities have difficulty in "weighing-up" information, defined as integrating information from two different sources for the purpose of reaching a decision. This was demonstrated in two very different procedures, temporal discounting and a scenario-based financial…

  10. Central Processing Dysfunctions in Children: A Review of Research.

    ERIC Educational Resources Information Center

    Chalfant, James C.; Scheffelin, Margaret A.

    Research on central processing dysfunctions in children is reviewed in three major areas. The first, dysfunctions in the analysis of sensory information, includes auditory, visual, and haptic processing. The second, dysfunction in the synthesis of sensory information, covers multiple stimulus integration and short-term memory. The third area of…

  11. Spatial and Visual Reasoning: Do These Abilities Improve in First-Year Veterinary Medical Students Exposed to an Integrated Curriculum?

    PubMed

    Gutierrez, J Claudio; Chigerwe, Munashe; Ilkiw, Jan E; Youngblood, Patricia; Holladay, Steven D; Srivastava, Sakti

    Spatial visualization ability refers to the human cognitive ability to form, retrieve, and manipulate mental models of spatial nature. Visual reasoning ability has been linked to spatial ability. There is currently limited information about how entry-level spatial and visual reasoning abilities may predict veterinary anatomy performance or may be enhanced with progression through the veterinary anatomy content in an integrated curriculum. The present study made use of two tests that measure spatial ability and one test that measures visual reasoning ability in veterinary students: Guay's Visualization of Views Test, adapted version (GVVT), the Mental Rotations Test (MRT), and Raven's Advanced Progressive Matrices Test, short form (RavenT). The tests were given to the entering class of veterinary students during their orientation week and at week 32 in the veterinary medical curriculum. Mean score on the MRT significantly increased from 15.2 to 20.1, and on the RavenT significantly increased from 7.5 to 8.8. When females only were evaluated, results were similar to the total class outcome; however, all three tests showed significant increases in mean scores. A positive correlation between the pre- and post-test scores was found for all three tests. The present results should be considered preliminary at best for associating anatomic learning in an integrated curriculum with spatial and visual reasoning abilities. Other components of the curriculum, for instance histology or physiology, could also influence the improved spatial visualization and visual reasoning test scores at week 32.

  12. Visual control of robots using range images.

    PubMed

    Pomares, Jorge; Gil, Pablo; Torres, Fernando

    2010-01-01

    In recent years, 3D vision systems based on the time-of-flight (ToF) principle have gained importance as a way to obtain 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the adequate integration time for the range camera in order to precisely determine the depth information.
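
    A standard way range information enters such guidance is through the depth terms of the interaction matrix in image-based visual servo control. The sketch below illustrates that textbook scheme, not the authors' adaptive method; the feature positions, goals, gain, and ToF depths are hypothetical values.

        # Minimal image-based visual servoing sketch in which feature depths Z
        # come directly from a ToF range camera. Illustrative only.
        import numpy as np

        def interaction_matrix(x, y, Z):
            """2x6 interaction (image Jacobian) matrix for a normalized image point."""
            return np.array([
                [-1.0/Z, 0.0,    x/Z, x*y,        -(1.0 + x*x),  y],
                [0.0,   -1.0/Z,  y/Z, 1.0 + y*y,  -x*y,         -x],
            ])

        def servo_velocity(features, goals, depths, lam=0.5):
            """Camera velocity screw v = -lam * pinv(L) @ e for stacked points."""
            L = np.vstack([interaction_matrix(x, y, Z)
                           for (x, y), Z in zip(features, depths)])
            e = (np.asarray(features) - np.asarray(goals)).ravel()
            return -lam * np.linalg.pinv(L) @ e

        # Three tracked points, their desired positions, ToF-measured depths (m):
        feats = [(0.10, 0.05), (-0.08, 0.12), (0.02, -0.10)]
        goals = [(0.00, 0.00), (-0.10, 0.10), (0.00, -0.08)]
        Z     = [1.2, 1.1, 1.3]
        print(servo_velocity(feats, goals, Z))  # 6-vector: (vx, vy, vz, wx, wy, wz)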

  13. Training haptic stiffness discrimination: time course of learning with or without visual information and knowledge of results.

    PubMed

    Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria

    2013-08-01

    In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks, the visual field is typically occluded, forcing stiffness perception to depend exclusively on haptic information. No studies to date have addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy improved slightly, but decision time performance deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinctive phases: a single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgical procedures, when the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.
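
    For readers unfamiliar with the paradigm, discrimination performance in such a two-alternative forced-choice task is commonly summarized by fitting a psychometric function; the threshold is the slope parameter of the fit. The sketch below shows a generic cumulative-Gaussian fit on synthetic data, not the study's analysis or data.

        # Generic 2AFC threshold estimation: fit a cumulative Gaussian to the
        # proportion of "comparison judged stiffer" responses. Synthetic data.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        def psychometric(x, mu, sigma):
            return norm.cdf(x, loc=mu, scale=sigma)

        stiffness = np.array([0.6, 0.8, 0.9, 1.0, 1.1, 1.2, 1.4])  # rel. to reference
        p_stiffer = np.array([0.05, 0.18, 0.35, 0.52, 0.70, 0.86, 0.97])

        (mu, sigma), _ = curve_fit(psychometric, stiffness, p_stiffer, p0=[1.0, 0.2])
        print(f"PSE = {mu:.3f}, discrimination threshold (sigma) = {sigma:.3f}")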

  14. The role of emotion in dynamic audiovisual integration of faces and voices.

    PubMed

    Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich

    2015-05-01

    We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  15. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  17. Visualizing Mobility of Public Transportation System.

    PubMed

    Zeng, Wei; Fu, Chi-Wing; Arisona, Stefan Müller; Erath, Alexander; Qu, Huamin

    2014-12-01

    Public transportation systems (PTSs) play an important role in modern cities, providing shared/massive transportation services that are essential for the general public. However, due to their increasing complexity, designing effective methods to visualize and explore PTS is highly challenging. Most existing techniques employ network visualization methods and focus on showing the network topology across stops while ignoring various mobility-related factors such as riding time, transfer time, waiting time, and round-the-clock patterns. This work aims to visualize and explore passenger mobility in a PTS with a family of analytical tasks based on inputs from transportation researchers. After exploring different design alternatives, we come up with an integrated solution with three visualization modules: isochrone map view for geographical information, isotime flow map view for effective temporal information comparison and manipulation, and OD-pair journey view for detailed visual analysis of mobility factors along routes between specific origin-destination pairs. The isotime flow map linearizes a flow map into a parallel isoline representation, maximizing the visualization of mobility information along the horizontal time axis while presenting clear and smooth pathways from origin to destinations. Moreover, we devise several interactive visual query methods for users to easily explore the dynamics of PTS mobility over space and time. Lastly, we construct a PTS mobility model from millions of real passenger trajectories and evaluate our visualization techniques through assorted case studies with transportation researchers.
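
    To make the isochrone idea concrete: an isochrone map partitions stops by shortest travel time from an origin, which on a transit graph reduces to a single-source shortest-path computation. A toy sketch follows, with an invented network and travel times that are not the paper's model or data.

        # Toy isochrone derivation: Dijkstra travel times from an origin,
        # then stops binned by time budget. Network is invented.
        import networkx as nx

        G = nx.DiGraph()
        edges = [('A', 'B', 4), ('B', 'C', 6), ('A', 'D', 9),
                 ('D', 'C', 3), ('C', 'E', 5), ('B', 'E', 14)]
        for u, v, minutes in edges:
            G.add_edge(u, v, time=minutes)

        times = nx.single_source_dijkstra_path_length(G, 'A', weight='time')

        for budget in (5, 10, 15):   # isochrone bands in minutes
            band = sorted(stop for stop, t in times.items() if t <= budget)
            print(f"<= {budget} min: {band}")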

  18. Filling-in visual motion with sounds.

    PubMed

    Väljamäe, A; Soto-Faraco, S

    2008-10-01

    Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.

  19. A novel computational model to probe visual search deficits during motor performance

    PubMed Central

    Singh, Tarkeshwar; Fridriksson, Julius; Perry, Christopher M.; Tryon, Sarah C.; Ross, Angela; Fritz, Stacy

    2016-01-01

    Successful execution of many motor skills relies on well-organized visual search (voluntary eye movements that actively scan the environment for task-relevant information). Although impairments of visual search that result from brain injuries are linked to diminished motor performance, the neural processes that guide visual search within this context remain largely unknown. The first objective of this study was to examine how visual search in healthy adults and stroke survivors is used to guide hand movements during the Trail Making Test (TMT), a neuropsychological task that is a strong predictor of visuomotor and cognitive deficits. Our second objective was to develop a novel computational model to investigate combinatorial interactions between three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing). We predicted that stroke survivors would exhibit deficits in integrating the three underlying processes, resulting in deteriorated overall task performance. We found that normal TMT performance is associated with patterns of visual search that primarily rely on spatial planning and/or working memory (but not peripheral visual processing). Our computational model suggested that abnormal TMT performance following stroke is associated with impairments of visual search that are characterized by deficits integrating spatial planning and working memory. This innovative methodology provides a novel framework for studying how the neural processes underlying visual search interact combinatorially to guide motor performance. NEW & NOTEWORTHY Visual search has traditionally been studied in cognitive and perceptual paradigms, but little is known about how it contributes to visuomotor performance. We have developed a novel computational model to examine how three underlying processes of visual search (spatial planning, working memory, and peripheral visual processing) contribute to visual search during a visuomotor task. We show that deficits integrating spatial planning and working memory underlie abnormal performance in stroke survivors with frontoparietal damage. PMID:27733596

  20. Harnessing the web information ecosystem with wiki-based visualization dashboards.

    PubMed

    McKeon, Matt

    2009-01-01

    We describe the design and deployment of Dashiki, a public website where users may collaboratively build visualization dashboards through a combination of a wiki-like syntax and interactive editors. Our goals are to extend existing research on social data analysis into presentation and organization of data from multiple sources, explore new metaphors for these activities, and participate more fully in the web's information ecology by providing tighter integration with real-time data. To support these goals, our design includes novel and low-barrier mechanisms for editing and layout of dashboard pages and visualizations, connection to data sources, and coordinating interaction between visualizations. In addition to describing these technologies, we provide a preliminary report on the public launch of a prototype based on this design, including a description of the activities of our users derived from observation and interviews.

  1. Design and application of BIM based digital sand table for construction management

    NASA Astrophysics Data System (ADS)

    Fuquan, JI; Jianqiang, LI; Weijia, LIU

    2018-05-01

    This paper explores the design and application of a BIM-based digital sand table for construction management. Given the demands and features of construction management planning for bridge and tunnel engineering, the key functional features of a digital sand table should include three-dimensional GIS, model navigation, virtual simulation, information layers, and data exchange. These features involve BIM 3D visualization and 4D virtual simulation technology, breakdown structures for BIM models and project data, multi-dimensional information layers, and multi-source data acquisition and interaction. Overall, the digital sand table is a visual and virtual integrated terminal for engineering information under a unified data standard system. Its applications include visualization of construction schemes, virtual construction scheduling, and construction monitoring. Finally, the applicability of several common software packages to the digital sand table is analyzed.

  2. Cross-frequency synchronization connects networks of fast and slow oscillations during visual working memory maintenance

    PubMed Central

    Siebenhühner, Felix; Wang, Sheng H; Palva, J Matias; Palva, Satu

    2016-01-01

    Neuronal activity in sensory and fronto-parietal (FP) areas underlies the representation and attentional control, respectively, of sensory information maintained in visual working memory (VWM). Within these regions, beta/gamma phase-synchronization supports the integration of sensory functions, while synchronization in theta/alpha bands supports the regulation of attentional functions. A key challenge is to understand which mechanisms integrate neuronal processing across these distinct frequencies and thereby the sensory and attentional functions. We investigated whether such integration could be achieved by cross-frequency phase synchrony (CFS). Using concurrent magneto- and electroencephalography, we found that CFS was load-dependently enhanced between theta and alpha–gamma and between alpha and beta-gamma oscillations during VWM maintenance among visual, FP, and dorsal attention (DA) systems. CFS also connected the hubs of within-frequency-synchronized networks and its strength predicted individual VWM capacity. We propose that CFS integrates processing among synchronized neuronal networks from theta to gamma frequencies to link sensory and attentional functions. DOI: http://dx.doi.org/10.7554/eLife.13451.001 PMID:27669146
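
    Cross-frequency phase synchrony between a slow and a fast oscillation is typically quantified with an n:m phase-locking index. The sketch below shows that general measure on synthetic signals; it illustrates the concept only and is not the authors' MEG/EEG pipeline.

        # n:m cross-frequency phase synchrony: band-pass two signals, extract
        # instantaneous phases via the Hilbert transform, and measure the
        # consistency of the n*phi_slow - m*phi_fast phase difference.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def bandpass(x, lo, hi, fs, order=4):
            b, a = butter(order, [lo/(fs/2), hi/(fs/2)], btype='band')
            return filtfilt(b, a, x)

        def cfs_plv(x, y, band_x, band_y, fs, n, m):
            """n:m phase-locking value between two frequency bands."""
            phi_x = np.angle(hilbert(bandpass(x, *band_x, fs)))
            phi_y = np.angle(hilbert(bandpass(y, *band_y, fs)))
            return np.abs(np.mean(np.exp(1j * (n*phi_x - m*phi_y))))

        fs = 500
        t = np.arange(0, 10, 1/fs)
        theta = np.sin(2*np.pi*6*t)                                  # 6 Hz "theta"
        gamma = np.sin(2*np.pi*30*t + 0.3) + 0.5*np.random.randn(t.size)
        print(cfs_plv(theta, gamma, (4, 8), (25, 35), fs, n=5, m=1))  # near 1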

  3. Neural dynamics for landmark orientation and angular path integration

    PubMed Central

    Seelig, Johannes D.; Jayaraman, Vivek

    2015-01-01

    Many animals navigate using a combination of visual landmarks and path integration. In mammalian brains, head direction cells integrate these two streams of information by representing an animal's heading relative to landmarks, yet maintaining their directional tuning in darkness based on self-motion cues. Here we use two-photon calcium imaging in head-fixed flies walking on a ball in a virtual reality arena to demonstrate that landmark-based orientation and angular path integration are combined in the population responses of neurons whose dendrites tile the ellipsoid body — a toroidal structure in the center of the fly brain. The population encodes the fly's azimuth relative to its environment, tracking visual landmarks when available and relying on self-motion cues in darkness. When both visual and self-motion cues are absent, a representation of the animal's orientation is maintained in this network through persistent activity — a potential substrate for short-term memory. Several features of the population dynamics of these neurons and their circular anatomical arrangement are suggestive of ring attractors — network structures proposed to support the function of navigational brain circuits. PMID:25971509
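
    The ring-attractor idea can be illustrated with a toy threshold-linear rate model: cosine-tuned excitation plus uniform inhibition lets an activity bump form where a transient cue points and persist after the cue disappears. The parameters below are hand-tuned for illustration and do not model the fly circuit.

        # Toy ring attractor: a bump of activity forms at a transiently cued
        # direction and persists after the cue is removed.
        import numpy as np

        N, J0, J1, h = 64, -2.0, 3.0, 0.5       # illustrative parameters
        theta = np.linspace(0, 2*np.pi, N, endpoint=False)
        W = (J0 + J1*np.cos(theta[:, None] - theta[None, :])) / N

        r = np.zeros(N)
        dt, tau = 0.005, 0.05
        for step in range(4000):
            # Transient "landmark" cue pointing at pi/3, then removed.
            cue = 0.5*np.cos(theta - np.pi/3) if step < 400 else 0.0
            drive = np.maximum(h + W @ r + cue, 0.0)   # threshold-linear units
            r += dt/tau * (-r + drive)

        bump = np.angle(np.sum(r*np.exp(1j*theta)))    # population-vector heading
        print(f"bump persists at {bump:.2f} rad (cue was at {np.pi/3:.2f} rad)")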

  4. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  5. Analysis, Mining and Visualization Service at NCSA

    NASA Astrophysics Data System (ADS)

    Wilhelmson, R.; Cox, D.; Welge, M.

    2004-12-01

    NCSA's goal is to create a balanced system that fully supports high-end computing as well as: 1) high-end data management and analysis; 2) visualization of massive, highly complex data collections; 3) large databases; 4) geographically distributed Grid computing; and 5) collaboratories, all based on a secure computational environment and driven by workflow-based services. To this end NCSA has defined a new technology path that includes the integration and provision of cyberservices in support of data analysis, mining, and visualization. NCSA has begun to develop and apply a data mining system, NCSA Data-to-Knowledge (D2K), in conjunction with both the application and research communities. NCSA D2K will enable the formation of model-based application workflows and visual programming interfaces for rapid data analysis. The Java-based D2K framework, which integrates analytical data mining methods with data management, data transformation, and information visualization tools, will be configurable from the cyberservices (web and grid services, tools, etc.) viewpoint to solve a wide range of important data mining problems. This effort will use modules, such as new classification methods for the detection of high-risk geoscience events, and existing D2K data management, machine learning, and information visualization modules. A D2K cyberservices interface will be developed to seamlessly connect client applications with remote back-end D2K servers, providing computational resources for data mining and integration with local or remote data stores. This work is being coordinated with SDSC's data and services efforts. The new NCSA Visualization embedded workflow environment (NVIEW) will be integrated with D2K functionality to tightly couple informatics and scientific visualization with the data analysis and management services. Visualization services will access and filter disparate data sources, simplifying tasks such as fusing related data from distinct sources into a coherent visual representation. This approach enables collaboration among geographically dispersed researchers via portals and front-end clients, and the coupling with data management services enables recording associations among datasets and building annotation systems into visualization tools and portals, giving scientists a persistent, shareable, virtual lab notebook. To facilitate provision of these cyberservices to the national community, NCSA will be providing a computational environment for large-scale data assimilation, analysis, mining, and visualization. This will be initially implemented on the new 512-processor shared-memory SGI systems recently purchased by NCSA. In addition to standard batch capabilities, NCSA will provide on-demand capabilities for those projects requiring rapid response (e.g., development of severe weather, earthquake events) for decision makers. It will also be used for non-sequential interactive analysis of data sets where it is important to have access to large data volumes over space and time.

  6. A scalable architecture for extracting, aligning, linking, and visualizing multi-Int data

    NASA Astrophysics Data System (ADS)

    Knoblock, Craig A.; Szekely, Pedro

    2015-05-01

    An analyst today has a tremendous amount of data available, but each of the various data sources typically exists in their own silos, so an analyst has limited ability to see an integrated view of the data and has little or no access to contextual information that could help in understanding the data. We have developed the Domain-Insight Graph (DIG) system, an innovative architecture for extracting, aligning, linking, and visualizing massive amounts of domain-specific content from unstructured sources. Under the DARPA Memex program we have already successfully applied this architecture to multiple application domains, including the enormous international problem of human trafficking, where we extracted, aligned and linked data from 50 million online Web pages. DIG builds on our Karma data integration toolkit, which makes it easy to rapidly integrate structured data from a variety of sources, including databases, spreadsheets, XML, JSON, and Web services. The ability to integrate Web services allows Karma to pull in live data from the various social media sites, such as Twitter, Instagram, and OpenStreetMaps. DIG then indexes the integrated data and provides an easy to use interface for query, visualization, and analysis.

  7. The Processing of Biologically Plausible and Implausible forms in American Sign Language: Evidence for Perceptual Tuning.

    PubMed

    Almeida, Diogo; Poeppel, David; Corina, David

    The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.

  8. Dissociation and Convergence of the Dorsal and Ventral Visual Streams in the Human Prefrontal Cortex

    PubMed Central

    Takahashi, Emi; Ohki, Kenichi; Kim, Dae-Shik

    2012-01-01

    Visual information is largely processed through two pathways in the primate brain: an object pathway from the primary visual cortex to the temporal cortex (ventral stream) and a spatial pathway to the parietal cortex (dorsal stream). Whether and to what extent dissociation exists in the human prefrontal cortex (PFC) has long been debated. We examined anatomical connections from functionally defined areas in the temporal and parietal cortices to the PFC, using noninvasive functional and diffusion-weighted magnetic resonance imaging. The right inferior frontal gyrus (IFG) received converging input from both streams, while the right superior frontal gyrus received input only from the dorsal stream. Interstream functional connectivity to the IFG was dynamically recruited only when both object and spatial information were processed. These results suggest that the human PFC receives dissociated and converging visual pathways, and that the right IFG region serves as an integrator of the two types of information. PMID:23063444

  9. Can you hear me yet? An intracranial investigation of speech and non-speech audiovisual interactions in human cortex.

    PubMed

    Rhone, Ariane E; Nourski, Kirill V; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob

    In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion). Event-related band power (ERBP) in the high gamma band was content-specific prior to acoustic onset on STG and PreC, and ERBP in the beta band differed in all three areas. Following sound onset, we found no evidence for content-specificity in HG, evidence for visual specificity in PreC, and specificity for both modalities in STG. These results support models of audio-visual processing in which sensory information is integrated in non-primary cortical areas.

  10. "Look who's talking!" Gaze Patterns for Implicit and Explicit Audio-Visual Speech Synchrony Detection in Children With High-Functioning Autism.

    PubMed

    Grossman, Ruth B; Steinhart, Erin; Mitchell, Teresa; McIlvane, William

    2015-06-01

    Conversation requires integration of information from faces and voices to fully understand the speaker's message. To detect auditory-visual asynchrony of speech, listeners must integrate visual movements of the face, particularly the mouth, with auditory speech information. Individuals with autism spectrum disorder may be less successful at such multisensory integration, despite their demonstrated preference for looking at the mouth region of a speaker. We showed participants (individuals with and without high-functioning autism (HFA) aged 8-19) a split-screen video of two identical individuals speaking side by side. Only one of the speakers was in synchrony with the corresponding audio track and synchrony switched between the two speakers every few seconds. Participants were asked to watch the video without further instructions (implicit condition) or to specifically watch the in-synch speaker (explicit condition). We recorded which part of the screen and face their eyes targeted. Both groups looked at the in-synch video significantly more with explicit instructions. However, participants with HFA looked at the in-synch video less than typically developing (TD) peers and did not increase their gaze time as much as TD participants in the explicit task. Importantly, the HFA group looked significantly less at the mouth than their TD peers, and significantly more at non-face regions of the image. There were no between-group differences for eye-directed gaze. Overall, individuals with HFA spend less time looking at the crucially important mouth region of the face during auditory-visual speech integration, which is maladaptive gaze behavior for this type of task. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  11. How can knowledge discovery methods uncover spatio-temporal patterns in environmental data?

    NASA Astrophysics Data System (ADS)

    Wachowicz, Monica

    2000-04-01

    This paper proposes the integration of KDD, GVis and STDB as a long-term strategy, which will allow users to apply knowledge discovery methods for uncovering spatio-temporal patterns in environmental data. The main goal is to combine innovative techniques and associated tools for exploring very large environmental data sets in order to arrive at valid, novel, potentially useful, and ultimately understandable spatio-temporal patterns. The GeoInsight approach is described using the principles and key developments in the research domains of KDD, GVis, and STDB. The GeoInsight approach aims at the integration of these research domains in order to provide tools for performing information retrieval, exploration, analysis, and visualization. The result is a knowledge-based design, which involves visual thinking (perceptual-cognitive process) and automated information processing (computer-analytical process).

  12. Integrated approach to multimodal media content analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Tong; Kuo, C.-C. Jay

    1999-12-01

    In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed caption. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.
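
    As a rough illustration of the shot-segmentation and keyframe step described above, here is a minimal sketch: shot boundaries are detected by thresholding color-histogram differences between consecutive frames, and the middle frame of each shot serves as its keyframe. The histogram representation and threshold are illustrative assumptions, not parameters from the paper.

        import numpy as np

        def shot_boundaries(frames, n_bins=16, threshold=0.4):
            """Detect shot starts by thresholding histogram differences.

            frames: iterable of HxWx3 uint8 arrays (decoded video frames).
            """
            prev_hist, boundaries = None, [0]
            for i, frame in enumerate(frames):
                hist, _ = np.histogram(frame, bins=n_bins, range=(0, 255))
                hist = hist / hist.sum()  # normalize to a distribution
                # L1 distance between consecutive frame histograms
                if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
                    boundaries.append(i)
                prev_hist = hist
            return boundaries

        def keyframes(boundaries, n_frames):
            """Pick the middle frame of each detected shot as its keyframe."""
            edges = boundaries + [n_frames]
            return [(a + b) // 2 for a, b in zip(edges[:-1], edges[1:])]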

  13. Making a Literacy Plan: Developing an Integrated Curriculum That Meets Your School's Needs

    ERIC Educational Resources Information Center

    Schutte, Annie

    2016-01-01

    Literacy does not happen in a single lesson or course. There are no shortcuts to gaining mastery over a skill set, whether it is reading literacy, information literacy and research skills, online literacy and digital citizenship, or visual literacy. School librarians dream about a perfect integrated curriculum: there is ample time for…

  14. Proprioceptive feedback determines visuomotor gain in Drosophila

    PubMed Central

    Bartussek, Jan; Lehmann, Fritz-Olaf

    2016-01-01

    Multisensory integration is a prerequisite for effective locomotor control in most animals. In particular, the impressive aerial performance of insects relies on rapid and precise integration of multiple sensory modalities that provide feedback on different time scales. In flies, continuous visual signalling from the compound eyes is fused with phasic proprioceptive feedback to ensure precise neural activation of wing steering muscles (WSM) within narrow temporal phase bands of the stroke cycle. This phase-locked activation relies on mechanoreceptors distributed over wings and gyroscopic halteres. Here we investigate the visual steering performance of tethered flying fruit flies with reduced haltere and wing feedback signalling. Using a flight simulator, we evaluated visual object fixation behaviour, optomotor altitude control and saccadic escape reflexes. The behavioural assays show an antagonistic effect of wing and haltere signalling on visuomotor gain during flight. Compared with controls, suppression of haltere feedback attenuates while suppression of wing feedback enhances the animal’s wing steering range. Our results suggest that the generation of motor commands owing to visual perception is dynamically controlled by proprioception. We outline a potential physiological mechanism based on the biomechanical properties of WSM and sensory integration processes at the level of motoneurons. Collectively, the findings contribute to our general understanding of how moving animals integrate sensory information with dynamically changing temporal structure. PMID:26909184

  15. Visual-Vestibular Conflict Detection Depends on Fixation.

    PubMed

    Garzorz, Isabelle T; MacNeilage, Paul R

    2017-09-25

    Visual and vestibular signals are the primary sources of sensory information for self-motion. Conflict among these signals can be seriously debilitating, resulting in vertigo [1], inappropriate postural responses [2], and motion, simulator, or cyber sickness [3-8]. Despite this significance, the mechanisms mediating conflict detection are poorly understood. Here we model conflict detection simply as crossmodal discrimination with benchmark performance limited by variabilities of the signals being compared. In a series of psychophysical experiments conducted in a virtual reality motion simulator, we measure these variabilities and assess conflict detection relative to this benchmark. We also examine the impact of eye movements on visual-vestibular conflict detection. In one condition, observers fixate a point that is stationary in the simulated visual environment by rotating the eyes opposite head rotation, thereby nulling retinal image motion. In another condition, eye movement is artificially minimized via fixation of a head-fixed fixation point, thereby maximizing retinal image motion. Visual-vestibular integration performance is also measured, similar to previous studies [9-12]. We observe that there is a tradeoff between integration and conflict detection that is mediated by eye movements. Minimizing eye movements by fixating a head-fixed target leads to optimal integration but highly impaired conflict detection. Minimizing retinal motion by fixating a scene-fixed target improves conflict detection at the cost of impaired integration performance. The common tendency to fixate scene-fixed targets during self-motion [13] may indicate that conflict detection is typically a higher priority than the increase in precision of self-motion estimation that is obtained through integration. Copyright © 2017 Elsevier Ltd. All rights reserved.
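
    The benchmark logic described above can be made concrete with the standard Gaussian cue-combination formulas: optimal integration yields an estimate whose variance is below either unimodal variance, while detecting a visual-vestibular conflict amounts to discriminating the cue difference, whose variability is the sum of the unimodal variances. A minimal sketch, with illustrative standard deviations in place of the psychophysically measured ones:

        import numpy as np

        # Illustrative unimodal standard deviations (the paper measures
        # these psychophysically rather than assuming values).
        sigma_vis, sigma_vest = 2.0, 3.0

        # Optimal (precision-weighted) integration: fused variance is
        # smaller than either unimodal variance.
        var_fused = (sigma_vis**2 * sigma_vest**2) / (sigma_vis**2 + sigma_vest**2)

        # Conflict detection benchmark: discriminating the visual-vestibular
        # difference is limited by the variance of that difference.
        sigma_conflict = np.sqrt(sigma_vis**2 + sigma_vest**2)

        print(f"fused SD: {np.sqrt(var_fused):.2f}")           # ~1.66
        print(f"conflict benchmark SD: {sigma_conflict:.2f}")  # ~3.61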

  16. Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study

    PubMed Central

    Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano

    2017-01-01

    The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes the presence of two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have a higher spatial accuracy at the central azimuthal coordinate and a lower accuracy at the periphery. Moreover, their prior probability is higher at the center, and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism effect decreases with eccentricity. PMID:29046631
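
    The training rule described above (Hebbian potentiation balanced by a decay term, applied to cross-modal synapses driven by co-occurring audio-visual stimuli) can be sketched as follows. Layer sizes, rates, and the activity model are illustrative assumptions, not the paper's parameters:

        import numpy as np

        rng = np.random.default_rng(0)
        n_aud = n_vis = 100              # illustrative unimodal layer sizes
        W = np.zeros((n_aud, n_vis))     # cross-modal synapses (visual -> auditory)
        eta, decay = 0.05, 0.01          # illustrative learning and decay rates

        def bump(n, center, width=5.0):
            """Topologically organized activity bump around a stimulus position."""
            x = np.arange(n)
            return np.exp(-0.5 * ((x - center) / width) ** 2)

        for _ in range(2000):
            # Co-occurring audio-visual stimuli at a shared position, drawn
            # more often near the center to mimic the central visual prior.
            pos = float(np.clip(rng.normal(n_vis / 2, 15), 0, n_vis - 1))
            pre, post = bump(n_vis, pos), bump(n_aud, pos)
            # Hebbian potentiation with decay: repeated co-activation builds
            # synapses that encode the co-occurrence prior.
            W += eta * np.outer(post, pre) - decay * W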

  17. Dynamic Reconfiguration of the Supplementary Motor Area Network during Imagined Music Performance

    PubMed Central

    Tanaka, Shoji; Kirino, Eiji

    2017-01-01

    The supplementary motor area (SMA) has been shown to be the center for motor planning and is active during music listening and performance. However, limited data exist on the role of the SMA in music. Music performance requires complex information processing in auditory, visual, spatial, emotional, and motor domains, and this information is integrated for the performance. We hypothesized that the SMA is engaged in multimodal integration of information, distributed across several regions of the brain, to prepare for ongoing music performance. To test this hypothesis, functional networks involving the SMA were extracted from functional magnetic resonance imaging (fMRI) data that were acquired from musicians during imagined music performance and during the resting state. Compared with the resting condition, imagined music performance increased connectivity of the SMA with widespread regions in the brain including the sensorimotor cortices, parietal cortex, posterior temporal cortex, occipital cortex, and inferior and dorsolateral prefrontal cortex. Increased connectivity of the SMA with the dorsolateral prefrontal cortex suggests that the SMA is under cognitive control, while increased connectivity with the inferior prefrontal cortex suggests the involvement of syntax processing. Increased connectivity with the parietal cortex, posterior temporal cortex, and occipital cortex is likely for the integration of spatial, emotional, and visual information. Finally, increased connectivity with the sensorimotor cortices was potentially involved with the translation of thought planning into motor programs. Therefore, the reconfiguration of the SMA network observed in this study is considered to reflect the multimodal integration required for imagined and actual music performance. We propose that the SMA network constructs “the internal representation of music performance” by integrating multimodal information required for the performance. PMID:29311870

  18. A Prototype Search Toolkit

    NASA Astrophysics Data System (ADS)

    Knepper, Margaret M.; Fox, Kevin L.; Frieder, Ophir

    Information overload is now a reality. We no longer worry about obtaining a sufficient volume of data; we now are concerned with sifting and understanding the massive volumes of data available to us. To do so, we developed an integrated information processing toolkit that provides the user with a variety of ways to view their information. The views include keyword search results, a domain specific ranking system that allows for adaptively capturing topic vocabularies to customize and focus the search results, navigation pages for browsing, and a geospatial and temporal component to visualize results in time and space, and provide “what if” scenario playing. Integrating the information from different tools and sources gives the user additional information and another way to analyze the data. An example of the integration is illustrated on reports of the avian influenza (bird flu).

  19. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display.

    PubMed

    Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen

    2017-07-01

    Three-dimensional (3D) visualization of preoperative and intraoperative medical information becomes more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display for surgeons to observe the surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n = 5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc., to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
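
    The registration step above rests on iterative closest point (ICP). A minimal rigid-ICP sketch follows: nearest-neighbour correspondences alternate with the SVD-based (Kabsch) best-fit rotation and translation, and the warm start appears as an initial transform, standing in for the paper's point-cloud-selection initialization. All details are illustrative, not the authors' implementation:

        import numpy as np
        from scipy.spatial import cKDTree

        def best_fit_transform(src, dst):
            """Least-squares rigid transform (R, t) mapping src onto dst."""
            cs, cd = src.mean(0), dst.mean(0)
            H = (src - cs).T @ (dst - cd)        # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:             # guard against reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, cd - R @ cs

        def icp(src, dst, R0=np.eye(3), t0=np.zeros(3), iters=30):
            """Rigid ICP; (R0, t0) is the warm start, e.g. from a coarse
            point-cloud-selection alignment."""
            R, t = R0, t0
            tree = cKDTree(dst)
            for _ in range(iters):
                moved = src @ R.T + t
                _, idx = tree.query(moved)       # closest-point correspondences
                R, t = best_fit_transform(src, dst[idx])
            return R, t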

  20. Representation and Integration of Scientific Information

    NASA Technical Reports Server (NTRS)

    1999-01-01

    The objective of this Joint Research Interchange with NASA-Ames was to investigate how the Tsimmis technology could be used to represent and integrate scientific information. The funding allowed us to work with researchers within NAS at the NASA Ames Research Center, to understand their information needs, and to work with them on integration strategies. Most organizations have a need to access and integrate information from multiple, disparate information sources that may include both structured as well as semi-structured information. At Stanford we have been working on an information integration project called Tsimmis, supported by DARPA. The main goal of the Tsimmis project is to allow a decision maker to find information of interest from such sources, fuse it, and process it (e.g., summarize it, visualize it, discover trends). Another important goal is the easy incorporation of new sources, as well as the ability to deal with sources whose structure or services evolve. During the Interchange we had research meetings approximately every month or two. The participants from NASA included Michael Cox and Peter Vanderbilt. The Stanford PI and various Stanford students and staff members also participated. NASA researchers also participated in some of our regular Tsimmis meetings. As planned, our meetings discussed problems and solutions to various information integration problems.

  1. Crossmodal Adaptation in Right Posterior Superior Temporal Sulcus during Face–Voice Emotional Integration

    PubMed Central

    Latinus, Marianne; Noguchi, Takao; Garrod, Oliver; Crabbe, Frances; Belin, Pascal

    2014-01-01

    The integration of emotional information from the face and voice of other persons is known to be mediated by a number of “multisensory” cerebral regions, such as the right posterior superior temporal sulcus (pSTS). However, whether multimodal integration in these regions is attributable to interleaved populations of unisensory neurons responding to face or voice, or rather to multimodal neurons receiving input from the two modalities, is not fully clear. Here, we examine this question using functional magnetic resonance adaptation and dynamic audiovisual stimuli in which emotional information was manipulated parametrically and independently in the face and voice via morphing between angry and happy expressions. Healthy human adult subjects were scanned while performing a happy/angry emotion categorization task on a series of such stimuli included in a fast event-related, continuous carryover design. Subjects integrated both face and voice information when categorizing emotion (although there was a greater weighting of face information) and showed behavioral adaptation effects both within and across modality. Adaptation also occurred at the neural level: in addition to modality-specific adaptation in visual and auditory cortices, we observed for the first time a crossmodal adaptation effect. Specifically, fMRI signal in the right pSTS was reduced in response to a stimulus in which facial emotion was similar to the vocal emotion of the preceding stimulus. These results suggest that the integration of emotional information from face and voice in the pSTS involves a detectable proportion of bimodal neurons that combine inputs from visual and auditory cortices. PMID:24828635

  2. Using web-based animations to teach histology.

    PubMed

    Brisbourne, Marc A S; Chin, Susan S-L; Melnyk, Erica; Begg, David A

    2002-02-15

    We have been experimenting with the use of animations to teach histology as part of an interactive multimedia program we are developing to replace the traditional lecture/laboratory-based histology course in our medical and dental curricula. This program, called HistoQuest, uses animations to illustrate basic histologic principles, explain dynamic processes, integrate histologic structure with physiological function, and assist students in forming mental models with which to organize and integrate new information into their learning. With this article, we first briefly discuss the theory of mental modeling, principles of visual presentation, and how mental modeling and visual presentation can be integrated to create effective animations. We then discuss the major Web-based animation technologies that are currently available and their suitability for different visual styles and navigational structures. Finally, we describe the process we use to produce animations for our program. The approach described in this study can be used by other developers to create animations for delivery over the Internet for the teaching of histology.

  3. Visualization and Analysis of Multi-scale Land Surface Products via Giovanni Portals

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Kempler, Steven J.; Gerasimov, Irina V.

    2013-01-01

    Large volumes of MODIS land data products at multiple spatial resolutions have been integrated into the Giovanni online analysis system to support studies on land cover and land use changes, focused on the Northern Eurasia and Monsoon Asia regions through the LCLUC program. Giovanni (Goddard Interactive Online Visualization ANd aNalysis Infrastructure) is a Web-based application developed by the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), providing a simple and intuitive way to visualize, analyze, and access Earth science remotely-sensed and modeled data. Customized Giovanni Web portals (Giovanni-NEESPI and Giovanni-MAIRS) have been created to integrate land, atmospheric, cryospheric, and societal products, enabling researchers to do quick exploration and basic analyses of land surface changes, and their relationships to climate, at global and regional scales. This presentation shows a sample Giovanni portal page, lists selected data products in the system, and illustrates potential analyses with images and time-series at global and regional scales, focusing on climatology and anomaly analysis. More information is available at the GES DISC MAIRS data support project portal: http://disc.sci.gsfc.nasa.gov/mairs.

  4. Outcome of the First wwPDB Hybrid/Integrative Methods Task Force Workshop.

    PubMed

    Sali, Andrej; Berman, Helen M; Schwede, Torsten; Trewhella, Jill; Kleywegt, Gerard; Burley, Stephen K; Markley, John; Nakamura, Haruki; Adams, Paul; Bonvin, Alexandre M J J; Chiu, Wah; Peraro, Matteo Dal; Di Maio, Frank; Ferrin, Thomas E; Grünewald, Kay; Gutmanas, Aleksandras; Henderson, Richard; Hummer, Gerhard; Iwasaki, Kenji; Johnson, Graham; Lawson, Catherine L; Meiler, Jens; Marti-Renom, Marc A; Montelione, Gaetano T; Nilges, Michael; Nussinov, Ruth; Patwardhan, Ardan; Rappsilber, Juri; Read, Randy J; Saibil, Helen; Schröder, Gunnar F; Schwieters, Charles D; Seidel, Claus A M; Svergun, Dmitri; Topf, Maya; Ulrich, Eldon L; Velankar, Sameer; Westbrook, John D

    2015-07-07

    Structures of biomolecular systems are increasingly computed by integrative modeling that relies on varied types of experimental data and theoretical information. We describe here the proceedings and conclusions from the first wwPDB Hybrid/Integrative Methods Task Force Workshop held at the European Bioinformatics Institute in Hinxton, UK, on October 6 and 7, 2014. At the workshop, experts in various experimental fields of structural biology, experts in integrative modeling and visualization, and experts in data archiving addressed a series of questions central to the future of structural biology. How should integrative models be represented? How should the data and integrative models be validated? What data should be archived? How should the data and models be archived? What information should accompany the publication of integrative models? Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Autistic traits, but not schizotypy, predict increased weighting of sensory information in Bayesian visual integration.

    PubMed

    Karvelis, Povilas; Seitz, Aaron R; Lawrie, Stephen M; Seriès, Peggy

    2018-05-14

    Recent theories propose that schizophrenia/schizotypy and autistic spectrum disorder are related to impairments in Bayesian inference, that is, how the brain integrates sensory information (likelihoods) with prior knowledge. However, existing accounts fail to clarify: (i) how the proposed theories differ in their accounts of ASD vs. schizophrenia and (ii) whether the impairments result from weaker priors or enhanced likelihoods. Here, we directly address these issues by characterizing how 91 healthy participants, scored for autistic and schizotypal traits, implicitly learned and combined priors with sensory information. This was accomplished through a visual statistical learning paradigm designed to quantitatively assess variations in individuals' likelihoods and priors. The acquisition of the priors was found to be intact along both trait spectra. However, autistic traits were associated with more veridical perception and a weaker influence of expectations. Bayesian modeling revealed that this was due, not to weaker prior expectations, but to more precise sensory representations. © 2018, Karvelis et al.
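
    The weaker-priors versus enhanced-likelihoods question maps directly onto the Gaussian Bayesian observer: the posterior mean is a precision-weighted average of the measurement and the prior mean, so either a wider prior or a narrower likelihood pulls perception toward the veridical measurement, and only the posterior spread distinguishes the two. A minimal sketch under those assumed Gaussian forms:

        import numpy as np

        def posterior(x, sigma_lik, mu_prior, sigma_prior):
            """Posterior mean/SD for a Gaussian likelihood and Gaussian prior."""
            w_lik = 1 / sigma_lik**2       # likelihood precision
            w_pri = 1 / sigma_prior**2     # prior precision
            mu = (w_lik * x + w_pri * mu_prior) / (w_lik + w_pri)
            sd = np.sqrt(1 / (w_lik + w_pri))
            return mu, sd

        x = 10.0  # sensory measurement; prior expectation centered at 0
        # A weaker prior and a more precise likelihood both shift the
        # percept toward x (same posterior mean here), but they predict
        # different posterior SDs, which is what separates the accounts:
        print(posterior(x, sigma_lik=2.0, mu_prior=0.0, sigma_prior=4.0))
        print(posterior(x, sigma_lik=1.0, mu_prior=0.0, sigma_prior=2.0))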

  6. aGEM: an integrative system for analyzing spatial-temporal gene-expression information

    PubMed Central

    Jiménez-Lozano, Natalia; Segura, Joan; Macías, José Ramón; Vega, Juanjo; Carazo, José María

    2009-01-01

    Motivation: The work presented here describes the ‘anatomical Gene-Expression Mapping (aGEM)’ Platform, a development conceived to integrate phenotypic information with the spatial and temporal distributions of genes expressed in the mouse. The aGEM Platform has been built by extending the Distributed Annotation System (DAS) protocol, which was originally designed to share genome annotations over the WWW. DAS is a client-server system in which a single client integrates information from multiple distributed servers. Results: The aGEM Platform provides information to answer three main questions. (i) Which genes are expressed in a given mouse anatomical component? (ii) In which mouse anatomical structures are a given gene or set of genes expressed? And (iii) is there any correlation among these findings? Currently, this Platform includes several well-known mouse resources (EMAGE, GXD and GENSAT), hosting gene-expression data mostly obtained from in situ techniques together with a broad set of image-derived annotations. Availability: The Platform is optimized for Firefox 3.0 and it is accessed through a friendly and intuitive display: http://agem.cnb.csic.es Contact: natalia@cnb.csic.es Supplementary information: Supplementary data are available at http://bioweb.cnb.csic.es/VisualOmics/aGEM/home.html and http://bioweb.cnb.csic.es/VisualOmics/index_VO.html and Bioinformatics online. PMID:19592395

  7. Learning STEM Through Integrative Visual Representations

    NASA Astrophysics Data System (ADS)

    Virk, Satyugjit Singh

    Previous cognitive models of memory have not comprehensively taken into account the internal cognitive load of chunking isolated information, emphasizing only the external cognitive load of visual presentation. Under the Virk Long Term Working Memory Multimedia Model of cognitive load, which draws on the Cowan model, students presented with integrated animations of the key neural signal transmission subcomponents, in which the interrelationships between subcomponents are visually and verbally explicit, were hypothesized to perform significantly better on free response and diagram labeling questions than students presented with isolated animations of these subcomponents. The rationale is that the internal attentional cognitive load of chunking these concepts is greatly reduced, so the overall cognitive load is lower for the integrated-visuals group than for the isolated group, despite the higher external load for the integrated group of having the interrelationships between subcomponents presented explicitly. Experiment 1 demonstrated that integrating the subcomponents of the neuron significantly enhanced comprehension of the interconnections between cellular subcomponents and approached significance for enhancing comprehension of the layered molecular correlates of the cellular structures and their interconnections. Experiment 2 corrected time-on-task confounds from Experiment 1 and focused on the cellular subcomponents of the neuron only. Free response essay subcomponent subscores demonstrated significant differences in favor of the integrated group, with some corroborating evidence from the diagram labeling section. Subscores from the free response, short answer, What-If (problem solving), and diagram labeling detailed-interrelationship sections demonstrated that the integrated group did indeed learn the extra material they were presented with, providing initial support for the assertion that chunking mediated the greater learning gains for the neural subcomponent concepts over the control.

  8. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity

    PubMed Central

    Gibney, Kyla D.; Aligbe, Enimielen; Eggleston, Brady A.; Nunes, Sarah R.; Kerkhoff, Willa G.; Dean, Cassandra L.; Kwakye, Leslie D.

    2017-01-01

    The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information. PMID:28163675
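
    The race-model analysis reported above can be sketched concretely: Miller's inequality bounds the multisensory reaction-time (RT) distribution by the sum of the unisensory distributions, P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t), and integration is inferred where the bound is violated. The RT arrays below are synthetic placeholders, not the study's data:

        import numpy as np

        def ecdf(rts, t):
            """Empirical cumulative RT distribution evaluated at times t."""
            return np.searchsorted(np.sort(rts), t, side="right") / len(rts)

        def race_violation(rt_av, rt_a, rt_v, t):
            """Positive values violate the race-model bound, indicating
            multisensory integration rather than an independent race."""
            bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
            return ecdf(rt_av, t) - bound

        rng = np.random.default_rng(1)
        rt_a = rng.normal(420, 60, 200)     # auditory-only RTs (ms)
        rt_v = rng.normal(440, 60, 200)     # visual-only RTs (ms)
        rt_av = rng.normal(370, 55, 200)    # audiovisual RTs (ms)
        t = np.linspace(200, 700, 101)
        v = race_violation(rt_av, rt_a, rt_v, t)
        # A geometric measure of integration: area of positive violation.
        print(np.trapz(np.clip(v, 0, None), t))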

  9. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity.

    PubMed

    Gibney, Kyla D; Aligbe, Enimielen; Eggleston, Brady A; Nunes, Sarah R; Kerkhoff, Willa G; Dean, Cassandra L; Kwakye, Leslie D

    2017-01-01

    The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors present), low load (multisensory task while detecting the presence of a yellow letter in the visual distractors), and high load (multisensory task while detecting the presence of a number in the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for no and low perceptual load conditions only. Additionally, a geometric measure of Miller's inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration for the no load condition: no changes in integration for the McGurk task with increasing load but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.

  10. Video signal processing system uses gated current mode switches to perform high speed multiplication and digital-to-analog conversion

    NASA Technical Reports Server (NTRS)

    Gilliland, M. G.; Rougelot, R. S.; Schumaker, R. A.

    1966-01-01

    A video signal processor uses special-purpose integrated circuits with nonsaturating current-mode switching to accept texture and color information from a digital computer in a visual spaceflight simulator, and to combine these with analog fading information for display on a color CRT.

  11. Visualizing the Critique: Integrating Quantitative Reasoning with the Design Process

    ERIC Educational Resources Information Center

    Weinstein, Kathryn

    2017-01-01

    In the age of "Big Data," information is often quantitative in nature. The ability to analyze information through the sifting of data has been identified as a core competency for success in navigating daily life and participation in the contemporary workforce. This skill, known as Quantitative Reasoning (QR), is characterized by the…

  12. MetaSEEk: a content-based metasearch engine for images

    NASA Astrophysics Data System (ADS)

    Beigi, Mandis; Benitez, Ana B.; Chang, Shih-Fu

    1997-12-01

    Search engines are the most powerful resources for finding information on the rapidly expanding World Wide Web (WWW). Finding the desired search engines and learning how to use them, however, can be very time-consuming. The integration of such search tools enables users to access information across the world in a transparent and efficient manner. These systems are called meta-search engines. The recent emergence of visual information retrieval (VIR) search engines on the web is leading to the same efficiency problem. This paper describes and evaluates MetaSEEk, a content-based meta-search engine used for finding images on the Web based on their visual information. MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. User feedback is also integrated in the ranking refinement. We compare MetaSEEk with a baseline version of the meta-search engine, which does not use the past performance of the different search engines in recommending target search engines for future queries.
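
    The performance-based engine selection described above can be sketched as a simple scoring scheme: keep a score per (engine, query class), recommend the top-scoring engines for a new query, and nudge scores with user relevance feedback. The update rule and engine names are illustrative assumptions, not MetaSEEk's actual algorithm:

        from collections import defaultdict

        class EngineRanker:
            """Rank target search engines per query class from past
            performance, refined by user relevance feedback."""

            def __init__(self, alpha=0.3):
                self.alpha = alpha                      # feedback learning rate
                self.score = defaultdict(lambda: 0.5)   # (engine, class) -> score

            def recommend(self, engines, query_class, k=2):
                """Return the k engines with the best scores for this class."""
                return sorted(engines, key=lambda e: self.score[(e, query_class)],
                              reverse=True)[:k]

            def feedback(self, engine, query_class, relevant):
                """Move the score toward 1 (relevant) or 0 (not relevant)."""
                key = (engine, query_class)
                target = 1.0 if relevant else 0.0
                self.score[key] += self.alpha * (target - self.score[key])

        ranker = EngineRanker()
        ranker.feedback("EngineA", "color-query", relevant=True)
        print(ranker.recommend(["EngineA", "EngineB", "EngineC"], "color-query"))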

  13. Skylab 4 visual observations project report

    NASA Technical Reports Server (NTRS)

    Kaltenbach, J. L.; Lenoir, W. B.; Mcewen, M. C.; Weitenhagen, R. A.; Wilmarth, V. R.

    1974-01-01

    The Skylab 4 Visual Observations Project was undertaken to determine the ways in which man can contribute to future earth-orbital observational programs. The premission training consisted of 17 hours of lectures by scientists representing 16 disciplines and provided the crewmen information on observational and photographic procedures and the scientific significance of this information. During the Skylab 4 mission, more than 850 observations and 2000 photographs with the 70-millimeter Hasselblad and 35-millimeter Nikon cameras were obtained for many investigative areas. Preliminary results of the project indicate that man can obtain new and unique information to support satellite earth-survey programs because of his inherent capability to make selective observations, to integrate the information, and to record the data by describing and photographing the observational sites.

  14. Visual perceptual and handwriting skills in children with Developmental Coordination Disorder.

    PubMed

    Prunty, Mellissa; Barnett, Anna L; Wilmut, Kate; Plumb, Mandy

    2016-10-01

    Children with Developmental Coordination Disorder (DCD) demonstrate a lack of automaticity in handwriting as measured by pauses during writing. Deficits in visual perception have been proposed in the literature as underlying mechanisms of handwriting difficulties in children with DCD. The aim of this study was to examine whether measures of visual perception and visual-motor integration correlate with measures of the handwriting product and process in children with DCD. The performance of twenty-eight 8- to 14-year-old children who met the DSM-5 criteria for DCD was compared with 28 typically developing (TD) age- and gender-matched controls. The children completed the Developmental Test of Visual Motor Integration (VMI) and the Test of Visual Perceptual Skills (TVPS). Group comparisons were made, correlations were conducted between the visual perceptual measures and handwriting measures, and sensitivity and specificity were examined. The DCD group performed below the TD group on the VMI and TVPS. There were no significant correlations between the VMI or TVPS and any of the handwriting measures in the DCD group. In addition, both tests demonstrated low sensitivity. Clinicians should exercise caution in using visual perceptual measures to inform them about handwriting skill in children with DCD. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect.

    PubMed

    Burnham, Denis; Dodd, Barbara

    2004-12-01

    The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.

  16. Finding regions of interest in pathological images: an attentional model approach

    NASA Astrophysics Data System (ADS)

    Gómez, Francisco; Villalón, Julio; Gutierrez, Ricardo; Romero, Eduardo

    2009-02-01

    This paper introduces an automated method for finding diagnostic regions of interest (RoIs) in histopathological images. This method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, which consists of a coarse search for tissue structures at a "low zoom" to separate the image into relevant regions [1]. The pathologist's cognitive performance depends on inherent image visual cues (bottom-up information) and on acquired clinical medicine knowledge (top-down mechanisms). Our pathologist's visual attention model integrates these two components. The selected bottom-up information includes local low-level features such as intensity, color, orientation and texture information. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm, inspired by psychological grouping theories. The algorithm parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating a unique index for each of the low-level characteristics inside the region. Relevancy is estimated as a simple average of these indexes. Finally, a binary decision rule defines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually-weighted evaluation criterion, finding a quality gain of 3 dB compared with a classical bottom-up model of attention.
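
    The relevance computation described above (one index per low-level feature inside a region, a simple average, then a binary decision rule) can be sketched directly. The feature extraction and decision threshold below are placeholders:

        import numpy as np

        def region_relevance(feature_maps, region_mask):
            """Average the per-feature saliency indexes inside one region.

            feature_maps: dict of name -> 2D map in [0, 1] (intensity,
            color, orientation, texture). region_mask: boolean 2D array
            from the oversegmentation step.
            """
            indexes = [m[region_mask].mean() for m in feature_maps.values()]
            return float(np.mean(indexes))   # simple average of the indexes

        def is_roi(feature_maps, region_mask, threshold=0.5):
            """Binary decision rule: interesting if relevance exceeds threshold."""
            return region_relevance(feature_maps, region_mask) > threshold

        # Placeholder maps and mask standing in for a real slide image.
        rng = np.random.default_rng(4)
        maps = {name: rng.random((64, 64))
                for name in ("intensity", "color", "orientation", "texture")}
        mask = np.zeros((64, 64), dtype=bool)
        mask[10:30, 10:30] = True
        print(is_roi(maps, mask))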

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakafuji, Dora; Gouveia, Lauren

    This project supports development of the next-generation, integrated energy management system (EMS) infrastructure able to incorporate advanced visualization of behind-the-meter distributed resource information and probabilistic renewable energy generation forecasts to inform real-time operational decisions. The project involves end-users and active feedback from a Utility Advisory Team (UAT) to help inform how information can be used to enhance operational functions (e.g. unit commitment, load forecasting, Automatic Generation Control (AGC) reserve monitoring, ramp alerts) within two major EMS platforms. Objectives include: Engaging utility operations personnel to develop user input on displays, set expectations, test and review; Developing ease-of-use and timeliness metrics for measuring enhancements; Developing prototype integrated capabilities within two operational EMS environments; Demonstrating an integrated decision analysis platform with real-time wind and solar forecasting information and timely distributed resource information; Seamlessly integrating new 4-dimensional information into operations without increasing workload and complexities; Developing sufficient analytics to inform and confidently transform and adopt new operating practices and procedures; Disseminating project lessons learned through industry-sponsored workshops and conferences; and Building on collaborative utility-vendor partnerships and industry capabilities.

  18. Integrating and Visualizing Tropical Cyclone Data Using the Real Time Mission Monitor

    NASA Technical Reports Server (NTRS)

    Goodman, H. Michael; Blakeslee, Richard; Conover, Helen; Hall, John; He, Yubin; Regner, Kathryn

    2009-01-01

    The Real Time Mission Monitor (RTMM) is a visualization and information system that fuses multiple Earth science data sources, to enable real time decision-making for airborne and ground validation experiments. Developed at the NASA Marshall Space Flight Center, RTMM is a situational awareness, decision-support system that integrates satellite imagery, radar, surface and airborne instrument data sets, model output parameters, lightning location observations, aircraft navigation data, soundings, and other applicable Earth science data sets. The integration and delivery of this information is made possible using data acquisition systems, network communication links, network server resources, and visualizations through the Google Earth virtual globe application. RTMM is extremely valuable for optimizing individual Earth science airborne field experiments. Flight planners, scientists, and managers appreciate the contributions that RTMM makes to their flight projects. A broad spectrum of interdisciplinary scientists used RTMM during field campaigns including the hurricane-focused 2006 NASA African Monsoon Multidisciplinary Analyses (NAMMA), 2007 NOAA-NASA Aerosonde Hurricane Noel flight, 2007 Tropical Composition, Cloud, and Climate Coupling (TC4), plus a soil moisture (SMAP-VEX) and two arctic research experiments (ARCTAS) in 2008. Improving and evolving RTMM is a continuous process. RTMM recently integrated the Waypoint Planning Tool, a Java-based application that enables aircraft mission scientists to easily develop a pre-mission flight plan through an interactive point-and-click interface. Individual flight legs are automatically calculated "on the fly". The resultant flight plan is then immediately posted to the Google Earth-based RTMM for interested scientists to view the planned flight track and subsequently compare it to the actual real time flight progress. We are planning additional capabilities to RTMM including collaborations with the Jet Propulsion Laboratory in the joint development of a Tropical Cyclone Integrated Data Exchange and Analysis System (TC IDEAS) which will serve as a web portal for access to tropical cyclone data, visualizations and model output.

  19. A participatory approach to designing and enhancing integrated health information technology systems for veterans: protocol.

    PubMed

    Haun, Jolie N; Nazi, Kim M; Chavez, Margeaux; Lind, Jason D; Antinori, Nicole; Gosline, Robert M; Martin, Tracey L

    2015-02-27

    The Department of Veterans Affairs (VA) has developed health information technologies (HIT) and resources to improve veteran access to health care programs and services, and to support a patient-centered approach to health care delivery. To improve VA HIT access and meaningful use by veterans, it is necessary to understand their preferences for interacting with various HIT resources to accomplish health management related tasks and to exchange information. The objective of this paper was to describe a novel protocol for: (1) developing a HIT Digital Health Matrix Model; (2) conducting an Analytic Hierarchy Process called pairwise comparison to understand how and why veterans want to use electronic health resources to complete tasks related to health management; and (3) developing visual modeling simulations that depict veterans' preferences for using VA HIT to manage their health conditions and exchange health information. The study uses participatory research methods to understand how veterans prefer to use VA HIT to accomplish health management tasks within a given context, and how they would like to interact with HIT interfaces (eg, look, feel, and function) in the future. This study includes two rounds of veteran focus groups with self-administered surveys and visual modeling simulation techniques. This study will also convene an expert panel to assist in the development of a VA HIT Digital Health Matrix Model, so that both expert panel members and veteran participants can complete an Analytic Hierarchy Process, pairwise comparisons to evaluate and rank the applicability of electronic health resources for a series of health management tasks. This protocol describes the iterative, participatory, and patient-centered process for: (1) developing a VA HIT Digital Health Matrix Model that outlines current VA patient-facing platforms available to veterans, describing their features and relevant contexts for use; and (2) developing visual model simulations based on direct veteran feedback that depict patient preferences for enhancing the synchronization, integration, and standardization of VA patient-facing platforms. Focus group topics include current uses, preferences, facilitators, and barriers to using electronic health resources; recommendations for synchronizing, integrating, and standardizing VA HIT; and preferences on data sharing and delegation within the VA system. This work highlights the practical, technological, and personal factors that facilitate and inhibit use of current VA HIT, and informs an integrated system redesign. The Digital Health Matrix Model and visual modeling simulations use knowledge of veteran preferences and experiences to directly inform enhancements to VA HIT and provide a more holistic and integrated user experience. These efforts are designed to support the adoption and sustained use of VA HIT to support patient self-management and clinical care coordination in ways that are directly aligned with veteran preferences.
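
    The Analytic Hierarchy Process step in this protocol derives priority weights for the compared resources from a matrix of pairwise judgments via its principal eigenvector, with a consistency ratio flagging incoherent judgments. A minimal sketch; the comparison matrix below is illustrative, not veteran data:

        import numpy as np

        # Saaty's random consistency indices for matrix sizes 1..5.
        RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

        def ahp_weights(A):
            """Priority weights from the principal eigenvector of A, where
            A[i, j] encodes how strongly option i is preferred to option j
            (and A[j, i] = 1 / A[i, j])."""
            vals, vecs = np.linalg.eig(A)
            k = int(np.argmax(vals.real))
            w = np.abs(vecs[:, k].real)
            w /= w.sum()
            n = A.shape[0]
            ci = (vals[k].real - n) / (n - 1)   # consistency index
            cr = ci / RI[n] if RI[n] else 0.0   # ratio; < 0.1 is acceptable
            return w, cr

        # Illustrative comparison of three electronic health resources
        # for a single health-management task.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        weights, cr = ahp_weights(A)
        print(weights, cr)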

  20. A Participatory Approach to Designing and Enhancing Integrated Health Information Technology Systems for Veterans: Protocol

    PubMed Central

    Nazi, Kim M; Chavez, Margeaux; Lind, Jason D; Antinori, Nicole; Gosline, Robert M; Martin, Tracey L

    2015-01-01

    Background The Department of Veterans Affairs (VA) has developed health information technologies (HIT) and resources to improve veteran access to health care programs and services, and to support a patient-centered approach to health care delivery. To improve VA HIT access and meaningful use by veterans, it is necessary to understand their preferences for interacting with various HIT resources to accomplish health management related tasks and to exchange information. Objective The objective of this paper was to describe a novel protocol for: (1) developing a HIT Digital Health Matrix Model; (2) conducting an Analytic Hierarchy Process called pairwise comparison to understand how and why veterans want to use electronic health resources to complete tasks related to health management; and (3) developing visual modeling simulations that depict veterans’ preferences for using VA HIT to manage their health conditions and exchange health information. Methods The study uses participatory research methods to understand how veterans prefer to use VA HIT to accomplish health management tasks within a given context, and how they would like to interact with HIT interfaces (eg, look, feel, and function) in the future. This study includes two rounds of veteran focus groups with self-administered surveys and visual modeling simulation techniques. This study will also convene an expert panel to assist in the development of a VA HIT Digital Health Matrix Model, so that both expert panel members and veteran participants can complete an Analytic Hierarchy Process, pairwise comparisons to evaluate and rank the applicability of electronic health resources for a series of health management tasks. Results This protocol describes the iterative, participatory, and patient-centered process for: (1) developing a VA HIT Digital Health Matrix Model that outlines current VA patient-facing platforms available to veterans, describing their features and relevant contexts for use; and (2) developing visual model simulations based on direct veteran feedback that depict patient preferences for enhancing the synchronization, integration, and standardization of VA patient-facing platforms. Focus group topics include current uses, preferences, facilitators, and barriers to using electronic health resources; recommendations for synchronizing, integrating, and standardizing VA HIT; and preferences on data sharing and delegation within the VA system. Conclusions This work highlights the practical, technological, and personal factors that facilitate and inhibit use of current VA HIT, and informs an integrated system redesign. The Digital Health Matrix Model and visual modeling simulations use knowledge of veteran preferences and experiences to directly inform enhancements to VA HIT and provide a more holistic and integrated user experience. These efforts are designed to support the adoption and sustained use of VA HIT to support patient self-management and clinical care coordination in ways that are directly aligned with veteran preferences. PMID:25803324

  1. Distributed Fading Memory for Stimulus Properties in the Primary Visual Cortex

    PubMed Central

    Singer, Wolf; Maass, Wolfgang

    2009-01-01

    It is currently not known how distributed neuronal responses in early visual areas carry stimulus-related information. We made multielectrode recordings from cat primary visual cortex and applied methods from machine learning in order to analyze the temporal evolution of stimulus-related information in the spiking activity of large ensembles of around 100 neurons. We used sequences of up to three different visual stimuli (letters of the alphabet) presented for 100 ms and with intervals of 100 ms or larger. Most of the information about visual stimuli extractable by sophisticated methods of machine learning, i.e., support vector machines with nonlinear kernel functions, was also extractable by simple linear classification such as can be achieved by individual neurons. New stimuli did not erase information about previous stimuli. The responses to the most recent stimulus contained about equal amounts of information about both this and the preceding stimulus. This information was encoded both in the discharge rates (response amplitudes) of the ensemble of neurons and, when using short time constants for integration (e.g., 20 ms), in the precise timing of individual spikes (≤∼20 ms), and persisted for several hundred milliseconds beyond the offset of stimuli. The results indicate that the network from which we recorded is endowed with fading memory and is capable of performing online computations utilizing information about temporally sequential stimuli. This result challenges models assuming frame-by-frame analyses of sequential inputs. PMID:20027205
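
    The linear-readout claim above can be sketched: bin each neuron's spikes into a rate vector per trial for a given time window, fit a linear classifier on the stimulus labels, and trace cross-validated accuracy across successive windows to obtain the fading-memory curve. The data below are synthetic placeholders with an injected toy signal:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        n_trials, n_neurons = 120, 100
        labels = rng.integers(0, 3, n_trials)      # 3 letter stimuli
        # Spike counts in one 20-ms window, trials x neurons.
        rates = rng.poisson(2.0, (n_trials, n_neurons)).astype(float)
        rates[np.arange(n_trials), labels] += 3.0  # toy stimulus signal

        # A linear readout, comparable to what a single downstream neuron
        # could implement; repeating per window tracks information decay.
        clf = LogisticRegression(max_iter=1000)
        print(cross_val_score(clf, rates, labels, cv=5).mean())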

  2. Predictors of visual-motor integration in children with intellectual disability.

    PubMed

    Memisevic, Haris; Sinanovic, Osman

    2012-12-01

    The aim of this study was to assess the influence of sex, age, level and etiology of intellectual disability on visual-motor integration in children with intellectual disability. The sample consisted of 90 children with intellectual disability between 7 and 15 years of age. Visual-motor integration was measured using the Acadia test of visual-motor integration. A multiple regression analysis was used for data analysis. The results of this study showed that sex, level of intellectual disability, and age were significant predictors of visual-motor integration. The etiology of intellectual disability did not play a significant role in predicting visual-motor integration. Visual-motor integration skills are very important for a child's overall level of functioning. Individualized programs for the remediation of visual-motor integration skills should be a part of the curriculum for children with intellectual disability.

  3. Embedded Figures Detection in Autism and Typical Development: Preliminary Evidence of a Double Dissociation in Relationships with Visual Search

    ERIC Educational Resources Information Center

    Jarrold, Christopher; Gilchrist, Iain D.; Bender, Alison

    2005-01-01

    Individuals with autism show relatively strong performance on tasks that require them to identify the constituent parts of a visual stimulus. This is assumed to be the result of a bias towards processing the local elements in a display that follows from a weakened ability to integrate information at the global level. The results of the current…

  4. Auditory and visual interactions between the superior and inferior colliculi in the ferret.

    PubMed

    Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K

    2015-05-01

    The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than retinally driven superficial SC layers and earlier than deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
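
    A standard way to implement the volume-conduction control mentioned above is imaginary coherence: coherency is the normalized cross-spectrum of two signals, and zero-lag (volume-conducted) coupling contributes only to its real part, so a nonzero imaginary part implies genuinely lagged interaction. A sketch with placeholder signals standing in for SC and IC LFPs:

        import numpy as np
        from scipy.signal import csd, welch

        def imaginary_coherence(x, y, fs, nperseg=1024):
            """Imaginary part of the coherency between two LFP channels."""
            f, sxy = csd(x, y, fs=fs, nperseg=nperseg)
            _, sxx = welch(x, fs=fs, nperseg=nperseg)
            _, syy = welch(y, fs=fs, nperseg=nperseg)
            return f, np.imag(sxy / np.sqrt(sxx * syy))

        fs = 1000                                    # 1 kHz sampling
        t = np.arange(0, 10, 1 / fs)
        rng = np.random.default_rng(3)
        shared = np.sin(2 * np.pi * 8 * t)           # shared 8-Hz rhythm
        sc = shared + rng.normal(0, 1, t.size)
        ic = np.roll(shared, 25) + rng.normal(0, 1, t.size)  # 25-ms lag
        f, icoh = imaginary_coherence(sc, ic, fs)
        print(f[np.argmax(np.abs(icoh))])            # peaks near 8 Hz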

  5. InteGO2: A web tool for measuring and visualizing gene semantic similarities using Gene Ontology

    DOE PAGES

    Peng, Jiajie; Li, Hongxiang; Liu, Yongzhuang; ...

    2016-08-31

    Here, the Gene Ontology (GO) has been used in high-throughput omics research as a major bioinformatics resource. The hierarchical structure of GO provides users a convenient platform for biological information abstraction and hypothesis testing. Computational methods have been developed to identify functionally similar genes. However, none of the existing measurements take into account all the rich information in GO. Similarly, using these existing methods, web-based applications have been constructed to compute gene functional similarities, and to provide pure text-based outputs. Without a graphical visualization interface, it is difficult for result interpretation. As a result, we present InteGO2, a web tool that allows researchers to calculate the GO-based gene semantic similarities using seven widely used GO-based similarity measurements. Also, we provide an integrative measurement that synergistically integrates all the individual measurements to improve the overall performance. Using HTML5 and cytoscape.js, we provide a graphical interface in InteGO2 to visualize the resulting gene functional association networks. In conclusion, InteGO2 is an easy-to-use HTML5 based web tool. With it, researchers can measure gene or gene product functional similarity conveniently, and visualize the network of functional interactions in a graphical interface.
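
    Of the seven GO-based measures integrated by InteGO2, a representative example is Resnik's information-content similarity: the similarity of two terms is the information content, -log p(a), of their most informative common ancestor. A minimal sketch over a toy ontology fragment; the term names and annotation frequencies are placeholders, and the real GO is a large DAG:

        import math

        # Toy GO fragment: term -> set of parent terms.
        parents = {
            "leaf_a": {"mid"}, "leaf_b": {"mid"},
            "mid": {"root"}, "root": set(),
        }
        # Placeholder term probabilities; in practice, p(term) is the
        # fraction of gene products annotated at or below the term.
        p = {"root": 1.0, "mid": 0.2, "leaf_a": 0.05, "leaf_b": 0.08}

        def ancestors(term):
            """All ancestors of a term, including the term itself."""
            out, stack = {term}, list(parents[term])
            while stack:
                t = stack.pop()
                if t not in out:
                    out.add(t)
                    stack.extend(parents[t])
            return out

        def resnik(t1, t2):
            """IC of the most informative common ancestor."""
            common = ancestors(t1) & ancestors(t2)
            return max(-math.log(p[a]) for a in common)

        print(resnik("leaf_a", "leaf_b"))   # IC of "mid" = -log 0.2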

  6. InteGO2: a web tool for measuring and visualizing gene semantic similarities using Gene Ontology.

    PubMed

    Peng, Jiajie; Li, Hongxiang; Liu, Yongzhuang; Juan, Liran; Jiang, Qinghua; Wang, Yadong; Chen, Jin

    2016-08-31

    The Gene Ontology (GO) has been used in high-throughput omics research as a major bioinformatics resource. The hierarchical structure of GO provides users a convenient platform for biological information abstraction and hypothesis testing. Computational methods have been developed to identify functionally similar genes. However, none of the existing measurements take into account all the rich information in GO. Similarly, using these existing methods, web-based applications have been constructed to compute gene functional similarities, and to provide pure text-based outputs. Without a graphical visualization interface, result interpretation is difficult. We present InteGO2, a web tool that allows researchers to calculate the GO-based gene semantic similarities using seven widely used GO-based similarity measurements. Also, we provide an integrative measurement that synergistically integrates all the individual measurements to improve the overall performance. Using HTML5 and cytoscape.js, we provide a graphical interface in InteGO2 to visualize the resulting gene functional association networks. InteGO2 is an easy-to-use HTML5-based web tool. With it, researchers can measure gene or gene product functional similarity conveniently, and visualize the network of functional interactions in a graphical interface. InteGO2 can be accessed via http://mlg.hit.edu.cn:8089/.
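
    Among the GO-based measurements such a tool aggregates, information-content measures in the style of Resnik are typical. The following is a minimal Python sketch of that general idea, assuming hypothetical annotation counts and GO term IDs; it illustrates the technique only and is not InteGO2's implementation.

        import math

        # Hypothetical annotation counts per GO term (term -> number of annotated
        # genes); in practice these come from a GO annotation corpus.
        annotation_counts = {"GO:0008150": 10000, "GO:0009987": 6000, "GO:0007049": 300}
        total_genes = 10000

        def information_content(term):
            # IC(t) = -log p(t), where p(t) is the annotation frequency of t.
            return -math.log(annotation_counts[term] / total_genes)

        def resnik_similarity(common_ancestors):
            # Resnik similarity of two terms: the information content of their
            # most informative common ancestor (MICA) in the GO hierarchy.
            return max(information_content(t) for t in common_ancestors)

        # Illustrative call with made-up shared ancestors of two terms.
        print(resnik_similarity(["GO:0008150", "GO:0009987"]))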

  7. Rapid recalibration of speech perception after experiencing the McGurk illusion

    PubMed Central

    Pérez-Bellido, Alexis; de Lange, Floris P.

    2018-01-01

    The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the visual speech and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as ‘ada’). We found that only one trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as ‘ada’. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to ‘ada’ (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, signal detection theoretical analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration is dependent on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization. PMID:29657743

  8. InteGO2: A web tool for measuring and visualizing gene semantic similarities using Gene Ontology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Jiajie; Li, Hongxiang; Liu, Yongzhuang

    Here, the Gene Ontology (GO) has been used in high-throughput omics research as a major bioinformatics resource. The hierarchical structure of GO provides users a convenient platform for biological information abstraction and hypothesis testing. Computational methods have been developed to identify functionally similar genes. However, none of the existing measurements take into account all the rich information in GO. Similarly, using these existing methods, web-based applications have been constructed to compute gene functional similarities, and to provide pure text-based outputs. Without a graphical visualization interface, result interpretation is difficult. As a result, we present InteGO2, a web tool that allows researchers to calculate the GO-based gene semantic similarities using seven widely used GO-based similarity measurements. Also, we provide an integrative measurement that synergistically integrates all the individual measurements to improve the overall performance. Using HTML5 and cytoscape.js, we provide a graphical interface in InteGO2 to visualize the resulting gene functional association networks. In conclusion, InteGO2 is an easy-to-use HTML5-based web tool. With it, researchers can measure gene or gene product functional similarity conveniently, and visualize the network of functional interactions in a graphical interface.

  9. Afferentation of the lateral nidopallium: A tracing study of a brain area involved in sexual imprinting in the zebra finch (Taeniopygia guttata).

    PubMed

    Sadananda, Monika; Bischof, Hans-Joachim

    2006-08-23

    The lateral forebrain of zebra finches that comprises parts of the lateral nidopallium and parts of the lateral mesopallium is supposed to be involved in the storage and processing of visual information acquired by an early learning process called sexual imprinting. This information is later used to select an appropriate sexual partner for courtship behavior. Being involved in such a complicated behavioral task, the lateral nidopallium should be an integrative area receiving input from many other regions of the brain. Our experiments indeed show that the lateral nidopallium receives input from a variety of telencephalic regions including the primary and secondary areas of both visual pathways, the globus pallidus, the caudolateral nidopallium functionally comparable to the prefrontal cortex, the caudomedial nidopallium involved in song perception and storage of song-related memories, and some parts of the arcopallium. There are also a number of thalamic, mesencephalic, and brainstem efferents including the catecholaminergic locus coeruleus and the unspecific activating reticular formation. The spatial distribution of afferents suggests a compartmentalization of the lateral nidopallium into several subdivisions. Based on its connections, the lateral nidopallium should be considered as an area of higher order processing of visual information coming from the tectofugal and the thalamofugal visual pathways. Other sensory modalities and also motivational factors from a variety of brain areas are also integrated here. These findings support the idea of an involvement of the lateral nidopallium in imprinting and the control of courtship behavior.

  10. Regions of mid-level human visual cortex sensitive to the global coherence of local image patches.

    PubMed

    Mannion, Damien J; Kersten, Daniel J; Olman, Cheryl A

    2014-08-01

    The global structural arrangement and spatial layout of the visual environment must be derived from the integration of local signals represented in the lower tiers of the visual system. This interaction between the spatially local and global properties of visual stimulation underlies many of our visual capacities, and how this is achieved in the brain is a central question for visual and cognitive neuroscience. Here, we examine the sensitivity of regions of the posterior human brain to the global coordination of spatially displaced naturalistic image patches. We presented observers with image patches in two circular apertures to the left and right of central fixation, with the patches drawn from either the same (coherent condition) or different (noncoherent condition) extended image. Using fMRI at 7T (n = 5), we find that global coherence affected signal amplitude in regions of dorsal mid-level cortex. Furthermore, we find that extensive regions of mid-level visual cortex contained information in their local activity pattern that could discriminate coherent and noncoherent stimuli. These findings indicate that the global coordination of local naturalistic image information has important consequences for the processing in human mid-level visual cortex.

  11. Three visualization approaches for communicating and exploring PIT tag data

    USGS Publications Warehouse

    Letcher, Benjamin; Walker, Jeffrey D.; O'Donnell, Matthew; Whiteley, Andrew R.; Nislow, Keith; Coombs, Jason

    2018-01-01

    As the number, size and complexity of ecological datasets has increased, narrative and interactive raw data visualizations have emerged as important tools for exploring and understanding these large datasets. As a demonstration, we developed three visualizations to communicate and explore passive integrated transponder tag data from two long-term field studies. We created three independent visualizations for the same dataset, allowing separate entry points for users with different goals and experience levels. The first visualization uses a narrative approach to introduce users to the study. The second visualization provides interactive cross-filters that allow users to explore multi-variate relationships in the dataset. The last visualization allows users to visualize the movement histories of individual fish within the stream network. This suite of visualization tools allows a progressive discovery of more detailed information and should make the data accessible to users with a wide variety of backgrounds and interests.

  12. Visual and cross-modal cues increase the identification of overlapping visual stimuli in Balint's syndrome.

    PubMed

    D'Imperio, Daniela; Scandola, Michele; Gobbetto, Valeria; Bulgarelli, Cristina; Salgarello, Matteo; Avesani, Renato; Moro, Valentina

    2017-10-01

    Cross-modal interactions improve the processing of external stimuli, particularly when an isolated sensory modality is impaired. When information from different modalities is integrated, object recognition is facilitated, probably as a result of bottom-up and top-down processes. The aim of this study was to investigate the potential effects of cross-modal stimulation in a case of simultanagnosia. We report a detailed analysis of clinical symptoms and an 18F-fluorodeoxyglucose (FDG) brain positron emission tomography/computed tomography (PET/CT) study of a patient affected by Balint's syndrome, a rare and invasive visual-spatial disorder following bilateral parieto-occipital lesions. An experiment was conducted to investigate the effects of visual and nonvisual cues on performance in tasks involving the recognition of overlapping pictures. Four modalities of sensory cues were used: visual, tactile, olfactory, and auditory. Data from neuropsychological tests showed the presence of ocular apraxia, optic ataxia, and simultanagnosia. The results of the experiment indicate a positive effect of the cues on the recognition of overlapping pictures, not only in the identification of the congruent valid-cued stimulus (target) but also in the identification of the other, noncued stimuli. All the sensory modalities analyzed (except the auditory stimulus) were effective in increasing visual recognition. Cross-modal integration improved the patient's ability to recognize overlapping figures. However, while in the unimodal visual condition both bottom-up (priming, familiarity effect, disengagement of attention) and top-down processes (mental representation and short-term memory, the endogenous orientation of attention) are involved, in cross-modal integration it is semantic representations that mainly activate visual recognition processes. These results are potentially useful for the design of rehabilitation training for attentional and visual-perceptual deficits.

  13. OOMM--Object-Oriented Matrix Modelling: an instrument for the integration of the Brasilia Regional Health Information System.

    PubMed

    Cammarota, M; Huppes, V; Gaia, S; Degoulet, P

    1998-01-01

    The development of Health Information Systems is largely determined by the establishment of the underlying information models. An Object-Oriented Matrix Model (OOMM) is described, whose goal is to facilitate the integration of the overall health system. The model is based on information modules named micro-databases that are structured in a three-dimensional network: planning, health structures and information systems. The modelling tool has been developed as a layer on top of a relational database system. A visual browser facilitates the development and maintenance of the information model. The modelling approach has been applied to the Brasilia University Hospital since 1991. The extension of the modelling approach to the Brasilia regional health system is under consideration.

  14. Toward semantic-based retrieval of visual information: a model-based approach

    NASA Astrophysics Data System (ADS)

    Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman

    2002-07-01

    This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD), a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues, but also contextually relevant visual features are proportionally incorporated in VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground-truth samples. Such co-occurrence analysis of visual cues requires transformation of a real-valued visual feature vector (e.g., color histogram, Gabor texture, etc.) into a discrete event (e.g., terms in text). Good-features-to-track, the rule of thirds, iterative k-means clustering and TSVQ are involved in the transformation of feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since sparse sample data otherwise makes frequency estimates of visual cues unstable. The proposed method naturally allows integration of heterogeneous visual, temporal or spatial cues in a single classification or matching framework, and can be easily integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.
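
    The transformation of real-valued feature vectors into discrete visual terms described above amounts to vector quantization against a learned codebook. Below is a minimal sketch of that single step, assuming a codebook already produced by k-means; all sizes and values are made up for illustration, and this is not the paper's implementation.

        import numpy as np

        def visual_term_histogram(features, codebook):
            # Assign each feature vector to its nearest codebook centroid
            # ("visual term") and count term frequencies, yielding the kind of
            # frequency vector a visual context descriptor builds on.
            d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
            terms = d.argmin(axis=1)
            return np.bincount(terms, minlength=len(codebook))

        rng = np.random.default_rng(0)
        codebook = rng.normal(size=(8, 16))    # 8 visual terms (e.g., from k-means)
        features = rng.normal(size=(100, 16))  # 100 local feature vectors
        print(visual_term_histogram(features, codebook))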

  15. A Bayesian Account of Visual-Vestibular Interactions in the Rod-and-Frame Task.

    PubMed

    Alberts, Bart B G T; de Brouwer, Anouk J; Selen, Luc P J; Medendorp, W Pieter

    2016-01-01

    Panoramic visual cues, as generated by the objects in the environment, provide the brain with important information about gravity direction. To derive an optimal, i.e., Bayesian, estimate of gravity direction, the brain must combine panoramic information with gravity information detected by the vestibular system. Here, we examined the individual sensory contributions to this estimate psychometrically. We asked human subjects to judge the orientation (clockwise or counterclockwise relative to gravity) of a briefly flashed luminous rod, presented within an oriented square frame (rod-in-frame). Vestibular contributions were manipulated by tilting the subject's head, whereas visual contributions were manipulated by changing the viewing distance of the rod and frame. Results show a cyclical modulation of the frame-induced bias in perceived verticality across a 90° range of frame orientations. The magnitude of this bias decreased significantly with larger viewing distance, as if visual reliability was reduced. Biases increased significantly when the head was tilted, as if vestibular reliability was reduced. A Bayesian optimal integration model, with distinct vertical and horizontal panoramic weights, a gain factor to allow for visual reliability changes, and ocular counterroll in response to head tilt, provided a good fit to the data. We conclude that subjects flexibly weigh visual panoramic and vestibular information based on their orientation-dependent reliability, resulting in the observed verticality biases and the associated response variabilities.
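
    The core of such a Bayesian optimal integration model is reliability-weighted (inverse-variance) cue combination. The sketch below shows only that generic computation, not the authors' full model with panoramic weights, a visual gain factor, and ocular counterroll; the numbers are illustrative.

        def fuse_cues(mu_visual, sigma_visual, mu_vestibular, sigma_vestibular):
            # Inverse-variance-weighted fusion of two Gaussian estimates of
            # gravity direction; each cue's weight is its reliability 1/sigma^2.
            w_vis = 1.0 / sigma_visual**2
            w_ves = 1.0 / sigma_vestibular**2
            mu = (w_vis * mu_visual + w_ves * mu_vestibular) / (w_vis + w_ves)
            sigma = (w_vis + w_ves) ** -0.5  # fused estimate beats either cue alone
            return mu, sigma

        # A tilted frame biases the visual estimate of "up" by 5 degrees; head
        # tilt degrades vestibular reliability, so the bias pulls harder.
        print(fuse_cues(mu_visual=5.0, sigma_visual=4.0,
                        mu_vestibular=0.0, sigma_vestibular=8.0))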

  16. The cannabinoid system and visual processing: a review on experimental findings and clinical presumptions.

    PubMed

    Schwitzer, Thomas; Schwan, Raymund; Angioi-Duprez, Karine; Ingster-Moati, Isabelle; Lalanne, Laurence; Giersch, Anne; Laprevote, Vincent

    2015-01-01

    Cannabis is one of the most prevalent drugs used worldwide. Regular cannabis use is associated with impairments in highly integrative cognitive functions such as memory, attention and executive functions. To date, the cerebral mechanisms of these deficits are still poorly understood. Studying the processing of visual information may offer an innovative and relevant approach to evaluate the cerebral impact of exogenous cannabinoids on the human brain. Furthermore, this knowledge is required to understand the impact of cannabis intake in everyday life, and especially in car drivers. Here we review the role of the endocannabinoids in the functioning of the visual system and the potential involvement of cannabis use in visual dysfunctions. This review describes the presence of the endocannabinoids in the critical stages of visual information processing, and their role in the modulation of visual neurotransmission and visual synaptic plasticity, thereby enabling them to alter the transmission of the visual signal. We also review several induced visual changes, together with experimental dysfunctions reported in cannabis users. In the discussion, we consider these results in relation to the existing literature. We argue for more involvement of public health research in the study of visual function in cannabis users, especially because cannabis use is implicated in driving impairments. Copyright © 2014 Elsevier B.V. and ECNP. All rights reserved.

  17. Visual pattern recognition based on spatio-temporal patterns of retinal ganglion cells’ activities

    PubMed Central

    Jing, Wei; Liu, Wen-Zhong; Gong, Xin-Wei; Gong, Hai-Qing

    2010-01-01

    Neural information is processed based on the integrated activities of relevant neurons. Concerted population activity is one of the important ways for retinal ganglion cells to efficiently organize and process visual information. In the present study, the spike activities of bullfrog retinal ganglion cells in response to three different visual patterns (checkerboard, vertical gratings and horizontal gratings) were recorded using multi-electrode arrays. A measurement of subsequence distribution discrepancy (MSDD) was applied to identify the spatio-temporal patterns of retinal ganglion cells’ activities in response to different stimulation patterns. The results show that the population activity patterns differed in response to different stimulation patterns, and this difference was consistently detectable even when visual adaptation occurred during repeated experimental trials. Therefore, the stimulus pattern can be reliably discriminated according to the spatio-temporal pattern of the neuronal activities calculated using the MSDD algorithm. PMID:21886670

  18. Spatiotemporal dynamics underlying object completion in human ventral visual cortex.

    PubMed

    Tang, Hanlin; Buia, Calin; Madhavan, Radhika; Crone, Nathan E; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2014-08-06

    Natural vision often involves recognizing objects from partial information. Recognition of objects from parts presents a significant challenge for theories of vision because it requires spatial integration and extrapolation from prior knowledge. Here we recorded intracranial field potentials of 113 visually selective electrodes from epilepsy patients in response to whole and partial objects. Responses along the ventral visual stream, particularly the inferior occipital and fusiform gyri, remained selective despite showing only 9%-25% of the object areas. However, these visually selective signals emerged ∼100 ms later for partial versus whole objects. These processing delays were particularly pronounced in higher visual areas within the ventral stream. This latency difference persisted when controlling for changes in contrast, signal amplitude, and the strength of selectivity. These results argue against a purely feedforward explanation of recognition from partial information, and provide spatiotemporal constraints on theories of object recognition that involve recurrent processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. High speed digital holographic interferometry for hypersonic flow visualization

    NASA Astrophysics Data System (ADS)

    Hegde, G. M.; Jagdeesh, G.; Reddy, K. P. J.

    2013-06-01

    Optical imaging techniques have played a major role in understanding the flow dynamics of a variety of fluid flows, particularly in the study of hypersonic flows. Schlieren and shadowgraph techniques have been the flow diagnostic tools for the investigation of compressible flows for more than a century. However, these techniques provide only qualitative information about the flow field. Other optical techniques such as holographic interferometry and laser-induced fluorescence (LIF) have been used extensively for extracting quantitative information about high-speed flows. In this paper we present the application of the digital holographic interferometry (DHI) technique, integrated with a short-duration hypersonic shock tunnel facility with a test time of 1 ms, for quantitative flow visualization. The dynamics of the flow fields at hypersonic/supersonic speeds around different test models is visualized with DHI using a high-speed digital camera (0.2 million fps). These visualization results are compared with schlieren visualization and CFD simulation results. Fringe analysis is carried out to estimate the density of the flow field.

  20. Adult Visual Cortical Plasticity

    PubMed Central

    Gilbert, Charles D.; Li, Wu

    2012-01-01

    The visual cortex has the capacity for experience-dependent change, or cortical plasticity, that is retained throughout life. Plasticity is invoked for encoding information during perceptual learning, by internally representing the regularities of the visual environment, and it facilitates intermediate-level vision, such as contour integration and surface segmentation. The same mechanisms have adaptive value for functional recovery after CNS damage, such as that associated with stroke or neurodegenerative disease. A common feature of plasticity in primary visual cortex (V1) is an association field that links contour elements across the visual field. The circuitry underlying the association field includes a plexus of long-range horizontal connections formed by cortical pyramidal cells. These connections undergo rapid and exuberant sprouting and pruning in response to removal of sensory input, which can account for the topographic reorganization following retinal lesions. Similar alterations in cortical circuitry may be involved in perceptual learning, and the changes observed in V1 may be representative of how learned information is encoded throughout the cerebral cortex. PMID:22841310

  1. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    PubMed

    Supèr, Hans; Romeo, August; Keil, Matthias

    2010-05-19

    Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections between neighboring neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. By contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.

  2. Feed-Forward Segmentation of Figure-Ground and Assignment of Border-Ownership

    PubMed Central

    Supèr, Hans; Romeo, August; Keil, Matthias

    2010-01-01

    Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections between neighboring neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. By contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment. PMID:20502718

  3. Compression-based integral curve data reuse framework for flow visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Fan; Bi, Chongke; Guo, Hanqi

    Currently, by default, integral curves are repeatedly re-computed in different flow visualization applications, such as FTLE field computation, source-destination queries, etc., leading to unnecessary resource cost. We present a compression-based data reuse framework for integral curves, to greatly reduce their retrieval cost, especially in a resource-limited environment. In our design, a hierarchical and hybrid compression scheme is proposed to balance three objectives, including high compression ratio, controllable error, and low decompression cost. Specifically, we use and combine digitized curve sparse representation, floating-point data compression, and octree space partitioning to adaptively achieve the objectives. Results have shown that our data reuse framework could acquire tens of times acceleration in the resource-limited environment compared to on-the-fly particle tracing, and keep controllable information loss. Moreover, our method could provide fast integral curve retrieval for more complex data, such as unstructured mesh data.

  4. Multisensory integration processing during olfactory-visual stimulation-An fMRI graph theoretical network analysis.

    PubMed

    Ripp, Isabelle; Zur Nieden, Anna-Nora; Blankenagel, Sonja; Franzmeier, Nicolai; Lundström, Johan N; Freiherr, Jessica

    2018-05-07

    In this study, we aimed to understand how whole-brain neural networks compute the integration of sensory information, based on the olfactory and visual systems. Task-related functional magnetic resonance imaging (fMRI) data were obtained during unimodal and bimodal sensory stimulation. Based on the identification of multisensory integration processing (MIP)-specific hub-like network nodes, analyzed with network-based statistics using region-of-interest-based connectivity matrices, we conclude that the following brain areas are important for processing the presented bimodal sensory information: the right precuneus, connected contralaterally to the supramarginal gyrus for memory-related imagery and phonology retrieval, and the left middle occipital gyrus, connected ipsilaterally to the inferior frontal gyrus via the inferior fronto-occipital fasciculus, including functional aspects of working memory. Applying graph theory to quantify the resulting complex network topologies indicates a significantly increased global efficiency and clustering coefficient in networks including aspects of MIP, reflecting simultaneously better integration and segregation. Graph theoretical analysis of positive and negative network correlations, which allows inferences about excitatory and inhibitory network architectures, revealed, not significantly but very consistently, that MIP-specific neural networks are dominated by inhibitory relationships between brain regions involved in stimulus processing. © 2018 Wiley Periodicals, Inc.

  5. How does interhemispheric communication in visual word recognition work? Deciding between early and late integration accounts of the split fovea theory.

    PubMed

    Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J

    2009-02-01

    It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision experiments are reported in which words were presented at different fixation positions. In Experiment 1, a masked form priming task was used with primes that had two adjacent letters transposed. The results showed that although the fixation position had a substantial influence on the transposed letter priming effect, the priming was not smaller when the transposed letters were sent to different hemispheres than when they were projected to the same hemisphere. In Experiment 2, stimuli were presented that either had high frequency hemifield competitors or could be identified unambiguously on the basis of the information in one hemifield. Again, the lexical decision times did not vary as a function of hemifield competitors. These results are consistent with the early integration account, as presented in the SERIOL model of visual word recognition.

  6. How vision and movement combine in the hippocampal place code.

    PubMed

    Chen, Guifen; King, John A; Burgess, Neil; O'Keefe, John

    2013-01-02

    How do external environmental and internal movement-related information combine to tell us where we are? We examined the neural representation of environmental location provided by hippocampal place cells while mice navigated a virtual reality environment in which both types of information could be manipulated. Extracellular recordings were made from region CA1 of head-fixed mice navigating a virtual linear track and running in a similar real environment. Despite the absence of vestibular motion signals, normal place cell firing and theta rhythmicity were found. Visual information alone was sufficient for localized firing in 25% of place cells and to maintain a local field potential theta rhythm (but with significantly reduced power). Additional movement-related information was required for normally localized firing by the remaining 75% of place cells. Trials in which movement and visual information were put into conflict showed that they combined nonlinearly to control firing location, and that the relative influence of movement versus visual information varied widely across place cells. However, within this heterogeneity, the behavior of fully half of the place cells conformed to a model of path integration in which the presence of visual cues at the start of each run together with subsequent movement-related updating of position was sufficient to maintain normal fields.
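
    The path-integration account that half of the place cells conformed to can be reduced to a few lines: visual cues anchor the position estimate at the start of each run, after which movement-related signals alone update it. This toy sketch states only that assumption and is not the authors' analysis code.

        def path_integrate(start_position_from_vision, movement_steps):
            # Position is set by visual cues once at the start of the run,
            # then updated from movement-related signals alone.
            position = start_position_from_vision
            trajectory = [position]
            for step in movement_steps:
                position += step
                trajectory.append(position)
            return trajectory

        # A run along a 0.5-unit stretch of virtual track in 0.1-unit steps.
        print(path_integrate(0.0, [0.1] * 5))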

  7. Measuring healthcare integration: Operationalization of a framework for a systems evaluation of palliative care structures, processes, and outcomes.

    PubMed

    Bainbridge, Daryl; Brazil, Kevin; Ploeg, Jenny; Krueger, Paul; Taniguchi, Alan

    2016-06-01

    Healthcare integration is a priority in many countries, yet there remains little direction on how to systematically evaluate this construct to inform further development. The examination of community-based palliative care networks provides an ideal opportunity for the advancement of integration measures, in consideration of how fundamental provider cohesion is to effective care at end of life. This article presents a variable-oriented analysis from a theory-based case study of a palliative care network to help bridge the knowledge gap in integration measurement. Data from a mixed-methods case study were mapped to a conceptual framework for evaluating integrated palliative care and a visual array depicting the extent of key factors in the represented palliative care network was formulated. The study included data from 21 palliative care network administrators, 86 healthcare professionals, and 111 family caregivers, all from an established palliative care network in Ontario, Canada. The framework used to guide this research proved useful in assessing qualities of integration and functioning in the palliative care network. The resulting visual array of elements illustrates that while this network performed relatively well at the multiple levels considered, room for improvement exists, particularly in terms of interventions that could facilitate the sharing of information. This study, along with the other evaluative examples mentioned, represents important initial attempts at empirically and comprehensively examining network-integrated palliative care and healthcare integration in general. © The Author(s) 2016.

  8. Automated Visual Cognitive Tasks for Recording Neural Activity Using a Floor Projection Maze

    PubMed Central

    Kent, Brendon W.; Yang, Fang-Chi; Burwell, Rebecca D.

    2014-01-01

    Neuropsychological tasks used in primates to investigate mechanisms of learning and memory are typically visually guided cognitive tasks. We have developed visual cognitive tasks for rats using the Floor Projection Maze [1,2] that are optimized for the visual abilities of rats, permitting stronger comparisons of experimental findings with other species. In order to investigate neural correlates of learning and memory, we have integrated electrophysiological recordings into fully automated cognitive tasks on the Floor Projection Maze [1,2]. Behavioral software interfaced with an animal tracking system allows monitoring of the animal's behavior with precise control of image presentation and reward contingencies for better trained animals. Integration with an in vivo electrophysiological recording system enables examination of behavioral correlates of neural activity at selected epochs of a given cognitive task. We describe protocols for a model system that combines automated visual presentation of information to rodents and intracranial reward with electrophysiological approaches. Our model system offers a sophisticated set of tools as a framework for other cognitive tasks to better isolate and identify specific mechanisms contributing to particular cognitive processes. PMID:24638057

  9. Topography from shading and stereo

    NASA Technical Reports Server (NTRS)

    Horn, Berthold K. P.

    1994-01-01

    Methods exploiting photometric information in images that have been developed in machine vision can be applied to planetary imagery. Integrating shape from shading, binocular stereo, and photometric stereo yields a robust system for recovering detailed surface shape and surface reflectance information. Such a system is useful in producing quantitative information from the vast volume of imagery being received, as well as in helping visualize the underlying surface.

  10. How a surgeon becomes superman by visualization of intelligently fused multi-modalities

    NASA Astrophysics Data System (ADS)

    Erat, Okan; Pauly, Olivier; Weidert, Simon; Thaller, Peter; Euler, Ekkehard; Mutschler, Wolf; Navab, Nassir; Fallavollita, Pascal

    2013-03-01

    Motivation: The existing visualization of the Camera Augmented Mobile C-arm (CamC) system lacks depth cues and presents anatomical information to surgeons in a confusing way. Methods: We propose a method that segments anatomical information from the X-ray image and then augments it onto the video images. To provide depth cues, pixels in the video images are classified into skin and object classes. The anatomical information from the X-ray is overlaid only where pixels have a higher probability of belonging to the skin class. Results: We tested our algorithm by displaying the new visualization to 2 expert surgeons and 1 medical student during three surgical workflow sequences of the interlocking of intramedullary nail procedure, namely: skin incision, center punching, and drilling. Via a survey questionnaire, they were asked to assess the new visualization compared to the current alpha-blending overlay image displayed by CamC. The participants all agreed (100%) that occlusion handling and instrument tip position detection were immediately improved with our technique. When asked if our visualization has the potential to replace the existing alpha-blending overlay during interlocking procedures, all participants did not hesitate to suggest an immediate integration of the visualization for the correct navigation and guidance of the procedure. Conclusion: Current alpha-blending visualizations lack proper depth cues and can be a source of confusion for surgeons when performing surgery. Our visualization concept shows great potential in alleviating occlusion and facilitating clinician understanding during specific workflow steps of the intramedullary nailing procedure.
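
    The visualization rule described in the Methods reduces to a masked alpha blend: X-ray anatomy is overlaid only where a pixel classifier assigns a high skin probability, so instruments stay visible and provide depth cues. A minimal numpy sketch follows; the skin-probability map, threshold, and image sizes are hypothetical.

        import numpy as np

        def selective_overlay(video, xray, p_skin, alpha=0.5, threshold=0.6):
            # Blend X-ray anatomy into the video only where pixels are likely
            # skin; elsewhere (instruments, other objects) keep the video pixel.
            blend = (1 - alpha) * video + alpha * xray
            mask = (p_skin > threshold)[..., None]  # broadcast over RGB channels
            return np.where(mask, blend, video)

        rng = np.random.default_rng(1)
        video = rng.random((4, 4, 3))
        xray = rng.random((4, 4, 3))
        p_skin = rng.random((4, 4))  # e.g., output of a skin/object classifier
        print(selective_overlay(video, xray, p_skin).shape)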

  11. Programming (Tips) for Physicists & Engineers

    ScienceCinema

    Ozcan, Erkcan

    2018-02-19

    Programming for today's physicists and engineers. Work environment: today's astroparticle and accelerator experiments and the information industry rely on large collaborations. Needed more than ever: code sharing/reuse, code building (framework integration), documentation and good visualization, working remotely, and not reinventing the wheel.

  12. Programming (Tips) for Physicists & Engineers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ozcan, Erkcan

    2010-07-13

    Programming for today's physicists and engineers. Work environment: today's astroparticle and accelerator experiments and the information industry rely on large collaborations. Needed more than ever: code sharing/reuse, code building (framework integration), documentation and good visualization, working remotely, and not reinventing the wheel.

  13. Active sensing in the categorization of visual patterns

    PubMed Central

    Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M

    2016-01-01

    Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546
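
    A Bayesian active sensor of the kind used to reveal locations can be sketched as greedy expected-information-gain maximization: pick the location whose observation is expected to shrink the entropy of the category posterior the most. The likelihood table below is invented for illustration; this is the generic technique, not the paper's exact algorithm.

        import numpy as np

        def entropy(p):
            p = np.clip(p, 1e-12, 1.0)
            return -(p * np.log(p)).sum()

        def next_fixation(posterior, likelihoods):
            # likelihoods[c, l, o] = P(observe o at location l | category c).
            n_cat, n_loc, n_obs = likelihoods.shape
            gains = np.zeros(n_loc)
            for l in range(n_loc):
                for o in range(n_obs):
                    p_o = (posterior * likelihoods[:, l, o]).sum()
                    if p_o > 0:
                        post = posterior * likelihoods[:, l, o] / p_o
                        gains[l] += p_o * (entropy(posterior) - entropy(post))
            return gains.argmax()  # location with highest expected info gain

        # Two categories, three candidate locations, binary observations.
        lik = np.array([[[0.9, 0.1], [0.5, 0.5], [0.6, 0.4]],
                        [[0.2, 0.8], [0.5, 0.5], [0.4, 0.6]]])
        print(next_fixation(np.array([0.5, 0.5]), lik))  # -> 0, most informative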

  14. Visual cues and listening effort: individual variability.

    PubMed

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2011-10-01

    To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.

  15. Learning to rank using user clicks and visual features for image retrieval.

    PubMed

    Yu, Jun; Tao, Dacheng; Wang, Meng; Rui, Yong

    2015-04-01

    The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in judging the relevance between a query and clicked images, are adopted in the image ranking model. However, the existing ranking model cannot integrate visual features, which are effective in refining the click-based search results. In this paper, we propose a novel ranking model based on the learning-to-rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large-margin structured output learning, and the visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning-to-rank models based on visual features and user clicks outperform state-of-the-art algorithms.
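
    For intuition, a much-simplified relative of such a ranking model is a pairwise hinge loss over combined click and visual features: a clicked (relevant) image should outscore an unclicked one by a margin. The sketch below is this generic baseline with synthetic features, not the paper's large-margin structured output method with its hypergraph regularizer.

        import numpy as np

        def pairwise_rank_step(w, x_pos, x_neg, lr=0.1, margin=1.0):
            # One subgradient step of the pairwise hinge loss
            # max(0, margin - w.(x_pos - x_neg)).
            if w @ x_pos - w @ x_neg < margin:
                w = w + lr * (x_pos - x_neg)
            return w

        rng = np.random.default_rng(2)
        clicked = rng.normal(0.5, 1.0, size=(200, 10))    # click+visual features
        unclicked = rng.normal(-0.5, 1.0, size=(200, 10))
        w = np.zeros(10)
        for x_p, x_n in zip(clicked, unclicked):
            w = pairwise_rank_step(w, x_p, x_n)
        print((clicked @ w > unclicked @ w).mean())  # fraction ranked correctly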

  16. When a hit sounds like a kiss: An electrophysiological exploration of semantic processing in visual narrative.

    PubMed

    Manfredi, Mirella; Cohn, Neil; Kutas, Marta

    2017-06-01

    Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoetic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. SATORI: a system for ontology-guided visual exploration of biomedical data repositories.

    PubMed

    Lekschas, Fritz; Gehlenborg, Nils

    2018-04-01

    The ever-increasing number of biomedical datasets provides tremendous opportunities for re-use, but current data repositories provide limited means of exploration apart from text-based search. Ontological metadata annotations provide context by semantically relating datasets. Visualizing this rich network of relationships can improve the explorability of large data repositories and help researchers find datasets of interest. We developed SATORI, an integrative search and visual exploration interface for biomedical data repositories. The design is informed by a requirements analysis through a series of semi-structured interviews. We evaluated the implementation of SATORI in a field study on a real-world data collection. SATORI enables researchers to seamlessly search, browse and semantically query data repositories via two visualizations that are highly interconnected with a powerful search interface. SATORI is an open-source web application, which is freely available at http://satori.refinery-platform.org and integrated into the Refinery Platform. nils@hms.harvard.edu. Supplementary data are available at Bioinformatics online.

  18. When a hit sounds like a kiss: an electrophysiological exploration of semantic processing in visual narrative

    PubMed Central

    Manfredi, Mirella; Cohn, Neil; Kutas, Marta

    2017-01-01

    Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoetic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. PMID:28242517

  19. Three Types of Cortical L5 Neurons that Differ in Brain-Wide Connectivity and Function

    PubMed Central

    Kim, Euiseok J.; Juavinett, Ashley L.; Kyubwa, Espoir M.; Jacobs, Matthew W.; Callaway, Edward M.

    2015-01-01

    Cortical layer 5 (L5) pyramidal neurons integrate inputs from many sources and distribute outputs to cortical and subcortical structures. Previous studies demonstrate two L5 pyramid types: cortico-cortical (CC) and cortico-subcortical (CS). We characterize connectivity and function of these cell types in mouse primary visual cortex and reveal a new subtype. Unlike previously described L5 CC and CS neurons, this new subtype does not project to striatum [cortico-cortical, non-striatal (CC-NS)] and has distinct morphology, physiology and visual responses. Monosynaptic rabies tracing reveals that CC neurons preferentially receive input from higher visual areas, while CS neurons receive more input from structures implicated in top-down modulation of brain states. CS neurons are also more direction-selective and prefer faster stimuli than CC neurons. These differences suggest distinct roles as specialized output channels, with CS neurons integrating information and generating responses more relevant to movement control and CC neurons being more important in visual perception. PMID:26671462

  20. Three Types of Cortical Layer 5 Neurons That Differ in Brain-wide Connectivity and Function.

    PubMed

    Kim, Euiseok J; Juavinett, Ashley L; Kyubwa, Espoir M; Jacobs, Matthew W; Callaway, Edward M

    2015-12-16

    Cortical layer 5 (L5) pyramidal neurons integrate inputs from many sources and distribute outputs to cortical and subcortical structures. Previous studies demonstrate two L5 pyramid types: cortico-cortical (CC) and cortico-subcortical (CS). We characterize connectivity and function of these cell types in mouse primary visual cortex and reveal a new subtype. Unlike previously described L5 CC and CS neurons, this new subtype does not project to striatum [cortico-cortical, non-striatal (CC-NS)] and has distinct morphology, physiology, and visual responses. Monosynaptic rabies tracing reveals that CC neurons preferentially receive input from higher visual areas, while CS neurons receive more input from structures implicated in top-down modulation of brain states. CS neurons are also more direction-selective and prefer faster stimuli than CC neurons. These differences suggest distinct roles as specialized output channels, with CS neurons integrating information and generating responses more relevant to movement control and CC neurons being more important in visual perception. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Modular Representation of Luminance Polarity In the Superficial Layers Of Primary Visual Cortex

    PubMed Central

    Smith, Gordon B.; Whitney, David E.; Fitzpatrick, David

    2016-01-01

    The spatial arrangement of luminance increments (ON) and decrements (OFF) falling on the retina provides a wealth of information used by central visual pathways to construct coherent representations of visual scenes. But how the polarity of luminance change is represented in the activity of cortical circuits remains unclear. Using wide-field epifluorescence and two-photon imaging we demonstrate a robust modular representation of luminance polarity (ON or OFF) in the superficial layers of ferret primary visual cortex. Polarity-specific domains are found with both uniform changes in luminance and single light/dark edges, and include neurons selective for orientation and direction of motion. The integration of orientation and polarity preference is evident in the selectivity and discrimination capabilities of most layer 2/3 neurons. We conclude that polarity selectivity is an integral feature of layer 2/3 neurons, ensuring that the distinction between light and dark stimuli is available for further processing in downstream extrastriate areas. PMID:26590348

  2. Capability for Integrated Systems Risk-Reduction Analysis

    NASA Technical Reports Server (NTRS)

    Mindock, J.; Lumpkins, S.; Shelhamer, M.

    2016-01-01

    NASA's Human Research Program (HRP) is working to increase the likelihoods of human health and performance success during long-duration missions, and subsequent crew long-term health. To achieve these goals, there is a need to develop an integrated understanding of how the complex human physiological-socio-technical mission system behaves in spaceflight. This understanding will allow HRP to provide cross-disciplinary spaceflight countermeasures while minimizing resources such as mass, power, and volume. This understanding will also allow development of tools to assess the state of and enhance the resilience of individual crewmembers, teams, and the integrated mission system. We will discuss a set of risk-reduction questions that has been identified to guide the systems approach necessary to meet these needs. In addition, a framework of factors influencing human health and performance in space, called the Contributing Factor Map (CFM), is being applied as the backbone for incorporating information addressing these questions from sources throughout HRP. Using the common language of the CFM, information from sources such as the Human System Risk Board summaries, Integrated Research Plan, and HRP-funded publications has been combined and visualized in ways that allow insight into cross-disciplinary interconnections in a systematic, standardized fashion. We will show examples of these visualizations. We will also discuss applications of the resulting analysis capability that can inform science portfolio decisions, such as areas in which cross-disciplinary solicitations or countermeasure development will potentially be fruitful.

  3. Sequence Diversity Diagram for comparative analysis of multiple sequence alignments.

    PubMed

    Sakai, Ryo; Aerts, Jan

    2014-01-01

    The sequence logo is a graphical representation of a set of aligned sequences, commonly used to depict conservation of amino acid or nucleotide sequences. Although it effectively communicates the amount of information present at every position, this visual representation falls short when the domain task is to compare between two or more sets of aligned sequences. We present a new visual presentation called a Sequence Diversity Diagram and validate our design choices with a case study. Our software was developed using the open-source program called Processing. It loads multiple sequence alignment FASTA files and a configuration file, which can be modified as needed to change the visualization. The redesigned figure improves on the visual comparison of two or more sets, and it additionally encodes information on sequential position conservation. In our case study of the adenylate kinase lid domain, the Sequence Diversity Diagram reveals unexpected patterns and new insights, for example the identification of subgroups within the protein subfamily. Our future work will integrate this visual encoding into interactive visualization tools to support higher level data exploration tasks.
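
    The "amount of information present at every position" that a sequence logo depicts is the column's information content: log2 of the alphabet size minus the column's Shannon entropy. Below is a minimal sketch over a toy protein alignment (20-letter alphabet); it illustrates the underlying quantity only and is independent of the Processing-based tool described above.

        import math
        from collections import Counter

        def column_information(column, alphabet_size=20):
            # Information content in bits: log2(|alphabet|) - Shannon entropy.
            counts = Counter(column)
            n = len(column)
            h = -sum((c / n) * math.log2(c / n) for c in counts.values())
            return math.log2(alphabet_size) - h

        alignment = ["MKVL", "MKIL", "MRVL"]  # toy aligned sequences
        print([round(column_information(col), 2) for col in zip(*alignment)])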

  4. Musical expertise is related to altered functional connectivity during audiovisual integration

    PubMed Central

    Paraskevopoulos, Evangelos; Kraneburg, Anja; Herholz, Sibylle Cornelia; Bamidis, Panagiotis D.; Pantev, Christo

    2015-01-01

    The present study investigated the cortical large-scale functional network underpinning audiovisual integration via magnetoencephalographic recordings. The reorganization of this network related to long-term musical training was investigated by comparing musicians to nonmusicians. Connectivity was calculated on the basis of the estimated mutual information of the sources’ activity, and the corresponding networks were statistically compared. Nonmusicians’ results indicated that the cortical network associated with audiovisual integration supports visuospatial processing and attentional shifting, whereas a sparser network, related to spatial awareness supports the identification of audiovisual incongruences. In contrast, musicians’ results showed enhanced connectivity in regions related to the identification of auditory pattern violations. Hence, nonmusicians rely on the processing of visual clues for the integration of audiovisual information, whereas musicians rely mostly on the corresponding auditory information. The large-scale cortical network underpinning multisensory integration is reorganized due to expertise in a cognitive domain that largely involves audiovisual integration, indicating long-term training-related neuroplasticity. PMID:26371305

  5. VESPA: software to facilitate genomic annotation of prokaryotic organisms through integration of proteomic and transcriptomic data.

    PubMed

    Peterson, Elena S; McCue, Lee Ann; Schrimpe-Rutledge, Alexandra C; Jensen, Jeffrey L; Walker, Hyunjoo; Kobold, Markus A; Webb, Samantha R; Payne, Samuel H; Ansong, Charles; Adkins, Joshua N; Cannon, William R; Webb-Robertson, Bobbie-Jo M

    2012-04-05

    The procedural aspects of genome sequencing and assembly have become relatively inexpensive, yet the full, accurate structural annotation of these genomes remains a challenge. Next-generation sequencing transcriptomics (RNA-Seq), global microarrays, and tandem mass spectrometry (MS/MS)-based proteomics have demonstrated immense value to genome curators as individual sources of information; however, integrating these data types to validate and improve structural annotation remains a major challenge. Current visual and statistical analytic tools are focused on a single data type, or existing software tools are retrofitted to analyze new data forms. We present Visual Exploration and Statistics to Promote Annotation (VESPA), a new interactive visual analysis software tool focused on assisting scientists with the annotation of prokaryotic genomes through the integration of proteomics and transcriptomics data with current genome location coordinates. VESPA is a desktop Java™ application that integrates high-throughput proteomics data (peptide-centric) and transcriptomics (probe or RNA-Seq) data into a genomic context, all of which can be visualized at three levels of genomic resolution. Data is interrogated via searches linked to the genome visualizations to find regions with a high likelihood of mis-annotation. Search results are linked to exports for further validation outside of VESPA, or potential coding regions can be analyzed concurrently with the software through interaction with BLAST. VESPA is demonstrated on two use cases (Yersinia pestis Pestoides F and Synechococcus sp. PCC 7002), showing the rapid manner in which mis-annotations can be found and explored using either proteomics data alone or in combination with transcriptomic data. VESPA is an interactive visual analytics tool that integrates high-throughput data into a genomic context to facilitate the discovery of structural mis-annotations in prokaryotic genomes. Data is evaluated via visual analysis across multiple levels of genomic resolution, linked searches, and interaction with existing bioinformatics tools. We highlight the novel functionality of VESPA and the core programming requirements for visualization of these large heterogeneous datasets in a client-side application. The software is freely available at https://www.biopilot.org/docs/Software/Vespa.php.

  7. Visual-auditory integration for visual search: a behavioral study in barn owls

    PubMed Central

    Hazan, Yael; Kra, Yonatan; Yarin, Inna; Wagner, Hermann; Gutfreund, Yoram

    2015-01-01

    Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine whether the integration of visual and auditory signals contributes to the process of guiding attention toward salient stimuli. We attached miniature wireless video cameras to barn owls’ heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam’s video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted, and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely directed toward the location of the sound source but rather to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target. PMID:25762905

  8. Comparing capacity coefficient and dual task assessment of visual multitasking workload

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blaha, Leslie M.

    Capacity coefficient analysis could offer a theoretically grounded alternative approach to subjective measures and dual task assessment of cognitive workload. Workload capacity, or workload efficiency, is a human information processing modeling construct defined as the amount of information that can be processed by the visual cognitive system in a specified amount of time. In this paper, I explore the relationship between capacity coefficient analysis of workload efficiency and dual task response time measures. To capture multitasking performance, I examine how the relatively simple assumptions underlying the capacity construct generalize beyond single visual decision-making tasks. The fundamental tools for measuring workload efficiency are the integrated hazard and reverse hazard functions of response times, which are defined by log transforms of the response time distribution. These functions are used in the capacity coefficient analysis to provide a functional assessment of the amount of work completed by the cognitive system over the entire range of response times. For the study of visual multitasking, capacity coefficient analysis enables a comparison of visual information throughput as the number of tasks increases from one to two to any number of simultaneous tasks. I illustrate the use of capacity coefficients for visual multitasking on sample data from dynamic multitasking in the modified Multi-attribute Task Battery.
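    As a concrete illustration of the hazard-function tools just described, below is a minimal sketch of the redundant-target (OR) capacity coefficient, C(t) = H_dual(t) / (H_A(t) + H_B(t)), with the integrated hazard estimated as -log of the empirical survivor function. The simulated response times are placeholders, not Multi-attribute Task Battery data:

    ```python
    import numpy as np

    def integrated_hazard(rts, t):
        """Integrated hazard H(t) = -log S(t), with S the empirical
        survivor function of the response times."""
        surv = np.array([(np.asarray(rts) > ti).mean() for ti in t])
        return -np.log(np.clip(surv, 1e-6, 1.0))   # keep the log finite

    def capacity_or(rt_dual, rt_a, rt_b, t):
        """C(t) = H_dual / (H_a + H_b); C(t) = 1 marks unlimited capacity,
        C(t) < 1 limited capacity as workload increases."""
        return integrated_hazard(rt_dual, t) / (
            integrated_hazard(rt_a, t) + integrated_hazard(rt_b, t))

    rng = np.random.default_rng(1)
    rt_a = rng.gamma(5, 0.10, 500)                 # single-task A RTs (s)
    rt_b = rng.gamma(5, 0.11, 500)                 # single-task B RTs (s)
    rt_dual = np.minimum(rng.gamma(5, 0.10, 500),  # dual-task RTs
                         rng.gamma(5, 0.11, 500))
    t = np.quantile(np.concatenate([rt_a, rt_b, rt_dual]),
                    np.linspace(0.1, 0.9, 9))      # evaluate inside the data range
    print(np.round(capacity_or(rt_dual, rt_a, rt_b, t), 2))
    ```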

  9. A measure for assessing the effects of audiovisual speech integration.

    PubMed

    Altieri, Nicholas; Townsend, James T; Wenger, Michael J

    2014-06-01

    We propose a measure of audiovisual speech integration that takes into account accuracy and response times. This measure should prove beneficial for researchers investigating multisensory speech recognition, since it relates to normal-hearing and aging populations. As an example, age-related sensory decline influences both the rate at which one processes information and the ability to utilize cues from different sensory modalities. Our function assesses integration when both auditory and visual information are available, by comparing performance on these audiovisual trials with theoretical predictions for performance under the assumptions of parallel, independent self-terminating processing of single-modality inputs. We provide example data from an audiovisual identification experiment and discuss applications for measuring audiovisual integration skills across the life span.
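    The benchmark invoked here can be written down directly: under parallel, independent, self-terminating processing of the single-modality inputs, the predicted audiovisual response-time CDF is F_AV(t) = F_A(t) + F_V(t) - F_A(t)F_V(t). A sketch of that prediction follows, on simulated RTs; the authors' full measure also folds in accuracy, which this omits:

    ```python
    import numpy as np

    def ecdf(rts, t):
        """Empirical CDF of response times evaluated at times t."""
        return np.array([(np.asarray(rts) <= ti).mean() for ti in t])

    def parallel_independent_prediction(rt_aud, rt_vis, t):
        """Predicted audiovisual CDF under parallel, independent,
        self-terminating processing: F_a + F_v - F_a * F_v."""
        fa, fv = ecdf(rt_aud, t), ecdf(rt_vis, t)
        return fa + fv - fa * fv

    rng = np.random.default_rng(0)
    t = np.linspace(0.2, 1.5, 27)
    rt_a = rng.gamma(6, 0.12, 400)      # simulated auditory-only RTs (s)
    rt_v = rng.gamma(6, 0.13, 400)      # simulated visual-only RTs (s)
    # Observed audiovisual CDFs above this curve would suggest integration
    # beyond what independent unisensory channels can produce.
    print(np.round(parallel_independent_prediction(rt_a, rt_v, t)[::6], 2))
    ```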

  10. Speech Cues Contribute to Audiovisual Spatial Integration

    PubMed Central

    Bishop, Christopher W.; Miller, Lee M.

    2011-01-01

    Speech is the most important form of human communication, but ambient sounds and competing talkers often degrade its acoustics. Fortunately the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well-established psychophysical phenomena, the McGurk effect and the ventriloquist’s illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral ‘what’ and dorsal ‘where’ pathways. PMID:21909378

  11. Mediator infrastructure for information integration and semantic data integration environment for biomedical research.

    PubMed

    Grethe, Jeffrey S; Ross, Edward; Little, David; Sanders, Brian; Gupta, Amarnath; Astakhov, Vadim

    2009-01-01

    This paper presents current progress in the development of a semantic data integration environment that is part of the Biomedical Informatics Research Network (BIRN; http://www.nbirn.net) project. BIRN is sponsored by the National Center for Research Resources (NCRR), a component of the National Institutes of Health (NIH). A goal is the development of a cyberinfrastructure for biomedical research that supports advanced data acquisition, data storage, data management, data integration, data mining, data visualization, and other computing and information processing services over the Internet. Each participating institution maintains storage of its experimental or computationally derived data. A mediator-based data integration system performs semantic integration over the databases to enable researchers to perform analyses based on larger and broader datasets than would be available from any single institution's data. This paper describes a recent revision of the system architecture, implementation, and capabilities of the semantically based data integration environment for BIRN.
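    The mediator pattern described here can be sketched in miniature: a common query is rewritten against each source's local schema and the results are merged into one answer. The schemas and tables below are invented stand-ins, far simpler than BIRN's actual semantic mediator:

    ```python
    import sqlite3

    # Two toy "institutional" databases with different local schemas.
    site_a = sqlite3.connect(":memory:")
    site_a.execute("CREATE TABLE scans (subj TEXT, modality TEXT)")
    site_a.execute("INSERT INTO scans VALUES ('s01', 'MRI'), ('s02', 'PET')")

    site_b = sqlite3.connect(":memory:")
    site_b.execute("CREATE TABLE imaging (participant TEXT, scan_type TEXT)")
    site_b.execute("INSERT INTO imaging VALUES ('s03', 'MRI')")

    # The mediator maps each local schema onto a shared (mediated) schema
    # and unions the results, so analyses can span both collections.
    SOURCE_MAPPINGS = [
        (site_a, "SELECT subj, modality FROM scans WHERE modality = ?"),
        (site_b, "SELECT participant, scan_type FROM imaging WHERE scan_type = ?"),
    ]

    def mediated_query(modality):
        rows = []
        for conn, sql in SOURCE_MAPPINGS:
            rows.extend(conn.execute(sql, (modality,)).fetchall())
        return rows  # unified (subject, modality) tuples

    print(mediated_query("MRI"))   # -> [('s01', 'MRI'), ('s03', 'MRI')]
    ```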

  12. Learning in the Middle School Earth Science Classroom: Students Conceptually Integrate New Knowledge Using Intelligent Laserdiscs.

    ERIC Educational Resources Information Center

    Freitag, Patricia K.; Abegg, Gerald L.

    A study was designed to describe how middle school students select, link, and determine relationships between textual and visual information. Fourteen authoring groups were formed from both eighth-grade earth science classes of one veteran teacher in one school. Each group was challenged to produce an informative interactive laservideodisc project…

  13. 76 FR 54225 - Draft Toxicological Review of 1,4-Dioxane: In Support of Summary Information on the Integrated...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-31

    .../003). New studies regarding the toxicity of 1,4-dioxane through the inhalation route of exposure are... (U.S. EPA, 2010). These studies have been incorporated into the previously posted assessment for... information). When you register, please indicate if you will need audio-visual equipment (e.g., laptop...

  14. An Active System for Visually-Guided Reaching in 3D across Binocular Fixations

    PubMed Central

    2014-01-01

    Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. Motor information is coded in egocentric coordinates obtained from the allocentric representation of space (in terms of disparity), which is in turn generated from the egocentric representation of the visual information (image coordinates). In this way, the different aspects of visuomotor coordination are integrated: an active vision system composed of two vergent cameras; a module for 2D binocular disparity estimation based on local estimation of phase differences through a bank of Gabor filters; and a robotic actuator that performs the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295
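    The phase-based disparity module lends itself to a compact sketch: corresponding left/right scanlines are filtered with a complex Gabor, and disparity is recovered from the local phase difference as d = Δφ / (2πf). Filter parameters and the test signal below are illustrative, not the paper's tuned filter bank:

    ```python
    import numpy as np

    def gabor_response(signal, freq, sigma=8.0):
        """Complex 1-D Gabor filter response along an image scanline."""
        x = np.arange(-3 * sigma, 3 * sigma + 1)
        kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * x)
        return np.convolve(signal, kernel, mode="same")

    def phase_disparity(left_row, right_row, freq=0.1):
        """Disparity from the local phase difference of the left/right
        Gabor responses: d = delta_phi / (2 * pi * freq)."""
        dphi = np.angle(gabor_response(left_row, freq) *
                        np.conj(gabor_response(right_row, freq)))
        return dphi / (2 * np.pi * freq)

    # A pattern shifted by 3 pixels should yield a disparity near 3.
    rng = np.random.default_rng(2)
    left = rng.standard_normal(256)
    right = np.roll(left, 3)
    print(np.median(phase_disparity(left, right)))
    ```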

  15. Cognitive-motor integration deficits in young adult athletes following concussion.

    PubMed

    Brown, Jeffrey A; Dalecki, Marc; Hughes, Cindy; Macpherson, Alison K; Sergio, Lauren E

    2015-01-01

    The ability to perform visually-guided motor tasks requires the transformation of visual information into programmed motor outputs. When the guiding visual information does not align spatially with the motor output, the brain processes rules to integrate the information for an appropriate motor response. Here, we look at how performance on such tasks is affected in young adult athletes with concussion history. Participants displaced a cursor from a central to peripheral targets on a vertical display by sliding their finger along a touch sensitive screen in one of two spatial planes. The addition of a memory component, along with variations in cursor feedback, increased task complexity across conditions. Significant main effects between participants with concussion history and healthy controls without concussion history were observed in timing and accuracy measures. Importantly, the deficits were distinctly more pronounced for participants with concussion history compared to healthy controls, especially when the brain had to control movements having two levels of decoupling between vision and action. A discriminant analysis correctly classified athletes with a history of concussion based on task performance with an accuracy of 94%, despite the majority of these athletes being rated asymptomatic by current standards. These findings correspond to our previous work with adults at risk of developing dementia, and support the use of cognitive-motor integration as an enhanced assessment tool for those who may have mild brain dysfunction. Such a task may provide a more sensitive metric of performance relevant to daily function than what is currently in use, to assist in return to play/work/learn decisions.
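    The classification step reported above rests on a discriminant analysis over task-performance features. A sketch of that kind of analysis on simulated data follows; the features, group means, and scikit-learn estimator are assumptions for illustration, not the study's dataset or code:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Hypothetical per-participant features from a decoupled reaching task:
    # [reaction time (ms), movement time (ms), path accuracy].
    rng = np.random.default_rng(3)
    controls = rng.normal([350, 600, 0.95], [30, 60, 0.02], (30, 3))
    concussed = rng.normal([410, 700, 0.88], [35, 70, 0.04], (30, 3))
    X = np.vstack([controls, concussed])
    y = np.array([0] * 30 + [1] * 30)   # 0 = control, 1 = concussion history

    # Cross-validated classification accuracy of the discriminant model.
    lda = LinearDiscriminantAnalysis()
    print(cross_val_score(lda, X, y, cv=10).mean())
    ```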

  16. Automatic frame-centered object representation and integration revealed by iconic memory, visual priming, and backward masking.

    PubMed

    Lin, Zhicheng; He, Sheng

    2012-10-25

    Object identities ("what") and their spatial locations ("where") are processed in distinct pathways in the visual system, raising the question of how the what and where information is integrated. Because of object motions and eye movements, the retina-based representations are unstable, necessitating nonretinotopic representation and integration. A potential mechanism is to code and update objects according to their reference frames (i.e., frame-centered representation and integration). To isolate frame-centered processes, in a frame-to-frame apparent motion configuration, we (a) presented two preceding or trailing objects on the same frame, equidistant from the target on the other frame, to control for object-based (frame-based) and space-based effects, and (b) manipulated the target's relative location within its frame to probe the frame-centered effect. We show that iconic memory, visual priming, and backward masking depend on objects' relative frame locations, orthogonal to the retinotopic coordinate. These findings not only reveal that iconic memory, visual priming, and backward masking can be nonretinotopic but also demonstrate that these processes are automatically constrained by contextual frames through a frame-centered mechanism. Thus, object representation is robustly and automatically coupled to its reference frame and continuously updated through a frame-centered, location-specific mechanism. These findings lead to an object cabinet framework, in which objects ("files") within the reference frame ("cabinet") are orderly coded relative to the frame.

  17. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition

    PubMed Central

    Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua

    2015-01-01

    Humans can easily understand other people’s actions through visual systems, while computers cannot. Therefore, a new bio-inspired computational model aimed at automatic action recognition is proposed in this paper. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatiotemporal correlative Gabor filters is used to model the dynamic properties of the classical receptive fields of V1 simple cells tuned to different speeds and orientations, for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field, caused by lateral connections of spiking neuron networks in V1, we propose a surround suppression operator to further process the spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, to represent the human action, we consider a characteristic neural code: a mean motion map based on analysis of spike trains generated by spiking neurons. Experimental evaluation on publicly available action datasets and comparison with state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
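    A minimal sketch of such a spatiotemporal Gabor bank follows; the sizes, frequencies, and speeds are illustrative rather than the paper's fitted V1 parameters:

    ```python
    import numpy as np

    def st_gabor(size=15, frames=7, freq=0.2, theta=0.0, speed=1.0, sigma=3.0):
        """3-D spatiotemporal Gabor kernel tuned to orientation `theta`
        and drift `speed`, a common model of V1 simple-cell receptive
        fields for motion detection."""
        r = np.arange(size) - size // 2
        t = np.arange(frames) - frames // 2
        X, Y, T = np.meshgrid(r, r, t, indexing="ij")
        # Rotate space, then let the carrier drift over time.
        Xr = X * np.cos(theta) + Y * np.sin(theta)
        envelope = np.exp(-(X**2 + Y**2) / (2 * sigma**2)
                          - T**2 / (2 * (frames / 4)**2))
        carrier = np.cos(2 * np.pi * freq * (Xr - speed * T))
        return envelope * carrier

    # A bank over orientations and speeds, applied frame-wise by correlation.
    bank = [st_gabor(theta=th, speed=v)
            for th in np.linspace(0, np.pi, 4, endpoint=False)
            for v in (0.5, 1.0, 2.0)]
    print(len(bank), bank[0].shape)   # 12 kernels of shape (15, 15, 7)
    ```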

  18. The relationship between level of autistic traits and local bias in the context of the McGurk effect

    PubMed Central

    Ujiie, Yuta; Asai, Tomohisa; Wakabayashi, Akio

    2015-01-01

    The McGurk effect is a well-known illustration that demonstrates the influence of visual information on hearing in the context of speech perception. Some studies have reported that individuals with autism spectrum disorder (ASD) display abnormal processing of audio-visual speech integration, while other studies showed contradictory results. Based on the dimensional model of ASD, we administered two analog studies to examine the link between level of autistic traits, as assessed by the Autism Spectrum Quotient (AQ), and the McGurk effect among a sample of university students. In the first experiment, we found that autistic traits correlated negatively with fused (McGurk) responses. Then, we manipulated presentation types of visual stimuli to examine whether the local bias toward visual speech cues modulated individual differences in the McGurk effect. The presentation included four types of visual images, comprising no image, mouth only, mouth and eyes, and full face. The results revealed that global facial information facilitates the influence of visual speech cues on McGurk stimuli. Moreover, individual differences between groups with low and high levels of autistic traits appeared when the full-face visual speech cue with an incongruent voice condition was presented. These results suggest that individual differences in the McGurk effect might be due to a weak ability to process global facial information in individuals with high levels of autistic traits. PMID:26175705

  19. Novel Models of Visual Topographic Map Alignment in the Superior Colliculus

    PubMed Central

    El-Ghazawi, Tarek A.; Triplett, Jason W.

    2016-01-01

    The establishment of precise neuronal connectivity during development is critical for sensing the external environment and informing appropriate behavioral responses. In the visual system, many connections are organized topographically, which preserves the spatial order of the visual scene. The superior colliculus (SC) is a midbrain nucleus that integrates visual inputs from the retina and primary visual cortex (V1) to regulate goal-directed eye movements. In the SC, topographically organized inputs from the retina and V1 must be aligned to facilitate integration. Previously, we showed that retinal input instructs the alignment of V1 inputs in the SC in a manner dependent on spontaneous neuronal activity; however, the mechanism of activity-dependent instruction remains unclear. To begin to address this gap, we developed two novel computational models of visual map alignment in the SC that incorporate distinct activity-dependent components. First, a Correlational Model assumes that V1 inputs achieve alignment with established retinal inputs through simple correlative firing mechanisms. A second Integrational Model assumes that V1 inputs contribute to the firing of SC neurons during alignment. Both models accurately replicate in vivo findings in wild type, transgenic and combination mutant mouse models, suggesting either activity-dependent mechanism is plausible. In silico experiments reveal distinct behaviors in response to weakening retinal drive, providing insight into the nature of the system governing map alignment depending on the activity-dependent strategy utilized. Overall, we describe novel computational frameworks of visual map alignment that accurately model many aspects of the in vivo process and propose experiments to test them. PMID:28027309
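    The Correlational Model's central assumption, that V1 axons strengthen onto SC neurons with which they fire together under the established retinal map, can be caricatured with a few lines of Hebbian learning. All sizes, rates, and activity profiles here are invented; this is a gloss on the idea, not the authors' published model:

    ```python
    import numpy as np

    # Toy correlational alignment: V1 axons strengthen onto the SC
    # neurons whose retinally driven activity best matches their own.
    rng = np.random.default_rng(4)
    n_v1, n_sc, steps, lr = 20, 20, 3000, 0.01
    W = rng.random((n_sc, n_v1)) * 0.1          # V1 -> SC weights

    for _ in range(steps):
        src = rng.integers(n_v1)                # locus of a retinal wave
        v1 = np.exp(-0.5 * ((np.arange(n_v1) - src) / 2.0) ** 2)
        # Retinotopically matched SC activity (the established retinal map).
        sc = np.exp(-0.5 * ((np.arange(n_sc) - src) / 2.0) ** 2)
        W += lr * np.outer(sc, v1)              # Hebbian co-activation
        W /= W.sum(axis=1, keepdims=True)       # normalization for stability

    # Each SC neuron's strongest V1 input should now be topographic.
    print(np.argmax(W, axis=1))                 # expect roughly 0..19
    ```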

  20. Workshop proceedings: Information Systems for Space Astrophysics in the 21st Century, volume 1

    NASA Technical Reports Server (NTRS)

    Cutts, James (Editor); Ng, Edward (Editor)

    1991-01-01

    The Astrophysical Information Systems Workshop was one of the three Integrated Technology Planning workshops. Its objectives were to develop an understanding of future mission requirements for information systems, the potential role of technology in meeting these requirements, and the areas in which NASA investment might have the greatest impact. Workshop participants were briefed on the astrophysical mission set with an emphasis on those missions that drive information systems technology, the existing NASA space-science operations infrastructure, and the ongoing and planned NASA information systems technology programs. Program plans and recommendations were prepared in five technical areas: Mission Planning and Operations; Space-Borne Data Processing; Space-to-Earth Communications; Science Data Systems; and Data Analysis, Integration, and Visualization.

  1. Spatial attention enhances the selective integration of activity from area MT.

    PubMed

    Masse, Nicolas Y; Herrington, Todd M; Cook, Erik P

    2012-09-01

    Distinguishing which of the many proposed neural mechanisms of spatial attention actually underlies behavioral improvements in visually guided tasks has been difficult. One attractive hypothesis is that attention allows downstream neural circuits to selectively integrate responses from the most informative sensory neurons. This would allow behavioral performance to be based on the highest-quality signals available in visual cortex. We examined this hypothesis by asking how spatial attention affects both the stimulus sensitivity of middle temporal (MT) neurons and their corresponding correlation with behavior. Analyzing a data set pooled from two experiments involving four monkeys, we found that spatial attention did not appreciably affect either the stimulus sensitivity of the neurons or the correlation between their activity and behavior. However, for those sessions in which there was a robust behavioral effect of attention, focusing attention inside the neuron's receptive field significantly increased the correlation between these two metrics, an indication of selective integration. These results suggest that, similar to mechanisms proposed for the neural basis of perceptual learning, the behavioral benefits of focusing spatial attention are attributable to selective integration of neural activity from visual cortical areas by their downstream targets.

  2. Comparison of Amplitude-Integrated EEG and Conventional EEG in a Cohort of Premature Infants.

    PubMed

    Meledin, Irina; Abu Tailakh, Muhammad; Gilat, Shlomo; Yogev, Hagai; Golan, Agneta; Novack, Victor; Shany, Eilon

    2017-03-01

    To compare amplitude-integrated EEG (aEEG) and conventional EEG (EEG) activity in premature neonates. Biweekly aEEG and EEG were simultaneously recorded in a cohort of infants born at less than 34 weeks' gestation. aEEG recordings were visually assessed for lower and upper border amplitude and bandwidth. EEG recordings were compressed for visual evaluation of continuity and assessed using signal processing software for interburst intervals (IBI) and frequency amplitudes. Ten-minute segments of aEEG and EEG indices were compared using regression analysis. A total of 189 recordings from 67 infants were made, from which 1697 aEEG/EEG pairs of 10-minute segments were assessed. Good concordance was found between the two methods for visual assessment of continuity. EEG IBI and alpha and theta frequency amplitudes were negatively correlated with the aEEG lower border, while conceptional age (CA) was positively correlated with the aEEG lower border (P < .001). IBI and all frequency amplitudes were positively correlated with the upper aEEG border (P ≤ .001). CA was negatively correlated with the aEEG span, while IBI and alpha, beta, and theta frequency amplitudes were positively correlated with the aEEG span. Important information is retained and integrated in the transformation of premature neonatal EEG to aEEG. aEEG recordings in high-risk premature neonates reliably reflect EEG background information related to continuity and amplitude.
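    As one illustration of the signal processing involved, here is a minimal interburst-interval detector operating on a smoothed EEG envelope; the amplitude threshold, smoothing window, and minimum gap are illustrative values, not the study's software settings:

    ```python
    import numpy as np

    def interburst_intervals(eeg, fs, amp_thresh=15e-6, min_gap=2.0):
        """Detect interburst intervals: stretches where the smoothed EEG
        envelope stays below `amp_thresh` (volts) for at least `min_gap`
        seconds."""
        win = int(0.5 * fs)                       # 0.5 s moving average
        env = np.convolve(np.abs(eeg), np.ones(win) / win, mode="same")
        quiet = env < amp_thresh
        ibis, start = [], None
        for i, q in enumerate(quiet):
            if q and start is None:
                start = i
            elif not q and start is not None:
                if (i - start) / fs >= min_gap:
                    ibis.append((start / fs, i / fs))
                start = None
        if start is not None and (quiet.size - start) / fs >= min_gap:
            ibis.append((start / fs, quiet.size / fs))   # flush the tail
        return ibis

    fs = 256
    t = np.arange(0, 30, 1 / fs)
    eeg = 5e-6 * np.random.randn(t.size)                  # suppressed background
    eeg[:5 * fs] += 40e-6 * np.sin(2 * np.pi * 2 * t[:5 * fs])   # one burst
    print(interburst_intervals(eeg, fs))                  # -> [(~5.0, 30.0)]
    ```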

  3. Immunization registries in the EMR Era

    PubMed Central

    Stevens, Lindsay A.; Palma, Jonathan P.; Pandher, Kiran K.; Longhurst, Christopher A.

    2013-01-01

    Background: The CDC established a national objective to create population-based tracking of immunizations through regional and statewide registries nearly 2 decades ago, and these registries have increased coverage rates and reduced duplicate immunizations. With increased adoption of commercial electronic medical records (EMR), some institutions have used unidirectional links to send immunization data to designated registries. However, access to these registries within a vendor EMR has not been previously reported. Purpose: To develop a visually integrated interface between an EMR and a statewide immunization registry at a previously non-reporting hospital, and to assess subsequent changes in provider use and satisfaction. Methods: A group of healthcare providers were surveyed before and after implementation of the new interface. The surveys addressed access to the California Immunization Registry (CAIR) and satisfaction with the availability of immunization information. Information Technology (IT) teams developed a “smart-link” within the electronic patient chart that provides a single-click interface for visual integration of data within the CAIR database. Results: Use of the tool has increased in the months since its initiation, and over 20,000 new immunizations have been exported successfully to CAIR since the hospital began sharing data with the registry. Survey data suggest that providers find this tool improves workflow and overall satisfaction with the availability of immunization data (p = 0.009). Conclusions: Visual integration of external registries into a vendor EMR system is feasible and improves provider satisfaction and registry reporting. PMID:23923096

  4. Bayesian-based integration of multisensory naturalistic perithreshold stimuli.

    PubMed

    Regenbogen, Christina; Johansson, Emilia; Andersson, Patrik; Olsson, Mats J; Lundström, Johan N

    2016-07-29

    Most studies exploring multisensory integration have used clearly perceivable stimuli. According to the principle of inverse effectiveness, the added neural and behavioral benefit of integrating clear stimuli is reduced in comparison to stimuli with degraded and less salient unisensory information. Traditionally, speed and accuracy measures have been analyzed separately with few studies merging these to gain an understanding of speed-accuracy trade-offs in multisensory integration. In two separate experiments, we assessed multisensory integration of naturalistic audio-visual objects consisting of individually-tailored perithreshold dynamic visual and auditory stimuli, presented within a multiple-choice task, using a Bayesian Hierarchical Drift Diffusion Model that combines response time and accuracy. For both experiments, unisensory stimuli were degraded to reach a 75% identification accuracy level for all individuals and stimuli to promote multisensory binding. In Experiment 1, we subsequently presented uni- and their respective bimodal stimuli followed by a 5-alternative-forced-choice task. In Experiment 2, we controlled for low-level integration and attentional differences. Both experiments demonstrated significant superadditive multisensory integration of bimodal perithreshold dynamic information. We present evidence that the use of degraded sensory stimuli may provide a link between previous findings of inverse effectiveness on a single neuron level and overt behavior. We further suggest that a combined measure of accuracy and reaction time may be a more valid and holistic approach of studying multisensory integration and propose the application of drift diffusion models for studying behavioral correlates as well as brain-behavior relationships of multisensory integration. Copyright © 2015 Elsevier Ltd. All rights reserved.
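    The drift diffusion machinery can be made concrete with a forward simulation. The authors fitted a hierarchical Bayesian version; this sketch only shows why a higher drift rate, such as one produced by a fused audiovisual stimulus, yields jointly faster and more accurate decisions:

    ```python
    import numpy as np

    def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, t0=0.3,
                     rng=np.random.default_rng(5)):
        """Simulate one trial of a two-boundary drift diffusion process;
        returns (response time in s, correct boundary reached?)."""
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return t0 + t, x > 0

    # Higher drift (e.g., a redundant audiovisual stimulus) should give
    # faster and more accurate decisions than a degraded unisensory one.
    uni = [simulate_ddm(drift=0.8) for _ in range(500)]
    av = [simulate_ddm(drift=1.6) for _ in range(500)]
    for name, trials in [("unisensory", uni), ("audiovisual", av)]:
        rts, acc = zip(*trials)
        print(f"{name}: mean RT {np.mean(rts):.2f} s, accuracy {np.mean(acc):.2f}")
    ```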

  5. Problem representation and mathematical problem solving of students of varying math ability.

    PubMed

    Krawec, Jennifer L

    2014-01-01

    The purpose of this study was to examine differences in math problem solving among students with learning disabilities (LD, n = 25), low-achieving students (LA, n = 30), and average-achieving students (AA, n = 29). The primary interest was to analyze the processes students use to translate and integrate problem information while solving problems. Paraphrasing, visual representation, and problem-solving accuracy were measured in eighth grade students using a researcher-modified version of the Mathematical Processing Instrument. Results indicated that both students with LD and LA students struggled with processing but that students with LD were significantly weaker than their LA peers in paraphrasing relevant information. Paraphrasing and visual representation accuracy each accounted for a statistically significant amount of variance in problem-solving accuracy. Finally, the effect of visual representation of relevant information on problem-solving accuracy was dependent on ability; specifically, for students with LD, generating accurate visual representations was more strongly related to problem-solving accuracy than for AA students. Implications for instruction for students with and without LD are discussed.

  6. Exploring the Link between Visual Perception, Visual-Motor Integration, and Reading in Normal Developing and Impaired Children using DTVP-2.

    PubMed

    Bellocchi, Stéphanie; Muneaux, Mathilde; Huau, Andréa; Lévêque, Yohana; Jover, Marianne; Ducrot, Stéphanie

    2017-08-01

    Reading is known to be primarily a linguistic task. However, to successfully decode written words, children also need to develop good visual-perception skills. Furthermore, motor skills are implicated in letter recognition and reading acquisition. Three studies have been designed to determine the link between reading, visual perception, and visual-motor integration using the Developmental Test of Visual Perception version 2 (DTVP-2). Study 1 tests how visual perception and visual-motor integration in kindergarten predict reading outcomes in Grade 1 in typically developing children. Study 2 asks whether these skills can serve as clinical markers in dyslexic children (DD). Study 3 determines whether visual-motor integration and motor-reduced visual perception can distinguish DD children according to whether or not they exhibit developmental coordination disorder (DCD). Results showed that phonological awareness and visual-motor integration predicted reading outcomes one year later. The DTVP-2 demonstrated similarities and differences in visual-motor integration and motor-reduced visual perception between children with DD, DCD, and both of these deficits. The DTVP-2 is a suitable tool to investigate links between visual perception, visual-motor integration and reading, and to differentiate the cognitive profiles of children with developmental disabilities (i.e., DD, DCD, and comorbid children). Copyright © 2017 John Wiley & Sons, Ltd.

  7. Visualization tool for human-machine interface designers

    NASA Astrophysics Data System (ADS)

    Prevost, Michael P.; Banda, Carolyn P.

    1991-06-01

    As modern human-machine systems continue to grow in capabilities and complexity, system operators are faced with integrating and managing increased quantities of information. Since many information components are highly related to each other, optimizing the spatial and temporal aspects of presenting information to the operator has become a formidable task for the human-machine interface (HMI) designer. The authors describe a tool in an early stage of development, the Information Source Layout Editor (ISLE). This tool is to be used for information presentation design and analysis; it uses human factors guidelines to assist the HMI designer in the spatial layout of the information required by machine operators to perform their tasks effectively. These human factors guidelines address such areas as the functional and physical relatedness of information sources. By representing these relationships with metaphors such as spring tension, attractors, and repellers, the tool can help designers visualize the complex constraint space and interacting effects of moving displays to various alternate locations. The tool contains techniques for visualizing the relative 'goodness' of a configuration, as well as mechanisms such as optimization vectors to provide guidance toward a more optimal design. Also available is a rule-based design checker to determine compliance with selected human factors guidelines.
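    The spring/attractor/repeller metaphor maps naturally onto a force-directed layout: related sources pull on each other while all pairs push apart. A toy sketch with an invented relatedness matrix and constants follows (generic force direction, not ISLE's actual algorithm):

    ```python
    import numpy as np

    def layout(related, n_iter=500, k_spring=0.05, k_repel=0.5,
               rng=np.random.default_rng(6)):
        """Toy force-directed placement of display elements: functionally
        related sources attract (springs) while all pairs repel."""
        n = related.shape[0]
        pos = rng.random((n, 2))
        for _ in range(n_iter):
            delta = pos[:, None, :] - pos[None, :, :]        # pairwise offsets
            dist = np.linalg.norm(delta, axis=-1) + 1e-6
            repel = k_repel * delta / dist[..., None] ** 3    # inverse-square push
            attract = -k_spring * related[..., None] * delta  # spring pull
            pos += (repel + attract).sum(axis=1) * 0.01
        return pos

    # Relatedness matrix for five information sources; 1 = show together.
    R = np.array([[0, 1, 1, 0, 0], [1, 0, 1, 0, 0], [1, 1, 0, 0, 0],
                  [0, 0, 0, 0, 1], [0, 0, 0, 1, 0]], dtype=float)
    print(np.round(layout(R), 2))   # two spatial clusters should emerge
    ```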

  8. Information Management for a Large Multidisciplinary Project

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Randall, Donald P.; Cronin, Catherine K.

    1992-01-01

    In 1989, NASA's Langley Research Center (LaRC) initiated the High-Speed Airframe Integration Research (HiSAIR) Program to develop and demonstrate an integrated environment for high-speed aircraft design using advanced multidisciplinary analysis and optimization procedures. The major goals of this program were to evolve the interactions among disciplines and promote sharing of information, to provide a timely exchange of information among aeronautical disciplines, and to increase the awareness of the effects each discipline has upon other disciplines. LaRC historically has emphasized the advancement of analysis techniques. HiSAIR was founded to synthesize these advanced methods into a multidisciplinary design process emphasizing information feedback among disciplines and optimization. Crucial to the development of such an environment are the definition of the required data exchanges and the methodology for both recording the information and providing the exchanges in a timely manner. These requirements demand extensive use of data management techniques, graphic visualization, and interactive computing. HiSAIR represents the first attempt at LaRC to promote interdisciplinary information exchange on a large scale using advanced data management methodologies combined with state-of-the-art, scientific visualization techniques on graphics workstations in a distributed computing environment. The subject of this paper is the development of the data management system for HiSAIR.

  9. Mixing apples with oranges: Visual attention deficits in schizophrenia.

    PubMed

    Caprile, Claudia; Cuevas-Esteban, Jorge; Ochoa, Susana; Usall, Judith; Navarra, Jordi

    2015-09-01

    Patients with schizophrenia usually present cognitive deficits. We investigated possible anomalies at filtering out irrelevant visual information in this psychiatric disorder. Associations between these anomalies and positive and/or negative symptomatology were also addressed. A group of individuals with schizophrenia and a control group of healthy adults performed a Garner task. In Experiment 1, participants had to rapidly classify visual stimuli according to their colour while ignoring their shape. These two perceptual dimensions are reported to be "separable" by visual selective attention. In Experiment 2, participants classified the width of other visual stimuli while trying to ignore their height. These two visual dimensions are considered as being "integral" and cannot be attended separately. While healthy perceivers were, in Experiment 1, able to exclusively respond to colour, an irrelevant variation in shape increased colour-based reaction times (RTs) in the group of patients. In Experiment 2, RTs when classifying width increased in both groups as a consequence of perceiving a variation in the irrelevant dimension (height). However, this interfering effect was larger in the group of schizophrenic patients than in the control group. Further analyses revealed that these alterations in filtering out irrelevant visual information correlated with positive symptoms in PANSS scale. A possible limitation of the study is the relatively small sample. Our findings suggest the presence of attention deficits in filtering out irrelevant visual information in schizophrenia that could be related to positive symptomatology. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Visual Stimuli Evoked Action Potentials Trigger Rapidly Propagating Dendritic Calcium Transients in the Frog Optic Tectum Layer 6 Neurons.

    PubMed

    Svirskis, Gytis; Baranauskas, Gytis; Svirskiene, Natasa; Tkatch, Tatiana

    2015-01-01

    The superior colliculus in mammals, or the optic tectum in amphibians, is a major visual information processing center responsible for the generation of orienting responses, such as saccades in monkeys or prey-catching and avoidance behavior in frogs. The conserved structure and function of the superior colliculus/optic tectum across distant species such as frogs, birds and monkeys permits rather general conclusions to be drawn after studying a single species. We chose the frog optic tectum because we are able to perform whole-cell voltage-clamp recordings and fluorescence imaging of tectal neurons while they respond to a visual stimulus. In the optic tectum of amphibians, most visual information is processed by pear-shaped neurons possessing long dendritic branches, which receive the majority of synapses originating from the retinal ganglion cells. Since the first step of retinal input integration is performed on these dendrites, it is important to know whether this integration is enhanced by active dendritic properties. We demonstrate that rapid calcium transients coinciding with visual stimulus-evoked action potentials in the somatic recordings can be readily detected up to the fine branches of these dendrites. These transients were blocked by the calcium channel blockers nifedipine and CdCl2, indicating that calcium entered dendrites via voltage-activated L-type calcium channels. The high speed of calcium transient propagation, >300 μm in <10 ms, is consistent with the notion that action potentials, actively propagating along dendrites, open voltage-gated L-type calcium channels, causing rapid calcium concentration transients in the dendrites. We conclude that such activation, by somatic action potentials, of the dendritic voltage-gated calcium channels in close vicinity to the synapses formed by axons of the retinal ganglion cells may facilitate visual information processing in the principal neurons of the frog optic tectum.

  11. OCULUS fire: a command and control system for fire management with crowd sourcing and social media interconnectivity

    NASA Astrophysics Data System (ADS)

    Thomopoulos, Stelios C. A.; Kyriazanos, Dimitris M.; Astyakopoulos, Alkiviadis; Dimitros, Kostantinos; Margonis, Christos; Thanos, Giorgos Konstantinos; Skroumpelou, Katerina

    2016-05-01

    AF3 (Advanced Forest Fire Fighting) is a European FP7 research project that aims to improve the efficiency of current fire-fighting operations and the protection of human lives, the environment and property by developing innovative technologies that ensure the integration between existing and new systems. To reach this objective, the AF3 project focuses on innovative active and passive countermeasures, early detection and monitoring, integrated crisis management and advanced public information channels. OCULUS Fire is the control and command system developed within AF3 as a monitoring, GIS, knowledge extraction and visualization tool. OCULUS Fire includes (a) an interface for real-time updating and reconstruction of maps to enable rerouting based on estimated hazards and risks, (b) processing for dynamic GIS reconstruction and mission re-routing based on the fusion of airborne, satellite, ground and ancillary geolocation data, (c) visualization components for the C2 monitoring system, displaying and managing information arriving from a variety of sources, and (d) a mission and situational awareness module for the OCULUS Fire ground monitoring system, part of an Integrated Crisis Management Information System for ground and ancillary sensors. OCULUS Fire will also process and visualize information from public information channels, social media and mobile applications used by citizens and volunteers. Social networking, community building and crowdsourcing features will enable higher reliability and lower false alarm rates when using such data in the context of safety and security applications.

  12. Musicians have enhanced audiovisual multisensory binding: experience-dependent effects in the double-flash illusion.

    PubMed

    Bidelman, Gavin M

    2016-10-01

    Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., stimuli in which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility for responding to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows (<100 ms) were ~2-3× shorter than nonmusicians' (~200 ms), suggesting more refined multisensory integration and audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.

  13. Agent Architecture for Aviation Data Integration System

    NASA Technical Reports Server (NTRS)

    Kulkarni, Deepak; Wang, Yao; Windrem, May; Patel, Hemil; Wei, Mei

    2004-01-01

    This paper describes the proposed agent-based architecture of the Aviation Data Integration System (ADIS). ADIS is a software system that provides integrated heterogeneous data to support aviation problem-solving activities. Examples of aviation problem-solving activities include engineering troubleshooting, incident and accident investigation, routine flight operations monitoring, safety assessment, maintenance procedure debugging, and training assessment. A wide variety of information is typically referenced when engaging in these activities. Some of this information includes flight recorder data, Automatic Terminal Information Service (ATIS) reports, Jeppesen charts, weather data, air traffic control information, safety reports, and runway visual range data. Such wide-ranging information cannot be found in any single unified information source. Therefore, this information must be actively collected, assembled, and presented in a manner that supports the user's problem-solving activities. This information integration task is non-trivial and presents a variety of technical challenges. ADIS has been developed to perform this task, and it permits integration of weather, RVR, radar data, and Jeppesen charts with flight data. ADIS has been implemented and used by several airlines' FOQA teams. The initial feedback from airlines is that such a system is very useful in FOQA analysis. Based on the feedback from the initial deployment, we are developing a new version of the system that makes further progress toward the goals of our project.

  14. Welcome to health information science and systems.

    PubMed

    Zhang, Yanchun

    2013-01-01

    Health Information Science and Systems is an exciting, new, multidisciplinary journal that aims to use computer science technologies to assist in disease diagnosis, treatment, prediction and monitoring through the modeling, design, development, visualization, integration and management of health related information. These computer science technologies include information systems, web technologies, data mining, image processing, user interaction and interfaces, and sensors and wireless networking, and are applicable to a wide range of health related information including medical data, biomedical data, bioinformatics data, and public health data.

  15. Contextual signals in visual cortex.

    PubMed

    Khan, Adil G; Hofer, Sonja B

    2018-06-05

    Vision is an active process. What we perceive strongly depends on our actions, intentions and expectations. During visual processing, these internal signals therefore need to be integrated with the visual information from the retina. The mechanisms of how this is achieved by the visual system are still poorly understood. Advances in recording and manipulating neuronal activity in specific cell types and axonal projections together with tools for circuit tracing are beginning to shed light on the neuronal circuit mechanisms of how internal, contextual signals shape sensory representations. Here we review recent work, primarily in mice, that has advanced our understanding of these processes, focusing on contextual signals related to locomotion, behavioural relevance and predictions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Parallel Processing Strategies of the Primate Visual System

    PubMed Central

    Nassi, Jonathan J.; Callaway, Edward M.

    2009-01-01

    Incoming sensory information is sent to the brain along modality-specific channels corresponding to the five senses. Each of these channels further parses the incoming signals into parallel streams to provide a compact, efficient input to the brain. Ultimately, these parallel input signals must be elaborated upon and integrated within the cortex to provide a unified and coherent percept. Recent studies in the primate visual cortex have greatly contributed to our understanding of how this goal is accomplished. Multiple strategies including retinal tiling, hierarchical and parallel processing and modularity, defined spatially and by cell type-specific connectivity, are all used by the visual system to recover the rich detail of our visual surroundings. PMID:19352403

  17. BNDB - the Biochemical Network Database.

    PubMed

    Küntzer, Jan; Backes, Christina; Blum, Torsten; Gerasch, Andreas; Kaufmann, Michael; Kohlbacher, Oliver; Lenhof, Hans-Peter

    2007-10-02

    Technological advances in high-throughput techniques and efficient data acquisition methods have resulted in a massive amount of life science data. The data is stored in numerous databases that have been established over the last decades and are essential resources for scientists nowadays. However, the diversity of the databases and the underlying data models make it difficult to combine this information for solving complex problems in systems biology. Currently, researchers typically have to browse several, often highly focused, databases to obtain the required information. Hence, there is a pressing need for more efficient systems for integrating, analyzing, and interpreting these data. The standardization and virtual consolidation of the databases is a major challenge resulting in a unified access to a variety of data sources. We present the Biochemical Network Database (BNDB), a powerful relational database platform, allowing a complete semantic integration of an extensive collection of external databases. BNDB is built upon a comprehensive and extensible object model called BioCore, which is powerful enough to model most known biochemical processes and at the same time easily extensible to be adapted to new biological concepts. Besides a web interface for the search and curation of the data, a Java-based viewer (BiNA) provides a powerful platform-independent visualization and navigation of the data. BiNA uses sophisticated graph layout algorithms for an interactive visualization and navigation of BNDB. BNDB allows a simple, unified access to a variety of external data sources. Its tight integration with the biochemical network library BN++ offers the possibility for import, integration, analysis, and visualization of the data. BNDB is freely accessible at http://www.bndb.org.

  18. Concepts to Support HRP Integration Using Publications and Modeling

    NASA Technical Reports Server (NTRS)

    Mindock, J.; Lumpkins, S.; Shelhamer, M.

    2014-01-01

    Initial efforts are underway to enhance the Human Research Program (HRP)'s identification and support of potential cross-disciplinary scientific collaborations. To increase the emphasis on integration in HRP's science portfolio management, concepts are being explored through the development of a set of tools. These tools are intended to enable modeling, analysis, and visualization of the state of the human system in the spaceflight environment; HRP's current understanding of that state with an indication of uncertainties; and how that state changes due to HRP programmatic progress and design reference mission definitions. In this talk, we will discuss proof-of-concept work performed using a subset of publications captured in the HRP publications database. The publications were tagged in the database with words representing factors influencing health and performance in spaceflight, as well as with words representing the risks HRP research is reducing. Analysis was performed on the publication tag data to identify relationships between factors and between risks. Network representations were then created as one type of visualization of these relationships. This enables future analyses of the structure of the networks based on results from network theory. Such analyses can provide insights into HRP's current human system knowledge state as informed by the publication data. The network structure analyses can also elucidate potential improvements by identifying network connections to establish or strengthen for maximized information flow. The relationships identified in the publication data were subsequently used as inputs to a model captured in the Systems Modeling Language (SysML), which functions as a repository for relationship information to be gleaned from multiple sources. Example network visualization outputs from a simple SysML model were then also created to compare to the visualizations based on the publication data only. We will also discuss ideas for building upon this proof-of-concept work to further support an integrated approach to human spaceflight risk reduction.
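    The relationship-extraction step described here amounts to building a tag co-occurrence network. A sketch using networkx follows, with invented publication tags standing in for entries from the HRP publications database:

    ```python
    import itertools
    import networkx as nx

    # Hypothetical tagged publications: each maps to the spaceflight
    # factors it addresses (tags invented for illustration).
    pubs = {
        "pub1": {"radiation", "bone loss"},
        "pub2": {"radiation", "sleep", "workload"},
        "pub3": {"sleep", "workload"},
        "pub4": {"bone loss", "exercise"},
    }

    # Factors co-tagged on the same publication become weighted edges.
    G = nx.Graph()
    for tags in pubs.values():
        for a, b in itertools.combinations(sorted(tags), 2):
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1
            else:
                G.add_edge(a, b, weight=1)

    # Centrality highlights factors that could bridge disciplines.
    print(nx.degree_centrality(G))
    ```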

  19. A Real-Time Construction Safety Monitoring System for Hazardous Gas Integrating Wireless Sensor Network and Building Information Modeling Technologies.

    PubMed

    Cheung, Weng-Fong; Lin, Tzu-Hsuan; Lin, Yu-Cheng

    2018-02-02

    In recent years, many studies have focused on the application of advanced technology to improve construction safety management. A Wireless Sensor Network (WSN), one of the key technologies in Internet of Things (IoT) development, enables objects and devices to sense and communicate environmental conditions; Building Information Modeling (BIM), a revolutionary technology in construction, integrates a database and geometry into a digital model that provides visualization across the entire construction lifecycle. This paper integrates BIM and WSN into a unique system which enables the construction site to visually monitor the safety status via a spatial, colored interface and remove any hazardous gas automatically. Many wireless sensor nodes were placed on an underground construction site to collect hazardous gas levels and environmental condition (temperature and humidity) data; in any region where an abnormal status is detected, the BIM model alerts on the region, and an on-site alarm and ventilator start automatically to warn workers and remove the hazard. The proposed system can greatly enhance the efficiency of construction safety management and provide important reference information for rescue tasks. Finally, a case study demonstrates the applicability of the proposed system, and the practical benefits, limitations, conclusions, and suggestions are summarized for further applications.
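    A minimal sketch of the monitoring logic follows; the node names, gas thresholds, and alert action are placeholders, and the real system's BIM recoloring and ventilator control are only indicated in comments:

    ```python
    import random

    # Illustrative thresholds (ppm); actual limits come from safety codes.
    THRESHOLDS = {"CO": 35.0, "CH4": 5000.0, "H2S": 10.0}

    def read_sensor(node_id, gas):
        """Stand-in for a WSN node reading; a real deployment would pull
        this over the radio link from the field node."""
        return random.uniform(0, 1.2) * THRESHOLDS[gas]

    def poll_and_alert(nodes):
        """One polling pass: flag any zone whose reading exceeds its limit.
        In the integrated system this is where the BIM model would recolor
        the zone and the ventilator would be switched on."""
        for node in nodes:
            for gas, limit in THRESHOLDS.items():
                level = read_sensor(node, gas)
                if level > limit:
                    print(f"[ALERT] {node}: {gas} at {level:.0f} ppm")

    poll_and_alert(["zone-A1", "zone-B2"])
    ```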

  20. Attenuated audiovisual integration in middle-aged adults in a discrimination task.

    PubMed

    Yang, Weiping; Ren, Yanna

    2018-02-01

    Numerous studies have focused on the diversity of audiovisual integration between younger and older adults. However, consecutive trends in audiovisual integration throughout life are still unclear. In the present study, to clarify the characteristics of audiovisual integration in middle-aged adults, we instructed younger and middle-aged adults to perform an auditory/visual stimulus discrimination experiment. Randomized streams of unimodal auditory (A), unimodal visual (V) or audiovisual stimuli were presented in the left or right hemispace of the central fixation point, and subjects were instructed to respond to the target stimuli rapidly and accurately. Our results demonstrated that the responses of middle-aged adults to all unimodal and bimodal stimuli were significantly slower than those of younger adults (p < 0.05). Audiovisual integration was markedly delayed (onset time 360 ms) and weaker (peak 3.97%) in middle-aged adults than in younger adults (onset time 260 ms, peak 11.86%). The results suggested that audiovisual integration was attenuated in middle-aged adults and further confirmed age-related decline in information processing.

  1. Visual adaptation and novelty responses in the superior colliculus

    PubMed Central

    Boehnke, Susan E.; Berg, David J.; Marino, Robert M.; Baldi, Pierre F.; Itti, Laurent; Munoz, Douglas P.

    2011-01-01

    The brain's ability to ignore repeating, often redundant, information while enhancing novel information processing is paramount to survival. When stimuli are repeatedly presented, the response of visually-sensitive neurons decreases in magnitude, i.e. neurons adapt or habituate, although the mechanism is not yet known. We monitored activity of visual neurons in the superior colliculus (SC) of rhesus monkeys who actively fixated while repeated visual events were presented. We dissociated adaptation from habituation as mechanisms of the response decrement by using a Bayesian model of adaptation, and by employing a paradigm including rare trials that included an oddball stimulus that was either brighter or dimmer. If the mechanism is adaptation, response recovery should be seen only for the brighter stimulus; if habituation, response recovery (‘dishabituation’) should be seen for both the brighter and dimmer stimulus. We observed a reduction in the magnitude of the initial transient response and an increase in response onset latency with stimulus repetition for all visually responsive neurons in the SC. Response decrement was successfully captured by the adaptation model which also predicted the effects of presentation rate and rare luminance changes. However, in a subset of neurons with sustained activity to visual stimuli, a novelty signal akin to dishabituation was observed late in the visual response profile to both brighter and dimmer stimuli and was not captured by the model. This suggests that SC neurons integrate both rapidly discounted information about repeating stimuli and novelty information about oddball events, to support efficient selection in a cluttered dynamic world. PMID:21864319

  2. Integration of information and volume visualization for analysis of cell lineage and gene expression during embryogenesis

    NASA Astrophysics Data System (ADS)

    Cedilnik, Andrej; Baumes, Jeffrey; Ibanez, Luis; Megason, Sean; Wylie, Brian

    2008-01-01

    Dramatic technological advances in the field of genomics have made it possible to sequence the complete genomes of many different organisms. With this overwhelming amount of data at hand, biologists are now confronted with the challenge of understanding the function of the many different elements of the genome. One of the best places to start gaining insight on the mechanisms by which the genome controls an organism is the study of embryogenesis. There are multiple and inter-related layers of information that must be established in order to understand how the genome controls the formation of an organism. One is cell lineage which describes how patterns of cell division give rise to different parts of an organism. Another is gene expression which describes when and where different genes are turned on. Both of these data types can now be acquired using fluorescent laser-scanning (confocal or 2-photon) microscopy of embryos tagged with fluorescent proteins to generate 3D movies of developing embryos. However, analyzing the wealth of resulting images requires tools capable of interactively visualizing several different types of information as well as being scalable to terabytes of data. This paper describes how the combination of existing large data volume visualization and the new Titan information visualization framework of the Visualization Toolkit (VTK) can be applied to the problem of studying the cell lineage of an organism. In particular, by linking the visualization of spatial and temporal gene expression data with novel ways of visualizing cell lineage data, users can study how the genome regulates different aspects of embryonic development.
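
    For readers unfamiliar with VTK, the fragment below shows the basic source-mapper-actor-renderer pattern that both VTK's volume rendering and its Titan information-visualization layer build on. It is a generic VTK example in Python (a sphere standing in for one rendered object), not the authors' cell-lineage pipeline.

    ```python
    import vtk

    # Minimal VTK pipeline: source -> mapper -> actor -> renderer -> window.
    source = vtk.vtkSphereSource()          # stand-in geometry for one object
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(source.GetOutputPort())

    actor = vtk.vtkActor()
    actor.SetMapper(mapper)

    renderer = vtk.vtkRenderer()
    renderer.AddActor(actor)

    window = vtk.vtkRenderWindow()
    window.AddRenderer(renderer)

    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)

    window.Render()
    interactor.Start()                      # hand control to the interactor
    ```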

  3. Alcohol and disorientation-related responses. II, Nystagmus and "vertigo" during angular acceleration.

    DOT National Transportation Integrated Search

    1971-04-01

    The integrity of the visual and vestibular systems is important in the maintenance of orientation during flight. Although alcohol is known to affect the vestibular system through the development of a positional alcohol nystagmus, information concerni...

  4. Shewregdb: Database and visualization environment for experimental and predicted regulatory information in Shewanella oneidensis mr-1

    PubMed Central

    Syed, Mustafa H; Karpinets, Tatiana V; Leuze, Michael R; Kora, Guruprasad H; Romine, Margaret R; Uberbacher, Edward C

    2009-01-01

    Shewanella oneidensis MR-1 is an important model organism for environmental research as it has an exceptional metabolic and respiratory versatility regulated by a complex regulatory network. We have developed a database to collect experimental and computational data relating to regulation of gene and protein expression, and a visualization environment that enables integration of these data types. The regulatory information in the database includes predictions of DNA regulator binding sites, sigma factor binding sites, transcription units, operons, promoters, and RNA regulators including non-coding RNAs, riboswitches, and different types of terminators. Availability http://shewanella-knowledgebase.org:8080/Shewanella/gbrowserLanding.jsp PMID:20198195

  5. No evidence for visual context-dependency of olfactory learning in Drosophila

    NASA Astrophysics Data System (ADS)

    Yarali, Ayse; Mayerle, Moritz; Nawroth, Christian; Gerber, Bertram

    2008-08-01

    How is behaviour organised across sensory modalities? Specifically, we ask, concerning the fruit fly Drosophila melanogaster, how visual context affects olfactory learning and recall, and whether information about visual context is integrated into olfactory memory. We find that changing visual context between training and test does not impair olfactory memory scores, suggesting that these olfactory memories can drive behaviour despite a mismatch of visual context between training and test. Rather, both the establishment and the recall of olfactory memory are generally facilitated by light. In a follow-up experiment, we find no evidence for learning about combinations of odours and visual context as predictors for reinforcement, even after explicit training in a so-called biconditional discrimination task. Thus, a ‘true’ interaction between visual and olfactory modalities is not evident; instead, light seems to influence olfactory learning and recall unspecifically, for example by altering motor activity, alertness or olfactory acuity.

  6. Sketchy Rendering for Information Visualization.

    PubMed

    Wood, J; Isenberg, P; Isenberg, T; Dykes, J; Boukhelifa, N; Slingsby, A

    2012-12-01

    We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These primitives allow higher-level graphical features such as bar charts, line charts, treemaps and node-link diagrams to be drawn in a sketchy style with a specified degree of sketchiness. The framework is designed to be easily integrated into existing visualization implementations with minimal programming modification or design effort. We show examples of use for statistical graphics, conveying spatial imprecision and for enhancing aesthetic and narrative qualities of visualization. We evaluate user perception of sketchiness of areal features through a series of stimulus-response tests in order to assess users' ability to place sketchiness on a ratio scale, and to estimate area. Results suggest relative area judgment is compromised by sketchy rendering and that its influence is dependent on the shape being rendered. They show that degree of sketchiness may be judged on an ordinal scale but that its judgement varies strongly between individuals. We evaluate higher-level impacts of sketchiness through user testing of scenarios that encourage user engagement with data visualization and willingness to critique visualization design. Results suggest that where a visualization is clearly sketchy, engagement may be increased and that attitudes to participating in visualization annotation are more positive. The results of our work have implications for effective information visualization design that go beyond the traditional role of sketching as a tool for prototyping or its use for an indication of general uncertainty.
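
    The paper's renderer targets the Processing environment, but the same sketchy aesthetic can be previewed in Python via matplotlib's built-in xkcd mode, which jitters strokes to mimic hand-drawn graphics. The snippet below is only an analogy to the idea of sketchy rendering, not the authors' framework.

    ```python
    import matplotlib.pyplot as plt

    # matplotlib's xkcd mode renders jittered, hand-drawn-looking strokes,
    # loosely analogous to the paper's sketchy Processing renderer.
    with plt.xkcd():
        fig, ax = plt.subplots()
        ax.bar(["A", "B", "C"], [3, 7, 5])
        ax.set_title("Sketchy bar chart")
        plt.show()
    ```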

  7. Goal-Directed Visual Processing Differentially Impacts Human Ventral and Dorsal Visual Representations

    PubMed Central

    2017-01-01

    Recent studies have challenged the ventral/“what” and dorsal/“where” two-visual-processing-pathway view by showing the existence of “what” and “where” information in both pathways. Is the two-pathway distinction still valid? Here, we examined how goal-directed visual information processing may differentially impact visual representations in these two pathways. Using fMRI and multivariate pattern analysis, in three experiments on human participants (57% females), by manipulating whether color or shape was task-relevant and how they were conjoined, we examined shape-based object category decoding in occipitotemporal and parietal regions. We found that object category representations in all the regions examined were influenced by whether or not object shape was task-relevant. This task effect, however, tended to decrease as task-relevant and irrelevant features were more integrated, reflecting the well-known object-based feature encoding. Interestingly, task relevance played a relatively minor role in driving the representational structures of early visual and ventral object regions. They were driven predominantly by variations in object shapes. In contrast, the effect of task was much greater in dorsal than ventral regions, with object category and task relevance both contributing significantly to the representational structures of the dorsal regions. These results showed that, whereas visual representations in the ventral pathway are more invariant and reflect “what an object is,” those in the dorsal pathway are more adaptive and reflect “what we do with it.” Thus, despite the existence of “what” and “where” information in both visual processing pathways, the two pathways may still differ fundamentally in their roles in visual information representation. SIGNIFICANCE STATEMENT Visual information is thought to be processed in two distinctive pathways: the ventral pathway that processes “what” an object is and the dorsal pathway that processes “where” it is located. This view has been challenged by recent studies revealing the existence of “what” and “where” information in both pathways. Here, we found that goal-directed visual information processing differentially modulates shape-based object category representations in the two pathways. Whereas ventral representations are more invariant to the demand of the task, reflecting what an object is, dorsal representations are more adaptive, reflecting what we do with the object. Thus, despite the existence of “what” and “where” information in both pathways, visual representations may still differ fundamentally in the two pathways. PMID:28821655

  8. Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2009-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  9. Bioenergy Knowledge Discovery Framework Fact Sheet

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    The Bioenergy Knowledge Discovery Framework (KDF) supports the development of a sustainable bioenergy industry by providing access to a variety of data sets, publications, and collaboration and mapping tools that support bioenergy research, analysis, and decision making. In the KDF, users can search for information, contribute data, and use the tools and map interface to synthesize, analyze, and visualize information in a spatially integrated manner.

  10. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    PubMed

    Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye

    2014-01-01

    Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has been rarely investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate the selective attention to auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in all recorded brain regions. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  11. Defective imitation of finger configurations in patients with damage in the right or left hemispheres: An integration disorder of visual and somatosensory information?

    PubMed

    Okita, Manabu; Yukihiro, Takashi; Miyamoto, Kenzo; Morioka, Shu; Kaba, Hideto

    2017-04-01

    To explore the mechanism underlying the imitation of finger gestures, we devised a simple imitation task in which the patients were instructed to replicate finger configurations in two conditions: one in which they could see their hand (visual feedback: VF) and one in which they could not see their hand (non-visual feedback: NVF). Patients with left brain damage (LBD) or right brain damage (RBD), respectively, were categorized into two groups based on their scores on the imitation task in the NVF condition: the impaired imitation groups (I-LBD and I-RBD) who failed two or more of the five patterns and the control groups (C-LBD and C-RBD) who made one or no errors. We also measured the movement-production times for imitation. The I-RBD group performed significantly worse than the C-RBD group even in the VF condition. In contrast, the I-LBD group was selectively impaired in the NVF condition. The I-LBD group performed the imitations at a significantly slower rate than the C-LBD group in both the VF and NVF conditions. These results suggest that impaired imitation in patients with LBD is partly due to an abnormal integration of visual and somatosensory information based on the task specificity of the NVF condition. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. A Bayesian Account of Visual–Vestibular Interactions in the Rod-and-Frame Task

    PubMed Central

    de Brouwer, Anouk J.; Medendorp, W. Pieter

    2016-01-01

    Abstract Panoramic visual cues, as generated by the objects in the environment, provide the brain with important information about gravity direction. To derive an optimal, i.e., Bayesian, estimate of gravity direction, the brain must combine panoramic information with gravity information detected by the vestibular system. Here, we examined the individual sensory contributions to this estimate psychometrically. We asked human subjects to judge the orientation (clockwise or counterclockwise relative to gravity) of a briefly flashed luminous rod, presented within an oriented square frame (rod-in-frame). Vestibular contributions were manipulated by tilting the subject’s head, whereas visual contributions were manipulated by changing the viewing distance of the rod and frame. Results show a cyclical modulation of the frame-induced bias in perceived verticality across a 90° range of frame orientations. The magnitude of this bias decreased significantly with larger viewing distance, as if visual reliability was reduced. Biases increased significantly when the head was tilted, as if vestibular reliability was reduced. A Bayesian optimal integration model, with distinct vertical and horizontal panoramic weights, a gain factor to allow for visual reliability changes, and ocular counterroll in response to head tilt, provided a good fit to the data. We conclude that subjects flexibly weigh visual panoramic and vestibular information based on their orientation-dependent reliability, resulting in the observed verticality biases and the associated response variabilities. PMID:27844055
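
    At its core, the optimal integration underlying such models is the standard reliability-weighted fusion of two Gaussian cues. The sketch below shows that textbook rule; the example numbers are invented, and the authors' full model additionally includes distinct panoramic weights, a visual gain factor, and ocular counterroll, which are not reproduced here.

    ```python
    import numpy as np

    def fuse(mu_vis, sigma_vis, mu_vest, sigma_vest):
        """Reliability-weighted (Bayesian) fusion of two Gaussian cues.
        Each cue's weight is proportional to its inverse variance."""
        w_vis = sigma_vest**2 / (sigma_vis**2 + sigma_vest**2)
        mu = w_vis * mu_vis + (1 - w_vis) * mu_vest
        sigma = np.sqrt(sigma_vis**2 * sigma_vest**2 /
                        (sigma_vis**2 + sigma_vest**2))
        return mu, sigma

    # Invented example: the frame suggests vertical is tilted 10 deg (unreliable),
    # the vestibular cue says 0 deg (reliable); the fused estimate leans vestibular.
    print(fuse(10.0, 5.0, 0.0, 2.0))
    ```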

  13. Integration of Geographical Information Systems and Geophysical Applications with Distributed Computing Technologies.

    NASA Astrophysics Data System (ADS)

    Pierce, M. E.; Aktas, M. S.; Aydin, G.; Fox, G. C.; Gadgil, H.; Sayar, A.

    2005-12-01

    We examine the application of Web Service Architectures and Grid-based distributed computing technologies to geophysics and geo-informatics. We are particularly interested in the integration of Geographical Information System (GIS) services with distributed data mining applications. GIS services provide the general purpose framework for building archival data services, real time streaming data services, and map-based visualization services that may be integrated with data mining and other applications through the use of distributed messaging systems and Web Service orchestration tools. Building upon our previous work in these areas, we present our current research efforts. These include fundamental investigations into increasing XML-based Web service performance, supporting real time data streams, and integrating GIS mapping tools with audio/video collaboration systems for shared display and annotation.

  14. The role of spatial integration in the perception of surface orientation with active touch.

    PubMed

    Giachritsis, Christos D; Wing, Alan M; Lovell, Paul G

    2009-10-01

    Vision research has shown that perception of line orientation, in the fovea area, improves with line length (Westheimer & Ley, 1997). This suggests that the visual system may use spatial integration to improve perception of orientation. In the present experiments, we investigated the role of spatial integration in the perception of surface orientation using kinesthetic and proprioceptive information from shoulder and elbow. With their left index fingers, participants actively explored virtual slanted surfaces of different lengths and orientations, and were asked to reproduce an orientation or discriminate between two orientations. Results showed that reproduction errors and discrimination thresholds improve with surface length. This suggests that the proprioceptive shoulder-elbow system may integrate redundant spatial information resulting from extended arm movements to improve orientation judgments.

  15. Forecasting and visualization of wildfires in a 3D geographical information system

    NASA Astrophysics Data System (ADS)

    Castrillón, M.; Jorge, P. A.; López, I. J.; Macías, A.; Martín, D.; Nebot, R. J.; Sabbagh, I.; Quintana, F. M.; Sánchez, J.; Sánchez, A. J.; Suárez, J. P.; Trujillo, A.

    2011-03-01

    This paper describes a wildfire forecasting application based on a 3D virtual environment and a fire simulation engine. A novel open-source framework is presented for the development of 3D graphics applications over large geographic areas, offering high performance 3D visualization and powerful interaction tools for the Geographic Information Systems (GIS) community. The application includes a remote module that allows simultaneous connections of several users for monitoring a real wildfire event. The system is able to make a realistic composition of what is really happening in the area of the wildfire with dynamic 3D objects and the location of human and material resources in real time, providing a new perspective from which to analyze the wildfire information. The user can simulate and visualize the propagation of a fire on the terrain, integrating spatial information on topography and vegetation types with weather and wind data. The application communicates with a remote web service that is in charge of the simulation task. The user may specify several parameters through a friendly interface before the application sends the information to the remote server responsible for carrying out the wildfire forecast using the FARSITE simulation model. During the process, the server connects to different external resources to obtain up-to-date meteorological data. The client application implements a realistic 3D visualization of the fire evolution on the landscape. A Level Of Detail (LOD) strategy contributes to improving the performance of the visualization system.
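
    As a rough illustration of the client-server split described above, the sketch below posts simulation parameters to a remote service and reads back fire perimeters. The URL, parameter names, and response format are all invented for illustration; they are not the application's actual web service or the FARSITE interface.

    ```python
    import requests  # third-party HTTP client

    # Hypothetical request to a FARSITE-style wildfire simulation service.
    params = {
        "ignition_lat": 28.1, "ignition_lon": -15.4,   # invented coordinates
        "wind_speed_kmh": 30, "wind_dir_deg": 45,
        "duration_h": 6,
    }
    resp = requests.post("https://example.org/wildfire/simulate",
                         json=params, timeout=60)
    resp.raise_for_status()
    perimeters = resp.json()  # e.g. timestamped fire-front polygons to render in 3D
    ```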

  16. A Brief Period of Postnatal Visual Deprivation Alters the Balance between Auditory and Visual Attention.

    PubMed

    de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier

    2016-11-21

    Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.
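
    Miller's race model test referenced above compares the empirical audiovisual RT distribution against the sum of the two unimodal distributions. The function below is a minimal version of that comparison (without the correction procedures used in published analyses); positive values indicate redundancy gains larger than a race between independent channels allows.

    ```python
    import numpy as np

    def ecdf(samples, t):
        """Empirical cumulative distribution of `samples` evaluated at times `t`."""
        return (np.asarray(samples)[:, None] <= t).mean(axis=0)

    def race_model_violation(rt_a, rt_v, rt_av, n_points=50):
        """Maximum violation of Miller's inequality:
        P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t)."""
        lo = min(np.min(rt_a), np.min(rt_v), np.min(rt_av))
        hi = max(np.max(rt_a), np.max(rt_v), np.max(rt_av))
        t = np.linspace(lo, hi, n_points)
        bound = np.clip(ecdf(rt_a, t) + ecdf(rt_v, t), 0.0, 1.0)
        return np.max(ecdf(rt_av, t) - bound)
    ```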

  17. From Visual Exploration to Storytelling and Back Again.

    PubMed

    Gratzl, S; Lex, A; Gehlenborg, N; Cosgrove, N; Streit, M

    2016-06-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author "Vistories", visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract).

  18. Multisensory integration of speech sounds with letters vs. visual speech: only visual speech induces the mismatch negativity.

    PubMed

    Stekelenburg, Jeroen J; Keetels, Mirjam; Vroomen, Jean

    2018-05-01

    Numerous studies have demonstrated that the vision of lip movements can alter the perception of auditory speech syllables (McGurk effect). While there is ample evidence for integration of text and auditory speech, there are only a few studies on the orthographic equivalent of the McGurk effect. Here, we examined whether written text, like visual speech, can induce an illusory change in the perception of speech sounds on both the behavioural and neural levels. In a sound categorization task, we found that both text and visual speech changed the identity of speech sounds from an /aba/-/ada/ continuum, but the size of this audiovisual effect was considerably smaller for text than visual speech. To examine at which level in the information processing hierarchy these multisensory interactions occur, we recorded electroencephalography in an audiovisual mismatch negativity (MMN, a component of the event-related potential reflecting preattentive auditory change detection) paradigm in which deviant text or visual speech was used to induce an illusory change in a sequence of ambiguous sounds halfway between /aba/ and /ada/. We found that only deviant visual speech induced an MMN, but not deviant text, which induced a late P3-like positive potential. These results demonstrate that text has much weaker effects on sound processing than visual speech does, possibly because text has different biological roots than visual speech. © 2018 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  19. From Visual Exploration to Storytelling and Back Again

    PubMed Central

    Gratzl, S.; Lex, A.; Gehlenborg, N.; Cosgrove, N.; Streit, M.

    2016-01-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author “Vistories”, visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals. (see Figure 1 for visual abstract) PMID:27942091

  20. Questionnaire-based person trip visualization and its integration to quantitative measurements in Myanmar

    NASA Astrophysics Data System (ADS)

    Kimijiama, S.; Nagai, M.

    2016-06-01

    With the development of telecommunications in Myanmar, person trip surveys are expected to shift from conversational questionnaires to GPS surveys. Integrating historical questionnaire data with GPS surveys and visualizing them is very important for evaluating chronological trip changes against socio-economic and environmental events. The objectives of this paper are to: (a) visualize questionnaire-based person trip data, (b) compare the errors between the questionnaire and GPS data sets with respect to sex and age and (c) assess trip behaviour in time-series. In total, 345 individual respondents were selected through random stratification, and person trips were assessed for each using both a questionnaire and a GPS survey. Trip information from the questionnaires, such as destinations, was converted using GIS. The results show that the errors between the two data sets in the number of trips, total trip distance and total trip duration are 25.5%, 33.2% and 37.2%, respectively. The smaller errors are found among working-age females, mainly those employed in the project-related activities generated by foreign investment. Trip distance increased year by year. The study concluded that visualizing questionnaire-based person trip data and integrating it with current quantitative measurements are very useful for exploring historical trip changes and understanding impacts from socio-economic events.
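
    The error percentages above are consistent with a simple relative-error comparison between the two data sources. The helper below shows one such computation in Python; the exact formula and the toy numbers are illustrative assumptions, since the abstract does not spell out the paper's error metric.

    ```python
    import numpy as np

    def pct_error(questionnaire, gps):
        """Mean absolute percentage difference, treating GPS as the reference."""
        q = np.asarray(questionnaire, dtype=float)
        g = np.asarray(gps, dtype=float)
        return 100.0 * np.mean(np.abs(q - g) / g)

    # Toy per-respondent trip counts only; the paper reports 25.5% (trips),
    # 33.2% (distance) and 37.2% (duration) over 345 respondents.
    print(pct_error([3, 2, 4], [4, 2, 5]))
    ```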

  1. Development of a GIS-based integrated framework for coastal seiches monitoring and forecasting: A North Jiangsu shoal case study

    NASA Astrophysics Data System (ADS)

    Qin, Rufu; Lin, Liangzhao

    2017-06-01

    Coastal seiches have become an increasingly important issue in coastal science and present many challenges, particularly when attempting to provide warning services. This paper presents the methodologies, techniques and integrated services adopted for the design and implementation of a Seiches Monitoring and Forecasting Integration Framework (SMAF-IF). The SMAF-IF is an integrated system with different types of sensors and numerical models and incorporates Geographic Information System (GIS) and web techniques, focusing on coastal seiche event detection and early warning in the North Jiangsu shoal, China. The in situ sensors perform automatic and continuous monitoring of the marine environment status, and the numerical models provide the meteorological and physical oceanographic parameter estimates. A model-output processing software package was developed in C# using ArcGIS Engine functions, which provides the capability of automatically generating visualization maps and warning information. Leveraging the ArcGIS Flex API and ASP.NET web services, a web-based GIS framework was designed to facilitate quasi real-time data access, interactive visualization and analysis, and the provision of early warning services for end users. The integrated framework proposed in this study enables decision-makers and the public to respond quickly to coastal seiche emergencies and allows easy adaptation to other regional and scientific domains related to real-time monitoring and forecasting.

  2. How scientists develop competence in visual communication

    NASA Astrophysics Data System (ADS)

    Ostergren, Marilyn

    Visuals (maps, charts, diagrams and illustrations) are an important tool for communication in most scientific disciplines, which means that scientists benefit from having strong visual communication skills. This dissertation examines the nature of competence in visual communication and the means by which scientists acquire this competence. This examination takes the form of an extensive multi-disciplinary integrative literature review and a series of interviews with graduate-level science students. The results are presented as a conceptual framework that lays out the components of competence in visual communication, including the communicative goals of science visuals, the characteristics of effective visuals, the skills and knowledge needed to create effective visuals and the learning experiences that promote the acquisition of these forms of skill and knowledge. This conceptual framework can be used to inform pedagogy and thus help graduate students achieve a higher level of competency in this area; it can also be used to identify aspects of acquiring competence in visual communication that need further study.

  3. Individual variation in the propensity for prospective thought is associated with functional integration between visual and retrosplenial cortex.

    PubMed

    Villena-Gonzalez, Mario; Wang, Hao-Ting; Sormaz, Mladen; Mollo, Giovanna; Margulies, Daniel S; Jefferies, Elizabeth A; Smallwood, Jonathan

    2018-02-01

    It is well recognized that the default mode network (DMN) is involved in states of imagination, although the cognitive processes that this association reflects are not well understood. The DMN includes many regions that function as cortical "hubs", including the posterior cingulate/retrosplenial cortex, anterior temporal lobe and the hippocampus. This suggests that the role of the DMN in cognition may reflect a process of cortical integration. In the current study we tested whether functional connectivity from uni-modal regions of cortex into the DMN is linked to features of imaginative thought. We found that strong intrinsic communication between visual and retrosplenial cortex was correlated with the degree of social thoughts about the future. Using an independent dataset, we show that the same region of retrosplenial cortex is functionally coupled to regions of primary visual cortex as well as core regions that make up the DMN. Finally, we compared the functional connectivity of the retrosplenial cortex, with a region of medial prefrontal cortex implicated in the integration of information from regions of the temporal lobe associated with future thought in a prior study. This analysis shows that the retrosplenial cortex is preferentially coupled to medial occipital, temporal lobe regions and the angular gyrus, areas linked to episodic memory, scene construction and navigation. In contrast, the medial prefrontal cortex shows preferential connectivity with motor cortex and lateral temporal and prefrontal regions implicated in language, motor processes and working memory. Together these findings suggest that integrating neural information from visual cortex into retrosplenial cortex may be important for imagining the future and may do so by creating a mental scene in which prospective simulations play out. We speculate that the role of the DMN in imagination may emerge from its capacity to bind together distributed representations from across the cortex in a coherent manner. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. The research of autonomous obstacle avoidance of mobile robot based on multi-sensor integration

    NASA Astrophysics Data System (ADS)

    Zhao, Ming; Han, Baoling

    2016-11-01

    The object of this study is a bionic quadruped mobile robot. The study proposes a system design for mobile robot obstacle avoidance that combines a binocular stereo vision sensor and a self-developed 3D Lidar with modified ant colony optimization path planning to reconstruct the environment map. Because the working conditions of a mobile robot are complex, 3D reconstruction with a single binocular sensor is unreliable when feature points are few and lighting is poor. Therefore, this system integrates the Bumblebee2 stereo vision sensor and the Lidar sensor to detect the 3D point cloud of environmental obstacles, and proposes a sensor information fusion technique to rebuild the environment map: obstacles are first detected from the Lidar data and the visual data separately, and the two results are then fused to obtain a more complete and accurate distribution of obstacles in the scene. The thesis then introduces the ant colony algorithm, analyzes the advantages and disadvantages of ant colony optimization and their causes in depth, and improves the algorithm to increase its convergence rate and precision in robot path planning. These improvements address shortcomings of ant colony optimization such as easily becoming trapped in local optima, slow search speed and poor search results. The experiments process images and program the motor drive in the Matlab and Visual Studio environments and establish a visual 2.5D grid map. Finally, a global path for the mobile robot is planned with the ant colony algorithm. The feasibility and effectiveness of the system are confirmed on ROS and a Linux simulation platform.
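
    To make the path-planning component concrete, here is a compact, generic ant colony optimization for a shortest path on a toy graph. The graph, parameters, and the plain pheromone update are illustrative; the paper's modified ACO and its convergence improvements are not specified in the abstract.

    ```python
    import random

    # Toy weighted graph: node -> {neighbor: edge cost}. Invented for illustration.
    GRAPH = {
        "S": {"A": 1.0, "B": 2.5},
        "A": {"B": 1.0, "G": 2.5},
        "B": {"G": 1.0},
        "G": {},
    }
    ALPHA, BETA = 1.0, 2.0   # pheromone vs. heuristic (1/cost) influence
    RHO, Q = 0.5, 1.0        # evaporation rate and deposit constant

    def run_aco(n_ants=20, n_iters=30, start="S", goal="G"):
        tau = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}  # initial pheromone
        best_path, best_cost = None, float("inf")
        for _ in range(n_iters):
            tours = []
            for _ in range(n_ants):
                node, path, cost = start, [start], 0.0
                while node != goal:
                    nbrs = [v for v in GRAPH[node] if v not in path]
                    if not nbrs:          # dead end: discard this ant
                        path = None
                        break
                    weights = [tau[(node, v)] ** ALPHA
                               * (1.0 / GRAPH[node][v]) ** BETA for v in nbrs]
                    node = random.choices(nbrs, weights=weights)[0]
                    cost += GRAPH[path[-1]][node]
                    path.append(node)
                if path is not None:
                    tours.append((path, cost))
                    if cost < best_cost:
                        best_path, best_cost = path, cost
            for edge in tau:              # evaporate...
                tau[edge] *= (1.0 - RHO)
            for path, cost in tours:      # ...then deposit, favouring short tours
                for u, v in zip(path, path[1:]):
                    tau[(u, v)] += Q / cost
        return best_path, best_cost

    print(run_aco())  # converges on S -> A -> B -> G with cost 3.0
    ```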

  5. Models of Speed Discrimination

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most of the research efforts in the study of the visual system seem to have been focused in two almost non-overlapping directions. One research focus has been low-level perception as studied by psychophysics. The other has been the study of high-level vision, exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing these analyzers that decompose images into useful components. Various models are then compared to physiological measurements performed on neurons in the sensory systems. In the study of higher level perception, the work has been focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D structure from a variety of physical measurements including motion, shading and other physical phenomena. With few exceptions, there seems to have been very limited development of theories describing how the visual system might combine the output of the analyzers to form the representation of visual objects. Therefore, the processes underlying the integration of information over space represent critical aspects of the vision system. The understanding of these processes will have implications for our expectations about the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: 1. Modeling visual search for the detection of speed deviation. 2. Perception of moving objects. 3. Exploring the role of eye movements in various visual tasks.

  6. Development and Evaluation of a Compartmental Picture Archiving and Communications System Model for Integration and Visualization of Multidisciplinary Biomedical Data to Facilitate Student Learning in an Integrative Health Clinic

    ERIC Educational Resources Information Center

    Chow, Meyrick; Chan, Lawrence

    2010-01-01

    Information technology (IT) has the potential to improve the clinical learning environment. The extent to which IT enhances or detracts from healthcare professionals' role performance can be expected to affect both student learning and patient outcomes. This study evaluated nursing students' satisfaction with a novel compartmental Picture…

  7. Approaches to integrating indicators into 3D landscape visualisations and their benefits for participative planning situations.

    PubMed

    Wissen, Ulrike; Schroth, Olaf; Lange, Eckart; Schmid, Willy A

    2008-11-01

    In discussing issues of landscape change, the complex relationships in the landscape have to be assessed. In participative planning processes, 3D visualisations have a high potential as an aid to understanding and communicating characteristics of landscape conditions by integrating visual and non-visual landscape information. It is unclear, however, which design and how much interactivity an indicator visualisation requires to suit stakeholders best in workshop situations. This paper describes the preparation and application of three different types of integrated 3D visualisations in workshops conducted in the Entlebuch UNESCO Biosphere Reserve (CH). The results reveal that simple representations of a complex issue, created by draping thematic maps on the 3D model, can make problematic developments visible at a glance; that diagrams linked to the spatial context can help draw attention to problematic relationships not considered beforehand; and that the size of species as indicators of conditions of the landscape's production and biotope function seems to provide a common language for stakeholders with different perspectives. Overall, the visualisations of the indicators have to provide the functions required to assist stakeholders in information processing. Further research should focus on testing the effectiveness of the integrated visualisation tools in participative processes for the general public.

  8. Ground Penetrating Radar as a Contextual Sensor for Multi-Sensor Radiological Characterisation

    PubMed Central

    Ukaegbu, Ikechukwu K.; Gamage, Kelum A. A.

    2017-01-01

    Radioactive sources exist in environments or contexts that influence how they are detected and localised. For instance, the context of a moving source is different from a stationary source because of the effects of motion. The need to incorporate this contextual information in the radiation detection and localisation process has necessitated the integration of radiological and contextual sensors. The benefits of the successful integration of both types of sensors is well known and widely reported in fields such as medical imaging. However, the integration of both types of sensors has also led to innovative solutions to challenges in characterising radioactive sources in non-medical applications. This paper presents a review of such recent applications. It also identifies that these applications mostly use visual sensors as contextual sensors for characterising radiation sources. However, visual sensors cannot retrieve contextual information about radioactive wastes located in opaque environments encountered at nuclear sites, e.g., underground contamination. Consequently, this paper also examines ground-penetrating radar (GPR) as a contextual sensor for characterising this category of wastes and proposes several ways of integrating data from GPR and radiological sensors. Finally, it demonstrates combined GPR and radiation imaging for three-dimensional localisation of contamination in underground pipes using radiation transport and GPR simulations. PMID:28387706

  9. How music alters a kiss: superior temporal gyrus controls fusiform-amygdalar effective connectivity.

    PubMed

    Pehrs, Corinna; Deserno, Lorenz; Bakels, Jan-Hendrik; Schlochtermeier, Lorna H; Kappelhoff, Hermann; Jacobs, Arthur M; Fritz, Thomas Hans; Koelsch, Stefan; Kuchinke, Lars

    2014-11-01

    While watching movies, the brain integrates the visual information and the musical soundtrack into a coherent percept. Multisensory integration can lead to emotion elicitation on which soundtrack valences may have a modulatory impact. Here, dynamic kissing scenes from romantic comedies were presented to 22 participants (13 females) during functional magnetic resonance imaging scanning. The kissing scenes were either accompanied by happy music, sad music or no music. Evidence from cross-modal studies motivated a predefined three-region network for multisensory integration of emotion, consisting of fusiform gyrus (FG), amygdala (AMY) and anterior superior temporal gyrus (aSTG). The interactions in this network were investigated using dynamic causal models of effective connectivity. This revealed bilinear modulations by happy and sad music with suppression effects on the connectivity from FG and AMY to aSTG. Non-linear dynamic causal modeling showed a suppressive gating effect of aSTG on fusiform-amygdalar connectivity. In conclusion, fusiform to amygdala coupling strength is modulated via feedback through aSTG as region for multisensory integration of emotional material. This mechanism was emotion-specific and more pronounced for sad music. Therefore, soundtrack valences may modulate emotion elicitation in movies by differentially changing preprocessed visual information to the amygdala. © The Author (2013). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  10. How music alters a kiss: superior temporal gyrus controls fusiform–amygdalar effective connectivity

    PubMed Central

    Deserno, Lorenz; Bakels, Jan-Hendrik; Schlochtermeier, Lorna H.; Kappelhoff, Hermann; Jacobs, Arthur M.; Fritz, Thomas Hans; Koelsch, Stefan; Kuchinke, Lars

    2014-01-01

    While watching movies, the brain integrates the visual information and the musical soundtrack into a coherent percept. Multisensory integration can lead to emotion elicitation on which soundtrack valences may have a modulatory impact. Here, dynamic kissing scenes from romantic comedies were presented to 22 participants (13 females) during functional magnetic resonance imaging scanning. The kissing scenes were either accompanied by happy music, sad music or no music. Evidence from cross-modal studies motivated a predefined three-region network for multisensory integration of emotion, consisting of fusiform gyrus (FG), amygdala (AMY) and anterior superior temporal gyrus (aSTG). The interactions in this network were investigated using dynamic causal models of effective connectivity. This revealed bilinear modulations by happy and sad music with suppression effects on the connectivity from FG and AMY to aSTG. Non-linear dynamic causal modeling showed a suppressive gating effect of aSTG on fusiform–amygdalar connectivity. In conclusion, fusiform to amygdala coupling strength is modulated via feedback through aSTG as region for multisensory integration of emotional material. This mechanism was emotion-specific and more pronounced for sad music. Therefore, soundtrack valences may modulate emotion elicitation in movies by differentially changing preprocessed visual information to the amygdala. PMID:24298171

  11. Integrated visualization of remote sensing data using Google Earth

    NASA Astrophysics Data System (ADS)

    Castella, M.; Rigo, T.; Argemi, O.; Bech, J.; Pineda, N.; Vilaclara, E.

    2009-09-01

    The need for advanced visualization tools for meteorological data has led in recent years to the development of sophisticated software packages, either by observing-system manufacturers or by third-party solution providers. For example, manufacturers of remote sensing systems such as weather radars or lightning detection systems include zoom, product selection and archive access capabilities, as well as quantitative tools for data analysis, as standard features that are highly appreciated in weather surveillance or post-event case study analysis. However, the fact that each manufacturer has its own visualization system and data formats hampers the usability and integration of different data sources. In this context, Google Earth (GE) offers the possibility of combining several types of graphical information in a unique visualization system that can be easily accessed by users. The Meteorological Service of Catalonia (SMC) has been evaluating the use of GE as a visualization platform for surveillance tasks during adverse weather events. First experiences concern the real-time integration of remote sensing data: radar, lightning, and satellite. The tool animates the combined products over the last hour, giving a good picture of the meteorological situation. One of the main advantages of this product is that it is easy to install on many computers and does not have high computational requirements. Besides this, GE can indicate the areas most affected by heavy rain or other weather phenomena. On the downside, the main disadvantage is that the product offers only qualitative information; quantitative data are only available through the graphical display (i.e., through color scales not associated with physical values that users can access easily). The procedure developed to run in real time is divided into three parts. First, a crontab file launches different applications depending on the data type (satellite, radar, or lightning) to be treated. For each type of data, the launch interval is different, ranging from 5 (satellite and lightning) to 6 minutes (radar). The second part is the use of IDL and ENVI programs, which search each archive for the latest images within the last hour. In the case of lightning data, the files are generated by the procedure, while for the others the procedure searches for existing imagery. Finally, the procedure generates the metadata information required by GE as kml files and sends them to the internal server. At the same time, on the local computer where GE is running, kml files update the information by referring to the server ones. Another application that has been evaluated is the analysis of past events. In this sense, further work is devoted to developing access procedures to archived data via cgi scripts in order to retrieve and convert the information into a format suitable for GE. The presentation includes examples of the evaluation of the use of GE, and a brief comparison with other existing visualization systems available within the SMC.
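
    The kml files mentioned above are plain XML that Google Earth loads directly; a georeferenced image product can be published as a GroundOverlay. The writer below shows that standard KML structure with invented names and bounds; it is not the SMC's actual generation procedure (which uses IDL/ENVI).

    ```python
    # Minimal KML GroundOverlay writer. The overlay name, image URL and
    # lat/lon bounds below are invented placeholders.
    KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <GroundOverlay>
        <name>{name}</name>
        <Icon><href>{image_url}</href></Icon>
        <LatLonBox>
          <north>{north}</north><south>{south}</south>
          <east>{east}</east><west>{west}</west>
        </LatLonBox>
      </GroundOverlay>
    </kml>"""

    def write_overlay_kml(path, name, image_url, north, south, east, west):
        """Write one georeferenced image overlay that Google Earth can open."""
        with open(path, "w", encoding="utf-8") as f:
            f.write(KML_TEMPLATE.format(name=name, image_url=image_url,
                                        north=north, south=south,
                                        east=east, west=west))

    write_overlay_kml("radar.kml", "Radar 12:00Z",
                      "http://example.org/radar_1200.png",
                      43.0, 40.0, 4.5, 0.0)
    ```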

  12. Blind subjects construct conscious mental images of visual scenes encoded in musical form.

    PubMed Central

    Cronly-Dillon, J; Persaud, K C; Blore, R

    2000-01-01

    Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form. PMID:11413637

  13. Higher order visual input to the mushroom bodies in the bee, Bombus impatiens.

    PubMed

    Paulk, Angelique C; Gronenberg, Wulfila

    2008-11-01

    To produce appropriate behaviors based on biologically relevant associations, sensory pathways conveying different modalities are integrated by higher-order central brain structures, such as insect mushroom bodies. To address this function of sensory integration, we characterized the structure and response of optic lobe (OL) neurons projecting to the calyces of the mushroom bodies in bees. Bees are well known for their visual learning and memory capabilities and their brains possess major direct visual input from the optic lobes to the mushroom bodies. To functionally characterize these visual inputs to the mushroom bodies, we recorded intracellularly from neurons in bumblebees (Apidae: Bombus impatiens) and a single neuron in a honeybee (Apidae: Apis mellifera) while presenting color and motion stimuli. All of the mushroom body input neurons were color sensitive while a subset was motion sensitive. Additionally, most of the mushroom body input neurons would respond to the first, but not to subsequent, presentations of repeated stimuli. In general, the medulla or lobula neurons projecting to the calyx signaled specific chromatic, temporal, and motion features of the visual world to the mushroom bodies, which included sensory information required for the biologically relevant associations bees form during foraging tasks.

  14. Visualizing Complex Environments in the Geo- and BioSciences

    NASA Astrophysics Data System (ADS)

    Prabhu, A.; Fox, P. A.; Zhong, H.; Eleish, A.; Ma, X.; Zednik, S.; Morrison, S. M.; Moore, E. K.; Muscente, D.; Meyer, M.; Hazen, R. M.

    2017-12-01

    Earth's living and non-living components have co-evolved for 4 billion years through numerous positive and negative feedbacks. Earth and life scientists have amassed vast amounts of data in diverse fields related to planetary evolution through deep time: mineralogy and petrology, paleobiology and paleontology, paleotectonics and paleomagnetism, geochemistry and geochronology, genomics and proteomics, and more. Integrating the data from these complementary disciplines is very useful in gaining an understanding of the evolution of our planet's environment. The integrated data, however, represent many extremely complex environments. In order to gain insights and make discoveries using these data, it is important to model and visualize these complex environments. As part of the work in understanding the "Co-Evolution of Geo and Biospheres using Data Driven Methodologies," we have developed several visualizations to help represent the information stored in the datasets from complementary disciplines. These visualizations include 2D and 3D force-directed networks, chord diagrams, 3D Klee diagrams, evolving network diagrams, skyline diagrams and tree diagrams. Combining these visualizations with the results of machine learning and data analysis methods leads to a powerful way to discover patterns and relationships in the Earth's past and today's changing environment.
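
    As a small illustration of one of the listed visualization types, the snippet below draws a force-directed network with networkx and matplotlib. The nodes and edges are invented placeholders, not data from the project's datasets.

    ```python
    import networkx as nx
    import matplotlib.pyplot as plt

    # Toy co-occurrence network; the mineral names and edges are invented
    # for illustration only.
    G = nx.Graph()
    G.add_edges_from([("quartz", "pyrite"), ("quartz", "calcite"),
                      ("pyrite", "chalcopyrite"), ("calcite", "dolomite")])

    pos = nx.spring_layout(G, seed=42)  # force-directed (spring) layout
    nx.draw(G, pos, with_labels=True, node_color="lightsteelblue")
    plt.show()
    ```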

  15. Those are Your Legs: The Effect of Visuo-Spatial Viewpoint on Visuo-Tactile Integration and Body Ownership

    PubMed Central

    Pozeg, Polona; Galli, Giulia; Blanke, Olaf

    2015-01-01

    Experiencing a body part as one’s own, i.e., body ownership, depends on the integration of multisensory bodily signals (including visual, tactile, and proprioceptive information) with the visual top-down signals from peripersonal space. Although it has been shown that the visuo-spatial viewpoint from where the body is seen is an important visual top-down factor for body ownership, different studies have reported diverging results. Furthermore, the role of visuo-spatial viewpoint (sometimes also called first-person perspective) has only been studied for hands or the whole body, but not for the lower limbs. We thus investigated whether and how leg visuo-tactile integration and leg ownership depended on the visuo-spatial viewpoint from which the legs were seen and the anatomical similarity of the visual leg stimuli. Using a virtual leg illusion, we tested the strength of visuo-tactile integration of leg stimuli using the crossmodal congruency effect (CCE) as well as the subjective sense of leg ownership (assessed by a questionnaire). Fifteen participants viewed virtual legs or non-corporeal control objects, presented either from their habitual first-person viewpoint or from a viewpoint that was rotated by 90° (third-person viewpoint), while visuo-tactile stroking was applied between the participants’ legs and the virtual legs shown on a head-mounted display. The data show that the first-person visuo-spatial viewpoint significantly boosts visuo-tactile integration as well as the sense of leg ownership. Moreover, the viewpoint-dependent increment of visuo-tactile integration was only found in the conditions in which participants viewed the virtual legs (absent for control objects). These results confirm the importance of the first-person visuo-spatial viewpoint for the integration of visuo-tactile stimuli and extend findings from the upper extremity and the trunk to visuo-tactile integration and ownership for the legs. PMID:26635663

  16. Time-interval for integration of stabilizing haptic and visual information in subjects balancing under static and dynamic conditions

    PubMed Central

    Honeine, Jean-Louis; Schieppati, Marco

    2014-01-01

    Maintaining equilibrium is basically a sensorimotor integration task. The central nervous system (CNS) continually and selectively weights and rapidly integrates sensory inputs from multiple sources, and coordinates multiple outputs. The weighting process is based on the availability and accuracy of afferent signals at a given instant, on the time-period required to process each input, and possibly on the plasticity of the relevant pathways. The likelihood that sensory inflow changes while balancing under static or dynamic conditions is high, because subjects can pass from a dark to a well-lit environment or from tactile-guided stabilization to loss of haptic inflow. This review article presents recent data on the temporal events accompanying sensory transition, a topic on which basic information is fragmentary. The processing time from sensory shift to reaching a new steady state includes the time to (a) subtract or integrate sensory inputs; (b) move from an allocentric to an egocentric reference or vice versa; and (c) adjust the calibration of motor activity in time and amplitude to the new sensory set. We present examples of the integration of posture-stabilizing information, and of the respective sensorimotor time-intervals, while allowing or occluding vision or adding or subtracting tactile information. These intervals are short, on the order of 1–2 s across different postural conditions, modalities and deliberate or passive shifts. They are slightly longer for haptic than for visual shifts, slightly shorter on withdrawal than on addition of a stabilizing input, and slightly shorter for deliberate than for unexpected shifts. The delays are shortest (for haptic shifts) in blind subjects. Since automatic balance stabilization may be vulnerable to sensory-integration delays and to interference from concurrent cognitive tasks in patients with sensorimotor problems, insight into the processing time for balance control represents a critical step in the design of new balance- and locomotion-training devices. PMID:25339872
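
    One way to operationalize the time-interval in question is as the lag between a sensory shift and the moment sway variability settles at its new level. A rough Python sketch of such an estimator follows; the sliding-RMS criterion, window length and tolerance are all illustrative assumptions, not the review's method:

        import numpy as np

        def settling_time(sway, t_shift, fs=100.0, win=0.5, tol=1.10):
            """Seconds from the sensory shift at t_shift until sliding-window
            sway variability first falls within tol of its final level."""
            n = int(win * fs)
            post = sway[int(t_shift * fs):]
            rms = np.array([post[i:i + n].std() for i in range(len(post) - n)])
            target = rms[-int(fs):].mean()            # steady-state level (last second)
            settled = np.where(rms <= tol * target)[0]
            return settled[0] / fs if settled.size else float("nan")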

  17. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    PubMed

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect the visual signal to be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain this apparent conflict. In this model, words are regarded as points in a multidimensional feature space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
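
    The core idea can be imitated in a toy simulation: words are random points in a feature space, audition gives a noisy reading of every dimension, and vision a cleaner reading of only a few (lip movements do not disambiguate all features). The Python sketch below uses precision-weighted fusion and a nearest-word decision rule; every parameter value is an illustrative assumption, and the code is not the paper's implementation:

        import numpy as np

        rng = np.random.default_rng(0)

        def recognition_rate(sigma_a, use_visual, dim=20, dim_v=4, sigma_v=0.5,
                             n_words=50, n_trials=2000):
            """Fraction of trials on which the correct word is recovered."""
            words = rng.standard_normal((n_words, dim))
            correct = 0
            for _ in range(n_trials):
                idx = rng.integers(n_words)
                est = words[idx] + sigma_a * rng.standard_normal(dim)
                if use_visual:
                    xv = words[idx, :dim_v] + sigma_v * rng.standard_normal(dim_v)
                    # Precision-weighted fusion on the visually informed dimensions.
                    wa = sigma_v**2 / (sigma_a**2 + sigma_v**2)
                    est[:dim_v] = wa * est[:dim_v] + (1 - wa) * xv
                # Decision: nearest word in feature space (a simplification of MAP).
                correct += np.argmin(((words - est)**2).sum(axis=1)) == idx
            return correct / n_trials

        for sigma_a in (0.25, 1.0, 4.0):   # low, intermediate, high auditory noise
            gain = recognition_rate(sigma_a, True) - recognition_rate(sigma_a, False)
            print(f"sigma_a={sigma_a}: audiovisual gain = {gain:.3f}")

    In this toy setup the audiovisual gain tends to peak at intermediate noise: with little noise audition alone suffices, and with extreme noise the many visually uninformed dimensions still swamp the decision.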

  18. Reliability and relative weighting of visual and nonvisual information for perceiving direction of self-motion during walking

    PubMed Central

    Saunders, Jeffrey A.

    2014-01-01

    Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and tested whether the cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with an average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with a larger effect in the presence of dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures. PMID:24648194
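
    The optimality claim can be checked against the reported numbers with the standard maximum-likelihood cue-combination rule, under which precisions (inverse variances) add. A minimal Python sketch (the algebra is the textbook rule, not code from the study):

        # Reported heading variability (deg RMS): nonvisual cues alone, and
        # combined with optic flow (lower end of the 1.9-2.1 range).
        sigma_nv, sigma_c = 2.4, 1.9

        # Optimal combination implies 1/sigma_c^2 = 1/sigma_v^2 + 1/sigma_nv^2,
        # so the visual-only variance implied by the data is:
        var_v = 1 / (1 / sigma_c**2 - 1 / sigma_nv**2)

        # The predicted weight on vision is its share of the total precision.
        w_v = (1 / var_v) / (1 / var_v + 1 / sigma_nv**2)
        print(f"implied sigma_v = {var_v**0.5:.2f} deg, visual weight = {w_v:.2f}")

    With sigma_c = 1.9 the predicted visual weight is about 0.37; repeating the calculation with 2.1 gives about 0.23, bracketing the observed 16%–34% influence.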

  19. ENVIRONMENTAL REVIEWS AND CASE STUDIES: The National Park Service Visual Resource Inventory: Capturing the Historic and Cultural Values of Scenic Views

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, Robert G.; Meyer, Mark E.

    Several United States (US) federal agencies have developed visual resource inventory (VRI) and management systems that reflect specific agency missions and visual resource management objectives. These programs have varied in the degree to which they incorporate historic and cultural elements and values into the scenic inventory process. The recent nationwide expansion of renewable energy and associated transmission development is causing an increase in visual impacts on both scenic and historic/cultural resources. This increase has highlighted the need for better integration of visual and historic/cultural resource assessment and management activities for land use planning purposes. The US Department of the Interior National Park Service (NPS), in response to concerns arising from potential scenic impacts from renewable energy, electric transmission, and other types of development on lands and waters near NPS units, has developed a VRI process for high-value views both within and outside NPS unit boundaries. The NPS VRI incorporates historic and cultural elements and values into the scenic resource inventory process and provides practical guidance and metrics for successfully integrating historic and cultural concerns into the NPS’s scenic resource conservation efforts. This article describes the NPS VRI process and compares it with the VRI processes of the US Department of the Interior Bureau of Land Management and the US Department of Agriculture Forest Service, with respect to the incorporation of historic and cultural values. The article discusses why a scenic inventory approach that more robustly integrates the historic and cultural values of the landscape is essential for NPS landscapes, and for fulfillment of NPS’s mission. Inventories are underway at many NPS units, and the results indicate that the VRI process can be used successfully to capture important historic and cultural resource information and incorporate that information into the assessment of the scenic values of views within and outside NPS units. Environmental Practice 18: 166–179 (2016)

  20. imDEV: a graphical user interface to R multivariate analysis tools in Microsoft Excel

    PubMed Central

    Grapov, Dmitry; Newman, John W.

    2012-01-01

    Summary: Interactive modules for Data Exploration and Visualization (imDEV) is a Microsoft Excel spreadsheet-embedded application providing an integrated environment for the analysis of omics data through a user-friendly interface. Individual modules enable interactive and dynamic analyses of large datasets by interfacing R's multivariate statistics and highly customizable visualizations with the spreadsheet environment, aiding robust inferences and generating information-rich data visualizations. The tool provides access to multiple comparisons with false discovery rate correction, hierarchical clustering, principal and independent component analyses, and partial least squares regression and discriminant analysis, through an intuitive interface for creating high-quality two- and three-dimensional visualizations, including scatter plot matrices, distribution plots, dendrograms, heat maps, biplots, trellis biplots and correlation networks. Availability and implementation: Freely available for download at http://sourceforge.net/projects/imdev/. Implemented in R and VBA and supported by Microsoft Excel (2003, 2007 and 2010). Contact: John.Newman@ars.usda.gov Supplementary Information: Installation instructions, tutorials and users manual are available at http://sourceforge.net/projects/imdev/. PMID:22815358
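
    imDEV itself is implemented in R and VBA, but the kind of multivariate workflow it exposes (dimensionality reduction plus hierarchical clustering) can be sketched in a few lines. The Python stand-in below uses scikit-learn and SciPy on made-up data and is not imDEV code:

        import numpy as np
        from sklearn.decomposition import PCA
        from scipy.cluster.hierarchy import linkage

        # Made-up omics-style matrix: 20 samples x 50 features.
        rng = np.random.default_rng(1)
        X = rng.standard_normal((20, 50))

        # Principal component analysis: sample scores on the top two components,
        # the basis of scatter-plot and biplot views.
        scores = PCA(n_components=2).fit_transform(X)

        # Ward-linkage hierarchical clustering of samples, as used for
        # dendrograms and row ordering in heat maps.
        Z = linkage(X, method="ward")
        print(scores.shape, Z.shape)   # (20, 2) and (19, 4)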
