Sample records for additional visual information

  1. When kinesthetic information is neglected in learning a novel bimanual rhythmic coordination.

    PubMed

    Zhu, Qin; Mirich, Todd; Huang, Shaochen; Snapp-Childs, Winona; Bingham, Geoffrey P

    2017-08-01

    Many studies have shown that rhythmic interlimb coordination involves perception of the coupled limb movements, and different sensory modalities can be used. Using visual displays to inform the coupled bimanual movement, novel bimanual coordination patterns can be learned with practice. A recent study showed that similar learning occurred without vision when a coach provided manual guidance during practice. The information provided via the two different modalities may be the same (amodal) or different (modality specific). If it is different, then learning with both is a dual task, and one source of information might be used in preference to the other in performing the task when both are available. In the current study, participants learned a novel 90° bimanual coordination pattern without or with visual information in addition to kinesthesis. At posttest, all participants were tested without and with visual information in addition to kinesthesis. When tested with visual information, all participants exhibited performance that was significantly improved by practice. When tested without visual information, participants who practiced using only kinesthetic information showed improvement, but those who practiced with visual information in addition showed remarkably less improvement. The results indicate that (1) the information is not amodal, (2) use of a single type of information was preferred, and (3) the preferred information was visual. We also hypothesized that older participants might be more likely to acquire dual task performance given their greater experience of the two sensory modes in combination, but results were replicated with both 20- and 50-year-olds.

  2. The contribution of visual and vestibular information to spatial orientation by 6- to 14-month-old infants and adults.

    PubMed

    Bremner, J Gavin; Hatton, Fran; Foster, Kirsty A; Mason, Uschi

    2011-09-01

    Although there is much research on infants' ability to orient in space, little is known regarding the information they use to do so. This research uses a rotating room to evaluate the relative contribution of visual and vestibular information to location of a target following bodily rotation. Adults responded precisely on the basis of visual flow information. Seven-month-olds responded mostly on the basis of visual flow, whereas 9-month-olds responded mostly on the basis of vestibular information, and 12-month-olds responded mostly on the basis of visual information. Unlike adults, infants of all ages showed partial influence by both modalities. Additionally, 7-month-olds were capable of using vestibular information when there was no visual information for movement or stability, and 9-month-olds still relied on vestibular information when visual information was enhanced. These results are discussed in the context of neuroscientific evidence regarding visual-vestibular interaction, and in relation to possible changes in reliance on visual and vestibular information following acquisition of locomotion. © 2011 Blackwell Publishing Ltd.

  3. High visual resolution matters in audiovisual speech perception, but only for some.

    PubMed

    Alsius, Agnès; Wayne, Rachel V; Paré, Martin; Munhall, Kevin G

    2016-07-01

    The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compare participants based on their ability to benefit from visual speech information in the presence of an auditory signal degraded with noise, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information, compared to participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with participants with the highest visual gain fixating longer on the mouth region. Our results indicate that the individual variance in audiovisual speech in noise performance can be accounted for, in part, by better use of fine facial detail information extracted from the visual signal and increased fixation on mouth regions for short stimuli. Thus, for some, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.

  4. Training haptic stiffness discrimination: time course of learning with or without visual information and knowledge of results.

    PubMed

    Teodorescu, Kinneret; Bouchigny, Sylvain; Korman, Maria

    2013-08-01

    In this study, we explored the time course of haptic stiffness discrimination learning and how it was affected by two experimental factors, the addition of visual information and/or knowledge of results (KR) during training. Stiffness perception may integrate both haptic and visual modalities. However, in many tasks, the visual field is typically occluded, forcing stiffness perception to be dependent exclusively on haptic information. No studies to date addressed the time course of haptic stiffness perceptual learning. Using a virtual environment (VE) haptic interface and a two-alternative forced-choice discrimination task, the haptic stiffness discrimination ability of 48 participants was tested across 2 days. Each day included two haptic test blocks separated by a training block. Additional visual information and/or KR were manipulated between participants during training blocks. Practice repetitions alone induced significant improvement in haptic stiffness discrimination. Between days, accuracy was slightly improved, but decision time performance deteriorated. The addition of visual information and/or KR had only temporary effects on decision time, without affecting the time course of haptic discrimination learning. Learning in haptic stiffness discrimination appears to evolve through at least two distinctive phases: A single training session resulted in both immediate and latent learning. This learning was not affected by the training manipulations inspected. Training skills in VE in spaced sessions can be beneficial for tasks in which haptic perception is critical, such as surgical procedures, when the visual field is occluded. However, training protocols for such tasks should account for the low impact of multisensory information and KR.

  5. The NAS Computational Aerosciences Archive

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.; Globus, Al; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In order to further the state-of-the-art in computational aerosciences (CAS) technology, researchers must be able to gather and understand existing work in the field. One aspect of this information gathering is studying published work available in scientific journals and conference proceedings. However, current scientific publications are very limited in the type and amount of information that they can disseminate. Information is typically restricted to text, a few images, and a bibliography list. Additional information that might be useful to the researcher, such as additional visual results, referenced papers, and datasets, is not available. New forms of electronic publication, such as the World Wide Web (WWW), limit publication size only by available disk space and data transmission bandwidth, both of which are improving rapidly. The Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is in the process of creating an archive of CAS information on the WWW. This archive will be based on the large amount of information produced by researchers associated with the NAS facility. The archive will contain technical summaries and reports of research performed on NAS supercomputers, visual results (images, animations, visualization system scripts), datasets, and any other supporting meta-information. This information will be available via the WWW through the NAS homepage, located at http://www.nas.nasa.gov/, fully indexed for searching. The main components of the archive are technical summaries and reports, visual results, and datasets. Technical summaries are gathered every year by researchers who have been allotted resources on NAS supercomputers. These summaries, together with supporting visual results and references, are browsable by interested researchers. Referenced papers made available by researchers can be accessed through hypertext links.
Technical reports are in-depth accounts of tools and applications research projects performed by NAS staff members and collaborators. Visual results, which may be available in the form of images, animations, and/or visualization scripts, are generated by researchers with respect to a certain research project, depicting dataset features that were determined important by the investigating researcher. For example, script files for visualization systems (e.g. FAST, PLOT3D, AVS) are provided to create visualizations on the user's local workstation to elucidate the key points of the numerical study. Users can then interact with the data starting where the investigator left off. Datasets are intended to give researchers an opportunity to understand previous work, 'mine' solutions for new information (for example, have you ever read a paper thinking "I wonder what the helicity density looks like?"), compare new techniques with older results, collaborate with remote colleagues, and perform validation. Supporting meta-information associated with the research projects is also important to provide additional context for research projects. This may include information such as the software used in the simulation (e.g. grid generators, flow solvers, visualization). In addition to serving the CAS research community, the information archive will also be helpful to students, visualization system developers and researchers, and management. Students (of any age) can use the data to study fluid dynamics, compare results from different flow solvers, learn about meshing techniques, etc., leading to better informed individuals. For these users it is particularly important that visualization be integrated into dataset archives. Visualization researchers can use dataset archives to test algorithms and techniques, leading to better visualization systems. Management can use the data to figure out what is really going on behind the viewgraphs.
All users will benefit from fast, easy, and convenient access to CFD datasets. The CAS information archive hopes to serve as a useful resource to those interested in computational sciences. At present, only information that may be distributed internationally is made available via the archive. Studies are underway to determine security requirements and solutions to make additional information available. By providing access to the archive via the WWW, the process of information gathering can be more productive and fruitful due to ease of access and ability to manage many different types of information. As the archive grows, additional resources from outside NAS will be added, providing a dynamic source of research results.

  6. Information processing in the primate visual system - An integrated systems perspective

    NASA Technical Reports Server (NTRS)

    Van Essen, David C.; Anderson, Charles H.; Felleman, Daniel J.

    1992-01-01

    The primate visual system contains dozens of distinct areas in the cerebral cortex and several major subcortical structures. These subdivisions are extensively interconnected in a distributed hierarchical network that contains several intertwined processing streams. A number of strategies are used for efficient information processing within this hierarchy. These include linear and nonlinear filtering, passage through information bottlenecks, and coordinated use of multiple types of information. In addition, dynamic regulation of information flow within and between visual areas may provide the computational flexibility needed for the visual system to perform a broad spectrum of tasks accurately and at high resolution.

  7. The effect of visual and verbal modes of presentation on children's retention of images and words

    NASA Astrophysics Data System (ADS)

    Vasu, Ellen Storey; Howe, Ann C.

    This study tested the hypothesis that the use of two modes of presenting information to children has an additive memory effect for the retention of both images and words. Subjects were 22 first-grade and 22 fourth-grade children randomly assigned to visual and visual-verbal treatment groups. The visual-verbal group heard a description while observing an object; the visual group observed the same object but did not hear a description. Children were tested individually immediately after presentation of stimuli and two weeks later. They were asked to represent the information recalled through a drawing and an oral verbal description. In general, results supported the hypothesis and indicated, in addition, that children represent more information in iconic (pictorial) form than in symbolic (verbal) form. Strategies for using these results to enhance science learning at the elementary school level are discussed.

  8. A lack of vision: evidence for poor communication of visual problems and support needs in education statements/plans for children with SEN.

    PubMed

    Little, J-A; Saunders, K J

    2015-02-01

    Visual dysfunction is more common in children with neurological impairments and previous studies have recommended such children receive visual and refractive assessment. In the UK, children with neurological impairment often have educational statementing for Special Educational Needs (SEN) and the statement should detail all health care and support needs to ensure the child's needs are met during school life. This study examined the representation of visual information in statements of SEN and compared this to orthoptic visual information from school visual assessments for children in a special school in Northern Ireland, UK. The parents of 115 school children in a special school were informed about the study via written information. Participation involved parents permitting the researchers to access their child's SEN educational statement and orthoptic clinical records. Statement information was accessed for 28 participants aged between four and 19 years; 25 contained visual information. Two participants were identified in their statements as having a certification of visual impairment. An additional 10 children had visual acuity ≥ 0.3 logMAR. This visual deficit was not reported in statements in eight out of these 12 cases (67%). 11 participants had significant refractive error and wore spectacles, but only five (45%) had this requirement recorded in their statement. Overall, 10 participants (55%) had either reduced visual acuity or significant refractive error which was not recorded in their statement. Despite additional visual needs being common, and described in clinical records, the majority of those with reduced vision and/or spectacle requirements did not have this information included in their statement. If visual limitations are not recognized by educational services, the child's needs may not be met during school life. 
More comprehensive eye care services, embedded with stakeholder communication and links to education are necessary to improve understanding of vision for children with neurological impairments. Copyright © 2014 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  9. Speech identification in noise: Contribution of temporal, spectral, and visual speech cues.

    PubMed

    Kim, Jeesun; Davis, Chris; Groot, Christopher

    2009-12-01

    This study investigated the degree to which two types of reduced auditory signals (cochlear implant simulations) and visual speech cues combined for speech identification. The auditory speech stimuli were filtered to have only amplitude envelope cues or both amplitude envelope and spectral cues and were presented with/without visual speech. In Experiment 1, IEEE sentences were presented in quiet and noise. For in-quiet presentation, speech identification was enhanced by the addition of both spectral and visual speech cues. Due to a ceiling effect, the degree to which these effects combined could not be determined. In noise, these facilitation effects were more marked and were additive. Experiment 2 examined consonant and vowel identification in the context of CVC or VCV syllables presented in noise. For consonants, both spectral and visual speech cues facilitated identification and these effects were additive. For vowels, the effect of combined cues was underadditive, with the effect of spectral cues reduced when presented with visual speech cues. Analysis indicated that without visual speech, spectral cues facilitated the transmission of place information and vowel height, whereas with visual speech, they facilitated lip rounding, with little impact on the transmission of place information.

  10. The shaping of information by visual metaphors.

    PubMed

    Ziemkiewicz, Caroline; Kosara, Robert

    2008-01-01

    The nature of an information visualization can be considered to lie in the visual metaphors it uses to structure information. The process of understanding a visualization therefore involves an interaction between these external visual metaphors and the user's internal knowledge representations. To investigate this claim, we conducted an experiment to test the effects of visual metaphor and verbal metaphor on the understanding of tree visualizations. Participants answered simple data comprehension questions while viewing either a treemap or a node-link diagram. Questions were worded to reflect a verbal metaphor that was either compatible or incompatible with the visualization a participant was using. The results (based on correctness and response time) suggest that the visual metaphor indeed affects how a user derives information from a visualization. Additionally, we found that the degree to which a user is affected by the metaphor is strongly correlated with the user's ability to answer task questions correctly. These findings are a first step towards illuminating how visual metaphors shape user understanding, and have significant implications for the evaluation, application, and theory of visualization.

  11. Learning Building Layouts with Non-geometric Visual Information: The Effects of Visual Impairment and Age

    PubMed Central

    Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.

    2009-01-01

    Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants—younger (< 50 years) normally sighted, older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment ranging in age from 18–67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732

  12. The contribution of visual information to the perception of speech in noise with and without informative temporal fine structure

    PubMed Central

    Stacey, Paula C.; Kitterick, Pádraig T.; Morris, Saffron D.; Sumner, Christian J.

    2017-01-01

    Understanding what is said in demanding listening situations is assisted greatly by looking at the face of a talker. Previous studies have observed that normal-hearing listeners can benefit from this visual information when a talker's voice is presented in background noise. These benefits have also been observed in quiet listening conditions in cochlear-implant users, whose device does not convey the informative temporal fine structure cues in speech, and when normal-hearing individuals listen to speech processed to remove these informative temporal fine structure cues. The current study (1) characterised the benefits of visual information when listening in background noise; and (2) used sine-wave vocoding to compare the size of the visual benefit when speech is presented with or without informative temporal fine structure. The accuracy with which normal-hearing individuals reported words in spoken sentences was assessed across three experiments. The availability of visual information and informative temporal fine structure cues was varied within and across the experiments. The results showed that visual benefit was observed using open- and closed-set tests of speech perception. The size of the benefit increased when informative temporal fine structure cues were removed. This finding suggests that visual information may play an important role in the ability of cochlear-implant users to understand speech in many everyday situations. Models of audio-visual integration were able to account for the additional benefit of visual information when speech was degraded and suggested that auditory and visual information was being integrated in a similar way in all conditions. The modelling results were consistent with the notion that audio-visual benefit is derived from the optimal combination of auditory and visual sensory cues. PMID:27085797
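    The "optimal combination of auditory and visual sensory cues" mentioned above is commonly modelled as maximum-likelihood (inverse-variance-weighted) cue integration. The sketch below is a minimal illustration of that general principle, not the specific model fitted in this study; the function name and example values are invented for illustration.

    ```python
    def combine_cues(mu_a, var_a, mu_v, var_v):
        """Maximum-likelihood combination of two Gaussian cue estimates.

        Each cue contributes in proportion to its reliability (1/variance);
        the combined estimate is never less reliable than the better cue.
        """
        w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # weight on auditory cue
        w_v = 1 - w_a                                # weight on visual cue
        mu = w_a * mu_a + w_v * mu_v                 # combined mean
        var = 1 / (1 / var_a + 1 / var_v)            # combined variance
        return mu, var

    # A degraded (high-variance) auditory cue pulls the percept toward vision:
    mu, var = combine_cues(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0)
    # mu = 0.8 (dominated by vision), var = 0.8 (below either single cue)
    ```

    Under this scheme, removing informative temporal fine structure increases auditory variance, which raises the weight on the visual cue, consistent with the larger visual benefit observed for vocoded speech.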

  13. Visualization rhetoric: framing effects in narrative visualization.

    PubMed

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation. © 2011 IEEE

  14. Age-equivalent top-down modulation during cross-modal selective attention.

    PubMed

    Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam

    2014-12-01

    Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.

  15. Use of Visual Cues by Adults With Traumatic Brain Injuries to Interpret Explicit and Inferential Information.

    PubMed

    Brown, Jessica A; Hux, Karen; Knollman-Porter, Kelly; Wallace, Sarah E

    2016-01-01

    Concomitant visual and cognitive impairments following traumatic brain injuries (TBIs) may be problematic when the visual modality serves as a primary source for receiving information. Further difficulties comprehending visual information may occur when interpretation requires processing inferential rather than explicit content. The purpose of this study was to compare the accuracy with which people with and without severe TBI interpreted information in contextually rich drawings. Fifteen adults with and 15 adults without severe TBI participated in a repeated-measures, between-groups design. Participants were asked to match images to sentences that either conveyed explicit (ie, main action or background) or inferential (ie, physical or mental inference) information. The researchers compared accuracy between participant groups and among stimulus conditions. Participants with TBI demonstrated significantly poorer accuracy than participants without TBI when extracting information from images. In addition, participants with TBI demonstrated significantly higher response accuracy when interpreting explicit rather than inferential information; however, no significant difference emerged between sentences referencing main action versus background information or sentences providing physical versus mental inference information for this participant group. Difficulties gaining information from visual environmental cues may arise for people with TBI given their difficulties interpreting inferential content presented through the visual modality.

  16. The four-meter confrontation visual field test.

    PubMed Central

    Kodsi, S R; Younge, B R

    1992-01-01

    The 4-m confrontation visual field test has been successfully used at the Mayo Clinic for many years in addition to the standard 0.5-m confrontation visual field test. The 4-m confrontation visual field test is a test of macular function and can identify small central or paracentral scotomas that the examiner may not find when the patient is tested only at 0.5 m. Also, macular sparing in homonymous hemianopias and quadrantanopias may be identified with the 4-m confrontation visual field test. We recommend use of this confrontation visual field test, in addition to the standard 0.5-m confrontation visual field test, on appropriately selected patients to obtain the most information possible by confrontation visual field tests. PMID:1494829

  18. How do field of view and resolution affect the information content of panoramic scenes for visual navigation? A computational investigation.

    PubMed

    Wystrach, Antoine; Dewar, Alex; Philippides, Andrew; Graham, Paul

    2016-02-01

    The visual systems of animals have to provide information to guide behaviour and the informational requirements of an animal's behavioural repertoire are often reflected in its sensory system. For insects, this is often evident in the optical array of the compound eye. One behaviour that insects share with many animals is the use of learnt visual information for navigation. As ants are expert visual navigators it may be that their vision is optimised for navigation. Here we take a computational approach in asking how the details of the optical array influence the informational content of scenes used in simple view matching strategies for orientation. We find that robust orientation is best achieved with low-resolution visual information and a large field of view, similar to the optical properties seen for many ant species. A lower resolution allows for a trade-off between specificity and generalisation for stored views. Additionally, our simulations show that orientation performance increases if different portions of the visual field are considered as discrete visual sensors, each giving an independent directional estimate. This suggests that ants might benefit by processing information from their two eyes independently.
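    The simple view-matching strategy investigated above can be sketched as a rotational image-difference search: rotate the current panoramic view in azimuth and pick the rotation that best matches a stored snapshot. This is a minimal illustrative sketch of that general technique, not the authors' actual simulation; array shapes, resolution, and names are assumptions.

    ```python
    import numpy as np

    def best_heading(view, memory):
        """Return the azimuthal shift (in columns) that minimises the
        pixel-wise squared difference between the current panoramic view
        and a stored snapshot. Columns index azimuth, rows elevation."""
        diffs = [np.mean((np.roll(view, s, axis=1) - memory) ** 2)
                 for s in range(view.shape[1])]
        return int(np.argmin(diffs))

    # Toy panorama: 10 elevation x 72 azimuth pixels (i.e. 5 deg/column).
    rng = np.random.default_rng(0)
    memory = rng.random((10, 72))        # snapshot stored at the goal
    view = np.roll(memory, -5, axis=1)   # agent has rotated by 5 columns
    shift = best_heading(view, memory)   # recovers the 5-column rotation
    ```

    Lowering the column count of such arrays smooths the difference landscape, which is one way to see why the paper finds that low resolution with a wide field of view supports robust orientation.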

  19. Presentation of Information on Visual Displays.

    ERIC Educational Resources Information Center

    Pettersson, Rune

    This discussion of factors involved in the presentation of text, numeric data, and/or visuals using video display devices describes in some detail the following types of presentation: (1) visual displays, with attention to additive color combination; measurements, including luminance, radiance, brightness, and lightness; and standards, with…

  20. Predictive and postdictive mechanisms jointly contribute to visual awareness.

    PubMed

    Soga, Ryosuke; Akaishi, Rei; Sakai, Katsuyuki

    2009-09-01

    One of the fundamental issues in visual awareness is how we are able to perceive the scene in front of our eyes on time despite the delay in processing visual information. The prediction theory postulates that our visual system predicts the future to compensate for such delays. On the other hand, the postdiction theory postulates that our visual awareness is inevitably a delayed product. In the present study we used flash-lag paradigms in motion and color domains and examined how the perception of visual information at the time of flash is influenced by prior and subsequent visual events. We found that both types of event additively influence the perception of the present visual image, suggesting that our visual awareness results from joint contribution of predictive and postdictive mechanisms.

  1. Uncertainty Representation in Visualizations of Learning Analytics for Learners: Current Approaches and Opportunities

    ERIC Educational Resources Information Center

    Demmans Epp, Carrie; Bull, Susan

    2015-01-01

    Adding uncertainty information to visualizations is becoming increasingly common across domains since its addition helps ensure that informed decisions are made. This work has shown the difficulty that is inherent to representing uncertainty. Moreover, the representation of uncertainty has yet to be thoroughly explored in educational domains even…

  2. Getting connected: Both associative and semantic links structure semantic memory for newly learned persons.

    PubMed

    Wiese, Holger; Schweinberger, Stefan R

    2015-01-01

    The present study examined whether semantic memory for newly learned people is structured by visual co-occurrence, shared semantics, or both. Participants were trained with pairs of simultaneously presented (i.e., co-occurring) preexperimentally unfamiliar faces, which either did or did not share additionally provided semantic information (occupation, place of living, etc.). Semantic information could also be shared between faces that did not co-occur. A subsequent priming experiment revealed faster responses for both co-occurrence/no shared semantics and no co-occurrence/shared semantics conditions, than for an unrelated condition. Strikingly, priming was strongest in the co-occurrence/shared semantics condition, suggesting additive effects of these factors. Additional analysis of event-related brain potentials yielded priming in the N400 component only for combined effects of visual co-occurrence and shared semantics, with more positive amplitudes in this than in the unrelated condition. Overall, these findings suggest that both semantic relatedness and visual co-occurrence are important when novel information is integrated into person-related semantic memory.

  3. The quality of visual information about the lower extremities influences visuomotor coordination during virtual obstacle negotiation.

    PubMed

    Kim, Aram; Kretch, Kari S; Zhou, Zixuan; Finley, James M

    2018-05-09

    Successful negotiation of obstacles during walking relies on the integration of visual information about the environment with ongoing locomotor commands. When information about the body and environment are removed through occlusion of the lower visual field, individuals increase downward head pitch angle, reduce foot placement precision, and increase safety margins during crossing. However, whether these effects are mediated by loss of visual information about the lower extremities, the obstacle, or both remains to be seen. Here, we used a fully immersive, virtual obstacle negotiation task to investigate how visual information about the lower extremities is integrated with information about the environment to facilitate skillful obstacle negotiation. Participants stepped over virtual obstacles while walking on a treadmill with one of three types of visual feedback about the lower extremities: no feedback, end-point feedback, or a link-segment model. We found that absence of visual information about the lower extremities led to an increase in the variability of leading foot placement after crossing. The presence of a visual representation of the lower extremities promoted greater downward head pitch angle during the approach to and subsequent crossing of an obstacle. In addition, having greater downward head pitch was associated with closer placement of the trailing foot to the obstacle, further placement of the leading foot after the obstacle, and higher trailing foot clearance. These results demonstrate that the fidelity of visual information about the lower extremities influences both feed-forward and feedback aspects of visuomotor coordination during obstacle negotiation.

  4. Effects of body lean and visual information on the equilibrium maintenance during stance.

    PubMed

    Duarte, Marcos; Zatsiorsky, Vladimir M

    2002-09-01

    Maintenance of equilibrium was tested in conditions in which humans assume different leaning postures during upright standing. Subjects (n = 11) stood in 13 different body postures specified by visual center of pressure (COP) targets within their base of support (BOS). Different types of visual information were tested: continuous presentation of the visual target, no vision after target presentation, and simultaneous visual feedback of the COP. The following variables were used to describe the equilibrium maintenance: the mean of the COP position, the area of the ellipse covering the COP sway, and the resultant median frequency of the power spectral density of the COP displacement. The variability of the COP displacement, quantified by the COP area variable, increased when subjects occupied leaning postures, irrespective of the kind of visual information provided. This variability also increased when vision was removed relative to when vision was present. Without vision, drifts in the COP data were observed, which were larger for COP targets farther away from the neutral position. When COP feedback was given in addition to the visual target, the postural control system did not control stance better than in the condition with only visual information. These results indicate that the visual information is used by the postural control system at both short and long time scales.
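
    The resultant median frequency used above is the frequency that splits the power spectrum of the COP displacement into two halves of equal power. A minimal numpy sketch on a synthetic sway-like signal (illustrative only, not the study's analysis code):

```python
import numpy as np

def median_frequency(signal, fs):
    """Frequency below which half the spectral power of `signal` lies."""
    psd = np.abs(np.fft.rfft(signal)) ** 2          # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cum = np.cumsum(psd)                            # cumulative power
    return freqs[np.searchsorted(cum, cum[-1] / 2.0)]

# Synthetic COP-like trace: slow 0.2 Hz sway plus a weaker 1 Hz component
fs = 100.0
t = np.arange(0, 60, 1.0 / fs)
x = np.sin(2 * np.pi * 0.2 * t) + 0.3 * np.sin(2 * np.pi * 1.0 * t)
print(median_frequency(x, fs))  # ~0.2 Hz: dominated by the slow sway
```

    Leaning postures or removed vision would shift such a measure by changing how power is distributed across slow and fast sway components.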

  5. Short-term memory for spatial configurations in the tactile modality: a comparison with vision.

    PubMed

    Picard, Delphine; Monnier, Catherine

    2009-11-01

    This study investigates the role of acquisition constraints on the short-term retention of spatial configurations in the tactile modality in comparison with vision. It tests whether the sequential processing of information inherent to the tactile modality could account for limitation in short-term memory span for tactual-spatial information. In addition, this study investigates developmental aspects of short-term memory for tactual- and visual-spatial configurations. A total of 144 child and adult participants were assessed for their memory span in three different conditions: tactual, visual, and visual with a limited field of view. The results showed lower tactual-spatial memory span than visual-spatial, regardless of age. However, differences in memory span observed between the tactile and visual modalities vanished when the visual processing of information occurred within a limited field. These results provide evidence for an impact of acquisition constraints on the retention of spatial information in the tactile modality in both childhood and adulthood.

  6. Visual information mining in remote sensing image archives

    NASA Astrophysics Data System (ADS)

    Pelizzari, Andrea; Descargues, Vincent; Datcu, Mihai P.

    2002-01-01

    The present article focuses on the development of interactive exploratory tools for visually mining the image content in large remote sensing archives. Two aspects are treated: the iconic visualization of the global information in the archive and the progressive visualization of the image details. The proposed methods are integrated into the Image Information Mining (I2M) system. The images and image structure in the I2M system are indexed based on a probabilistic approach. The resulting links are managed by a relational database. Both the intrinsic complexity of the observed images and the diversity of user requests result in a great number of associations in the database. Thus, new tools have been designed to visualize, in iconic representation, the relationships created during a query or information-mining operation: visualization of the query results positioned on the geographical map, a quick-looks gallery, visualization of the measure of goodness of the query, and visualization of the image space for statistical evaluation purposes. Additionally, the I2M system is enhanced with progressive detail visualization to allow better access for operator inspection. I2M is a three-tier Java architecture and is optimized for the Internet.

  7. Perceived visual informativeness (PVI): construct and scale development to assess visual information in printed materials.

    PubMed

    King, Andy J; Jensen, Jakob D; Davis, LaShara A; Carcioppolo, Nick

    2014-01-01

    There is a paucity of research on the visual images used in health communication messages and campaign materials. Even though many studies suggest further investigation of these visual messages and their features, few studies provide specific constructs or assessment tools for evaluating the characteristics of visual messages in health communication contexts. The authors conducted 2 studies to validate a measure of perceived visual informativeness (PVI), a message construct assessing visual messages presenting statistical or indexical information. In Study 1, a 7-item scale was created that demonstrated good internal reliability (α = .91), as well as convergent and divergent validity with related message constructs such as perceived message quality, perceived informativeness, and perceived attractiveness. PVI also converged with a preference for visual learning but was unrelated to a person's actual vision ability. In addition, PVI exhibited concurrent validity with a number of important constructs including perceived message effectiveness, decisional satisfaction, and three key public health theory behavior predictors: perceived benefits, perceived barriers, and self-efficacy. Study 2 provided more evidence that PVI is an internally reliable measure and demonstrates that PVI is a modifiable message feature that can be tested in future experimental work. PVI provides an initial step to assist in the evaluation and testing of visual messages in campaign and intervention materials promoting informed decision making and behavior change.

  8. Photo-sharing social media for eHealth: analysing perceived message effectiveness of sexual health information on Instagram.

    PubMed

    O'Donnell, Nicole Hummel; Willoughby, Jessica Fitts

    2017-10-01

    Health professionals increasingly use social media to communicate health information, but it is unknown how visual message presentation on these platforms affects message reception. This study used an experiment to analyse how young adults (n = 839) perceive sexual health messages on Instagram. Participants were exposed to one of four conditions based on visual message presentation. Messages with embedded health content had the highest perceived message effectiveness ratings. Additionally, message sensation value, attitudes and systematic information processing were significant predictors of perceived message effectiveness. Implications for visual message design for electronic health are discussed.

  9. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs

    PubMed Central

    ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110


  11. Semantic integration of differently asynchronous audio-visual information in videos of real-world events in cognitive processing: an ERP study.

    PubMed

    Liu, Baolin; Wu, Guangning; Wang, Zhongning; Ji, Xiang

    2011-07-01

    In the real world, some of the auditory and visual information received by the human brain is temporally asynchronous. How is such information integrated in cognitive processing in the brain? In this paper, we aimed to study the semantic integration of differently asynchronous audio-visual information in cognitive processing using the ERP (event-related potential) method. Subjects were presented with videos of real-world events in which the auditory and visual information were temporally asynchronous. When the critical action was prior to the sound, sounds incongruous with the preceding critical actions elicited an N400 effect when compared to the congruous condition. This result demonstrates that semantic contextual integration indexed by the N400 also applies to cognitive processing of multisensory information. In addition, the N400 effect was earlier in latency than in other visually induced N400 studies, showing that cross-modal information is facilitated in time when contrasted with visual information in isolation. When the sound was prior to the critical action, a larger late positive wave was observed under the incongruous condition compared to the congruous condition. The P600 might represent a reanalysis process in which the mismatch between the critical action and the preceding sound was evaluated. It is shown that environmental sound may affect the cognitive processing of a visual event.

  12. Widespread correlation patterns of fMRI signal across visual cortex reflect eccentricity organization.

    PubMed

    Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan E B; Kastner, Sabine; Hasson, Uri

    2015-02-19

    The human visual system can be divided into more than two dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization, with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations, including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas.

  13. How the blind "see" Braille: lessons from functional magnetic resonance imaging.

    PubMed

    Sadato, Norihiro

    2005-12-01

    What does the visual cortex of the blind do during Braille reading? This process involves converting simple tactile information into meaningful patterns that have lexical and semantic properties. The perceptual processing of Braille might be mediated by the somatosensory system, whereas visual letter identity is accomplished within the visual system in sighted people. Recent advances in functional neuroimaging techniques, such as functional magnetic resonance imaging, have enabled exploration of the neural substrates of Braille reading. The primary visual cortex of early-onset blind subjects is functionally relevant to Braille reading, suggesting that the brain shows remarkable plasticity that potentially permits the additional processing of tactile information in the visual cortical areas.

  14. Disentangling brain activity related to the processing of emotional visual information and emotional arousal.

    PubMed

    Kuniecki, Michał; Wołoszyn, Kinga; Domagalik, Aleksandra; Pilarczyk, Joanna

    2018-05-01

    Processing of emotional visual information engages cognitive functions and induces arousal. We aimed to examine the modulatory role of emotional valence on brain activations linked to the processing of visual information and those linked to arousal. Participants were scanned and their pupil size was measured while viewing negative and neutral images. Visual noise was added to the images in various proportions to parametrically manipulate the amount of visual information. Pupil size was used as an index of physiological arousal. We show that arousal induced by the negative images, as compared to the neutral ones, is primarily related to greater amygdala activity, whereas increasing the visibility of negative content is related to enhanced activity in the lateral occipital complex (LOC). We argue that more intense visual processing of negative scenes can occur irrespective of the level of arousal. This may suggest that higher areas of the visual stream are fine-tuned to process emotionally relevant objects. Both arousal and the processing of emotional visual information modulated activity within the ventromedial prefrontal cortex (vmPFC). Overlapping activations within the vmPFC may reflect the integration of these aspects of emotional processing. Additionally, we show that emotionally evoked pupil dilations are related to activations in the amygdala, vmPFC, and LOC.

  15. Domestic pigs' (Sus scrofa domestica) use of direct and indirect visual and auditory cues in an object choice task.

    PubMed

    Nawroth, Christian; von Borell, Eberhard

    2015-05-01

    Recently, foraging strategies have been linked to the ability to use indirect visual information. More selective feeders should express a higher aversion to losses compared to non-selective feeders and should therefore be more prone to avoid empty food locations. To extend these findings, in this study, we present a series of experiments investigating the use of direct and indirect visual and auditory information by an omnivorous but selective feeder: the domestic pig. Subjects had to choose between two buckets, with only one containing a reward. Before making a choice, the subjects in Experiment 1 (N = 8) received full information regarding both the baited and non-baited location, either in a visual or auditory domain. In this experiment, the subjects were able to use visual but not auditory cues to infer the location of the reward spontaneously. Additionally, four individuals learned to use auditory cues after a period of training. In Experiment 2 (N = 8), the pigs were given different amounts of visual information about the content of the buckets-lifting either both of the buckets (full information), the baited bucket (direct information), the empty bucket (indirect information) or no bucket at all (no information). The subjects as a group were able to use direct and indirect visual cues. However, over the course of the experiment, the performance dropped to chance level when indirect information was provided. A final experiment (N = 3) provided preliminary results for pigs' use of indirect auditory information to infer the location of a reward. We conclude that pigs at a very young age are able to make decisions based on indirect information in the visual domain, whereas their performance in the use of indirect auditory information warrants further investigation.

  16. Brain processing of visual information during fast eye movements maintains motor performance.

    PubMed

    Panouillères, Muriel; Gaveau, Valérie; Socasau, Camille; Urquizar, Christian; Pélisson, Denis

    2013-01-01

    Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution (by shifting a visual target at saccade onset and blanking it at saccade offset) induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.

  17. Visual representation of statistical information improves diagnostic inferences in doctors and their patients.

    PubMed

    Garcia-Retamero, Rocio; Hoffrage, Ulrich

    2013-04-01

    Doctors and patients have difficulty inferring the predictive value of a medical test from information about the prevalence of a disease and the sensitivity and false-positive rate of the test. Previous research has established that communicating such information in a format the human mind is adapted to, namely natural frequencies, as compared to probabilities, boosts the accuracy of diagnostic inferences. In a study, we investigated to what extent these inferences can be improved, beyond the effect of natural frequencies, by providing visual aids. Participants were 81 doctors and 81 patients who made diagnostic inferences about three medical tests on the basis of information about the prevalence of a disease and the sensitivity and false-positive rate of the tests. Half of the participants received the information in natural frequencies, while the other half received the information in probabilities. Half of the participants only received numerical information, while the other half additionally received a visual aid representing the numerical information. In addition, participants completed a numeracy scale. Our study showed three important findings: (1) doctors and patients made more accurate inferences when information was communicated in natural frequencies as compared to probabilities; (2) visual aids boosted accuracy even when the information was provided in natural frequencies; and (3) doctors were more accurate in their diagnostic inferences than patients, though differences in accuracy disappeared when differences in numerical skills were controlled for. Our findings have important implications for medical practice as they suggest suitable ways to communicate quantitative medical data.
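
    The diagnostic inference the participants faced reduces to Bayes' rule: the probability of disease given a positive test follows from the prevalence, sensitivity, and false-positive rate. A short worked sketch with illustrative numbers (not the study's test materials):

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' rule."""
    true_pos = prevalence * sensitivity            # diseased and positive
    false_pos = (1 - prevalence) * false_positive_rate  # healthy but positive
    return true_pos / (true_pos + false_pos)

# Probability format: prevalence 1%, sensitivity 90%, false-positive rate 9%
ppv = positive_predictive_value(prevalence=0.01,
                                sensitivity=0.9,
                                false_positive_rate=0.09)
print(round(ppv, 3))  # prints 0.092
```

    The natural-frequency phrasing of the same numbers, e.g. "of 1000 people, 10 have the disease; 9 of them test positive, as do about 89 of the 990 healthy people, so roughly 9 of 98 positives are true", is the format the study found easier for doctors and patients alike.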

  18. Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.

    PubMed

    Borrie, Stephanie A

    2015-03-01

    This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal (the AV advantage) has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.

  19. What Visual Information Do Children and Adults Consider while Switching between Tasks? Eye-Tracking Investigation of Cognitive Flexibility Development

    ERIC Educational Resources Information Center

    Chevalier, Nicolas; Blaye, Agnes; Dufau, Stephane; Lucenet, Joanna

    2010-01-01

    This study investigated the visual information that children and adults consider while switching or maintaining object-matching rules. Eye movements of 5- and 6-year-old children and adults were collected with two versions of the Advanced Dimensional Change Card Sort, which requires switching between shape- and color-matching rules. In addition to…

  20. Image visualization of hyperspectral spectrum for LWIR

    NASA Astrophysics Data System (ADS)

    Chong, Eugene; Jeong, Young-Su; Lee, Jai-Hoon; Park, Dong Jo; Kim, Ju Hyun

    2015-07-01

    The image visualization of a real-time hyperspectral spectrum in the long-wave infrared (LWIR) range of 900-1450 cm^-1 by a color-matching function is addressed. It is well known that the absorption spectra of the main toxic industrial chemical (TIC) and chemical warfare agent (CWA) clouds are detected in this spectral region. Furthermore, significant spectral peaks due to various background species and unknown targets are also present; however, these are usually dismissed as noise, which limits their utilization. Herein, we applied a color-matching function to the hyperspectral data emitted from the materials and surfaces of artificial or natural backgrounds in the LWIR region. This information was used to classify and differentiate the background signals from the targeted substances, and the results were visualized as image data without additional visual equipment. The tristimulus-value-based visualization can quickly identify background species and targets in real-time detection in the LWIR.
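
    The tristimulus mapping described here amounts to projecting each pixel's spectrum onto three colour-matching functions, yielding an (X, Y, Z)-style triple that can be rendered as a colour image. A schematic sketch with made-up Gaussian matching functions (the paper's actual functions are not specified here):

```python
import numpy as np

def tristimulus(spectrum, wavenumbers, cmfs):
    """Project one spectrum onto three matching functions -> a 3-vector."""
    return np.array([np.sum(spectrum * f(wavenumbers)) for f in cmfs])

def gauss(center, width=80.0):
    """Hypothetical Gaussian matching function centred at `center` cm^-1."""
    return lambda k: np.exp(-((k - center) / width) ** 2)

# Three made-up channels spanning the 900-1450 cm^-1 LWIR window
cmfs = [gauss(1000.0), gauss(1175.0), gauss(1350.0)]
k = np.linspace(900.0, 1450.0, 276)
spectrum = gauss(1350.0)(k)      # a target peak near the third channel
x, y, z = tristimulus(spectrum, k, cmfs)
print(z > y > x)                 # prints True: the matching channel dominates
```

    Applied per pixel, such triples map spectrally distinct materials to distinct colours, which is how background species and targets become separable at a glance.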

  1. Eyes Matched to the Prize: The State of Matched Filters in Insect Visual Circuits.

    PubMed

    Kohn, Jessica R; Heath, Sarah L; Behnia, Rudy

    2018-01-01

    Confronted with an ever-changing visual landscape, animals must be able to detect relevant stimuli and translate this information into behavioral output. A visual scene contains an abundance of information: to interpret the entirety of it would be uneconomical. To optimally perform this task, neural mechanisms exist to enhance the detection of important features of the sensory environment while simultaneously filtering out irrelevant information. This can be accomplished by using a circuit design that implements specific "matched filters" that are tuned to relevant stimuli. Following this rule, the well-characterized visual systems of insects have evolved to streamline feature extraction on both a structural and functional level. Here, we review examples of specialized visual microcircuits for vital behaviors across insect species, including feature detection, escape, and estimation of self-motion. Additionally, we discuss how these microcircuits are modulated to weigh relevant input with respect to different internal and behavioral states.

  2. Comparing object recognition from binary and bipolar edge images for visual prostheses.

    PubMed

    Jung, Jae-Hyun; Pu, Tian; Peli, Eli

    2016-11-01

    Visual prostheses require an effective representation method due to the limited display condition which has only 2 or 3 levels of grayscale in low resolution. Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary (black and white) edge images have been used to represent features to convey essential information. However, in scenes with a complex cluttered background, the recognition rate of the binary edge images by human observers is limited and additional information is required. The polarity of edges and cusps (black or white features on a gray background) carries important additional information; the polarity may provide shape from shading information missing in the binary edge image. This depth information may be restored by using bipolar edges. We compared object recognition rates from 16 binary edge images and bipolar edge images by 26 subjects to determine the possible impact of bipolar filtering in visual prostheses with 3 or more levels of grayscale. Recognition rates were higher with bipolar edge images and the improvement was significant in scenes with complex backgrounds. The results also suggest that erroneous shape from shading interpretation of bipolar edges resulting from pigment rather than boundaries of shape may confound the recognition.

  3. Individual's recollections of their experiences in eye clinics and understanding of their eye condition: results from a survey of visually impaired people in Britain.

    PubMed

    Douglas, Graeme; Pavey, Sue; Corcoran, Christine; Eperjesi, Frank

    2010-11-01

    Network 1000 is a UK-based panel survey of a representative sample of adults with registered visual impairment, with the aim of gathering information about people's opinions and circumstances. Participants were interviewed (Survey 1, n = 1007: 2005; Survey 2, n = 922: 2006/07) on a range of topics including the nature of their eye condition, details of other health issues, use of low vision aids (LVAs) and their experiences in eye clinics. Eleven percent of individuals did not know the name of their eye condition. Seventy percent of participants reported having long-term health problems or disabilities in addition to visual impairment and 43% reported having hearing difficulties. Seventy-one percent reported using LVAs for reading tasks. Participants who had become registered as visually impaired in the previous 8 years (n = 395) were asked questions about non-medical information received in the eye clinic around that time. Reported information received included advice about 'registration' (48%), low vision aids (45%) and social care routes (43%); 17% reported receiving no information. While 70% of people were satisfied with the information received, this was lower for those of working age (56%) compared with retirement age (72%). Those who recalled receiving additional non-medical information and advice at the time of registration also recalled their experiences more positively. Whilst caution should be applied to the accuracy of recall of past events, the data provide a valuable insight into the types of information and support that visually impaired people feel they would benefit from in the eye clinic. © 2010 The Authors. Ophthalmic and Physiological Optics © 2010 The College of Optometrists.

  4. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. 
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390

  5. Methods of visualizing graphs

    DOEpatents

    Wong, Pak C.; Mackey, Patrick S.; Perrine, Kenneth A.; Foote, Harlan P.; Thomas, James J.

    2008-12-23

    Methods for visualizing a graph by automatically drawing elements of the graph as labels are disclosed. In one embodiment, the method comprises receiving node information and edge information from an input device and/or communication interface, constructing a graph layout based at least in part on that information, wherein the edges are automatically drawn as labels, and displaying the graph on a display device according to the graph layout. In some embodiments, the nodes are automatically drawn as labels instead of, or in addition to, the label-edges.
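A minimal sketch of the edge-as-label idea, assuming the simplest placement rule (each edge's label drawn at the edge midpoint, rotated to run along the edge direction); the patent abstract does not prescribe these specifics:

```python
import math

def edge_label_placements(node_pos, edges):
    """For each (u, v, label) edge, compute where to draw the label text:
    the edge midpoint, plus the rotation in degrees that aligns the text
    with the edge direction."""
    placements = {}
    for u, v, label in edges:
        (x1, y1), (x2, y2) = node_pos[u], node_pos[v]
        mid = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
        placements[label] = (mid, angle)
    return placements

pos = {"A": (0.0, 0.0), "B": (2.0, 2.0)}
layout = edge_label_placements(pos, [("A", "B", "connects")])
```

A renderer would then draw the label string itself in place of a plain line segment, so the edge and its annotation occupy the same ink.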

  6. Higher Level Visual Cortex Represents Retinotopic, Not Spatiotopic, Object Location

    PubMed Central

    Kanwisher, Nancy

    2012-01-01

    The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex—important for stable object recognition and action—contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a “searchlight” analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates. PMID:22190434

  7. Early Visual Foraging in Relationship to Familial Risk for Autism and Hyperactivity/Inattention.

    PubMed

    Gliga, Teodora; Smith, Tim J; Likely, Noreen; Charman, Tony; Johnson, Mark H

    2015-12-04

    Information foraging is atypical in both autism spectrum disorders (ASDs) and ADHD; however, while ASD is associated with restricted exploration and preference for sameness, ADHD is characterized by hyperactivity and increased novelty seeking. Here, we ask whether similar biases are present in visual foraging in younger siblings of children with a diagnosis of ASD with or without additional high levels of hyperactivity and inattention. Fifty-four low-risk controls (LR) and 50 high-risk siblings (HR) took part in an eye-tracking study at 8 and 14 months and at 3 years of age. At 8 months, siblings of children with ASD and low levels of hyperactivity/inattention (HR/ASD-HI) were more likely to return to previously visited areas in the visual scene than were LR and siblings of children with ASD and high levels of hyperactivity/inattention (HR/ASD+HI). We show that visual foraging is atypical in infants at-risk for ASD. We also reveal a paradoxical effect, in that additional family risk for ADHD core symptoms mitigates the effect of ASD risk on visual information foraging. © The Author(s) 2015.

  8. Feature-based memory-driven attentional capture: visual working memory content affects visual attention.

    PubMed

    Olivers, Christian N L; Meijer, Frank; Theeuwes, Jan

    2006-10-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by an additional memory task. Singleton distractors interfered even more when they were identical or related to the object held in memory, but only when it was difficult to verbalize the memory content. Furthermore, this content-specific interaction occurred for features that were relevant to the memory task but not for irrelevant features of the same object or for once-remembered objects that could be forgotten. Finally, memory-related distractors attracted more eye movements but did not result in longer fixations. The results demonstrate memory-driven attentional capture on the basis of content-specific representations. Copyright 2006 APA.

  9. Augmenting Amyloid PET Interpretations With Quantitative Information Improves Consistency of Early Amyloid Detection.

    PubMed

    Harn, Nicholas R; Hunt, Suzanne L; Hill, Jacqueline; Vidoni, Eric; Perry, Mark; Burns, Jeffrey M

    2017-08-01

    Establishing reliable methods for interpreting elevated cerebral amyloid-β plaque on PET scans is increasingly important for radiologists, as availability of PET imaging in clinical practice increases. We examined a 3-step method to detect plaque in cognitively normal older adults, focusing on the additive value of quantitative information during the PET scan interpretation process. Fifty-five 18F-florbetapir PET scans were evaluated by 3 experienced raters. Scans were first visually interpreted as having "elevated" or "nonelevated" plaque burden ("Visual Read"). Images were then processed using a standardized quantitative analysis software (MIMneuro) to generate whole brain and region of interest SUV ratios. This "Quantitative Read" was considered elevated if at least 2 of 6 regions of interest had an SUV ratio of more than 1.1. The final interpretation combined both visual and quantitative data together ("VisQ Read"). Cohen's kappa values were assessed as a measure of interpretation agreement. Plaque was elevated in 25.5% to 29.1% of the 165 total Visual Reads. Interrater agreement was strong (kappa = 0.73-0.82) and consistent with reported values. Quantitative Reads were elevated in 45.5% of participants. Final VisQ Reads changed from initial Visual Reads in 16 interpretations (9.7%), with most changing from "nonelevated" Visual Reads to "elevated." These changed interpretations demonstrated lower plaque quantification than those initially read as "elevated" that remained unchanged. Interrater variability improved for VisQ Reads with the addition of quantitative information (kappa = 0.88-0.96). Inclusion of quantitative information increases consistency of PET scan interpretations for early detection of cerebral amyloid-β plaque accumulation.
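The "Quantitative Read" rule stated in the abstract (elevated if at least 2 of 6 regions of interest exceed an SUV ratio of 1.1) reduces to a short function; the region names and SUVr values below are made up for illustration:

```python
def quantitative_read(suvr_by_region, threshold=1.1, min_regions=2):
    """The abstract's rule: a scan is 'elevated' when at least min_regions
    of the regions of interest exceed the SUV-ratio threshold."""
    n_above = sum(1 for v in suvr_by_region.values() if v > threshold)
    return "elevated" if n_above >= min_regions else "nonelevated"

# Hypothetical regional SUV ratios (region names are illustrative only)
scan = {"frontal": 1.15, "temporal": 1.02, "parietal": 1.21,
        "cingulate": 1.05, "precuneus": 1.08, "occipital": 0.98}
result = quantitative_read(scan)  # two regions exceed 1.1 -> "elevated"
```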

  10. Correspondence of presaccadic activity in the monkey primary visual cortex with saccadic eye movements

    PubMed Central

    Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A. F.

    2004-01-01

    We continuously scan the visual world via rapid or saccadic eye movements. Such eye movements are guided by visual information, and thus the oculomotor structures that determine when and where to look need visual information to control the eye movements. To know whether visual areas contain activity that may contribute to the control of eye movements, we recorded neural responses in the visual cortex of monkeys engaged in a delayed figure-ground detection task and analyzed the activity during the period of oculomotor preparation. We show that ≈100 ms before the onset of visually and memory-guided saccades neural activity in V1 becomes stronger where the strongest presaccadic responses are found at the location of the saccade target. In addition, in memory-guided saccades the strength of presaccadic activity shows a correlation with the onset of the saccade. These findings indicate that the primary visual cortex contains saccade-related responses and participates in visually guided oculomotor behavior. PMID:14970334

  11. Widespread correlation patterns of fMRI signal across visual cortex reflect eccentricity organization

    PubMed Central

    Arcaro, Michael J; Honey, Christopher J; Mruczek, Ryan EB; Kastner, Sabine; Hasson, Uri

    2015-01-01

    The human visual system can be divided into over two-dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas on data collected during rest conditions and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the underlying widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas. DOI: http://dx.doi.org/10.7554/eLife.03952.001 PMID:25695154

  12. An Information-Theoretic-Cluster Visualization for Self-Organizing Maps.

    PubMed

    Brito da Silva, Leonardo Enzo; Wunsch, Donald C

    2018-06-01

    Improved data visualization will be a significant tool to enhance cluster analysis. In this paper, an information-theoretic-based method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set whose cardinality controls the granularity of the IT-vis and with which the first- and second-order statistics are computed and used to estimate their probability density functions. These are used to calculate the similarity measure, based on Renyi's quadratic cross entropy and cross information potential (CIP). The introduced visualizations combine the low computational cost and kernel estimation properties of the representative CIP and the data structure representation of a single-linkage-based grouping algorithm to generate an enhanced SOM-based visualization. The visual quality of the IT-vis is assessed by comparing it with other visualization methods for several real-world and synthetic benchmark data sets. Thus, this paper also contains a significant literature survey. The experiments demonstrate the IT-vis cluster revealing capabilities, in which cluster boundaries are sharply captured. Additionally, the information-theoretic visualizations are used to perform clustering of the SOM. Compared with other methods, IT-vis of large SOMs yielded the best results in this paper, for which the quality of the final partitions was evaluated using external validity indices.
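The similarity measure named above rests on Rényi's quadratic cross entropy estimated through the cross information potential (CIP); with Parzen windowing, the CIP between two sample sets is the mean pairwise Gaussian kernel value. A minimal sketch under those assumptions (fixed kernel width, no SOM machinery):

```python
import numpy as np

def cross_information_potential(X, Y, sigma=1.0):
    """Parzen estimate of the CIP between two sample sets: the mean
    Gaussian kernel value over all cross pairs (x_i, y_j)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return float(np.exp(-d2 / (2.0 * sigma ** 2)).mean())

def renyi_quadratic_cross_entropy(X, Y, sigma=1.0):
    """H2(X; Y) = -log CIP: smaller when the two distributions overlap more."""
    return -np.log(cross_information_potential(X, Y, sigma))

rng = np.random.default_rng(1)
A = rng.normal(0.0, 1.0, size=(50, 2))
B = rng.normal(0.0, 1.0, size=(50, 2))   # same distribution as A
C = rng.normal(8.0, 1.0, size=(50, 2))   # distant distribution
```

In the IT-vis, this quantity computed between the data subsets of adjacent SOM neurons replaces the Euclidean inter-neuron distance of the unified distance matrix.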

  13. Health motivation and product design determine consumers' visual attention to nutrition information on food products.

    PubMed

    Visschers, Vivianne H M; Hess, Rebecca; Siegrist, Michael

    2010-07-01

    In the present study we investigated consumers' visual attention to nutrition information on food products using an indirect instrument, an eye tracker. In addition, we looked at whether people with a health motivation focus on nutrition information on food products more than people with a taste motivation. Respondents were instructed to choose one of five cereals for either the kindergarten (health motivation) or the student cafeteria (taste motivation). The eye tracker measured their visual attention during this task. Then respondents completed a short questionnaire. Laboratory of the ETH Zurich, Switzerland. Videos and questionnaires from thirty-two students (seventeen males; mean age 24.91 years) were analysed. Respondents with a health motivation viewed the nutrition information on the food products for longer and more often than respondents with a taste motivation. Health motivation also seemed to stimulate deeper processing of the nutrition information. The student cafeteria group focused primarily on the other information and did this for longer and more often than the health motivation group. Additionally, the package design affected participants' nutrition information search. Two factors appear to influence whether people pay attention to nutrition information on food products: their motivation and the product's design. If the package design does not sufficiently facilitate the localization of nutrition information, health motivation can stimulate consumers to look for nutrition information so that they may make a more deliberate food choice.

  14. Prefrontal contributions to visual selective attention.

    PubMed

    Squire, Ryan F; Noudoost, Behrad; Schafer, Robert J; Moore, Tirin

    2013-07-08

    The faculty of attention endows us with the capacity to process important sensory information selectively while disregarding information that is potentially distracting. Much of our understanding of the neural circuitry underlying this fundamental cognitive function comes from neurophysiological studies within the visual modality. Past evidence suggests that a principal function of the prefrontal cortex (PFC) is selective attention and that this function involves the modulation of sensory signals within posterior cortices. In this review, we discuss recent progress in identifying the specific prefrontal circuits controlling visual attention and its neural correlates within the primate visual system. In addition, we examine the persisting challenge of precisely defining how behavior should be affected when attentional function is lost.

  15. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.

  16. How do schizophrenia patients use visual information to decode facial emotion?

    PubMed

    Lee, Junghee; Gosselin, Frédéric; Wynn, Jonathan K; Green, Michael F

    2011-09-01

    Impairment in recognizing facial emotions is a prominent feature of schizophrenia patients, but the underlying mechanism of this impairment remains unclear. This study investigated the specific aspects of visual information that are critical for schizophrenia patients to recognize emotional expression. Using the Bubbles technique, we probed the use of visual information during a facial emotion discrimination task (fear vs. happy) in 21 schizophrenia patients and 17 healthy controls. Visual information was sampled through randomly located Gaussian apertures (or "bubbles") at 5 spatial frequency scales. Online calibration of the amount of face exposed through bubbles was used to ensure 75% overall accuracy for each subject. Least-square multiple linear regression analyses between sampled information and accuracy were performed to identify critical visual information that was used to identify emotional expression. To accurately identify emotional expression, schizophrenia patients required more exposure of facial areas (i.e., more bubbles) compared with healthy controls. To identify fearful faces, schizophrenia patients relied less on bilateral eye regions at high-spatial frequency compared with healthy controls. For identification of happy faces, schizophrenia patients relied on the mouth and eye regions; healthy controls did not utilize eyes and used the mouth much less than patients did. Schizophrenia patients needed more facial information to recognize emotional expression of faces. In addition, patients differed from controls in their use of high-spatial frequency information from eye regions to identify fearful faces. This study provides direct evidence that schizophrenia patients employ an atypical strategy of using visual information to recognize emotional faces.
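The Bubbles sampling step can be sketched as a mask of random Gaussian apertures multiplied into the stimulus. The aperture count and width below are arbitrary, and the actual technique additionally samples at five spatial-frequency scales, which this sketch omits:

```python
import numpy as np

def bubbles_mask(shape, n_bubbles, sigma, rng):
    """Mask of random Gaussian apertures ('bubbles') revealing parts of a
    stimulus; values range from 0 (hidden) to 1 (fully revealed)."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        bubble = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        mask = np.maximum(mask, bubble)
    return mask

rng = np.random.default_rng(0)
mask = bubbles_mask((64, 64), n_bubbles=5, sigma=6.0, rng=rng)
face = rng.random((64, 64))      # stand-in for a face image
revealed = face * mask           # only the sampled regions remain visible
```

Regressing trial-by-trial accuracy on which pixels were revealed then identifies the facial regions diagnostic for each emotion, as in the analysis described above.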

  17. Is cross-modal integration of emotional expressions independent of attentional resources?

    PubMed

    Vroomen, J; Driver, J; de Gelder, B

    2001-12-01

    In this study, we examined whether integration of visual and auditory information about emotions requires limited attentional resources. Subjects judged whether a voice expressed happiness or fear, while trying to ignore a concurrently presented static facial expression. As an additional task, the subjects had to add two numbers together rapidly (Experiment 1), count the occurrences of a target digit in a rapid serial visual presentation (Experiment 2), or judge the pitch of a tone as high or low (Experiment 3). The visible face had an impact on judgments of the emotion of the heard voice in all the experiments. This cross-modal effect was independent of whether or not the subjects performed a demanding additional task. This suggests that integration of visual and auditory information about emotions may be a mandatory process, unconstrained by attentional resources.

  18. Effects of aging on pointing movements under restricted visual feedback conditions.

    PubMed

    Zhang, Liancun; Yang, Jiajia; Inai, Yoshinobu; Huang, Qiang; Wu, Jinglong

    2015-04-01

    The goal of this study was to investigate the effects of aging on pointing movements under restricted visual feedback of hand movement and target location. Fifteen young subjects and fifteen elderly subjects performed pointing movements under four visual feedback conditions: full visual feedback of hand movement and target location (FV), no visual feedback of hand movement or target location (NV), no visual feedback of hand movement (NM) and no visual feedback of target location (NT). This study suggested that Fitts' law applied to pointing movements of elderly adults under the different visual restriction conditions. Moreover, a significant main effect of aging on movement times was found in all four tasks. Peripheral and central changes may be the key factors underlying these different characteristics. Furthermore, no significant main effect of age on mean accuracy rate was found under the restricted visual feedback conditions. The present study suggested that the elderly subjects made use of the available sensory information in a manner very similar to the young subjects under restricted visual feedback conditions. In addition, during the pointing movement, information about the hand's movement was more useful than information about the target location for both young and elderly subjects. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.

    PubMed

    Williams, Jason A

    2012-06-01

    The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.

  20. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. 
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Literature review of visual representation of the results of benefit-risk assessments of medicinal products.

    PubMed

    Hallgreen, Christine E; Mt-Isa, Shahrul; Lieftucht, Alfons; Phillips, Lawrence D; Hughes, Diana; Talbot, Susan; Asiimwe, Alex; Downey, Gerald; Genov, Georgy; Hermann, Richard; Noel, Rebecca; Peters, Ruth; Micaleff, Alain; Tzoulaki, Ioanna; Ashby, Deborah

    2016-03-01

    The PROTECT Benefit-Risk group is dedicated to research in methods for continuous benefit-risk monitoring of medicines, including the presentation of the results, with a particular emphasis on graphical methods. A comprehensive review was performed to identify visuals used for medical risk and benefit-risk communication. The identified visual displays were grouped into visual types, and each visual type was appraised based on five criteria: intended audience, intended message, knowledge required to understand the visual, unintentional messages that may be derived from the visual and missing information that may be needed to understand the visual. Sixty-six examples of visual formats were identified from the literature and classified into 14 visual types. We found that there is not one single visual format that is consistently superior to others for the communication of benefit-risk information. In addition, we found that most of the drawbacks found in the visual formats could be considered general to visual communication, although some appear more relevant to specific formats and should be considered when creating visuals for different audiences depending on the exact message to be communicated. We have arrived at recommendations for the use of visual displays for benefit-risk communication; these recommendations concern the creation of visuals. We outline four criteria to determine audience-visual compatibility and consider these to be a key task in creating any visual. Next we propose specific visual formats of interest, to be explored further for their ability to address nine different types of benefit-risk analysis information. Copyright © 2015 John Wiley & Sons, Ltd.

  2. Stimulus information contaminates summation tests of independent neural representations of features

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2002-01-01

Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli was chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not the combination across independent features.
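The summation logic at issue can be made concrete with a toy ideal-observer calculation (a sketch of the standard independence prediction, not the authors' model): if features are coded by independent channels, their sensitivities d′ combine in quadrature, so performance should rise as features are added even when no single channel improves.

```python
import math

def combined_dprime(dprimes):
    """Optimal combination of independent cues: d' values add in quadrature,
    d'_combined = sqrt(sum of d'_i squared)."""
    return math.sqrt(sum(d * d for d in dprimes))

# With two equally informative features (d' = 1.0 each), the independence
# model predicts a sqrt(2) improvement over a single feature -- the kind of
# summation benefit that, per the abstract, may instead reflect extra
# stimulus information available to any observer.
single = combined_dprime([1.0])
double = combined_dprime([1.0, 1.0])
```

This is why the study equates stimulus information across conditions: only then does a summation benefit (or its absence) speak to independent neural coding.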

  3. PATTERNS OF CLINICALLY SIGNIFICANT COGNITIVE IMPAIRMENT IN HOARDING DISORDER.

    PubMed

    Mackin, R Scott; Vigil, Ofilio; Insel, Philip; Kivowitz, Alana; Kupferman, Eve; Hough, Christina M; Fekri, Shiva; Crothers, Ross; Bickford, David; Delucchi, Kevin L; Mathews, Carol A

    2016-03-01

    The cognitive characteristics of individuals with hoarding disorder (HD) are not well understood. Existing studies are relatively few and somewhat inconsistent but suggest that individuals with HD may have specific dysfunction in the cognitive domains of categorization, speed of information processing, and decision making. However, there have been no studies evaluating the degree to which cognitive dysfunction in these domains reflects clinically significant cognitive impairment (CI). Participants included 78 individuals who met DSM-V criteria for HD and 70 age- and education-matched controls. Cognitive performance on measures of memory, attention, information processing speed, abstract reasoning, visuospatial processing, decision making, and categorization ability was evaluated for each participant. Rates of clinical impairment for each measure were compared, as were age- and education-corrected raw scores for each cognitive test. HD participants showed greater incidence of CI on measures of visual memory, visual detection, and visual categorization relative to controls. Raw-score comparisons between groups showed similar results with HD participants showing lower raw-score performance on each of these measures. In addition, in raw-score comparisons HD participants also demonstrated relative strengths compared to control participants on measures of verbal and visual abstract reasoning. These results suggest that HD is associated with a pattern of clinically significant CI in some visually mediated neurocognitive processes including visual memory, visual detection, and visual categorization. Additionally, these results suggest HD individuals may also exhibit relative strengths, perhaps compensatory, in abstract reasoning in both verbal and visual domains. © 2015 Wiley Periodicals, Inc.

  4. An Empirical Study on Using Visual Embellishments in Visualization.

    PubMed

    Borgo, R; Abdul-Rahman, A; Mohamed, F; Grant, P W; Reppa, I; Floridi, L; Chen, Min

    2012-12-01

    In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of using rhetorical illustrations or embellishments in visualization have so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces "divided attention", and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.

  5. Comparing object recognition from binary and bipolar edge images for visual prostheses

    PubMed Central

    Jung, Jae-Hyun; Pu, Tian; Peli, Eli

    2017-01-01

Visual prostheses require an effective image representation method because of their limited displays, which offer only 2 or 3 levels of grayscale at low resolution. Edges derived from abrupt luminance changes in images carry essential information for object recognition. Typical binary (black-and-white) edge images have been used to represent features and convey this essential information. However, in scenes with a complex, cluttered background, the recognition rate of binary edge images by human observers is limited and additional information is required. The polarity of edges and cusps (black or white features on a gray background) carries important additional information; the polarity may provide shape-from-shading information missing in the binary edge image. This depth information may be restored by using bipolar edges. We compared object recognition rates for 16 binary and bipolar edge images across 26 subjects to determine the possible impact of bipolar filtering in visual prostheses with 3 or more levels of grayscale. Recognition rates were higher with bipolar edge images, and the improvement was significant in scenes with complex backgrounds. The results also suggest that erroneous shape-from-shading interpretation of bipolar edges arising from pigment rather than shape boundaries may confound recognition. PMID:28458481
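The binary-versus-bipolar distinction can be sketched as follows (a minimal illustration using a 4-neighbour Laplacian; the filter choice and thresholds are assumptions, not the paper's exact pipeline):

```python
import numpy as np

def laplacian(img):
    """4-neighbour Laplacian with edge padding (an assumed, simple filter)."""
    p = np.pad(img.astype(float), 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4 * p[1:-1, 1:-1])

def binary_edges(img, t=1.0):
    """Two-level edge map: white edges (1) on a black background (0)."""
    return (np.abs(laplacian(img)) > t).astype(int)

def bipolar_edges(img, t=1.0):
    """Three-level edge map: black (-1) or white (+1) features on a gray
    background (0), keeping the polarity of the luminance change."""
    lap = laplacian(img)
    out = np.zeros_like(lap, dtype=int)
    out[lap > t] = 1      # luminance rises toward this pixel's neighbours
    out[lap < -t] = -1    # luminance falls toward this pixel's neighbours
    return out

# A dark square on a bright background: the binary map marks the border,
# while the bipolar map additionally encodes which side is darker.
img = np.full((8, 8), 200.0)
img[2:6, 2:6] = 50.0
```

Both maps flag the same border pixels; only the bipolar map retains the sign information the abstract argues supports shape-from-shading.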

  6. A Software Developer’s Guide to Informal Evaluation of Visual Analytics Environments Using VAST Challenge Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Kristin A.; Scholtz, Jean; Whiting, Mark A.

The VAST Challenge has been a popular venue for academic and industry participants for over ten years. Many participants comment that the majority of their time in preparing VAST Challenge entries is discovering elements in their software environments that need to be redesigned in order to solve the given task. Fortunately, there is no need to wait until the VAST Challenge is announced to test out software systems. The Visual Analytics Benchmark Repository contains all past VAST Challenge tasks, data, solutions and submissions. This paper details the various types of evaluations that may be conducted using the Repository information. In this paper we describe how developers can do informal evaluations of various aspects of their visual analytics environments using VAST Challenge information. Aspects that can be evaluated include the appropriateness of the software for various tasks, the various data types and formats that can be accommodated, the effectiveness and efficiency of the process supported by the software, and the intuitiveness of the visualizations and interactions. Researchers can compare their visualizations and interactions to those submitted to determine novelty. In addition, the paper provides pointers to various guidelines that software teams can use to evaluate the usability of their software. While these evaluations are not a replacement for formal evaluation methods, this information can be extremely useful during the development of visual analytics environments.

  7. Usability of stereoscopic view in teleoperation

    NASA Astrophysics Data System (ADS)

    Boonsuk, Wutthigrai

    2015-03-01

Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. The 3D stereoscopic visualization technology has been used in a growing number of consumer products such as 3D televisions and 3D glasses for gaming systems. This technology exploits the fact that the human brain develops depth perception by retrieving information from the two eyes: the brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken from viewpoints a slight distance apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision compared with a 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.
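The depth-from-disparity geometry the abstract appeals to follows the standard pinhole-stereo relation Z = f·B/d; a small sketch, where the focal length, baseline and disparity values are illustrative assumptions rather than numbers from the paper:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo relation: depth Z = f * B / d.
    f_px: focal length in pixels, baseline_m: camera separation in metres,
    disparity_px: horizontal shift of a point between left/right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# Illustrative numbers: 800 px focal length, 6.5 cm baseline (eye-like).
near = depth_from_disparity(800, 0.065, 40)   # large disparity -> near (~1.3 m)
far = depth_from_disparity(800, 0.065, 4)     # small disparity -> far (~13 m)
```

The inverse relation between disparity and depth is what makes stereo most informative at close range, the regime typical of teleoperation workspaces.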

  8. Development of driver’s assistant system of additional visual information of blind areas for Gazelle Next

    NASA Astrophysics Data System (ADS)

    Makarov, V.; Korelin, O.; Koblyakov, D.; Kostin, S.; Komandirov, A.

    2018-02-01

The article is devoted to the development of an Advanced Driver Assistance System (ADAS) for the GAZelle NEXT car. The project aims at developing a visual information system for the driver integrated into the windshield pillars. The developed system implements the following functions: assistance in maneuvering and parking; recognition of road signs; warning the driver of a possible frontal collision; monitoring of blind spots; "transparent" vision through the windshield pillars, widening the field of view behind them; visual and audible information about the traffic situation; lane departure warning; monitoring of the driver's condition; navigation; and an all-round view. The layout of the sensors of the developed driver visual information system is presented, the installation of the system on a prototype vehicle is considered, and possible changes to the interior and dashboard of the car are described. The implementation aims at improving the driver's awareness of the surroundings and at developing an ergonomic interior for this system within the new functional cabin of the GAZelle NEXT vehicle equipped with the driver visual information system.

  9. Toward Model Building for Visual Aesthetic Perception

    PubMed Central

    Lughofer, Edwin; Zeng, Xianyi

    2017-01-01

Several models of visual aesthetic perception have been proposed in recent years. Such models have drawn on investigations into the neural underpinnings of visual aesthetics, utilizing neurophysiological techniques and brain imaging techniques including functional magnetic resonance imaging, magnetoencephalography, and electroencephalography. The neural mechanisms underlying the aesthetic perception of the visual arts have been explained from the perspectives of neuropsychology, brain and cognitive science, informatics, and statistics. Although corresponding models have been constructed, the majority of these models contain elements that are difficult to simulate or quantify using simple mathematical functions. In this review, we discuss the hypotheses, conceptions, and structures of six typical models for human aesthetic appreciation in the visual domain: the neuropsychological, information processing, mirror, quartet, and two hierarchical feed-forward layered models. Additionally, the neural foundation of aesthetic perception, appreciation, or judgement for each model is summarized. The development of a unified framework for the neurobiological mechanisms underlying the aesthetic perception of visual art and the validation of this framework via mathematical simulation is an interesting challenge in neuroaesthetics research. This review aims to provide information regarding the most promising proposals for bridging the gap between visual information processing and brain activity involved in aesthetic appreciation. PMID:29270194

  10. Additional helmet and pack loading reduce situational awareness during the establishment of marksmanship posture.

    PubMed

    Lim, Jongil; Palmer, Christopher J; Busa, Michael A; Amado, Avelino; Rosado, Luis D; Ducharme, Scott W; Simon, Darnell; Van Emmerik, Richard E A

    2017-06-01

The pickup of visual information is critical for controlling movement and maintaining situational awareness in dangerous situations. Altered coordination while wearing protective equipment may impact the likelihood of injury or death. This investigation examined the consequences of load magnitude and distribution on situational awareness, segmental coordination and head gaze in several protective equipment ensembles. Twelve soldiers stepped down onto force plates and were instructed to quickly and accurately identify visual information while establishing marksmanship posture in protective equipment. Time to discriminate visual information was extended when additional pack and helmet loads were added, with the small increase in helmet load having the largest effect. Greater head-leading and in-phase trunk-head coordination were found with lighter pack loads, while trunk-leading coordination increased and head gaze dynamics were more disrupted with heavier pack loads. Additional armour load in the vest had no consequences for time to discriminate, coordination or head dynamics. This suggests that the addition of head-borne load should be carefully considered when integrating new technology and that up-armouring does not necessarily have negative consequences for marksmanship performance. Practitioner Summary: Understanding the trade-space between protection and reductions in task performance continues to challenge those developing personal protective equipment. These methods provide an approach that can help optimise equipment design and loading techniques by quantifying changes in task performance and the emergent coordination dynamics that underlie that performance.

  11. Augmented reality warnings in vehicles: Effects of modality and specificity on effectiveness.

    PubMed

    Schwarz, Felix; Fastenmeier, Wolfgang

    2017-04-01

    In the future, vehicles will be able to warn drivers of hidden dangers before they are visible. Specific warning information about these hazards could improve drivers' reactions and the warning effectiveness, but could also impair them, for example, by additional cognitive-processing costs. In a driving simulator study with 88 participants, we investigated the effects of modality (auditory vs. visual) and specificity (low vs. high) on warning effectiveness. For the specific warnings, we used augmented reality as an advanced technology to display the additional auditory or visual warning information. Part one of the study concentrates on the effectiveness of necessary warnings and part two on the drivers' compliance despite false alarms. For the first warning scenario, we found several positive main effects of specificity. However, subsequent effects of specificity were moderated by the modality of the warnings. The specific visual warnings were observed to have advantages over the three other warning designs concerning gaze and braking reaction times, passing speeds and collision rates. Besides the true alarms, braking reaction times as well as subjective evaluation after these warnings were still improved despite false alarms. The specific auditory warnings were revealed to have only a few advantages, but also several disadvantages. The results further indicate that the exact coding of additional information, beyond its mere amount and modality, plays an important role. Moreover, the observed advantages of the specific visual warnings highlight the potential benefit of augmented reality coding to improve future collision warnings. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. A Location Aware Middleware Framework for Collaborative Visual Information Discovery and Retrieval

    DTIC Science & Technology

    2017-09-14

Compton, Andrew J. M.

  13. Information Design: A Bibliography.

    ERIC Educational Resources Information Center

    Albers, Michael J.; Lisberg, Beth Conney

    2000-01-01

    Presents a 17-item annotated list of essential books on information design chosen by members of the InfoDesign e-mail list. Includes a 113-item unannotated bibliography of additional works, on topics of creativity and critical thinking; visual thinking; graphic design; infographics; information design; instructional design; interface design;…

  14. Integration of Visual and Proprioceptive Limb Position Information in Human Posterior Parietal, Premotor, and Extrastriate Cortex.

    PubMed

    Limanowski, Jakub; Blankenburg, Felix

    2016-03-02

    The brain constructs a flexible representation of the body from multisensory information. Previous work on monkeys suggests that the posterior parietal cortex (PPC) and ventral premotor cortex (PMv) represent the position of the upper limbs based on visual and proprioceptive information. Human experiments on the rubber hand illusion implicate similar regions, but since such experiments rely on additional visuo-tactile interactions, they cannot isolate visuo-proprioceptive integration. Here, we independently manipulated the position (palm or back facing) of passive human participants' unseen arm and of a photorealistic virtual 3D arm. Functional magnetic resonance imaging (fMRI) revealed that matching visual and proprioceptive information about arm position engaged the PPC, PMv, and the body-selective extrastriate body area (EBA); activity in the PMv moreover reflected interindividual differences in congruent arm ownership. Further, the PPC, PMv, and EBA increased their coupling with the primary visual cortex during congruent visuo-proprioceptive position information. These results suggest that human PPC, PMv, and EBA evaluate visual and proprioceptive position information and, under sufficient cross-modal congruence, integrate it into a multisensory representation of the upper limb in space. The position of our limbs in space constantly changes, yet the brain manages to represent limb position accurately by combining information from vision and proprioception. Electrophysiological recordings in monkeys have revealed neurons in the posterior parietal and premotor cortices that seem to implement and update such a multisensory limb representation, but this has been difficult to demonstrate in humans. 
Our fMRI experiment shows that human posterior parietal, premotor, and body-selective visual brain areas respond preferentially to a virtual arm seen in a position corresponding to one's unseen hidden arm, while increasing their communication with regions conveying visual information. These brain areas thus likely integrate visual and proprioceptive information into a flexible multisensory body representation. Copyright © 2016 the authors 0270-6474/16/362582-08$15.00/0.

  15. The Benefits of More Electronic Screen Space on Students' Retention of Material in Classroom Lectures

    ERIC Educational Resources Information Center

    Lanir, Joel; Booth, Kellogg S.; Hawkey, Kirstie

    2010-01-01

    Many lecture halls today have two or more screens to be used by instructors for lectures with computer-supported visual aids. Typically, this additional screen real estate is not used to display additional information; rather a single stream of information is projected on all screens. We describe a controlled laboratory study that empirically…

  16. The impact of using visual images of the body within a personalized health risk assessment: an experimental study.

    PubMed

    Hollands, Gareth J; Marteau, Theresa M

    2013-05-01

    To examine the motivational impact of the addition of a visual image to a personalized health risk assessment and the underlying cognitive and emotional mechanisms. An online experimental study in which participants (n = 901; mean age = 27.2 years; 61.5% female) received an assessment and information focusing on the health implications of internal body fat and highlighting the protective benefits of physical activity. Participants were randomized to receive this in either (a) solely text form (control arm) or (b) text plus a visual image of predicted internal body fat (image arm). Participants received information representing one of three levels of health threat, determined by how physically active they were: high, moderate or benign. Main outcome measures were physical activity intentions (assessed pre- and post-intervention), worry, coherence and believability of the information. Intentions to undertake recommended levels of physical activity were significantly higher in the image arm, but only amongst those participants who received a high-threat communication. Believability of the results received was greater in the image arm and mediated the intervention effect on intentions. The addition of a visual image to a risk assessment led to small but significant increases in intentions to undertake recommended levels of physical activity in those at increased health risk. Limitations of the study and implications for future research are discussed. What is already known on this subject? Health risk information that is personalized to the individual may more strongly motivate risk-reducing behaviour change. Little prior research attention has been paid specifically to the motivational impact of personalized visual images and underlying mechanisms. What does this study add? In an experimental design, it is shown that receipt of visual images increases intentions to engage in risk-reducing behaviour, although only when a significant level of threat is presented. 
The study suggests that images increase the believability of health risk information and this may underlie motivational impact. © 2012 The British Psychological Society.

  17. fMRI evidence for sensorimotor transformations in human cortex during smooth pursuit eye movements.

    PubMed

    Kimmig, H; Ohlendorf, S; Speck, O; Sprenger, A; Rutschmann, R M; Haller, S; Greenlee, M W

    2008-01-01

    Smooth pursuit eye movements (SP) are driven by moving objects. The pursuit system processes the visual input signals and transforms this information into an oculomotor output signal. Despite the object's movement on the retina and the eyes' movement in the head, we are able to locate the object in space implying coordinate transformations from retinal to head and space coordinates. To test for the visual and oculomotor components of SP and the possible transformation sites, we investigated three experimental conditions: (I) fixation of a stationary target with a second target moving across the retina (visual), (II) pursuit of the moving target with the second target moving in phase (oculomotor), (III) pursuit of the moving target with the second target remaining stationary (visuo-oculomotor). Precise eye movement data were simultaneously measured with the fMRI data. Visual components of activation during SP were located in the motion-sensitive, temporo-parieto-occipital region MT+ and the right posterior parietal cortex (PPC). Motor components comprised more widespread activation in these regions and additional activations in the frontal and supplementary eye fields (FEF, SEF), the cingulate gyrus and precuneus. The combined visuo-oculomotor stimulus revealed additional activation in the putamen. Possible transformation sites were found in MT+ and PPC. The MT+ activation evoked by the motion of a single visual dot was very localized, while the activation of the same single dot motion driving the eye was rather extended across MT+. The eye movement information appeared to be dispersed across the visual map of MT+. This could be interpreted as a transfer of the one-dimensional eye movement information into the two-dimensional visual map. Potentially, the dispersed information could be used to remap MT+ to space coordinates rather than retinal coordinates and to provide the basis for a motor output control. 
A similar interpretation holds for our results in the PPC region.

  18. Foveal and peripheral fields of vision influences perceptual skill in anticipating opponents' attacking position in volleyball.

    PubMed

    Schorer, Jörg; Rienhoff, Rebecca; Fischer, Lennart; Baker, Joseph

    2013-09-01

The importance of perceptual-cognitive expertise in sport has been repeatedly demonstrated. In this study we examined the role of different sources of visual information (i.e., foveal versus peripheral) in anticipating volleyball attack positions. Expert (n = 11), advanced (n = 13) and novice (n = 16) players completed an anticipation task that involved predicting the location of volleyball attacks. Video clips of volleyball attacks (n = 72) were spatially and temporally occluded to provide varying amounts of information to the participant. In addition, participants viewed the attacks under three visual conditions: full vision, foveal vision only, and peripheral vision only. Analysis of variance revealed significant between group differences in prediction accuracy with higher skilled players performing better than lower skilled players. Additionally, we found significant differences between temporal and spatial occlusion conditions. Each of these factors interacted with expertise separately, but not in combination. Importantly, for experts the sum of both fields of vision was superior to either source in isolation. Our results suggest different sources of visual information work collectively to facilitate expert anticipation in time-constrained sports and reinforce the complexity of expert perception.

  19. Functional neuroimaging and behavioral correlates of capacity decline in visual short-term memory after sleep deprivation.

    PubMed

    Chee, Michael W L; Chuah, Y M Lisa

    2007-05-29

    Sleep deprivation (SD) impairs short-term memory, but it is unclear whether this is because of reduced storage capacity or processes contributing to appropriate information encoding. We evaluated 30 individuals twice, once after a night of normal sleep and again after 24 h of SD. In each session, we evaluated visual memory capacity by presenting arrays of one to eight colored squares. Additionally, we measured cortical responses to varying visual array sizes without engaging memory. The magnitude of intraparietal sulcus activation and memory capacity after normal sleep were highly correlated. SD elicited a pattern of activation in both tasks, indicating that deficits in visual processing and visual attention accompany and could account for loss of short-term memory capacity. Additionally, a comparison between better and poorer performers showed that preservation of precuneus and temporoparietal junction deactivation with increasing memory load corresponds to less performance decline when one is sleep-deprived.

  20. Living Liquid: Design and Evaluation of an Exploratory Visualization Tool for Museum Visitors.

    PubMed

    Ma, J; Liao, I; Ma, Kwan-Liu; Frazier, J

    2012-12-01

    Interactive visualizations can allow science museum visitors to explore new worlds by seeing and interacting with scientific data. However, designing interactive visualizations for informal learning environments, such as museums, presents several challenges. First, visualizations must engage visitors on a personal level. Second, visitors often lack the background to interpret visualizations of scientific data. Third, visitors have very limited time at individual exhibits in museums. This paper examines these design considerations through the iterative development and evaluation of an interactive exhibit as a visualization tool that gives museumgoers access to scientific data generated and used by researchers. The exhibit prototype, Living Liquid, encourages visitors to ask and answer their own questions while exploring the time-varying global distribution of simulated marine microbes using a touchscreen interface. Iterative development proceeded through three rounds of formative evaluations using think-aloud protocols and interviews, each round informing a key visualization design decision: (1) what to visualize to initiate inquiry, (2) how to link data at the microscopic scale to global patterns, and (3) how to include additional data that allows visitors to pursue their own questions. Data from visitor evaluations suggests that, when designing visualizations for public audiences, one should (1) avoid distracting visitors from data that they should explore, (2) incorporate background information into the visualization, (3) favor understandability over scientific accuracy, and (4) layer data accessibility to structure inquiry. Lessons learned from this case study add to our growing understanding of how to use visualizations to actively engage learners with scientific data.

  1. Aberrant patterns of visual facial information usage in schizophrenia.

    PubMed

    Clark, Cameron M; Gosselin, Frédéric; Goghari, Vina M

    2013-05-01

    Deficits in facial emotion perception have been linked to poorer functional outcome in schizophrenia. However, the relationship between abnormal emotion perception and functional outcome remains poorly understood. To better understand the nature of facial emotion perception deficits in schizophrenia, we used the Bubbles Facial Emotion Perception Task to identify differences in usage of visual facial information in schizophrenia patients (n = 20) and controls (n = 20), when differentiating between angry and neutral facial expressions. As hypothesized, schizophrenia patients required more facial information than controls to accurately differentiate between angry and neutral facial expressions, and they relied on different facial features and spatial frequencies to differentiate these facial expressions. Specifically, schizophrenia patients underutilized the eye regions, overutilized the nose and mouth regions, and virtually ignored information presented at the lowest levels of spatial frequency. In addition, a post hoc one-tailed t test revealed a positive relationship of moderate strength between the degree of divergence from "normal" visual facial information usage in the eye region and lower overall social functioning. These findings provide direct support for aberrant patterns of visual facial information usage in schizophrenia in differentiating between socially salient emotional states. © 2013 American Psychological Association
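The Bubbles paradigm referenced here reveals a stimulus only through a set of randomly placed Gaussian apertures, so that accuracy across trials can be regressed onto the revealed regions. A minimal sketch of generating such a mask; image size, bubble count and bubble width are illustrative assumptions, not the study's parameters:

```python
import math
import random

def bubbles_mask(height, width, n_bubbles, sigma):
    """Sum of randomly centred 2D Gaussian apertures, clipped to [0, 1].
    Multiplying a face image pixel-wise by this mask reveals only the
    'bubbled' regions; elsewhere the image fades to the background."""
    centres = [(random.uniform(0, height - 1), random.uniform(0, width - 1))
               for _ in range(n_bubbles)]
    mask = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            v = sum(math.exp(-((y - cy) ** 2 + (x - cx) ** 2)
                             / (2 * sigma ** 2))
                    for cy, cx in centres)
            mask[y][x] = min(v, 1.0)
    return mask

random.seed(0)  # deterministic for illustration
m = bubbles_mask(32, 32, 5, 3.0)  # 5 apertures of s.d. 3 px on a 32x32 image
```

Aggregating which aperture locations co-occur with correct responses yields the diagnostic-region maps (e.g. eyes versus mouth) that this study compares between groups.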

  2. Depth reversals in stereoscopic displays driven by apparent size

    NASA Astrophysics Data System (ADS)

    Sacher, Gunnar; Hayes, Amy; Thornton, Ian M.; Sereno, Margaret E.; Malony, Allen D.

    1998-04-01

    In visual scenes, depth information is derived from a variety of monocular and binocular cues. When in conflict, a monocular cue is sometimes able to override the binocular information. We examined the accuracy of relative depth judgments in orthographic, stereoscopic displays and found that perceived relative size can override binocular disparity as a depth cue in a situation where the relative size information is itself generated from disparity information, not from retinal size difference. A size discrimination task confirmed the assumption that disparity information was perceived and used to generate apparent size differences. The tendency for the apparent size cue to override disparity information can be modulated by varying the strength of the apparent size cue. In addition, an analysis of reaction times provides supporting evidence for this novel depth reversal effect. We believe that human perception must be regarded as an important component of stereoscopic applications. Hence, if applications are to be effective and accurate, it is necessary to take into account the richness and complexity of the human visual perceptual system that interacts with them. We discuss implications of this and similar research for human performance in virtual environments, the design of visual presentations for virtual worlds, and the design of visualization tools.

  3. Gestalt Effects in Visual Working Memory.

    PubMed

    Kałamała, Patrycja; Sadowska, Aleksandra; Ordziniak, Wawrzyniec; Chuderski, Adam

    2017-01-01

    Four experiments investigated whether conforming to Gestalt principles, well known to drive visual perception, also facilitates the active maintenance of information in visual working memory (VWM). We used the change detection task, which required the memorization of visual patterns composed of several shapes. We observed no effects of symmetry of visual patterns on VWM performance. However, there was a moderate positive effect when a particular shape that was probed matched the shape of the whole pattern (the whole-part similarity effect). Data support the models assuming that VWM encodes not only particular objects of the perceptual scene but also the spatial relations between them (the ensemble representation). The ensemble representation may prime objects similar to its shape and thereby boost access to them. In contrast, the null effect of symmetry reflects the fact that this very feature of an ensemble does not yield any useful additional information for VWM.

  4. Filling-in visual motion with sounds.

    PubMed

    Väljamäe, A; Soto-Faraco, S

    2008-10-01

    Information about the motion of objects can be extracted by multiple sensory modalities, and, as a consequence, object motion perception typically involves the integration of multi-sensory information. Often, in naturalistic settings, the flow of such information can be rather discontinuous (e.g. a cat racing through the furniture in a cluttered room is partly seen and partly heard). This study addressed audio-visual interactions in the perception of time-sampled object motion by measuring adaptation after-effects. We found significant auditory after-effects following adaptation to unisensory auditory and visual motion in depth, sampled at 12.5 Hz. The visually induced (cross-modal) auditory motion after-effect was eliminated if visual adaptors flashed at half of the rate (6.25 Hz). Remarkably, the addition of the high-rate acoustic flutter (12.5 Hz) to this ineffective, sparsely time-sampled, visual adaptor restored the auditory after-effect to a level comparable to what was seen with high-rate bimodal adaptors (flashes and beeps). Our results suggest that this auditory-induced reinstatement of the motion after-effect from the poor visual signals resulted from the occurrence of sound-induced illusory flashes. This effect was found to be dependent both on the directional congruency between modalities and on the rate of auditory flutter. The auditory filling-in of time-sampled visual motion supports the feasibility of using reduced frame rate visual content in multisensory broadcasting and virtual reality applications.

  5. Shape Similarity, Better than Semantic Membership, Accounts for the Structure of Visual Object Representations in a Population of Monkey Inferotemporal Neurons

    PubMed Central

    DiCarlo, James J.; Zecchina, Riccardo; Zoccolan, Davide

    2013-01-01

    The anterior inferotemporal cortex (IT) is the highest stage along the hierarchy of visual areas that, in primates, processes visual objects. Although several lines of evidence suggest that IT primarily represents visual shape information, some recent studies have argued that neuronal ensembles in IT code the semantic membership of visual objects (i.e., represent conceptual classes such as animate and inanimate objects). In this study, we investigated to what extent semantic, rather than purely visual information, is represented in IT by performing a multivariate analysis of IT responses to a set of visual objects. By relying on a variety of machine-learning approaches (including a cutting-edge clustering algorithm that has been recently developed in the domain of statistical physics), we found that, in most instances, IT representation of visual objects is accounted for by their similarity at the level of shape or, more surprisingly, low-level visual properties. Only in a few cases did we observe IT representations of semantic classes that were not explainable by the visual similarity of their members. Overall, these findings reassert the primary function of IT as a conveyor of explicit visual shape information, and reveal that low-level visual properties are represented in IT to a greater extent than previously appreciated. In addition, our work demonstrates how combining a variety of state-of-the-art multivariate approaches, and carefully estimating the contribution of shape similarity to the representation of object categories, can substantially advance our understanding of neuronal coding of visual objects in cortex. PMID:23950700

  6. Tailored information for cancer patients on the Internet: effects of visual cues and language complexity on information recall and satisfaction.

    PubMed

    van Weert, Julia C M; van Noort, Guda; Bol, Nadine; van Dijk, Liset; Tates, Kiek; Jansen, Jesse

    2011-09-01

    This study was designed to investigate the effects of visual cues and language complexity on satisfaction and information recall using a personalised website for lung cancer patients. In addition, age effects were investigated. An experiment using a 2 (complex vs. non-complex language)×3 (text only vs. photograph vs. drawing) factorial design was conducted. In total, 200 respondents without cancer were exposed to one of the six conditions. Respondents were more satisfied with the comprehensibility of both websites when they were presented with a visual cue. A significant interaction effect was found between language complexity and photograph use such that satisfaction with comprehensibility improved when a photograph was added to the complex language condition. Next, an interaction effect was found between age and visual cues on satisfaction, indicating that adding a visual cue is more important for older adults than for younger adults. Finally, respondents who were exposed to a website with less complex language showed higher recall scores. The use of visual cues enhances satisfaction with the information presented on the website, and the use of non-complex language improves recall. The results of the current study can be used to improve computer-based information systems for patients. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  7. Action starring narratives and events: Structure and inference in visual narrative comprehension

    PubMed Central

    Cohn, Neil; Wittenberg, Eva

    2015-01-01

    Studies of discourse have long placed focus on the inference generated by information that is not overtly expressed, and theories of visual narrative comprehension similarly focused on the inference generated between juxtaposed panels. Within the visual language of comics, star-shaped “flashes” commonly signify impacts, but can be enlarged to the size of a whole panel that can omit all other representational information. These “action star” panels depict a narrative culmination (a “Peak”), but have content which readers must infer, thereby posing a challenge to theories of inference generation in visual narratives that focus only on the semantic changes between juxtaposed images. This paper shows that action stars demand more inference than depicted events, and that they are more coherent in narrative sequences than scrambled sequences (Experiment 1). In addition, action stars play a felicitous narrative role in the sequence (Experiment 2). Together, these results suggest that visual narratives use conventionalized depictions that demand the generation of inferences while retaining narrative coherence of a visual sequence. PMID:26709362

  8. Action starring narratives and events: Structure and inference in visual narrative comprehension.

    PubMed

    Cohn, Neil; Wittenberg, Eva

    Studies of discourse have long placed focus on the inference generated by information that is not overtly expressed, and theories of visual narrative comprehension similarly focused on the inference generated between juxtaposed panels. Within the visual language of comics, star-shaped "flashes" commonly signify impacts, but can be enlarged to the size of a whole panel that can omit all other representational information. These "action star" panels depict a narrative culmination (a "Peak"), but have content which readers must infer, thereby posing a challenge to theories of inference generation in visual narratives that focus only on the semantic changes between juxtaposed images. This paper shows that action stars demand more inference than depicted events, and that they are more coherent in narrative sequences than scrambled sequences (Experiment 1). In addition, action stars play a felicitous narrative role in the sequence (Experiment 2). Together, these results suggest that visual narratives use conventionalized depictions that demand the generation of inferences while retaining narrative coherence of a visual sequence.

  9. The Need for a Uniform Method of Recording and Reporting Functional Vision Assessments

    ERIC Educational Resources Information Center

    Shaw, Rona; Russotti, Joanne; Strauss-Schwartz, Judy; Vail, Helen; Kahn, Ronda

    2009-01-01

    The use of functional vision by school-age students who have visual impairments, including those with additional disabilities, is typically reported by teachers of students with visual impairments. Functional vision assessments determine how well a student uses his or her vision to perform tasks throughout the school day. The information that is…

  10. PRISMA-MAR: An Architecture Model for Data Visualization in Augmented Reality Mobile Devices

    ERIC Educational Resources Information Center

    Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões

    2013-01-01

    This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…

  11. Neural Mechanisms of Selective Visual Attention.

    PubMed

    Moore, Tirin; Zirnsak, Marc

    2017-01-03

    Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.

  12. The absence or temporal offset of visual feedback does not influence adaptation to novel movement dynamics.

    PubMed

    McKenna, Erin; Bray, Laurence C Jayet; Zhou, Weiwei; Joiner, Wilsaan M

    2017-10-01

    Delays in transmitting and processing sensory information require correctly associating delayed feedback to issued motor commands for accurate error compensation. The flexibility of this alignment between motor signals and feedback has been demonstrated for movement recalibration to visual manipulations, but the alignment dependence for adapting movement dynamics is largely unknown. Here we examined the effect of visual feedback manipulations on force-field adaptation. Three subject groups used a manipulandum while experiencing a lag in the corresponding cursor motion (0, 75, or 150 ms). When the offset was applied at the start of the session (continuous condition), adaptation was not significantly different between groups. However, these similarities may be due to acclimation to the offset before motor adaptation. We tested additional subjects who experienced the same delays concurrent with the introduction of the perturbation (abrupt condition). In this case adaptation was statistically indistinguishable from the continuous condition, indicating that acclimation to feedback delay was not a factor. In addition, end-point errors were not significantly different across the delay or onset conditions, but end-point correction (e.g., deceleration duration) was influenced by the temporal offset. As an additional control, we tested a group of subjects who performed without visual feedback and found comparable movement adaptation results. These results suggest that visual feedback manipulation (absence or temporal misalignment) does not affect adaptation to novel dynamics, independent of both acclimation and perceptual awareness. These findings could have implications for modeling how the motor system adjusts to errors despite concurrent delays in sensory feedback information. 
NEW & NOTEWORTHY A temporal offset between movement and distorted visual feedback (e.g., visuomotor rotation) influences the subsequent motor recalibration, but the effects of this offset for altered movement dynamics are largely unknown. Here we examined the influence of 1) delayed and 2) removed visual feedback on the adaptation to novel movement dynamics. These results contribute to understanding of the control strategies that compensate for movement errors when there is a temporal separation between motion state and sensory information. Copyright © 2017 the American Physiological Society.

  13. Blindsight and Unconscious Vision: What They Teach Us about the Human Visual System

    PubMed Central

    Ajina, Sara; Bridge, Holly

    2017-01-01

    Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system. PMID:27777337

  14. Arctic Research Mapping Application (ARMAP): visualize project-level information for U.S. funded research in the Arctic

    NASA Astrophysics Data System (ADS)

    Kassin, A.; Cody, R. P.; Barba, M.; Escarzaga, S. M.; Score, R.; Dover, M.; Gaylord, A. G.; Manley, W. F.; Habermann, T.; Tweedie, C. E.

    2015-12-01

    The Arctic Research Mapping Application (ARMAP; http://armap.org/) is a suite of online applications and data services that support Arctic science by providing project tracking information (who's doing what, when and where in the region) for United States Government funded projects. In collaboration with 17 research agencies, project locations are displayed in a visually enhanced web mapping application. Key information about each project is presented along with links to web pages that provide additional information. The mapping application includes new reference data layers and an updated ship tracks layer. Visual enhancements are achieved by redeveloping the front-end from FLEX to HTML5 and JavaScript, which now provide access to mobile users utilizing tablets and cell phone devices. New tools have been added that allow users to navigate, select, draw, measure, print, use a time slider, and more. Other module additions include a back-end Apache SOLR search platform that provides users with the capability to perform advance searches throughout the ARMAP database. Furthermore, a new query builder interface has been developed in order to provide more intuitive controls to generate complex queries. These improvements have been made to increase awareness of projects funded by numerous entities in the Arctic, enhance coordination for logistics support, help identify geographic gaps in research efforts and potentially foster more collaboration amongst researchers working in the region. Additionally, ARMAP can be used to demonstrate past, present, and future research efforts supported by the U.S. Government.
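
    The advanced-search workflow described above (a Solr back end driven by a query-builder UI) can be sketched as follows; the endpoint URL and field names (`funding_agency`, `region`) are hypothetical illustrations, not ARMAP's actual schema:

    ```python
    from urllib.parse import urlencode

    def build_solr_query(base_url, filters, rows=50):
        """Combine field filters into a Solr 'q' clause and return a full query URL."""
        clauses = []
        for field, value in filters.items():
            # Quote values containing spaces so Solr treats them as one term
            if " " in value:
                value = f'"{value}"'
            clauses.append(f"{field}:{value}")
        q = " AND ".join(clauses) or "*:*"  # empty filter set matches everything
        params = urlencode({"q": q, "rows": rows, "wt": "json"})
        return f"{base_url}/select?{params}"

    url = build_solr_query(
        "http://armap.example.org/solr/projects",   # hypothetical endpoint
        {"funding_agency": "NSF", "region": "North Slope"},
    )
    ```

    A query-builder UI would assemble the `filters` dictionary from its form controls and hand the resulting URL to the map client.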

  15. The Arctic Research Mapping Application (ARMAP): a Geoportal for Visualizing Project-level Information About U.S. Funded Research in the Arctic

    NASA Astrophysics Data System (ADS)

    Kassin, A.; Cody, R. P.; Barba, M.; Gaylord, A. G.; Manley, W. F.; Score, R.; Escarzaga, S. M.; Tweedie, C. E.

    2016-12-01

    The Arctic Research Mapping Application (ARMAP; http://armap.org/) is a suite of online applications and data services that support Arctic science by providing project tracking information (who's doing what, when and where in the region) for United States Government funded projects. In collaboration with 17 research agencies, project locations are displayed in a visually enhanced web mapping application. Key information about each project is presented along with links to web pages that provide additional information, including links to data where possible. The latest ARMAP iteration has i) reworked the search user interface (UI) to enable multiple filters to be applied in user-driven queries and ii) implemented ArcGIS Javascript API 4.0 to allow for deployment of 3D maps directly in a user's web browser and enhanced customization of popups. Module additions include i) a dashboard UI powered by a back-end Apache SOLR engine to visualize data in intuitive and interactive charts; and ii) a printing module that allows users to customize maps and export these to different formats (pdf, ppt, gif and jpg). New reference layers and an updated ship tracks layer have also been added. These improvements have been made to improve discoverability, enhance logistics coordination, identify geographic gaps in research/observation effort, and foster enhanced collaboration among the research community. Additionally, ARMAP can be used to demonstrate past, present, and future research effort supported by the U.S. Government.

  16. Students using visual thinking to learn science in a Web-based environment

    NASA Astrophysics Data System (ADS)

    Plough, Jean Margaret

    United States students' science test scores are low, especially in problem solving, and traditional science instruction could be improved. Consequently, visual thinking, constructing science structures, and problem solving in a web-based environment may be valuable strategies for improving science learning. This ethnographic study examined the science learning of fifteen fourth grade students in an after school computer club involving diverse students at an inner city school. The investigation was done from the perspective of the students, and it described the processes of visual thinking, web page construction, and problem solving in a web-based environment. The study utilized informal group interviews, field notes, Visual Learning Logs, and student web pages, and incorporated a Standards-Based Rubric which evaluated students' performance on eight science and technology standards. The Visual Learning Logs were drawings done on the computer to represent science concepts related to the Food Chain. Students used the internet to search for information on a plant or animal of their choice. Next, students used this internet information, with the information from their Visual Learning Logs, to make web pages on their plant or animal. Later, students linked their web pages to form Science Structures. Finally, students linked their Science Structures with the structures of other students, and used these linked structures as models for solving problems. Further, during informal group interviews, students answered questions about visual thinking, problem solving, and science concepts. The results of this study showed clearly that (1) making visual representations helped students understand science knowledge, (2) making links between web pages helped students construct Science Knowledge Structures, and (3) students themselves said that visual thinking helped them learn science. 
In addition, this study found that when using Visual Learning Logs, the main overall ideas of the science concepts were usually represented accurately. Further, looking for information on the internet may cause new problems in learning. Likewise, being absent, starting late, and/or dropping out all may negatively influence students' proficiency on the standards. Finally, the way Science Structures are constructed and linked may provide insights into the way individual students think and process information.

  17. The direct, not V1-mediated, functional influence between the thalamus and middle temporal complex in the human brain is modulated by the speed of visual motion.

    PubMed

    Gaglianese, A; Costagli, M; Ueno, K; Ricciardi, E; Bernardi, G; Pietrini, P; Cheng, K

    2015-01-22

    The main visual pathway that conveys motion information to the middle temporal complex (hMT+) originates from the primary visual cortex (V1), which, in turn, receives spatial and temporal features of the perceived stimuli from the lateral geniculate nucleus (LGN). In addition, visual motion information reaches hMT+ directly from the thalamus, bypassing V1, through a direct pathway. We aimed at elucidating whether this direct route between LGN and hMT+ represents a 'fast lane' reserved for high-speed motion, as proposed previously, or whether it is merely involved in processing motion information irrespective of speed. We evaluated functional magnetic resonance imaging (fMRI) responses elicited by moving visual stimuli and applied connectivity analyses to investigate the effect of motion speed on the causal influence between LGN and hMT+, independent of V1, using the Conditional Granger Causality (CGC) in the presence of slow and fast visual stimuli. Our results showed that at least part of the visual motion information from LGN reaches hMT+, bypassing V1, in response to both slow and fast motion speeds of the perceived stimuli. We also investigated whether motion speeds have different effects on the connections between LGN and functional subdivisions within hMT+: direct connections between LGN and MT-proper carry mainly slow motion information, while connections between LGN and MST carry mainly fast motion information. The existence of a parallel pathway that connects the LGN directly to hMT+ in response to both slow and fast speeds may explain why MT and MST can still respond in the presence of V1 lesions. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  18. HPIminer: A text mining system for building and visualizing human protein interaction networks and pathways.

    PubMed

    Subramani, Suresh; Kalpana, Raja; Monickaraj, Pankaj Moses; Natarajan, Jeyakumar

    2015-04-01

    Knowledge of protein-protein interactions (PPI) and their related pathways is equally important to understand the biological functions of the living cell. Such information on human proteins is highly desirable to understand the mechanism of several diseases such as cancer, diabetes, and Alzheimer's disease. Because much of that information is buried in biomedical literature, an automated text mining system for visualizing human PPI and pathways is highly desirable. In this paper, we present HPIminer, a text mining system for visualizing human protein interactions and pathways from biomedical literature. HPIminer extracts human PPI information and PPI pairs from biomedical literature, and visualizes their associated interactions, networks and pathways using two curated databases, HPRD and KEGG. To our knowledge, HPIminer is the first system to build interaction networks from literature as well as curated databases. Further, the new interactions mined only from literature and not reported earlier in databases are highlighted as new. A comparative study with other similar tools shows that the resultant network is more informative and provides additional information on interacting proteins and their associated networks. Copyright © 2015 Elsevier Inc. All rights reserved.
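
    The first step of such literature mining, extracting candidate PPI pairs from sentences, can be sketched as a simple co-occurrence pass. This is a toy illustration with a made-up lexicon; the real system uses proper named-entity recognition against curated databases such as HPRD and KEGG:

    ```python
    import itertools
    import re

    # Toy protein-name lexicon (illustrative only)
    LEXICON = {"TP53", "MDM2", "BRCA1", "EGFR"}

    def extract_ppi_pairs(text):
        """Return candidate interacting protein pairs co-mentioned in one sentence."""
        pairs = set()
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            # Keep only uppercase tokens that are known protein names
            mentioned = sorted({tok for tok in re.findall(r"\b[A-Z0-9]{2,}\b", sentence)
                                if tok in LEXICON})
            # Every within-sentence pair is a candidate interaction
            pairs.update(itertools.combinations(mentioned, 2))
        return pairs

    pairs = extract_ppi_pairs("MDM2 binds TP53 and inhibits its activity. "
                              "EGFR signaling is distinct.")
    # → {("MDM2", "TP53")}
    ```

    Downstream steps would then score each candidate pair, compare it against the curated databases, and flag pairs found only in the literature as new.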

  19. Automatic lip reading by using multimodal visual features

    NASA Astrophysics Data System (ADS)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been studied for a long time, but it does not work well in noisy places such as in a car or on a train. In addition, people who are hearing-impaired or have difficulty hearing cannot benefit from speech recognition. To recognize speech automatically, visual information is also important. People understand speech not only from audio information, but also from visual information such as temporal changes in lip shape. A vision-based speech recognition method could work well in noisy places, and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method for recognizing speech from multimodal visual information alone, without using any audio information. First, the ASM (Active Shape Model) is used to track and detect the face and lip in a video sequence. Second, the shape, optical flow and spatial frequencies of the lip features are extracted from the lip region detected by the ASM. Next, the extracted multimodal features are ordered chronologically, and a Support Vector Machine is trained to classify the spoken words. Experiments on classifying several words show promising results for the proposed method.
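
    The chronological ordering step, turning per-frame lip features into one fixed-length vector that an SVM can consume, might be sketched as follows; the feature dimensions and frame count are illustrative assumptions, not the paper's actual parameters:

    ```python
    def make_sequence_vector(frame_features, num_frames=10):
        """Flatten per-frame lip features into one fixed-length, chronologically
        ordered vector suitable for an SVM; truncate or zero-pad to num_frames."""
        dim = len(frame_features[0])
        frames = frame_features[:num_frames]            # truncate long sequences
        frames += [[0.0] * dim] * (num_frames - len(frames))  # zero-pad short ones
        # Concatenate frames in temporal order into a single flat vector
        return [value for frame in frames for value in frame]

    # Three frames of 2-D features (e.g. lip width/height), padded to 4 frames
    vec = make_sequence_vector([[1.0, 2.0], [1.5, 2.5], [2.0, 3.0]], num_frames=4)
    # → [1.0, 2.0, 1.5, 2.5, 2.0, 3.0, 0.0, 0.0]
    ```

    Each word utterance then maps to one such vector, and the set of labeled vectors is what the SVM learns to separate.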

  20. The look of royalty: visual and odour signals of reproductive status in a paper wasp

    PubMed Central

    Tannure-Nascimento, Ivelize C; Nascimento, Fabio S; Zucchi, Ronaldo

    2008-01-01

    Reproductive conflicts within animal societies occur when all females can potentially reproduce. In social insects, these conflicts are regulated largely by behaviour and chemical signalling. There is evidence that the presence of signals that provide direct information about the quality of reproductive females would increase the fitness of all parties. In this study, we present an association between visual and chemical signals in the paper wasp Polistes satan. Our results showed that in nest-founding phase colonies, variation of visual signals is linked to relative fertility, while chemical signals are related to dominance status. In addition, experiments revealed that higher hierarchical positions were occupied by subordinates with distinct proportions of cuticular hydrocarbons and distinct visual marks. Therefore, these wasps present cues that convey reliable information of their reproductive status. PMID:18682372

  1. The look of royalty: visual and odour signals of reproductive status in a paper wasp.

    PubMed

    Tannure-Nascimento, Ivelize C; Nascimento, Fabio S; Zucchi, Ronaldo

    2008-11-22

    Reproductive conflicts within animal societies occur when all females can potentially reproduce. In social insects, these conflicts are regulated largely by behaviour and chemical signalling. There is evidence that the presence of signals that provide direct information about the quality of reproductive females would increase the fitness of all parties. In this study, we present an association between visual and chemical signals in the paper wasp Polistes satan. Our results showed that in nest-founding phase colonies, variation of visual signals is linked to relative fertility, while chemical signals are related to dominance status. In addition, experiments revealed that higher hierarchical positions were occupied by subordinates with distinct proportions of cuticular hydrocarbons and distinct visual marks. Therefore, these wasps present cues that convey reliable information of their reproductive status.

  2. Harnessing the web information ecosystem with wiki-based visualization dashboards.

    PubMed

    McKeon, Matt

    2009-01-01

    We describe the design and deployment of Dashiki, a public website where users may collaboratively build visualization dashboards through a combination of a wiki-like syntax and interactive editors. Our goals are to extend existing research on social data analysis into presentation and organization of data from multiple sources, explore new metaphors for these activities, and participate more fully in the web's information ecology by providing tighter integration with real-time data. To support these goals, our design includes novel and low-barrier mechanisms for editing and layout of dashboard pages and visualizations, connection to data sources, and coordinating interaction between visualizations. In addition to describing these technologies, we provide a preliminary report on the public launch of a prototype based on this design, including a description of the activities of our users derived from observation and interviews.
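
    A parser for a wiki-like dashboard syntax of the kind described might look like this sketch; the `{{chart:...}}` directive format is a hypothetical illustration, not Dashiki's actual syntax:

    ```python
    import re

    def parse_dashboard_line(line):
        """Parse one hypothetical wiki-style dashboard directive, e.g.
        '{{chart:bar|source=sales.csv|title=Q1 Sales}}', into a config dict."""
        m = re.fullmatch(r"\{\{(\w+):(\w+)(?:\|([^}]*))?\}\}", line.strip())
        if not m:
            return None  # ordinary wiki text, not a dashboard directive
        widget, kind, args = m.groups()
        config = {"widget": widget, "type": kind}
        # Remaining segments are key=value options for the widget
        for pair in (args or "").split("|"):
            if "=" in pair:
                key, value = pair.split("=", 1)
                config[key] = value
        return config

    cfg = parse_dashboard_line("{{chart:bar|source=sales.csv|title=Q1 Sales}}")
    ```

    A page renderer would map each parsed directive to an interactive visualization bound to the named data source, which is what makes the low-barrier, wiki-like editing model workable.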

  3. Visualization of Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e. real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components including specific visualization techniques and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.

  4. Using Scaled Visual Texture for Autonomous Rock Clustering

    NASA Technical Reports Server (NTRS)

    Anderson, R. C.; Castano, R.; Stough, T.; Gor, V.; Mjolsness, E.

    2001-01-01

    To maximize the return on future planetary missions, it will be critical that rovers have the capability to analyze information onboard and select and return data that is most likely to yield valuable scientific discoveries. Additional information is contained in the original extended abstract.

  5. Electrophysiological evidence for biased competition in V1 for fear expressions.

    PubMed

    West, Greg L; Anderson, Adam A K; Ferber, Susanne; Pratt, Jay

    2011-11-01

    When multiple stimuli are concurrently displayed in the visual field, they must compete for neural representation at the processing expense of their contemporaries. This biased competition is thought to begin as early as primary visual cortex, and can be driven by salient low-level stimulus features. Stimuli important for an organism's survival, such as facial expressions signaling environmental threat, might be similarly prioritized at this early stage of visual processing. In the present study, we used ERP recordings from striate cortex to examine whether fear expressions can bias the competition for neural representation at the earliest stage of retinotopic visuo-cortical processing when in direct competition with concurrently presented visual information of neutral valence. We found that within 50 msec after stimulus onset, information processing in primary visual cortex is biased in favor of perceptual representations of fear at the expense of competing visual information (Experiment 1). Additional experiments confirmed that the facial display's emotional content rather than low-level features is responsible for this prioritization in V1 (Experiment 2), and that this competition is reliant on a face's upright canonical orientation (Experiment 3). These results suggest that complex stimuli important for an organism's survival can indeed be prioritized at the earliest stage of cortical processing at the expense of competing information, with competition possibly beginning before encoding in V1.

  6. Visual attention and emotional reactions to negative stimuli: The role of age and cognitive reappraisal.

    PubMed

    Wirth, Maria; Isaacowitz, Derek M; Kunzmann, Ute

    2017-09-01

    Prominent life span theories of emotion propose that older adults attend less to negative emotional information and report less negative emotional reactions to the same information than younger adults do. Although parallel age differences in affective information processing and age differences in emotional reactivity have been proposed, they have rarely been investigated within the same study. In this eye-tracking study, we tested age differences in visual attention and emotional reactivity, using standardized emotionally negative stimuli. Additionally, we investigated age differences in the association between visual attention and emotional reactivity, and whether these are moderated by cognitive reappraisal. Older as compared with younger adults showed fixation patterns away from negative image content, while they reacted with greater negative emotions. The association between visual attention and emotional reactivity differed by age group and positive reappraisal. Younger adults felt better when they attended more to negative content rather than less, but this relationship only held for younger adults who did not attach a positive meaning to the negative situation. For older adults, overall, there was no significant association between visual attention and emotional reactivity. However, for older adults who did not use positive reappraisal, decreases in attention to negative information were associated with less negative emotions. The present findings point to a complex relationship between younger and older adults' visual attention and emotional reactions. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Stroboscopic visual training improves information encoding in short-term memory.

    PubMed

    Appelbaum, L Gregory; Cain, Matthew S; Schroeder, Julia E; Darling, Elise F; Mitroff, Stephen R

    2012-11-01

    The visual system has developed to transform an undifferentiated and continuous flow of information into discrete and manageable representations, and this ability rests primarily on the uninterrupted nature of the input. Here we explore the impact of altering how visual information is accumulated over time by assessing how intermittent vision influences memory retention. Previous work has shown that intermittent, or stroboscopic, visual training (i.e., practicing while only experiencing snapshots of vision) can enhance visual-motor control and visual cognition, yet many questions remain unanswered about the mechanisms that are altered. In the present study, we used a partial-report memory paradigm to assess the possible changes in visual memory following training under stroboscopic conditions. In Experiment 1, the memory task was completed before and immediately after a training phase, wherein participants engaged in physical activities (e.g., playing catch) while wearing either specialized stroboscopic eyewear or transparent control eyewear. In Experiment 2, an additional group of participants underwent the same stroboscopic protocol but were delayed 24 h between training and assessment, so as to measure retention. In comparison to the control group, both stroboscopic groups (immediate and delayed retest) revealed enhanced retention of information in short-term memory, leading to better recall at longer stimulus-to-cue delays (640-2,560 ms). These results demonstrate that training under stroboscopic conditions has the capacity to enhance some aspects of visual memory, that these faculties generalize beyond the specific tasks that were trained, and that trained improvements can be maintained for at least a day.

  8. Structural organization of parallel information processing within the tectofugal visual system of the pigeon.

    PubMed

    Hellmann, B; Güntürkün, O

    2001-01-01

    Visual information processing within the ascending tectofugal pathway to the forebrain undergoes essential rearrangements between the mesencephalic tectum opticum and the diencephalic nucleus rotundus of birds. The outer tectal layers constitute a two-dimensional map of the visual surrounding, whereas nucleus rotundus is characterized by functional domains in which different visual features such as movement, color, or luminance are processed in parallel. Morphologic correlates of this reorganization were investigated by means of focal injections of the neuronal tracer choleratoxin subunit B into different regions of the nuclei rotundus and triangularis of the pigeon. Dependent on the thalamic injection site, variations in the retrograde labeling pattern of ascending tectal efferents were observed. All rotundal projecting neurons were located within the deep tectal layer 13. Five different cell populations were distinguished that could be differentiated according to their dendritic ramifications within different retinorecipient laminae and their axons projecting to different subcomponents of the nucleus rotundus. Because retinorecipient tectal layers differ in their input from distinct classes of retinal ganglion cells, each tectorotundal cell type probably processes different aspects of the visual surrounding. Therefore, the differential input/output connections of the five tectorotundal cell groups might constitute the structural basis for spatially segregated parallel information processing of different stimulus aspects within the tectofugal visual system. Because two of five rotundal projecting cell groups additionally exhibited quantitative shifts along the dorsoventral extension of the tectum, data also indicate visual field-dependent alterations in information processing for particular visual features. Copyright 2001 Wiley-Liss, Inc.

  9. How vision and movement combine in the hippocampal place code.

    PubMed

    Chen, Guifen; King, John A; Burgess, Neil; O'Keefe, John

    2013-01-02

    How do external environmental and internal movement-related information combine to tell us where we are? We examined the neural representation of environmental location provided by hippocampal place cells while mice navigated a virtual reality environment in which both types of information could be manipulated. Extracellular recordings were made from region CA1 of head-fixed mice navigating a virtual linear track and running in a similar real environment. Despite the absence of vestibular motion signals, normal place cell firing and theta rhythmicity were found. Visual information alone was sufficient for localized firing in 25% of place cells and to maintain a local field potential theta rhythm (but with significantly reduced power). Additional movement-related information was required for normally localized firing by the remaining 75% of place cells. Trials in which movement and visual information were put into conflict showed that they combined nonlinearly to control firing location, and that the relative influence of movement versus visual information varied widely across place cells. However, within this heterogeneity, the behavior of fully half of the place cells conformed to a model of path integration in which the presence of visual cues at the start of each run together with subsequent movement-related updating of position was sufficient to maintain normal fields.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Kris A.; Scholtz, Jean; Whiting, Mark A.

    The VAST Challenge has been a popular venue for academic and industry participants for over ten years. Many participants comment that the majority of their time in preparing VAST Challenge entries is discovering elements in their software environments that need to be redesigned in order to solve the given task. Fortunately, there is no need to wait until the VAST Challenge is announced to test out software systems. The Visual Analytics Benchmark Repository contains all past VAST Challenge tasks, data, solutions and submissions. This paper details the various types of evaluations that may be conducted using the Repository information. In this paper we describe how developers can do informal evaluations of various aspects of their visual analytics environments using VAST Challenge information. Aspects that can be evaluated include the appropriateness of the software for various tasks, the various data types and formats that can be accommodated, the effectiveness and efficiency of the process supported by the software, and the intuitiveness of the visualizations and interactions. Researchers can compare their visualizations and interactions to those submitted to determine novelty. In addition, the paper provides pointers to various guidelines that software teams can use to evaluate the usability of their software. While these evaluations are not a replacement for formal evaluation methods, this information can be extremely useful during the development of visual analytics environments.

  11. Visual function, driving safety, and the elderly.

    PubMed

    Keltner, J L; Johnson, C A

    1987-09-01

    The authors have conducted a survey of the Departments of Motor Vehicles in all 50 states, the District of Columbia, and Puerto Rico requesting information about the visual standards, accidents, and conviction rates for different age groups. In addition, we have reviewed the literature on visual function and traffic safety. Elderly drivers have a greater number of vision problems that affect visual acuity and/or peripheral visual fields. Although the elderly are responsible for a small percentage of the total number of traffic accidents, the types of accidents they are involved in (e.g., failure to yield the right-of-way, intersection collisions, left turns onto crossing streets) may be related to peripheral and central visual field problems. Because age-related changes in performance occur at different rates for various individuals, licensing of the elderly driver should be based on functional abilities rather than age. Based on information currently available, we can make the following recommendations: (1) periodic evaluations of visual acuity and visual fields should be performed every 1 to 2 years in the population over age 65; (2) drivers of any age with multiple accidents or moving violations should have visual acuity and visual fields evaluated; and (3) a system should be developed for physicians to report patients with potentially unsafe visual function. The authors believe that these recommendations may help to reduce the number of traffic accidents that result from peripheral visual field deficits.

  12. Auditory biofeedback substitutes for loss of sensory information in maintaining stance.

    PubMed

    Dozza, Marco; Horak, Fay B; Chiari, Lorenzo

    2007-03-01

    The importance of sensory feedback for postural control in stance is evident from the balance improvements occurring when sensory information from the vestibular, somatosensory, and visual systems is available. However, the extent to which audio-biofeedback (ABF) information can also improve balance has not been determined. It is also unknown why additional artificial sensory feedback is more effective for some subjects than others and in some environmental contexts than others. The aim of this study was to determine the relative effectiveness of an ABF system to reduce postural sway in stance in healthy control subjects and in subjects with bilateral vestibular loss, under conditions of reduced vestibular, visual, and somatosensory inputs. This ABF system used a threshold region and non-linear scaling parameters customized for each individual, to provide subjects with pitch and volume coding of their body sway. ABF had the largest effect on reducing the body sway of the subjects with bilateral vestibular loss when the environment provided limited visual and somatosensory information; it had the smallest effect on reducing the sway of subjects with bilateral vestibular loss when the environment provided full somatosensory information. The extent to which all subjects substituted ABF information for their loss of sensory information was related to the extent to which each subject was visually dependent or somatosensory-dependent for postural control. Comparison of postural sway under a variety of sensory conditions suggests that patients with profound bilateral loss of vestibular function show larger than normal information redundancy among the remaining senses and ABF of trunk sway. The results support the hypothesis that the nervous system uses augmented sensory information differently depending both on the environment and on individual proclivities to rely on vestibular, somatosensory or visual information to control sway.

  13. Active and passive spatial learning in human navigation: acquisition of survey knowledge.

    PubMed

    Chrastil, Elizabeth R; Warren, William H

    2013-09-01

    It seems intuitively obvious that active exploration of a new environment would lead to better spatial learning than would passive visual exposure. It is unclear, however, which components of active learning contribute to spatial knowledge, and previous literature is decidedly mixed. This experiment tests the contributions of 4 components to metric survey knowledge: visual, vestibular, and podokinetic information and cognitive decision making. In the learning phase, 6 groups of participants learned the locations of 8 objects in a virtual hedge maze by (a) walking, (b) being pushed in a wheelchair, or (c) watching a video, crossed with (1) making decisions about their path or (2) being guided through the maze. In the test phase, survey knowledge was assessed by having participants walk a novel shortcut from a starting object to the remembered location of a test object, with the maze removed. Performance was slightly better than chance in the passive video condition. The addition of vestibular information did not improve performance in the wheelchair condition, but the addition of podokinetic information significantly improved angular accuracy in the walking condition. In contrast, there was no effect of decision making in any condition. The results indicate that visual and podokinetic information significantly contribute to survey knowledge, whereas vestibular information and decision making do not. We conclude that podokinetic information is the primary component of active learning for the acquisition of metric survey knowledge. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  14. Denoising imaging polarimetry by adapted BM3D method.

    PubMed

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.
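    The degree of polarization that the abstract says can be calculated more accurately after denoising is a standard quantity derived from the Stokes parameters of the measured light. As a minimal illustrative sketch (not code from the paper; function name and inputs are assumptions), the linear Stokes components and degree of linear polarization can be computed from four intensity images captured through linear polarizers at 0°, 45°, 90°, and 135°:

    ```python
    import numpy as np

    def degree_of_linear_polarization(i0, i45, i90, i135):
        """Compute the linear Stokes parameters and the degree of linear
        polarization (DoLP) from intensity images taken through linear
        polarizers at 0, 45, 90, and 135 degrees."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
        s1 = i0 - i90                        # horizontal vs. vertical component
        s2 = i45 - i135                      # +45 vs. -45 diagonal component
        # Guard against division by zero in dark pixels
        dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)
        return s0, s1, s2, dolp
    ```

    On such measurements, fully horizontally polarized light yields a DoLP of 1 and unpolarized light a DoLP of 0; noise in the four input images propagates directly into this ratio, which is why denoising them first (as PBM3D does) matters.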

  15. [Clinical Neuropsychology of Dementia with Lewy Bodies].

    PubMed

    Nagahama, Yasuhiro

    2016-02-01

    Dementia with Lewy bodies (DLB) shows lesser memory impairment and more severe visuospatial disability than Alzheimer disease (AD). Although deficits in both consolidation and retrieval underlie the memory impairment, retrieval deficit is predominant in DLB. Visuospatial dysfunctions in DLB are related to the impairments in both ventral and dorsal streams of higher visual information processing, and lower visual processing in V1/V2 may also be impaired. Attention and executive functions are more widely disturbed in DLB than in AD. Imitation of finger gestures is impaired more frequently in DLB than in other mild dementia, and provides additional information for diagnosis of mild dementia, especially for DLB. Pareidolia, which lies between hallucination and visual misperception, is found frequently in DLB, but its mechanism is still under investigation.

  16. Wide-Field Fundus Autofluorescence for Retinitis Pigmentosa and Cone/Cone-Rod Dystrophy.

    PubMed

    Oishi, Akio; Oishi, Maho; Ogino, Ken; Morooka, Satoshi; Yoshimura, Nagahisa

    2016-01-01

    Retinitis pigmentosa and cone/cone-rod dystrophy are inherited retinal diseases characterized by the progressive loss of rod and/or cone photoreceptors. To evaluate the status of rod/cone photoreceptors and visual function, visual acuity and visual field tests, electroretinogram, and optical coherence tomography are typically used. In addition to these examinations, fundus autofluorescence (FAF) has recently garnered attention. FAF visualizes the intrinsic fluorescent material in the retina, which is mainly lipofuscin contained within the retinal pigment epithelium. While conventional devices offer limited viewing angles in FAF, the recently developed Optos machine enables recording of wide-field FAF. With wide-field analysis, an association between abnormal FAF areas and visual function was demonstrated in retinitis pigmentosa and cone-rod dystrophy. In addition, the presence of "patchy" hypoautofluorescent areas was found to be correlated with symptom duration. Although physicians should be cautious when interpreting wide-field FAF results because the peripheral parts of the image are magnified significantly, this examination method provides previously unavailable information.

  17. Multi-scale image segmentation method with visual saliency constraints and its application

    NASA Astrophysics Data System (ADS)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    Object-based image analysis has many advantages over pixel-based methods, so it is one of the current research hotspots. Obtaining image objects through multi-scale image segmentation is essential for carrying out object-based image analysis. The current popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important; some specific targets or target groups with particular features are worth more attention than the others. To avoid over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, due to the constraint of the visual saliency model, the balance between local and macroscopic characteristics can be well controlled during the segmentation of different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and can give priority control to the salient objects of interest. The method has been applied to image quality evaluation, scattered residential area extraction, sparse forest extraction and other tasks to verify its validity. All applications showed good results.
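    The abstract does not give the authors' merging rule, but the idea of using a per-pixel saliency weight as one of the merging constraints can be sketched as follows. This is a toy illustration under one plausible reading of the abstract (salient areas get a relaxed tolerance so a salient object is less likely to be fragmented); the function name, threshold, and weighting form are all assumptions, not the paper's formula:

    ```python
    import numpy as np

    def should_merge(vals_a, vals_b, saliency_a, saliency_b, base_threshold=10.0):
        """Toy region-merging criterion with a visual-saliency constraint.

        vals_*: pixel intensities of the two candidate regions
        saliency_*: per-pixel saliency weights in [0, 1] for those regions
        """
        diff = abs(float(np.mean(vals_a)) - float(np.mean(vals_b)))
        w = float(np.mean(np.concatenate([saliency_a, saliency_b])))
        # In salient areas the tolerance is relaxed, so locally different
        # pixels that macroscopically belong to one salient object still
        # merge; in non-salient background the plain threshold applies.
        return diff < base_threshold * (1.0 + w)
    ```

    With this rule, two regions whose mean intensities differ by 12 merge when their average saliency is 0.5 (effective threshold 15) but not when it is 0 (threshold 10), so the saliency map modulates how aggressively the bottom-up merging proceeds.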

  18. Comparing two types of engineering visualizations: task-related manipulations matter.

    PubMed

    Cölln, Martin C; Kusch, Kerstin; Helmert, Jens R; Kohler, Petra; Velichkovsky, Boris M; Pannasch, Sebastian

    2012-01-01

    This study focuses on the comparison of traditional engineering drawings with a CAD (computer aided design) visualization in terms of user performance and eye movements in an applied context. Twenty-five students of mechanical engineering completed search tasks for measures in two distinct depictions of a car engine component (engineering drawing vs. CAD model). Besides spatial dimensionality, the display types most notably differed in terms of information layout, access and interaction options. The CAD visualization yielded better performance if users directly manipulated the object, but was inferior if employed in a conventional static manner, i.e. inspecting only predefined views. An additional eye movement analysis revealed longer fixation durations and a stronger increase of task-relevant fixations over time when interacting with the CAD visualization. This suggests a more focused extraction and filtering of information. We conclude that the three-dimensional CAD visualization can be advantageous if its ability to manipulate is used. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  19. Visual processing in Alzheimer's disease: surface detail and colour fail to aid object identification.

    PubMed

    Adlington, Rebecca L; Laws, Keith R; Gale, Tim M

    2009-10-01

    It has been suggested that object recognition in patients with Alzheimer's disease (AD) may be strongly influenced both by image format (e.g. colour vs. line-drawn) and by low-level visual impairments. To examine these notions, we tested basic visual functioning and picture naming in 41 AD patients and 40 healthy elderly controls. Picture naming was examined using 105 images representing a wide range of living and nonliving subcategories (from the Hatfield image test [HIT]: [Adlington, R. A., Laws, K. R., & Gale, T. M. (in press). The Hatfield image test (HIT): A new picture test and norms for experimental and clinical use. Journal of Clinical and Experimental Neuropsychology]), with each item presented in colour, greyscale, or line-drawn formats. Whilst naming for elderly controls improved linearly with the addition of surface detail and colour, AD patients showed no benefit from the addition of either surface information or colour. Additionally, controls showed a significant category by format interaction; however, the same profile did not emerge for AD patients. Finally, AD patients showed widespread and significant impairment on tasks of visual functioning, and low-level visual impairment was predictive of patient naming.

  20. The Effect of Temporal Perception on Weight Perception

    PubMed Central

    Kambara, Hiroyuki; Shin, Duk; Kawase, Toshihiro; Yoshimura, Natsue; Akahane, Katsuhito; Sato, Makoto; Koike, Yasuharu

    2013-01-01

    A successful catch of a falling ball requires an accurate estimation of the timing for when the ball hits the hand. In a previous experiment in which participants performed a ball-catching task in a virtual reality environment, we accidentally found that the weight of a falling ball was perceived differently when the timing of ball load force to the hand was shifted from the timing expected from visual information. Although it is well known that spatial information about an object, such as size, can easily deceive our perception of its heaviness, the relationship between temporal information and perceived heaviness is still not clear. In this study, we investigated the effect of temporal factors on weight perception. We conducted ball-catching experiments in a virtual environment where the timing of load force exertion was shifted away from the visual contact timing (i.e., the time when the ball hit the hand in the display). We found that the ball was perceived heavier when force was applied earlier than visual contact and lighter when force was applied after visual contact. We also conducted additional experiments in which participants were conditioned to one of two constant time offsets prior to testing weight perception. After performing ball-catching trials with 60 ms advanced or delayed load force exertion, participants’ subjective judgment on the simultaneity of visual contact and force exertion changed, reflecting a shift in perception of time offset. In addition, the timing of catching motion initiation relative to visual contact changed, reflecting a shift in estimation of force timing. We also found that participants began to perceive the ball as lighter after conditioning to the 60 ms advanced offset and heavier after the 60 ms delayed offset. These results suggest that perceived heaviness depends not on the actual time offset between force exertion and visual contact but on the subjectively perceived time offset between them and/or estimation error in force timing. PMID:23450805

  1. Language identification from visual-only speech signals

    PubMed Central

    Ronquest, Rebecca E.; Levi, Susannah V.; Pisoni, David B.

    2010-01-01

    Our goal in the present study was to examine how observers identify English and Spanish from visual-only displays of speech. First, we replicated the recent findings of Soto-Faraco et al. (2007) with Spanish and English bilingual and monolingual observers using different languages and a different experimental paradigm (identification). We found that prior linguistic experience affected response bias but not sensitivity (Experiment 1). In two additional experiments, we investigated the visual cues that observers use to complete the language-identification task. The results of Experiment 2 indicate that some lexical information is available in the visual signal but that it is limited. Acoustic analyses confirmed that our Spanish and English stimuli differed acoustically with respect to linguistic rhythmic categories. In Experiment 3, we tested whether this rhythmic difference could be used by observers to identify the language when the visual stimuli are temporally reversed, thereby eliminating lexical information but retaining rhythmic differences. The participants performed above chance even in the backward condition, suggesting that the rhythmic differences between the two languages may aid language identification in visual-only speech signals. The results of Experiments 3A and 3B also confirm previous findings that increased stimulus length facilitates language identification. Taken together, the results of these three experiments replicate earlier findings and also show that prior linguistic experience, lexical information, rhythmic structure, and utterance length influence visual-only language identification. PMID:20675804

  2. Cross-sensory reference frame transfer in spatial memory: the case of proprioceptive learning.

    PubMed

    Avraamides, Marios N; Sarrou, Mikaella; Kelly, Jonathan W

    2014-04-01

    In three experiments, we investigated whether the information available to visual perception prior to encoding the locations of objects in a path through proprioception would influence the reference direction from which the spatial memory was formed. Participants walked a path whose orientation was misaligned to the walls of the enclosing room and to the square sheet that covered the path prior to learning (Exp. 1) and, in addition, to the intrinsic structure of a layout studied visually prior to walking the path and to the orientation of stripes drawn on the floor (Exps. 2 and 3). Despite the availability of prior visual information, participants constructed spatial memories that were aligned with the canonical axes of the path, as opposed to the reference directions primed by visual experience. The results are discussed in the context of previous studies documenting transfer of reference frames within and across perceptual modalities.

  3. Decoding visual object categories in early somatosensory cortex.

    PubMed

    Smith, Fraser W; Goodale, Melvyn A

    2015-04-01

    Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. © The Author 2013. Published by Oxford University Press.

  4. Decoding Visual Object Categories in Early Somatosensory Cortex

    PubMed Central

    Smith, Fraser W.; Goodale, Melvyn A.

    2015-01-01

    Neurons, even in the earliest sensory areas of cortex, are subject to a great deal of contextual influence from both within and across modality connections. In the present work, we investigated whether the earliest regions of somatosensory cortex (S1 and S2) would contain content-specific information about visual object categories. We reasoned that this might be possible due to the associations formed through experience that link different sensory aspects of a given object. Participants were presented with visual images of different object categories in 2 fMRI experiments. Multivariate pattern analysis revealed reliable decoding of familiar visual object category in bilateral S1 (i.e., postcentral gyri) and right S2. We further show that this decoding is observed for familiar but not unfamiliar visual objects in S1. In addition, whole-brain searchlight decoding analyses revealed several areas in the parietal lobe that could mediate the observed context effects between vision and somatosensation. These results demonstrate that even the first cortical stages of somatosensory processing carry information about the category of visually presented familiar objects. PMID:24122136

  5. Sequence Diversity Diagram for comparative analysis of multiple sequence alignments.

    PubMed

    Sakai, Ryo; Aerts, Jan

    2014-01-01

    The sequence logo is a graphical representation of a set of aligned sequences, commonly used to depict conservation of amino acid or nucleotide sequences. Although it effectively communicates the amount of information present at every position, this visual representation falls short when the task is to compare two or more sets of aligned sequences. We present a new visual presentation called a Sequence Diversity Diagram and validate our design choices with a case study. Our software was developed using the open-source program called Processing. It loads multiple sequence alignment FASTA files and a configuration file, which can be modified as needed to change the visualization. The redesigned figure improves on the visual comparison of two or more sets, and it additionally encodes information on sequential position conservation. In our case study of the adenylate kinase lid domain, the Sequence Diversity Diagram reveals unexpected patterns and new insights, for example the identification of subgroups within the protein subfamily. Our future work will integrate this visual encoding into interactive visualization tools to support higher level data exploration tasks.
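    The "amount of information present at every position" that a sequence logo encodes is the standard per-column information content, R_i = log2(N) − H_i, where H_i is the Shannon entropy of column i and N is the alphabet size. A minimal sketch of that computation (not the authors' Processing code; gap handling is a simplifying assumption):

    ```python
    import math
    from collections import Counter

    def column_information(alignment, alphabet_size=20):
        """Per-position information content (bits) of an alignment,
        as a sequence logo visualizes it: R_i = log2(N) - H_i,
        with H_i the Shannon entropy of column i (gaps ignored)."""
        max_bits = math.log2(alphabet_size)
        info = []
        for i in range(len(alignment[0])):
            column = [seq[i] for seq in alignment if seq[i] != '-']
            counts = Counter(column)
            total = sum(counts.values())
            entropy = -sum((c / total) * math.log2(c / total)
                           for c in counts.values())
            info.append(max_bits - entropy)
        return info
    ```

    For a nucleotide alignment (N = 4), a fully conserved column carries the maximum 2 bits, while a column split evenly between two bases carries 1 bit; the Sequence Diversity Diagram's comparison task starts from exactly this kind of per-position summary, computed separately for each set of sequences.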

  6. Short-term retention of visual information: Evidence in support of feature-based attention as an underlying mechanism.

    PubMed

    Sneve, Markus H; Sreenivasan, Kartik K; Alnæs, Dag; Endestad, Tor; Magnussen, Svein

    2015-01-01

    Retention of features in visual short-term memory (VSTM) involves maintenance of sensory traces in early visual cortex. However, the mechanism through which this is accomplished is not known. Here, we formulate specific hypotheses derived from studies on feature-based attention to test the prediction that visual cortex is recruited by attentional mechanisms during VSTM of low-level features. Functional magnetic resonance imaging (fMRI) of human visual areas revealed that neural populations coding for task-irrelevant feature information are suppressed during maintenance of detailed spatial frequency memory representations. The narrow spectral extent of this suppression agrees well with known effects of feature-based attention. Additionally, analyses of effective connectivity during maintenance between retinotopic areas in visual cortex show that the observed highlighting of task-relevant parts of the feature spectrum originates in V4, a visual area strongly connected with higher-level control regions and known to convey top-down influence to earlier visual areas during attentional tasks. In line with this property of V4 during attentional operations, we demonstrate that modulations of earlier visual areas during memory maintenance have behavioral consequences, and that these modulations are a result of influences from V4. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Emotion and Perception: The Role of Affective Information

    PubMed Central

    Zadra, Jonathan R.; Clore, Gerald L.

    2011-01-01

    Visual perception and emotion are traditionally considered separate domains of study. In this article, however, we review research showing them to be less separable than usually assumed. In fact, emotions routinely affect how and what we see. Fear, for example, can affect low-level visual processes, sad moods can alter susceptibility to visual illusions, and goal-directed desires can change the apparent size of goal-relevant objects. In addition, the layout of the physical environment, including the apparent steepness of a hill and the distance to the ground from a balcony, can be affected by emotional states. We propose that emotions provide embodied information about the costs and benefits of anticipated action, information that can be used automatically and immediately, circumventing the need for cogitating on the possible consequences of potential actions. Emotions thus provide a strong motivating influence on how the environment is perceived. PMID:22039565

  8. Text Stream Trend Analysis using Multiscale Visual Analytics with Applications to Social Media Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Beaver, Justin M; Bogen II, Paul L.

    In this paper, we introduce a new visual analytics system, called Matisse, that allows exploration of global trends in textual information streams with specific application to social media platforms. Despite the potential for real-time situational awareness using these services, interactive analysis of such semi-structured textual information is a challenge due to the high-throughput and high-velocity properties. Matisse addresses these challenges through the following contributions: (1) robust stream data management, (2) automated sentiment/emotion analytics, (3) inferential temporal, geospatial, and term-frequency visualizations, and (4) a flexible drill-down interaction scheme that progresses from macroscale to microscale views. In addition to describing these contributions, our work-in-progress paper concludes with a practical case study focused on the analysis of Twitter 1% sample stream information captured during the week of the Boston Marathon bombings.

  9. Modulation of induced gamma band activity in the human EEG by attention and visual information processing.

    PubMed

    Müller, M M; Gruber, T; Keil, A

    2000-12-01

    Here we present a series of four studies aimed at investigating the link between induced gamma band activity in the human EEG and visual information processing. We demonstrated and validated the modulation of spectral gamma band power by spatial selective visual attention. When subjects attended to a certain stimulus, spectral power was increased as compared to when the same stimulus was ignored. In addition, we showed a shift in spectral gamma band power increase to the contralateral hemisphere when subjects shifted their attention to one visual hemifield. The following study investigated induced gamma band activity and the perception of a Gestalt. Ambiguous rotating figures were used to operationalize the law of good figure (gute Gestalt). We found increased gamma band power at posterior electrode sites when subjects perceived an object. In the last experiment we demonstrated a differential hemispheric gamma band activation when subjects were confronted with emotional pictures. Results of the present experiments, in combination with other studies presented in this volume, support the notion that induced gamma band activity in the human EEG is closely related to visual information processing and attentional perceptual mechanisms.

  10. The role of shared visual information for joint action coordination.

    PubMed

    Vesper, Cordula; Schmitz, Laura; Safra, Lou; Sebanz, Natalie; Knoblich, Günther

    2016-08-01

    Previous research has identified a number of coordination processes that enable people to perform joint actions. But what determines which coordination processes joint action partners rely on in a given situation? The present study tested whether varying the shared visual information available to co-actors can trigger a shift in coordination processes. Pairs of participants performed a movement task that required them to synchronously arrive at a target from separate starting locations. When participants in a pair received only auditory feedback about the time their partner reached the target they held their movement duration constant to facilitate coordination. When they received additional visual information about each other's movements they switched to a fundamentally different coordination process, exaggerating the curvature of their movements to communicate their arrival time. These findings indicate that the availability of shared perceptual information is a major factor in determining how individuals coordinate their actions to obtain joint outcomes. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  11. Automatically augmenting lifelog events using pervasively generated content from millions of people.

    PubMed

    Doherty, Aiden R; Smeaton, Alan F

    2010-01-01

    In sensor research we take advantage of additional contextual sensor information to disambiguate potentially erroneous sensor readings or to make better informed decisions on a single sensor's output. This use of additional information reinforces, validates, semantically enriches, and augments sensed data. Lifelog data is challenging to augment, as it tracks one's life with many images including the places they go, making it non-trivial to find associated sources of information. We investigate realising the goal of pervasive user-generated content based on sensors, by augmenting passive visual lifelogs with "Web 2.0" content collected by millions of other individuals.

  12. Holographic data visualization: using synthetic full-parallax holography to share information

    NASA Astrophysics Data System (ADS)

    Dalenius, Tove N.; Rees, Simon; Richardson, Martin

    2017-03-01

    This investigation explores representing information through data visualization using the medium of holography. It is an exploration from the perspective of a creative practitioner deploying a transdisciplinary approach. The task of visualizing and making use of data and "big data" has been the focus of a large number of research projects during the opening of this century. As the amount of data that can be gathered has increased rapidly, our ability to comprehend and extract meaning from the numbers has come under scrutiny. This project examines the possibility of employing three-dimensional imaging using holography to visualize data and additional information. To explore the viability of the concept, this project has set out to transform the visualization of calculated energy and fluid flow data to a holographic medium. A Computational Fluid Dynamics (CFD) model of flow around a vehicle and a model of solar irradiation on a building were chosen to investigate the process. As no pre-existing software is available to directly transform the data into a compatible format, the team worked collaboratively and across disciplines to achieve an accurate conversion from the format of the calculation and visualization tools to a configuration suitable for synthetic holography production. The project also investigates ideas for layout and design suitable for holographic visualization of energy data. Two completed holograms will be presented. Future possibilities for developing the concept of Holographic Data Visualization are briefly deliberated upon.

  13. The Characteristics and Limits of Rapid Visual Categorization

    PubMed Central

    Fabre-Thorpe, Michèle

    2011-01-01

    Visual categorization appears both effortless and virtually instantaneous. The study by Thorpe et al. (1996) was the first to estimate the processing time necessary to perform fast visual categorization of animals in briefly flashed (20 ms) natural photographs. They observed a large differential EEG activity between target and distracter correct trials that developed from 150 ms after stimulus onset, a value that was later shown to be even shorter in monkeys! With such strong processing time constraints, it was difficult to escape the conclusion that rapid visual categorization was relying on massively parallel, essentially feed-forward processing of visual information. Since 1996, we have conducted a large number of studies to determine the characteristics and limits of fast visual categorization. The present chapter will review some of the main results obtained. I will argue that rapid object categorizations in natural scenes can be done without focused attention and are most likely based on coarse and unconscious visual representations activated with the first available (magnocellular) visual information. Fast visual processing proved efficient for the categorization of large superordinate object or scene categories, but shows its limits when more detailed basic representations are required. The representations for basic objects (dogs, cars) or scenes (mountain or sea landscapes) need additional processing time to be activated. This finding is at odds with the widely accepted idea that such basic representations are at the entry level of the system. Interestingly, focused attention is still not required to perform these time-consuming basic categorizations. Finally, we will show that object and context processing can interact very early in an ascending wave of visual information processing. We will discuss how such data could result from our experience with a highly structured and predictable surrounding world that shaped neuronal visual selectivity. PMID:22007180

  14. Impact of language on development of auditory-visual speech perception.

    PubMed

    Sekiyama, Kaoru; Burnham, Denis

    2008-03-01

    The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.

  15. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of the static VPIC in the reduction of information from an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  16. Audio-Visual Situational Awareness for General Aviation Pilots

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Lodha, Suresh K.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    Weather is one of the major causes of general aviation accidents. Researchers are addressing this problem from various perspectives including improving meteorological forecasting techniques, collecting additional weather data automatically via on-board sensors and "flight" modems, and improving weather data dissemination and presentation. We approach the problem from the improved presentation perspective and propose weather visualization and interaction methods tailored for general aviation pilots. Our system, Aviation Weather Data Visualization Environment (AWE), utilizes information visualization techniques, a direct manipulation graphical interface, and a speech-based interface to improve a pilot's situational awareness of relevant weather data. The system design is based on a user study and feedback from pilots.

  17. Visual analytics of inherently noisy crowdsourced data on ultra high resolution displays

    NASA Astrophysics Data System (ADS)

    Huynh, Andrew; Ponto, Kevin; Lin, Albert Yu-Min; Kuester, Falko

    The increasing prevalence of distributed human microtasking, or crowdsourcing, has followed the exponential increase in data collection capabilities. The large scale and distributed nature of these microtasks produce overwhelming amounts of information that is inherently noisy due to the nature of human input. Furthermore, these inputs create a constantly changing dataset with additional information added on a daily basis. Methods to quickly visualize, filter, and understand this information over temporal and geospatial constraints are key to the success of crowdsourcing. This paper presents novel methods to visually analyze geospatial data collected through crowdsourcing on top of remote sensing satellite imagery. An ultra-high-resolution tiled display system is used to explore the relationship between human and satellite remote sensing data at scale. A case study is provided that evaluates the presented technique in the context of an archaeological field expedition. A team in the field communicated in real time with, and was guided by, researchers in the remote visual analytics laboratory, swiftly sifting through incoming crowdsourced data to identify viable archaeological sites.

  18. An evaluation of unisensory and multisensory adaptive flight-path navigation displays

    NASA Astrophysics Data System (ADS)

    Moroney, Brian W.

    1999-11-01

    The present study assessed the use of unimodal (auditory or visual) and multimodal (audio-visual) adaptive interfaces to aid military pilots in the performance of a precision-navigation flight task when they were confronted with additional information-processing loads. A standard navigation interface was supplemented by adaptive interfaces consisting of either a head-up-display-based flight director, a 3D virtual audio interface, or a combination of the two. The adaptive interfaces provided information about how to return to the pathway when off course. Using an advanced flight simulator, pilots attempted two navigation scenarios: (A) maintain proper course under normal flight conditions and (B) return to course after their aircraft's position has been perturbed. Pilots flew in the presence or absence of an additional information-processing task presented in either the visual or auditory modality. The additional information-processing tasks were equated in terms of perceived mental workload as indexed by the NASA-TLX. Twelve experienced military pilots (11 men and 1 woman), naive to the purpose of the experiment, participated in the study. They were recruited from Wright-Patterson Air Force Base and had a mean of 2812 hrs. of flight experience. Four navigational interface configurations (the standard visual navigation interface alone (SV), SV plus adaptive visual, SV plus adaptive auditory, and SV plus adaptive visual-auditory composite) were combined factorially with three concurrent-task (CT) conditions (no CT, visual CT, and auditory CT) in a completely repeated-measures design. The adaptive navigation displays were activated whenever the aircraft was more than 450 ft off course. In the normal flight scenario, the adaptive interfaces did not bolster navigation performance in comparison to the standard interface. 
It is conceivable that the pilots performed quite adequately using the familiar generic interface under normal flight conditions and hence showed no added benefit of the adaptive interfaces. In the return-to-course scenario, the relative advantages of the three adaptive interfaces were dependent upon the nature of the CT in a complex way. In the absence of a CT, recovery heading performance was superior with the adaptive visual and adaptive composite interfaces compared to the adaptive auditory interface. In the context of a visual CT, recovery when using the adaptive composite interface was superior to that when using the adaptive visual interface. Post-experimental inquiry indicated that when faced with a visual CT, the pilots used the auditory component of the multimodal guidance display to detect gross heading errors and the visual component to make more fine-grained heading adjustments. In the context of the auditory CT, navigation performance using the adaptive visual interface tended to be superior to that when using the adaptive auditory interface. Neither CT performance nor NASA-TLX workload level was influenced differentially by the interface configurations. Thus, the potential benefits associated with the proposed interfaces appear to be unaccompanied by negative side effects involving CT interference and workload. The adaptive interface configurations were altered without any direct input from the pilot. Thus, it was feared that pilots might reject the activation of interfaces independent of their control. However, pilots' debriefing comments about the efficacy of the adaptive interface approach were very positive. (Abstract shortened by UMI.)

  19. Combining spectral material properties in the infrared and the visible spectral range for qualification and nondestructive evaluation of components

    NASA Astrophysics Data System (ADS)

    Eisler, K.; Goldammer, M.; Rothenfusser, M.; Arnold, W.; Homma, C.

    2012-05-01

    Spectrally selective thermography with infrared filters can be used to identify or distinguish materials, such as contaminations on a metallic component. With additional visual information, the indications in the IR signal can be selectively accentuated or suppressed for easier evaluation of passive and active thermography measurements. For flash thermography, the detected IR signal between 3.4 and 5.1 μm is analyzed with regard to the spectral material information. The presented hybrid camera uses beam overlapping to obtain combined images in both the infrared and the visible range.

  20. Sensory complementation and antipredator behavioural compensation in acid-impacted juvenile Atlantic salmon.

    PubMed

    Elvidge, C K; Macnaughton, C J; Brown, G E

    2013-05-01

    Prey incorporate multiple forms of publicly available information on predation risk into threat-sensitive antipredator behaviours. Changes in information availability have previously been demonstrated to elicit transient alterations in behavioural patterns, while the effects of long-term deprivation of particular forms of information remain largely unexplored. Damage-released chemical alarm cues from the epidermis of fishes are rendered non-functional under weakly acidic conditions (pH < 6.6), depriving fish of an important source of information on predation risk in acidified waterbodies. We addressed the effects of long-term deprivation on the antipredator responses to different combinations of chemical and visual threat cues via in situ observations of wild, free-swimming 0+ Atlantic salmon (Salmo salar) fry in four neutral and four weakly acidic nursery streams. In addition, a cross-population transplant experiment and natural interannual variation in acidity enabled the examination of provenance and environment as causes of the observed differences in response. Fish living under weakly acidic conditions demonstrate significantly greater or hypersensitive antipredator responses to visual cues compared to fish under neutral conditions. Under neutral conditions, fish demonstrate complementary (additive or synergistic) effects of paired visual and chemical cues consistent with threat-sensitive responses. Cross-population transplants and interannual comparisons of responses strongly support the conclusion that differences in antipredator responses between neutral and weakly acidic streams result from the loss of chemical information on predation risk, as opposed to population-derived differences in behaviours.

  1. Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration.

    PubMed

    Wahn, Basil; König, Peter

    2015-01-01

    Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, these findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.

  2. Genetic parameter estimates for carcass traits and visual scores including or not genomic information.

    PubMed

    Gordo, D G M; Espigolan, R; Tonussi, R L; Júnior, G A F; Bresolin, T; Magalhães, A F Braga; Feitosa, F L; Baldi, F; Carvalheiro, R; Tonhati, H; de Oliveira, H N; Chardulo, L A L; de Albuquerque, L G

    2016-05-01

    The objective of this study was to determine whether visual scores used as selection criteria in Nellore breeding programs are effective indicators of carcass traits measured after slaughter. Additionally, this study evaluated the effect of different structures of the relationship matrix (A and H) on the estimation of genetic parameters and on the prediction accuracy of breeding values. There were 13,524 animals for visual scores of conformation (CS), finishing precocity (FP), and muscling (MS) and 1,753, 1,747, and 1,564 for LM area (LMA), backfat thickness (BF), and HCW, respectively. Of these, 1,566 animals were genotyped using a high-density panel containing 777,962 SNP. Six analyses were performed using multitrait animal models, each including the 3 visual scores and 1 carcass trait. For the visual scores, the model included direct additive genetic and residual random effects and the fixed effects of contemporary group (defined by year of birth, management group at yearling, and farm) and the linear effect of age of animal at yearling. The same model was used for the carcass traits, replacing the effect of age of animal at yearling with the linear effect of age of animal at slaughter. The variance and covariance components were estimated by the REML method in analyses using the numerator relationship matrix (A) or combining the genomic and the numerator relationship matrices (H). The heritability estimates for the visual scores obtained with the 2 methods were similar and of moderate magnitude (0.23-0.34), indicating that these traits should respond to direct selection. The heritabilities for LMA, BF, and HCW were 0.13, 0.07, and 0.17, respectively, using matrix A and 0.29, 0.16, and 0.23, respectively, using matrix H. The genetic correlations between the visual scores and carcass traits were positive, and higher correlations were generally obtained when matrix H was used. 
Considering the difficulties and cost of measuring carcass traits postmortem, visual scores of CS, FP, and MS could be used as selection criteria to improve HCW, BF, and LMA. The use of genomic information permitted the detection of greater additive genetic variability for LMA and BF. For HCW, the high magnitude of the genetic correlations with visual scores was probably sufficient to recover genetic variability. The methods provided similar breeding value accuracies, especially for the visual scores.
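    The heritabilities discussed in this record are ratios of additive genetic variance to total phenotypic variance; REML estimates the variance components themselves, after which the ratio is direct. A one-line illustration with hypothetical components (not values from the study):

```python
def heritability(var_additive, var_residual):
    """Narrow-sense heritability h^2 = var_a / (var_a + var_e),
    assuming an animal model with only additive genetic and
    residual variance components."""
    return var_additive / (var_additive + var_residual)

# Hypothetical components producing a moderate estimate like those
# reported for the visual scores (0.23-0.34):
h2 = heritability(var_additive=0.6, var_residual=1.4)  # 0.3
```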

  3. Optical hiding with visual cryptography

    NASA Astrophysics Data System (ADS)

    Shi, Yishi; Yang, Xiubo

    2017-11-01

    We propose an optical hiding method based on visual cryptography. In the hiding process, we convert the secret information into a set of fabricated phase-keys, which are completely independent of each other, intensity-detection-proof, and image-covered, leading to high security. During the extraction process, the covered phase-keys are illuminated with laser beams and then incoherently superimposed to extract the hidden information directly by human vision, without complicated optical implementations or any additional computation, making extraction convenient. Also, the phase-keys are manufactured as diffractive optical elements that are robust to attacks such as blocking and phase noise. Optical experiments verify that high security, easy extraction, and strong robustness are all obtainable in visual-cryptography-based optical hiding.
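    The optical scheme above inherits its logic from classic visual cryptography (Naor and Shamir's (2,2) scheme), in which each secret pixel is split into complementary subpixel blocks on two shares: either share alone is uniformly random, while physically stacking them (an OR of black subpixels) reveals the secret as a contrast difference. A minimal digital sketch, not the paper's phase-key implementation:

```python
import random

def make_shares(secret_bits):
    """(2,2) visual cryptography: each secret bit becomes a pair of
    2-subpixel blocks (1 = black). Each share alone is uniformly
    random; white secret bits get identical blocks, black bits get
    complementary blocks."""
    share1, share2 = [], []
    for bit in secret_bits:
        block = random.choice([(0, 1), (1, 0)])  # random subpixel pattern
        share1.append(block)
        if bit == 0:                 # white: identical blocks
            share2.append(block)
        else:                        # black: complementary blocks
            share2.append((1 - block[0], 1 - block[1]))
    return share1, share2

def stack(share1, share2):
    """Physical stacking = OR of black (1) subpixels in each block."""
    return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(share1, share2)]

secret = [1, 0, 1, 1, 0]
s1, s2 = make_shares(secret)
stacked = stack(s1, s2)
# Black secret bits stack to fully black blocks (1, 1); white bits
# keep one white subpixel, so counting black subpixels recovers the secret.
recovered = [1 if a + b == 2 else 0 for a, b in stacked]
```

    The contrast loss (white pixels are only half white after stacking) is the price of each share carrying zero information on its own, which parallels the independence of the fabricated phase-keys above.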

  4. Visual-perceptual mismatch in robotic surgery.

    PubMed

    Abiri, Ahmad; Tao, Anna; LaRocca, Meg; Guan, Xingmin; Askari, Syed J; Bisley, James W; Dutson, Erik P; Grundfest, Warren S

    2017-08-01

    The principal objective of the experiment was to analyze the effects of the clutch operation of robotic surgical systems on the performance of the operator. The relative coordinate system introduced by the clutch operation can introduce a visual-perceptual mismatch which can potentially have a negative impact on a surgeon's performance. We also assess the impact of the introduction of additional tactile sensory information on reducing the impact of visual-perceptual mismatch on the performance of the operator. We asked 45 novice subjects to complete peg transfers using the da Vinci IS 1200 system with grasper-mounted, normal force sensors. The task involves picking up a peg with one of the robotic arms, passing it to the other arm, and then placing it on the opposite side of the view. Subjects were divided into three groups: the aligned group (no mismatch), the misaligned group (10-cm z-axis mismatch), and the haptics-misaligned group (haptic feedback and z-axis mismatch). Each subject performed the task five times, during which the grip force, time of completion, and number of faults were recorded. Compared to the subjects who performed the tasks using a properly aligned controller/arm configuration, subjects with a single-axis misalignment showed significantly more peg drops (p = 0.011) and longer times to completion (p < 0.001). Additionally, it was observed that the addition of tactile feedback helps reduce the negative effects of visual-perceptual mismatch in some cases. Grip force data recorded from grasper-mounted sensors showed no difference between the groups. The visual-perceptual mismatch created by the misalignment of the robotic controls relative to the robotic arms has a negative impact on the operator of a robotic surgical system. Introduction of other sensory information and haptic feedback systems can help in potentially reducing this effect.

  5. Accessibility limits recall from visual working memory.

    PubMed

    Rajsic, Jason; Swan, Garrett; Wilson, Daryl E; Pratt, Jay

    2017-09-01

    In this article, we demonstrate limitations of accessibility of information in visual working memory (VWM). Recently, cued-recall has been used to estimate the fidelity of information in VWM, where the feature of a cued object is reproduced from memory (Bays, Catalao, & Husain, 2009; Wilken & Ma, 2004; Zhang & Luck, 2008). Response error in these tasks has been largely studied with respect to failures of encoding and maintenance; however, the retrieval operations used in these tasks remain poorly understood. By varying the number and type of object features provided as a cue in a visual delayed-estimation paradigm, we directly assess the nature of retrieval errors in delayed estimation from VWM. Our results demonstrate that providing additional object features in a single cue reliably improves recall, largely by reducing swap, or misbinding, responses. In addition, performance simulations using the binding pool model (Swan & Wyble, 2014) were able to mimic this pattern of performance across a large span of parameter combinations, demonstrating that the binding pool provides a possible mechanism underlying this pattern of results that is not merely a symptom of one particular parametrization. We conclude that accessing visual working memory is a noisy process, and can lead to errors over and above those of encoding and maintenance limitations. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
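    The swap (misbinding) responses described in this record are identified by comparing each reproduced feature value against the cued and non-cued items. Published analyses fit probabilistic mixture models (Bays, Catalao, & Husain, 2009); the hard-criterion classifier below is only an illustrative sketch, with an assumed angular criterion:

```python
from math import pi

def circ_dist(a, b):
    """Signed circular distance between two feature values (radians),
    mapped into (-pi, pi]."""
    d = (a - b) % (2 * pi)
    return d - 2 * pi if d > pi else d

def classify_response(response, target, nontargets, criterion=pi / 8):
    """Crudely label a delayed-estimation trial: 'target' if the report
    falls near the cued item's feature value, 'swap' if it falls near a
    non-cued item's value, 'guess' otherwise."""
    if abs(circ_dist(response, target)) < criterion:
        return "target"
    if any(abs(circ_dist(response, nt)) < criterion for nt in nontargets):
        return "swap"
    return "guess"
```

    A mixture model instead assigns each trial a probability under target, swap, and uniform-guessing components; the hard threshold here merely makes the three response classes concrete.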

  6. Spatiotemporal Processing in Crossmodal Interactions for Perception of the External World: A Review

    PubMed Central

    Hidaka, Souta; Teramoto, Wataru; Sugita, Yoichi

    2015-01-01

Research regarding crossmodal interactions has garnered much interest in the last few decades. A variety of studies have demonstrated that multisensory information (vision, audition, tactile sensation, and so on) can perceptually interact in the spatial and temporal domains. Findings regarding crossmodal interactions in the spatiotemporal domain (i.e., motion processing) have also been reported, with updates in the last few years. In this review, we summarize past and recent findings on spatiotemporal processing in crossmodal interactions regarding perception of the external world. A traditional view regarding crossmodal interactions holds that vision is superior to audition in spatial processing, but audition is dominant over vision in temporal processing. Similarly, vision is considered to have dominant effects over the other sensory modalities (i.e., visual capture) in spatiotemporal processing. However, recent findings demonstrate that sound could have a driving effect on visual motion perception. Moreover, studies regarding perceptual associative learning reported that, after association is established between a sound sequence without spatial information and visual motion information, the sound sequence could trigger visual motion perception. Other sensory information, such as motor action or smell, has also exhibited similar driving effects on visual motion perception. Additionally, recent brain imaging studies demonstrate that similar activation patterns could be observed in several brain areas, including the motion processing areas, between spatiotemporal information from different sensory modalities. Based on these findings, we suggest that multimodal information could mutually interact in spatiotemporal processing in the perception of the external world and that common perceptual and neural underlying mechanisms would exist for spatiotemporal processing. PMID:26733827

  7. Driver Distraction Using Visual-Based Sensors and Algorithms.

    PubMed

    Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén

    2016-10-28

Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems, but they should be complemented with additional visual cues (e.g., hands or body information) or even with distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper presents a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed.

  8. Driver Distraction Using Visual-Based Sensors and Algorithms

    PubMed Central

    Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén

    2016-01-01

Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems, but they should be complemented with additional visual cues (e.g., hands or body information) or even with distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper presents a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed. PMID:27801822

  9. Multiplicative and additive modulation of neuronal tuning with population activity affects encoded information

    PubMed Central

    Arandia-Romero, Iñigo; Tanabe, Seiji; Drugowitsch, Jan; Kohn, Adam; Moreno-Bote, Rubén

    2016-01-01

Numerous studies have shown that neuronal responses are modulated by stimulus properties, and also by the state of the local network. However, little is known about how activity fluctuations of neuronal populations modulate the sensory tuning of cells and affect their encoded information. We found that fluctuations in ongoing and stimulus-evoked population activity in primate visual cortex modulate the tuning of neurons in a multiplicative and additive manner. While distributed on a continuum, neurons with stronger multiplicative effects tended to have less additive modulation, and vice versa. The information encoded by multiplicatively-modulated neurons increased with greater population activity, while that of additively-modulated neurons decreased. These effects offset each other, so that population activity had little effect on total information. Our results thus suggest that intrinsic activity fluctuations may act as a `traffic light' that determines which subset of neurons is most informative. PMID:26924437

  10. The Representation of Color across the Human Visual Cortex: Distinguishing Chromatic Signals Contributing to Object Form Versus Surface Color.

    PubMed

    Seymour, K J; Williams, M A; Rich, A N

    2016-05-01

Many theories of visual object perception assume the visual system initially extracts borders between objects and their background and then "fills in" color to the resulting object surfaces. We investigated the transformation of chromatic signals across the human ventral visual stream, with particular interest in distinguishing representations of object surface color from representations of chromatic signals reflecting the retinal input. We used fMRI to measure brain activity while participants viewed figure-ground stimuli that differed either in the position or in the color contrast polarity of the foreground object (the figure). Multivariate pattern analysis revealed that classifiers were able to decode information about which color was presented at a particular retinal location from early visual areas, whereas regions further along the ventral stream exhibited biases for representing color as part of an object's surface, irrespective of its position on the retina. Additional analyses showed that although activity in V2 contained strong chromatic contrast information to support the early parsing of objects within a visual scene, activity in this area also signaled information about object surface color. These findings are consistent with the view that mechanisms underlying scene segmentation and the binding of color to object surfaces converge in V2.

  11. Eye guidance during real-world scene search: The role color plays in central and peripheral vision.

    PubMed

    Nuthmann, Antje; Malcolm, George L

    2016-01-01

    The visual system utilizes environmental features to direct gaze efficiently when locating objects. While previous research has isolated various features' contributions to gaze guidance, these studies generally used sparse displays and did not investigate how features facilitated search as a function of their location on the visual field. The current study investigated how features across the visual field--particularly color--facilitate gaze guidance during real-world search. A gaze-contingent window followed participants' eye movements, restricting color information to specified regions. Scene images were presented in full color, with color in the periphery and gray in central vision or gray in the periphery and color in central vision, or in grayscale. Color conditions were crossed with a search cue manipulation, with the target cued either with a word label or an exact picture. Search times increased as color information in the scene decreased. A gaze-data based decomposition of search time revealed color-mediated effects on specific subprocesses of search. Color in peripheral vision facilitated target localization, whereas color in central vision facilitated target verification. Picture cues facilitated search, with the effects of cue specificity and scene color combining additively. When available, the visual system utilizes the environment's color information to facilitate different real-world visual search behaviors based on the location within the visual field.
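The gaze-contingent manipulation described in this record can be sketched as a simple masking operation: color is retained within some radius of the current gaze position and the periphery is reduced to luminance, or vice versa. A toy illustration on a nested-list RGB image, with a hypothetical window radius and the standard Rec. 601 luma weights (the study's actual window geometry is not specified here):

```python
def gaze_contingent_window(image, gaze_xy, radius, color_center=True):
    """Restrict colour information to one region of the visual field.

    If color_center is True, pixels within `radius` of the gaze keep their
    colour and the periphery is converted to grayscale; if False, the
    assignment is reversed (colour periphery, gray centre).
    """
    gx, gy = gaze_xy
    out = []
    for y, row in enumerate(image):
        new_row = []
        for x, (r, g, b) in enumerate(row):
            inside = (x - gx) ** 2 + (y - gy) ** 2 <= radius ** 2
            if inside == color_center:
                new_row.append((r, g, b))              # colour retained
            else:
                lum = round(0.299 * r + 0.587 * g + 0.114 * b)
                new_row.append((lum, lum, lum))        # luminance only
        out.append(new_row)
    return out

# A small uniformly red test image with gaze at its centre.
img = [[(200, 0, 0) for _ in range(9)] for _ in range(9)]
central_color = gaze_contingent_window(img, (4, 4), radius=2, color_center=True)
```

Calling with `color_center=False` yields the complementary condition (color in the periphery, gray in central vision); the full-color and grayscale conditions are the two degenerate cases.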

  12. What makes a visualization memorable?

    PubMed

    Borkin, Michelle A; Vo, Azalea A; Bylinskii, Zoya; Isola, Phillip; Sunkavalli, Shashank; Oliva, Aude; Pfister, Hanspeter

    2013-12-01

    An ongoing debate in the Visualization community concerns the role that visualization types play in data understanding. In human cognition, understanding and memorability are intertwined. As a first step towards being able to ask questions about impact and effectiveness, here we ask: 'What makes a visualization memorable?' We ran the largest scale visualization study to date using 2,070 single-panel visualizations, categorized with visualization type (e.g., bar chart, line graph, etc.), collected from news media sites, government reports, scientific journals, and infographic sources. Each visualization was annotated with additional attributes, including ratings for data-ink ratios and visual densities. Using Amazon's Mechanical Turk, we collected memorability scores for hundreds of these visualizations, and discovered that observers are consistent in which visualizations they find memorable and forgettable. We find intuitive results (e.g., attributes like color and the inclusion of a human recognizable object enhance memorability) and less intuitive results (e.g., common graphs are less memorable than unique visualization types). Altogether our findings suggest that quantifying memorability is a general metric of the utility of information, an essential step towards determining how to design effective visualizations.

  13. Why Do Pictures, but Not Visual Words, Reduce Older Adults’ False Memories?

    PubMed Central

    Smith, Rebekah E.; Hunt, R. Reed; Dunlap, Kathryn R.

    2015-01-01

    Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both the case of pictures relative to visual words and visual words relative to auditory words alone, the benefit of picture and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment we provide the first simultaneous comparison of all three study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. PMID:26213799

  14. Why do pictures, but not visual words, reduce older adults' false memories?

    PubMed

    Smith, Rebekah E; Hunt, R Reed; Dunlap, Kathryn R

    2015-09-01

Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both cases of pictures relative to visual words and visual words relative to auditory words alone, the benefit of picture and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment, we provide the first simultaneous comparison of all 3 study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources.

  15. Learning to Recognize Patterns: Changes in the Visual Field with Familiarity

    NASA Astrophysics Data System (ADS)

    Bebko, James M.; Uchikawa, Keiji; Saida, Shinya; Ikeda, Mitsuo

    1995-01-01

Two studies were conducted to investigate changes which take place in the visual information processing of novel stimuli as they become familiar. Japanese writing characters (Hiragana and Kanji) which were unfamiliar to two native English speaking subjects were presented using a moving window technique to restrict their visual fields. Study time for visual recognition was recorded across repeated sessions, and with varying visual field restrictions. The critical visual field was defined as the size of the visual field beyond which further increases did not improve the speed of recognition performance. In the first study, when the Hiragana patterns were novel, subjects needed to see about half of the entire pattern simultaneously to maintain optimal performance. However, the critical visual field size decreased as familiarity with the patterns increased. These results were replicated in the second study with more complex Kanji characters. In addition, the critical field size decreased as pattern complexity decreased. We propose a three component model of pattern perception. In the first stage, a representation of the stimulus must be constructed by the subject, and restricting the visual field interferes dramatically with this component when stimuli are unfamiliar. With increased familiarity, subjects become able to reconstruct a previous representation from very small, unique segments of the pattern, analogous to the informative areas hypothesized by Loftus and Mackworth [J. Exp. Psychol., 4 (1978) 565].

  16. Massively parallel neural circuits for stereoscopic color vision: encoding, decoding and identification.

    PubMed

    Lazar, Aurel A; Slutskiy, Yevgeniy B; Zhou, Yiyin

    2015-03-01

Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus.

  17. General visual robot controller networks via artificial evolution

    NASA Astrophysics Data System (ADS)

    Cliff, David; Harvey, Inman; Husbands, Philip

    1993-08-01

    We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.

  18. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    PubMed

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented using GPU, exhibiting high computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was in the level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of details in virtual environments, without any hardware for head or eye tracking.

  19. Revisiting Frazier's subdeltas: enhancing datasets with dimensionality, better to understand geologic systems

    USGS Publications Warehouse

    Flocks, James

    2006-01-01

    Scientific knowledge from the past century is commonly represented by two-dimensional figures and graphs, as presented in manuscripts and maps. Using today's computer technology, this information can be extracted and projected into three- and four-dimensional perspectives. Computer models can be applied to datasets to provide additional insight into complex spatial and temporal systems. This process can be demonstrated by applying digitizing and modeling techniques to valuable information within widely used publications. The seminal paper by D. Frazier, published in 1967, identified 16 separate delta lobes formed by the Mississippi River during the past 6,000 yrs. The paper includes stratigraphic descriptions through geologic cross-sections, and provides distribution and chronologies of the delta lobes. The data from Frazier's publication are extensively referenced in the literature. Additional information can be extracted from the data through computer modeling. Digitizing and geo-rectifying Frazier's geologic cross-sections produce a three-dimensional perspective of the delta lobes. Adding the chronological data included in the report provides the fourth-dimension of the delta cycles, which can be visualized through computer-generated animation. Supplemental information can be added to the model, such as post-abandonment subsidence of the delta-lobe surface. Analyzing the regional, net surface-elevation balance between delta progradations and land subsidence is computationally intensive. By visualizing this process during the past 4,500 yrs through multi-dimensional animation, the importance of sediment compaction in influencing both the shape and direction of subsequent delta progradations becomes apparent. Visualization enhances a classic dataset, and can be further refined using additional data, as well as provide a guide for identifying future areas of study.

  20. RGB-D SLAM Combining Visual Odometry and Extended Information Filter

    PubMed Central

    Zhang, Heng; Liu, Yanli; Tan, Jindong; Xiong, Naixue

    2015-01-01

    In this paper, we present a novel RGB-D SLAM system based on visual odometry and an extended information filter, which does not require any other sensors or odometry. In contrast to the graph optimization approaches, this is more suitable for online applications. A visual dead reckoning algorithm based on visual residuals is devised, which is used to estimate motion control input. In addition, we use a novel descriptor called binary robust appearance and normals descriptor (BRAND) to extract features from the RGB-D frame and use them as landmarks. Furthermore, considering both the 3D positions and the BRAND descriptors of the landmarks, our observation model avoids explicit data association between the observations and the map by marginalizing the observation likelihood over all possible associations. Experimental validation is provided, which compares the proposed RGB-D SLAM algorithm with just RGB-D visual odometry and a graph-based RGB-D SLAM algorithm using the publicly-available RGB-D dataset. The results of the experiments demonstrate that our system is quicker than the graph-based RGB-D SLAM algorithm. PMID:26263990
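The observation model in this record avoids hard data association by marginalizing the likelihood over all candidate landmark associations. A 1-D toy version of that marginalization, assuming Gaussian position and descriptor-distance noise and a uniform association prior (the actual system uses 3D positions and binary BRAND descriptors; the variances here are illustrative):

```python
import math

def gauss_pdf(x, mu, var):
    """Density of a 1-D Gaussian with mean mu and variance var."""
    return math.exp(-((x - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def marginal_obs_likelihood(obs, landmarks, pos_var=0.1, desc_var=0.05):
    """p(z | x) = sum_j p(z | x, a=j) p(a=j), with a uniform prior over
    the association variable j (which landmark generated the observation).

    `obs` is an (observed position, observed descriptor value) pair in the
    current camera frame; each landmark is a (position, descriptor) pair.
    """
    prior = 1.0 / len(landmarks)
    obs_pos, obs_desc = obs
    return sum(
        gauss_pdf(obs_pos, lm_pos, pos_var)
        * gauss_pdf(obs_desc, lm_desc, desc_var)
        * prior
        for lm_pos, lm_desc in landmarks
    )

landmarks = [(0.0, 0.1), (5.0, 0.9)]
near = marginal_obs_likelihood((0.05, 0.12), landmarks)  # close to landmark 0
far = marginal_obs_likelihood((2.5, 0.5), landmarks)     # between the two
```

Because every landmark contributes a weighted term, the filter never commits to a single (possibly wrong) observation-to-landmark match; an observation near any landmark in both position and descriptor space simply receives a higher marginal likelihood.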

  1. Visual and skill effects on soccer passing performance, kinematics, and outcome estimations

    PubMed Central

    Basevitch, Itay; Tenenbaum, Gershon; Land, William M.; Ward, Paul

    2015-01-01

    The role of visual information and action representations in executing a motor task was examined from a mental representations approach. High-skill (n = 20) and low-skill (n = 20) soccer players performed a passing task to two targets at distances of 9.14 and 18.29 m, under three visual conditions: normal, occluded, and distorted vision (i.e., +4.0 corrective lenses, a visual acuity of approximately 6/75) without knowledge of results. Following each pass, participants estimated the relative horizontal distance from the target as the ball crossed the target plane. Kinematic data during each pass were also recorded for the shorter distance. Results revealed that performance on the motor task decreased as a function of visual information and task complexity (i.e., distance from target) regardless of skill level. High-skill players performed significantly better than low-skill players on both the actual passing and estimation tasks, at each target distance and visual condition. In addition, kinematic data indicated that high-skill participants were more consistent and had different kinematic movement patterns than low-skill participants. Findings contribute to the understanding of the underlying mechanisms required for successful performance in a self-paced, discrete and closed motor task. PMID:25784886

  2. Rethinking Visual Analytics for Streaming Data Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris

In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition are a necessary component of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead to divide the task between a large number of analysts makes simple parallelism intractable. Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources, and to streamline the sensemaking process on data that is massive, complex, incomplete, and uncertain in scenarios requiring human judgment.

  3. Information transfer rate with serial and simultaneous visual display formats

    NASA Astrophysics Data System (ADS)

    Matin, Ethel; Boff, Kenneth R.

    1988-04-01

    Information communication rate for a conventional display with three spatially separated windows was compared with rate for a serial display in which data frames were presented sequentially in one window. For both methods, each frame contained a randomly selected digit with various amounts of additional display 'clutter.' Subjects recalled the digits in a prescribed order. Large rate differences were found, with faster serial communication for all levels of the clutter factors. However, the rate difference was most pronounced for highly cluttered displays. An explanation for the latter effect in terms of visual masking in the retinal periphery was supported by the results of a second experiment. The working hypothesis that serial displays can speed information transfer for automatic but not for controlled processing is discussed.

  4. Neural practice effect during cross-modal selective attention: Supra-modal and modality-specific effects.

    PubMed

    Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi

    2018-05-16

    Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, the supra-modal and modality-specific practice effect during cross-modal selective attention, and moreover whether the practice effect shows similar modality preferences as the visual dominance effect in the multisensory environment. To answer the above two questions, we adopted a cross-modal selective attention paradigm in conjunction with the hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention more flexibly adapted behavior with practice than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., the supra-modal mechanisms. On the other hand, functional decoupling between the auditory and the visual system was observed with the progress of practice, which varied as a function of the modality attended. The auditory system was functionally decoupled with both the dorsal and ventral visual stream during auditory attention while was decoupled only with the ventral visual stream during visual attention. To efficiently suppress the irrelevant visual information with practice, auditory attention needs to additionally decouple the auditory system from the dorsal visual stream. 
The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Developmental changes in the neural influence of sublexical information on semantic processing.

    PubMed

    Lee, Shu-Hui; Booth, James R; Chou, Tai-Li

    2015-07-01

Functional magnetic resonance imaging (fMRI) was used to examine the developmental changes in a group of normally developing children (aged 8-12) and adolescents (aged 13-16) during semantic processing. We manipulated association strength (i.e. a global reading unit) and semantic radical (i.e. a local reading unit) to explore the interaction of lexical and sublexical semantic information in making semantic judgments. In the semantic judgment task, two types of stimuli were used: visually-similar (i.e. shared a semantic radical) versus visually-dissimilar (i.e. did not share a semantic radical) character pairs. Participants were asked to indicate if two Chinese characters, arranged according to association strength, were related in meaning. The results showed greater developmental increases in activation in left angular gyrus (BA 39) in the visually-similar compared to the visually-dissimilar pairs for the strong association. There were also greater age-related increases in angular gyrus for the strong compared to weak association in the visually-similar pairs. Both of these results suggest that shared semantics at the sublexical level facilitates the integration of overlapping features at the lexical level in older children. In addition, there was a larger developmental increase in left posterior middle temporal gyrus (BA 21) for the weak compared to strong association in the visually-dissimilar pairs, suggesting conflicting sublexical information placed greater demands on access to lexical representations in the older children. Altogether, these results suggest that older children are more sensitive to sublexical information when processing lexical representations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Does It Really Matter Where You Look When Walking on Stairs? Insights from a Dual-Task Study

    PubMed Central

    Miyasike-daSilva, Veronica; McIlroy, William E.

    2012-01-01

Although the visual system is known to provide relevant information to guide stair locomotion, there is less understanding of the specific contributions of foveal and peripheral visual field information. The present study investigated the specific role of foveal vision during stair locomotion and ground-stairs transitions by using a dual-task paradigm to influence the ability to rely on foveal vision. Fifteen healthy adults (26.9±3.3 years; 8 females) ascended a 7-step staircase under four conditions: no secondary tasks (CONTROL); gaze fixation on a fixed target located at the end of the pathway (TARGET); visual reaction time task (VRT); and auditory reaction time task (ART). Gaze fixations towards stair features were significantly reduced in TARGET and VRT compared to CONTROL and ART. Despite the reduced fixations, participants were able to successfully ascend stairs and rarely used the handrail. Step time was increased during VRT compared to CONTROL on most stair steps. Navigating the transition steps did not require more gaze fixations than the middle steps. However, reaction time tended to increase during locomotion on transitions, suggesting additional executive demands during this phase. These findings suggest that foveal vision may not be an essential source of visual information about stair features to guide stair walking, despite the unique control challenges at transition phases highlighted by phase-specific dual-task costs. Instead, the tendency to look at the steps in usual conditions likely provides a stable reference frame for extracting visual information about step features from the entire visual field. PMID:22970297

  7. Promoting Visualization Skills through Deconstruction Using Physical Models and a Visualization Activity Intervention

    NASA Astrophysics Data System (ADS)

    Schiltz, Holly Kristine

Visualization skills are important in learning chemistry, as these skills have been shown to correlate with high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: through verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of students taking inorganic chemistry and working with the physical model systems demonstrated difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations. 
In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect instructors' modeled visualization artifacts had on students. No patterns emerged from the passive observation of visualization artifacts in lecture or recitation, but the need to elicit visual information from students was made clear. Deconstruction proved to be a valuable method for instruction and assessment of visual information. Three strategies for using deconstruction in teaching were distilled from the lessons and observations of the student focus groups: begin with observations of what is given in an image and what it is composed of; identify the relationships between components to find additional operations in different environments about the molecule; and deconstruct the steps of challenging questions to reveal mistakes. An intervention was developed to teach students to use deconstruction and verbalization to analyze complex visualization tasks and employ the principles of the theoretical framework. The activities were scaffolded to introduce increasingly challenging concepts to students, but also to support them as they learned visually demanding chemistry concepts. Several themes were observed in the analysis of the visualization activities. Students used deconstruction by documenting which parts of the images were useful for interpretation of the visual. Students identified valid patterns and rules within the images, which signified understanding of the arrangement of information presented in the representation. Successful strategy communication was identified when students documented personal strategies that allowed them to complete the activity tasks. Finally, students demonstrated the ability to extend symmetry skills to advanced applications they had not previously seen. 
This work shows how the use of deconstruction and verbalization may have a great impact on how students master difficult topics; combined, they offer students a powerful strategy for approaching visually demanding chemistry problems and give the instructor unique insight into students' mentally constructed strategies.

  8. Motor-cognitive dual-task performance: effects of a concurrent motor task on distinct components of visual processing capacity.

    PubMed

    Künstler, E C S; Finke, K; Günther, A; Klingner, C; Witte, O; Bublak, P

    2018-01-01

Dual tasking, or the simultaneous execution of two continuous tasks, is frequently associated with a performance decline that can be explained within a capacity sharing framework. In this study, we assessed the effects of a concurrent motor task on the efficiency of visual information uptake based on the 'theory of visual attention' (TVA). TVA provides parameter estimates reflecting distinct components of visual processing capacity: perceptual threshold, visual processing speed, and visual short-term memory (VSTM) storage capacity. Moreover, goodness-of-fit values and bootstrapping estimates were derived to test whether the TVA model is validly applicable under dual-task conditions, and whether the robustness of parameter estimates is comparable in single- and dual-task conditions. Twenty-four subjects of middle to older age performed a continuous tapping task and a visual processing task (whole report of briefly presented letter arrays) under both single- and dual-task conditions. Results suggest a decline of both visual processing speed and VSTM storage capacity under dual-task conditions, while the perceptual threshold remained unaffected by a concurrent motor task. In addition, goodness-of-fit values and bootstrapping estimates support the notion that participants processed the visual task in a qualitatively comparable, although quantitatively less efficient, way under dual-task conditions. The results support a capacity sharing account of motor-cognitive dual tasking and suggest that even performing a relatively simple motor task relies on central attentional capacity that is necessary for efficient visual information uptake.
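    The TVA whole-report paradigm mentioned above can be illustrated with a minimal simulation: letters race to be encoded at exponential rates, encoding starts only after the perceptual threshold, and VSTM retains at most K items. This is a sketch, assuming equal attentional weights across letters (the function name and parameter values are ours; fitted TVA models estimate weights and rates per display position):

    ```python
    import random

    def tva_whole_report_score(C, t0, K, n_letters, exposure, n_sim=20000, seed=1):
        """Expected number of letters reported in a TVA-style whole-report trial.

        C        -- total visual processing speed (items/s), split equally here
        t0       -- perceptual threshold (s): no encoding before t0
        K        -- VSTM storage capacity (max items retained)
        exposure -- display duration (s)
        """
        rng = random.Random(seed)
        v = C / n_letters  # per-letter encoding rate (equal-weight assumption)
        total = 0.0
        for _ in range(n_sim):
            # each letter finishes encoding at t0 + an exponential race time
            done = sum(1 for _ in range(n_letters)
                       if t0 + rng.expovariate(v) <= exposure)
            total += min(done, K)  # VSTM can hold at most K items
        return total / n_sim
    ```

    The simulated score is zero below the threshold t0, rises with exposure duration, and saturates near K, which is the qualitative pattern the three TVA parameters are estimated from.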

  9. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data

    PubMed Central

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-01-01

Background Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. Results We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. Conclusion MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine. PMID:17937818

  10. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data.

    PubMed

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-10-15

    Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.

  11. Visual question answering using hierarchical dynamic memory networks

    NASA Astrophysics Data System (ADS)

    Shang, Jiayu; Li, Shiren; Duan, Zhikui; Huang, Junwei

    2018-04-01

Visual Question Answering (VQA) is one of the most popular research fields in machine learning; it aims to let the computer learn to answer natural language questions about images. In this paper, we propose a new method called hierarchical dynamic memory networks (HDMN), which takes both question attention and visual attention into consideration, inspired by the Co-Attention method, currently among the best-performing algorithms. Additionally, we use bi-directional LSTMs, which retain more information from the question and image, to replace the old unit, so that we can capture information from both past and future sentences. We then rebuild the hierarchical architecture for not only question attention but also visual attention. Moreover, we accelerate the algorithm with Batch Normalization, which helps the network converge more quickly. The experimental results show that our model improves on the state of the art on the large COCO-QA dataset, compared with other methods.
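    The question attention and visual attention this abstract refers to both rest on the generic soft-attention building block: score candidates against a query, softmax the scores, and blend the values. A minimal plain-Python sketch of that building block only (this is not the paper's HDMN implementation, which stacks such attention hierarchically over question words and image regions):

    ```python
    import math

    def soft_attention(query, keys, values):
        """Soft attention: softmax(query . key_i)-weighted sum of values.

        query  -- vector (e.g., a question encoding)
        keys   -- list of vectors scored against the query
        values -- list of vectors (same length as keys) to be blended
        Returns the blended context vector and the attention weights.
        """
        scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
        m = max(scores)                       # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        dim = len(values[0])
        context = [sum(w * v[i] for w, v in zip(weights, values))
                   for i in range(dim)]
        return context, weights
    ```

    With identical keys the weights are uniform; as one key aligns more strongly with the query, its value dominates the context vector.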

  12. Colour and luminance contrasts predict the human detection of natural stimuli in complex visual environments.

    PubMed

    White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J

    2017-09-01

Much of what we know about human colour perception has come from psychophysical studies conducted in tightly-controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminance) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectation based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour. © 2017 The Author(s).
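    An additive linear model of this form (detection as intercept plus luminance-contrast and saturation-contrast terms) can be fitted by ordinary least squares. A stdlib-only sketch with illustrative variable names; the paper's actual detection measure and fitting procedure may differ:

    ```python
    def fit_additive_detection(lum, sat, detect):
        """OLS fit of detect ~ b0 + b1*lum_contrast + b2*sat_contrast.

        Builds the 3x3 normal equations (X^T X) b = X^T y and solves them
        by Gaussian elimination with partial pivoting.
        """
        rows = [[1.0, l, s] for l, s in zip(lum, sat)]
        xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
               for i in range(3)]
        xty = [sum(r[i] * y for r, y in zip(rows, detect)) for i in range(3)]
        a = [xtx[i] + [xty[i]] for i in range(3)]  # augmented matrix
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
            a[col], a[piv] = a[piv], a[col]
            for r in range(col + 1, 3):
                f = a[r][col] / a[col][col]
                for c in range(col, 4):
                    a[r][c] -= f * a[col][c]
        b = [0.0, 0.0, 0.0]
        for i in reversed(range(3)):
            b[i] = (a[i][3] - sum(a[i][j] * b[j]
                                  for j in range(i + 1, 3))) / a[i][i]
        return b  # [intercept, luminance slope, saturation slope]
    ```

    On noiseless synthetic data the fit recovers the generating coefficients exactly; with real detection data, the relative sizes of the two slopes quantify how much each contrast channel contributes.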

  13. Language in the Classroom: Studies of the Pygmalion Effect

    ERIC Educational Resources Information Center

    Williams, Frederick; Whitehead, Jack L.

    1971-01-01

    Research is reported on the degree to which the speaker characteristics of children can be related to the attitudes of teachers, in the absence, and in the presence of additional visual information about the speaker. (JM)

  14. The effects of aging on the working memory processes of multimodal information.

    PubMed

    Solesio-Jofre, Elena; López-Frutos, José María; Cashdollar, Nathan; Aurtenetxe, Sara; de Ramón, Ignacio; Maestú, Fernando

    2017-05-01

Normal aging is associated with deficits in working memory processes. However, the majority of research has focused on storage or inhibitory processes using unimodal paradigms, without addressing their relationships using different sensory modalities. Hence, we pursued two objectives. The first was to examine the effects of aging on storage and inhibitory processes. The second was to evaluate aging effects on multisensory integration of visual and auditory stimuli. To this end, young and older participants performed a multimodal task for visual and auditory pairs of stimuli with increasing memory load at encoding and interference during retention. Our results showed an age-related increased vulnerability to interrupting and distracting interference, reflecting inhibitory deficits related to the off-line reactivation and on-line suppression of relevant and irrelevant information, respectively. Storage capacity was impaired with increasing task demands in both age groups. Additionally, older adults showed a deficit in multisensory integration, with poorer performance for new visual compared to new auditory information.

  15. Interactive visualization and analysis of multimodal datasets for surgical applications.

    PubMed

    Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James

    2012-12-01

    Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.

  16. Chasing the negawatt: visualization for sustainable living.

    PubMed

    Bartram, Lyn; Rodgers, Johnny; Muise, Kevin

    2010-01-01

Energy and resource management is an important and growing research area at the intersection of conservation, sustainable design, alternative energy production, and social behavior. Energy consumption can be significantly reduced by simply changing how occupants inhabit and use buildings, with little or no additional costs. Reflecting this fact, an emerging measure of grid energy capacity is the negawatt: a unit of power saved by increasing efficiency or reducing consumption. Visualization clearly has an important role in enabling residents to understand and manage their energy use. This role is tied to providing real-time feedback of energy use, which encourages people to conserve energy. The challenge is to understand not only what kinds of visualizations are most effective but also where and how they fit into a larger information system to help residents make informed decisions. In this article, we also examine the effective display of home energy-use data using a net-zero solar-powered home (North House) and the Adaptive Living Interface System (ALIS), North House's information backbone.

  17. A tactual display aid for primary flight training

    NASA Technical Reports Server (NTRS)

    Gilson, R. D.

    1979-01-01

A means of flight instruction is discussed. In addition to verbal assistance, control feedback was continuously presented via a nonvisual means utilizing touch. A kinesthetic-tactile (KT) display was used as a readout and tracking device for a computer-generated signal of desired angle of attack during the approach and landing. Airspeed and glide path information was presented via KT or visual heads-up display techniques. Performance with the heads-up display of pitch information was shown to be significantly better than performance with the KT pitch display. Testing without the displays showed that novice pilots who had received tactile pitch error information performed both pitch and throttle control tasks significantly better than those who had received the same information from the visual heads-up display of pitch during the test series of approaches to landing.

  18. Risk based meat hygiene--examples on food chain information and visual meat inspection.

    PubMed

    Ellerbroek, Lüppo

    2007-08-01

Regulation (EC) No. 854/2004 refers in particular to a broad spectrum of official supervision of products of animal origin. Supervision should be based on the most current relevant information available. The proposal presented includes forms for the implementation of Food Chain Information (FCI) as laid down in Regulation (EC) No. 853/2004, as well as suggestions for the implementation of visual inspection and criteria for a practical approach to assist the competent authority and the business operator. These suggestions are summarised in two forms covering the FCI and the practical information from the farm of origin and from the slaughterhouse that the competent authority needs in order to permit visual meat inspection of fattening pigs. The information requested from the farm includes, for example, the animal loss rate during the fattening period and diagnostic findings on carcasses and organs during the meat inspection procedure. The evaluation of findings in liver and pleura at the slaughterhouse indicates a correlation with the general health status under the fattening conditions at farm level. The criteria developed are in principle suited to a "gold standard" for the health status of fattening pigs, and they may serve as a practical implementation of the term Good Farming Practice in relation to consumer protection. Visual meat inspection shall be permitted only for fattening pigs from farms fulfilling this "gold standard". It is proposed to integrate the two forms for FCI and the additional information for authorisation of visual meat inspection of fattening pigs into the DG SANCO working document "Commission regulation of laying down specific rules on official controls for the inspection of meat" (SANCO/2696/2006).

  19. Vestibular Activation Differentially Modulates Human Early Visual Cortex and V5/MT Excitability and Response Entropy

    PubMed Central

    Guzman-Lopez, Jessica; Arshad, Qadeer; Schultz, Simon R; Walsh, Vincent; Yousif, Nada

    2013-01-01

    Head movement imposes the additional burdens on the visual system of maintaining visual acuity and determining the origin of retinal image motion (i.e., self-motion vs. object-motion). Although maintaining visual acuity during self-motion is effected by minimizing retinal slip via the brainstem vestibular-ocular reflex, higher order visuovestibular mechanisms also contribute. Disambiguating self-motion versus object-motion also invokes higher order mechanisms, and a cortical visuovestibular reciprocal antagonism is propounded. Hence, one prediction is of a vestibular modulation of visual cortical excitability and indirect measures have variously suggested none, focal or global effects of activation or suppression in human visual cortex. Using transcranial magnetic stimulation-induced phosphenes to probe cortical excitability, we observed decreased V5/MT excitability versus increased early visual cortex (EVC) excitability, during vestibular activation. In order to exclude nonspecific effects (e.g., arousal) on cortical excitability, response specificity was assessed using information theory, specifically response entropy. Vestibular activation significantly modulated phosphene response entropy for V5/MT but not EVC, implying a specific vestibular effect on V5/MT responses. This is the first demonstration that vestibular activation modulates human visual cortex excitability. Furthermore, using information theory, not previously used in phosphene response analysis, we could distinguish between a specific vestibular modulation of V5/MT excitability from a nonspecific effect at EVC. PMID:22291031
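    The response entropy used above is the Shannon entropy of the distribution of phosphene reports: higher entropy means less predictable responding, and a selective change in entropy indicates a specific (not merely arousal-driven) modulation. A minimal sketch, where binning responses into discrete categories is an assumption of the illustration:

    ```python
    import math

    def response_entropy(counts):
        """Shannon entropy (bits) of a discrete response distribution.

        counts -- occurrences of each response category (e.g., phosphene
        reports binned by perceived strength). Zero-count categories
        contribute nothing, by the convention 0*log(0) = 0.
        """
        total = sum(counts)
        return -sum((c / total) * math.log2(c / total)
                    for c in counts if c > 0)
    ```

    A uniform spread over four categories gives the maximum of 2 bits, while a single dominant category gives 0 bits; a condition-dependent shift between these extremes is the kind of effect the entropy analysis detects.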

  20. Visual exploration and analysis of human-robot interaction rules

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Boyles, Michael J.

    2013-01-01

    We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. 
As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.

  1. Supranormal orientation selectivity of visual neurons in orientation-restricted animals.

    PubMed

    Sasaki, Kota S; Kimura, Rui; Ninomiya, Taihei; Tabuchi, Yuka; Tanaka, Hiroki; Fukui, Masayuki; Asada, Yusuke C; Arai, Toshiya; Inagaki, Mikio; Nakazono, Takayuki; Baba, Mika; Kato, Daisuke; Nishimoto, Shinji; Sanada, Takahisa M; Tani, Toshiki; Imamura, Kazuyuki; Tanaka, Shigeru; Ohzawa, Izumi

    2015-11-16

    Altered sensory experience in early life often leads to remarkable adaptations so that humans and animals can make the best use of the available information in a particular environment. By restricting visual input to a limited range of orientations in young animals, this investigation shows that stimulus selectivity, e.g., the sharpness of tuning of single neurons in the primary visual cortex, is modified to match a particular environment. Specifically, neurons tuned to an experienced orientation in orientation-restricted animals show sharper orientation tuning than neurons in normal animals, whereas the opposite was true for neurons tuned to non-experienced orientations. This sharpened tuning appears to be due to elongated receptive fields. Our results demonstrate that restricted sensory experiences can sculpt the supranormal functions of single neurons tailored for a particular environment. The above findings, in addition to the minimal population response to orientations close to the experienced one, agree with the predictions of a sparse coding hypothesis in which information is represented efficiently by a small number of activated neurons. This suggests that early brain areas adopt an efficient strategy for coding information even when animals are raised in a severely limited visual environment where sensory inputs have an unnatural statistical structure.

  2. Supranormal orientation selectivity of visual neurons in orientation-restricted animals

    PubMed Central

    Sasaki, Kota S.; Kimura, Rui; Ninomiya, Taihei; Tabuchi, Yuka; Tanaka, Hiroki; Fukui, Masayuki; Asada, Yusuke C.; Arai, Toshiya; Inagaki, Mikio; Nakazono, Takayuki; Baba, Mika; Kato, Daisuke; Nishimoto, Shinji; Sanada, Takahisa M.; Tani, Toshiki; Imamura, Kazuyuki; Tanaka, Shigeru; Ohzawa, Izumi

    2015-01-01

    Altered sensory experience in early life often leads to remarkable adaptations so that humans and animals can make the best use of the available information in a particular environment. By restricting visual input to a limited range of orientations in young animals, this investigation shows that stimulus selectivity, e.g., the sharpness of tuning of single neurons in the primary visual cortex, is modified to match a particular environment. Specifically, neurons tuned to an experienced orientation in orientation-restricted animals show sharper orientation tuning than neurons in normal animals, whereas the opposite was true for neurons tuned to non-experienced orientations. This sharpened tuning appears to be due to elongated receptive fields. Our results demonstrate that restricted sensory experiences can sculpt the supranormal functions of single neurons tailored for a particular environment. The above findings, in addition to the minimal population response to orientations close to the experienced one, agree with the predictions of a sparse coding hypothesis in which information is represented efficiently by a small number of activated neurons. This suggests that early brain areas adopt an efficient strategy for coding information even when animals are raised in a severely limited visual environment where sensory inputs have an unnatural statistical structure. PMID:26567927

  3. “Distracters” Do Not Always Distract: Visual Working Memory for Angry Faces is Enhanced by Incidental Emotional Words

    PubMed Central

    Jackson, Margaret C.; Linden, David E. J.; Raymond, Jane E.

    2012-01-01

    We are often required to filter out distraction in order to focus on a primary task during which working memory (WM) is engaged. Previous research has shown that negative versus neutral distracters presented during a visual WM maintenance period significantly impair memory for neutral information. However, the contents of WM are often also emotional in nature. The question we address here is how incidental information might impact upon visual WM when both this and the memory items contain emotional information. We presented emotional versus neutral words during the maintenance interval of an emotional visual WM faces task. Participants encoded two angry or happy faces into WM, and several seconds into a 9 s maintenance period a negative, positive, or neutral word was flashed on the screen three times. A single neutral test face was presented for retrieval with a face identity that was either present or absent in the preceding study array. WM for angry face identities was significantly better when an emotional (negative or positive) versus neutral (or no) word was presented. In contrast, WM for happy face identities was not significantly affected by word valence. These findings suggest that the presence of emotion within an intervening stimulus boosts the emotional value of threat-related information maintained in visual WM and thus improves performance. In addition, we show that incidental events that are emotional in nature do not always distract from an ongoing WM task. PMID:23112782

  4. Help Physically Handicapped Students Achieve Business Skills.

    ERIC Educational Resources Information Center

    Simon, Judith C.

    1980-01-01

    Lists suggestions for teachers to prepare them for teaching clerical skills to handicapped students. Specifically discusses teaching typewriting to students with missing fingers or arms and business skills to hearing and visually impaired students. Lists sources of additional information. (JOW)

  5. Multi-modal information processing for visual workload relief

    NASA Technical Reports Server (NTRS)

    Burke, M. W.; Gilson, R. D.; Jagacinski, R. J.

    1980-01-01

    The simultaneous performance of two single-dimensional compensatory tracking tasks, one with the left hand and one with the right hand, is discussed. The tracking performed with the left hand was considered the primary task and was performed with a visual display or a quickened kinesthetic-tactual (KT) display. The right-handed tracking was considered the secondary task and was carried out only with a visual display. Although the two primary task displays had afforded equivalent performance in a critical tracking task performed alone, in the dual-task situation the quickened KT primary display resulted in superior secondary visual task performance. Comparisons of various combinations of primary and secondary visual displays in integrated or separated formats indicate that the superiority of the quickened KT display is not simply due to the elimination of visual scanning. Additional testing indicated that quickening per se also is not the immediate cause of the observed KT superiority.

  6. Computation and visualization of uncertainty in surgical navigation.

    PubMed

    Simpson, Amber L; Ma, Burton; Vasarhelyi, Edward M; Borschneck, Dan P; Ellis, Randy E; James Stewart, A

    2014-09-01

    Surgical displays do not show uncertainty information with respect to the position and orientation of instruments. Data is presented as though it were perfect; surgeons unaware of this uncertainty could make critical navigational mistakes. The propagation of uncertainty to the tip of a surgical instrument is described and a novel uncertainty visualization method is proposed. An extensive study with surgeons has examined the effect of uncertainty visualization on surgical performance with pedicle screw insertion, a procedure highly sensitive to uncertain data. It is shown that surgical performance (time to insert screw, degree of breach of pedicle, and rotation error) is not impeded by the additional cognitive burden imposed by uncertainty visualization. Uncertainty can be computed in real time and visualized without adversely affecting surgical performance, and the best method of uncertainty visualization may depend upon the type of navigation display. Copyright © 2013 John Wiley & Sons, Ltd.
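The propagation step described above can be sketched as a first-order (Jacobian) covariance propagation from the tracked pose to the instrument tip. The planar geometry, function name, and parameters below are illustrative assumptions for a minimal sketch, not the paper's full 3-D formulation:

```python
import numpy as np

def propagate_tip_uncertainty(p_cov, theta_cov, tip_offset, theta=0.0):
    """First-order propagation of 2-D pose uncertainty to an instrument tip.

    Tip position = tracked origin + R(theta) @ tip_offset; the tip covariance
    is J @ blkdiag(p_cov, theta_cov) @ J.T, with J the Jacobian of the tip
    position with respect to the pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    # Derivative of the rotation matrix with respect to theta, applied to
    # the offset: this is how angular uncertainty leverages into tip error.
    dR = np.array([[-s, -c], [c, -s]])
    J = np.hstack([np.eye(2), (dR @ tip_offset).reshape(2, 1)])
    pose_cov = np.zeros((3, 3))
    pose_cov[:2, :2] = p_cov
    pose_cov[2, 2] = theta_cov
    return J @ pose_cov @ J.T
```

Note how a longer instrument (larger `tip_offset`) amplifies the same angular uncertainty into larger positional uncertainty at the tip, which is exactly why displaying tip uncertainty matters for long pedicle probes.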

  7. [Challenges in Traffic for Blind and Visually Impaired People and Strategies for their Safe Participation].

    PubMed

    Högner, N

    2015-08-01

Blind and visually impaired people experience special risks and hazards in road traffic. This refers to participation as a driver, bicycle rider and pedestrian. These risks are shown by a review of international research studies and a study by the author, in which 45 people with Usher syndrome were asked about their accident rates and causes as driver, bicycle rider and pedestrian. In addition, basic legal information on the visual requirements for participation in road traffic by people with visual impairment is presented. The research studies show that blind and visually impaired persons are exposed to particularly high risks in traffic. These risks can be reduced through the acquisition of skills and coping strategies such as training in orientation and mobility. People with visual impairment need special programmes which help to reduce traffic hazards. Georg Thieme Verlag KG Stuttgart · New York.

  8. Analysis of EEG signals related to artists and nonartists during visual perception, mental imagery, and rest using approximate entropy.

    PubMed

    Shourie, Nasrin; Firoozabadi, Mohammad; Badie, Kambiz

    2014-01-01

In this paper, differences between multichannel EEG signals of artists and nonartists were analyzed during visual perception and mental imagery of some paintings and at resting condition using approximate entropy (ApEn). It was found that ApEn is significantly higher for artists during the visual perception and the mental imagery in the frontal lobe, suggesting that artists process more information during these conditions. It was also observed that ApEn decreases for the two groups during the visual perception due to increasing mental load; however, their variation patterns are different. This difference may be used for measuring progress in novice artists. In addition, it was found that ApEn is significantly lower during the visual perception than the mental imagery in some of the channels, suggesting that the visual perception task requires more cerebral effort.
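Approximate entropy has a standard definition (Pincus's ApEn): the log-likelihood that signal templates which match at length m still match at length m + 1. A minimal sketch, assuming the conventional parameters m = 2 and r = 0.2 × SD rather than the authors' exact pipeline:

```python
import numpy as np

def approximate_entropy(signal, m=2, r_factor=0.2):
    """Approximate entropy of a 1-D signal; lower values indicate more
    regularity, higher values more irregularity/information."""
    x = np.asarray(signal, dtype=float)
    r = r_factor * x.std()  # tolerance as a fraction of the SD (convention)

    def phi(m):
        n = len(x) - m + 1
        # Embed the signal into overlapping m-length templates.
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of templates.
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # Fraction of templates within tolerance r (self-matches included).
        c = np.sum(dist <= r, axis=1) / n
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

A regular signal (e.g., a sine wave) yields a lower ApEn than white noise, matching the abstract's reading of higher ApEn as more information processing.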

  9. Software Tool for Researching Annotations of Proteins (STRAP): Open-Source Protein Annotation Software with Data Visualization

    PubMed Central

    Bhatia, Vivek N.; Perlman, David H.; Costello, Catherine E.; McComb, Mark E.

    2009-01-01

In order that biological meaning may be derived and testable hypotheses may be built from proteomics experiments, assignments of proteins identified by mass spectrometry or other techniques must be supplemented with additional notation, such as information on known protein functions, protein-protein interactions, or biological pathway associations. Collecting, organizing, and interpreting this data often requires the input of experts in the biological field of study, in addition to the time-consuming search for and compilation of information from online protein databases. Furthermore, visualizing this bulk of information can be challenging due to the limited availability of easy-to-use and freely available tools for this process. In response to these constraints, we have undertaken the design of software to automate annotation and visualization of proteomics data in order to accelerate the pace of research. Here we present the Software Tool for Researching Annotations of Proteins (STRAP) – a user-friendly, open-source C# application. STRAP automatically obtains gene ontology (GO) terms associated with proteins in a proteomics results ID list using the freely accessible UniProtKB and EBI GOA databases. Summarized in an easy-to-navigate tabular format, STRAP includes meta-information on the protein in addition to complementary GO terminology. Additionally, this information can be edited by the user so that in-house expertise on particular proteins may be integrated into the larger dataset. STRAP provides a sortable tabular view for all terms, as well as graphical representations of GO-term association data in pie (biological process, cellular component and molecular function) and bar charts (cross comparison of sample sets) to aid in the interpretation of large datasets and differential analysis experiments. 
Furthermore, proteins of interest may be exported as a unique FASTA-formatted file to allow for customizable re-searching of mass spectrometry data, and gene names corresponding to the proteins in the lists may be encoded in the Gaggle microformat for further characterization, including pathway analysis. STRAP, a tutorial, and the C# source code are freely available from http://cpctools.sourceforge.net. PMID:19839595
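The pie charts STRAP draws reduce to a tally of GO annotations under the three root ontologies. A minimal sketch of that tallying step, assuming a hypothetical in-memory data shape (protein id mapped to (namespace, term) pairs) rather than STRAP's actual UniProtKB/GOA retrieval code:

```python
from collections import Counter

def summarize_go_terms(annotations):
    """Tally GO annotations by root ontology, the three categories charted
    as pies: biological process, cellular component, molecular function.

    `annotations` maps protein id -> list of (namespace, GO term) pairs."""
    counts = {ns: Counter() for ns in
              ("biological_process", "cellular_component", "molecular_function")}
    for terms in annotations.values():
        for namespace, term in terms:
            counts[namespace][term] += 1
    return counts
```

Each per-namespace `Counter` gives the slice sizes for one pie chart; comparing the counters across sample sets gives the bar-chart cross comparison.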

  10. Controlling the spotlight of attention: visual span size and flexibility in schizophrenia.

    PubMed

    Elahipanah, Ava; Christensen, Bruce K; Reingold, Eyal M

    2011-10-01

    The current study investigated the size and flexible control of visual span among patients with schizophrenia during visual search performance. Visual span is the region of the visual field from which one extracts information during a single eye fixation, and a larger visual span size is linked to more efficient search performance. Therefore, a reduced visual span may explain patients' impaired performance on search tasks. The gaze-contingent moving window paradigm was used to estimate the visual span size of patients and healthy participants while they performed two different search tasks. In addition, changes in visual span size were measured as a function of two manipulations of task difficulty: target-distractor similarity and stimulus familiarity. Patients with schizophrenia searched more slowly across both tasks and conditions. Patients also demonstrated smaller visual span sizes on the easier search condition in each task. Moreover, healthy controls' visual span size increased as target discriminability or distractor familiarity increased. This modulation of visual span size, however, was reduced or not observed among patients. The implications of the present findings, with regard to previously reported visual search deficits, and other functional and structural abnormalities associated with schizophrenia, are discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. High-level user interfaces for transfer function design with semantics.

    PubMed

    Salama, Christof Rezk; Keller, Maik; Kohlmann, Peter

    2006-01-01

    Many sophisticated techniques for the visualization of volumetric data such as medical data have been published. While existing techniques are mature from a technical point of view, managing the complexity of visual parameters is still difficult for non-expert users. To this end, this paper presents new ideas to facilitate the specification of optical properties for direct volume rendering. We introduce an additional level of abstraction for parametric models of transfer functions. The proposed framework allows visualization experts to design high-level transfer function models which can intuitively be used by non-expert users. The results are user interfaces which provide semantic information for specialized visualization problems. The proposed method is based on principal component analysis as well as on concepts borrowed from computer animation.

  12. Crawling and walking infants see the world differently

    PubMed Central

    Kretch, Kari S.; Franchak, John M.; Adolph, Karen E.

    2013-01-01

How does visual experience change over development? To investigate changes in visual input over the developmental transition from crawling to walking, thirty 13-month-olds crawled or walked down a straight path wearing a head-mounted eye-tracker that recorded gaze direction and head-centered field of view. Thirteen additional infants wore a motion-tracker that recorded head orientation. Compared with walkers, crawlers’ field of view contained fewer walls and more floor. Walkers directed gaze straight ahead at caregivers, whereas crawlers looked down at the floor. Crawlers obtained visual information about targets at higher elevations—caregivers and toys—by craning their heads upward and sitting up to bring the room into view. Findings indicate that visual experiences are intimately tied to infants’ posture. PMID:24341362

  13. Real-Time Aerodynamic Flow and Data Visualization in an Interactive Virtual Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2005-01-01

Significant advances have been made to non-intrusive flow field diagnostics in the past decade. Camera based techniques are now capable of determining physical qualities such as surface deformation, surface pressure and temperature, flow velocities, and molecular species concentration. In each case, extracting the pertinent information from the large volume of acquired data requires powerful and efficient data visualization tools. The additional requirement for real-time visualization is fueled by an increased emphasis on minimizing test time in expensive facilities. This paper will address a capability titled LiveView3D, which is the first step in the development phase of an in-depth, real-time data visualization and analysis tool for use in aerospace testing facilities.

  14. Explicit awareness supports conditional visual search in the retrieval guidance paradigm.

    PubMed

    Buttaccio, Daniel R; Lange, Nicholas D; Hahn, Sowon; Thomas, Rick P

    2014-01-01

In four experiments we explored whether participants would be able to use probabilistic prompts to simplify perceptually demanding visual search in a task we call the retrieval guidance paradigm. On each trial a memory prompt appeared prior to (and during) the search task and the diagnosticity of the prompt(s) was manipulated to provide complete, partial, or non-diagnostic information regarding the target's color on each trial (Experiments 1-3). In Experiment 1 we found that more diagnostic prompts were associated with faster visual search performance. However, similar visual search behavior was observed in Experiment 2 when the diagnosticity of the prompts was eliminated, suggesting that participants in Experiment 1 were merely relying on base rate information to guide search and were not utilizing the prompts. In Experiment 3 participants were informed of the relationship between the prompts and the color of the target and this was associated with faster search performance relative to Experiment 1, suggesting that the participants were using the prompts to guide search. Additionally, in Experiment 3 a knowledge test was implemented and performance in this task was associated with qualitative differences in search behavior such that participants who were able to name the color(s) most associated with the prompts were faster to find the target than participants who were unable to do so. However, in Experiments 1-3 diagnosticity of the memory prompt was manipulated via base rate information, making it possible that participants were merely relying on base rate information to inform search in Experiment 3. In Experiment 4 we manipulated diagnosticity of the prompts without manipulating base rate information and found a similar pattern of results as Experiment 3. Together, the results emphasize the importance of base rate and diagnosticity information in visual search behavior. 
In the General discussion section we explore how a recent computational model of hypothesis generation (HyGene; Thomas, Dougherty, Sprenger, & Harbison, 2008), linking attention with long-term and working memory, accounts for the present results and provides a useful framework of cued recall visual search. Copyright © 2013 Elsevier B.V. All rights reserved.
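The distinction the experiments turn on, base rate versus diagnosticity, can be made concrete by estimating both from trial data: a prompt is diagnostic only when P(color | prompt) departs from the unconditional base rate P(color). The function name and data shape below are illustrative assumptions, not the authors' analysis code:

```python
from collections import Counter

def prompt_diagnosticity(trials):
    """Estimate the base rate P(color) and the conditional P(color | prompt)
    from a list of (prompt, target_color) trial pairs."""
    n = len(trials)
    base = Counter(color for _, color in trials)
    base_rate = {color: count / n for color, count in base.items()}

    per_prompt = {}
    for prompt, color in trials:
        per_prompt.setdefault(prompt, Counter())[color] += 1
    posterior = {prompt: {color: count / sum(cnt.values())
                          for color, count in cnt.items()}
                 for prompt, cnt in per_prompt.items()}
    return base_rate, posterior
```

If every prompt yields the same conditional distribution as the base rate (as in Experiment 2), the prompts carry no information beyond what base rates already provide.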

  15. Does Spatial or Visual Information in Maps Facilitate Text Recall?: Reconsidering the Conjoint Retention Hypothesis

    ERIC Educational Resources Information Center

    Griffin, Marlynn M.; Robinson, Daniel H.; Sarama, Julie

    2005-01-01

    The conjoint retention hypothesis (CRH) claims that students recall more text information when they study geographic maps in addition to text than when they study text alone, because the maps are encoded spatially (Kulhavy, Lee, & Caterino, 1985). This claim was recently challenged by Griffin and Robinson (2000), who found no advantage for maps…

  16. Material and shape perception based on two types of intensity gradient information

    PubMed Central

    Nishida, Shin'ya

    2018-01-01

    Visual estimation of the material and shape of an object from a single image includes a hard ill-posed computational problem. However, in our daily life we feel we can estimate both reasonably well. The neural computation underlying this ability remains poorly understood. Here we propose that the human visual system uses different aspects of object images to separately estimate the contributions of the material and shape. Specifically, material perception relies mainly on the intensity gradient magnitude information, while shape perception relies mainly on the intensity gradient order information. A clue to this hypothesis was provided by the observation that luminance-histogram manipulation, which changes luminance gradient magnitudes but not the luminance-order map, effectively alters the material appearance but not the shape of an object. In agreement with this observation, we found that the simulated physical material changes do not significantly affect the intensity order information. A series of psychophysical experiments further indicate that human surface shape perception is robust against intensity manipulations provided they do not disturb the intensity order information. In addition, we show that the two types of gradient information can be utilized for the discrimination of albedo changes from highlights. These findings suggest that the visual system relies on these diagnostic image features to estimate physical properties in a distal world. PMID:29702644
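The key observation above can be illustrated directly: any monotonic tone remapping (a gamma curve stands in here for the paper's luminance-histogram manipulation) changes intensity gradient magnitudes while leaving the intensity-order map untouched. A minimal sketch under that assumption:

```python
import numpy as np

def order_map(image):
    """Rank of every pixel's intensity: the intensity-order information."""
    flat = image.ravel()
    ranks = flat.argsort().argsort()  # double argsort yields per-pixel ranks
    return ranks.reshape(image.shape)

def gradient_magnitude(image):
    """Finite-difference intensity gradient magnitude at every pixel."""
    gy, gx = np.gradient(image)
    return np.hypot(gx, gy)
```

Because a monotonic remap preserves every pairwise intensity ordering, `order_map` (the proposed shape cue) is invariant, while `gradient_magnitude` (the proposed material cue) changes, mirroring the finding that such manipulations alter material appearance but not perceived shape.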

  17. Visual Short-Term Memory Activity in Parietal Lobe Reflects Cognitive Processes beyond Attentional Selection.

    PubMed

    Sheremata, Summer L; Somers, David C; Shomstein, Sarah

    2018-02-07

    Visual short-term memory (VSTM) and attention are distinct yet interrelated processes. While both require selection of information across the visual field, memory additionally requires the maintenance of information across time and distraction. VSTM recruits areas within human (male and female) dorsal and ventral parietal cortex that are also implicated in spatial selection; therefore, it is important to determine whether overlapping activation might reflect shared attentional demands. Here, identical stimuli and controlled sustained attention across both tasks were used to ask whether fMRI signal amplitude, functional connectivity, and contralateral visual field bias reflect memory-specific task demands. While attention and VSTM activated similar cortical areas, BOLD amplitude and functional connectivity in parietal cortex differentiated the two tasks. Relative to attention, VSTM increased BOLD amplitude in dorsal parietal cortex and decreased BOLD amplitude in the angular gyrus. Additionally, the tasks differentially modulated parietal functional connectivity. Contrasting VSTM and attention, intraparietal sulcus (IPS) 1-2 were more strongly connected with anterior frontoparietal areas and more weakly connected with posterior regions. This divergence between tasks demonstrates that parietal activation reflects memory-specific functions and consequently modulates functional connectivity across the cortex. In contrast, both tasks demonstrated hemispheric asymmetries for spatial processing, exhibiting a stronger contralateral visual field bias in the left versus the right hemisphere across tasks, suggesting that asymmetries are characteristic of a shared selection process in IPS. These results demonstrate that parietal activity and patterns of functional connectivity distinguish VSTM from more general attention processes, establishing a central role of the parietal cortex in maintaining visual information. 
SIGNIFICANCE STATEMENT Visual short-term memory (VSTM) and attention are distinct yet interrelated processes. Cognitive mechanisms and neural activity underlying these tasks show a large degree of overlap. To examine whether activity within the posterior parietal cortex (PPC) reflects object maintenance across distraction or sustained attention per se, it is necessary to control for attentional demands inherent in VSTM tasks. We demonstrate that activity in PPC reflects VSTM demands even after controlling for attention; remembering items across distraction modulates relationships between parietal and other areas differently than during periods of sustained attention. Our study fills a gap in the literature by directly comparing and controlling for overlap between visual attention and VSTM tasks. Copyright © 2018 the authors 0270-6474/18/381511-09$15.00/0.

  18. Human microbiome visualization using 3D technology.

    PubMed

    Moore, Jason H; Lari, Richard Cowper Sal; Hill, Douglas; Hibberd, Patricia L; Madan, Juliette C

    2011-01-01

    High-throughput sequencing technology has opened the door to the study of the human microbiome and its relationship with health and disease. This is both an opportunity and a significant biocomputing challenge. We present here a 3D visualization methodology and freely-available software package for facilitating the exploration and analysis of high-dimensional human microbiome data. Our visualization approach harnesses the power of commercial video game development engines to provide an interactive medium in the form of a 3D heat map for exploration of microbial species and their relative abundance in different patients. The advantage of this approach is that the third dimension provides additional layers of information that cannot be visualized using a traditional 2D heat map. We demonstrate the usefulness of this visualization approach using microbiome data collected from a sample of premature babies with and without sepsis.
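The 3D-heat-map idea, using height as a layer of information beyond color, can be sketched with matplotlib rather than the paper's game-engine renderer; the function name and data shape are illustrative assumptions:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

def microbiome_3d_heatmap(abundance, patients, species):
    """Render a relative-abundance matrix (patients x species) as a 3D bar
    chart: bar height carries the value that a flat 2D heat map would
    encode only as color."""
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    xs, ys = np.meshgrid(np.arange(len(species)), np.arange(len(patients)))
    xs, ys = xs.ravel(), ys.ravel()
    heights = abundance.ravel()
    ax.bar3d(xs, ys, np.zeros_like(heights), 0.8, 0.8, heights, shade=True)
    ax.set_xticks(np.arange(len(species)))
    ax.set_xticklabels(species, fontsize=7)
    ax.set_yticks(np.arange(len(patients)))
    ax.set_yticklabels(patients, fontsize=7)
    ax.set_zlabel("relative abundance")
    return fig
```

A game-engine rendering additionally allows interactive fly-through of such a scene; the static sketch only captures the extra height dimension.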

  19. Psychological distress and visual functioning in relation to vision-related disability in older individuals with cataracts.

    PubMed

    Walker, J G; Anstey, K J; Lord, S R

    2006-05-01

To determine whether demographic, health status and psychological functioning measures, in addition to impaired visual acuity, are related to vision-related disability. Participants were 105 individuals (mean age=73.7 years) with cataracts requiring surgery and corrected visual acuity in the better eye of 6/24 to 6/36, recruited from waiting lists at three public out-patient ophthalmology clinics. Visual disability was measured with the Visual Functioning-14 survey. Visual acuity was assessed using better and worse eye logMAR scores and the Melbourne Edge Test (MET) for edge contrast sensitivity. Data relating to demographic information, depression, anxiety and stress, health care and medication use and numbers of co-morbid conditions were obtained. Principal component analysis revealed four meaningful factors that accounted for 75% of the variance in visual disability: recreational activities, reading and fine work, activities of daily living and driving behaviour. Multiple regression analyses determined that visual acuity variables were the only significant predictors of overall vision-related functioning and difficulties with reading and fine work. For the remaining visual disability domains, non-visual factors were also significant predictors. Difficulties with recreational activities were predicted by stress, as well as worse eye visual acuity, and difficulties with activities of daily living were associated with self-reported health status, age and depression as well as MET contrast scores. Driving behaviour was associated with sex (with fewer women driving), depression, anxiety and stress scores, and MET contrast scores. Vision-related disability is common in older individuals with cataracts. In addition to visual acuity, demographic, psychological and health status factors influence the severity of vision-related disability, affecting recreational activities, activities of daily living and driving.

  20. Multisensory guidance of orienting behavior.

    PubMed

    Maier, Joost X; Groh, Jennifer M

    2009-12-01

We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual system encode space exist throughout the auditory pathway. We will review these differences at the neural level, and will discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.
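In its simplest 1-D form, the head-centered to eye-centered transformation discussed above reduces to subtracting the current eye-in-head position from the sound direction. A deliberately simplified sketch (real transformations involve 3-D geometry and population codes):

```python
def head_to_eye_centered(sound_azimuth_deg, eye_azimuth_deg):
    """Map a head-centered sound azimuth into eye-centered coordinates by
    subtracting the current eye-in-head azimuth (1-D simplification).
    The result is the gaze shift needed to orient to the sound source."""
    return sound_azimuth_deg - eye_azimuth_deg
```

For example, a sound 20 degrees to the right of the head, with the eyes already deviated 15 degrees right, requires only a 5-degree gaze shift; this is why an eye-centered code is convenient for programming saccades to sounds.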

  1. Models of Information Aggregation Pertaining to Combat Identification: A Review of the Literature (Modele du Regroupement de L’information Concernant L’identification au Combat: Une Analyse Documentaire)

    DTIC Science & Technology

    2007-04-01

    communication from the gunner who is able to offer enhanced visual information about the entity (e.g., insignia, type of weaponry) or radio contact may...1999 (Fuzzy Logic); Clemen & Winkler, in press (Bayes Theorem); Sentz & Ferson, 2002 (Dempster-Shafer)). Humansystems® Combat Identification...incidents when other units get lost and appear in unexpected locations. The formation radios for additional information from the operations officer

  2. Quick Guide to Health Literacy and Older Adults

    MedlinePlus

... starter tips Resources Additional Resources Cultural Competence Health Literacy Older Adults Plain Language Visual Impairment Web sites Content last updated: October 30, 2007

  3. How visual timing and form information affect speech and non-speech processing.

    PubMed

    Kim, Jeesun; Davis, Chris

    2014-10-01

    Auditory speech processing is facilitated when the talker's face/head movements are seen. This effect is typically explained in terms of visual speech providing form and/or timing information. We determined the effect of both types of information on a speech/non-speech task (non-speech stimuli were spectrally rotated speech). All stimuli were presented paired with the talker's static or moving face. Two types of moving face stimuli were used: full-face versions (both spoken form and timing information available) and modified face versions (only timing information provided by peri-oral motion available). The results showed that the peri-oral timing information facilitated response time for speech and non-speech stimuli compared to a static face. An additional facilitatory effect was found for full-face versions compared to the timing condition; this effect only occurred for speech stimuli. We propose the timing effect was due to cross-modal phase resetting; the form effect to cross-modal priming. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Effects of audio-visual presentation of target words in word translation training

    NASA Astrophysics Data System (ADS)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2004-05-01

Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in orthographic form of one language, and the appropriate meaning in another language has to be chosen between two choices. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasted in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials, and presented in three conditions; visual-only (V), audio-only (A), and audio-visual (AV) presentations. Identification accuracy of those words produced by two talkers was also assessed. During pretest, the accuracy for A stimuli was lowest, implying that insufficient translation ability and listening ability interact with each other when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translate the words depending on visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine the effective L2 training method. [Work supported by TAO, Japan.]

  5. Caries experience and oral hygiene status of a group of visually impaired children in Istanbul, Turkey.

    PubMed

    Bekiroglu, Nural; Acar, Nihan; Kargul, Betul

    2012-01-01

To evaluate the caries experience, oral hygiene status and oral health knowledge of a group of visually impaired students. The study was conducted at one of the largest visually impaired children's schools among students aged between 7 and 16 years (n = 178) in Istanbul, Turkey. A 16-item questionnaire was administered in addition to a clinical tooth examination. The 16-item verbal questionnaire was developed to record the students' general health, impairment, the socioeconomic profile and education level of their parents, oral health knowledge, sources of information about oral health and oral hygiene habits. Oral hygiene was assessed according to Greene and Vermillion's Simplified Oral Hygiene Index (OHI-S). To measure the oral hygiene status, OHI-S index scores were recorded. Additionally, DMFT and dft indices were documented. Only 26.40% of children were caries free, and only 2.2% of students had good oral hygiene. A total of 3.3% of these students were mildly retarded and 2.8% of them had a developmental disability. Visually impaired children exhibited a fair-to-poor level of oral hygiene. Maintenance of oral hygiene remains the greatest challenge in the care of visually impaired children.

  6. Positive visualization of implanted devices with susceptibility gradient mapping using the original resolution.

    PubMed

    Varma, Gopal; Clough, Rachel E; Acher, Peter; Sénégas, Julien; Dahnke, Hannes; Keevil, Stephen F; Schaeffter, Tobias

    2011-05-01

    In magnetic resonance imaging, implantable devices are usually visualized with a negative contrast. Recently, positive contrast techniques have been proposed, such as susceptibility gradient mapping (SGM). However, SGM reduces the spatial resolution making positive visualization of small structures difficult. Here, a development of SGM using the original resolution (SUMO) is presented. For this, a filter is applied in k-space and the signal amplitude is analyzed in the image domain to determine quantitatively the susceptibility gradient for each pixel. It is shown in simulations and experiments that SUMO results in a better visualization of small structures in comparison to SGM. SUMO is applied to patient datasets for visualization of stent and prostate brachytherapy seeds. In addition, SUMO also provides quantitative information about the number of prostate brachytherapy seeds. The method might be extended to application for visualization of other interventional devices, and, like SGM, it might also be used to visualize magnetically labelled cells. Copyright © 2010 Wiley-Liss, Inc.

  7. Color-Space-Based Visual-MIMO for V2X Communication †

    PubMed Central

    Kim, Jai-Eun; Kim, Ji-Won; Park, Youngil; Kim, Ki-Doo

    2016-01-01

    In this paper, we analyze the applicability of color-space-based, color-independent visual-MIMO for V2X. We aim to achieve a visual-MIMO scheme that can maintain the original color and brightness while performing seamless communication. We consider two scenarios of GCM based visual-MIMO for V2X. One is a multipath transmission using visual-MIMO networking and the other is multi-node V2X communication. In the scenario of multipath transmission, we analyze the channel capacity numerically and we illustrate the significance of networking information such as distance, reference color (symbol), and multiplexing-diversity mode transitions. In addition, in the V2X scenario of multiple access, we may achieve the simultaneous multiple access communication without node interferences by dividing the communication area using image processing. Finally, through numerical simulation, we show the superior SER performance of the visual-MIMO scheme compared with LED-PD communication and show the numerical result of the GCM based visual-MIMO channel capacity versus distance. PMID:27120603

  9. Real-time new satellite product demonstration from microwave sensors and GOES-16 at NRL TC web

    NASA Astrophysics Data System (ADS)

    Cossuth, J.; Richardson, K.; Surratt, M. L.; Bankert, R.

    2017-12-01

    The Naval Research Laboratory (NRL) Tropical Cyclone (TC) satellite webpage (https://www.nrlmry.navy.mil/TC.html) provides demonstration analyses of storm imagery to benefit operational TC forecast centers around the world. With the availability of new spectral information provided by GOES-16 satellite data and recent research into improved visualization methods for microwave data, experimental imagery was operationally tested to visualize the structural changes of TCs during the 2017 hurricane season. This presentation provides an introduction to these innovative satellite analysis methods and NRL's next-generation satellite analysis system (the Geolocated Information Processing System, GeoIPS™), and demonstrates the added value of additional spectral frequencies when monitoring storms in near real time.

  10. Seeing is believing: information content and behavioural response to visual and chemical cues

    PubMed Central

    Gonzálvez, Francisco G.; Rodríguez-Gironés, Miguel A.

    2013-01-01

    Predator avoidance and foraging often pose conflicting demands. Animals can decrease mortality risk by searching for predators, but searching decreases foraging time and hence intake. We used this principle to investigate how prey should use information to detect, assess and respond to predation risk from an optimal foraging perspective. A mathematical model showed that solitary bees should increase flower examination time in response to predator cues and that the rate of false alarms should be negatively correlated with the relative value of the flower explored. The predatory ant, Oecophylla smaragdina, and the harmless ant, Polyrhachis dives, differ in the profile of volatiles they emit and in their visual appearance. As predicted, the solitary bee Nomia strigata spent more time examining virgin flowers in the presence of predator cues than in their absence. Furthermore, the proportion of flowers rejected decreased from morning to noon, as the relative value of virgin flowers increased. In addition, bees responded differently to visual and chemical cues. While chemical cues induced bees to search around flowers, bees detecting visual cues hovered in front of them. These strategies may allow prey to identify the nature of visual cues and to locate the source of chemical cues. PMID:23698013

  11. Development of visual working memory and distractor resistance in relation to academic performance.

    PubMed

    Tsubomi, Hiroyuki; Watanabe, Katsumi

    2017-02-01

    Visual working memory (VWM) enables active maintenance of goal-relevant visual information in a readily accessible state. The storage capacity of VWM is severely limited, often as few as 3 simple items. Thus, it is crucial to restrict distractor information from consuming VWM capacity. The current study investigated how VWM storage and distractor resistance develop during childhood in relation to academic performance in the classroom. Elementary school children (7- to 12-year-olds) and adults (total N = 140) completed a VWM task with and without visual/verbal distractors during the retention period. The results showed that VWM performance with and without distractors developed at similar rates until reaching adult levels at 10 years of age. In addition, higher VWM performance without distractors was associated with higher academic scores in literacy (reading and writing), mathematics, and science for the younger children (7- to 9-year-olds), whereas these academic scores for the older children (10- to 12-year-olds) were associated with VWM performance with visual distractors. Taken together, these results suggest that VWM storage and distractor resistance develop at a similar rate, whereas their contributions to academic performance differ with age. Copyright © 2016 Elsevier Inc. All rights reserved.
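    Capacity figures like the "3 simple items" cited above are conventionally estimated with Cowan's K formula for change-detection tasks. A minimal sketch (illustrative; not taken from this study's methods):

    ```python
    # Cowan's K: a standard visual working memory capacity estimate for
    # single-probe change detection. Illustrative only; the study above
    # does not state that it used this exact formula.
    def cowans_k(hit_rate, false_alarm_rate, set_size):
        """K = set_size * (hit rate - false alarm rate)."""
        return set_size * (hit_rate - false_alarm_rate)

    # e.g. 85% hits and 15% false alarms with 4 items -> K = 2.8 items
    print(round(cowans_k(0.85, 0.15, 4), 2))
    ```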

  12. A Rules-Based Service for Suggesting Visualizations to Analyze Earth Science Phenomena.

    NASA Astrophysics Data System (ADS)

    Prabhu, A.; Zednik, S.; Fox, P. A.; Ramachandran, R.; Maskey, M.; Shie, C. L.; Shen, S.

    2016-12-01

    Current Earth Science information systems lack support for new or interdisciplinary researchers, who may be unfamiliar with the domain vocabulary or the breadth of relevant data available. We need to evolve the current information systems to reduce the time required for data preparation, processing, and analysis. This can be done by effectively salvaging the "dark" resources in Earth Science. We assert that Earth science metadata assets are dark resources: information resources that organizations collect, process, and store for regular business or operational activities but fail to utilize for other purposes. In order to use these dark resources effectively, especially for data processing and visualization, we need a combination of domain, data product, and processing knowledge, i.e., a knowledge base from which specific data operations can be performed. In this presentation, we describe a semantic, rules-based service for suggesting visualizations of Earth Science phenomena, based on the data variables extracted using the "dark" metadata resources. We use Jena rules to make assertions about compatibility between a phenomenon and various visualizations based on multiple factors. We created separate orthogonal rulesets to map each of these factors to the various phenomena. Some of the factors we have considered include measurements, spatial resolution, and time intervals. This approach enables easy additions and deletions based on newly obtained domain knowledge or phenomena-related information, thus improving the accuracy of the rules service overall.
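    The orthogonal-ruleset idea can be sketched outside of Jena as well. In the toy below (all rule content, factor names, and visualization names are hypothetical, not from the paper), each ruleset independently proposes compatible visualizations and the service intersects their suggestions:

    ```python
    # Hypothetical orthogonal rulesets: each maps one metadata factor to a
    # set of compatible visualizations. The real system expresses these as
    # Jena rules over semantic metadata.
    RULESETS = {
        "measurement": lambda m: {"precipitation": {"map", "time_series"},
                                  "temperature": {"map", "contour"}}.get(m["measurement"], set()),
        "resolution": lambda m: {"map", "contour"} if m["spatial_res_km"] <= 50 else {"time_series"},
    }

    def suggest_visualizations(metadata):
        """Intersect the suggestions of every orthogonal ruleset."""
        suggestions = None
        for rule in RULESETS.values():
            result = rule(metadata)
            suggestions = result if suggestions is None else suggestions & result
        return suggestions

    # {"map", "time_series"} & {"map", "contour"} -> {"map"}
    print(suggest_visualizations({"measurement": "precipitation", "spatial_res_km": 25}))
    ```

    Because each factor lives in its own ruleset, a new domain insight changes one ruleset without touching the others, which is the maintainability property the abstract emphasizes.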

  13. Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults.

    PubMed

    Lee, Ahreum; Ryu, Hokyoung; Kim, Jae-Kwan; Jeong, Eunju

    2018-04-11

    Older adults are known to have lesser cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on the modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, standard deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance of visual/auditory identification (Uni-V, Uni-A) with that of visual/auditory identification in the presence of distraction in the counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicated that the loss of visual inhibitory control was beneficial for auditory target identification presented in a multimodal context in older adults. A likely multisensory information processing strategy in older adults was further discussed in relation to aged cognition.

  14. Task-technology fit of video telehealth for nurses in an outpatient clinic setting.

    PubMed

    Cady, Rhonda G; Finkelstein, Stanley M

    2014-07-01

    Incorporating telehealth into outpatient care delivery supports management of consumer health between clinic visits. Task-technology fit is a framework for understanding how technology helps and/or hinders a person during work processes. Evaluating the task-technology fit of video telehealth for personnel working in a pediatric outpatient clinic and providing care between clinic visits ensures the information provided matches the information needed to support work processes. The workflow of advanced practice registered nurse (APRN) care coordination provided via telephone and video telehealth was described and measured using a mixed-methods workflow analysis protocol that incorporated cognitive ethnography and time-motion study. Qualitative and quantitative results were merged and analyzed within the task-technology fit framework to determine the workflow fit of video telehealth for APRN care coordination. Incorporating video telehealth into APRN care coordination workflow provided visual information unavailable during telephone interactions. Despite additional tasks and interactions needed to obtain the visual information, APRN workflow efficiency, as measured by time, was not significantly changed. Analyzed within the task-technology fit framework, the increased visual information afforded by video telehealth supported the assessment and diagnostic information needs of the APRN. Telehealth must provide the right information to the right clinician at the right time. Evaluating task-technology fit using a mixed-methods protocol ensured rigorous analysis of fit within work processes and identified workflows that benefit most from the technology.

  15. A Novel Marking Reader for Progressive Addition Lenses Based on Gabor Holography.

    PubMed

    Perucho, Beatriz; Picazo-Bueno, José Angel; Micó, Vicente

    2016-05-01

    Progressive addition lenses (PALs) are marked with permanent engraved marks (PEMs) at standardized locations. Permanent engraved marks are very useful throughout the manufacturing and mounting processes, act as locator marks to re-ink the removable marks, and contain useful information about the PAL. However, PEMs are often faint and weak, obscured by scratches, partially occluded, and difficult to recognize on tinted lenses or with antireflection or scratch-resistant coatings. The aim of this article is to present a new generation of portable marking reader based on an extremely simplified concept for visualization and identification of PEMs in PALs. Permanent engraved marks on different PALs are visualized using classical Gabor holography as the underlying principle. Gabor holography allows phase sample visualization with adjustable magnification and can be implemented in either classical or digital versions. Here, visual Gabor holography is used to provide a magnified defocused image of the PEMs onto a translucent visualization screen where the PEM is clearly identified. Different types of PALs (conventional, personalized, old and scratched, sunglasses, etc.) have been tested to visualize PEMs with the proposed marking reader. The PEMs are visible in every case, and a variable magnification factor can be achieved simply by moving the PAL up and down in the instrument. In addition, a second illumination wavelength is also tested, showing the applicability of this novel marking reader for different illuminations. A new concept of marking reader ophthalmic instrument has been presented and validated in the laboratory. The configuration involves only a commercial-grade laser diode and a visualization screen for PEM identification. The instrument is portable, economical, and easy to use, and it can be used for identifying a patient's current PAL model and for marking removable PALs again or finding test points regardless of the age of the PAL, its scratches, tints, or coatings.

  16. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    NASA Astrophysics Data System (ADS)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.

  17. CyanoBase: the cyanobacteria genome database update 2010.

    PubMed

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2010-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats with other tools, seamlessly.

  18. Characterizing Provenance in Visualization and Data Analysis: An Organizational Framework of Provenance Types and Purposes.

    PubMed

    Ragan, Eric D; Endert, Alex; Sanyal, Jibonananda; Chen, Jian

    2016-01-01

    While the primary goal of visual analytics research is to improve the quality of insights and findings, a substantial amount of research in provenance has focused on the history of changes and advances throughout the analysis process. The term, provenance, has been used in a variety of ways to describe different types of records and histories related to visualization. The existing body of provenance research has grown to a point where the consolidation of design knowledge requires cross-referencing a variety of projects and studies spanning multiple domain areas. We present an organizational framework of the different types of provenance information and purposes for why they are desired in the field of visual analytics. Our organization is intended to serve as a framework to help researchers specify types of provenance and coordinate design knowledge across projects. We also discuss the relationships between these factors and the methods used to capture provenance information. In addition, our organization can be used to guide the selection of evaluation methodology and the comparison of study outcomes in provenance research.

  19. Making data matter: Voxel printing for the digital fabrication of data across scales and domains.

    PubMed

    Bader, Christoph; Kolb, Dominik; Weaver, James C; Sharma, Sunanda; Hosny, Ahmed; Costa, João; Oxman, Neri

    2018-05-01

    We present a multimaterial voxel-printing method that enables the physical visualization of data sets commonly associated with scientific imaging. Leveraging voxel-based control of multimaterial three-dimensional (3D) printing, our method enables additive manufacturing of discontinuous data types such as point cloud data, curve and graph data, image-based data, and volumetric data. By converting data sets into dithered material deposition descriptions, through modifications to rasterization processes, we demonstrate that data sets frequently visualized on screen can be converted into physical, materially heterogeneous objects. Our approach alleviates the need to postprocess data sets to boundary representations, preventing alteration of data and loss of information in the produced physicalizations. Therefore, it bridges the gap between digital information representation and physical material composition. We evaluate the visual characteristics and features of our method, assess its relevance and applicability in the production of physical visualizations, and detail the conversion of data sets for multimaterial 3D printing. We conclude with exemplary 3D-printed data sets produced by our method pointing toward potential applications across scales, disciplines, and problem domains.
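    The conversion step described above rests on dithering: continuous data values become a pattern of discrete material deposits whose local density tracks the data. A one-dimensional error-diffusion sketch of that idea (illustrative only; the paper's pipeline operates on multimaterial 3D voxel grids):

    ```python
    import numpy as np

    # 1-D error-diffusion dither: values in [0, 1] become a binary
    # deposit/no-deposit pattern whose local density follows the data,
    # because each quantization error is carried forward to the next voxel.
    def dither_row(values):
        out = np.zeros(len(values), dtype=int)
        err = 0.0
        for i, v in enumerate(values):
            v = v + err
            out[i] = 1 if v >= 0.5 else 0
            err = v - out[i]          # diffuse quantization error forward
        return out

    row = np.linspace(0.0, 1.0, 10)   # a smooth gradient of data values
    print(dither_row(row))
    ```

    Because the running error never exceeds half a voxel, the deposited density converges to the underlying data value over any sufficiently long run, which is what preserves information without converting the data to a boundary representation.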

  20. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria - a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; (3) and 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described.
This comparative study discusses the advantages and disadvantages of these three approaches and their integration in multiple domains, such as web-based 3D city modelling, tourism and architectural 3D visualization. It was concluded that image-based modelling and panoramic visualisation are simple, fast and effective techniques suitable for simultaneous virtual representation of many objects. However, additional measurements or CAD information will be beneficial for obtaining higher accuracy.

  1. The influence of attention, learning, and motivation on visual search.

    PubMed

    Dodd, Michael D; Flowers, John H

    2012-01-01

    The 59th Annual Nebraska Symposium on Motivation (The Influence of Attention, Learning, and Motivation on Visual Search) took place April 7-8, 2011, on the University of Nebraska-Lincoln campus. The symposium brought together leading scholars who conduct research related to visual search at a variety of levels for a series of talks, poster presentations, panel discussions, and numerous additional opportunities for intellectual exchange. The Symposium was also streamed online for the first time in the history of the event, allowing individuals from around the world to view the presentations and submit questions. The present volume is intended both to commemorate the event itself and to allow our speakers additional opportunity to address issues and current research that have since arisen. Each of the speakers (and, in some cases, their graduate students and postdocs) has provided a chapter which both summarizes and expands on their original presentation. In this chapter, we sought to a) provide additional context as to how the Symposium came to be, b) discuss why we thought this was an ideal time to organize a visual search symposium, and c) briefly address recent trends and potential future directions in the field. We hope you find the volume both enjoyable and informative, and we thank the authors who have contributed a series of engaging chapters.

  2. Additive effects of emotional content and spatial selective attention on electrocortical facilitation.

    PubMed

    Keil, Andreas; Moratti, Stephan; Sabatinelli, Dean; Bradley, Margaret M; Lang, Peter J

    2005-08-01

    Affectively arousing visual stimuli have been suggested to automatically attract attentional resources in order to optimize sensory processing. The present study crosses the factors of spatial selective attention and affective content, and examines the relationship between instructed (spatial) and automatic attention to affective stimuli. In addition to response times and error rate, electroencephalographic data from 129 electrodes were recorded during a covert spatial attention task. This task required silent counting of random-dot targets embedded in a 10 Hz flicker of colored pictures presented to both hemifields. Steady-state visual evoked potentials (ssVEPs) were obtained to determine amplitude and phase of electrocortical responses to pictures. An increase of ssVEP amplitude was observed as an additive function of spatial attention and emotional content. Statistical parametric mapping of this effect indicated occipito-temporal and parietal cortex activation contralateral to the attended visual hemifield in ssVEP amplitude modulation. This difference was most pronounced during selection of the left visual hemifield, at right temporal electrodes. In line with this finding, phase information revealed accelerated processing of aversive arousing, compared to affectively neutral pictures. The data suggest that affective stimulus properties modulate the spatiotemporal process along the ventral stream, encompassing amplitude amplification and timing changes of posterior and temporal cortex.
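    ssVEP amplitude at the driving frequency is conventionally read off the Fourier spectrum of the recorded signal. A minimal sketch of that measurement, assuming the 10 Hz flicker rate used in the study (the study's actual extraction and source-mapping pipeline is more involved):

    ```python
    import numpy as np

    # Estimate ssVEP amplitude at the flicker frequency from the
    # single-sided Fourier amplitude spectrum of one channel.
    def ssvep_amplitude(signal, fs, freq=10.0):
        n = len(signal)
        spectrum = np.abs(np.fft.rfft(signal)) / n * 2   # single-sided amplitude
        freqs = np.fft.rfftfreq(n, d=1 / fs)
        return spectrum[np.argmin(np.abs(freqs - freq))]

    fs = 500                                   # Hz, hypothetical sampling rate
    t = np.arange(0, 2, 1 / fs)                # 2 s epoch -> 0.5 Hz resolution
    eeg = 3.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(len(t))
    print(ssvep_amplitude(eeg, fs))            # close to the 3.0 driving amplitude
    ```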

  3. An object-mediated updating account of insensitivity to transsaccadic change

    PubMed Central

    Tas, A. Caglar; Moore, Cathleen M.; Hollingworth, Andrew

    2012-01-01

    Recent evidence has suggested that relatively precise information about the location and visual form of a saccade target object is retained across a saccade. However, this information appears to be available for report only when the target is removed briefly, so that the display is blank when the eyes land. We hypothesized that the availability of precise target information is dependent on whether a post-saccade object is mapped to the same object representation established for the presaccade target. If so, then the post-saccade features of the target overwrite the presaccade features, a process of object-mediated updating in which visual masking is governed by object continuity. In two experiments, participants' sensitivity to the spatial displacement of a saccade target was improved when that object changed surface feature properties across the saccade, consistent with the prediction of the object-mediated updating account. Transsaccadic perception appears to depend on a mechanism of object-based masking that is observed across multiple domains of vision. In addition, the results demonstrate that surface-feature continuity contributes to visual stability across saccades. PMID:23092946

  4. I see/hear what you mean: semantic activation in visual word recognition depends on perceptual attention.

    PubMed

    Connell, Louise; Lynott, Dermot

    2014-04-01

    How does the meaning of a word affect how quickly we can recognize it? Accounts of visual word recognition allow semantic information to facilitate performance but have neglected the role of modality-specific perceptual attention in activating meaning. We predicted that modality-specific semantic information would differentially facilitate lexical decision and reading aloud, depending on how perceptual attention is implicitly directed by each task. Large-scale regression analyses showed the perceptual modalities involved in representing a word's referent concept influence how easily that word is recognized. Both lexical decision and reading-aloud tasks direct attention toward vision, and are faster and more accurate for strongly visual words. Reading aloud additionally directs attention toward audition and is faster and more accurate for strongly auditory words. Furthermore, the overall semantic effects are as large for reading aloud as lexical decision and are separable from age-of-acquisition effects. These findings suggest that implicitly directing perceptual attention toward a particular modality facilitates representing modality-specific perceptual information in the meaning of a word, which in turn contributes to the lexical decision or reading-aloud response.

  5. Verbal-spatial and visuospatial coding of power-space interactions.

    PubMed

    Dai, Qiang; Zhu, Lei

    2018-05-10

    A power-space interaction, which denotes the phenomenon that people respond faster to powerful words when they are placed higher in a visual field and faster to powerless words when they are placed lower in a visual field, has been repeatedly found. The dominant explanation of this power-space interaction is that it results from a tight correspondence between the representation of power and visual space (i.e., a visuospatial coding account). In the present study, we demonstrated that the interaction between power and space could also be based on verbal-spatial coding in the absence of any vertical spatial information. Additionally, verbal-spatial coding was dominant in driving the power-space interaction when verbal space was contrasted with visual space. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Matisse: A Visual Analytics System for Exploring Emotion Trends in Social Media Text Streams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Drouhard, Margaret MEG G; Beaver, Justin M

    Dynamically mining textual information streams to gain real-time situational awareness is especially challenging with social media systems where throughput and velocity properties push the limits of a static analytical approach. In this paper, we describe an interactive visual analytics system, called Matisse, that aids with the discovery and investigation of trends in streaming text. Matisse addresses the challenges inherent to text stream mining through the following technical contributions: (1) robust stream data management, (2) automated sentiment/emotion analytics, (3) interactive coordinated visualizations, and (4) a flexible drill-down interaction scheme that accesses multiple levels of detail. In addition to positive/negative sentiment prediction, Matisse provides fine-grained emotion classification based on Valence, Arousal, and Dominance dimensions and a novel machine learning process. Information from the sentiment/emotion analytics are fused with raw data and summary information to feed temporal, geospatial, term frequency, and scatterplot visualizations using a multi-scale, coordinated interaction model. After describing these techniques, we conclude with a practical case study focused on analyzing the Twitter sample stream during the week of the 2013 Boston Marathon bombings. The case study demonstrates the effectiveness of Matisse at providing guided situational awareness of significant trends in social media streams by orchestrating computational power and human cognition.

  7. Assessing Local Model Adequacy in Bayesian Hierarchical Models Using the Partitioned Deviance Information Criterion

    PubMed Central

    Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.

    2010-01-01

    Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
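    For reference, the global DIC combines the posterior mean deviance with an effective-number-of-parameters penalty; the paper's contribution is to partition these quantities into per-observation local terms. A generic sketch of the global quantity from MCMC output (standard DIC, not the paper's partitioned variant):

    ```python
    import numpy as np

    # DIC = Dbar + pD, where Dbar is the posterior mean deviance and
    # pD = Dbar - D(theta_hat) is the effective number of parameters
    # (deviance evaluated at the posterior mean of the parameters).
    def dic(deviance_samples, deviance_at_posterior_mean):
        d_bar = np.mean(deviance_samples)
        p_d = d_bar - deviance_at_posterior_mean
        return d_bar + p_d, p_d

    dev = np.array([102.0, 98.0, 101.0, 99.0])   # D(theta) over MCMC draws
    value, p_d = dic(dev, 97.0)
    print(value, p_d)   # Dbar = 100, pD = 3 -> DIC = 103
    ```

    The local DIC in the paper applies the same decomposition observation-by-observation, which is what makes spatial maps of model fit possible.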

  8. Mechanoluminescence assisting agile optimization of processing design on surgical epiphysis plates

    NASA Astrophysics Data System (ADS)

    Terasaki, Nao; Toyomasu, Takashi; Sonohata, Motoki

    2018-04-01

    We propose a novel method for agile optimization of processing design through visualization of mechanoluminescence. To demonstrate the effect of the new method, epiphysis plates were processed to form dots (diameters: 1 and 1.5 mm) and the mechanical information was evaluated. As a result, the appearance of new strain concentrations was successfully visualized on the basis of mechanoluminescence, and the complex mechanical information was intuitively understood by the surgeons acting as designers. In addition, mechanoluminescence analysis clarified that small dots do not have serious mechanical effects such as strength reduction. Such detailed mechanical information, evaluated on the basis of mechanoluminescence, was successfully applied to judging the validity of the processing design. This clearly proves the effectiveness of the new mechanoluminescence-based methodology for assisting agile optimization of processing design.

  9. Evaluation of angiogram visualization methods for fast and reliable aneurysm diagnosis

    NASA Astrophysics Data System (ADS)

    Lesar, Žiga; Bohak, Ciril; Marolt, Matija

    2015-03-01

    In this paper we present the results of an evaluation of different visualization methods for angiogram volumetric data: ray casting, marching cubes, and multi-level partition of unity implicits. Several options are available with ray casting: isosurface extraction, maximum intensity projection, and alpha compositing, each producing fundamentally different results. Different visualization methods are suitable for different needs, so this choice is crucial in diagnosis and decision-making processes. We also evaluate visual effects such as ambient occlusion, screen-space ambient occlusion, and depth of field. Some visualization methods include transparency, so we address the relevance of this additional visual information. We employ transfer functions to map data values to color and transparency, allowing us to show or hide particular tissues. All the methods presented in this paper were developed using OpenCL, striving for real-time rendering and high-quality interaction. An evaluation was conducted to assess the suitability of the visualization methods. Results show the superiority of isosurface extraction with ambient occlusion effects. Visual effects may positively or negatively affect the perception of depth, motion, and relative position in space.
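Two of the building blocks this record mentions, a transfer function mapping data values to color and opacity, and alpha compositing along a ray, can be sketched in a few lines. This is our own NumPy illustration of the underlying math (the paper's renderer is OpenCL); the specific color ramp is invented.

```python
import numpy as np

def transfer_function(v):
    """Map scalar intensities in [0, 1] to RGBA; opacity grows with intensity
    (an invented ramp, standing in for a user-designed transfer function)."""
    r = np.clip(3 * v - 1, 0, 1)
    g = np.clip(2 * v - 0.5, 0, 1)
    b = np.clip(1 - 2 * v, 0, 1)
    a = np.clip(v ** 2, 0, 1)
    return np.stack([r, g, b, a], axis=-1)

def composite_ray(samples):
    """Front-to-back alpha compositing of RGBA samples along one ray."""
    color = np.zeros(3)
    alpha = 0.0
    for r, g, b, a in samples:
        color += (1 - alpha) * a * np.array([r, g, b])
        alpha += (1 - alpha) * a
        if alpha > 0.99:            # early ray termination
            break
    return color, alpha

# Sixteen samples along a ray through tissue of increasing density.
samples = transfer_function(np.linspace(0.2, 0.9, 16))
color, alpha = composite_ray(samples)
```

Setting a value's opacity to zero in the transfer function is exactly how a tissue is hidden from the rendered view.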

  10. Theta coupling between V4 and prefrontal cortex predicts visual short-term memory performance.

    PubMed

    Liebe, Stefanie; Hoerzer, Gregor M; Logothetis, Nikos K; Rainer, Gregor

    2012-01-29

    Short-term memory requires communication between multiple brain regions that collectively mediate the encoding and maintenance of sensory information. It has been suggested that oscillatory synchronization underlies intercortical communication. Yet, whether and how distant cortical areas cooperate during visual memory remains elusive. We examined neural interactions between visual area V4 and the lateral prefrontal cortex using simultaneous local field potential (LFP) recordings and single-unit activity (SUA) in monkeys performing a visual short-term memory task. During the memory period, we observed enhanced between-area phase synchronization in theta frequencies (3-9 Hz) of LFPs together with elevated phase locking of SUA to theta oscillations across regions. In addition, we found that the strength of intercortical locking was predictive of the animals' behavioral performance. This suggests that theta-band synchronization coordinates action potential communication between V4 and prefrontal cortex that may contribute to the maintenance of visual short-term memories.
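The between-area phase synchronization this record measures is commonly quantified with a phase-locking value (PLV). Below is a minimal sketch on synthetic 6 Hz signals, not the paper's LFP recordings; `analytic_signal` is our own FFT-based stand-in for a Hilbert transform.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT construction of the Hilbert transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value: 1 for a constant phase lag, near 0 for unrelated phases."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.arange(0, 2, 1 / 500)               # 2 s at 500 Hz
osc = np.sin(2 * np.pi * 6 * t)            # 6 Hz "theta" oscillation
locked = np.sin(2 * np.pi * 6 * t + 0.8)   # same frequency, constant phase lag
noise = np.random.default_rng(1).standard_normal(len(t))
```

In practice the signals would first be band-pass filtered to the theta range (3-9 Hz); a constant lag still yields a PLV near 1, which is why the measure captures synchronization rather than zero-lag identity.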

  11. Curvature and the visual perception of shape: theory on information along object boundaries and the minima rule revisited.

    PubMed

    Lim, Ik Soo; Leek, E Charles

    2012-07-01

    Previous empirical studies have shown that information along visual contours is concentrated in regions of high curvature magnitude and that, for closed contours, segments of negative curvature (i.e., concave segments) carry greater perceptual relevance than corresponding regions of positive curvature (i.e., convex segments). Feldman and Singh (2005, Psychological Review, 112, 243-252) proposed a mathematical derivation to yield information content as a function of curvature along a contour. Here, we highlight several fundamental errors in their derivation and its associated implementation, which are problematic in both mathematical and psychological terms. We instead propose an alternative mathematical formulation for the information carried by contour curvature that addresses these issues. Additionally, unlike previous work, we extend this approach to three-dimensional (3D) shape by providing a formal measure of information content for surface curvature, and we outline a modified version of the minima rule for part segmentation using curvature in 3D shape. Copyright 2012 APA, all rights reserved.

  12. People-oriented Information Visualization Design

    NASA Astrophysics Data System (ADS)

    Chen, Zhiyong; Zhang, Bolun

    2018-04-01

    In the rapidly developing 21st century, with the continuous progress of science and technology, human society has entered the era of information and big data, and lifestyles and aesthetic systems have changed accordingly; as a result, the emerging field of information visualization is increasingly popular. Information visualization design is the process of visualizing all kinds of complex information data so that information can be absorbed quickly, saving time and cost. Along with the development of information visualization, information design has also attracted growing attention, and emotional, people-oriented design is an indispensable part of it. This paper probes information visualization design through an emotional analysis of information design, based on the social context of people-oriented experience and approached from the perspective of art design. Drawing on the three levels of emotional information design (the instinctive, behavioral, and reflective levels), it explores and discusses information visualization design.

  13. Helioviewer: A Web 2.0 Tool for Visualizing Heterogeneous Heliophysics Data

    NASA Astrophysics Data System (ADS)

    Hughitt, V. K.; Ireland, J.; Lynch, M. J.; Schmeidel, P.; Dimitoglou, G.; Müeller, D.; Fleck, B.

    2008-12-01

    Solar physics datasets are becoming larger, richer, more numerous and more distributed. Feature/event catalogs (describing objects of interest in the original data) are becoming important tools in navigating these data. In the wake of this increasing influx of data and catalogs there has been a growing need for highly sophisticated tools for accessing and visualizing this wealth of information. Helioviewer is a novel tool for integrating and visualizing disparate sources of solar and Heliophysics data. Taking advantage of the newly available power of modern web application frameworks, Helioviewer merges image and feature catalog data, and provides for Heliophysics data a familiar interface not unlike Google Maps or MapQuest. In addition to streamlining the process of combining heterogeneous Heliophysics datatypes such as full-disk images and coronagraphs, the inclusion of visual representations of automated and human-annotated features provides the user with an integrated and intuitive view of how different factors may be interacting on the Sun. Currently, Helioviewer offers images from The Extreme ultraviolet Imaging Telescope (EIT), The Large Angle and Spectrometric COronagraph experiment (LASCO) and the Michelson Doppler Imager (MDI) instruments onboard The Solar and Heliospheric Observatory (SOHO), as well as The Transition Region and Coronal Explorer (TRACE). Helioviewer also incorporates feature/event information from the LASCO CME List, NOAA Active Regions, CACTus CME and Type II Radio Bursts feature/event catalogs. The project is undergoing continuous development with many more data sources and additional functionality planned for the near future.

  14. Audio-visual presentation of information for informed consent for participation in clinical trials.

    PubMed

    Ryan, R E; Prictor, M J; McLaughlin, K J; Hill, S J

    2008-01-23

    Informed consent is a critical component of clinical research. Different methods of presenting information to potential participants of clinical trials may improve the informed consent process. Audio-visual interventions (presented for example on the Internet, DVD, or video cassette) are one such method. To assess the effects of providing audio-visual information alone, or in conjunction with standard forms of information provision, to potential clinical trial participants in the informed consent process, in terms of their satisfaction, understanding and recall of information about the study, level of anxiety and their decision whether or not to participate. We searched: the Cochrane Consumers and Communication Review Group Specialised Register (searched 20 June 2006); the Cochrane Central Register of Controlled Trials (CENTRAL), The Cochrane Library, issue 2, 2006; MEDLINE (Ovid) (1966 to June week 1 2006); EMBASE (Ovid) (1988 to 2006 week 24); and other databases. We also searched reference lists of included studies and relevant review articles, and contacted study authors and experts. There were no language restrictions. Randomised and quasi-randomised controlled trials comparing audio-visual information alone, or in conjunction with standard forms of information provision (such as written or oral information as usually employed in the particular service setting), with standard forms of information provision alone, in the informed consent process for clinical trials. Trials involved individuals or their guardians asked to participate in a real (not hypothetical) clinical study. Two authors independently assessed studies for inclusion and extracted data. Due to heterogeneity no meta-analysis was possible; we present the findings in a narrative review. We included 4 trials involving data from 511 people. Studies were set in the USA and Canada. Three were randomised controlled trials (RCTs) and the fourth a quasi-randomised trial. 
Their quality was mixed and results should be interpreted with caution. Considerable uncertainty remains about the effects of audio-visual interventions, compared with standard forms of information provision (such as written or oral information normally used in the particular setting), for use in the process of obtaining informed consent for clinical trials. Audio-visual interventions did not consistently increase participants' levels of knowledge/understanding (assessed in four studies), although one study showed better retention of knowledge amongst intervention recipients. An audio-visual intervention may transiently increase people's willingness to participate in trials (one study), but this was not sustained at two to four weeks post-intervention. Perceived worth of the trial did not appear to be influenced by an audio-visual intervention (one study), but another study suggested that the quality of information disclosed may be enhanced by an audio-visual intervention. Many relevant outcomes including harms were not measured. The heterogeneity in results may reflect the differences in intervention design, content and delivery, the populations studied and the diverse methods of outcome assessment in included studies. The value of audio-visual interventions for people considering participating in clinical trials remains unclear. Evidence is mixed as to whether audio-visual interventions enhance people's knowledge of the trial they are considering entering, and/or the health condition the trial is designed to address; one study showed improved retention of knowledge amongst intervention recipients. The intervention may also have small positive effects on the quality of information disclosed, and may increase willingness to participate in the short-term; however the evidence is weak. There were no data for several primary outcomes, including harms. 
In the absence of clear results, triallists should continue to explore innovative methods of providing information to potential trial participants. Further research should take the form of high-quality randomised controlled trials, with clear reporting of methods. Studies should conduct content assessment of audio-visual and other innovative interventions for people of differing levels of understanding and education; also for different age and cultural groups. Researchers should assess systematically the effects of different intervention components and delivery characteristics, and should involve consumers in intervention development. Studies should assess additional outcomes relevant to individuals' decisional capacity, using validated tools, including satisfaction; anxiety; and adherence to the subsequent trial protocol.

  15. Visual Network Asymmetry and Default Mode Network Function in ADHD: An fMRI Study

    PubMed Central

    Hale, T. Sigi; Kane, Andrea M.; Kaminsky, Olivia; Tung, Kelly L.; Wiley, Joshua F.; McGough, James J.; Loo, Sandra K.; Kaplan, Jonas T.

    2014-01-01

    Background: A growing body of research has identified abnormal visual information processing in attention-deficit hyperactivity disorder (ADHD). In particular, slow processing speed and increased reliance on visuo-perceptual strategies have become evident. Objective: The current study used recently developed fMRI methods to replicate and further examine abnormal rightward-biased visual information processing in ADHD and to further characterize the nature of this effect; we tested its association with several large-scale distributed network systems. Method: We examined fMRI BOLD response during letter and location judgment tasks, and directly assessed visual network asymmetry (VNA) and its association with large-scale networks using both a voxelwise and an averaged-signal approach. Results: Initial within-group analyses revealed a pattern of left-lateralized visual cortical activity in controls but right-lateralized visual cortical activity in ADHD children. Direct analyses of VNA confirmed an atypical rightward bias in ADHD children compared to controls. This ADHD characteristic was atypically associated with reduced activation across several extra-visual networks, including the default mode network (DMN). We also found atypical associations between DMN activation and ADHD subjects' inattentive symptoms and task performance. Conclusion: The current study demonstrated rightward VNA in ADHD during a simple letter discrimination task. This result adds an important novel consideration to the growing literature identifying abnormal visual processing in ADHD. We postulate that this characteristic reflects greater perceptual engagement of task-extraneous content, and that it may be a basic feature of less efficient top-down, task-directed control over visual processing. We additionally argue that abnormal DMN function may contribute to this characteristic. PMID:25076915

  16. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

    Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention: a facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants' accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants' performance in the congruent condition was modulated by their tone localisation accuracy. 
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  17. Neuropsychological findings associated with Panayiotopoulos syndrome in three children.

    PubMed

    Hodges, Samantha L; Gabriel, Marsha T; Perry, M Scott

    2016-01-01

    Panayiotopoulos syndrome is a common idiopathic benign epilepsy with a peak age of onset in early childhood. The syndrome is multifocal and shows significant electroencephalogram (EEG) variability, with occipital predominance. Although a benign syndrome often implies the absence of neurological and neuropsychological deficits, the syndrome has recently been associated with cognitive impairments. Also, despite frequent occipital EEG abnormalities, research regarding the visual functioning of these patients is limited and often contradictory. The purpose of this study was to gain additional knowledge regarding the neurocognitive functioning of patients with Panayiotopoulos syndrome and specifically to address any visual processing deficits associated with the syndrome. Following diagnosis of the syndrome based on typical clinical and electrophysiological criteria, three patients, aged 5, 8, and 10 years, were referred by epileptologists for neuropsychological evaluation. Neuropsychological findings suggest that the patients had notable impairments on visual memory tasks, especially in comparison with verbal memory. Further, they demonstrated increased difficulty on picture memory, suggesting difficulty retaining information from a crowded visual field. Two of the three patients showed weakness in visual processing speed, which may account for weaker retention of complex visual stimuli. Abilities involving attention were normal for all patients, suggesting that inattention is not responsible for these visual deficits. Academically, the patients were weak in numerical operations and spelling, which both rely partially on visual memory and may affect achievement in these areas. Overall, the results suggest that patients with Panayiotopoulos syndrome may have visual processing and visual memory problems that could potentially affect their academic capabilities. 
Identifying such difficulties may be helpful in creating educational and remedial assistance programs for children with this syndrome, as well as developing appropriate presentation of information to these children in school. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Performance evaluation of a kinesthetic-tactual display

    NASA Technical Reports Server (NTRS)

    Jagacinski, R. J.; Flach, J. M.; Gilson, R. D.; Dunn, R. S.

    1982-01-01

    Simulator studies demonstrated the feasibility of using kinesthetic-tactual (KT) displays for providing collective and cyclic command information, and suggested that KT displays may increase pilots' workload capacity. A dual-axis laboratory tracking task suggested that, beyond the reduction in visual scanning, there may be additional sensory or cognitive benefits to the use of multiple sensory modalities. Single-axis laboratory tracking tasks revealed performance with a quickened KT display to be equivalent to performance with a quickened visual display for a low-frequency sum-of-sinewaves input. In contrast, an unquickened KT display was inferior to an unquickened visual display. Full-scale simulator studies and/or in-flight testing are recommended to determine the generality of these results.

  19. Altered Connectivity of the Balance Processing Network After Tongue Stimulation in Balance-Impaired Individuals

    PubMed Central

    Tyler, Mitchell E.; Danilov, Yuri P.; Kaczmarek, Kurt A.; Meyerand, Mary E.

    2013-01-01

    Abstract Some individuals with balance impairment have hypersensitivity of the motion-sensitive visual cortices (hMT+) compared to healthy controls. Previous work showed that electrical tongue stimulation can reduce the exaggerated postural sway induced by optic flow in this subject population and decrease the hypersensitive response of hMT+. Additionally, a region within the brainstem (BS), likely containing the vestibular and trigeminal nuclei, showed increased optic flow-induced activity after tongue stimulation. The aim of this study was to understand how the modulation induced by tongue stimulation affects the balance-processing network as a whole and how modulation of BS structures can influence cortical activity. Four volumes of interest, discovered in a general linear model analysis, constitute major contributors to the balance-processing network. These regions were entered into a dynamic causal modeling analysis to map the network and measure any connection or topology changes due to the stimulation. Balance-impaired individuals had downregulated response of the primary visual cortex (V1) to visual stimuli but upregulated modulation of the connection between V1 and hMT+ by visual motion compared to healthy controls (p≤1E–5). This upregulation was decreased to near-normal levels after stimulation. Additionally, the region within the BS showed increased response to visual motion after stimulation compared to both prestimulation and controls. Stimulation to the tongue enters the central nervous system at the BS but likely propagates to the cortex through supramodal information transfer. We present a model to explain these brain responses that utilizes an anatomically present, but functionally dormant pathway of information flow within the processing network. PMID:23216162

  20. Evaluation of the 3d Urban Modelling Capabilities in Geographical Information Systems

    NASA Astrophysics Data System (ADS)

    Dogru, A. O.; Seker, D. Z.

    2010-12-01

    Geographical Information System (GIS) technology, which provides successful solutions to basic spatial problems, is now widely used, with its developing visualization tools, in three-dimensional (3D) modeling of physical reality. Modeling large and complicated phenomena is a challenging problem for the computer graphics currently in use; however, it is possible to visualize such phenomena in 3D using computer systems. 3D models are used in computer game development, military training, urban planning, tourism, etc. The use of 3D models for the planning and management of urban areas is a very popular issue for city administrations. In this context, 3D city models are produced and used for various purposes; however, the requirements of the models vary depending on the type and scope of the application. While high-level visualization, in which photorealistic techniques are widely used, is required for touristic and recreational purposes, an abstract visualization of physical reality is generally sufficient for communicating thematic information. The visual variables that are the principal components of cartographic visualization, such as color, shape, pattern, orientation, size, position, and saturation, are used for communicating the thematic information. These kinds of 3D city models are called abstract models. Standardization of the technologies used for 3D modeling is now available through CityGML. CityGML implements several novel concepts to support interoperability, consistency, and functionality. For example, it supports different Levels-of-Detail (LoD), which may arise from independent data collection processes and are used for efficient visualization and efficient data analysis. In one CityGML data set, the same object may be represented at different LoDs simultaneously, enabling the analysis and visualization of the same object at different degrees of resolution. 
Furthermore, two CityGML data sets containing the same object at different LoDs may be combined and integrated. In this study, GIS tools used for 3D modeling were examined; in particular, the availability of GIS tools for obtaining the different LoDs of the CityGML standard was evaluated. Additionally, a 3D GIS application covering a small part of the city of Istanbul was implemented to communicate thematic information, rather than photorealistic visualization, by means of a 3D model. An abstract model was created using the modeling tools of a commercial GIS software package, and the results of the implementation are also presented in the study.
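The multiple-LoD idea described above, one city object carrying several representations at once, with the application choosing the appropriate resolution, can be sketched as plain data. The dictionary layout and names below are our own illustration, not the CityGML encoding itself.

```python
# A toy city object carrying three Levels-of-Detail simultaneously.
# (Illustrative names; CityGML encodes this in XML with lod1Solid,
# lod2Solid, etc. rather than a Python dict.)
building = {
    "id": "BLDG_0815",
    "lod": {
        1: "block model (extruded footprint)",
        2: "roof structures added",
        4: "interior rooms modelled",
    },
}

def best_representation(obj, requested_lod):
    """Pick the finest available representation not exceeding the requested LoD."""
    available = [lod for lod in obj["lod"] if lod <= requested_lod]
    return obj["lod"][max(available)] if available else None
```

A viewer aiming at LoD 3 would fall back to the LoD 2 geometry here, which is how data sets collected at different resolutions can still be combined and rendered together.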

  1. The influence of selective attention to auditory and visual speech on the integration of audiovisual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2011-01-01

    Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.

  2. CyanoBase: the cyanobacteria genome database update 2010

    PubMed Central

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2010-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and visual display of the data views. In addition, a web service API now enables users to retrieve the data in various formats seamlessly with other tools. PMID:19880388

  3. Design and Development of a Linked Open Data-Based Health Information Representation and Visualization System: Potentials and Preliminary Evaluation

    PubMed Central

    Kauppinen, Tomi; Keßler, Carsten; Fritz, Fleur

    2014-01-01

    Background Healthcare organizations around the world are challenged by pressures to reduce costs, improve coordination and outcomes, and provide more with less. This requires effective planning and evidence-based practice, generating important information from available data. Thus, flexible and user-friendly ways to represent, query, and visualize health data become increasingly important. International organizations such as the World Health Organization (WHO) regularly publish vital data on priority health topics that can be utilized for public health policy and health service development. However, the data in most portals are displayed in either Excel or PDF formats, which makes information discovery and reuse difficult. Linked Open Data (LOD), a new set of Semantic Web best-practice standards for publishing and linking heterogeneous data, can be applied to the representation and management of public-level health data to alleviate such challenges. However, the technologies behind building LOD systems and their effectiveness for health data are yet to be assessed. Objective The objective of this study is to evaluate whether Linked Data technologies are potential options for developing health information representation, visualization, and retrieval systems, and to identify the available tools and methodologies for building Linked Data-based health information systems. Methods We used the Resource Description Framework (RDF) for data representation, the Fuseki triple store for data storage, and Sgvizler for information visualization. Additionally, we integrated a SPARQL query interface for interacting with the data. We primarily used the WHO health observatory dataset to test the system. All the data were represented using RDF and interlinked with other related datasets on the Web of Data using Silk, a link discovery framework for the Web of Data. A preliminary usability assessment was conducted following the System Usability Scale (SUS) method. 
Results We developed an LOD-based health information representation, querying, and visualization system by using Linked Data tools. We imported more than 20,000 HIV-related data elements on mortality, prevalence, incidence, and related variables, which are freely available from the WHO global health observatory database. Additionally, we automatically linked 5312 data elements from DBpedia, Bio2RDF, and LinkedCT using the Silk framework. The system users can retrieve and visualize health information according to their interests. For users who are not familiar with SPARQL queries, we integrated a Linked Data search engine interface to search and browse the data. We used the system to represent and store the data, facilitating flexible queries and different kinds of visualizations. The preliminary user evaluation score by public health data managers and users was 82 on the SUS usability measurement scale. The need to write queries in the interface was the main reported difficulty of LOD-based systems to the end user. Conclusions The system introduced in this article shows that current LOD technologies are a promising alternative to represent heterogeneous health data in a flexible and reusable manner so that they can serve intelligent queries, and ultimately support decision-making. However, the development of advanced text-based search engines is necessary to increase its usability especially for nontechnical users. Further research with large datasets is recommended in the future to unfold the potential of Linked Data and Semantic Web for future health information systems development. PMID:25601195
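The RDF triple model and SPARQL-style retrieval underlying this system can be illustrated without the stack itself. Below is a stdlib-only sketch: observations as (subject, predicate, object) triples and a tiny pattern match standing in for a SPARQL SELECT. The data values are invented, not from the WHO observatory, and the function is our analogue, not the system's API.

```python
# Toy triples in the spirit of RDF statements (invented example data).
triples = [
    ("obs1", "indicator", "HIV prevalence"),
    ("obs1", "country", "Rwanda"),
    ("obs1", "value", 3.1),
    ("obs2", "indicator", "HIV prevalence"),
    ("obs2", "country", "Uganda"),
    ("obs2", "value", 5.8),
]

def select(triples, indicator):
    """Analogue of: SELECT ?country ?value WHERE
    { ?obs :indicator <x> ; :country ?country ; :value ?value . }"""
    subjects = {s for s, p, o in triples if p == "indicator" and o == indicator}
    rows = []
    for s in subjects:
        country = next(o for s2, p, o in triples if s2 == s and p == "country")
        value = next(o for s2, p, o in triples if s2 == s and p == "value")
        rows.append((country, value))
    return sorted(rows)

rows = select(triples, "HIV prevalence")
```

In the actual system the triples live in a Fuseki store and the query is real SPARQL; the point of the triple model is that new predicates (or links to DBpedia and Bio2RDF resources) can be added without changing any schema.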

  4. Design and development of a linked open data-based health information representation and visualization system: potentials and preliminary evaluation.

    PubMed

    Tilahun, Binyam; Kauppinen, Tomi; Keßler, Carsten; Fritz, Fleur

    2014-10-25

    Healthcare organizations around the world are challenged by pressures to reduce costs, improve coordination and outcomes, and provide more with less. This requires effective planning and evidence-based practice, generating important information from available data. Thus, flexible and user-friendly ways to represent, query, and visualize health data become increasingly important. International organizations such as the World Health Organization (WHO) regularly publish vital data on priority health topics that can be utilized for public health policy and health service development. However, the data in most portals are displayed in either Excel or PDF formats, which makes information discovery and reuse difficult. Linked Open Data (LOD) - a new set of Semantic Web best-practice standards for publishing and linking heterogeneous data - can be applied to the representation and management of public-level health data to alleviate such challenges. However, the technologies behind building LOD systems and their effectiveness for health data are yet to be assessed. The objective of this study is to evaluate whether Linked Data technologies are potential options for developing health information representation, visualization, and retrieval systems, and to identify the available tools and methodologies for building Linked Data-based health information systems. We used the Resource Description Framework (RDF) for data representation, the Fuseki triple store for data storage, and Sgvizler for information visualization. Additionally, we integrated a SPARQL query interface for interacting with the data. We primarily used the WHO health observatory dataset to test the system. All the data were represented using RDF and interlinked with other related datasets on the Web of Data using Silk - a link discovery framework for the Web of Data. A preliminary usability assessment was conducted following the System Usability Scale (SUS) method. 
We developed an LOD-based health information representation, querying, and visualization system by using Linked Data tools. We imported more than 20,000 HIV-related data elements on mortality, prevalence, incidence, and related variables, which are freely available from the WHO global health observatory database. Additionally, we automatically linked 5312 data elements from DBpedia, Bio2RDF, and LinkedCT using the Silk framework. The system users can retrieve and visualize health information according to their interests. For users who are not familiar with SPARQL queries, we integrated a Linked Data search engine interface to search and browse the data. We used the system to represent and store the data, facilitating flexible queries and different kinds of visualizations. The preliminary user evaluation score by public health data managers and users was 82 on the SUS usability measurement scale. The need to write queries in the interface was the main reported difficulty of LOD-based systems to the end user. The system introduced in this article shows that current LOD technologies are a promising alternative to represent heterogeneous health data in a flexible and reusable manner so that they can serve intelligent queries, and ultimately support decision-making. However, the development of advanced text-based search engines is necessary to increase its usability especially for nontechnical users. Further research with large datasets is recommended in the future to unfold the potential of Linked Data and Semantic Web for future health information systems development.
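
    The RDF/SPARQL pipeline described in this record can be pictured with a minimal sketch. This is not the study's code: it stands in for Fuseki and SPARQL with a plain Python triple list and a pattern matcher, and every identifier (the `ex:` names and indicator values) is hypothetical.

```python
# Hypothetical health observations as (subject, predicate, object) triples,
# mimicking the RDF data model used in the study.
TRIPLES = [
    ("ex:obs1", "ex:indicator", "ex:HIVPrevalence"),
    ("ex:obs1", "ex:country", "ex:Ethiopia"),
    ("ex:obs1", "ex:year", 2011),
    ("ex:obs1", "ex:value", 1.4),
    ("ex:obs2", "ex:indicator", "ex:HIVMortality"),
    ("ex:obs2", "ex:country", "ex:Ethiopia"),
    ("ex:obs2", "ex:year", 2011),
    ("ex:obs2", "ex:value", 0.6),
]

def match(pattern, triples=TRIPLES):
    """Return variable bindings for a (s, p, o) pattern, as a SPARQL basic
    graph pattern would; slots starting with '?' are variables."""
    results = []
    for triple in triples:
        binding = {}
        ok = True
        for slot, val in zip(pattern, triple):
            if isinstance(slot, str) and slot.startswith("?"):
                binding[slot] = val
            elif slot != val:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# Analogous to: SELECT ?obs WHERE { ?obs ex:indicator ex:HIVPrevalence }
prevalence_obs = [b["?obs"]
                  for b in match(("?obs", "ex:indicator", "ex:HIVPrevalence"))]
```

    A real deployment would instead load the WHO data into a triple store and evaluate genuine SPARQL, but the variable-binding mechanics are the same idea.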

  5. Magnetic stimulation of the dorsolateral prefrontal cortex dissociates fragile visual short-term memory from visual working memory.

    PubMed

    Sligte, Ilja G; Wokke, Martijn E; Tesselaar, Johannes P; Scholte, H Steven; Lamme, Victor A F

    2011-05-01

    To guide our behavior in successful ways, we often need to rely on information that is no longer in view but is maintained in visual short-term memory (VSTM). While VSTM is usually broken down into iconic memory (a brief, high-capacity store) and visual working memory (a sustained yet limited-capacity store), recent studies have suggested the existence of an additional, intermediate form of VSTM that depends on activity in extrastriate cortex. In previous work, we have shown that this fragile form of VSTM can be dissociated from iconic memory. In the present study, we provide evidence that fragile VSTM is also distinct from visual working memory: magnetic stimulation of the right dorsolateral prefrontal cortex (DLPFC) disrupts visual working memory while leaving fragile VSTM intact. In addition, we observed that people with high DLPFC activity had superior working memory capacity compared with people with low DLPFC activity, and only people with high DLPFC activity showed a reduction in working memory capacity in response to magnetic stimulation. Altogether, this study shows that VSTM consists of three stages that have clearly different characteristics and rely on different neural structures. On the methodological side, we show that it is possible to predict individual susceptibility to magnetic stimulation from functional MRI activity. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  6. Receptive Fields and the Reconstruction of Visual Information.

    DTIC Science & Technology

    1985-09-01

    ...depending on the noise. Thus our model would suggest that the interpolation filters for deblurring are playing a role in hyperacuity. This is novel... of additional precision in the information can be obtained by a process of deblurring, which could be relevant to hyperacuity. It also provides an... impulse of heat diffuses into increasingly larger Gaussian distributions as time proceeds. Mathematically, let f(x) denote the initial temperature

  7. Hominoid visual brain structure volumes and the position of the lunate sulcus.

    PubMed

    de Sousa, Alexandra A; Sherwood, Chet C; Mohlberg, Hartmut; Amunts, Katrin; Schleicher, Axel; MacLeod, Carol E; Hof, Patrick R; Frahm, Heiko; Zilles, Karl

    2010-04-01

    It has been argued that changes in the relative sizes of visual system structures predated an increase in brain size and provide evidence of brain reorganization in hominins. However, data about the volume and anatomical limits of visual brain structures in the extant taxa phylogenetically closest to humans-the apes-remain scarce, thus complicating tests of hypotheses about evolutionary changes. Here, we analyze new volumetric data for the primary visual cortex and the lateral geniculate nucleus to determine whether or not the human brain departs from allometrically-expected patterns of brain organization. Primary visual cortex volumes were compared to lunate sulcus position in apes to investigate whether or not inferences about brain reorganization made from fossil hominin endocasts are reliable in this context. In contrast to previous studies, in which all species were relatively poorly sampled, the current study attempted to evaluate the degree of intraspecific variability by including numerous hominoid individuals (particularly Pan troglodytes and Homo sapiens). In addition, we present and compare volumetric data from three new hominoid species-Pan paniscus, Pongo pygmaeus, and Symphalangus syndactylus. These new data demonstrate that hominoid visual brain structure volumes vary more than previously appreciated. In addition, humans have relatively reduced primary visual cortex and lateral geniculate nucleus volumes as compared to allometric predictions from other hominoids. These results suggest that inferences about the position of the lunate sulcus on fossil endocasts may provide information about brain organization. Copyright 2010 Elsevier Ltd. All rights reserved.

  8. Robust selectivity to two-object images in human visual cortex

    PubMed Central

    Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see, however, [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18], but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24] suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105
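
    The "train on isolated objects, then decode two-object images" step can be illustrated with a toy sketch. Nothing below comes from the paper: the responses are synthetic vectors, the decoder is a simple linear projection onto class prototypes, and the assumption that a two-object response resembles the average of the isolated responses is an illustrative simplification.

```python
import random

# Synthetic "population responses": one random prototype vector per object.
random.seed(0)
DIM = 40
PROTO = {obj: [random.gauss(0, 1) for _ in range(DIM)]
         for obj in ("face", "house", "car")}

def response(objs, noise=0.2):
    """Synthetic response to a display: mean of object prototypes plus noise."""
    return [sum(PROTO[o][i] for o in objs) / len(objs) + random.gauss(0, noise)
            for i in range(DIM)]

def decode(r):
    """Linear readout fit on isolated objects: project onto each prototype
    and report the best-matching category."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return max(PROTO, key=lambda o: dot(r, PROTO[o]))

car_pred = decode(response(["car"]))             # isolated object
pair_pred = decode(response(["face", "house"]))  # two-object display
```

    Even though the decoder never saw a two-object display during "training," the pair response still projects most strongly onto the prototypes of its constituent objects, which is the robustness the recordings demonstrated.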

  9. Reward processing in the value-driven attention network: reward signals tracking cue identity and location.

    PubMed

    Anderson, Brian A

    2017-03-01

    Through associative reward learning, arbitrary cues acquire the ability to automatically capture visual attention. Previous studies have examined the neural correlates of value-driven attentional orienting, revealing elevated activity within a network of brain regions encompassing the visual corticostriatal loop [caudate tail, lateral occipital complex (LOC) and early visual cortex] and intraparietal sulcus (IPS). Such attentional priority signals raise a broader question concerning how visual signals are combined with reward signals during learning to create a representation that is sensitive to the confluence of the two. This study examines reward signals during the cued reward training phase commonly used to generate value-driven attentional biases. High, compared with low, reward feedback preferentially activated the value-driven attention network, in addition to regions typically implicated in reward processing. Further examination of these reward signals within the visual system revealed information about the identity of the preceding cue in the caudate tail and LOC, and information about the location of the preceding cue in IPS, while early visual cortex represented both location and identity. The results reveal teaching signals within the value-driven attention network during associative reward learning, and further suggest functional specialization within different regions of this network during the acquisition of an integrated representation of stimulus value. © The Author (2016). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  10. Decoding visual object categories from temporal correlations of ECoG signals.

    PubMed

    Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu

    2014-04-15

    How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features and compared the decoding performance with features defined by spectral power and phase from individual electrodes. Using power or phase alone, decoding accuracy was significantly better than chance; correlations alone, or correlations combined with power, outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
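
    The correlation features described here are straightforward to sketch. The snippet below only illustrates the feature construction (Pearson correlation over all electrode pairs within a trial); the study's preprocessing, frequency bands, and classifier are not reproduced.

```python
import math
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_features(trial):
    """trial: list of per-electrode time series -> feature vector of
    pairwise correlations (upper triangle of the correlation matrix)."""
    return [pearson(trial[i], trial[j])
            for i, j in combinations(range(len(trial)), 2)]

# Three toy electrodes: two in phase, one in anti-phase. The features
# capture the relative timing that power-only features would miss.
trial = [[0, 1, 0, -1, 0, 1, 0, -1],
         [0, 1, 0, -1, 0, 1, 0, -1],
         [0, -1, 0, 1, 0, -1, 0, 1]]
feats = correlation_features(trial)
```

    Such feature vectors, one per trial, would then be fed to an ordinary classifier; shuffling trials within an electrode destroys exactly these within-trial correlations, which is the control the authors report.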

  11. Are visual peripheries forever young?

    PubMed

    Burnat, Kalina

    2015-01-01

    The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.

  12. Comparison of Middle Ear Visualization With Endoscopy and Microscopy.

    PubMed

    Bennett, Marc L; Zhang, Dongqing; Labadie, Robert F; Noble, Jack H

    2016-04-01

    The primary goal of chronic ear surgery is the creation of a safe, clean, dry ear. For cholesteatomas, complete removal of disease depends on visualization. Conventional microscopy is adequate for most dissection, but various subregions of the middle ear are better visualized with endoscopy. The purpose of the present study was to quantitatively assess the improved visualization that endoscopes afford compared with operating microscopes. Microscopic and endoscopic views were simulated using a three-dimensional model developed from temporal bone scans. Surface renderings of the ear canal and middle ear subsegments were defined, and the percentage of visualization of each middle ear subsegment, both with and without ossicles, was then determined for the microscope as well as for 0-, 30-, and 45-degree endoscopes. Using this information, we analyzed which mode of visualization is best suited for dissection within a particular anatomical region. A 0-degree scope provides significantly more visualization of every subregion, except the antrum, than a microscope. In addition, angled scopes permit visualizing significantly more surface area of every subregion of the middle ear than straight scopes or microscopes. Endoscopes offer advantages for cholesteatoma dissection in difficult-to-visualize areas, including the sinus tympani and epitympanum.

  13. Motor adaptation in complex sports - the influence of visual context information on the adaptation of the three-point shot to altered task demands in expert basketball players.

    PubMed

    Stöckel, Tino; Fries, Udo

    2013-01-01

    We examined the influence of visual context information on skilled motor behaviour and motor adaptation in basketball. The rules of basketball in Europe have recently changed, such that the distance for three-point shots increased from 6.25 m to 6.75 m. As such, we tested the extent to which basketball experts can adapt to the longer distance when a) only the unfamiliar, new three-point line was provided as a floor marking (NL group), or b) the familiar, old three-point line was provided in addition to the new floor markings (OL group). In the present study 20 expert basketball players performed 40 three-point shots from 6.25 m and 40 shots from 6.75 m. We assessed the percentage of hits and analysed the landing position of the ball. Results showed better adaptation of throwing performance to the longer distance when the old three-point line was provided as a visual landmark, compared to when only the new three-point line was provided. We hypothesise that the three-point line delivered relevant information needed to successfully adapt to the greater distance in the OL group, whereas it disturbed performance and the ability to adapt in the NL group. The importance of visual landmarks for motor adaptation in basketball throwing is discussed relative to the influence of other information sources (i.e. angle of elevation relative to the basket) and sport practice.

  14. The impact of front-of-pack marketing attributes versus nutrition and health information on parents' food choices.

    PubMed

    Georgina Russell, Catherine; Burke, Paul F; Waller, David S; Wei, Edward

    2017-09-01

    Front-of-pack attributes have the potential to affect parents' food choices on behalf of their children and form one avenue through which strategies to address the obesogenic environment can be developed. Previous work has focused on the isolated effects of nutrition and health information (e.g. labeling systems, health claims), and how parents trade off this information against co-occurring marketing features (e.g. product imagery, cartoons) is unclear. A Discrete Choice Experiment was utilized to understand how front-of-pack nutrition, health and marketing attributes, as well as pricing, influenced parents' choices of cereal for their child. Packages varied with respect to the two elements of the Australian Health Star Rating system (stars and nutrient facts panel), along with written claims, product visuals, additional visuals, and price. A total of 520 parents (53% male) with a child aged between five and eleven years were recruited via an online panel company and completed the survey. Product visuals, followed by star ratings, were found to be the most significant attributes in driving choice, while written claims and other visuals were the least significant. Use of the Health Star Rating (HSR) system and other features were related to the child's fussiness level and parents' concerns about their child's weight with parents of fussy children, in particular, being less influenced by the HSR star information and price. The findings suggest that front-of-pack health labeling systems can affect choice when parents trade this information off against marketing attributes, yet some marketing attributes can be more influential, and not all parents utilize this information in the same way. Copyright © 2017. Published by Elsevier Ltd.

  15. Body sway adaptation to addition but not withdrawal of stabilizing visual information is delayed by a concurrent cognitive task.

    PubMed

    Honeine, Jean-Louis; Crisafulli, Oscar; Schieppati, Marco

    2017-02-01

    The aim of this study was to test the effects of a concurrent cognitive task on the promptness of the sensorimotor integration and reweighting processes following addition and withdrawal of vision. Fourteen subjects stood in tandem while vision was passively added and removed. Subjects performed a cognitive task, consisting of counting backward in steps of three, or were "mentally idle." We estimated the time intervals following addition and withdrawal of vision at which body sway began to change. We also estimated the time constant of the exponential change in body oscillation until the new level of sway was reached, consistent with the current visual state. Under the mentally idle condition, mean latency was 0.67 and 0.46 s and the mean time constant was 1.27 and 0.59 s for vision addition and withdrawal, respectively. Following addition of vision, counting backward delayed the latency by about 300 ms, without affecting the time constant. Following withdrawal, counting backward had no significant effect on either latency or time constant. The extension by counting backward of the time interval to stabilization onset on addition of vision suggests a competition for allocation of cortical resources. Conversely, the absence of cognitive task effect on the rapid onset of destabilization on vision withdrawal, and on the relevant reweighting time course, advocates the intervention of a subcortical process. Diverting attention from a challenging standing task discloses a cortical supervision on the process of sensorimotor integration of new balance-stabilizing information. A subcortical process would instead organize the response to removal of the stabilizing sensory input. NEW & NOTEWORTHY This study is the first to test the effect of an arithmetic task on the time course of balance readjustment following visual withdrawal or addition. Performing such a cognitive task increases the time delay following addition of vision but has no effect on withdrawal dynamics. This suggests that sensorimotor integration following addition of a stabilizing signal is performed at a cortical level, whereas the response to its withdrawal is "automatic" and accomplished at a subcortical level. Copyright © 2017 the American Physiological Society.
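
    The latency and time-constant estimates can be illustrated with a toy trace. Only the example latency (0.67 s) and time constant (1.27 s) are taken from the abstract; the synthetic sway signal, sampling rate, and log-linear fitting procedure are assumptions for illustration, not the authors' method.

```python
import math

def sway(t, latency=0.67, tau=1.27, start=2.0, target=1.0):
    """Synthetic sway amplitude: flat until `latency`, then an exponential
    approach to the new steady-state level `target`."""
    if t < latency:
        return start
    return target + (start - target) * math.exp(-(t - latency) / tau)

def estimate_tau(times, values, latency, target):
    """Recover tau by a log-linear least-squares fit of (value - target)
    against time after the latency: log(v - target) = const - t/tau."""
    pts = [(t - latency, math.log(v - target))
           for t, v in zip(times, values) if t >= latency and v > target]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return -1.0 / slope

times = [i * 0.05 for i in range(200)]   # 10 s sampled at 20 Hz
values = [sway(t) for t in times]
tau_hat = estimate_tau(times, values, latency=0.67, target=1.0)
```

    On this noise-free trace the fit returns the time constant exactly; real sway data would of course require noise handling and an estimate of the latency itself.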

  16. Anorexia nervosa and body dysmorphic disorder are associated with abnormalities in processing visual information.

    PubMed

    Li, W; Lai, T M; Bohon, C; Loo, S K; McCurdy, D; Strober, M; Bookheimer, S; Feusner, J

    2015-07-01

    Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities--event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI)--to test for abnormal activity associated with early visual signaling. We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.

  17. Discussing State-of-the-Art Spatial Visualization Techniques Applicable for the Epidemiological Surveillance Data on the Example of Campylobacter spp. in Raw Chicken Meat.

    PubMed

    Plaza-Rodríguez, C; Appel, B; Kaesbohrer, A; Filter, M

    2016-08-01

    Within the European activities for the 'Monitoring and Collection of Information on Zoonoses', EFSA annually publishes a European report including information on the prevalence of Campylobacter spp. in Germany. Spatial epidemiology becomes a fundamental tool for the generation of these reports, with the representation of prevalence as an essential element. Until now, choropleth maps have been the default visualization technique in epidemiological monitoring and surveillance reports produced by EFSA and German authorities. However, given their limitations, it seems reasonable to explore alternative chart types. Four maps, including choropleth, cartogram, graduated-symbol, and dot-density maps, were created to visualize real-world sample data on the prevalence of Campylobacter spp. in raw chicken meat samples in Germany in 2011. In addition, adjacent and coincident maps were created to also visualize the associated uncertainty. As an outcome, we found that no single data visualization technique encompasses all the features necessary to visualize prevalence data alone or together with its associated uncertainty. All the visualization techniques considered in this study proved to have both advantages and disadvantages. To determine which visualization technique should be used for future reports, we recommend a dialogue between end users and epidemiologists on the basis of sample data and charts. The final decision should also consider the knowledge and experience of end users as well as the specific objective to be achieved with the charts. © 2015 The Authors. Zoonoses and Public Health Published by Blackwell Verlag GmbH.

  18. Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults

    PubMed Central

    Ryu, Hokyoung; Kim, Jae-Kwan; Jeong, Eunju

    2018-01-01

    Older adults are known to have lesser cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, Standard Deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance of visual/auditory identification (Uni-V, Uni-A) with that of visual/auditory identification in the presence of distraction in counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicated that the loss of visual inhibitory control was beneficial for the auditory target identification presented in a multimodal context in older adults. A likely multisensory information processing strategy in the older adults was further discussed in relation to aged cognition. PMID:29641462

  19. Greater magnocellular saccadic suppression in high versus low autistic tendency suggests a causal path to local perceptual style.

    PubMed

    Crewther, David P; Crewther, Daniel; Bevan, Stephanie; Goodale, Melvyn A; Crewther, Sheila G

    2015-12-01

    Saccadic suppression-the reduction of visual sensitivity during rapid eye movements-has previously been proposed to reflect a specific suppression of the magnocellular visual system, with the initial neural site of that suppression at or prior to afferent visual information reaching striate cortex. Dysfunction in the magnocellular visual pathway has also been associated with perceptual and physiological anomalies in individuals with autism spectrum disorder or high autistic tendency, leading us to question whether saccadic suppression is altered in the broader autism phenotype. Here we show that individuals with high autistic tendency show greater saccadic suppression of low versus high spatial frequency gratings while those with low autistic tendency do not. In addition, those with high but not low autism spectrum quotient (AQ) demonstrated pre-cortical (35-45 ms) evoked potential differences (saccade versus fixation) to a large, low contrast, pseudo-randomly flashing bar. Both AQ groups showed similar differential visual evoked potential effects in later epochs (80-160 ms) at high contrast. Thus, the magnocellular theory of saccadic suppression appears untenable as a general description for the typically developing population. Our results also suggest that the bias towards local perceptual style reported in autism may be due to selective suppression of low spatial frequency information accompanying every saccadic eye movement.

  20. Vision after 53 years of blindness.

    PubMed

    Sikl, Radovan; Simeček, Michal; Porubanová-Norquist, Michaela; Bezdíček, Ondřej; Kremláček, Jan; Stodůlka, Pavel; Fine, Ione; Ostrovsky, Yuri

    2013-01-01

    Several studies have shown that visual recovery after blindness that occurs early in life is never complete. The current study investigated whether an extremely long period of blindness might also cause a permanent impairment of visual performance, even in a case of adult-onset blindness. We examined KP, a 71-year-old man who underwent a successful sight-restoring operation after 53 years of blindness. A set of psychophysical tests designed to assess KP's face perception, object recognition, and visual space perception abilities were conducted six months and eight months after the surgery. The results demonstrate that regardless of a lengthy period of normal vision and rich pre-accident perceptual experience, KP did not fully integrate this experience, and his visual performance remained greatly compromised. This was particularly evident when the tasks targeted finer levels of perceptual processing. In addition to the decreased robustness of his memory representations, which was hypothesized as the main factor determining visual impairment, other factors that may have affected KP's performance were considered, including compromised visual functions, problems with perceptual organization, deficits in the simultaneous processing of visual information, and reduced cognitive abilities.

  1. Color opponent receptive fields self-organize in a biophysical model of visual cortex via spike-timing dependent plasticity

    PubMed Central

    Eguchi, Akihiro; Neymotin, Samuel A.; Stringer, Simon M.

    2014-01-01

    Although many computational models have been proposed to explain orientation maps in primary visual cortex (V1), it is not yet known how similar clusters of color-selective neurons in macaque V1/V2 are connected and develop. In this work, we address the problem of understanding the cortical processing of color information with a possible mechanism of the development of the patchy distribution of color selectivity via computational modeling. Each color input is decomposed into a red, green, and blue representation and transmitted to the visual cortex via a simulated optic nerve in a luminance channel and red–green and blue–yellow opponent color channels. Our model of the early visual system consists of multiple topographically-arranged layers of excitatory and inhibitory neurons, with sparse intra-layer connectivity and feed-forward connectivity between layers. Layers are arranged based on anatomy of early visual pathways, and include a retina, lateral geniculate nucleus, and layered neocortex. Each neuron in the V1 output layer makes synaptic connections to neighboring neurons and receives the three types of signals in the different channels from the corresponding photoreceptor position. Synaptic weights are randomized and learned using spike-timing-dependent plasticity (STDP). After training with natural images, the neurons display heightened sensitivity to specific colors. Information-theoretic analysis reveals mutual information between particular stimuli and responses, and that the information reaches a maximum with fewer neurons in the higher layers, indicating that estimations of the input colors can be done using the output of fewer cells in the later stages of cortical processing. In addition, cells with similar color receptive fields form clusters. Analysis of spiking activity reveals increased firing synchrony between neurons when particular color inputs are presented or removed (ON-cell/OFF-cell). PMID:24659956
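
    The spike-timing-dependent plasticity rule at the core of the model can be sketched generically. This is the standard pair-based STDP rule from the modeling literature, not the paper's full multilayer network; the learning rates and time window below are assumed values.

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation/depression amplitudes (assumed)
TAU = 20.0                      # STDP time window in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre:
    pre-before-post potentiates, post-before-pre depresses, both decaying
    exponentially with the spike-time difference."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    if dt < 0:
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate all-pairs STDP updates and clip the weight to its bounds."""
    for tp in pre_spikes:
        for tq in post_spikes:
            w += stdp_dw(tp, tq)
    return min(w_max, max(w_min, w))

# Causal pairing (pre at 10 ms, post at 15 ms) strengthens the synapse;
# the reversed ordering weakens it.
w_up = apply_stdp(0.5, [10.0], [15.0])
w_down = apply_stdp(0.5, [15.0], [10.0])
```

    Repeated over many input presentations, updates of this kind are what let correlated color-channel inputs carve out the clustered receptive fields the abstract describes.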

  2. Formulating qualitative features using interactive visualization for analysis of multivariate spatiotemporal data

    NASA Astrophysics Data System (ADS)

    Porter, M.; Hill, M. C.; Pierce, S. A.; Gil, Y.; Pennington, D. D.

    2017-12-01

    DiscoverWater is a web-based visualization tool developed to enable the visual representation of data and thus aid scientific and societal understanding of hydrologic systems. Open data sources are coalesced to, for example, illustrate the impacts of irrigation withdrawals on streamflow. Scientists and stakeholders are informed through synchronized time-series plots that correlate multiple spatiotemporal datasets and an interactive, time-evolving map that provides spatial analytical context. Together, these components elucidate trends so that the user can envision the relations between groundwater-surface water interactions, the impacts of pumping on these interactions, and the interplay of climate. Aligning data in this manner has the capacity for interdisciplinary knowledge discovery and motivates dialogue about system processes, which we seek to enhance through qualitative features informed by quantitative models. DiscoverWater is demonstrated using two field cases. First, it is used to visualize datasets from the High Plains aquifer, where reservoir- and groundwater-supported irrigation has affected the Arkansas River in western Kansas. Second, data and model results from the Barton Springs segment of the Edwards aquifer in Texas reveal the effects of regional pumping on this important urbanizing aquifer system. Identifying what is interesting about the data and the modeled system in the two case studies is a step towards moving typically static visualization capabilities to an adaptive framework. Additionally, the dashboard interface incorporates both quantitative and qualitative information about distinctive case studies in a machine-readable form, such that a catalog of qualitative models can capture subject-matter expertise alongside associated datasets. As the catalog is expanded to include other case studies, the collection has the potential to establish a standard framework able to inform intelligent system reasoning.

  3. Task–Technology Fit of Video Telehealth for Nurses in an Outpatient Clinic Setting

    PubMed Central

    Finkelstein, Stanley M.

    2014-01-01

    Abstract Background: Incorporating telehealth into outpatient care delivery supports management of consumer health between clinic visits. Task–technology fit is a framework for understanding how technology helps and/or hinders a person during work processes. Evaluating the task–technology fit of video telehealth for personnel working in a pediatric outpatient clinic and providing care between clinic visits ensures the information provided matches the information needed to support work processes. Materials and Methods: The workflow of advanced practice registered nurse (APRN) care coordination provided via telephone and video telehealth was described and measured using a mixed-methods workflow analysis protocol that incorporated cognitive ethnography and time–motion study. Qualitative and quantitative results were merged and analyzed within the task–technology fit framework to determine the workflow fit of video telehealth for APRN care coordination. Results: Incorporating video telehealth into APRN care coordination workflow provided visual information unavailable during telephone interactions. Despite additional tasks and interactions needed to obtain the visual information, APRN workflow efficiency, as measured by time, was not significantly changed. Analyzed within the task–technology fit framework, the increased visual information afforded by video telehealth supported the assessment and diagnostic information needs of the APRN. Conclusions: Telehealth must provide the right information to the right clinician at the right time. Evaluating task–technology fit using a mixed-methods protocol ensured rigorous analysis of fit within work processes and identified workflows that benefit most from the technology. PMID:24841219

  4. Representational dynamics of object recognition: Feedforward and feedback information flows.

    PubMed

    Goddard, Erin; Carlson, Thomas A; Dermody, Nadene; Woolgar, Alexandra

    2016-03-01

    Object perception involves a range of visual and cognitive processes, and is known to include both a feedforward flow of information from early visual cortical areas to higher cortical areas and feedback from areas such as prefrontal cortex. Previous studies have found that low and high spatial frequency information regarding object identity may be processed over different timescales. Here we used the high temporal resolution of magnetoencephalography (MEG) combined with multivariate pattern analysis to measure information specifically related to object identity in peri-frontal and peri-occipital areas. Using stimuli closely matched in their low-level visual content, we found that activity in peri-occipital cortex could be used to decode object identity from ~80ms post stimulus onset, and activity in peri-frontal cortex could also be used to decode object identity from a later time (~265ms post stimulus onset). Low spatial frequency information related to object identity was present in the MEG signal at an earlier time than high spatial frequency information for peri-occipital cortex, but not for peri-frontal cortex. We additionally used Granger causality analysis to compare feedforward and feedback influences on representational content, and found evidence of both an early feedforward flow and a later feedback flow of information related to object identity. We discuss our findings in relation to existing theories of object processing and propose how the methods we use here could be used to address further questions of the neural substrates underlying object perception. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Augmented Reality in Scientific Publications-Taking the Visualization of 3D Structures to the Next Level.

    PubMed

    Wolle, Patrik; Müller, Matthias P; Rauh, Daniel

    2018-03-16

    The examination of three-dimensional structural models in scientific publications allows the reader to validate or invalidate conclusions drawn by the authors. However, whether due to a (temporary) lack of access to proper visualization software or a lack of proficiency, this information is not necessarily available to every reader. As the digital revolution quickly progresses, technologies have become widely available that overcome these limitations and offer everyone the opportunity to appreciate models not only in 2D, but also in 3D. Additionally, mobile devices such as smartphones and tablets allow access to this information almost anywhere, at any time. Since access to such information has only recently become standard practice, we want to outline straightforward ways to incorporate 3D models in augmented reality into scientific publications, books, posters, and presentations, and suggest that this should become general practice.

  6. On the selection and evaluation of visual display symbology Factors influencing search and identification times

    NASA Technical Reports Server (NTRS)

    Remington, Roger; Williams, Douglas

    1986-01-01

    Three single-target visual search tasks were used to evaluate a set of cathode-ray tube (CRT) symbols for a helicopter situation display. The search tasks were representative of the information extraction required in practice, and reaction time was used to measure the efficiency with which symbols could be located and identified. Familiar numeric symbols were responded to more quickly than graphic symbols. The addition of modifier symbols, such as a nearby flashing dot or surrounding square, had a greater disruptive effect on the graphic symbols than did the numeric characters. The results suggest that a symbol set is, in some respects, like a list that must be learned. Factors that affect the time to identify items in a memory task, such as familiarity and visual discriminability, also affect the time to identify symbols. This analogy has broad implications for the design of symbol sets. An attempt was made to model information access with this class of display.

  7. Topological Cacti: Visualizing Contour-based Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, Gunther H.; Bremer, Peer-Timo; Pascucci, Valerio

    2011-05-26

    Contours, the connected components of level sets, play an important role in understanding the global structure of a scalar field. In particular their nesting behavior and topology, often represented in the form of a contour tree, have been used extensively for visualization and analysis. However, traditional contour trees only encode structural properties like the number of contours or the nesting of contours, but little quantitative information such as volume or other statistics. Here we use the segmentation implied by a contour tree to compute a large number of per-contour (interval) based statistics of both the function defining the contour tree as well as other co-located functions. We introduce a new visual metaphor for contour trees, called topological cacti, that extends the traditional toporrery display of a contour tree to display additional quantitative information as the width of the cactus trunk and the length of its spikes. We apply the new technique to scalar fields of varying dimension and different measures to demonstrate the effectiveness of the approach.
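    The per-contour statistics described above can be sketched in a few lines. The snippet below is a simplified stand-in for the paper's contour-tree segmentation: it labels the connected components of one level set of a toy 2D scalar field with a flood fill and reports the area and mean value of each component. The field and threshold are invented for illustration.

```python
# Simplified sketch: per-contour statistics of one level set of a 2D
# scalar field. A contour tree would track all level sets; here we only
# flood-fill the components of {field >= threshold} (4-connectivity).
from collections import deque

def contour_stats(field, threshold):
    """Return (area, mean value) for each connected component of the
    superlevel set {field >= threshold}."""
    rows, cols = len(field), len(field[0])
    seen = [[False] * cols for _ in range(rows)]
    stats = []
    for r in range(rows):
        for c in range(cols):
            if field[r][c] >= threshold and not seen[r][c]:
                area, total = 0, 0.0
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    area += 1
                    total += field[y][x]
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and field[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                stats.append((area, total / area))
    return stats

field = [
    [0, 5, 0, 0],
    [0, 5, 0, 7],
    [0, 0, 0, 7],
]
print(contour_stats(field, 5))  # [(2, 5.0), (2, 7.0)]
```

    Per-contour quantities such as these are exactly what the cactus metaphor maps to trunk width and spike length.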

  8. Experimental and Computational Studies of Cortical Neural Network Properties Through Signal Processing

    NASA Astrophysics Data System (ADS)

    Clawson, Wesley Patrick

    Previous studies, both theoretical and experimental, of network-level dynamics in the cerebral cortex show evidence for a statistical phenomenon called criticality: a phenomenon originally studied in the context of phase transitions in physical systems and associated with favorable information processing in the context of the brain. The focus of this thesis is to expand upon past results with new experimentation and modeling to show a relationship between criticality and the ability to detect and discriminate sensory input. A line of theoretical work predicts maximal sensory discrimination as a functional benefit of criticality, which can then be characterized using the mutual information between sensory input (visual stimulus) and neural response. The primary finding of our experiments in the visual cortex of turtles and of our neuronal network modeling confirms this theoretical prediction. We show that sensory discrimination is maximized when visual cortex operates near criticality. In addition to presenting this primary finding in detail, this thesis also addresses our preliminary results on change-point detection in experimentally measured cortical dynamics.
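    The mutual-information measure mentioned above can be illustrated with a toy computation. The function below estimates I(stimulus; response) from paired discrete observations using plug-in probabilities; the data are invented, and a real analysis would first bin continuous neural responses.

```python
# Toy sketch: mutual information (in bits) between a discrete stimulus
# and a discrete response, estimated from paired observations.
from collections import Counter
from math import log2

def mutual_information(stimuli, responses):
    n = len(stimuli)
    c_s = Counter(stimuli)          # marginal stimulus counts
    c_r = Counter(responses)        # marginal response counts
    c_sr = Counter(zip(stimuli, responses))  # joint counts
    mi = 0.0
    for (s, r), c in c_sr.items():
        p_joint = c / n
        # p_joint / (p_s * p_r) written with raw counts
        mi += p_joint * log2(p_joint * n * n / (c_s[s] * c_r[r]))
    return mi

# A perfectly discriminable response carries 1 bit about a binary stimulus.
print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]))  # 1.0
```

    Higher values mean the response discriminates the stimuli better, which is the quantity the thesis reports as maximal near criticality.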

  9. 78 FR 65180 - Airworthiness Directives; MD Helicopters, Inc., Helicopters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-31

    ... reducing the retirement life of each tail rotor blade (blade), performing a one-time visual inspection of... required reporting information to the FAA within 24 hours following the one-time inspection. Since we... pitting and the shot peen surface's condition in addition to cracks and corrosion, and adds certain part...

  10. Profiling Oman education data using data visualization technique

    NASA Astrophysics Data System (ADS)

    Alalawi, Sultan Juma Sultan; Shaharanee, Izwan Nizal Mohd; Jamil, Jastini Mohd

    2016-10-01

    This research presents an innovative data visualization technique to understand and visualize the information in Oman's education data generated from the Ministry of Education Oman "Educational Portal". The Ministry of Education in the Sultanate of Oman has huge databases containing massive amounts of information. The volume of data in the database increases yearly as many students, teachers and employees enter the database. The task of discovering and analyzing these vast volumes of data becomes increasingly difficult. Information visualization and data mining offer better ways of dealing with large volumes of information. In this paper, an innovative information visualization technique is developed to visualize the complex multidimensional educational data. Microsoft Excel Dashboard, Visual Basic for Applications (VBA) and Pivot Tables are utilized to visualize the data. Findings from the summarization of the data are presented, and it is argued that information visualization can help related stakeholders become aware of hidden and interesting information otherwise drowned in their educational portal.
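    As a rough illustration of the pivot-table style summarization described above (here in Python rather than Excel/VBA), the sketch below cross-tabulates record counts over two fields. The field names and records are hypothetical, not the Ministry's actual schema.

```python
# Hypothetical sketch of a pivot-table count over two record fields,
# analogous to the Excel Pivot Table summarization used in the paper.
from collections import defaultdict

def pivot_count(records, row_key, col_key):
    table = defaultdict(lambda: defaultdict(int))
    for rec in records:
        table[rec[row_key]][rec[col_key]] += 1
    return {row: dict(cols) for row, cols in table.items()}

# Invented example records (not the real portal schema).
students = [
    {"region": "Muscat", "year": 2015},
    {"region": "Muscat", "year": 2016},
    {"region": "Dhofar", "year": 2016},
]
print(pivot_count(students, "region", "year"))
# {'Muscat': {2015: 1, 2016: 1}, 'Dhofar': {2016: 1}}
```

    A dashboard chart is then just a rendering of such a cross-tabulated table.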

  11. Neural Substrates of Visual Spatial Coding and Visual Feedback Control for Hand Movements in Allocentric and Target-Directed Tasks

    PubMed Central

    Thaler, Lore; Goodale, Melvyn A.

    2011-01-01

    Neuropsychological evidence suggests that different brain areas may be involved in movements that are directed at visual targets (e.g., pointing or reaching), and movements that are based on allocentric visual information (e.g., drawing or copying). Here we used fMRI to investigate the neural correlates of these two types of movements in healthy volunteers. Subjects (n = 14) performed right hand movements in either a target-directed task (moving a cursor to a target dot) or an allocentric task (moving a cursor to reproduce the distance and direction between two distal target dots) with or without visual feedback about their hand movement. Movements were monitored with an MR compatible touch panel. A whole brain analysis revealed that movements in allocentric conditions led to an increase in activity in the fundus of the left intra-parietal sulcus (IPS), in posterior IPS, in bilateral dorsal premotor cortex (PMd), and in the lateral occipital complex (LOC). Visual feedback in both target-directed and allocentric conditions led to an increase in activity in area MT+, superior parietal–occipital cortex (SPOC), and posterior IPS (all bilateral). In addition, we found that visual feedback affected brain activity differently in target-directed as compared to allocentric conditions, particularly in the pre-supplementary motor area, PMd, IPS, and parieto-occipital cortex. Our results, in combination with previous findings, suggest that the LOC is essential for allocentric visual coding and that SPOC is involved in visual feedback control. The differences in brain activity between target-directed and allocentric visual feedback conditions may be related to behavioral differences in visual feedback control. Our results advance the understanding of the visual coordinate frame used by the LOC. In addition, because of the nature of the allocentric task, our results have relevance for the understanding of neural substrates of magnitude estimation and vector coding of movements. PMID:21941474

  12. Windows to the soul: vision science as a tool for studying biological mechanisms of information processing deficits in schizophrenia.

    PubMed

    Yoon, Jong H; Sheremata, Summer L; Rokem, Ariel; Silver, Michael A

    2013-10-31

    Cognitive and information processing deficits are core features and important sources of disability in schizophrenia. Our understanding of the neural substrates of these deficits remains incomplete, in large part because the complexity of impairments in schizophrenia makes the identification of specific deficits very challenging. Vision science presents unique opportunities in this regard: many years of basic research have led to detailed characterization of relationships between structure and function in the early visual system and have produced sophisticated methods to quantify visual perception and characterize its neural substrates. We present a selective review of research that illustrates the opportunities for discovery provided by visual studies in schizophrenia. We highlight work that has been particularly effective in applying vision science methods to identify specific neural abnormalities underlying information processing deficits in schizophrenia. In addition, we describe studies that have utilized psychophysical experimental designs that mitigate generalized deficit confounds, thereby revealing specific visual impairments in schizophrenia. These studies contribute to accumulating evidence that early visual cortex is a useful experimental system for the study of local cortical circuit abnormalities in schizophrenia. The high degree of similarity across neocortical areas of neuronal subtypes and their patterns of connectivity suggests that insights obtained from the study of early visual cortex may be applicable to other brain regions. We conclude with a discussion of future studies that combine vision science and neuroimaging methods. These studies have the potential to address pressing questions in schizophrenia, including the dissociation of local circuit deficits vs. impairments in feedback modulation by cognitive processes such as spatial attention and working memory, and the relative contributions of glutamatergic and GABAergic deficits.

  13. [Allocation of attentional resource and monitoring processes under rapid serial visual presentation].

    PubMed

    Nishiura, K

    1998-08-01

    With the use of rapid serial visual presentation (RSVP), the present study investigated the cause of target intrusion errors and the functioning of monitoring processes. Eighteen students participated in Experiment 1, and 24 in Experiment 2. In Experiment 1, different target intrusion errors were found depending on the kind of letters: romaji, hiragana, and kanji. In Experiment 2, stimulus set size and context information were manipulated in an attempt to explore the cause of post-target intrusion errors. Results showed that as stimulus set size increased, the post-target intrusion errors also increased, but contextual information did not affect the errors. Results concerning mean report probability indicated that increased allocation of attentional resource to the response-defining dimension was the cause of the errors. In addition, results concerning confidence ratings showed that monitoring of temporal and contextual information was extremely accurate, but this was not so for stimulus information. These results suggest that attentional resource is different from monitoring resource.

  14. Sockeye: A 3D Environment for Comparative Genomics

    PubMed Central

    Montgomery, Stephen B.; Astakhova, Tamara; Bilenky, Mikhail; Birney, Ewan; Fu, Tony; Hassel, Maik; Melsopp, Craig; Rak, Marcin; Robertson, A. Gordon; Sleumer, Monica; Siddiqui, Asim S.; Jones, Steven J.M.

    2004-01-01

    Comparative genomics techniques are used in bioinformatics analyses to identify the structural and functional properties of DNA sequences. As the amount of available sequence data steadily increases, the ability to perform large-scale comparative analyses has become increasingly relevant. In addition, the growing complexity of genomic feature annotation means that new approaches to genomic visualization need to be explored. We have developed a Java-based application called Sockeye that uses three-dimensional (3D) graphics technology to facilitate the visualization of annotation and conservation across multiple sequences. This software uses the Ensembl database project to import sequence and annotation information from several eukaryotic species. A user can additionally import their own custom sequence and annotation data. Individual annotation objects are displayed in Sockeye by using custom 3D models. Ensembl-derived and imported sequences can be analyzed by using a suite of multiple and pair-wise alignment algorithms. The results of these comparative analyses are also displayed in the 3D environment of Sockeye. By using the Java3D API to visualize genomic data in a 3D environment, we are able to compactly display cross-sequence comparisons. This provides the user with a novel platform for visualizing and comparing genomic feature organization. PMID:15123592
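    As a hedged sketch of the kind of pair-wise sequence comparison such a tool runs (Sockeye's actual alignment suite and parameters are not specified here), the snippet below computes a Needleman-Wunsch global alignment score with illustrative scoring values, using a rolling single-row dynamic program.

```python
# Minimal Needleman-Wunsch global alignment *score* (no traceback).
# Scoring values (match +1, mismatch -1, gap -2) are illustrative only.
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    # prev holds the DP row for the previous prefix of `a`.
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [i * gap]
        for j, cb in enumerate(b, 1):
            cur.append(max(
                prev[j - 1] + (match if ca == cb else mismatch),  # (mis)match
                prev[j] + gap,       # gap in b
                cur[j - 1] + gap,    # gap in a
            ))
        prev = cur
    return prev[-1]

print(nw_score("GATTACA", "GCATGCU"))
```

    Tools like Sockeye visualize the resulting alignments across species; the score alone already ranks candidate homologous regions.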

  15. 32 CFR 811.8 - Forms prescribed and availability of publications.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... FORCE SALES AND SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.8 Forms prescribed and availability of publications. (a) AF Form 833, Visual Information Request, AF Form 1340, Visual Information Support Center Workload Report, DD Form 1995, Visual Information (VI) Production...

  16. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    NASA Astrophysics Data System (ADS)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control of an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing and human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer to deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities helping the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization, quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, the source code must have the following characteristics for enabling collaborative work: open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use in the case of 3-D astronomical data.

  17. Stereo depth and the control of locomotive heading

    NASA Astrophysics Data System (ADS)

    Rushton, Simon K.; Harris, Julie M.

    1998-04-01

    Does the addition of stereoscopic depth aid steering, the perceptual control of locomotor heading, around an environment? This is a critical question when designing a tele-operation or Virtual Environment system, with implications for computational resources and visual comfort. We examined the role of stereoscopic depth in the perceptual control of heading by employing an active steering task. Three conditions were tested: stereoscopic depth, incorrect stereoscopic depth, and no stereoscopic depth. Results suggest that stereoscopic depth does not improve performance in a visual control task. A further set of experiments examined the importance of a ground plane. As a ground plane is a common feature of all natural environments and provides a pictorial depth cue, it has been suggested that the visual system may be especially attuned to exploit its presence. Thus it would be predicted that a ground plane would aid judgments of locomotor heading. Results suggest that the presence of rich motion information in the lower visual field produces significant performance advantages and that the provision of such information may prove a better target for system resources than stereoscopic depth. These findings have practical consequences for system designers and also challenge previous theoretical and psychophysical perceptual research.

  18. VisIRR: A Visual Analytics System for Information Retrieval and Recommendation for Large-Scale Document Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choo, Jaegul; Kim, Hannah; Clarkson, Edward

    In this paper, we present an interactive visual information retrieval and recommendation system, called VisIRR, for large-scale document discovery. VisIRR effectively combines the paradigms of (1) a passive pull through query processes for retrieval and (2) an active push that recommends items of potential interest to users based on their preferences. Equipped with an efficient dynamic query interface against a large-scale corpus, VisIRR organizes the retrieved documents into high-level topics and visualizes them in a 2D space, representing the relationships among the topics along with their keyword summary. In addition, based on interactive personalized preference feedback with regard to documents, VisIRR provides document recommendations from the entire corpus, which are beyond the retrieved sets. Such recommended documents are visualized in the same space as the retrieved documents, so that users can seamlessly analyze both existing and newly recommended ones. This article presents novel computational methods, which make these integrated representations and fast interactions possible for a large-scale document corpus. We illustrate how the system works by providing detailed usage scenarios. Finally, we present preliminary user study results for evaluating the effectiveness of the system.

  19. VisIRR: A Visual Analytics System for Information Retrieval and Recommendation for Large-Scale Document Data

    DOE PAGES

    Choo, Jaegul; Kim, Hannah; Clarkson, Edward; ...

    2018-01-31

    In this paper, we present an interactive visual information retrieval and recommendation system, called VisIRR, for large-scale document discovery. VisIRR effectively combines the paradigms of (1) a passive pull through query processes for retrieval and (2) an active push that recommends items of potential interest to users based on their preferences. Equipped with an efficient dynamic query interface against a large-scale corpus, VisIRR organizes the retrieved documents into high-level topics and visualizes them in a 2D space, representing the relationships among the topics along with their keyword summary. In addition, based on interactive personalized preference feedback with regard to documents, VisIRR provides document recommendations from the entire corpus, which are beyond the retrieved sets. Such recommended documents are visualized in the same space as the retrieved documents, so that users can seamlessly analyze both existing and newly recommended ones. This article presents novel computational methods, which make these integrated representations and fast interactions possible for a large-scale document corpus. We illustrate how the system works by providing detailed usage scenarios. Finally, we present preliminary user study results for evaluating the effectiveness of the system.

  20. Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Alan E.; Crow, Vernon L.; Payne, Deborah A.

    Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a data visualization method includes accessing a plurality of initial documents at a first moment in time, first processing the initial documents providing processed initial documents, first identifying a plurality of first associations of the initial documents using the processed initial documents, generating a first visualization depicting the first associations, accessing a plurality of additional documents at a second moment in time after the first moment in time, second processing the additional documents providing processed additional documents, second identifying a plurality of second associations of the additional documents and at least some of the initial documents, wherein the second identifying comprises identifying using the processed initial documents and the processed additional documents, and generating a second visualization depicting the second associations.
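    The two-pass association step described above can be sketched as follows. The record does not specify how documents are processed, so the snippet assumes, purely for illustration, that two documents are "associated" when they share terms: the first pass links the initial documents, the second pass repeats the linking after additional documents arrive.

```python
# Illustrative sketch of the two-pass association step. The shared-term
# criterion is an assumption; the record leaves the processing unspecified.
def associations(docs):
    """docs: {name: set of terms}. Return (a, b, shared terms) pairs."""
    names = list(docs)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = docs[a] & docs[b]
            if shared:
                pairs.append((a, b, sorted(shared)))
    return pairs

initial = {"d1": {"flu", "virus"}, "d2": {"virus", "vaccine"}}
print(associations(initial))                 # input to the first visualization
additional = {"d3": {"vaccine", "trial"}}
print(associations({**initial, **additional}))  # input to the second visualization
```

    The second pass links the new documents to at least some of the initial ones, which is what the second visualization depicts.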

  1. Current food chain information provides insufficient information for modern meat inspection of pigs.

    PubMed

    Felin, Elina; Jukola, Elias; Raulo, Saara; Heinonen, Jaakko; Fredriksson-Ahomaa, Maria

    2016-05-01

    Meat inspection now incorporates a more risk-based approach for protecting human health against meat-borne biological hazards. Official post-mortem meat inspection of pigs has shifted to visual meat inspection. The official veterinarian decides on additional post-mortem inspection procedures, such as incisions and palpations. The decision is based on declarations in the food chain information (FCI), ante-mortem inspection and post-mortem inspection. However, a smooth slaughter and inspection process is essential. Therefore, one should be able to assess prior to slaughter which pigs are suitable for visual meat inspection only, and which need more profound inspection procedures. This study evaluates the usability of the FCI provided by pig producers and considers the possibility of risk ranking incoming slaughter batches according to previous meat inspection data and the current FCI. Eighty-five slaughter batches comprising 8954 fattening pigs were randomly selected at a slaughterhouse that receives animals from across Finland. The mortality rate, the FCI and the meat inspection results for each batch were obtained. The current FCI alone provided insufficient and inaccurate information for risk ranking purposes for meat inspection. The partial condemnation rate for a batch was best predicted by the partial condemnation rate calculated for all the pigs sent for slaughter from the same holding in the previous year (p<0.001) and by prior information on cough declared in the current FCI statement (p=0.02). Training and information for producers are needed to make the FCI reporting procedures more accurate. Historical meat inspection data on pigs slaughtered from the same holdings, and well-chosen symptoms/signs for reporting, should be included in the FCI to facilitate the allocation of pigs to visual inspection. The introduced simple scoring system can easily be used as additional information for directing batches to appropriate meat inspection procedures. To control the main biological public health hazards related to pork, serological surveillance should be done and the information obtained from analyses should be used as part of the FCI. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Deep visual-semantic for crowded video understanding

    NASA Astrophysics Data System (ADS)

    Deng, Chunhua; Zhang, Junwen

    2018-03-01

    Visual-semantic features play a vital role in crowded video understanding. Convolutional Neural Networks (CNNs) have enabled a significant breakthrough in learning representations from images. However, the learning of visual-semantic features, and how they can be effectively extracted for video analysis, still remains a challenging task. In this study, we propose a novel visual-semantic method to capture both appearance and dynamic representations. In particular, we propose a spatial context method, based on fractional Fisher vector (FV) encoding of CNN features, which can be regarded as our main contribution. In addition, to capture temporal context information, we also apply the fractional encoding method to dynamic images. Experimental results on the WWW crowded video dataset demonstrate that the proposed method outperforms the state of the art.

  3. Using Tests Designed to Measure Individual Sensorimotor Subsystem Performance to Predict Locomotor Adaptability

    NASA Technical Reports Server (NTRS)

    Peters, B. T.; Caldwell, E. E.; Batson, C. D.; Guined, J. R.; DeDios, Y. E.; Stepanyan, V.; Gadd, N. E.; Szecsy, D. L.; Mulavara, A. P.; Seidler, R. D.

    2014-01-01

    Astronauts experience sensorimotor disturbances during the initial exposure to microgravity and during the readaptation phase following a return to a gravitational environment. These alterations may lead to disruption in the ability to perform mission-critical functions during and after these gravitational transitions. Astronauts show significant inter-subject variation in adaptive capability following gravitational transitions. The way each individual's brain synthesizes the available visual, vestibular and somatosensory information is likely the basis for much of the variation. Identifying the presence of biases in each person's use of information available from these sensorimotor subsystems and relating it to their ability to adapt to a novel locomotor task will allow us to customize a training program designed to enhance sensorimotor adaptability. Eight tests are being used to measure sensorimotor subsystem performance. Three of these use measures of body sway to characterize balance during varying sensorimotor challenges. The effect of vision is assessed by repeating conditions with eyes open and eyes closed. Standing on foam, or on a support surface that pitches to maintain a constant ankle angle, provides somatosensory challenges. Information from the vestibular system is isolated when vision is removed and the support surface is compromised, and it is challenged when the tasks are done while the head is in motion. The integration and dominance of visual information is assessed in three additional tests. The Rod & Frame Test measures the degree to which a subject's perception of the visual vertical is affected by the orientation of a tilted frame in the periphery. Locomotor visual dependence is determined by assessing how much an oscillating virtual visual world affects a treadmill-walking subject. In the third of the visual manipulation tests, subjects walk an obstacle course while wearing up-down reversing prisms. The two remaining tests include direct measures of knee and ankle proprioception and a functional movement assessment that screens for movement restrictions and asymmetries. To assess locomotor adaptability, subjects walk for twenty minutes on a treadmill that oscillates laterally at 0.3 Hz. Throughout the test, metabolic cost provides a measure of exertion and step frequency provides a measure of stability. Additionally, at four points during the perturbation period, reaction time tests are used to probe changes in the amount of mental effort being used to perform the task. As with the adaptive capability observed in astronauts during gravitational transitions, our data show significant variability between subjects. To aid in the analysis of the results, custom software tools have been developed to enhance the visualization of the large number of output variables. Preliminary analyses of the data collected to date do not show a strong relationship between adaptability and any single predictor variable. Analysis continues to identify a multifactorial predictor outcome "signature" that informs us of locomotor adaptability.

  4. Information efficiency in visual communication

    NASA Astrophysics Data System (ADS)

    Alter-Gartenberg, Rachel; Rahman, Zia-ur

    1993-08-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.
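
    The entropy reduction that bit-allocation trades against visual quality can be illustrated with a minimal sketch (hypothetical toy signal and uniform quantizer, not the authors' channel model): coarser quantization lowers the entropy of the encoded signal, and hence the bits needed to transmit it.

    ```python
    import math
    from collections import Counter

    def shannon_entropy(symbols):
        """Shannon entropy (bits/symbol) of a discrete symbol sequence."""
        counts = Counter(symbols)
        n = len(symbols)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def quantize(signal, step):
        """Uniform quantizer: map each sample to the nearest multiple of `step`."""
        return [round(x / step) for x in signal]

    # A toy "acquired signal": a slowly varying ramp plus small detail.
    signal = [0.1 * i + 0.05 * ((-1) ** i) for i in range(256)]

    # A coarser step collapses more samples onto the same code word,
    # reducing the entropy of the encoded signal (fewer bits to send)
    # at the cost of reconstruction fidelity.
    fine = shannon_entropy(quantize(signal, 0.05))
    coarse = shannon_entropy(quantize(signal, 0.5))
    assert coarse < fine
    ```

    Frequency-dependent quantization, as evaluated in the paper, applies this trade-off per frequency band rather than with a single global step size.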

  5. Information efficiency in visual communication

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Rahman, Zia-Ur

    1993-01-01

    This paper evaluates the quantization process in the context of the end-to-end performance of the visual-communication channel. Results show that the trade-off between data transmission and visual quality revolves around the information in the acquired signal, not around its energy. Improved information efficiency is gained by frequency dependent quantization that maintains the information capacity of the channel and reduces the entropy of the encoded signal. Restorations with energy bit-allocation lose both in sharpness and clarity relative to restorations with information bit-allocation. Thus, quantization with information bit-allocation is preferred for high information efficiency and visual quality in optimized visual communication.

  6. 3D Virtual Environment Used to Support Lighting System Management in a Building

    NASA Astrophysics Data System (ADS)

    Sampaio, A. Z.; Ferreira, M. M.; Rosário, D. P.

    The main aim of the research project, which is in progress at the UTL, is to develop a virtual interactive model as a tool to support decision-making in the planning of construction maintenance and facilities management. The virtual model gives the capacity to allow the user to transmit, visually and interactively, information related to the components of a building, defined as a function of the time variable. In addition, the analysis of solutions for repair work/substitution and inherent cost are predicted, the results being obtained interactively and visualized in the virtual environment itself. The first component of the virtual prototype concerns the management of lamps in a lighting system. It was applied in a study case. The interactive application allows the examination of the physical model, visualizing, for each element modeled in 3D and linked to a database, the corresponding technical information concerned with the use of the material, calculated for different points in time during their life. The control of a lamp stock, the constant updating of lifetime information and the planning of periodical local inspections are attended on the prototype. This is an important mean of cooperation between collaborators involved in the building management.

  7. Analysis of impact of geographic characteristics on suicide rate and visualization of result with Geographic Information System.

    PubMed

    Oka, Mayumi; Kubota, Takafumi; Tsubaki, Hiroe; Yamauchi, Keita

    2015-06-01

    The aim of our study was to understand the geographic characteristics of Japanese communities and the impact of these characteristics on suicide rates. We calculated the standardized mortality ratio from suicide statistics of 3318 municipalities from 1972 to 2002. Correlation analysis, multi-regression analysis and a generalized additive model were used to find the relation between topographic and climatic variables and suicide rate. We visualized the relation between geographic characteristics and suicide rate on the map of Wakayama Prefecture, using the Geographic Information System. Our study showed that the geographic characteristics of each community are related to its suicide rate. The strongest geographic factor increasing the suicide rate was the slope of the habitable land. It is necessary to take the characteristics of each community into consideration when working out measures for suicide prevention. Visualization of the findings on the local map should be helpful in promoting understanding of the problems and in sharing the information among the various parties in charge of suicide prevention. © 2014 The Authors. Psychiatry and Clinical Neurosciences © 2014 Japanese Society of Psychiatry and Neurology.
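
    The standardized mortality ratio used in this study follows the usual indirect-standardization form: observed deaths divided by the deaths expected if the community experienced a reference population's age-specific rates. A minimal sketch with hypothetical person-years and rates (not the study's data):

    ```python
    def smr(observed_deaths, person_years_by_age, reference_rates_by_age):
        """Standardized mortality ratio via indirect standardization."""
        expected = sum(py * rate
                       for py, rate in zip(person_years_by_age, reference_rates_by_age))
        return observed_deaths / expected

    # Hypothetical municipality: person-years in three age bands and
    # national age-specific suicide rates (deaths per person-year).
    person_years = [50_000, 40_000, 10_000]
    national_rates = [0.0001, 0.0003, 0.0005]
    # expected = 5 + 12 + 5 = 22 deaths
    ratio = smr(33, person_years, national_rates)
    # SMR > 1: more deaths observed than the age structure alone predicts.
    assert abs(ratio - 1.5) < 1e-9
    ```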

  8. 32 CFR 811.3 - Official requests for visual information productions or materials.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... THE AIR FORCE SALES AND SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.3 Official requests for visual information productions or materials. (a) Send official Air Force... 32 National Defense 6 2010-07-01 2010-07-01 false Official requests for visual information...

  9. 32 CFR 811.4 - Selling visual information materials.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... SERVICES RELEASE, DISSEMINATION, AND SALE OF VISUAL INFORMATION MATERIALS § 811.4 Selling visual information materials. (a) Air Force VI activities cannot sell materials. (b) HQ AFCIC/ITSM may approve the... 32 National Defense 6 2010-07-01 2010-07-01 false Selling visual information materials. 811.4...

  10. Skill dependent audiovisual integration in the fusiform induces repetition suppression.

    PubMed

    McNorgan, Chris; Booth, James R

    2015-02-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    PubMed Central

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  12. Color Processing in the Early Visual System of Drosophila.

    PubMed

    Schnaitmann, Christopher; Haikala, Väinö; Abraham, Eva; Oberhauser, Vitus; Thestrup, Thomas; Griesbeck, Oliver; Reiff, Dierk F

    2018-01-11

    Color vision extracts spectral information by comparing signals from photoreceptors with different visual pigments. Such comparisons are encoded by color-opponent neurons that are excited at one wavelength and inhibited at another. Here, we examine the circuit implementation of color-opponent processing in the Drosophila visual system by combining two-photon calcium imaging with genetic dissection of visual circuits. We report that color-opponent processing of UVshort/blue and UVlong/green is already implemented in R7/R8 inner photoreceptor terminals of "pale" and "yellow" ommatidia, respectively. R7 and R8 photoreceptors of the same type of ommatidia mutually inhibit each other directly via HisCl1 histamine receptors and receive additional feedback inhibition that requires the second histamine receptor Ort. Color-opponent processing at the first visual synapse represents an unexpected commonality between Drosophila and vertebrates; however, the differences in the molecular and cellular implementation suggest that the same principles evolved independently. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Orientation-Selective Retinal Circuits in Vertebrates

    PubMed Central

    Antinucci, Paride; Hindges, Robert

    2018-01-01

    Visual information is already processed in the retina before it is transmitted to higher visual centers in the brain. This includes the extraction of salient features from visual scenes, such as motion directionality or contrast, through neurons belonging to distinct neural circuits. Some retinal neurons are tuned to the orientation of elongated visual stimuli. Such ‘orientation-selective’ neurons are present in the retinae of most, if not all, vertebrate species analyzed to date, with species-specific differences in frequency and degree of tuning. In some cases, orientation-selective neurons have very stereotyped functional and morphological properties suggesting that they represent distinct cell types. In this review, we describe the retinal cell types underlying orientation selectivity found in various vertebrate species, and highlight their commonalities and differences. In addition, we discuss recent studies that revealed the cellular, synaptic and circuit mechanisms at the basis of retinal orientation selectivity. Finally, we outline the significance of these findings in shaping our current understanding of how this fundamental neural computation is implemented in the visual systems of vertebrates. PMID:29467629

  14. Orientation-Selective Retinal Circuits in Vertebrates.

    PubMed

    Antinucci, Paride; Hindges, Robert

    2018-01-01

    Visual information is already processed in the retina before it is transmitted to higher visual centers in the brain. This includes the extraction of salient features from visual scenes, such as motion directionality or contrast, through neurons belonging to distinct neural circuits. Some retinal neurons are tuned to the orientation of elongated visual stimuli. Such 'orientation-selective' neurons are present in the retinae of most, if not all, vertebrate species analyzed to date, with species-specific differences in frequency and degree of tuning. In some cases, orientation-selective neurons have very stereotyped functional and morphological properties suggesting that they represent distinct cell types. In this review, we describe the retinal cell types underlying orientation selectivity found in various vertebrate species, and highlight their commonalities and differences. In addition, we discuss recent studies that revealed the cellular, synaptic and circuit mechanisms at the basis of retinal orientation selectivity. Finally, we outline the significance of these findings in shaping our current understanding of how this fundamental neural computation is implemented in the visual systems of vertebrates.

  15. Classification of cognitive systems dedicated to data sharing

    NASA Astrophysics Data System (ADS)

    Ogiela, Lidia; Ogiela, Marek R.

    2017-08-01

    This paper presents a classification of new cognitive information systems dedicated to cryptographic data splitting and sharing processes. Cognitive processes of semantic data analysis and interpretation will be used to describe new classes of intelligent information and vision systems. In addition, cryptographic data splitting algorithms and cryptographic threshold schemes will be used to improve processes of secure and efficient information management with the application of such cognitive systems. The utility of the proposed cognitive sharing procedures and distributed data sharing algorithms will also be presented, and a few possible applications of cognitive approaches to visual information management and encryption will be described.

  16. The representation of visual depth perception based on the plenoptic function in the retina and its neural computation in visual cortex V1.

    PubMed

    Songnian, Zhao; Qi, Zou; Chang, Liu; Xuemin, Liu; Shousi, Sun; Jun, Qiu

    2014-04-23

    How it is possible to "faithfully" represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. The results of this study show that the use of plenoptic (or all-optical) functions and their dual plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway's optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. 1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. 
We propose that size constancy, the vanishing point, and vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line.
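
    For reference, the plenoptic function in its conventional seven-dimensional form, and the two-plane (dual-plane) parameterization the abstract refers to, can be written as follows (standard light-field notation; not necessarily identical to the paper's Pw/Pv symbols):

    ```latex
    % Radiance observed at viewpoint (x, y, z), in direction
    % (\theta, \phi), at wavelength \lambda and time t:
    \[ P = P(x, y, z, \theta, \phi, \lambda, t) \]

    % Two-plane parameterization: for a static, monochromatic scene,
    % each ray is indexed by its intersections (u, v) and (s, t) with
    % two parallel planes, reducing the function to four dimensions:
    \[ L = L(u, v, s, t) \]
    ```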

  17. The representation of visual depth perception based on the plenoptic function in the retina and its neural computation in visual cortex V1

    PubMed Central

    2014-01-01

    Background How it is possible to “faithfully” represent a three-dimensional stereoscopic scene using Cartesian coordinates on a plane, and how three-dimensional perceptions differ between an actual scene and an image of the same scene are questions that have not yet been explored in depth. They seem like commonplace phenomena, but in fact, they are important and difficult issues for visual information processing, neural computation, physics, psychology, cognitive psychology, and neuroscience. Results The results of this study show that the use of plenoptic (or all-optical) functions and their dual plane parameterizations can not only explain the nature of information processing from the retina to the primary visual cortex and, in particular, the characteristics of the visual pathway’s optical system and its affine transformation, but they can also clarify the reason why the vanishing point and line exist in a visual image. In addition, they can better explain the reasons why a three-dimensional Cartesian coordinate system can be introduced into the two-dimensional plane to express a real three-dimensional scene. Conclusions 1. We introduce two different mathematical expressions of the plenoptic functions, Pw and Pv, that can describe the objective world. We also analyze the differences between these two functions when describing visual depth perception, that is, the difference between how these two functions obtain the depth information of an external scene. 2. The main results include a basic method for introducing a three-dimensional Cartesian coordinate system into a two-dimensional plane to express the depth of a scene, its constraints, and algorithmic implementation. In particular, we include a method to separate the plenoptic function and proceed with the corresponding transformation in the retina and visual cortex. 3. 
We propose that size constancy, the vanishing point, and vanishing line form the basis of visual perception of the outside world, and that the introduction of a three-dimensional Cartesian coordinate system into a two dimensional plane reveals a corresponding mapping between a retinal image and the vanishing point and line. PMID:24755246

  18. Attention enhances multi-voxel representation of novel objects in frontal, parietal and visual cortices.

    PubMed

    Woolgar, Alexandra; Williams, Mark A; Rich, Anina N

    2015-04-01

    Selective attention is fundamental for human activity, but the details of its neural implementation remain elusive. One influential theory, the adaptive coding hypothesis (Duncan, 2001, An adaptive coding model of neural function in prefrontal cortex, Nature Reviews Neuroscience 2:820-829), proposes that single neurons in certain frontal and parietal regions dynamically adjust their responses to selectively encode relevant information. This selective representation may in turn support selective processing in more specialized brain regions such as the visual cortices. Here, we use multi-voxel decoding of functional magnetic resonance images to demonstrate selective representation of attended--and not distractor--objects in frontal, parietal, and visual cortices. In addition, we highlight a critical role for task demands in determining which brain regions exhibit selective coding. Strikingly, representation of attended objects in frontoparietal cortex was highest under conditions of high perceptual demand, when stimuli were hard to perceive and coding in early visual cortex was weak. Coding in early visual cortex varied as a function of attention and perceptual demand, while coding in higher visual areas was sensitive to the allocation of attention but robust to changes in perceptual difficulty. Consistent with high-profile reports, peripherally presented objects could also be decoded from activity at the occipital pole, a region which corresponds to the fovea. Our results emphasize the flexibility of frontoparietal and visual systems. They support the hypothesis that attention enhances the multi-voxel representation of information in the brain, and suggest that the engagement of this attentional mechanism depends critically on current task demands. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Educating the blind brain: a panorama of neural bases of vision and of training programs in organic neurovisual deficits

    PubMed Central

    Coubard, Olivier A.; Urbanski, Marika; Bourlon, Clémence; Gaumet, Marie

    2014-01-01

    Vision is a complex function, which is achieved by movements of the eyes to properly foveate targets at any location in 3D space and to continuously refresh neural information in the different visual pathways. The visual system involves five main routes originating in the retinas but varying in their destination within the brain: the occipital cortex, but also the superior colliculus (SC), the pretectum, the supra-chiasmatic nucleus, the nucleus of the optic tract and terminal dorsal, medial and lateral nuclei. Visual pathway architecture obeys systematization in sagittal and transversal planes so that visual information from left/right and upper/lower hemi-retinas, corresponding respectively to right/left and lower/upper visual fields, is processed ipsilaterally and ipsialtitudinally to hemi-retinas in left/right hemispheres and upper/lower fibers. Organic neurovisual deficits may occur at any level of this circuitry from the optic nerve to subcortical and cortical destinations, resulting in low or high-level visual deficits. In this didactic review article, we provide a panorama of the neural bases of eye movements and visual systems, and of related neurovisual deficits. Additionally, we briefly review the different schools of rehabilitation of organic neurovisual deficits, and show that whatever the emphasis is put on action or perception, benefits may be observed at both motor and perceptual levels. Given the extent of its neural bases in the brain, vision in its motor and perceptual aspects is also a useful tool to assess and modulate central nervous system (CNS) in general. PMID:25538575

  20. VisGets: coordinated visualizations for web-based information exploration and discovery.

    PubMed

    Dörk, Marian; Carpendale, Sheelagh; Collins, Christopher; Williamson, Carey

    2008-01-01

    In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets--interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.

  1. Visual Analytics of Surveillance Data on Foodborne Vibriosis, United States, 1973–2010

    PubMed Central

    Sims, Jennifer N.; Isokpehi, Raphael D.; Cooper, Gabrielle A.; Bass, Michael P.; Brown, Shyretha D.; St John, Alison L.; Gulig, Paul A.; Cohly, Hari H.P.

    2011-01-01

    Foodborne illnesses caused by microbial and chemical contaminants in food are a substantial health burden worldwide. In 2007, human vibriosis (non-cholera Vibrio infections) became a notifiable disease in the United States. In addition, Vibrio species are among the 31 major known pathogens transmitted through food in the United States. Diverse surveillance systems for foodborne pathogens also track outbreaks, illnesses, hospitalization and deaths due to non-cholera vibrios. Considering the recognition of vibriosis as a notifiable disease in the United States and the availability of diverse surveillance systems, there is a need for the development of easily deployed visualization and analysis approaches that can combine diverse data sources in an interactive manner. Current efforts to address this need are still limited. Visual analytics is an iterative process conducted via visual interfaces that involves collecting information, data preprocessing, knowledge representation, interaction, and decision making. We have utilized public domain outbreak and surveillance data sources covering 1973 to 2010, as well as visual analytics software to demonstrate integrated and interactive visualizations of data on foodborne outbreaks and surveillance of Vibrio species. Through the data visualization, we were able to identify unique patterns and/or novel relationships within and across datasets regarding (i) causative agent; (ii) foodborne outbreaks and illness per state; (iii) location of infection; (iv) vehicle (food) of infection; (v) anatomical site of isolation of Vibrio species; (vi) patients and complications of vibriosis; (vii) incidence of laboratory-confirmed vibriosis and V. parahaemolyticus outbreaks. The additional use of emerging visual analytics approaches for interaction with data on vibriosis, including non-foodborne related disease, can guide disease control and prevention as well as ongoing outbreak investigations. PMID:22174586

  2. Cognitive workload modulation through degraded visual stimuli: a single-trial EEG study

    NASA Astrophysics Data System (ADS)

    Yu, K.; Prasad, I.; Mir, H.; Thakor, N.; Al-Nashash, H.

    2015-08-01

    Objective. Our experiments explored the effect of visual stimuli degradation on cognitive workload. Approach. We investigated the subjective assessment, event-related potentials (ERPs) as well as electroencephalogram (EEG) as measures of cognitive workload. Main results. These experiments confirm that degradation of visual stimuli increases cognitive workload as assessed by subjective NASA task load index and confirmed by the observed P300 amplitude attenuation. Furthermore, the single-trial multi-level classification using features extracted from ERPs and EEG is found to be promising. Specifically, the adopted single-trial oscillatory EEG/ERP detection method achieved an average accuracy of 85% for discriminating 4 workload levels. Additionally, we found from the spatial patterns obtained from EEG signals that the frontal parts carry information that can be used for differentiating workload levels. Significance. Our results show that visual stimuli can modulate cognitive workload, and the modulation can be measured by the single trial EEG/ERP detection method.

  3. Chylothorax diagnosis: can the clinical chemistry laboratory do more?

    PubMed

    Gibbons, Stephen M; Ahmed, Farhan

    2015-01-01

    Chylothorax is a rare anatomical disruption of the thoracic duct associated with a significant degree of morbidity and mortality. Diagnosis usually relies upon lipid analysis and visual inspection of the pleural fluid. However, this may be subject to incorrect interpretation. The aim of this study was to compare pleural fluid lipid analysis and visual inspection against lipoprotein electrophoresis. Nine pleural effusion samples suspected of being chylothorax were analysed. A combination of fluid lipid analysis and visual inspection was compared with lipoprotein electrophoresis for the detection of chylothorax. There was 89% concordance between the two methods. Using lipoprotein electrophoresis as gold standard, calculated sensitivity, specificity, negative predictive value and positive predictive value for lipid analysis/visual inspection were 83%, 100%, 100% and 75%, respectively. Examination of pleural effusion samples by lipoprotein electrophoresis may provide important additional information in the diagnosis of chylothorax. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
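
    The reported diagnostic metrics follow the standard 2×2 confusion-matrix definitions against a gold standard. A minimal sketch, using hypothetical counts chosen to be consistent with the reported sensitivity and specificity (the abstract does not give the raw counts):

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from a 2x2 confusion
        matrix of an index test versus a gold standard."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Hypothetical: 5 true positives, 1 false negative, 3 true negatives,
    # 0 false positives against lipoprotein electrophoresis.
    m = diagnostic_metrics(tp=5, fp=0, fn=1, tn=3)
    assert round(m["sensitivity"], 2) == 0.83   # 5/6
    assert m["specificity"] == 1.0              # 3/3
    ```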

  4. Immunological multimetal deposition for rapid visualization of sweat fingerprints.

    PubMed

    He, Yayun; Xu, Linru; Zhu, Yu; Wei, Qianhui; Zhang, Meiqin; Su, Bin

    2014-11-10

    A simple method termed immunological multimetal deposition (iMMD) was developed for rapid visualization of sweat fingerprints with bare eyes, by combining the conventional MMD with the immunoassay technique. In this approach, antibody-conjugated gold nanoparticles (AuNPs) were used to specifically interact with the corresponding antigens in the fingerprint residue. The AuNPs serve as the nucleation sites for autometallographic deposition of silver particles from the silver staining solution, generating a dark ridge pattern for visual detection. Using fingerprints inked with human immunoglobulin G (hIgG), we obtained the optimal formulation of iMMD, which was then successfully applied to visualize sweat fingerprints through the detection of two secreted polypeptides, epidermal growth factor and lysozyme. In comparison with conventional MMD, iMMD is faster and provides additional information beyond mere identification. Moreover, iMMD is facile and does not need expensive instruments. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Managing Spatial Selections With Contextual Snapshots

    PubMed Central

    Mindek, P; Gröller, M E; Bruckner, S

    2014-01-01

    Spatial selections are a ubiquitous concept in visualization. By localizing particular features, they can be analysed and compared in different views. However, the semantics of such selections often depend on specific parameter settings and it can be difficult to reconstruct them without additional information. In this paper, we present the concept of contextual snapshots as an effective means for managing spatial selections in visualized data. The selections are automatically associated with the context in which they have been created. Contextual snapshots can also be used as the basis for interactive integrated and linked views, which enable in-place investigation and comparison of multiple visual representations of data. Our approach is implemented as a flexible toolkit with well-defined interfaces for integration into existing systems. We demonstrate the power and generality of our techniques by applying them to several distinct scenarios such as the visualization of simulation data, the analysis of historical documents and the display of anatomical data. PMID:25821284

  6. CircosVCF: circos visualization of whole-genome sequence variations stored in VCF files.

    PubMed

    Drori, E; Levy, D; Smirin-Yosef, P; Rahimi, O; Salmon-Divon, M

    2017-05-01

    Visualization of whole-genomic variations in a meaningful manner assists researchers in gaining new insights into the underlying data, especially when it comes in the context of whole genome comparisons. CircosVCF is a web based visualization tool for genome-wide variant data described in VCF files, using circos plots. The user friendly interface of CircosVCF supports an interactive design of the circles in the plot, and the integration of additional information such as experimental data or annotations. The provided visualization capabilities give a broad overview of the genomic relationships between genomes, and allow identification of specific meaningful SNPs regions. CircosVCF was implemented in JavaScript and is available at http://www.ariel.ac.il/research/fbl/software. malisa@ariel.ac.il. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
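
    Each data line of a VCF file carries tab-separated fixed fields (CHROM, POS, ID, REF, ALT, QUAL, FILTER, INFO). A minimal parsing sketch, for illustration only (unrelated to CircosVCF's implementation; real files warrant a full parser such as pysam):

    ```python
    def parse_vcf_line(line):
        """Parse the fixed fields of one VCF data line into a dict."""
        chrom, pos, vid, ref, alt, qual, flt, info = line.rstrip("\n").split("\t")[:8]
        return {
            "chrom": chrom,
            "pos": int(pos),
            "id": vid,
            "ref": ref,
            "alt": alt.split(","),  # multiple ALT alleles are comma-separated
            "qual": None if qual == "." else float(qual),
            "filter": flt,
            # INFO is semicolon-separated key=value pairs; bare keys are flags.
            "info": dict(kv.split("=", 1) if "=" in kv else (kv, True)
                         for kv in info.split(";")),
        }

    record = parse_vcf_line("chr1\t12345\trs99\tA\tG,T\t50\tPASS\tDP=100;DB")
    assert record["pos"] == 12345
    assert record["alt"] == ["G", "T"]
    assert record["info"]["DP"] == "100"
    ```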

  7. Does silent reading speed in normal adult readers depend on early visual processes? evidence from event-related brain potentials.

    PubMed

    Korinth, Sebastian Peter; Sommer, Werner; Breznitz, Zvia

    2012-01-01

    Little is known about the relationship between reading speed and early visual processes in normal readers. Here we examined the associations of the early P1 and N170 components and the late N1 component in visual event-related potentials (ERPs) with silent reading speed and a number of additional cognitive skills in a sample of 52 adult German readers, utilizing a Lexical Decision Task (LDT) and a Face Decision Task (FDT). Amplitudes of the N170 component in the LDT but, interestingly, also in the FDT correlated with behavioral tests measuring silent reading speed. We suggest that reading speed performance can be at least partially accounted for by the extraction of essential structural information from visual stimuli, consisting of a domain-general and a domain-specific, expertise-based portion. © 2011 Elsevier Inc. All rights reserved.

  8. CyBy(2): a structure-based data management tool for chemical and biological data.

    PubMed

    Höck, Stefan; Riedl, Rainer

    2012-01-01

    We report the development of a powerful data management tool for chemical and biological data: CyBy(2). CyBy(2) is a structure-based information management tool used to store and visualize structural data alongside additional information such as project assignment, physical information, spectroscopic data, biological activity, functional data and synthetic procedures. The application consists of a database, an application server, used to query and update the database, and a client application with a rich graphical user interface (GUI) used to interact with the server.

  9. Constructing and Reading Visual Information: Visual Literacy for Library and Information Science Education

    ERIC Educational Resources Information Center

    Ma, Yan

    2015-01-01

    This article examines visual literacy education and research for the library and information science profession, to educate information professionals who will be able to execute and implement the ACRL (Association of College and Research Libraries) Visual Literacy Competency Standards successfully. It is a continuing call for inclusion of visual…

  10. iPixel: a visual content-based and semantic search engine for retrieving digitized mammograms by using collective intelligence.

    PubMed

    Alor-Hernández, Giner; Pérez-Gallardo, Yuliana; Posada-Gómez, Rubén; Cortes-Robles, Guillermo; Rodríguez-González, Alejandro; Aguilar-Laserre, Alberto A

    2012-09-01

    Nowadays, traditional search engines such as Google, Yahoo and Bing facilitate the retrieval of information in the format of images, but the results are not always useful for the users. This is mainly due to two problems: (1) the semantic keywords are not taken into consideration and (2) it is not always possible to establish a query using the image features. This issue has been covered in different domains in order to develop content-based image retrieval (CBIR) systems. The expert community has focussed their attention on the healthcare domain, where a lot of visual information for medical analysis is available. This paper provides a solution called iPixel Visual Search Engine, which involves semantics and content issues in order to search for digitized mammograms. iPixel offers the possibility of retrieving mammogram features using collective intelligence and implementing a CBIR algorithm. Our proposal compares not only features with similar semantic meaning, but also visual features. In this sense, the comparisons are made in different ways: by the number of regions per image, by maximum and minimum size of regions per image and by average intensity level of each region. iPixel Visual Search Engine supports the medical community in differential diagnoses related to the diseases of the breast. The iPixel Visual Search Engine has been validated by experts in the healthcare domain, such as radiologists, in addition to experts in digital image analysis.
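
    The region-level comparison described above (number of regions, minimum and maximum region size, and average intensity per region) might be sketched as follows. The feature summary and the similarity measure are hypothetical illustrations, not iPixel's actual code.

```python
# Summarize a segmented image by the region statistics the abstract names.
def region_features(region_sizes, region_intensities):
    return {
        "n_regions": len(region_sizes),
        "min_size": min(region_sizes),
        "max_size": max(region_sizes),
        "mean_intensity": sum(region_intensities) / len(region_intensities),
    }

def similarity(a, b):
    """Invented score: inverse of a normalized absolute difference
    summed over the shared features; 1.0 means identical summaries."""
    diff = sum(abs(a[k] - b[k]) / (abs(a[k]) + abs(b[k]) + 1e-9) for k in a)
    return 1.0 / (1.0 + diff)

# Two hypothetical mammograms, each with three segmented regions.
img1 = region_features([12, 30, 45], [0.4, 0.6, 0.5])
img2 = region_features([10, 28, 50], [0.45, 0.55, 0.5])
score = similarity(img1, img2)
```

    In a CBIR pipeline, such a score would rank stored mammograms against the query image, with the semantic keyword match applied alongside it.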

  11. The Relationship between Visual Attention and Visual Working Memory Encoding: A Dissociation between Covert and Overt Orienting

    PubMed Central

    Tas, A. Caglar; Luck, Steven J.; Hollingworth, Andrew

    2016-01-01

    There is substantial debate over whether visual working memory (VWM) and visual attention constitute a single system for the selection of task-relevant perceptual information or whether they are distinct systems that can be dissociated when their representational demands diverge. In the present study, we focused on the relationship between visual attention and the encoding of objects into visual working memory (VWM). Participants performed a color change-detection task. During the retention interval, a secondary object, irrelevant to the memory task, was presented. Participants were instructed either to execute an overt shift of gaze to this object (Experiments 1–3) or to attend it covertly (Experiments 4 and 5). Our goal was to determine whether these overt and covert shifts of attention disrupted the information held in VWM. We hypothesized that saccades, which typically introduce a memorial demand to bridge perceptual disruption, would lead to automatic encoding of the secondary object. However, purely covert shifts of attention, which introduce no such demand, would not result in automatic memory encoding. The results supported these predictions. Saccades to the secondary object produced substantial interference with VWM performance, but covert shifts of attention to this object produced no interference with VWM performance. These results challenge prevailing theories that consider attention and VWM to reflect a common mechanism. In addition, they indicate that the relationship between attention and VWM is dependent on the memorial demands of the orienting behavior. PMID:26854532

  12. Computational model for perception of objects and motions.

    PubMed

    Yang, WenLu; Zhang, LiQing; Ma, LiBo

    2008-06-01

    Perception of objects and motions in the visual scene is one of the basic problems for the visual system. 'What' and 'Where' pathways exist in the superior visual cortex, starting from the simple cells in the primary visual cortex. The former perceives object properties such as form, color, and texture, and the latter perceives 'where', for example, the velocity and direction of spatial movement of objects. This paper explores brain-like computational architectures for visual information processing. We propose a visual perceptual model and a computational mechanism for training it. The computational model is a three-layer network. The first layer is the input layer, which receives stimuli from natural environments. The second layer represents the internal neural information. The connections between the first and second layers, called the receptive fields of neurons, are self-adaptively learned based on the principle of sparse neural representation. To this end, we introduce the Kullback-Leibler divergence as a measure of independence between neural responses and derive the learning algorithm by minimizing the cost function. The proposed algorithm is applied to train the basis functions, namely receptive fields, which are localized, oriented, and band-pass. The resulting receptive fields of neurons in the second layer resemble those of simple cells in the primary visual cortex. Based on these basis functions, we further construct the third layer for perception of what and where in the superior visual cortex. The proposed model is able to perceive objects and their motions with high accuracy and strong robustness against additive noise. Computer simulation results in the final section show the feasibility of the proposed perceptual model and the high efficiency of the learning algorithm.
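
    The second-layer learning scheme can be illustrated with a generic sparse-coding update: a plain reconstruction-plus-L1-sparsity sketch rather than the paper's exact Kullback-Leibler-based algorithm, with all sizes and rates invented.

```python
import numpy as np

# X: input stimuli (columns are patches), W: basis functions (receptive
# fields), S: sparse neural responses. Alternating gradient steps minimize
# 0.5*||X - W S||^2 + lam*||S||_1, a standard sparse-coding objective.
rng = np.random.default_rng(0)
n_inputs, n_neurons, n_samples = 16, 8, 100
X = rng.standard_normal((n_inputs, n_samples))
W = rng.standard_normal((n_inputs, n_neurons)) * 0.1
S = rng.standard_normal((n_neurons, n_samples)) * 0.1

lam, lr = 0.1, 0.01
for _ in range(50):
    R = X - W @ S                              # reconstruction residual
    S += lr * (W.T @ R - lam * np.sign(S))     # sparse response update
    W += lr * (R @ S.T)                        # Hebbian-like basis update
    W /= np.linalg.norm(W, axis=0, keepdims=True)  # keep bases unit norm

cost = 0.5 * np.sum((X - W @ S) ** 2) + lam * np.sum(np.abs(S))
```

    Trained on whitened natural-image patches instead of random data, updates of this family yield the localized, oriented, band-pass bases the abstract describes.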

  13. Basic abnormalities in visual processing affect face processing at an early age in autism spectrum disorder.

    PubMed

    Vlamings, Petra Hendrika Johanna Maria; Jonkman, Lisa Marthe; van Daalen, Emma; van der Gaag, Rutger Jan; Kemner, Chantal

    2010-12-15

    A detailed visual processing style has been noted in autism spectrum disorder (ASD); this contributes to problems in face processing and has been directly related to abnormal processing of spatial frequencies (SFs). Little is known about the early development of face processing in ASD and the relation with abnormal SF processing. We investigated whether young ASD children show abnormalities in low spatial frequency (LSF, global) and high spatial frequency (HSF, detailed) processing and explored whether these are crucially involved in the early development of face processing. Three- to 4-year-old children with ASD (n = 22) were compared with developmentally delayed children without ASD (n = 17). Spatial frequency processing was studied by recording visual evoked potentials from visual brain areas while children passively viewed gratings (HSF/LSF). In addition, children watched face stimuli with different expressions, filtered to include only HSF or LSF. Enhanced activity in visual brain areas was found in response to HSF versus LSF information in children with ASD, in contrast to control subjects. Furthermore, facial-expression processing was also primarily driven by detail in ASD. Enhanced visual processing of detailed (HSF) information is present early in ASD and occurs for neutral (gratings), as well as for socially relevant stimuli (facial expressions). These data indicate that there is a general abnormality in visual SF processing in early ASD and are in agreement with suggestions that a fast LSF subcortical face processing route might be affected in ASD. This could suggest that abnormal visual processing is causative in the development of social problems in ASD. Copyright © 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  14. Motion perception: behavior and neural substrate.

    PubMed

    Mather, George

    2011-05-01

    Visual motion perception is vital for survival. Single-unit recordings in primate primary visual cortex (V1) have revealed the existence of specialized motion sensing neurons; perceptual effects such as the motion after-effect demonstrate their importance for motion perception. Human psychophysical data on motion detection can be explained by a computational model of cortical motion sensors. Both psychophysical and physiological data reveal at least two classes of motion sensor capable of sensing motion in luminance-defined and texture-defined patterns, respectively. Psychophysical experiments also reveal that motion can be seen independently of motion sensor output, based on attentive tracking of visual features. Sensor outputs are inherently ambiguous, due to the problem of univariance in neural responses. In order to compute stimulus direction and speed, the visual system must compare the responses of many different sensors sensitive to different directions and speeds. Physiological data show that this computation occurs in the visual middle temporal (MT) area. Recent psychophysical studies indicate that information about spatial form may also play a role in motion computations. Adaptation studies show that the human visual system is selectively sensitive to large-scale optic flow patterns, and physiological studies indicate that cells in the middle superior temporal (MST) area derive this sensitivity from the combined responses of many MT cells. Extraretinal signals used to control eye movements are an important source of signals to cancel out the retinal motion responses generated by eye movements, though visual information also plays a role. A number of issues remain to be resolved at all levels of the motion-processing hierarchy. 
WIREs Cogn Sci 2011, 2, 305-314. DOI: 10.1002/wcs.110. For further resources related to this article, please visit the WIREs website. Additional Supporting Information may be found at http://www.lifesci.sussex.ac.uk/home/George_Mather/Motion/index.html. Copyright © 2010 John Wiley & Sons, Ltd.

  15. The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information.

    PubMed

    Buchan, Julie N; Munhall, Kevin G

    2012-01-01

    Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect audiovisual integration in a McGurk speech task, and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and that this effect is relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task. Gaze was less centralized on the face, less time was spent looking at the mouth, and more time was spent looking at the eyes when a concurrent cognitive load task was added to the speech task.

  16. Brain activity related to working memory for temporal order and object information.

    PubMed

    Roberts, Brooke M; Libby, Laura A; Inhoff, Marika C; Ranganath, Charan

    2017-06-08

    Maintaining items in an appropriate sequence is important for many daily activities; however, remarkably little is known about the neural basis of human temporal working memory. Prior work suggests that the prefrontal cortex (PFC) and medial temporal lobe (MTL), including the hippocampus, play a role in representing information about temporal order. The involvement of these areas in successful temporal working memory, however, is less clear. Additionally, it is unknown whether regions in the PFC and MTL support temporal working memory across different timescales, or at coarse or fine levels of temporal detail. To address these questions, participants were scanned while completing 3 working memory task conditions (Group, Position and Item) that were matched in terms of difficulty and the number of items to be actively maintained. Group and Position trials probed temporal working memory processes, requiring the maintenance of hierarchically organized coarse and fine temporal information, respectively. To isolate activation related to temporal working memory, Group and Position trials were contrasted against Item trials, which required detailed working memory maintenance of visual objects. Results revealed that working memory encoding and maintenance of temporal information relative to visual information was associated with increased activation in dorsolateral PFC (DLPFC), and perirhinal cortex (PRC). In contrast, maintenance of visual details relative to temporal information was characterized by greater activation of parahippocampal cortex (PHC), medial and anterior PFC, and retrosplenial cortex. In the hippocampus, a dissociation along the longitudinal axis was observed such that the anterior hippocampus was more active for working memory encoding and maintenance of visual detail information relative to temporal information, whereas the posterior hippocampus displayed the opposite effect. 
Posterior parietal cortex was the only region to show sensitivity to temporal working memory across timescales, and was particularly involved in the encoding and maintenance of fine temporal information relative to maintenance of temporal information at more coarse timescales. Collectively, these results highlight the involvement of PFC and MTL in temporal working memory processes, and suggest a dissociation in the type of working memory information represented along the longitudinal axis of the hippocampus. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Research on robot mobile obstacle avoidance control based on visual information

    NASA Astrophysics Data System (ADS)

    Jin, Jiang

    2018-03-01

    Detecting obstacles and controlling robots to avoid them has long been a key topic in robot control research. In this paper, a scheme for visual information acquisition is proposed. By interpreting the visual information, it is transformed into an information source for path processing. While following the established route, the algorithm adjusts the trajectory in real time when obstacles are encountered, achieving intelligent control of the mobile robot. Simulation results show that, through the integration of visual sensing information, obstacle information is fully obtained while the real-time performance and accuracy of the robot's movement control are guaranteed.
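
    One common way to realize this kind of real-time trajectory adjustment is a potential-field scheme; the sketch below is a generic illustration under that assumption (the paper does not specify its algorithm), with all gains, ranges, and positions invented.

```python
import math

def step(robot, goal, obstacle, k_att=1.0, k_rep=0.5, influence=2.0, dt=0.1):
    """One control step: attraction toward the goal plus repulsion from a
    visually detected obstacle when it lies within the influence radius."""
    ax = k_att * (goal[0] - robot[0])          # attraction to goal
    ay = k_att * (goal[1] - robot[1])
    ox, oy = robot[0] - obstacle[0], robot[1] - obstacle[1]
    d = math.hypot(ox, oy)
    if d < influence:                          # repulsion only when close
        rep = k_rep * (1.0 / d - 1.0 / influence) / d**2
        ax += rep * ox / d
        ay += rep * oy / d
    return (robot[0] + dt * ax, robot[1] + dt * ay)

# Toy scenario: the obstacle sits just off the straight line to the goal,
# so the robot curves around it and still converges on the goal.
pos = (0.0, 0.0)
goal, obstacle = (5.0, 0.0), (2.5, 0.5)
for _ in range(200):
    pos = step(pos, goal, obstacle)
```

    In a vision-based system, `obstacle` would be updated every frame from the detected obstacle position, which is what makes the adjustment real-time.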

  18. Hospital Information Systems for Clinical and Research Applications: A Survey of the Issues

    DTIC Science & Technology

    1983-06-01

    potentials for auditory and visual nervous system activity) is being used intensively in the field of neurophysiology (27, 108, 109). In addition, the high...user group: this provides a community of enlightened users who can share ideas and experiences. (NOTE: NCHSR support ended January 1, 1983.) .Masor

  19. Demonstrating a Web-Design Technique in a Distance-Learning Environment

    ERIC Educational Resources Information Center

    Zdenek, Sean

    2004-01-01

    Objective: To lead a brief training session over a distance-learning network. Type of speech: Informative. Point value: 20% of course grade. Requirements: (a) References: Not specified; (b) Length: 15 minutes; (c) Visual aid: Yes; (d) Outline: No; (e) Prerequisite reading: Chapters 12-16, 18 (Bailey, 2002); (f) Additional requirements: None. This…

  20. Near-Infrared Spectroscopy of Himalia An Irregular Jovian Satellite

    NASA Technical Reports Server (NTRS)

    Brown, R. H.; Baines, K.; Bellucci, G.; Bibring, J.-P.; Buratti, B.; Capaccioni, F.; Cerroni, P.; Clark, R.; Coradini, A.; Cruikshank, D.

    2002-01-01

    Spectra of the irregular Jovian satellite Himalia were obtained with the Visual and Infrared Mapping Spectrometer (VIMS) onboard Cassini during the Jupiter Flyby on December 18-19, 2000. These are the first spectral data of an irregular satellite beyond 2.5 microns. Additional information is contained in the original extended abstract.

  1. 78 FR 27867 - Airworthiness Directives; MD Helicopters Inc. Helicopters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-13

    ... the retirement life of each tail rotor blade (blade), performing a one-time visual inspection of each... information to the FAA within 24 hours following the one-time inspection. Since we issued that AD, an accident... shot peen surface's condition in addition to cracks and corrosion, and would add certain part-numbered...

  2. Computer-Based Learning of Spelling Skills in Children with and without Dyslexia

    ERIC Educational Resources Information Center

    Kast, Monika; Baschera, Gian-Marco; Gross, Markus; Jancke, Lutz; Meyer, Martin

    2011-01-01

    Our spelling training software recodes words into multisensory representations comprising visual and auditory codes. These codes represent information about letters and syllables of a word. An enhanced version, developed for this study, contains an additional phonological code and an improved word selection controller relying on a phoneme-based…

  3. Manipulations of attention dissociate fragile visual short-term memory from visual working memory.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Lamme, Victor A F

    2011-05-01

    People often rely on information that is no longer in view but is maintained in visual short-term memory (VSTM). Traditionally, VSTM is thought to operate either on a short timescale with high capacity (iconic memory) or on a long timescale with small capacity (visual working memory). Recent research suggests that, in addition, an intermediate stage of memory exists between iconic memory and visual working memory. This intermediate stage has a large capacity and a lifetime of several seconds, but is easily overwritten by new stimulation; we therefore termed it fragile VSTM. In previous studies, fragile VSTM has been dissociated from iconic memory by the characteristics of the memory trace. In the present study, we dissociated fragile VSTM from visual working memory by showing a dissociation in their dependence on attention. A decrease in attention during presentation of the stimulus array greatly reduced the capacity of visual working memory, while it had only a small effect on the capacity of fragile VSTM. We conclude that fragile VSTM is a memory store separate from visual working memory. Thus, a tripartite division of VSTM appears to be in place, comprising iconic memory, fragile VSTM and visual working memory. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Processing time of addition or withdrawal of single or combined balance-stabilizing haptic and visual information

    PubMed Central

    Honeine, Jean-Louis; Crisafulli, Oscar; Sozzi, Stefania

    2015-01-01

    We investigated the integration time of haptic and visual input and their interaction during stance stabilization. Eleven subjects performed four tandem-stance conditions (60 trials each). Vision, touch, and both vision and touch were added and withdrawn. Furthermore, vision was replaced with touch and vice versa. Body sway, tibialis anterior, and peroneus longus activity were measured. Following addition or withdrawal of vision or touch, an integration time period elapsed before the earliest changes in sway were observed. Thereafter, sway varied exponentially to a new steady-state while reweighting occurred. Latencies of sway changes on sensory addition ranged from 0.6 to 1.5 s across subjects, consistently longer for touch than vision, and were regularly preceded by changes in muscle activity. Addition of vision and touch simultaneously shortened the latencies with respect to vision or touch separately, suggesting cooperation between sensory modalities. Latencies following withdrawal of vision or touch or both simultaneously were shorter than following addition. When vision was replaced with touch or vice versa, adding one modality did not interfere with the effect of withdrawal of the other, suggesting that integration of withdrawal and addition were performed in parallel. The time course of the reweighting process to reach the new steady-state was also shorter on withdrawal than addition. The effects of different sensory inputs on posture stabilization illustrate the operation of a time-consuming, possibly supraspinal process that integrates and fuses modalities for accurate balance control. This study also shows the facilitatory interaction of visual and haptic inputs in integration and reweighting of stance-stabilizing inputs. PMID:26334013

  5. Research review for information management

    NASA Technical Reports Server (NTRS)

    Bishop, Peter C.

    1988-01-01

    The goal of RICIS research in information management is to apply currently available technology to existing problems in information management. Research projects include the following: the Space Business Research Center (SBRC), the Management Information and Decision Support Environment (MIDSE), and the investigation of visual interface technology. Several additional projects issued reports. New projects include the following: (1) the AdaNET project to develop a technology transfer network for software engineering and the Ada programming language; and (2) work on designing a communication system for the Space Station Project Office at JSC. The central aim of all projects is to use information technology to help people work more productively.

  6. Prefrontal Neuronal Responses during Audiovisual Mnemonic Processing

    PubMed Central

    Hwang, Jaewon

    2015-01-01

    During communication we combine auditory and visual information. Neurophysiological research in nonhuman primates has shown that single neurons in ventrolateral prefrontal cortex (VLPFC) exhibit multisensory responses to faces and vocalizations presented simultaneously. However, whether VLPFC is also involved in maintaining those communication stimuli in working memory or combining stored information across different modalities is unknown, although its human homolog, the inferior frontal gyrus, is known to be important in integrating verbal information from auditory and visual working memory. To address this question, we recorded from VLPFC while rhesus macaques (Macaca mulatta) performed an audiovisual working memory task. Unlike traditional match-to-sample/nonmatch-to-sample paradigms, which use unimodal memoranda, our nonmatch-to-sample task used dynamic movies consisting of both facial gestures and the accompanying vocalizations. For the nonmatch conditions, a change in the auditory component (vocalization), the visual component (face), or both components was detected. Our results show that VLPFC neurons are activated by stimulus and task factors: while some neurons simply responded to a particular face or a vocalization regardless of the task period, others exhibited activity patterns typically related to working memory such as sustained delay activity and match enhancement/suppression. In addition, we found neurons that detected the component change during the nonmatch period. Interestingly, some of these neurons were sensitive to the change of both components and therefore combined information from auditory and visual working memory. These results suggest that VLPFC is not only involved in the perceptual processing of faces and vocalizations but also in their mnemonic processing. PMID:25609614

  7. PROACT: Iterative Design of a Patient-Centered Visualization for Effective Prostate Cancer Health Risk Communication.

    PubMed

    Hakone, Anzu; Harrison, Lane; Ottley, Alvitta; Winters, Nathan; Gutheil, Caitlin; Han, Paul K J; Chang, Remco

    2017-01-01

    Prostate cancer is the most common cancer among men in the US, and yet most cases represent localized cancer for which the optimal treatment is unclear. Accumulating evidence suggests that the available treatment options, including surgery and conservative treatment, result in a similar prognosis for most men with localized prostate cancer. However, approximately 90% of patients choose surgery over conservative treatment, despite the risk of severe side effects like erectile dysfunction and incontinence. Recent medical research suggests that a key reason is the lack of patient-centered tools that can effectively communicate personalized risk information and enable them to make better health decisions. In this paper, we report the iterative design process and results of developing the PROgnosis Assessment for Conservative Treatment (PROACT) tool, a personalized health risk communication tool for localized prostate cancer patients. PROACT utilizes two published clinical prediction models to communicate the patients' personalized risk estimates and compare treatment options. In collaboration with the Maine Medical Center, we conducted two rounds of evaluations with prostate cancer survivors and urologists to identify the design elements and narrative structure that effectively facilitate patient comprehension under emotional distress. Our results indicate that visualization can be an effective means to communicate complex risk information to patients with low numeracy and visual literacy. However, the visualizations need to be carefully chosen to balance readability with ease of comprehension. In addition, due to patients' charged emotional state, an intuitive narrative structure that considers the patients' information need is critical to aid the patients' comprehension of their risk information.

  8. The Associations between Visual Attention and Facial Expression Identification in Patients with Schizophrenia.

    PubMed

    Lin, I-Mei; Fan, Sheng-Yu; Huang, Tiao-Lai; Wu, Wan-Ting; Li, Shi-Ming

    2013-12-01

    Visual search is an important attentional process that precedes information processing. Visual search also mediates the relationship between cognitive function (attention) and social cognition (such as facial expression identification). However, the association between visual attention and social cognition in patients with schizophrenia remains unknown. The purposes of this study were to examine the differences in visual search performance and facial expression identification between patients with schizophrenia and normal controls, and to explore the relationship between visual search performance and facial expression identification in patients with schizophrenia. Fourteen patients with schizophrenia (mean age=46.36±6.74) and 15 normal controls (mean age=40.87±9.33) participated in this study. The visual search task, including feature search and conjunction search, and the Japanese and Caucasian Facial Expressions of Emotion were administered. Patients with schizophrenia performed worse than normal controls on both feature search and conjunction search, and were also worse at facial expression identification, especially for surprise and sadness. In addition, there were negative associations between visual search performance and facial expression identification in patients with schizophrenia, especially for surprise and sadness; this phenomenon was not seen in normal controls. Patients with schizophrenia with visual search deficits showed impaired facial expression identification. Improving visual search and facial expression identification abilities may improve their social functioning and interpersonal relationships.

  9. Innovative Visualization Techniques applied to a Flood Scenario

    NASA Astrophysics Data System (ADS)

    Falcão, António; Ho, Quan; Lopes, Pedro; Malamud, Bruce D.; Ribeiro, Rita; Jern, Mikael

    2013-04-01

    The large and ever-increasing amounts of multi-dimensional, time-varying and geospatial digital information from multiple sources represent a major challenge for today's analysts. We present a set of visualization techniques for the interactive analysis of geo-referenced, time-sampled data sets, providing an integrated mechanism that aids the user in collaboratively exploring, presenting and communicating visually complex and dynamic data. Here we present these concepts in the context of a 4-hour flood scenario from Lisbon in 2010, with data that includes measures of water column (flood height) every 10 minutes at a 4.5 m x 4.5 m resolution, topography, building damage, building information, and online base maps. Techniques we use include web-based linked views, multiple charts, map layers and storytelling. We explain in more detail two of these that are not yet in common use for data visualization: storytelling and web-based linked views. Visual storytelling is a method for providing a guided but interactive process of visualizing data, allowing more engaging data exploration through interactive web-enabled visualizations. Within storytelling, a snapshot mechanism helps the author of a story highlight data views of particular interest and subsequently share them or guide others within the data analysis process. This allows a particular person to select the relevant attributes for a snapshot, such as highlighted regions for comparison, time step, class values for the colour legend, etc., and capture the current application state, which can then be provided as a hyperlink and recreated by someone else. Since data can be embedded within this snapshot, it is possible to interactively visualize and manipulate it.
The second technique, web-based linked views, uses multiple windows that respond interactively to user selections, so that when an object is selected or changed in one window, it is automatically updated in all the other windows. These concepts can be part of a collaborative platform, where multiple people share and work together on the data via online access, which also allows remote usage from a mobile platform. Storytelling augments analysis and decision-making capabilities, allowing users to assimilate complex situations and reach informed decisions, in addition to helping the public visualize information. In our visualization scenario, developed in the context of the VA-4D project for the European Space Agency (see http://www.ca3-uninova.org/project_va4d), we make use of the GAV (GeoAnalytics Visualization) framework, a web-oriented visual analytics application based on multiple interactive views. The final visualization that we produce includes multiple interactive views, including a dynamic multi-layer map surrounded by other visualizations such as bar charts, time graphs and scatter plots. The map provides flood and building information on top of a base city map (street maps and/or satellite imagery provided by online map services such as Google Maps, Bing Maps, etc.). Damage over time for selected buildings, damage for all buildings at a chosen time period, and the correlation between damage and water depth can be analysed in the other views. This interactive web-based visualization, which incorporates the ideas of storytelling, web-based linked views, and other visualization techniques for a 4-hour flood event in Lisbon in 2010, can be found online at http://www.ncomva.se/flash/projects/esa/flooding/.
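
    The snapshot-as-hyperlink idea can be illustrated with a minimal round-trip sketch: the application state worth sharing (time step, selected buildings, colour-legend class) is serialized to JSON, base64-encoded, and carried in a URL fragment so another user can recreate the view. The field names and URL are invented, not GAV's actual scheme.

```python
import base64
import json

def make_snapshot_link(state, base_url="https://example.org/flood-story"):
    """Encode the shareable application state into a hyperlink fragment."""
    payload = base64.urlsafe_b64encode(
        json.dumps(state, sort_keys=True).encode()).decode()
    return f"{base_url}#snapshot={payload}"

def restore_snapshot(link):
    """Recover the application state from a snapshot hyperlink."""
    payload = link.split("#snapshot=", 1)[1]
    return json.loads(base64.urlsafe_b64decode(payload))

state = {"time_step": 18, "selected": ["bldg_042"], "legend_class": "damage"}
link = make_snapshot_link(state)
assert restore_snapshot(link) == state  # round trip preserves the state
```

    Putting the payload in the fragment (after `#`) keeps it client-side; the receiving page decodes it and restores the map layers, time slider, and selections.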

  10. [Information processing speed and influential factors in multiple sclerosis].

    PubMed

    Zhang, M L; Xu, E H; Dong, H Q; Zhang, J W

    2016-04-19

    To study the information processing speed and its influential factors in multiple sclerosis (MS) patients. A total of 36 patients with relapsing-remitting MS (RRMS), 21 patients with secondary progressive MS (SPMS), and 50 healthy control subjects from Xuanwu Hospital of Capital Medical University between April 2010 and April 2012 were included in this cross-sectional study. Neuropsychological tests were conducted after the disease had been stable for 8 weeks, covering information processing speed, memory, executive functions, language and visual perception. Correlations between information processing speed and depression, fatigue, and Expanded Disability Status Scale (EDSS) scores were studied. (1) MS patient groups demonstrated cognitive deficits compared to healthy controls. The Symbol Digit Modalities Test (SDMT) (control group 57±12; RRMS group 46±17; SPMS group 35±10, P<0.05) and Paced Auditory Serial Addition Task (PASAT) (control group 85±18; RRMS group 77±20; SPMS group 57±20, P<0.05) were the most impaired. SPMS patients were more affected than patients with the RRMS subtype, and these differences were attenuated after controlling for physical disability level as measured by the EDSS scores. MS patients, especially the SPMS subtype, were more severely impaired than the control group in the verbal learning test, verbal fluency, and Stroop C test planning time, while visual-spatial function and visual memory were relatively preserved. (2) According to the Pearson univariate correlation analysis, age, depression, EDSS scores and fatigue were related to the PASAT and SDMT tests (r=-0.41--0.61, P<0.05). Depression significantly affected the speed of information processing (P<0.05). 
Impairment of information processing speed, verbal memory and executive functioning is seen in MS patients, especially in the SPMS subtype, while visual-spatial function is relatively preserved. Age, white matter change scales, EDSS scores, and depression are negatively associated with information processing speed.
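
The Pearson univariate correlation the authors report can be computed directly from its definition. The EDSS/SDMT values below are invented purely to illustrate a negative correlation of the kind described; they are not the study's data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative (fabricated) values: higher disability (EDSS) pairing
# with lower SDMT scores yields a negative r, as the study reports.
edss = [1.0, 2.5, 4.0, 5.5, 6.5]
sdmt = [58, 50, 44, 38, 33]
```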

  11. Students' Understanding of Salt Dissolution: Visualizing Animation in the Chemistry Classroom

    NASA Astrophysics Data System (ADS)

    Malkoc, Ummuhan

    The present study explored the effect of animation implementation in learning a chemistry topic. 135 high school students taking a chemistry class were selected for this study (quasi-experimental group = 67 and control group = 68). Independent samples t-tests were run to compare the animation and control groups between and within the schools. The over-arching finding of this research indicated that when science teachers used animations while teaching salt dissolution phenomena, students benefited from the application of animations. In addition, the findings informed the TPACK framework on the idea that visual tools are important in students' understanding of salt dissolution concepts.

  12. CytoSPADE: high-performance analysis and visualization of high-dimensional cytometry data

    PubMed Central

    Linderman, Michael D.; Simonds, Erin F.; Qiu, Peng; Bruggner, Robert V.; Sheode, Ketaki; Meng, Teresa H.; Plevritis, Sylvia K.; Nolan, Garry P.

    2012-01-01

    Motivation: Recent advances in flow cytometry enable simultaneous single-cell measurement of 30+ surface and intracellular proteins. CytoSPADE is a high-performance implementation of an interface for the Spanning-tree Progression Analysis of Density-normalized Events algorithm for tree-based analysis and visualization of this high-dimensional cytometry data. Availability: Source code and binaries are freely available at http://cytospade.org and via Bioconductor version 2.10 onwards for Linux, OSX and Windows. CytoSPADE is implemented in R, C++ and Java. Contact: michael.linderman@mssm.edu Supplementary Information: Additional documentation available at http://cytospade.org. PMID:22782546

  13. Visual working memory for global, object, and part-based information.

    PubMed

    Patterson, Michael D; Bly, Benjamin Martin; Porcelli, Anthony J; Rypma, Bart

    2007-06-01

    We investigated visual working memory for novel objects and parts of novel objects. After a delay period, participants showed strikingly more accurate performance recognizing a single whole object than the parts of that object. This bias to remember whole objects, rather than parts, persisted even when the division between parts was clearly defined and the parts were disconnected from each other so that, in order to remember the single whole object, the participants needed to mentally combine the parts. In addition, the bias was confirmed when the parts were divided by color. These experiments indicated that holistic perceptual-grouping biases are automatically used to organize storage in visual working memory. In addition, our results suggested that the bias was impervious to top-down consciously directed control, because when task demands were manipulated through instruction and catch trials, the participants still recognized whole objects more quickly and more accurately than their parts. This bias persisted even when the whole objects were novel and the parts were familiar. We propose that visual working memory representations depend primarily on the global configural properties of whole objects, rather than part-based representations, even when the parts themselves can be clearly perceived as individual objects. This global configural bias beneficially reduces memory load on a capacity-limited system operating in a complex visual environment, because fewer distinct items must be remembered.

  14. Visual Working Memory Supports the Inhibition of Previously Processed Information: Evidence from Preview Search

    ERIC Educational Resources Information Center

    Al-Aidroos, Naseem; Emrich, Stephen M.; Ferber, Susanne; Pratt, Jay

    2012-01-01

    In four experiments we assessed whether visual working memory (VWM) maintains a record of previously processed visual information, allowing old information to be inhibited, and new information to be prioritized. Specifically, we evaluated whether VWM contributes to the inhibition (i.e., visual marking) of previewed distractors in a preview search.…

  15. 32 CFR 813.1 - Purpose of the visual information documentation (VIDOC) program.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Purpose of the visual information documentation (VIDOC) program. 813.1 Section 813.1 National Defense Department of Defense (Continued) DEPARTMENT OF THE AIR FORCE SALES AND SERVICES VISUAL INFORMATION DOCUMENTATION PROGRAM § 813.1 Purpose of the visual information documentation (VIDOC) program....

  16. Emotional Effects in Visual Information Processing

    DTIC Science & Technology

    2009-10-24

    Emotional Effects in Visual Information Processing. AOARD report, October 24, 2009 (contract FA4869-08-0004, AOARD 074018). The objective of this research project was to investigate how emotion influences visual information processing and the neural correlates of these effects.

  17. Greater magnocellular saccadic suppression in high versus low autistic tendency suggests a causal path to local perceptual style

    PubMed Central

    Crewther, David P.; Crewther, Daniel; Bevan, Stephanie; Goodale, Melvyn A.; Crewther, Sheila G.

    2015-01-01

    Saccadic suppression—the reduction of visual sensitivity during rapid eye movements—has previously been proposed to reflect a specific suppression of the magnocellular visual system, with the initial neural site of that suppression at or prior to afferent visual information reaching striate cortex. Dysfunction in the magnocellular visual pathway has also been associated with perceptual and physiological anomalies in individuals with autism spectrum disorder or high autistic tendency, leading us to question whether saccadic suppression is altered in the broader autism phenotype. Here we show that individuals with high autistic tendency show greater saccadic suppression of low versus high spatial frequency gratings while those with low autistic tendency do not. In addition, those with high but not low autism spectrum quotient (AQ) demonstrated pre-cortical (35–45 ms) evoked potential differences (saccade versus fixation) to a large, low contrast, pseudo-randomly flashing bar. Both AQ groups showed similar differential visual evoked potential effects in later epochs (80–160 ms) at high contrast. Thus, the magnocellular theory of saccadic suppression appears untenable as a general description for the typically developing population. Our results also suggest that the bias towards local perceptual style reported in autism may be due to selective suppression of low spatial frequency information accompanying every saccadic eye movement. PMID:27019719

  18. Audiovisual integration of emotional signals in voice and face: an event-related fMRI study.

    PubMed

    Kreifelts, Benjamin; Ethofer, Thomas; Grodd, Wolfgang; Erb, Michael; Wildgruber, Dirk

    2007-10-01

    In a natural environment, non-verbal emotional communication is multimodal (i.e. speech melody, facial expression) and multifaceted concerning the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual and auditory stimulation. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus, when contrasting audiovisual to auditory and visual conditions. Further, a characteristic of these brain regions, substantiating their role in the emotional integration process, is a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.

  19. Web-GIS-based SARS epidemic situation visualization

    NASA Astrophysics Data System (ADS)

    Lu, Xiaolin

    2004-03-01

    In order to research, statistically analyse and broadcast information about the SARS epidemic situation according to the relevant spatial positions, this paper proposed a unified global visual information platform for the SARS epidemic situation based on Web-GIS and scientific visualization technology. To set up the platform, the architecture of a Web-GIS-based interoperable information system is adopted, enabling the public to report SARS virus information to health care centers visually through web visualization technology. A GIS Java applet is used to visualize the relationship between spatial graphical data and virus distribution, and other web-based graphics such as curves, bars, maps and multi-dimensional figures are used to visualize the relationship between SARS virus trends and time, patient numbers or locations. The platform is designed to display SARS information in real time, visually simulate the real epidemic situation, and offer analysis tools for health departments and policy-making government departments to support decision-making in preventing the SARS epidemic. It could be used to analyze the virus situation through a visualized graphics interface, isolate the areas of virus sources, and control the virus situation within the shortest time. It could be applied in the visualization field of SARS prevention systems for SARS information broadcasting, data management, statistical analysis, and decision support.

  20. Multimodal fusion of polynomial classifiers for automatic person recognition

    NASA Astrophysics Data System (ADS)

    Broun, Charles C.; Zhang, Xiaozheng

    2001-03-01

    With the prevalence of the information age, privacy and personalization are at the forefront of today's society. As such, biometrics are viewed as essential components of current evolving technological systems. Consumers demand unobtrusive and non-invasive approaches. In our previous work, we demonstrated a speaker verification system that meets these criteria. However, there are additional constraints for fielded systems. The required recognition transactions are often performed in adverse environments and across diverse populations, necessitating robust solutions. There are two significant problem areas in current-generation speaker verification systems. The first is the difficulty in acquiring clean audio signals in all environments without encumbering the user with a head-mounted close-talking microphone. Second, unimodal biometric systems do not work with a significant percentage of the population. To combat these issues, multimodal techniques are being investigated to improve system robustness to environmental conditions, as well as to improve overall accuracy across the population. We propose a multimodal approach that builds on our current state-of-the-art speaker verification technology. In order to maintain the transparent nature of the speech interface, we focus on optical sensing technology to provide the additional modality, giving us an audio-visual person recognition system. For the audio domain, we use our existing speaker verification system. For the visual domain, we focus on lip motion. This is chosen, rather than static face or iris recognition, because it provides dynamic information about the individual. In addition, the lip dynamics can aid speech recognition to provide liveness testing. The visual processing method makes use of both color and edge information, combined within a Markov random field (MRF) framework, to localize the lips. Geometric features are extracted and input to a polynomial classifier for the person recognition process. 
A late integration approach, based on a probabilistic model, is employed to combine the two modalities. The system is tested on the XM2VTS database combined with AWGN in the audio domain over a range of signal-to-noise ratios.
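
The late-integration step can be sketched as a weighted sum of per-modality log-likelihoods. The identities, scores, and fixed weight below are illustrative; in a fielded system the weight would be tuned to the audio signal-to-noise ratio rather than fixed.

```python
from math import log

def late_fusion(audio_scores, visual_scores, w_audio=0.6):
    """Combine per-identity likelihoods from two modality classifiers
    by a weighted sum of log-likelihoods (a simple late-integration
    model) and return the best-scoring identity."""
    fused = {}
    for person in audio_scores:
        fused[person] = (w_audio * log(audio_scores[person])
                         + (1 - w_audio) * log(visual_scores[person]))
    return max(fused, key=fused.get)

# Hypothetical per-identity likelihoods from the two classifiers.
audio = {"alice": 0.7, "bob": 0.3}    # speaker-verification scores
visual = {"alice": 0.4, "bob": 0.6}   # lip-motion classifier scores
```

Lowering `w_audio` (as one might under heavy audio noise) shifts the decision toward the visual channel, which is the point of combining the modalities.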

  1. Nonretinotopic visual processing in the brain.

    PubMed

    Melcher, David; Morrone, Maria Concetta

    2015-01-01

    A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.

  2. Feature-Based Memory-Driven Attentional Capture: Visual Working Memory Content Affects Visual Attention

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.; Meijer, Frank; Theeuwes, Jan

    2006-01-01

    In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by…

  3. Degradation of labial information modifies audiovisual speech perception in cochlear-implanted children.

    PubMed

    Huyse, Aurélie; Berthommier, Frédéric; Leybaert, Jacqueline

    2013-01-01

    The aim of the present study was to examine audiovisual speech integration in cochlear-implanted children and in normally hearing children exposed to degraded auditory stimuli. Previous studies have shown that speech perception in cochlear-implant users is biased toward the visual modality when audition and vision provide conflicting information. Our main question was whether an experimentally designed degradation of the visual speech cue would increase the importance of audition in the response pattern. The impact of auditory proficiency was also investigated. A group of 31 children with cochlear implants and a group of 31 normally hearing children matched for chronological age were recruited. All children with cochlear implants had profound congenital deafness and had used their implants for at least 2 years. Participants had to perform an /aCa/ consonant-identification task in which stimuli were presented randomly in three conditions: auditory only, visual only, and audiovisual (congruent and incongruent McGurk stimuli). In half of the experiment, the visual speech cue was normal; in the other half (visual reduction), a degraded visual signal was presented, designed to prevent good-quality lipreading. The normally hearing children received a spectrally reduced speech signal (simulating the input delivered by the cochlear implant). First, performance in the visual-only and congruent audiovisual modalities was decreased, showing that the visual reduction technique used here was efficient at degrading lipreading. Second, in the incongruent audiovisual trials, visual reduction led to a major increase in the number of auditory-based responses in both groups. Differences between proficient and nonproficient children were found in both groups, with nonproficient children's responses being more visual and less auditory than those of proficient children. 
Further analysis revealed that differences between visually clear and visually reduced conditions and between groups were not only because of differences in unisensory perception but also because of differences in the process of audiovisual integration per se. Visual reduction led to an increase in the weight of audition, even in cochlear-implanted children, whose perception is generally dominated by vision. This result suggests that the natural bias in favor of vision is not immutable. Audiovisual speech integration partly depends on the experimental situation, which modulates the informational content of the sensory channels and the weight that is awarded to each of them. Consequently, participants, whether deaf with cochlear implants or having normal hearing, not only base their perception on the most reliable modality but also award it an additional weight.

  4. Increased Complexities in Visual Search Behavior in Skilled Players for a Self-Paced Aiming Task

    PubMed Central

    Chia, Jingyi S.; Burns, Stephen F.; Barrett, Laura A.; Chow, Jia Y.

    2017-01-01

    The badminton serve is an important shot for winning a rally in a match. It combines good technique with the ability to accurately integrate visual information from the shuttle, racket, opponent, and intended landing point. Despite its importance and repercussive nature, to date no study has looked at the visual search behaviors during badminton service in the singles discipline. Unlike anticipatory tasks (e.g., shot returns), the serve presents an opportunity to explore the role of visual search behaviors in movement control for self-paced tasks. Accordingly, this study examined skill-related differences in visual behavior during the badminton singles serve. Skilled (n = 12) and less skilled (n = 12) participants performed 30 serves to a live opponent, while real-time eye movements were captured using a mobile gaze registration system. Frame-by-frame analyses of 662 serves were made and the skilled players took a longer preparatory time before serving. Visual behavior of the skilled players was characterized by significantly greater number of fixations on more areas of interest per trial than the less skilled. In addition, the skilled players spent a significantly longer time fixating on the court and net, whereas the less skilled players found the shuttle to be more informative. Quiet eye (QE) duration (indicative of superior sports performance) however, did not differ significantly between groups which has implications on the perceived importance of QE in the badminton serve. Moreover, while visual behavior differed by skill level, considerable individual differences were also observed especially within the skilled players. This augments the need for not just group-level analyses, but individualized analysis for a more accurate representation of visual behavior. Findings from this study thus provide an insight to the possible visual search strategies as players serve in net-barrier games. 
Moreover, this study highlighted an important aspect of badminton relating to deception and the implications of interpreting visual behavior of players. PMID:28659850

  5. Increased Complexities in Visual Search Behavior in Skilled Players for a Self-Paced Aiming Task.

    PubMed

    Chia, Jingyi S; Burns, Stephen F; Barrett, Laura A; Chow, Jia Y

    2017-01-01

    The badminton serve is an important shot for winning a rally in a match. It combines good technique with the ability to accurately integrate visual information from the shuttle, racket, opponent, and intended landing point. Despite its importance and repercussive nature, to date no study has looked at the visual search behaviors during badminton service in the singles discipline. Unlike anticipatory tasks (e.g., shot returns), the serve presents an opportunity to explore the role of visual search behaviors in movement control for self-paced tasks. Accordingly, this study examined skill-related differences in visual behavior during the badminton singles serve. Skilled ( n = 12) and less skilled ( n = 12) participants performed 30 serves to a live opponent, while real-time eye movements were captured using a mobile gaze registration system. Frame-by-frame analyses of 662 serves were made and the skilled players took a longer preparatory time before serving. Visual behavior of the skilled players was characterized by significantly greater number of fixations on more areas of interest per trial than the less skilled. In addition, the skilled players spent a significantly longer time fixating on the court and net, whereas the less skilled players found the shuttle to be more informative. Quiet eye (QE) duration (indicative of superior sports performance) however, did not differ significantly between groups which has implications on the perceived importance of QE in the badminton serve. Moreover, while visual behavior differed by skill level, considerable individual differences were also observed especially within the skilled players. This augments the need for not just group-level analyses, but individualized analysis for a more accurate representation of visual behavior. Findings from this study thus provide an insight to the possible visual search strategies as players serve in net-barrier games. 
Moreover, this study highlighted an important aspect of badminton relating to deception and the implications of interpreting visual behavior of players.

  6. Image Feature Types and Their Predictions of Aesthetic Preference and Naturalness

    PubMed Central

    Ibarra, Frank F.; Kardan, Omid; Hunter, MaryCarol R.; Kotabe, Hiroki P.; Meyer, Francisco A. C.; Berman, Marc G.

    2017-01-01

    Previous research has investigated ways to quantify visual information of a scene in terms of a visual processing hierarchy, i.e., making sense of the visual environment by segmentation and integration of elementary sensory input. Guided by this research, studies have developed categories for low-level visual features (e.g., edges, colors), high-level visual features (scene-level entities that convey semantic information, such as objects), and models of how those features predict aesthetic preference and naturalness. For example, in Kardan et al. (2015a), 52 participants provided aesthetic preference and naturalness ratings, which are used in the current study, for 307 images of mixed natural and urban content. Kardan et al. (2015a) then developed a model using low-level features to predict aesthetic preference and naturalness and could do so with high accuracy. What has yet to be explored is the ability of higher-level visual features (e.g., horizon line position relative to viewer, geometry of building distribution relative to visual access) to predict aesthetic preference and naturalness of scenes, and whether higher-level features mediate some of the association between the low-level features and aesthetic preference or naturalness. In this study we investigated these relationships and found that low- and high-level features explain 68.4% of the variance in aesthetic preference ratings and 88.7% of the variance in naturalness ratings. Additionally, several high-level features mediated the relationship between the low-level visual features and aesthetic preference. In a multiple mediation analysis, the high-level feature mediators accounted for over 50% of the variance in predicting aesthetic preference. These results show that high-level visual features play a prominent role in predicting aesthetic preference, but do not completely eliminate the predictive power of the low-level visual features. 
These strong predictors provide powerful insights for future research relating to landscape and urban design with the aim of maximizing subjective well-being, which could lead to improved health outcomes on a larger scale. PMID:28503158

  7. Data Visualization Using Immersive Virtual Reality Tools

    NASA Astrophysics Data System (ADS)

    Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.

    2013-01-01

    The growing complexity of scientific data poses serious challenges for effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional data parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D game engine, coded using C#, JavaScript, and the Unity scripting language. This visualization tool can be used through a standard web browser, or a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects whose shapes, colors, sizes, and of course XYZ positions encode various dimensions of the parameter space, and these encodings can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own, independent vantage points, or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added. 
We expect to make this visualization tool freely available to the academic community within a few months, on an experimental (beta testing) basis.
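
The encoding described above (3 spatial dimensions plus shapes, colors, and sizes) can be sketched as a simple mapping from a data record to visual attributes. The palette, shape list, and sizing rule below are assumptions for illustration, not the tool's actual scheme.

```python
# Illustrative sketch: encode a 6-dimensional data record as position
# (x, y, z), colour, size, and shape for a pseudo-3D display.
PALETTE = ["red", "green", "blue", "yellow"]
SHAPES = ["cube", "sphere", "cone", "cylinder"]

def encode_point(record):
    """Map one data record's dimensions onto visual attributes."""
    x, y, z, colour_dim, size_dim, shape_dim = record
    return {
        "position": (x, y, z),
        "colour": PALETTE[int(colour_dim) % len(PALETTE)],
        "size": 0.5 + size_dim,  # base size plus the encoded value
        "shape": SHAPES[int(shape_dim) % len(SHAPES)],
    }
```

A renderer (the abstract's Unity-based tool, or any 3D engine) would then instantiate one object per encoded record.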

  8. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
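
The idea of factoring out self-motion can be illustrated with a toy 2D flow vector: subtracting the flow predicted from self-motion alone leaves the object-motion component. The vectors below are hypothetical, not the study's stimuli.

```python
def object_motion(retinal_flow, self_motion_flow):
    """Recover the object-motion component of a retinal flow vector by
    subtracting (factoring out) the component attributed to self-motion.
    A toy 2D illustration of the decomposition described in the abstract."""
    return (retinal_flow[0] - self_motion_flow[0],
            retinal_flow[1] - self_motion_flow[1])

# A point moving rightward in the world while the observer also
# translates: the observed retinal flow confounds both motions.
retinal = (3.0, 1.0)     # observed flow at the object's image location
from_self = (2.0, 1.0)   # flow predicted from self-motion alone
world_motion = object_motion(retinal, from_self)  # (1.0, 0.0)
```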

  9. Perception and control of rotorcraft flight

    NASA Technical Reports Server (NTRS)

    Owen, Dean H.

    1991-01-01

    Three topics which can be applied to rotorcraft flight are examined: (1) the nature of visual information; (2) what visual information is informative about; and (3) the control of visual information. The anchorage of visual perception is defined as the distribution of structure in the surrounding optical array or the distribution of optical structure over the retinal surface. A debate was provoked about whether the referent of visual event perception, and in turn control, is optical motion, kinetics, or dynamics. The interface of control theory and visual perception is also considered. The relationships among these problems are the basis of this article.

  10. Immersion ultrasonography: simultaneous A-scan and B-scan.

    PubMed

    Coleman, D J; Dallow, R L; Smith, M E

    1979-01-01

    In eyes with opaque media, ophthalmic ultrasound provides a unique source of information that can dramatically affect the course of patient management. In addition, when an ocular abnormality can be visualized, ultrasonography provides information that supplements and complements other diagnostic testing. It provides documentation and differentiation of abnormal states, such as vitreous hemorrhage and intraocular tumor, as well as differentiation of orbital tumors from inflammatory causes of exophthalmos. Additional capabilities of ultrasound are biometric determinations for calculation of intraocular lens implant powers and drug-effectiveness studies. Maximal information is derived from ultrasonography when A-scan and B-scan techniques are employed simultaneously. Flexibility of electronics, variable-frequency transducers, and the use of several different manual scanning patterns aid in detection and interpretation of results. The immersion system of ultrasonography provides these features optimally.

  11. Visualizing Spatially Varying Distribution Data

    NASA Technical Reports Server (NTRS)

    Kao, David; Luo, Alison; Dungan, Jennifer L.; Pang, Alex; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    The box plot is a compact representation that encodes the minimum, maximum, mean, median, and quartile information of a distribution. In practice, a single box plot is drawn for each variable of interest. With the advent of more accessible computing power, we now face the problem of visualizing data where there is a distribution at each 2D spatial location. Simply extending the box plot technique to distributions over a 2D domain is not straightforward. One challenge is reducing the visual clutter if a box plot is drawn over each grid location in the 2D domain. This paper presents and discusses two general approaches, using parametric statistics and shape descriptors, to present 2D distribution data sets. Both approaches provide additional insights compared to the traditional box plot technique.
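    The parametric-statistics approach the record mentions starts from per-location summary statistics. As a minimal sketch (the grid size and data are made up), the box-plot statistics at every 2D location can be computed in one pass with NumPy:

```python
import numpy as np

# Hypothetical data: 100 samples of a distribution at each cell of a 4x5 grid.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=(100, 4, 5))

# Box-plot summary statistics at every spatial location (axis 0 = samples).
summary = {
    "min": data.min(axis=0),
    "q1": np.percentile(data, 25, axis=0),
    "median": np.median(data, axis=0),
    "mean": data.mean(axis=0),
    "q3": np.percentile(data, 75, axis=0),
    "max": data.max(axis=0),
}
# Each entry is a 4x5 array, i.e., one value per grid location, ready to be
# mapped to glyphs or colors instead of drawing 20 cluttered box plots.
```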

  12. An infrared/video fusion system for military robotics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, A.W.; Roberts, R.S.

    1997-08-05

    Sensory information is critical to the telerobotic operation of mobile robots. In particular, visual sensors are a key component of the sensor package on a robot engaged in urban military operations. Visual sensors provide the robot operator with a wealth of information including robot navigation and threat assessment. However, simple countermeasures such as darkness, smoke, or blinding by a laser, can easily neutralize visual sensors. In order to provide a robust visual sensing system, an infrared sensor is required to augment the primary visual sensor. An infrared sensor can acquire useful imagery in conditions that incapacitate a visual sensor. A simple approach to incorporating an infrared sensor into the visual sensing system is to display two images to the operator: side-by-side visual and infrared images. However, dual images might overwhelm the operator with information, and result in degraded robot performance. A better solution is to combine the visual and infrared images into a single image that maximizes scene information. Fusing visual and infrared images into a single image demands balancing the mixture of visual and infrared information. Humans are accustomed to viewing and interpreting visual images. They are not accustomed to viewing or interpreting infrared images. Hence, the infrared image must be used to enhance the visual image, not obfuscate it.
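    As a baseline for the enhancement idea in this record, a per-pixel weighted average keeps the visual image dominant while mixing in infrared detail; fielded fusion systems typically use more sophisticated (e.g., multiresolution) schemes. The function and parameter names below are ours, not from the paper:

```python
import numpy as np

def fuse(visual, infrared, alpha=0.7):
    """Pixel-wise weighted fusion of a visual and an infrared image.

    The visual image dominates (alpha > 0.5) so the infrared channel
    enhances rather than obscures the visual scene. Both inputs are
    grayscale arrays scaled to [0, 1].
    """
    fused = alpha * visual + (1.0 - alpha) * infrared
    return np.clip(fused, 0.0, 1.0)

# Hypothetical 2x2 example: a dark visual scene with one bright IR hotspot.
vis = np.array([[0.1, 0.1], [0.1, 0.1]])
ir = np.array([[0.0, 0.9], [0.0, 0.0]])
out = fuse(vis, ir)  # the hotspot pixel ends up brighter than the rest
```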

  13. Factors influencing self-reported vision-related activity limitation in the visually impaired.

    PubMed

    Tabrett, Daryl R; Latham, Keziah

    2011-07-15

    The use of patient-reported outcome (PRO) measures to assess self-reported difficulty in visual activities is common in patients with impaired vision. This study determines the visual and psychosocial factors influencing patients' responses to self-report measures, to aid in understanding what is being measured. One hundred visually impaired participants completed the Activity Inventory (AI), which assesses self-reported, vision-related activity limitation (VRAL) in the task domains of reading, mobility, visual information, and visual motor tasks. Participants also completed clinical tests of visual function (distance visual acuity and near reading performance both with and without low vision aids [LVAs], contrast sensitivity, visual fields, and depth discrimination), and questionnaires assessing depressive symptoms, social support, adjustment to visual loss, and personality. Multiple regression analyses identified that an acuity measure (distance or near), and, to a lesser extent, near reading performance without LVAs, visual fields, and contrast sensitivity best explained self-reported VRAL (28%-50% variance explained). Significant psychosocial correlates were depression and adjustment, explaining an additional 6% to 19% unique variance. Dependent on task domain, the parameters assessed explained 59% to 71% of the variance in self-reported VRAL. Visual function, most notably acuity without LVAs, is the best predictor of self-reported VRAL assessed by the AI. Depression and adjustment to visual loss also significantly influence self-reported VRAL, largely independent of the severity of visual loss and most notably in the less vision-specific tasks. The results suggest that rehabilitation strategies addressing depression and adjustment could improve perceived visual disability.

  14. A Notation for Rapid Specification of Information Visualization

    ERIC Educational Resources Information Center

    Lee, Sang Yun

    2013-01-01

    This thesis describes a notation for rapid specification of information visualization, which can be used as a theoretical framework of integrating various types of information visualization, and its applications at a conceptual level. The notation is devised to codify the major characteristics of data/visual structures in conventionally-used…

  15. [Imaging Mass Spectrometry in Histopathologic Analysis].

    PubMed

    Yamazaki, Fumiyoshi; Seto, Mitsutoshi

    2015-04-01

    Matrix-assisted laser desorption/ionization (MALDI)-imaging mass spectrometry (IMS) enables visualization of the distribution of a range of biomolecules by integrating biochemical information from mass spectrometry with positional information from microscopy. IMS identifies a target molecule. In addition, IMS enables global analysis of biomolecules containing unknown molecules by detecting the ratio of the molecular weight to electric charge without any target, which makes it possible to identify novel molecules. IMS generates data on the distribution of lipids and small molecules in tissues, which is difficult to visualize with either conventional counter-staining or immunohistochemistry. In this review, we first introduce the principle of imaging mass spectrometry and recent advances in sample preparation methods. Second, we present findings regarding biological samples, especially pathological ones. Finally, we discuss the limitations and problems of the IMS technique and its clinical application, such as in drug development.

  16. Designed Natural Spaces: Informal Gardens Are Perceived to Be More Restorative than Formal Gardens

    PubMed Central

    Twedt, Elyssa; Rainey, Reuben M.; Proffitt, Dennis R.

    2016-01-01

    Experimental research shows that there are perceived and actual benefits to spending time in natural spaces compared to urban spaces, such as reduced cognitive fatigue, improved mood, and reduced stress. Whereas past research has focused primarily on distinguishing between distinct categories of spaces (i.e., nature vs. urban), less is known about variability in perceived restorative potential of environments within a particular category of outdoor spaces, such as gardens. Conceptually, gardens are often considered to be restorative spaces and to contain an abundance of natural elements, though there is great variability in how gardens are designed that might impact their restorative potential. One common practice for classifying gardens is along a spectrum ranging from “formal or geometric” to “informal or naturalistic,” which often corresponds to the degree to which built or natural elements are present, respectively. In the current study, we tested whether participants use design informality as a cue to predict perceived restorative potential of different gardens. Participants viewed a set of gardens and rated each on design informality, perceived restorative potential, naturalness, and visual appeal. Participants perceived informal gardens to have greater restorative potential than formal gardens. In addition, gardens that were more visually appealing and more natural-looking were perceived to have greater restorative potential than less visually appealing and less natural gardens. These perceptions and precedents are highly relevant for the design of gardens and other similar green spaces intended to provide relief from stress and to foster cognitive restoration. PMID:26903899

  17. Open Source Web-Based Solutions for Disseminating and Analyzing Flood Hazard Information at the Community Level

    NASA Astrophysics Data System (ADS)

    Santillan, M. M.-M.; Santillan, J. R.; Morales, E. M. O.

    2017-09-01

    We discuss in this paper the development, including the features and functionalities, of an open source web-based flood hazard information dissemination and analytical system called "Flood EViDEns". Flood EViDEns is short for "Flood Event Visualization and Damage Estimations", an application that was developed by the Caraga State University to address the needs of local disaster managers in the Caraga Region in Mindanao, Philippines in accessing timely and relevant flood hazard information before, during and after the occurrence of flood disasters at the community (i.e., barangay and household) level. The web application made use of various free/open source web mapping and visualization technologies (GeoServer, GeoDjango, OpenLayers, Bootstrap), various geospatial datasets including LiDAR-derived elevation and information products, hydro-meteorological data, and flood simulation models to visualize various scenarios of flooding and its associated damages to infrastructures. The Flood EViDEns application facilitates the release and utilization of this flood-related information through a user-friendly front-end interface consisting of a web map and tables. A public version of the application can be accessed at http://121.97.192.11:8082/. The application is currently being expanded to cover additional sites in Mindanao, Philippines through the "Geo-informatics for the Systematic Assessment of Flood Effects and Risks for a Resilient Mindanao" or the "Geo-SAFER Mindanao" Program.

  19. High resolution renderings and interactive visualization of the 2006 Huntington Beach experiment

    NASA Astrophysics Data System (ADS)

    Im, T.; Nayak, A.; Keen, C.; Samilo, D.; Matthews, J.

    2006-12-01

    The Visualization Center at the Scripps Institution of Oceanography investigates innovative ways to graphically represent interactive 3D virtual landscapes and to produce high resolution, high quality renderings of Earth sciences data and the sensors and instruments used to collect the data. Among the Visualization Center's most recent work is the visualization of the Huntington Beach experiment, a study launched in July 2006 by the Southern California Ocean Observing System (http://www.sccoos.org/) to record and synthesize data of the Huntington Beach coastal region. Researchers and students at the Visualization Center created visual presentations that combine bathymetric data provided by SCCOOS with USGS aerial photography and with 3D polygonal models of sensors created in Maya into an interactive 3D scene using the Fledermaus suite of visualization tools (http://www.ivs3d.com). In addition, the Visualization Center has produced high definition (HD) animations of SCCOOS sensor instruments (e.g. REMUS, drifters, spray glider, nearshore mooring, OCSD/USGS mooring and CDIP mooring) using the Maya modeling and animation software, rendered over multiple nodes of the OptIPuter Visualization Cluster at Scripps. These visualizations are aimed at providing researchers with a broader context of sensor locations relative to geologic characteristics, at promoting their use as an educational resource for informal education settings and increasing public awareness, and at aiding researchers' proposals and presentations. These visualizations are available for download on the Visualization Center website at http://siovizcenter.ucsd.edu/sccoos/hb2006.php.

  20. Auditory and visual interactions between the superior and inferior colliculi in the ferret.

    PubMed

    Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K

    2015-05-01

    The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than retinally driven superficial SC layers and earlier than deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
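    The imaginary coherence analysis mentioned in this record exploits the fact that volume conduction is effectively instantaneous (zero phase lag), so it contributes only to the real part of the cross-spectrum; a nonzero imaginary part implies a time-lagged, genuine interaction. A minimal sketch, with an assumed trials-by-samples layout and made-up data:

```python
import numpy as np

def imaginary_coherence(x, y):
    """Imaginary part of coherency between two sets of trials.

    x, y: arrays of shape (n_trials, n_samples). Returns one value per
    frequency bin of the one-sided spectrum.
    """
    X = np.fft.rfft(x, axis=1)
    Y = np.fft.rfft(y, axis=1)
    sxy = (X * np.conj(Y)).mean(axis=0)   # cross-spectrum across trials
    sxx = (np.abs(X) ** 2).mean(axis=0)   # auto-spectra
    syy = (np.abs(Y) ** 2).mean(axis=0)
    return np.imag(sxy / np.sqrt(sxx * syy))

# Sanity check: a pure zero-lag (volume-conducted) copy of the same signal
# yields zero imaginary coherence at every frequency.
rng = np.random.default_rng(2)
trials = rng.normal(size=(50, 256))
icoh = imaginary_coherence(trials, 0.5 * trials)
```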

  1. The sensory components of high-capacity iconic memory and visual working memory.

    PubMed

    Bradley, Claire; Pearson, Joel

    2012-01-01

    Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more "high-level" alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of three different visual features (color, orientation and motion) across a range of durations from 0 to 6 s. We found that the amount of information stored in iconic memory is smaller for motion than for color or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ∼2 s. Further experiments showed that performance for the 10 items at 1 s was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory, and an effortful "lower-capacity" visual working memory.

  2. Visual perceptual and handwriting skills in children with Developmental Coordination Disorder.

    PubMed

    Prunty, Mellissa; Barnett, Anna L; Wilmut, Kate; Plumb, Mandy

    2016-10-01

    Children with Developmental Coordination Disorder demonstrate a lack of automaticity in handwriting as measured by pauses during writing. Deficits in visual perception have been proposed in the literature as underlying mechanisms of handwriting difficulties in children with DCD. The aim of this study was to examine whether correlations exist between measures of visual perception and visual motor integration with measures of the handwriting product and process in children with DCD. The performance of twenty-eight 8- to 14-year-old children who met the DSM-5 criteria for DCD was compared with 28 typically developing (TD) age- and gender-matched controls. The children completed the Developmental Test of Visual Motor Integration (VMI) and the Test of Visual Perceptual Skills (TVPS). Group comparisons were made, correlations were computed between the visual perceptual measures and handwriting measures, and the sensitivity and specificity examined. The DCD group performed below the TD group on the VMI and TVPS. There were no significant correlations between the VMI or TVPS and any of the handwriting measures in the DCD group. In addition, both tests demonstrated low sensitivity. Clinicians should exercise caution in using visual perceptual measures to inform them about handwriting skill in children with DCD. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
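    The minimum norm approach compared in this record resolves the ill-posed inverse problem by picking, among all source patterns that reproduce the sensor data, the one with the smallest L2 norm. A toy sketch with a random lead field (real MEG analyses derive the lead field from a head model; all sizes and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_sources = 20, 100

# Hypothetical lead field mapping source currents to MEG sensor signals.
L = rng.normal(size=(n_sensors, n_sources))

# Ground truth: a single active source produces the measured sensor data.
true_sources = np.zeros(n_sources)
true_sources[3] = 1.0
y = L @ true_sources

# Minimum-norm estimate: Tikhonov-regularized pseudoinverse,
# x_hat = L^T (L L^T + lambda I)^{-1} y.
lam = 1e-6
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)
# x_hat reproduces the sensor data while minimizing total source power.
```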

  4. Auditory, visual, and auditory-visual perceptions of emotions by young children with hearing loss versus children with normal hearing.

    PubMed

    Most, Tova; Michaelis, Hilit

    2012-08-01

    This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensorineural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.

  5. SmartAdP: Visual Analytics of Large-scale Taxi Trajectories for Selecting Billboard Locations.

    PubMed

    Liu, Dongyu; Weng, Di; Li, Yuhong; Bao, Jie; Zheng, Yu; Qu, Huamin; Wu, Yingcai

    2017-01-01

    The problem of formulating solutions immediately and comparing them rapidly for billboard placements has plagued advertising planners for a long time, owing to the lack of efficient tools for in-depth analyses to make informed decisions. In this study, we attempt to employ visual analytics that combines the state-of-the-art mining and visualization techniques to tackle this problem using large-scale GPS trajectory data. In particular, we present SmartAdP, an interactive visual analytics system that addresses two major challenges: finding good solutions in a huge solution space and comparing the solutions in a visual and intuitive manner. An interactive framework that integrates a novel visualization-driven data mining model enables advertising planners to effectively and efficiently formulate good candidate solutions. In addition, we propose a set of coupled visualizations: a solution view with metaphor-based glyphs to visualize the correlation between different solutions; a location view to display billboard locations in a compact manner; and a ranking view to present multi-typed rankings of the solutions. This system has been demonstrated using case studies with a real-world dataset and domain-expert interviews. Our approach can be adapted for other location selection problems such as selecting locations of retail stores or restaurants using trajectory data.

  6. Projector-Based Augmented Reality for Quality Inspection of Scanned Objects

    NASA Astrophysics Data System (ADS)

    Kern, J.; Weinmann, M.; Wursthorn, S.

    2017-09-01

    After scanning or reconstructing the geometry of objects, we need to inspect the result of our work. Are there any parts missing? Is every detail covered in the desired quality? We typically do this by looking at the resulting point clouds or meshes of our objects on-screen. What if we could see the information directly visualized on the object itself? Augmented reality is the generic term for bringing virtual information into our real environment. In our paper, we show how we can project any 3D information like thematic visualizations or specific monitoring information with reference to our object onto the object's surface itself, thus augmenting it with additional information. For small objects that could for instance be scanned in a laboratory, we propose a low-cost method involving a projector-camera system to solve this task. The user only needs a calibration board with coded fiducial markers to calibrate the system and to estimate the projector's pose later on for projecting textures with information onto the object's surface. Changes within the projected 3D information or of the projector's pose will be applied in real-time. Our results clearly reveal that such a simple setup will deliver a good quality of the augmented information.

  7. Right-hemispheric dominance for visual remapping in humans.

    PubMed

    Pisella, L; Alahyane, N; Blangero, A; Thery, F; Blanc, S; Pelisson, D

    2011-02-27

    We review evidence showing a right-hemispheric dominance for visuo-spatial processing and representation in humans. Accordingly, visual disorganization symptoms (intuitively related to remapping impairments) are observed in both neglect and constructional apraxia. More specifically, we review findings from the intervening saccade paradigm in humans--and present additional original data--which suggest a specific role of the asymmetrical network at the temporo-parietal junction (TPJ) in the right hemisphere in visual remapping: following damage to the right dorsal posterior parietal cortex (PPC) as well as part of the corpus callosum connecting the PPC to the frontal lobes, patient OK in a double-step saccadic task exhibited an impairment when the second saccade had to be directed rightward. This singular and lateralized deficit cannot result solely from the patient's cortical lesion and, therefore, we propose that it is due to his callosal lesion that may specifically interrupt the interhemispheric transfer of information necessary to execute accurate rightward saccades towards a remapped target location. This suggests a specialized right-hemispheric network for visuo-spatial remapping that subsequently transfers target location information to downstream planning regions, which are symmetrically organized.

  8. Visual Control for Multirobot Organized Rendezvous.

    PubMed

    Lopez-Nicolas, G; Aranda, M; Mezouar, Y; Sagues, C

    2012-08-01

    This paper addresses the problem of visual control of a set of mobile robots. In our framework, the perception system consists of an uncalibrated flying camera performing an unknown general motion. The robots are assumed to undergo planar motion considering nonholonomic constraints. The goal of the control task is to drive the multirobot system to a desired rendezvous configuration relying solely on visual information given by the flying camera. The desired multirobot configuration is defined with an image of the set of robots in that configuration without any additional information. We propose a homography-based framework relying on the homography induced by the multirobot system that gives a desired homography to be used to define the reference target, and a new image-based control law that drives the robots to the desired configuration by imposing a rigidity constraint. This paper extends our previous work, and the main contributions are that the motion constraints on the flying camera are removed, the control law is improved by reducing the number of required steps, the stability of the new control law is proved, and real experiments are provided to validate the proposal.

  9. Illusory conjunctions in simultanagnosia: coarse coding of visual feature location?

    PubMed

    McCrea, Simon M; Buxbaum, Laurel J; Coslett, H Branch

    2006-01-01

    Simultanagnosia is a disorder characterized by an inability to see more than one object at a time. We report a simultanagnosic patient (ED) with bilateral posterior infarctions who produced frequent illusory conjunctions on tasks involving form and surface features (e.g., a red T) and form alone. ED also produced "blend" errors in which features of one familiar perceptual unit appeared to migrate to another familiar perceptual unit (e.g., "RO" read as "PQ"). ED often misread scrambled letter strings as a familiar word (e.g., "hmoe" read as "home"). Finally, ED's success in reporting two letters in an array was inversely related to the distance between the letters. These findings are consistent with the hypothesis that ED's illusory conjunctions reflect coarse coding of visual feature location that is ameliorated in part by top-down information from object and word recognition systems; the findings are also consistent, however, with Treisman's Feature Integration Theory. Finally, the data provide additional support for the claim that the dorsal parieto-occipital cortex is implicated in the binding of visual feature information.

  10. Task relevance modulates the cortical representation of feature conjunctions in the target template.

    PubMed

    Reeder, Reshanne R; Hanke, Michael; Pollmann, Stefan

    2017-07-03

    Little is known about the cortical regions involved in representing task-related content in preparation for visual task performance. Here we used representational similarity analysis (RSA) to investigate the BOLD response pattern similarity between task relevant and task irrelevant feature dimensions during conjunction viewing and target template maintenance prior to visual search. Subjects were cued to search for a spatial frequency (SF) or orientation of a Gabor grating and we measured BOLD signal during cue and delay periods before the onset of a search display. RSA of delay period activity revealed that widespread regions in frontal, posterior parietal, and occipitotemporal cortices showed general representational differences between task relevant and task irrelevant dimensions (e.g., orientation vs. SF). In contrast, RSA of cue period activity revealed sensory-related representational differences between cue images (regardless of task) at the occipital pole and additionally in the frontal pole. Our data show that task and sensory information are represented differently during viewing and during target template maintenance, and that task relevance modulates the representation of visual information across the cortex.

  11. The effect of extended sensory range via the EyeCane sensory substitution device on the characteristics of visionless virtual navigation.

    PubMed

    Maidenbaum, Shachar; Levy-Tzedek, Shelly; Chebat, Daniel Robert; Namer-Furstenberg, Rinat; Amedi, Amir

    2014-01-01

    Mobility training programs that teach the blind to navigate unknown places with a White-Cane significantly improve their mobility. However, what is the effect of new assistive technologies, offering more information to the blind user, on the underlying premises of these programs such as navigation patterns? We developed the virtual-EyeCane, a minimalistic sensory substitution device translating single-point distance into auditory cues identical to the EyeCane's in the real world. We compared performance in virtual environments when using the virtual-EyeCane, a virtual-White-Cane, no device and visual navigation. We show that the characteristics of virtual-EyeCane navigation differ from navigation with a virtual-White-Cane or no device, and that virtual-EyeCane users complete more levels successfully, taking shorter paths and with fewer collisions than these groups, and we demonstrate the relative similarity of virtual-EyeCane and visual navigation patterns. This suggests that additional distance information indeed changes navigation patterns from virtual-White-Cane use, and brings them closer to visual navigation.
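    The core idea of translating a single-point distance reading into an auditory cue can be sketched as a simple mapping. The abstract does not specify the EyeCane's actual mapping, so everything below (a linear distance-to-beep-rate rule, the range, the rate ceiling) is an illustrative assumption:

```python
def distance_to_cue(distance_m, max_range_m=5.0):
    """Map a single-point distance reading to an auditory cue rate.

    Closer obstacles produce faster beeping, a common convention in
    sensory-substitution devices; the real EyeCane's mapping may differ.
    Returns beeps per second, or 0.0 beyond the assumed device range.
    """
    if distance_m >= max_range_m:
        return 0.0
    # Linear mapping: 10 beeps/s at contact, falling to 0 at max range.
    return 10.0 * (1.0 - distance_m / max_range_m)

# A near obstacle beeps faster than a far one; out of range is silent.
rates = [distance_to_cue(d) for d in (0.5, 2.5, 6.0)]
```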

  12. Working memory load and the retro-cue effect: A diffusion model account.

    PubMed

    Shepherdson, Peter; Oberauer, Klaus; Souza, Alessandra S

    2018-02-01

    Retro-cues (i.e., cues presented between the offset of a memory array and the onset of a probe) have consistently been found to enhance performance in working memory tasks, sometimes ameliorating the deleterious effects of increased memory load. However, the mechanism by which retro-cues exert their influence remains a matter of debate. To inform this debate, we applied a hierarchical diffusion model to data from 4 change detection experiments using single item, location-specific probes (i.e., a local recognition task) with either visual or verbal memory stimuli. Results showed that retro-cues enhanced the quality of information entering the decision process (especially for visual stimuli) and decreased the time spent on nondecisional processes. Further, cues interacted with memory load primarily on nondecision time, decreasing or abolishing load effects. To explain these findings, we propose an account whereby retro-cues act primarily to reduce the time taken to access the relevant representation in memory upon probe presentation, and in addition protect cued representations from visual interference. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
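
    The diffusion model's decomposition can be illustrated with a toy simulation: drift rate captures evidence quality, boundary separation captures caution, and nondecision time is simply added to every response. The parameter values below are illustrative, not fitted values from the experiments:

```python
import numpy as np

def simulate_ddm(v, a, t0, n=500, dt=0.001, s=1.0, seed=1):
    """Simulate a two-boundary drift-diffusion model: evidence x drifts at
    rate v with noise s until it hits 0 or a; t0 is added as nondecision
    time. Returns response times and upper-boundary-hit flags."""
    rng = np.random.default_rng(seed)
    rts, upper = [], []
    for _ in range(n):
        x, t = a / 2.0, 0.0              # unbiased start between boundaries
        while 0.0 < x < a:
            x += v * dt + s * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t + t0)
        upper.append(x >= a)
    return np.array(rts), np.array(upper)

# A retro-cue that speeds access to the probed representation shows up as
# a smaller t0; with identical noise (same seed), mean RT drops by exactly
# the nondecision-time difference.
rt_cued, _ = simulate_ddm(v=1.5, a=1.0, t0=0.25)
rt_uncued, _ = simulate_ddm(v=1.5, a=1.0, t0=0.40)
print(round(rt_uncued.mean() - rt_cued.mean(), 3))  # 0.15
```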

  13. Attention, working memory, and phenomenal experience of WM content: memory levels determined by different types of top-down modulation.

    PubMed

    Jacob, Jane; Jacobs, Christianne; Silvanto, Juha

    2015-01-01

    What is the role of top-down attentional modulation in consciously accessing working memory (WM) content? In influential WM models, information can exist in different states, determined by allocation of attention; placing the original memory representation in the center of focused attention gives rise to conscious access. Here we discuss various lines of evidence indicating that such attentional modulation is not sufficient for memory content to be phenomenally experienced. We propose that, in addition to attentional modulation of the memory representation, another type of top-down modulation is required: suppression of all incoming visual information, via inhibition of early visual cortex. In this view, there are three distinct memory levels, as a function of the top-down control associated with them: (1) Nonattended, nonconscious associated with no attentional modulation; (2) attended, phenomenally nonconscious memory, associated with attentional enhancement of the actual memory trace; (3) attended, phenomenally conscious memory content, associated with enhancement of the memory trace and top-down suppression of all incoming visual input.

  14. Right-hemispheric dominance for visual remapping in humans

    PubMed Central

    Pisella, L.; Alahyane, N.; Blangero, A.; Thery, F.; Blanc, S.; Pelisson, D.

    2011-01-01

    We review evidence showing a right-hemispheric dominance for visuo-spatial processing and representation in humans. Accordingly, visual disorganization symptoms (intuitively related to remapping impairments) are observed in both neglect and constructional apraxia. More specifically, we review findings from the intervening saccade paradigm in humans—and present additional original data—which suggest a specific role of the asymmetrical network at the temporo-parietal junction (TPJ) in the right hemisphere in visual remapping: following damage to the right dorsal posterior parietal cortex (PPC) as well as part of the corpus callosum connecting the PPC to the frontal lobes, patient OK in a double-step saccadic task exhibited an impairment when the second saccade had to be directed rightward. This singular and lateralized deficit cannot result solely from the patient's cortical lesion and, therefore, we propose that it is due to his callosal lesion that may specifically interrupt the interhemispheric transfer of information necessary to execute accurate rightward saccades towards a remapped target location. This suggests a specialized right-hemispheric network for visuo-spatial remapping that subsequently transfers target location information to downstream planning regions, which are symmetrically organized. PMID:21242144

  15. A bibliometric and visual analysis of global geo-ontology research

    NASA Astrophysics Data System (ADS)

    Li, Lin; Liu, Yu; Zhu, Haihong; Ying, Shen; Luo, Qinyao; Luo, Heng; Kuai, Xi; Xia, Hui; Shen, Hang

    2017-02-01

    In this paper, the results of a bibliometric and visual analysis of geo-ontology research articles collected from the Web of Science (WOS) database between 1999 and 2014 are presented. The numbers of national institutions and published papers are visualized and a global research heat map is drawn, illustrating an overview of global geo-ontology research. In addition, we present a chord diagram of countries and perform a visual cluster analysis of a knowledge co-citation network of references, disclosing potential academic communities and identifying key points, main research areas, and future research trends. The International Journal of Geographical Information Science, Progress in Human Geography, and Computers & Geosciences are the most active journals. The USA makes the largest contribution to geo-ontology research by virtue of its highest numbers of independent and collaborative papers, and its dominance was also confirmed in the country chord diagram. The majority of institutions are in the USA, Western Europe, and Eastern Asia. Wuhan University, the University of Munster, and the Chinese Academy of Sciences are notable geo-ontology institutions. Keywords such as "Semantic Web," "GIS," and "space" have attracted a great deal of attention. "Semantic granularity in ontology-driven geographic information systems," "Ontologies in support of activities in geographical space," and "A translation approach to portable ontology specifications" have the highest cited centrality. Geographical space, computer-human interaction, and ontology cognition are the three main research areas of geo-ontology. The semantic mismatch between the producers and users of ontology data, as well as error propagation in interdisciplinary and cross-linguistic data reuse, needs to be solved. In addition, the development of geo-ontology modeling primitives based on OWL (Web Ontology Language) and methods to automatically rework data in the Semantic Web are needed. Furthermore, the topological relations between geographical entities still require further study.
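
    The co-citation relation underlying such knowledge networks is simple to compute: two references are co-cited whenever they appear together in one paper's reference list. A minimal sketch with made-up reference labels:

```python
from itertools import combinations
from collections import Counter

def cocitation_counts(reference_lists):
    """Count how often each pair of references is cited together
    (co-citation), the edge weight used in bibliometric network maps."""
    counts = Counter()
    for refs in reference_lists:
        for a, b in combinations(sorted(set(refs)), 2):
            counts[(a, b)] += 1
    return counts

# Hypothetical reference lists of three citing papers
papers = [["Guarino1998", "Gruber1993"],
          ["Gruber1993", "Fonseca2002", "Guarino1998"],
          ["Fonseca2002", "Gruber1993"]]
c = cocitation_counts(papers)
print(c[("Gruber1993", "Guarino1998")])  # 2
```

Clustering the resulting weighted graph is what reveals the academic communities mentioned above.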

  16. Understanding the Role of the Modality Principle in Multimedia Learning Environments

    ERIC Educational Resources Information Center

    Oberfoell, A.; Correia, A.

    2016-01-01

    The modality principle states that low-experience learners more successfully understand information that uses narration rather than on-screen text. This is due to the idea that on-screen text may produce a cognitive overload if it is accompanied by other visual elements. Other studies provided additional data and support for the modality principle…

  17. Using Maps in Web Analytics to Evaluate the Impact of Web-Based Extension Programs

    ERIC Educational Resources Information Center

    Veregin, Howard

    2015-01-01

    Maps can be a valuable addition to the Web analytics toolbox for Extension programs that use the Web to disseminate information. Extension professionals use Web analytics tools to evaluate program impacts. Maps add a unique perspective through visualization and analysis of geographic patterns and their relationships to other variables. Maps can…

  18. Bringing Meaning to Numbers: The Impact of Evaluative Categories on Decisions

    ERIC Educational Resources Information Center

    Peters, Ellen; Dieckmann, Nathan F.; Vastfjall, Daniel; Mertz, C. K.; Slovic, Paul; Hibbard, Judith H.

    2009-01-01

    Decision makers are often quite poor at using numeric information in decisions. The results of 4 experiments demonstrate that a manipulation of evaluative meaning (i.e., the extent to which an attribute can be mapped onto a good/bad scale; this manipulation is accomplished through the addition of visual boundary lines and evaluative labels to a…

  19. GRIDVIEW: Recent Improvements in Research and Education Software for Exploring Mars Topography

    NASA Technical Reports Server (NTRS)

    Roark, J. H.; Frey, H. V.

    2001-01-01

    We have developed an Interactive Data Language (IDL) scientific visualization software tool called GRIDVIEW that can be used in research and education to explore and study the most recent Mars Orbiter Laser Altimeter (MOLA) gridded topography of Mars (http://denali.gsfc.nasa.gov/mola_pub/gridview). Additional information is contained in the original extended abstract.

  20. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interactions. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to present 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention involuntarily motivated by an affective mechanism can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERPs), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it offers high information transfer rates, users need only a few minutes to learn to control the BCI system, and few electrodes are required to obtain brainwave signals reliable enough to capture users' intention. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.
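
    The SSVEP detection step reduces to measuring spectral power at each stimulus flicker frequency and picking the strongest. A minimal sketch on synthetic data; the sampling rate, frequencies, and amplitudes are illustrative:

```python
import numpy as np

def ssvep_power(signal, fs, target_hz):
    """Spectral power of an EEG channel at a flicker frequency; an
    SSVEP-BCI selects the command whose stimulation frequency shows
    the strongest response."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - target_hz))]

fs = 250                                   # Hz, sampling rate
t = np.arange(0, 2, 1.0 / fs)              # 2 s synthetic recording
rng = np.random.default_rng(3)
eeg = 0.8 * np.sin(2 * np.pi * 12 * t) + rng.normal(0, 1, t.size)
p12, p17 = ssvep_power(eeg, fs, 12), ssvep_power(eeg, fs, 17)
# p12 >> p17: a panel flickering at 12 Hz would be identified as attended
```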

  1. Brain signal complexity rises with repetition suppression in visual learning.

    PubMed

    Lafontaine, Marc Philippe; Lacourse, Karine; Lina, Jean-Marc; McIntosh, Anthony R; Gosselin, Frédéric; Théoret, Hugo; Lippé, Sarah

    2016-06-21

    Neuronal activity associated with visual processing of an unfamiliar face gradually diminishes when it is viewed repeatedly. This process, known as repetition suppression (RS), is involved in the acquisition of familiarity. Current models suggest that RS results from interactions between visual information processing areas located in the occipito-temporal cortex and higher order areas, such as the dorsolateral prefrontal cortex (DLPFC). Brain signal complexity, which reflects information dynamics of cortical networks, has been shown to increase as unfamiliar faces become familiar. However, the complementarity of RS and increases in brain signal complexity has yet to be demonstrated within the same measurements. We hypothesized that RS and an increase in brain signal complexity occur simultaneously during learning of unfamiliar faces. Further, we expected alteration of DLPFC function by transcranial direct current stimulation (tDCS) to modulate RS and brain signal complexity over the occipito-temporal cortex. Participants underwent three tDCS conditions in random order: right anodal/left cathodal, right cathodal/left anodal and sham. Following tDCS, participants learned unfamiliar faces, while an electroencephalogram (EEG) was recorded. Results revealed RS over occipito-temporal electrode sites during learning, reflected by a decrease in signal energy, a measure of amplitude. Simultaneously, as signal energy decreased, brain signal complexity, as estimated with multiscale entropy (MSE), increased. In addition, prefrontal tDCS modulated brain signal complexity over the right occipito-temporal cortex during the first presentation of faces. These results suggest that although RS may reflect a brain mechanism essential to learning, complementary processes, reflected by increases in brain signal complexity, may be instrumental in the acquisition of novel visual information. Such processes likely involve long-range coordinated activity between prefrontal and lower order visual areas. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
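
    Multiscale entropy can be sketched compactly: coarse-grain the signal at increasing scales, then take the sample entropy of each coarse-grained series. The parameters below (m = 2, r = 0.2·SD) are the conventional defaults, not necessarily the study's:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy: -ln of the conditional probability that runs
    matching for m points (within tolerance r) also match for m+1."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
        return (np.sum(d <= r) - len(templ)) / 2   # exclude self-matches

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, max_scale=3):
    """Coarse-grain the signal at each scale, then take sample entropy."""
    x = np.asarray(x, dtype=float)
    vals = []
    for s in range(1, max_scale + 1):
        n = len(x) // s
        vals.append(sample_entropy(x[:n * s].reshape(n, s).mean(axis=1)))
    return vals

rng = np.random.default_rng(7)
mse = multiscale_entropy(rng.normal(size=600))
# A rise in these per-scale values over learning is the complexity
# increase reported over occipito-temporal sites
```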

  2. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter.

    PubMed

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-17

    The research on hand gestures has attracted many image processing-related studies, as a hand gesture intuitively conveys the motional intention of a human. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have explored the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand under varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while compensating for the thermal sensor's low resolution and halo effect and the visual sensor's weakness against illumination changes. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity.
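
    The "joint kernels of spatial adjacency and thermal range similarity" are the defining ingredients of a joint (cross) bilateral filter. A didactic 1-D sketch, not the paper's exact filter; all values are made up:

```python
import numpy as np

def joint_bilateral_1d(visual, thermal, radius=2, sigma_s=1.0, sigma_r=10.0):
    """Joint (cross) bilateral filtering on a toy 1-D signal: smoothing
    weights combine spatial adjacency with range similarity computed on
    the *thermal* guide, so thermal edges are preserved in the visual
    output."""
    out = np.empty(len(visual), dtype=float)
    for i in range(len(visual)):
        lo, hi = max(0, i - radius), min(len(visual), i + radius + 1)
        idx = np.arange(lo, hi)
        w_spatial = np.exp(-((idx - i) ** 2) / (2.0 * sigma_s ** 2))
        w_range = np.exp(-((thermal[idx] - thermal[i]) ** 2) / (2.0 * sigma_r ** 2))
        w = w_spatial * w_range
        out[i] = np.sum(w * visual[idx]) / np.sum(w)
    return out

thermal = np.array([30.0, 30.0, 30.0, 90.0, 90.0, 90.0])  # hand vs. background
visual = np.array([10.0, 12.0, 9.0, 50.0, 52.0, 49.0])    # noisy intensities
filtered = joint_bilateral_1d(visual, thermal)
# The step between samples 2 and 3 survives: thermal similarity gates the
# smoothing, so visual values are not averaged across the thermal edge
```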

  3. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter

    PubMed Central

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-01

    The research on hand gestures has attracted many image processing-related studies, as a hand gesture intuitively conveys the motional intention of a human. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have explored the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection rather than hand gesture recognition. Additionally, the majority of the works that take advantage of multiple modalities (e.g., the combination of a thermal sensor and a visual sensor) adopt simple fusion approaches between the two modalities. As both thermal sensors and visual sensors have their own shortcomings and strengths, we propose a novel joint filter-based hand gesture recognition method to simultaneously exploit the strengths and compensate for the shortcomings of each. Our study is motivated by the investigation of the mutual supplementation between thermal and visual information at a low feature level for the consistent representation of a hand under varying lighting conditions. Accordingly, our proposed method leverages the thermal sensor's stability against luminance and the visual sensor's textural detail, while compensating for the thermal sensor's low resolution and halo effect and the visual sensor's weakness against illumination changes. A conventional region tracking method and a deep convolutional neural network have been leveraged to track the trajectory of a hand gesture and to recognize the hand gesture, respectively. Our experimental results show stability in recognizing a hand gesture against varying lighting conditions based on the contribution of the joint kernels of spatial adjacency and thermal range similarity. PMID:28106716

  4. MoZis: mobile zoo information system: a case study for the city of Osnabrueck

    NASA Astrophysics Data System (ADS)

    Michel, Ulrich

    2007-10-01

    This paper describes a new project of the Institute for Geoinformatics and Remote Sensing, funded by the German Federal Foundation for the Environment (DBU, Deutsche Bundesstiftung Umwelt www.dbu.de). The goal of this project is to develop a mobile zoo information system for Pocket PCs and Smart phones. Visitors of the zoo will be able to use their own mobile devices or use Pocket PCs, which could be borrowed from the zoo to navigate around the zoo's facilities. The system will also provide additional multimedia based information such as audio-based material, animal video clips, and maps of their natural habitat. People could have access to the project at the zoo via wireless local area network or by downloading the necessary files using a home internet connection. Our software environment consists of proprietary and non-proprietary software solutions in order to make it as flexible as possible. Our first prototype was developed with Visual Studio 2003 and Visual Basic.Net.

  5. From Objects to Landmarks: The Function of Visual Location Information in Spatial Navigation

    PubMed Central

    Chan, Edgar; Baumann, Oliver; Bellgrove, Mark A.; Mattingley, Jason B.

    2012-01-01

    Landmarks play an important role in guiding navigational behavior. A host of studies in the last 15 years has demonstrated that environmental objects can act as landmarks for navigation in different ways. In this review, we propose a parsimonious four-part taxonomy for conceptualizing object location information during navigation. We begin by outlining object properties that appear to be important for a landmark to attain salience. We then systematically examine the different functions of objects as navigational landmarks based on previous behavioral and neuroanatomical findings in rodents and humans. Evidence is presented showing that single environmental objects can function as navigational beacons, or act as associative or orientation cues. In addition, we argue that extended surfaces or boundaries can act as landmarks by providing a frame of reference for encoding spatial information. The present review provides a concise taxonomy of the use of visual objects as landmarks in navigation and should serve as a useful reference for future research into landmark-based spatial navigation. PMID:22969737

  6. A recurrent neural model for proto-object based contour integration and figure-ground segregation.

    PubMed

    Hu, Brian; Niebur, Ernst

    2017-12-01

    Visual processing of objects makes use of both feedforward and feedback streams of information. However, the nature of feedback signals is largely unknown, as is the identity of the neuronal populations in lower visual areas that receive them. Here, we develop a recurrent neural model to address these questions in the context of contour integration and figure-ground segregation. A key feature of our model is the use of grouping neurons whose activity represents tentative objects ("proto-objects") based on the integration of local feature information. Grouping neurons receive input from an organized set of local feature neurons, and project modulatory feedback to those same neurons. Additionally, inhibition at both the local feature level and the object representation level biases the interpretation of the visual scene in agreement with principles from Gestalt psychology. Our model explains several sets of neurophysiological results (Zhou et al. Journal of Neuroscience, 20(17), 6594-6611 2000; Qiu et al. Nature Neuroscience, 10(11), 1492-1499 2007; Chen et al. Neuron, 82(3), 682-694 2014), and makes testable predictions about the influence of neuronal feedback and attentional selection on neural responses across different visual areas. Our model also provides a framework for understanding how object-based attention is able to select both objects and the features associated with them.
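
    The model's key motif, a grouping neuron that pools local feature activity and returns modulatory (multiplicative) feedback to the same neurons, can be sketched in a toy one-step form. The weights and gains below are illustrative, not the paper's parameters:

```python
import numpy as np

def proto_object_step(features, w, gain=1.0):
    """One recurrent step of a grouping neuron: pool local feature
    activity via weights w, then send modulatory (multiplicative)
    feedback to the same feature neurons."""
    g = float(w @ features)               # grouping activity: weighted pooling
    feedback = 1.0 + gain * g * w         # per-neuron modulatory gain
    return g, features * feedback

features = np.array([0.2, 0.9, 0.8, 0.1])  # local contour-fragment responses
w = np.array([0.0, 0.5, 0.5, 0.0])         # this proto-object groups units 1-2
g, modulated = proto_object_step(features, w)
# Grouped fragments (units 1 and 2) are enhanced; ungrouped ones are
# unchanged, mirroring the object-based enhancement the model uses to
# explain border-ownership and attention effects
```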

  7. Rocinante, a virtual collaborative visualizer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, M.J.; Ice, L.G.

    1996-12-31

    With the goal of improving the ability of people around the world to share the development and use of intelligent systems, Sandia National Laboratories' Intelligent Systems and Robotics Center is developing new Virtual Collaborative Engineering (VCE) and Virtual Collaborative Control (VCC) technologies. A key area of VCE and VCC research is in shared visualization of virtual environments. This paper describes a Virtual Collaborative Visualizer (VCV), named Rocinante, that Sandia developed for VCE and VCC applications. Rocinante allows multiple participants to simultaneously view dynamic geometrically-defined environments. Each viewer can exclude extraneous detail or include additional information in the scene as desired. Shared information can be saved and later replayed in a stand-alone mode. Rocinante automatically scales visualization requirements with computer system capabilities. Models with 30,000 polygons and 4 Megabytes of texture display at 12 to 15 frames per second (fps) on an SGI Onyx and at 3 to 8 fps (without texture) on Indigo 2 Extreme computers. In its networked mode, Rocinante synchronizes its local geometric model with remote simulators and sensory systems by monitoring data transmitted through UDP packets. Rocinante's scalability and performance make it an ideal VCC tool. Users throughout the country can monitor robot motions and the thinking behind their motion planners and simulators.

  8. A Web-based Visualization System for Three Dimensional Geological Model using Open GIS

    NASA Astrophysics Data System (ADS)

    Nemoto, T.; Masumoto, S.; Nonogaki, S.

    2017-12-01

    A three dimensional geological model is important information in various fields such as environmental assessment, urban planning, resource development, waste management and disaster mitigation. In this study, we have developed a web-based visualization system for 3D geological models using free and open source software. The system has been successfully implemented by integrating the web mapping engine MapServer and the geographic information system GRASS. MapServer plays the role of mapping horizontal cross sections of the 3D geological model and a topographic map. GRASS provides the core components for management, analysis and image processing of the geological model. Online access to GRASS functions has been enabled using PyWPS, an implementation of the WPS (Web Processing Service) standard of the Open Geospatial Consortium (OGC). The system has two main functions. The two dimensional visualization function allows users to generate horizontal and vertical cross sections of the 3D geological model. These images are delivered via the OGC WMS (Web Map Service) and WPS standards. Horizontal cross sections are overlaid on the topographic map. A vertical cross section is generated by clicking a start point and an end point on the map. The three dimensional visualization function allows users to visualize geological boundary surfaces and a panel diagram, which can be viewed from various angles by mouse operation. WebGL is utilized for 3D visualization; it is a web technology that brings hardware-accelerated 3D graphics to the browser without requiring additional software to be installed. The geological boundary surfaces can be downloaded to incorporate the geologic structure into CAD designs and into models for various simulations. This study was supported by JSPS KAKENHI Grant Number JP16K00158.
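
    The cross-section images such a viewer fetches travel over standard OGC WMS GetMap requests, which are just parameterized URLs. A sketch of building one; the endpoint and layer name are hypothetical, while the query parameters are the standard WMS 1.3.0 set:

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, size=(512, 512), crs="EPSG:4326"):
    """Build a standard OGC WMS 1.3.0 GetMap request URL, the kind of
    call a web viewer issues to fetch a horizontal cross-section image."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "CRS": crs,
        "BBOX": ",".join(map(str, bbox)),      # minx,miny,maxx,maxy
        "WIDTH": size[0], "HEIGHT": size[1],
        "FORMAT": "image/png", "TRANSPARENT": "TRUE",
    }
    return base + "?" + urlencode(params)

# Hypothetical MapServer endpoint and cross-section layer name
url = wms_getmap_url("https://example.org/cgi-bin/mapserv",
                     "geomodel_section_z100", (34.5, 135.0, 35.0, 135.5))
```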

  9. Congruent representation of visual and acoustic space in the superior colliculus of the echolocating bat Phyllostomus discolor.

    PubMed

    Hoffmann, Susanne; Vega-Zuniga, Tomas; Greiter, Wolfgang; Krabichler, Quirin; Bley, Alexandra; Matthes, Mariana; Zimmer, Christiane; Firzlaff, Uwe; Luksch, Harald

    2016-11-01

    The midbrain superior colliculus (SC) commonly features a retinotopic representation of visual space in its superficial layers, which is congruent with maps formed by multisensory neurons and motor neurons in its deep layers. Information flow between layers is suggested to enable the SC to mediate goal-directed orienting movements. While most mammals strongly rely on vision for orienting, some species such as echolocating bats have developed alternative strategies, which raises the question how sensory maps are organized in these animals. We probed the visual system of the echolocating bat Phyllostomus discolor and found that binocular high acuity vision is frontally oriented and thus aligned with the biosonar system, whereas monocular visual fields cover a large area of peripheral space. For the first time in echolocating bats, we could show that in contrast with other mammals, visual processing is restricted to the superficial layers of the SC. The topographic representation of visual space, however, followed the general mammalian pattern. In addition, we found a clear topographic representation of sound azimuth in the deeper collicular layers, which was congruent with the superficial visual space map and with a previously documented map of orienting movements. Especially for bats navigating at high speed in densely structured environments, it is vitally important to transfer and coordinate spatial information between sensors and motor systems. Here, we demonstrate first evidence for the existence of congruent maps of sensory space in the bat SC that might serve to generate a unified representation of the environment to guide motor actions. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  10. Use of Context in Video Processing

    NASA Astrophysics Data System (ADS)

    Wu, Chen; Aghajan, Hamid

    Interpreting an event or a scene based on visual data often requires additional contextual information. Contextual information may be obtained from different sources. In this chapter, we discuss two broad categories of contextual sources: environmental context and user-centric context. Environmental context refers to information derived from domain knowledge or from concurrently sensed effects in the area of operation. User-centric context refers to information obtained and accumulated from the user. Both types of context can include static or dynamic contextual elements. Examples from a smart home environment are presented to illustrate how different types of contextual data can be applied to aid the decision-making process.

  11. Audio-visual integration during speech perception in prelingually deafened Japanese children revealed by the McGurk effect.

    PubMed

    Tona, Risa; Naito, Yasushi; Moroto, Saburo; Yamamoto, Rinko; Fujiwara, Keizo; Yamazaki, Hiroshi; Shinohara, Shogo; Kikuchi, Masahiro

    2015-12-01

    We investigated the McGurk effect in profoundly deafened Japanese children with cochlear implants (CI) and in normal-hearing children, in order to identify how children with profound deafness using CI established audiovisual integration during the speech acquisition period. Twenty-four prelingually deafened children with CI and 12 age-matched normal-hearing children participated in this study. Responses to audiovisual stimuli were compared between deafened children and normal-hearing controls. Additionally, responses of the children with CI younger than 6 years of age were compared with those of the children with CI at least 6 years of age at the time of the test. Responses to stimuli combining auditory labials and visual non-labials were significantly different between deafened children with CI and normal-hearing controls (p<0.05). Additionally, the McGurk effect tended to be more induced in deafened children older than 6 years of age than in their younger counterparts. The McGurk effect was more significantly induced in prelingually deafened Japanese children with CI than in normal-hearing, age-matched Japanese children. Despite having good speech-perception skills and auditory input through their CI from early childhood, deafened children may use more visual information in speech perception than normal-hearing children. As children using CI need to communicate based on insufficient speech signals coded by CI, additional activities of higher-order brain function may be necessary to compensate for the incomplete auditory input. This study provided information on the influence of deafness on the development of audiovisual integration related to speech, which could contribute to our further understanding of the strategies used in spoken language communication by prelingually deafened children. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  12. Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.

    PubMed

    Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit

    2017-06-01

    We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images, and benign/malignant classification of clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, an area under the curve of 0.876 was obtained for enlarged mediastinum identification, compared to 0.855 using classical BoVW (p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (p-value = 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (p-value 0.001). We demonstrated that classification based on an informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
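
    The mutual information criterion for ranking visual words can be sketched directly from its definition: score each word by the MI between its occurrence indicator and the class label, and keep the top-ranked words. A sketch of the criterion, not the paper's code; the toy labels and indicators are made up:

```python
import numpy as np

def word_class_mi(word_present, labels):
    """Mutual information (bits) between a binary visual-word indicator
    and image class labels; a task-driven dictionary keeps the words
    with the highest MI."""
    mi = 0.0
    for w in (0, 1):
        for c in np.unique(labels):
            p_wc = np.mean((word_present == w) & (labels == c))
            p_w = np.mean(word_present == w)
            p_c = np.mean(labels == c)
            if p_wc > 0:
                mi += p_wc * np.log2(p_wc / (p_w * p_c))
    return mi

labels = np.array([0, 0, 0, 1, 1, 1])
informative = np.array([0, 0, 0, 1, 1, 1])   # occurrence predicts class
irrelevant = np.array([0, 0, 1, 0, 0, 1])    # independent of class
mi_good = word_class_mi(informative, labels)
mi_bad = word_class_mi(irrelevant, labels)
print(mi_good, mi_bad)  # 1.0 0.0
```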

  13. Intraoperative adaptation and visualization of preoperative risk analyses for oncologic liver surgery

    NASA Astrophysics Data System (ADS)

    Hansen, Christian; Schlichting, Stefan; Zidowitz, Stephan; Köhn, Alexander; Hindennach, Milo; Kleemann, Markus; Peitgen, Heinz-Otto

    2008-03-01

Tumor resections from the liver are complex surgical interventions. With recent planning software, risk analyses based on individual liver anatomy can be carried out preoperatively. However, additional tumors within the liver are frequently detected during oncological interventions using intraoperative ultrasound. These tumors are not visible in preoperative data, and their existence may require changes to the resection strategy. We propose a novel method that allows an intraoperative risk analysis adaptation by merging newly detected tumors with a preoperative risk analysis. To determine the exact positions and sizes of these tumors we make use of a navigated ultrasound system. A fast communication protocol enables our application to exchange crucial data with this navigation system during an intervention. A further motivation for our work is to improve the visual presentation of a moving ultrasound plane within a complex 3D planning model including vascular systems, tumors, and organ surfaces. When the ultrasound plane is located inside the liver, occlusion of the plane by the planning model is an inevitable problem for the applied visualization technique. Our system allows the surgeon to focus on the ultrasound image while perceiving context-relevant planning information. To improve orientation ability and distance perception, we include additional depth cues by applying new illustrative visualization algorithms. Preliminary evaluations confirm that, when tumors are detected intraoperatively, adapting the risk analysis is beneficial for precise liver surgery. Our new GPU-based visualization approach provides the surgeon with a simultaneous visualization of planning models and navigated 2D ultrasound data while minimizing occlusion problems.

  14. Situational analysis of communication of HIV and AIDS information to persons with visual impairment: a case of Kang'onga Production Centre in Ndola, Zambia.

    PubMed

    Chintende, Grace Nsangwe; Sitali, Doreen; Michelo, Charles; Mweemba, Oliver

    2017-04-04

Despite increases in health promotion and educational programs on HIV and AIDS, visually impaired persons continue to lack information and communication on HIV and AIDS. The underlying factors that create these information and communication gaps have not been fully explored in Zambia. This situational analysis of HIV and AIDS information dissemination to persons with visual impairment at Kang'onga Production Centre in Ndola was therefore conducted. The study ran from December 2014 to May 2015. A qualitative case study design was employed. The study used two focus group discussions, one with males and one with females; each group comprised twelve participants. Eight in-depth interviews with visually impaired persons and five interviews with key informants working with visually impaired persons were conducted. Data were analysed thematically using NVivo 8 software. Ethical clearance was obtained from Excellence in Research Ethics and Science (reference number 2014-May-030). It was established that most visually impaired people lacked knowledge of the cause, transmission and treatment of HIV and AIDS, resulting in misconceptions. It was revealed that health promoters and people working with the visually impaired did not have specific HIV and AIDS information programs in Zambia. Further, it was discovered that the media, information education and communication materials, and health education were the channels through which the visually impaired accessed HIV and AIDS information. Discrimination, stigma, lack of employment opportunities, lack of funding and poverty were among the many challenges that visually impaired persons faced in accessing HIV and AIDS information. Integration of the visually impaired in HIV and AIDS programs would increase funding for economic empowerment and health promotion, improving the communication of HIV and AIDS information.
The study showed that visually impaired persons in Zambia are not catered for in the dissemination of HIV and AIDS information. The available information is not user-friendly because it is in unreadable formats, which increases the potential for misinformation and limits access. This calls for innovation in how HIV and AIDS health-promotion information is communicated to this target group.

  15. Listening to data from the 2011 magnitude 9.0 Tohoku-Oki, Japan, earthquake

    NASA Astrophysics Data System (ADS)

    Peng, Z.; Aiken, C.; Kilb, D. L.; Shelly, D. R.; Enescu, B.

    2011-12-01

It is important for seismologists to effectively convey information about catastrophic earthquakes, such as the magnitude 9.0 earthquake in Tohoku-Oki, Japan, to a general audience that may not be well-versed in the language of earthquake seismology. Given recent technological advances, the previous approach of using static "snapshot" images to represent earthquake data is becoming obsolete, and the favored way to explain complex wave propagation inside the solid earth, and interactions among earthquakes, is now visualization that includes auditory information. Here, we convert seismic data into visualizations that include sound, a process known as 'audification', or continuous 'sonification'. By combining seismic auditory and visual information, static "snapshots" of earthquake data come to life, allowing pitch and amplitude changes to be heard in sync with viewed frequency changes in the seismograms and associated spectrograms. In addition, these visual and auditory media allow the viewer to relate earthquake-generated seismic signals to familiar sounds such as thunder, popcorn popping, rattlesnakes, and firecrackers. We present a free software package that uses simple MATLAB tools and Apple Inc.'s QuickTime Pro to automatically convert seismic data into auditory movies. We focus on examples of seismic data from the 2011 Tohoku-Oki earthquake. These examples range from near-field strong-motion recordings that demonstrate the complex source process of the mainshock and early aftershocks, to far-field broadband recordings that capture remotely triggered deep tremor and shallow earthquakes. We envision that audification of seismic data, which is geared toward a broad range of audiences, will be increasingly used to convey information about notable earthquakes and about research frontiers in earthquake seismology (tremor, dynamic triggering, etc.).
Our overarching goal is that sharing our new visualization tool will foster an interest in seismology, not just among young scientists but among people of all ages.
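The audification described above, playing a seismogram back fast enough that ground-motion frequencies shift into the audible range, can be sketched with standard-library tools. The speed-up factor, file name, and function below are illustrative assumptions; the authors' actual package uses MATLAB and QuickTime Pro.

```python
import math
import struct
import wave

def audify(samples, recording_rate_hz, speedup, path):
    """Write a seismic trace as a mono 16-bit WAV whose playback rate is
    `speedup` times the recording rate, so sub-audible ground motion becomes
    audible (e.g., a 2 Hz signal played at speedup=100 is heard at 200 Hz)."""
    peak = max(abs(s) for s in samples) or 1.0
    # Normalize to full 16-bit range and pack as little-endian shorts.
    frames = b"".join(
        struct.pack("<h", int(32767 * s / peak)) for s in samples
    )
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(int(recording_rate_hz * speedup))
        wav.writeframes(frames)

# Example: a decaying 2 Hz wavelet sampled at 100 Hz (20 s of data),
# sped up 100x into 0.2 s of audio.
rate = 100.0
trace = [math.exp(-t / 100.0) * math.sin(2 * math.pi * 2.0 * t / rate)
         for t in range(2000)]
audify(trace, rate, speedup=100, path="tohoku_demo.wav")
```

The design choice is simply to reinterpret the sample clock: no resampling is needed, because raising the declared frame rate compresses time and raises every frequency in the trace by the same factor.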

  16. Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE.

    PubMed

    Demelo, Jonathan; Parsons, Paul; Sedig, Kamran

    2017-02-02

    Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Our strategies appear to be valuable in addressing the two major problems in exploratory search. Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts. 
©Jonathan Demelo, Paul Parsons, Kamran Sedig. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 02.02.2017.

  17. Posterior parietal cortex mediates encoding and maintenance processes in change blindness.

    PubMed

    Tseng, Philip; Hsu, Tzu-Yu; Muggleton, Neil G; Tzeng, Ovid J L; Hung, Daisy L; Juan, Chi-Hung

    2010-03-01

    It is commonly accepted that right posterior parietal cortex (PPC) plays an important role in updating spatial representations, directing visuospatial attention, and planning actions. However, recent studies suggest that right PPC may also be involved in processes that are more closely associated with our visual awareness as its activation level positively correlates with successful conscious change detection (Beck, D.M., Rees, G., Frith, C.D., & Lavie, N. (2001). Neural correlates of change detection and change blindness. Nature Neuroscience, 4, 645-650.). Furthermore, disruption of its activity increases the occurrences of change blindness, thus suggesting a causal role for right PPC in change detection (Beck, D.M., Muggleton, N., Walsh, V., & Lavie, N. (2006). Right parietal cortex plays a critical role in change blindness. Cerebral Cortex, 16, 712-717.). In the context of a 1-shot change detection paradigm, we applied transcranial magnetic stimulation (TMS) during different time intervals to elucidate the temporally precise involvement of PPC in change detection. While subjects attempted to detect changes between two image sets separated by a brief time interval, TMS was applied either during the presentation of picture 1 when subjects were encoding and maintaining information into visual short-term memory, or picture 2 when subjects were retrieving information relating to picture 1 and comparing it to picture 2. Our results show that change blindness occurred more often when TMS was applied during the viewing of picture 1, which implies that right PPC plays a crucial role in the processes of encoding and maintaining information in visual short-term memory. In addition, since our stimuli did not involve changes in spatial locations, our findings also support previous studies suggesting that PPC may be involved in the processes of encoding non-spatial visual information (Todd, J.J. & Marois, R. (2004). 
Capacity limit of visual short-term memory in human posterior parietal cortex. Nature, 428, 751-754.).

  18. Ontology-Driven Search and Triage: Design of a Web-Based Visual Interface for MEDLINE

    PubMed Central

    2017-01-01

    Background Diverse users need to search health and medical literature to satisfy open-ended goals such as making evidence-based decisions and updating their knowledge. However, doing so is challenging due to at least two major difficulties: (1) articulating information needs using accurate vocabulary and (2) dealing with large document sets returned from searches. Common search interfaces such as PubMed do not provide adequate support for exploratory search tasks. Objective Our objective was to improve support for exploratory search tasks by combining two strategies in the design of an interactive visual interface by (1) using a formal ontology to help users build domain-specific knowledge and vocabulary and (2) providing multi-stage triaging support to help mitigate the information overload problem. Methods We developed a Web-based tool, Ontology-Driven Visual Search and Triage Interface for MEDLINE (OVERT-MED), to test our design ideas. We implemented a custom searchable index of MEDLINE, which comprises approximately 25 million document citations. We chose a popular biomedical ontology, the Human Phenotype Ontology (HPO), to test our solution to the vocabulary problem. We implemented multistage triaging support in OVERT-MED, with the aid of interactive visualization techniques, to help users deal with large document sets returned from searches. Results Formative evaluation suggests that the design features in OVERT-MED are helpful in addressing the two major difficulties described above. Using a formal ontology seems to help users articulate their information needs with more accurate vocabulary. In addition, multistage triaging combined with interactive visualizations shows promise in mitigating the information overload problem. Conclusions Our strategies appear to be valuable in addressing the two major problems in exploratory search. 
Although we tested OVERT-MED with a particular ontology and document collection, we anticipate that our strategies can be transferred successfully to other contexts. PMID:28153818

  19. Seeing is believing: on the use of image databases for visually exploring plant organelle dynamics.

    PubMed

    Mano, Shoji; Miwa, Tomoki; Nishikawa, Shuh-ichi; Mimura, Tetsuro; Nishimura, Mikio

    2009-12-01

Organelle dynamics vary dramatically depending on cell type, developmental stage and environmental stimuli, so that various parameters, such as size, number and behavior, are required for the description of the dynamics of each organelle. Imaging techniques are superior to other techniques for describing organelle dynamics because these parameters are visually exhibited. Therefore, as the results can be seen immediately, investigators can more easily grasp organelle dynamics. At present, imaging techniques are emerging as fundamental tools in plant organelle research, and the development of new methodologies to visualize organelles and the improvement of analytical tools and equipment have allowed the large-scale generation of image and movie data. Accordingly, image databases that accumulate information on organelle dynamics are an increasingly indispensable part of modern plant organelle research. In addition, image databases are potentially rich data sources for computational analyses, as image and movie data deposited in the databases contain valuable and significant information, such as size, number, length and velocity. Computational analytical tools support image-based data mining, such as segmentation, quantification and statistical analyses, to extract biologically meaningful information from each database and combine them to construct models. In this review, we outline the image databases that are dedicated to plant organelle research and present their potential as resources for image-based computational analyses.

  20. Probabilistic visual and electromagnetic data fusion for robust drift-free sequential mosaicking: application to fetoscopy

    PubMed Central

    Tella-Amo, Marcel; Peter, Loic; Shakir, Dzhoshkun I.; Deprest, Jan; Iglesias, Juan Eugenio; Ourselin, Sebastien

    2018-01-01

The most effective treatment for twin-to-twin transfusion syndrome is laser photocoagulation of the shared vascular anastomoses in the placenta. Vascular connections are extremely challenging to locate due to their caliber and the reduced field-of-view of the fetoscope. Therefore, mosaicking techniques are beneficial to expand the scene, facilitate navigation, and allow vessel photocoagulation decision-making. Local vision-based mosaicking algorithms inherently drift over time due to the use of pairwise transformations. We propose the use of an electromagnetic tracker (EMT) sensor mounted at the tip of the fetoscope to obtain camera pose measurements, which we incorporate into a probabilistic framework with frame-to-frame visual information to achieve globally consistent sequential mosaics. We parametrize the problem in terms of plane and camera poses constrained by EMT measurements to enforce global consistency while leveraging pairwise image relationships in a sequential fashion through the use of local bundle adjustment. We show that our approach is drift-free and performs similarly to state-of-the-art global alignment techniques like bundle adjustment albeit with much less computational burden. Additionally, we propose a version of bundle adjustment that uses EMT information. We demonstrate the robustness to EMT noise and loss of visual information and evaluate mosaics for synthetic, phantom-based and ex vivo datasets. PMID:29487889

  1. Impaired visual recognition of biological motion in schizophrenia.

    PubMed

    Kim, Jejoong; Doop, Mikisha L; Blake, Randolph; Park, Sohee

    2005-09-15

    Motion perception deficits have been suggested to be an important feature of schizophrenia but the behavioral consequences of such deficits are unknown. Biological motion refers to the movements generated by living beings. The human visual system rapidly and effortlessly detects and extracts socially relevant information from biological motion. A deficit in biological motion perception may have significant consequences for detecting and interpreting social information. Schizophrenia patients and matched healthy controls were tested on two visual tasks: recognition of human activity portrayed in point-light animations (biological motion task) and a perceptual control task involving detection of a grouped figure against the background noise (global-form task). Both tasks required detection of a global form against background noise but only the biological motion task required the extraction of motion-related information. Schizophrenia patients performed as well as the controls in the global-form task, but were significantly impaired on the biological motion task. In addition, deficits in biological motion perception correlated with impaired social functioning as measured by the Zigler social competence scale [Zigler, E., Levine, J. (1981). Premorbid competence in schizophrenia: what is being measured? Journal of Consulting and Clinical Psychology, 49, 96-105.]. The deficit in biological motion processing, which may be related to the previously documented deficit in global motion processing, could contribute to abnormal social functioning in schizophrenia.

  2. Advertising for AIDS drugs: it's everywhere lately, but is it helpful?

    PubMed

    Mirken, B

    1998-07-01

The recent proliferation of direct-to-consumer (DTC) advertisements for prescription drugs, including HIV/AIDS drugs, can present a confusing and unrealistic picture of treatment options and outcomes; however, supporters claim that it stimulates awareness of treatment options and encourages dialogue between doctors and patients. The Food and Drug Administration (FDA), which regulates DTC advertising, requires that manufacturers disclose a complete description of benefits and adverse effects, similar to the information on the product's label. This balance of information applies to the written portion of the ad, but not to the visual message, which is arguably the most powerful part of the advertisement. Many of the visuals in AIDS drug advertisements misrepresent the effects of the virus on patients. However, the FDA has not yet developed restrictions that would control the visual component of advertisements so that it accurately depicts the downside of the disease. Additionally, manufacturers whose advertisements match the wording on their labels have an easier time getting acceptance from the FDA, but use more technical language than the typical layperson can understand. Reliance on the FDA-approved label description prevents drug companies from promoting off-label uses of their products, and also does not allow for the constantly changing information about a drug's effectiveness.

  3. Incidental category learning and cognitive load in a multisensory environment across childhood.

    PubMed

    Broadbent, H J; Osborne, T; Rea, M; Peng, A; Mareschal, D; Kirkham, N Z

    2018-06-01

Multisensory information has been shown to facilitate learning (Bahrick & Lickliter, 2000; Broadbent, White, Mareschal, & Kirkham, 2017; Jordan & Baker, 2011; Shams & Seitz, 2008). However, although research has examined the modulating effect of unisensory and multisensory distractors on multisensory processing, the extent to which a concurrent unisensory or multisensory cognitive load task would interfere with or support multisensory learning remains unclear. This study examined the role of concurrent task modality on incidental category learning in 6- to 10-year-olds. Participants were engaged in a multisensory learning task while also performing either a unisensory (visual or auditory only) or multisensory (audiovisual) concurrent task (CT). We found that engaging in an auditory CT led to poorer performance on incidental category learning compared with an audiovisual or visual CT, across groups. In 6-year-olds, category test performance was at chance in the auditory-only CT condition, suggesting auditory concurrent tasks may interfere with learning in younger children, but the addition of visual information may serve to focus attention. These findings provide novel insight into the use of multisensory concurrent information on incidental learning. Implications for the deployment of multisensory learning tasks within education across development and developmental changes in modality dominance and ability to switch flexibly across modalities are discussed.

  4. What’s the Gist? The influence of schemas on the neural correlates underlying true and false memories

    PubMed Central

    Webb, Christina E.; Turney, Indira C.; Dennis, Nancy A.

    2017-01-01

    The current study used a novel scene paradigm to investigate the role of encoding schemas on memory. Specifically, the study examined the influence of a strong encoding schema on retrieval of both schematic and non-schematic information, as well as false memories for information associated with the schema. Additionally, the separate roles of recollection and familiarity in both veridical and false memory retrieval were examined. The study identified several novel results. First, while many common neural regions mediated both schematic and non-schematic retrieval success, schematic recollection exhibited greater activation in visual cortex and hippocampus, regions commonly shown to mediate detailed retrieval. More effortful cognitive control regions in the prefrontal and parietal cortices, on the other hand, supported non-schematic recollection, while lateral temporal cortices supported familiarity-based retrieval of non-schematic items. Second, both true and false recollection, as well as familiarity, were mediated by activity in left middle temporal gyrus, a region associated with semantic processing and retrieval of schematic gist. Moreover, activity in this region was greater for both false recollection and false familiarity, suggesting a greater reliance on lateral temporal cortices for retrieval of illusory memories, irrespective of memory strength. Consistent with previous false memory studies, visual cortex showed increased activity for true compared to false recollection, suggesting that visual cortices are critical for distinguishing between previously viewed targets and related lures at retrieval. Additionally, the absence of common visual activity between true and false retrieval suggests that, unlike previous studies utilizing visual stimuli, when false memories are predicated on schematic gist and not perceptual overlap, there is little reliance on visual processes during false memory retrieval. 
Finally, the medial temporal lobe exhibited an interesting dissociation, showing greater activity for true compared to false recollection, as well as for false compared to true familiarity. These results provided an indication as to how different types of items are retrieved when studied within a highly schematic context. Results both replicate and extend previous true and false memory findings, supporting the Fuzzy Trace Theory. PMID:27697593

  5. What's the gist? The influence of schemas on the neural correlates underlying true and false memories.

    PubMed

    Webb, Christina E; Turney, Indira C; Dennis, Nancy A

    2016-12-01

    The current study used a novel scene paradigm to investigate the role of encoding schemas on memory. Specifically, the study examined the influence of a strong encoding schema on retrieval of both schematic and non-schematic information, as well as false memories for information associated with the schema. Additionally, the separate roles of recollection and familiarity in both veridical and false memory retrieval were examined. The study identified several novel results. First, while many common neural regions mediated both schematic and non-schematic retrieval success, schematic recollection exhibited greater activation in visual cortex and hippocampus, regions commonly shown to mediate detailed retrieval. More effortful cognitive control regions in the prefrontal and parietal cortices, on the other hand, supported non-schematic recollection, while lateral temporal cortices supported familiarity-based retrieval of non-schematic items. Second, both true and false recollection, as well as familiarity, were mediated by activity in left middle temporal gyrus, a region associated with semantic processing and retrieval of schematic gist. Moreover, activity in this region was greater for both false recollection and false familiarity, suggesting a greater reliance on lateral temporal cortices for retrieval of illusory memories, irrespective of memory strength. Consistent with previous false memory studies, visual cortex showed increased activity for true compared to false recollection, suggesting that visual cortices are critical for distinguishing between previously viewed targets and related lures at retrieval. Additionally, the absence of common visual activity between true and false retrieval suggests that, unlike previous studies utilizing visual stimuli, when false memories are predicated on schematic gist and not perceptual overlap, there is little reliance on visual processes during false memory retrieval. 
Finally, the medial temporal lobe exhibited an interesting dissociation, showing greater activity for true compared to false recollection, as well as for false compared to true familiarity. These results provided an indication as to how different types of items are retrieved when studied within a highly schematic context. Results both replicate and extend previous true and false memory findings, supporting the Fuzzy Trace Theory.

  6. Integrating natural language processing and web GIS for interactive knowledge domain visualization

    NASA Astrophysics Data System (ADS)

    Du, Fangming

    Recent years have seen a powerful shift towards data-rich environments throughout society. This has extended to a change in how the artifacts and products of scientific knowledge production can be analyzed and understood. Bottom-up approaches are on the rise that combine access to huge amounts of academic publications with advanced computer graphics and data processing tools, including natural language processing. Knowledge domain visualization is one of those multi-technology approaches, with its aim of turning domain-specific human knowledge into highly visual representations in order to better understand the structure and evolution of domain knowledge. For example, network visualizations built from co-author relations contained in academic publications can provide insight on how scholars collaborate with each other in one or multiple domains, and visualizations built from the text content of articles can help us understand the topical structure of knowledge domains. These knowledge domain visualizations need to support interactive viewing and exploration by users. Such spatialization efforts are increasingly looking to geography and GIS as a source of metaphors and practical technology solutions, even when non-georeferenced information is managed, analyzed, and visualized. When it comes to deploying spatialized representations online, web mapping and web GIS can provide practical technology solutions for interactive viewing of knowledge domain visualizations, from panning and zooming to the overlay of additional information. This thesis presents a novel combination of advanced natural language processing - in the form of topic modeling - with dimensionality reduction through self-organizing maps and the deployment of web mapping/GIS technology towards intuitive, GIS-like, exploration of a knowledge domain visualization. 
A complete workflow is proposed and implemented that processes any corpus of input text documents into a map form and leverages a web application framework to let users explore knowledge domain maps interactively. This workflow is implemented and demonstrated for a data set of more than 66,000 conference abstracts.
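The dimensionality-reduction step in this workflow, projecting per-document topic vectors onto a two-dimensional map with a self-organizing map, can be sketched as follows. This is a minimal toy SOM on synthetic vectors; the thesis's actual corpus, topic model, and map configuration are not reproduced here.

```python
import numpy as np

def train_som(vectors, grid_w=8, grid_h=8, epochs=50, seed=0):
    """Minimal self-organizing map: each grid cell holds a weight vector;
    each document pulls its best-matching unit (BMU) and that unit's
    neighbours toward it, yielding a 2-D layout of the topic space."""
    rng = np.random.default_rng(seed)
    dim = vectors.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    for epoch in range(epochs):
        lr = 0.5 * (1 - epoch / epochs)                # decaying learning rate
        radius = max(grid_w, grid_h) / 2 * (1 - epoch / epochs) + 1e-9
        for v in vectors:
            d = ((weights - v) ** 2).sum(axis=2)       # distance to every cell
            by, bx = np.unravel_index(d.argmin(), d.shape)
            # Gaussian neighbourhood centred on the BMU
            g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * radius ** 2))
            weights += lr * g[..., None] * (v - weights)
    return weights

def map_documents(vectors, weights):
    """Return the (row, col) grid cell for each document's topic vector."""
    cells = []
    for v in vectors:
        d = ((weights - v) ** 2).sum(axis=2)
        cells.append(np.unravel_index(d.argmin(), d.shape))
    return cells
```

On real data, each grid cell can then be rendered as a tile and served through a web-mapping stack, giving users the pan-and-zoom exploration the thesis describes.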

  7. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    PubMed

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. 
Visual working memory allows for maintaining such visual information in the mind's eye after termination of its retinal input. It is hypothesized that information maintained in visual working memory relies on the same neural populations that process visual input. Accordingly, the content of visual working memory is known to affect our conscious perception of concurrent visual input. Here, we demonstrate for the first time that visual input elicits an enhanced neural response when it matches the content of visual working memory, both in terms of signal strength and information content. Copyright © 2017 the authors.

  8. Information visualization: Beyond traditional engineering

    NASA Technical Reports Server (NTRS)

    Thomas, James J.

    1995-01-01

This presentation addresses a different aspect of the human-computer interface; specifically, the human-information interface. This interface will be dominated by an emerging technology called Information Visualization (IV). IV goes beyond the traditional views of computer graphics and CAD, and enables new approaches for engineering. IV specifically must visualize text, documents, sound, images, and video in such a way that the human can rapidly interact with and understand the content structure of information entities. IV is the interactive visual interface between humans and their information resources.

  9. Differential processing of binocular and monocular gloss cues in human visual cortex

    PubMed Central

    Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W.

    2016-01-01

    The visual impression of an object's surface reflectance (“gloss”) relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. PMID:26912596

  10. Auditory short-term memory activation during score reading.

    PubMed

    Simoens, Veerle L; Tervaniemi, Mari

    2013-01-01

    Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as it is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion according to which during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback.

  11. Auditory Short-Term Memory Activation during Score Reading

    PubMed Central

    Simoens, Veerle L.; Tervaniemi, Mari

    2013-01-01

    Performing music on the basis of reading a score requires reading ahead of what is being played in order to anticipate the necessary actions to produce the notes. Score reading thus not only involves the decoding of a visual score and the comparison to the auditory feedback, but also short-term storage of the musical information due to the delay of the auditory feedback during reading ahead. This study investigates the mechanisms of encoding of musical information in short-term memory during such a complicated procedure. There were three parts in this study. First, professional musicians participated in an electroencephalographic (EEG) experiment to study the slow wave potentials during a time interval of short-term memory storage in a situation that requires cross-modal translation and short-term storage of visual material to be compared with delayed auditory material, as it is the case in music score reading. This delayed visual-to-auditory matching task was compared with delayed visual-visual and auditory-auditory matching tasks in terms of EEG topography and voltage amplitudes. Second, an additional behavioural experiment was performed to determine which type of distractor would be the most interfering with the score reading-like task. Third, the self-reported strategies of the participants were also analyzed. All three parts of this study point towards the same conclusion according to which during music score reading, the musician most likely first translates the visual score into an auditory cue, probably starting around 700 or 1300 ms, ready for storage and delayed comparison with the auditory feedback. PMID:23326487

  12. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    PubMed

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  13. Egocentric Direction and Position Perceptions are Dissociable Based on Only Static Lane Edge Information

    PubMed Central

    Nakashima, Ryoichi; Iwai, Ritsuko; Ueda, Sayako; Kumada, Takatsune

    2015-01-01

When observers perceive several objects in a space, they should, at the same time, effectively perceive their own position as a viewpoint. However, little is known about observers’ percepts of their own spatial location based on the visual scene information viewed from them. Previous studies indicate that two distinct visual spatial processes exist in the locomotion situation: egocentric position perception and egocentric direction perception. Those studies examined such perceptions in information-rich visual environments where much dynamic and static visual information was available. This study examined these two perceptions in information-impoverished environments, including only static lane edge information (i.e., limited information). We investigated the visual factors associated with static lane edge information that may affect these perceptions. In particular, we examined the effects of two factors on egocentric direction and position perceptions. One is the “uprightness factor”: “far” visual information is seen at a higher location than “near” visual information. The other is the “central vision factor”: observers usually view “far” visual information using central vision (i.e., foveal vision) but “near” visual information using peripheral vision. Experiment 1 examined the effect of the “uprightness factor” using normal and inverted road images. Experiment 2 examined the effect of the “central vision factor” using normal and transposed road images, in which the upper half of the normal image was presented below the lower half. Experiment 3 aimed to replicate the results of Experiments 1 and 2. Results showed that egocentric direction perception is disrupted by image inversion or image transposition, whereas egocentric position perception is robust against these image transformations. That is, both the “uprightness” and “central vision” factors are important for egocentric direction perception, but not for egocentric position perception. Therefore, the two visual spatial perceptions about observers’ own viewpoints are fundamentally dissociable. PMID:26648895

  14. Coordinating Council. Seventh Meeting: Acquisitions

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The theme for this NASA Scientific and Technical Information Program Coordinating Council meeting was Acquisitions. In addition to NASA and the NASA Center for AeroSpace Information (CASI) presentations, the report contains fairly lengthy visuals about acquisitions at the Defense Technical Information Center. CASI's acquisitions program and CASI's proactive acquisitions activity were described. There was a presentation on the document evaluation process at CASI. A talk about open literature scope and coverage at the American Institute of Aeronautics and Astronautics was also given. An overview of the STI Program's Acquisitions Experts Committee was given next. Finally acquisitions initiatives of the NASA STI program were presented.

  15. Common neural substrates for visual working memory and attention.

    PubMed

    Mayer, Jutta S; Bittner, Robert A; Nikolić, Danko; Bledowski, Christoph; Goebel, Rainer; Linden, David E J

    2007-06-01

Humans are severely limited in their ability to memorize visual information over short periods of time. Selective attention has been implicated as a limiting factor. Here we used functional magnetic resonance imaging to test the hypothesis that this limitation is due to common neural resources shared by visual working memory (WM) and selective attention. We combined visual search and delayed discrimination of complex objects and independently modulated the demands on selective attention and WM encoding. Participants were presented with a search array and performed easy or difficult visual search in order to encode one or three complex objects into visual WM. Overlapping activation for attention-demanding visual search and WM encoding was observed in distributed posterior and frontal regions. In the right prefrontal cortex and bilateral insula, blood oxygen-level-dependent activation additively increased with increased WM load and attentional demand. Conversely, several visual, parietal and premotor areas showed overlapping activation for the two task components and were severely reduced in their WM load response under the condition with high attentional demand. Regions in the left prefrontal cortex were selectively responsive to WM load. Areas selectively responsive to high attentional demand were found within the right prefrontal and bilateral occipital cortex. These results indicate that encoding into visual WM and visual selective attention rely to a high degree on common neural resources. We propose that competition for resources shared by visual attention and WM encoding can limit processing capabilities in distributed posterior brain regions.

  16. Cue reliability and a landmark stability heuristic determine relative weighting between egocentric and allocentric visual information in memory-guided reach.

    PubMed

    Byrne, Patrick A; Crawford, J Douglas

    2010-06-01

    It is not known how egocentric visual information (location of a target relative to the self) and allocentric visual information (location of a target relative to external landmarks) are integrated to form reach plans. Based on behavioral data from rodents and humans we hypothesized that the degree of stability in visual landmarks would influence the relative weighting. Furthermore, based on numerous cue-combination studies we hypothesized that the reach system would act like a maximum-likelihood estimator (MLE), where the reliability of both cues determines their relative weighting. To predict how these factors might interact we developed an MLE model that weighs egocentric and allocentric information based on their respective reliabilities, and also on an additional stability heuristic. We tested the predictions of this model in 10 human subjects by manipulating landmark stability and reliability (via variable amplitude vibration of the landmarks and variable amplitude gaze shifts) in three reach-to-touch tasks: an egocentric control (reaching without landmarks), an allocentric control (reaching relative to landmarks), and a cue-conflict task (involving a subtle landmark "shift" during the memory interval). Variability from all three experiments was used to derive parameters for the MLE model, which was then used to simulate egocentric-allocentric weighting in the cue-conflict experiment. As predicted by the model, landmark vibration--despite its lack of influence on pointing variability (and thus allocentric reliability) in the control experiment--had a strong influence on egocentric-allocentric weighting. A reduced model without the stability heuristic was unable to reproduce this effect. These results suggest heuristics for extrinsic cue stability are at least as important as reliability for determining cue weighting in memory-guided reaching.
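The reliability-weighted (MLE) combination described in the abstract can be sketched as follows. This is a minimal illustration of inverse-variance cue weighting with a multiplicative stability term on the allocentric cue; the function, variable names, and the way the heuristic scales the weight are assumptions for illustration, not the authors' actual model.

```python
# Hedged sketch of MLE-style cue combination with a landmark-stability
# heuristic. Weights are proportional to each cue's reliability (inverse
# variance); `stability` (0..1) further down-weights the allocentric cue
# when landmarks appear unstable. Illustrative only.

def combine_cues(x_ego, var_ego, x_allo, var_allo, stability=1.0):
    """Return (combined estimate, egocentric weight)."""
    r_ego = 1.0 / var_ego                 # egocentric reliability
    r_allo = stability * (1.0 / var_allo) # allocentric reliability, scaled
    w_ego = r_ego / (r_ego + r_allo)      # normalized weights sum to 1
    return w_ego * x_ego + (1.0 - w_ego) * x_allo, w_ego

# Equal reliabilities, stable landmarks: the cues are weighted 50/50.
est, w = combine_cues(0.0, 1.0, 2.0, 1.0, stability=1.0)  # est == 1.0, w == 0.5

# Vibrating (unstable) landmarks: even with unchanged variances, the
# stability term shifts weight toward the egocentric cue.
est2, w2 = combine_cues(0.0, 1.0, 2.0, 1.0, stability=0.25)
```

Note how the second call captures the paper's key result: landmark vibration changes the weighting without any change in the cues' measured variability.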

  17. Contribution of Visual Information about Ball Trajectory to Baseball Hitting Accuracy

    PubMed Central

    Higuchi, Takatoshi; Nagami, Tomoyuki; Nakata, Hiroki; Watanabe, Masakazu; Isaka, Tadao; Kanosue, Kazuyuki

    2016-01-01

The contribution of visual information about a pitched ball to the accuracy of baseball-bat contact may vary depending on the part of trajectory seen. The purpose of the present study was to examine the relationship between hitting accuracy and the segment of the trajectory of the flying ball that can be seen by the batter. Ten college baseball field players participated in the study. The systematic error and standardized variability of ball-bat contact on the bat coordinate system and pitcher-to-catcher direction when hitting a ball launched from a pitching machine were measured with or without visual occlusion and analyzed using analysis of variance. The visual occlusion timing included occlusion from 150 milliseconds (ms) after the ball release (R+150), occlusion from 150 ms before the expected arrival of the launched ball at the home plate (A-150), and a condition with no occlusion (NO). Twelve trials in each condition were performed using two ball speeds (31.9 m·s-1 and 40.3 m·s-1). Visual occlusion did not affect the mean location of ball-bat contact in the bat’s long axis, short axis, and pitcher-to-catcher directions. Although the magnitude of standardized variability was significantly smaller in the bat’s short axis direction than in the bat’s long axis and pitcher-to-catcher directions (p < 0.001), additional visible time from the R+150 condition to the A-150 and NO conditions resulted in a further decrease in standardized variability only in the bat’s short axis direction (p < 0.05). The results suggested that there is directional specificity in the magnitude of standardized variability with different visible time. The present study also confirmed that the visual information useful for improving hitting accuracy is limited to the later part of the ball trajectory, which is likely due to visuo-motor delay. PMID:26848742

  18. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors

    PubMed Central

Sung, Kyongje; Gordon, Barry

    2018-01-01

Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages. PMID:29558513

  19. Contribution of Visual Information about Ball Trajectory to Baseball Hitting Accuracy.

    PubMed

    Higuchi, Takatoshi; Nagami, Tomoyuki; Nakata, Hiroki; Watanabe, Masakazu; Isaka, Tadao; Kanosue, Kazuyuki

    2016-01-01

The contribution of visual information about a pitched ball to the accuracy of baseball-bat contact may vary depending on the part of trajectory seen. The purpose of the present study was to examine the relationship between hitting accuracy and the segment of the trajectory of the flying ball that can be seen by the batter. Ten college baseball field players participated in the study. The systematic error and standardized variability of ball-bat contact on the bat coordinate system and pitcher-to-catcher direction when hitting a ball launched from a pitching machine were measured with or without visual occlusion and analyzed using analysis of variance. The visual occlusion timing included occlusion from 150 milliseconds (ms) after the ball release (R+150), occlusion from 150 ms before the expected arrival of the launched ball at the home plate (A-150), and a condition with no occlusion (NO). Twelve trials in each condition were performed using two ball speeds (31.9 m·s-1 and 40.3 m·s-1). Visual occlusion did not affect the mean location of ball-bat contact in the bat's long axis, short axis, and pitcher-to-catcher directions. Although the magnitude of standardized variability was significantly smaller in the bat's short axis direction than in the bat's long axis and pitcher-to-catcher directions (p < 0.001), additional visible time from the R+150 condition to the A-150 and NO conditions resulted in a further decrease in standardized variability only in the bat's short axis direction (p < 0.05). The results suggested that there is directional specificity in the magnitude of standardized variability with different visible time. The present study also confirmed that the visual information useful for improving hitting accuracy is limited to the later part of the ball trajectory, which is likely due to visuo-motor delay.

  20. Transcranial direct current stimulation (tDCS) facilitates overall visual search response times but does not interact with visual search task factors.

    PubMed

    Sung, Kyongje; Gordon, Barry

    2018-01-01

Whether transcranial direct current stimulation (tDCS) affects mental functions, and how any such effects arise from its neural effects, continue to be debated. We investigated whether tDCS applied over the visual cortex (Oz) with a vertex (Cz) reference might affect response times (RTs) in a visual search task. We also examined whether any significant tDCS effects would interact with task factors (target presence, discrimination difficulty, and stimulus brightness) that are known to selectively influence one or the other of the two information processing stages posited by current models of visual search. Based on additive factor logic, we expected that the pattern of interactions involving a significant tDCS effect could help us colocalize the tDCS effect to one (or both) of the processing stages. In Experiment 1 (n = 12), anodal tDCS improved RTs significantly; cathodal tDCS produced a nonsignificant trend toward improvement. However, there were no interactions between the anodal tDCS effect and target presence or discrimination difficulty. In Experiment 2 (n = 18), we manipulated stimulus brightness along with target presence and discrimination difficulty. Anodal and cathodal tDCS both produced significant improvements in RTs. Again, the tDCS effects did not interact with any of the task factors. In Experiment 3 (n = 16), electrodes were placed at Cz and on the upper arm, to test for a possible effect of incidental stimulation of the motor regions under Cz. No effect of tDCS on RTs was found. These findings strengthen the case for tDCS having real effects on cerebral information processing. However, these effects did not clearly arise from either of the two processing stages of the visual search process. We suggest that this is because tDCS has a diffuse, pervasive action across the task-relevant neuroanatomical region(s), not a discrete effect in terms of information processing stages.
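The additive-factor logic the abstract relies on can be made concrete with a small numerical sketch. The mean RTs below are hypothetical values chosen for illustration, not data from the study: if tDCS and a task factor (e.g., discrimination difficulty) affect different processing stages, their effects on mean RT add and the 2x2 interaction contrast is zero; a nonzero contrast implicates a shared stage.

```python
# Hedged illustration of additive-factor logic. All RT values (ms) are
# hypothetical. Under pure additivity the interaction contrast is zero.

def interaction_contrast(rt_base, rt_a, rt_b, rt_ab):
    """2x2 interaction contrast: (AB - B) - (A - base).
    rt_base: neither factor; rt_a: factor A only (e.g., tDCS);
    rt_b: factor B only (e.g., hard discrimination); rt_ab: both."""
    return (rt_ab - rt_b) - (rt_a - rt_base)

# Additive case: tDCS speeds RTs by 20 ms regardless of difficulty,
# as observed in the study (no interaction).
additive = interaction_contrast(500, 480, 560, 540)    # -> 0

# Interactive case (not observed): tDCS helps more when the task is hard,
# which would have localized the effect to the discrimination stage.
interactive = interaction_contrast(500, 480, 560, 520)  # -> -20
```

The study's null interactions correspond to the first case, which is why the authors conclude the tDCS effect is diffuse rather than stage-specific.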

  1. Honeybees in a virtual reality environment learn unique combinations of colour and shape.

    PubMed

    Rusch, Claire; Roth, Eatai; Vinauger, Clément; Riffell, Jeffrey A

    2017-10-01

    Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain. © 2017. Published by The Company of Biologists Ltd.

  2. The contribution of foveal and peripheral visual information to ensemble representation of face race.

    PubMed

    Jung, Wonmo; Bülthoff, Isabelle; Armann, Regine G M

    2017-11-01

    The brain can only attend to a fraction of all the information that is entering the visual system at any given moment. One way of overcoming the so-called bottleneck of selective attention (e.g., J. M. Wolfe, Võ, Evans, & Greene, 2011) is to make use of redundant visual information and extract summarized statistical information of the whole visual scene. Such ensemble representation occurs for low-level features of textures or simple objects, but it has also been reported for complex high-level properties. While the visual system has, for example, been shown to compute summary representations of facial expression, gender, or identity, it is less clear whether perceptual input from all parts of the visual field contributes equally to the ensemble percept. Here we extend the line of ensemble-representation research into the realm of race and look at the possibility that ensemble perception relies on weighting visual information differently depending on its origin from either the fovea or the visual periphery. We find that observers can judge the mean race of a set of faces, similar to judgments of mean emotion from faces and ensemble representations in low-level domains of visual processing. We also find that while peripheral faces seem to be taken into account for the ensemble percept, far more weight is given to stimuli presented foveally than peripherally. Whether this precision weighting of information stems from differences in the accuracy with which the visual system processes information across the visual field or from statistical inferences about the world needs to be determined by further research.

  3. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation

    PubMed Central

    Lusk, Laina G.; Mitchel, Aaron D.

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959

  4. Electrophysiological spatiotemporal dynamics during implicit visual threat processing.

    PubMed

    DeLaRosa, Bambi L; Spence, Jeffrey S; Shakal, Scott K M; Motes, Michael A; Calley, Clifford S; Calley, Virginia I; Hart, John; Kraut, Michael A

    2014-11-01

    Numerous studies have found evidence for corticolimbic theta band electroencephalographic (EEG) oscillations in the neural processing of visual stimuli perceived as threatening. However, varying temporal and topographical patterns have emerged, possibly due to varying arousal levels of the stimuli. In addition, recent studies suggest neural oscillations in delta, theta, alpha, and beta-band frequencies play a functional role in information processing in the brain. This study implemented a data-driven PCA based analysis investigating the spatiotemporal dynamics of electroencephalographic delta, theta, alpha, and beta-band frequencies during an implicit visual threat processing task. While controlling for the arousal dimension (the intensity of emotional activation), we found several spatial and temporal differences for threatening compared to nonthreatening visual images. We detected an early posterior increase in theta power followed by a later frontal increase in theta power, greatest for the threatening condition. There was also a consistent left lateralized beta desynchronization for the threatening condition. Our results provide support for a dynamic corticolimbic network, with theta and beta band activity indexing processes pivotal in visual threat processing. Published by Elsevier Inc.

  5. Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets

    PubMed Central

    Gratzl, Samuel; Gehlenborg, Nils; Lex, Alexander; Pfister, Hanspeter; Streit, Marc

    2016-01-01

    Answering questions about complex issues often requires analysts to take into account information contained in multiple interconnected datasets. A common strategy in analyzing and visualizing large and heterogeneous data is dividing it into meaningful subsets. Interesting subsets can then be selected and the associated data and the relationships between the subsets visualized. However, neither the extraction and manipulation nor the comparison of subsets is well supported by state-of-the-art techniques. In this paper we present Domino, a novel multiform visualization technique for effectively representing subsets and the relationships between them. By providing comprehensive tools to arrange, combine, and extract subsets, Domino allows users to create both common visualization techniques and advanced visualizations tailored to specific use cases. In addition to the novel technique, we present an implementation that enables analysts to manage the wide range of options that our approach offers. Innovative interactive features such as placeholders and live previews support rapid creation of complex analysis setups. We introduce the technique and the implementation using a simple example and demonstrate scalability and effectiveness in a use case from the field of cancer genomics. PMID:26356916

  6. The Effects of Presentation Method and Information Density on Visual Search Ability and Working Memory Load

    ERIC Educational Resources Information Center

    Chang, Ting-Wen; Kinshuk; Chen, Nian-Shing; Yu, Pao-Ta

    2012-01-01

    This study investigates the effects of successive and simultaneous information presentation methods on learner's visual search ability and working memory load for different information densities. Since the processing of information in the brain depends on the capacity of visual short-term memory (VSTM), the limited information processing capacity…

  7. System to provide 3D information on geological anomaly zone in deep subsea

    NASA Astrophysics Data System (ADS)

    Kim, W.; Kwon, O.; Kim, D.

    2017-12-01

    A study on building an ultra-long, deep subsea tunnel, with a length of at least 50 km and a depth of at least 200 m, is underway in Korea. To analyze the geotechnical information required for designing and building the subsea tunnel, topographic and geological information from 2D seabed geophysical prospecting, together with topographic, geologic, exploration, and boring data, was analyzed comprehensively. As a result, an automated method based on geostatistical analysis was developed to identify the geological structure zones under the seabed that must be characterized when designing a deep, long subsea tunnel. Existing software that provides 3D visualized ground information includes Gocad, MVS, Vulcan, and DIMINE. This study is intended to analyze the geological anomaly zone of the ultra-deep seabed, visualize the geological investigation results, and thereby develop a dedicated, user-friendly system for processing ground investigation information. In particular, the system is compatible with geophysical prospecting result files and supports both layered display and 3D views. The data processed by the 3D seabed information system include (1) deep seabed topographic information, (2) geological anomaly zones, (3) geophysical prospecting data, (4) boring investigation results, and (5) 3D visualization of sections along the subsea tunnel route. Each data type has its own characteristics, and interfaces allow interlocking with other data. For each detailed function, input data are displayed in a single space, and each element is selectable to reveal further information within a project. The program creates a project when first launched, and all detailed output is stored by project. Each element representing detailed information is stored as an image file, with text-file export also supported. The system can also translate, scale, and rotate the model. To represent all elements on the 3D visualization platform, coordinate and time information is added to each datum or data group to establish an overall conceptual model. This research was supported by the Korea Agency for Infrastructure Technology Advancement under the Ministry of Land, Infrastructure and Transport of the Korean government (Project Number: 13 Construction Research T01).

  8. Activating gender stereotypes during online spoken language processing: evidence from Visual World Eye Tracking.

    PubMed

    Pyykkönen, Pirita; Hyönä, Jukka; van Gompel, Roger P G

    2010-01-01

    This study used the visual world eye-tracking method to investigate activation of general world knowledge related to gender-stereotypical role names in online spoken language comprehension in Finnish. The results showed that listeners activated gender stereotypes elaboratively in story contexts where this information was not needed to build coherence. Furthermore, listeners made additional inferences based on gender stereotypes to revise an already established coherence relation. Both results are consistent with mental models theory (e.g., Garnham, 2001). They are harder to explain by the minimalist account (McKoon & Ratcliff, 1992) which suggests that people limit inferences to those needed to establish coherence in discourse.

  9. Mental rotation impairs attention shifting and short-term memory encoding: neurophysiological evidence against the response-selection bottleneck model of dual-task performance.

    PubMed

    Pannebakker, Merel M; Jolicœur, Pierre; van Dam, Wessel O; Band, Guido P H; Ridderinkhof, K Richard; Hommel, Bernhard

    2011-09-01

    Dual tasks and their associated delays have often been used to examine the boundaries of processing in the brain. We used the dual-task procedure and recorded event-related potentials (ERPs) to investigate how mental rotation of a first stimulus (S1) influences the shifting of visual-spatial attention to a second stimulus (S2). Visual-spatial attention was monitored by using the N2pc component of the ERP. In addition, we examined the sustained posterior contralateral negativity (SPCN), believed to index the retention of information in visual short-term memory. We found modulations of both the N2pc and the SPCN, suggesting that engaging mechanisms of mental rotation impairs the deployment of visual-spatial attention and delays the passage of a representation of S2 into visual short-term memory. Both results suggest interactions between mental rotation and visual-spatial attention in capacity-limited processing mechanisms, indicating that response selection is not pivotal in dual-task delays and that all three processes likely share a common resource, such as executive control. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Visual cues and listening effort: individual variability.

    PubMed

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2011-10-01

    To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.

  11. Interactive visualization of vegetation dynamics

    USGS Publications Warehouse

    Reed, B.C.; Swets, D.; Bard, L.; Brown, J.; Rowland, James

    2001-01-01

    Satellite imagery provides a mechanism for observing seasonal dynamics of the landscape that have implications for near real-time monitoring of agriculture, forest, and range resources. This study illustrates a technique for visualizing timely information on key events during the growing season (e.g., onset, peak, duration, and end of growing season), as well as the status of the current growing season with respect to the recent historical average. Using time-series analysis of normalized difference vegetation index (NDVI) data from the advanced very high resolution radiometer (AVHRR) satellite sensor, seasonal dynamics can be derived. We have developed a set of Java-based visualization and analysis tools to make comparisons between the seasonal dynamics of the current year with those from the past twelve years. In addition, the visualization tools allow the user to query underlying databases such as land cover or administrative boundaries to analyze the seasonal dynamics of areas of their own interest. The Java-based tools (data exploration and visualization analysis or DEVA) use a Web-based client-server model for processing the data. The resulting visualization and analysis, available via the Internet, is of value to those responsible for land management decisions, resource allocation, and at-risk population targeting.
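    The key seasonal events mentioned (onset, peak, duration, and end of the growing season) are commonly derived from a smoothed NDVI curve with threshold rules. A minimal Python sketch, assuming an illustrative half-amplitude threshold rather than the exact algorithm used by DEVA:

```python
def season_metrics(ndvi, threshold_frac=0.5):
    """Estimate growing-season metrics from one year of smoothed NDVI.

    ndvi: sequence of NDVI values, one per composite period.
    threshold_frac: fraction of seasonal amplitude marking onset/end
    (an illustrative choice, not the published method).
    """
    lo, hi = min(ndvi), max(ndvi)
    thresh = lo + threshold_frac * (hi - lo)
    peak = max(range(len(ndvi)), key=lambda i: ndvi[i])
    # Onset: first period at/above threshold, at or before the peak.
    onset = next(i for i in range(peak + 1) if ndvi[i] >= thresh)
    # End: last period at/above threshold, at or after the peak.
    end = max(i for i in range(peak, len(ndvi)) if ndvi[i] >= thresh)
    return {"onset": onset, "peak": peak, "end": end,
            "duration": end - onset + 1}

# Example: a simple unimodal season across 12 composite periods.
curve = [0.2, 0.2, 0.3, 0.5, 0.7, 0.8, 0.75, 0.6, 0.4, 0.3, 0.2, 0.2]
print(season_metrics(curve))  # {'onset': 3, 'peak': 5, 'end': 7, 'duration': 5}
```

Comparing these per-year metrics against their multi-year averages is what supports the anomaly views described above.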

  12. Behavior Selection of Mobile Robot Based on Integration of Multimodal Information

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Kaneko, Masahide

    Recently, biologically inspired robots have been developed to acquire the capacity for directing visual attention to salient stimuli generated from the audiovisual environment. To realize this behavior, a general method is to calculate saliency maps that represent how strongly external information attracts the robot's visual attention; both the audiovisual information and the robot's motion status should be involved. In this paper, we present a visual attention model in which three modalities are considered, namely audio information, visual information, and the robot's motor status, whereas previous studies have not considered all of them. Firstly, we introduce a 2-D density map whose values denote how much attention the robot pays to each spatial location. We then model the attention density using a Bayesian network in which the robot's motion statuses are involved. Secondly, information from both the audio and visual modalities is integrated with the attention density map in integrate-and-fire neurons. The robot directs its attention to the locations where the integrate-and-fire neurons fire. Finally, the visual attention model is applied to make the robot select visual information from the environment and react to the selected content. Experimental results show that robots can acquire the visual information related to their behaviors by using the attention model that considers motion statuses. The robot can select behaviors to adapt to a dynamic environment, as well as switch to another task according to the recognition results of visual attention.
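    The integrate-and-fire fusion of density-weighted audio and visual saliency can be sketched in a few lines; all names and parameter values below are illustrative, not the authors' implementation:

```python
def attend(visual, audio, density, threshold=1.0, leak=0.9, max_steps=50):
    """Leaky integrate-and-fire fusion over spatial locations.

    At each step, every location accumulates audio + visual saliency
    weighted by the attention-density map; the first location whose
    potential crosses the threshold 'fires' and wins attention.
    """
    n = len(visual)
    potential = [0.0] * n
    for step in range(max_steps):
        for i in range(n):
            drive = density[i] * (visual[i] + audio[i])
            potential[i] = leak * potential[i] + drive
            if potential[i] >= threshold:
                return i, step  # fired location and time step
    return None, max_steps      # nothing salient enough

# Location 2 receives the strongest combined, density-weighted input.
visual  = [0.1, 0.2, 0.6, 0.1]
audio   = [0.0, 0.1, 0.5, 0.2]
density = [0.5, 0.5, 0.9, 0.5]
loc, t = attend(visual, audio, density)
print(loc)  # 2
```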

  13. Self-motion perception in autism is compromised by visual noise but integrated optimally across multiple senses

    PubMed Central

    Zaidel, Adam; Goin-Kochel, Robin P.; Angelaki, Dora E.

    2015-01-01

    Perceptual processing in autism spectrum disorder (ASD) is marked by superior low-level task performance and inferior complex-task performance. This observation has led to theories of defective integration in ASD of local parts into a global percept. Despite mixed experimental results, this notion maintains widespread influence and has also motivated recent theories of defective multisensory integration in ASD. Impaired ASD performance in tasks involving classic random dot visual motion stimuli, corrupted by noise as a means to manipulate task difficulty, is frequently interpreted to support this notion of global integration deficits. By manipulating task difficulty independently of visual stimulus noise, here we test the hypothesis that heightened sensitivity to noise, rather than integration deficits, may characterize ASD. We found that although perception of visual motion through a cloud of dots was unimpaired without noise, the addition of stimulus noise significantly affected adolescents with ASD, more than controls. Strikingly, individuals with ASD demonstrated intact multisensory (visual–vestibular) integration, even in the presence of noise. Additionally, when vestibular motion was paired with pure visual noise, individuals with ASD demonstrated a different strategy than controls, marked by reduced flexibility. This result could be simulated by using attenuated (less reliable) and inflexible (not experience-dependent) Bayesian priors in ASD. These findings question widespread theories of impaired global and multisensory integration in ASD. Rather, they implicate increased sensitivity to sensory noise and less use of prior knowledge in ASD, suggesting increased reliance on incoming sensory information. PMID:25941373
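    The optimal multisensory integration referred to here is typically modeled as reliability-weighted (maximum-likelihood) cue combination, which can be sketched as follows; the numbers are illustrative, not the study's data:

```python
def fuse(mu_vis, var_vis, mu_vest, var_vest):
    """Reliability-weighted (maximum-likelihood) cue combination.

    Each cue is a Gaussian estimate (mean, variance); the fused mean
    weights each cue by its inverse variance, and the fused variance
    is never larger than either single-cue variance.
    """
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    mu = w_vis * mu_vis + (1 - w_vis) * mu_vest
    var = 1 / (1 / var_vis + 1 / var_vest)
    return mu, var

# Noisy visual heading cue (10 deg, high variance) plus a reliable
# vestibular cue (4 deg): the fused estimate leans vestibular.
mu, var = fuse(10.0, 4.0, 4.0, 1.0)
print(round(mu, 2), round(var, 2))  # 5.2 0.8
```

Adding visual stimulus noise raises the visual variance, shifting weight toward the vestibular cue, which is why intact integration can coexist with heightened sensitivity to noise.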

  14. The relationship between visual attention and visual working memory encoding: A dissociation between covert and overt orienting.

    PubMed

    Tas, A Caglar; Luck, Steven J; Hollingworth, Andrew

    2016-08-01

    There is substantial debate over whether visual working memory (VWM) and visual attention constitute a single system for the selection of task-relevant perceptual information or whether they are distinct systems that can be dissociated when their representational demands diverge. In the present study, we focused on the relationship between visual attention and the encoding of objects into VWM. Participants performed a color change-detection task. During the retention interval, a secondary object, irrelevant to the memory task, was presented. Participants were instructed either to execute an overt shift of gaze to this object (Experiments 1-3) or to attend it covertly (Experiments 4 and 5). Our goal was to determine whether these overt and covert shifts of attention disrupted the information held in VWM. We hypothesized that saccades, which typically introduce a memorial demand to bridge perceptual disruption, would lead to automatic encoding of the secondary object. However, purely covert shifts of attention, which introduce no such demand, would not result in automatic memory encoding. The results supported these predictions. Saccades to the secondary object produced substantial interference with VWM performance, but covert shifts of attention to this object produced no interference with VWM performance. These results challenge prevailing theories that consider attention and VWM to reflect a common mechanism. In addition, they indicate that the relationship between attention and VWM is dependent on the memorial demands of the orienting behavior. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Sensory Substitution: The Spatial Updating of Auditory Scenes "Mimics" the Spatial Updating of Visual Scenes.

    PubMed

    Pasqualotto, Achille; Esenkaya, Tayfun

    2016-01-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD).

  16. Thinking graphically: Connecting vision and cognition during graph comprehension.

    PubMed

    Ratwani, Raj M; Trafton, J Gregory; Boehm-Davis, Deborah A

    2008-03-01

    Task analytic theories of graph comprehension account for the perceptual and conceptual processes required to extract specific information from graphs. Comparatively, the processes underlying information integration have received less attention. We propose a new framework for information integration that highlights visual integration and cognitive integration. During visual integration, pattern recognition processes are used to form visual clusters of information; these visual clusters are then used to reason about the graph during cognitive integration. In 3 experiments, the processes required to extract specific information and to integrate information were examined by collecting verbal protocol and eye movement data. Results supported the task analytic theories for specific information extraction and the processes of visual and cognitive integration for integrative questions. Further, the integrative processes scaled up as graph complexity increased, highlighting the importance of these processes for integration in more complex graphs. Finally, based on this framework, design principles to improve both visual and cognitive integration are described. PsycINFO Database Record (c) 2008 APA, all rights reserved

  17. Real-World Application of Robust Design Optimization Assisted by Response Surface Approximation and Visual Data-Mining

    NASA Astrophysics Data System (ADS)

    Shimoyama, Koji; Jeong, Shinkyu; Obayashi, Shigeru

    A new approach for multi-objective robust design optimization was proposed and applied to a real-world design problem with a large number of objective functions. The present approach is assisted by response surface approximation and visual data-mining, and resulted in two major gains regarding computational time and data interpretation. The Kriging model for response surface approximation can markedly reduce the computational time for predictions of robustness. In addition, the use of self-organizing maps as a data-mining technique allows visualization of complicated design information between optimality and robustness in a comprehensible two-dimensional form. Therefore, the extraction and interpretation of trade-off relations between optimality and robustness of design, and also the location of sweet spots in the design space, can be performed in a comprehensive manner.
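    A self-organizing map projects high-dimensional design vectors onto a 2D grid whose neighboring nodes come to represent similar designs, which is what makes the trade-off structure visible. A minimal pure-Python sketch (illustrative parameters and data, not the authors' implementation):

```python
import math, random

def train_som(data, rows=4, cols=4, iters=500, seed=0):
    """Train a tiny self-organizing map (SOM) on design vectors.

    Returns a rows x cols grid of weight vectors; nearby grid nodes
    end up representing similar inputs, giving the comprehensible 2D
    layout used for visual data mining.
    """
    rng = random.Random(seed)
    dim = len(data[0])
    grid = [[[rng.random() for _ in range(dim)]
             for _ in range(cols)] for _ in range(rows)]
    for t in range(iters):
        x = rng.choice(data)
        # Best-matching unit: node whose weights are closest to x.
        br, bc = min(((r, c) for r in range(rows) for c in range(cols)),
                     key=lambda rc: sum((grid[rc[0]][rc[1]][k] - x[k]) ** 2
                                        for k in range(dim)))
        lr = 0.5 * (1 - t / iters)           # decaying learning rate
        sigma = 1.5 * (1 - t / iters) + 0.3  # decaying neighbourhood
        for r in range(rows):
            for c in range(cols):
                h = math.exp(-((r - br) ** 2 + (c - bc) ** 2)
                             / (2 * sigma ** 2))
                for k in range(dim):
                    grid[r][c][k] += lr * h * (x[k] - grid[r][c][k])
    return grid

# Two well-separated clusters of 3D "design vectors".
data = [[0.1, 0.1, 0.1], [0.15, 0.1, 0.12],
        [0.9, 0.9, 0.85], [0.88, 0.92, 0.9]]
grid = train_som(data)
```

Each trained node's weight vector can then be color-coded per objective function to reveal trade-offs and the location of sweet spots.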

  18. Translating novel findings of perceptual-motor codes into the neuro-rehabilitation of movement disorders.

    PubMed

    Pazzaglia, Mariella; Galli, Giulia

    2015-01-01

    The bidirectional flow of perceptual and motor information has recently proven useful as rehabilitative tool for re-building motor memories. We analyzed how the visual-motor approach has been successfully applied in neurorehabilitation, leading to surprisingly rapid and effective improvements in action execution. We proposed that the contribution of multiple sensory channels during treatment enables individuals to predict and optimize motor behavior, having a greater effect than visual input alone. We explored how the state-of-the-art neuroscience techniques show direct evidence that employment of visual-motor approach leads to increased motor cortex excitability and synaptic and cortical map plasticity. This super-additive response to multimodal stimulation may maximize neural plasticity, potentiating the effect of conventional treatment, and will be a valuable approach when it comes to advances in innovative methodologies.

  19. How dolphins see the world: a comparison with chimpanzees and humans.

    PubMed

    Tomonaga, Masaki; Uwano, Yuka; Saito, Toyoshi

    2014-01-16

    Bottlenose dolphins use auditory (or echoic) information to recognise their environments, and many studies have described their echolocation perception abilities. However, relatively few systematic studies have examined their visual perception. We tested dolphins on a visual-matching task using two-dimensional geometric forms including various features. Based on error patterns, we used multidimensional scaling to analyse perceptual similarities among stimuli. In addition to dolphins, we conducted comparable tests with terrestrial species: chimpanzees were tested on a computer-controlled matching task and humans were tested on a rating task. The overall perceptual similarities among stimuli in dolphins were similar to those in the two species of primates. These results clearly indicate that the visual world is perceived similarly by the three species of mammals, even though each has adapted to a different environment and has differing degrees of dependence on vision.

  20. eLoom and Flatland: specification, simulation and visualization engines for the study of arbitrary hierarchical neural architectures.

    PubMed

    Caudell, Thomas P; Xiao, Yunhai; Healy, Michael J

    2003-01-01

    eLoom is an open source graph simulation software tool, developed at the University of New Mexico (UNM), that enables users to specify and simulate neural network models. Its specification language and libraries enable users to construct and simulate arbitrary, potentially hierarchical network structures on serial and parallel processing systems. In addition, eLoom is integrated with UNM's Flatland, an open source virtual environments development tool, to provide real-time visualizations of the network structure and activity. Visualization is a useful method for understanding both learning and computation in artificial neural networks. Through animated 3D pictorial representations of the state and flow of information in the network, a better understanding of network functionality is achieved. ART-1, LAPART-II, MLP, and SOM neural networks are presented to illustrate eLoom and Flatland's capabilities.

  1. Imprinting modulates processing of visual information in the visual wulst of chicks.

    PubMed

    Maekawa, Fumihiko; Komine, Okiru; Sato, Katsushige; Kanamatsu, Tomoyuki; Uchimura, Motoaki; Tanaka, Kohichi; Ohki-Hamazaki, Hiroko

    2006-11-14

    Imprinting behavior is one form of learning and memory in precocial birds. With the aim of elucidating the neural basis of visual imprinting, we focused on visual information processing. A lesion in the visual wulst, which is functionally similar to the mammalian visual cortex, caused anterograde amnesia in visual imprinting behavior. Since the color of an object was one of the important cues for imprinting, we investigated color information processing in the visual wulst. Intrinsic optical signals from the visual wulst were detected in the early posthatch period, and the peak regions of responses to red, green, and blue were spatially organized from the caudal to the nasal regions in dark-reared chicks. This spatial representation of color recognition showed plastic changes, and the response pattern along the antero-posterior axis of the visual wulst altered according to the color to which the chick was imprinted. These results indicate that the thalamofugal pathway is critical for learning the imprinting stimulus and that the visual wulst shows learning-related plasticity and may relay processed visual information to indicate the color of the imprint stimulus to the memory storage region, e.g., the intermediate medial mesopallium.

  2. Imprinting modulates processing of visual information in the visual wulst of chicks

    PubMed Central

    Maekawa, Fumihiko; Komine, Okiru; Sato, Katsushige; Kanamatsu, Tomoyuki; Uchimura, Motoaki; Tanaka, Kohichi; Ohki-Hamazaki, Hiroko

    2006-01-01

    Background Imprinting behavior is one form of learning and memory in precocial birds. With the aim of elucidating the neural basis of visual imprinting, we focused on visual information processing. Results A lesion in the visual wulst, which is functionally similar to the mammalian visual cortex, caused anterograde amnesia in visual imprinting behavior. Since the color of an object was one of the important cues for imprinting, we investigated color information processing in the visual wulst. Intrinsic optical signals from the visual wulst were detected in the early posthatch period, and the peak regions of responses to red, green, and blue were spatially organized from the caudal to the nasal regions in dark-reared chicks. This spatial representation of color recognition showed plastic changes, and the response pattern along the antero-posterior axis of the visual wulst altered according to the color to which the chick was imprinted. Conclusion These results indicate that the thalamofugal pathway is critical for learning the imprinting stimulus and that the visual wulst shows learning-related plasticity and may relay processed visual information to indicate the color of the imprint stimulus to the memory storage region, e.g., the intermediate medial mesopallium. PMID:17101060

  3. Visual adaptation dominates bimodal visual-motor action adaptation

    PubMed Central

    de la Rosa, Stephan; Ferstl, Ylva; Bülthoff, Heinrich H.

    2016-01-01

    A long-standing debate revolves around the question of whether visual action recognition primarily relies on visual or motor action information. Previous studies mainly examined the contribution of either visual or motor information to action recognition. Yet, the interaction of visual and motor action information is particularly important for understanding action recognition in social interactions, where humans often observe and execute actions at the same time. Here, we behaviourally examined the interaction of visual and motor action recognition processes when participants simultaneously observe and execute actions. We took advantage of behavioural action adaptation effects to investigate behavioural correlates of neural action recognition mechanisms. In line with previous results, we find that prolonged visual exposure (visual adaptation) and prolonged execution of the same action with closed eyes (non-visual motor adaptation) influence action recognition. However, when participants simultaneously adapted visually and motorically (akin to simultaneous execution and observation of actions in social interactions), adaptation effects were modulated only by visual, not motor, adaptation. Action recognition, therefore, relies primarily on vision-based recognition mechanisms in situations that require simultaneous action observation and execution, such as social interactions. The results suggest caution when associating social behaviour in social interactions with motor-based information. PMID:27029781

  4. Sensitivity to timing and order in human visual cortex

    PubMed Central

    Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.

    2014-01-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116

  5. A content analysis of visual cancer information: prevalence and use of photographs and illustrations in printed health materials.

    PubMed

    King, Andy J

    2015-01-01

    Researchers and practitioners have an increasing interest in the visual components of health information and health communication messages. This study contributes to that evolving body of research by providing an account of the visual images and information featured in printed cancer communication materials. Using content analysis, 147 pamphlets and 858 images were examined to determine how frequently images are used in printed materials, what types of images are used, what information is conveyed visually, and whether current recommendations for the inclusion of visual content are being followed. Although visual messages were found to be common in printed health materials, existing recommendations about the inclusion of visual content were only partially followed. Results are discussed in terms of how relevant theoretical frameworks in the areas of behavior change and visual persuasion appear to be used in these materials, and why more theory-oriented research is needed in visual messaging efforts.

  6. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera

    PubMed Central

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-01-01

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots. PMID:28165403
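    SIFT-based spot matching of the kind described relies on descriptor matching, typically with Lowe's ratio test to discard ambiguous correspondences. A minimal sketch of the ratio test on toy descriptor vectors (real systems would use 128-D SIFT descriptors extracted from the smartphone images; names here are illustrative):

```python
def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match descriptors from image A to image B with Lowe's ratio test.

    A match is kept only when the nearest neighbour in B is clearly
    closer than the second-nearest (distance ratio below `ratio`),
    which suppresses ambiguous correspondences.
    """
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist2(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        # Compare squared distances, so the ratio is squared too.
        if dist2(d, desc_b[best]) < (ratio ** 2) * dist2(d, desc_b[second]):
            matches.append((i, best))
    return matches

# Descriptor 0 has one clear counterpart; descriptor 1 is ambiguous
# (two near-equidistant candidates) and is rejected.
a = [[1.0, 0.0], [0.5, 0.5]]
b = [[1.0, 0.1], [0.6, 0.4], [0.4, 0.6]]
print(ratio_test_matches(a, b))  # [(0, 0)]
```

Deciding that enough confident matches survive the ratio test is then a natural criterion for declaring that the user has returned to a known spot.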

  7. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera.

    PubMed

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-02-04

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots.

  8. Comparative analysis and visualization of multiple collinear genomes

    PubMed Central

    2012-01-01

    Background Genome browsers are a common tool used by biologists to visualize genomic features including genes, polymorphisms, and many others. However, existing genome browsers and visualization tools are not well-suited to perform meaningful comparative analysis among a large number of genomes. With the increasing quantity and availability of genomic data, there is an increased burden to provide useful visualization and analysis tools for comparison of multiple collinear genomes such as the large panels of model organisms which are the basis for much of the current genetic research. Results We have developed a novel web-based tool for visualizing and analyzing multiple collinear genomes. Our tool illustrates genome-sequence similarity through a mosaic of intervals representing local phylogeny, subspecific origin, and haplotype identity. Comparative analysis is facilitated through reordering and clustering of tracks, which can vary throughout the genome. In addition, we provide local phylogenetic trees as an alternate visualization to assess local variations. Conclusions Unlike previous genome browsers and viewers, ours allows for simultaneous and comparative analysis. Our browser provides intuitive selection and interactive navigation about features of interest. Dynamic visualizations adjust to scale and data content making analysis at variable resolutions and of multiple data sets more informative. We demonstrate our genome browser for an extensive set of genomic data sets composed of almost 200 distinct mouse laboratory strains. PMID:22536897

  9. Stepping Into Science Data: Data Visualization in Virtual Reality

    NASA Astrophysics Data System (ADS)

    Skolnik, S.

    2017-12-01

    Have you ever seen people get really excited about science data? Navteca, along with the Earth Science Technology Office (ESTO) within the Earth Science Division of NASA's Science Mission Directorate, has been exploring virtual reality (VR) technology for the next generation of Earth science technology information systems. One of their first joint experiments was visualizing climate data from the Goddard Earth Observing System Model (GEOS) in VR, and the resulting visualizations greatly excited the scientific community. This presentation will share the value of VR for science, such as the capability of permitting the observer to interact with data rendered in real time, make selections, and view volumetric data in an innovative way. Using interactive VR hardware (headset and controllers), the viewer steps into the data visualizations, physically moving through three-dimensional structures that are traditionally displayed as layers or slices, such as cloud and storm systems from NASA's Global Precipitation Measurement (GPM). Results from displaying this precipitation and cloud data show that there is interesting potential for scientific visualization, 3D/4D visualizations, and inter-disciplinary studies using VR. Additionally, VR visualizations can be leveraged as 360 content for scientific communication and outreach, and VR can be used as a tool to engage policy and decision makers, as well as the public.

  10. Information Visualization and Proposing New Interface for Movie Retrieval System (IMDB)

    ERIC Educational Resources Information Center

    Etemadpour, Ronak; Masood, Mona; Belaton, Bahari

    2010-01-01

    This research studies the development of a new prototype of visualization in support of movie retrieval. The goal of information visualization is the unveiling of large amounts of data or abstract data sets using visual presentation. With this knowledge, the main goal is to develop a 2D presentation of information on movies from the IMDB (Internet Movie…

  11. Walking through doorways causes forgetting: environmental integration.

    PubMed

    Radvansky, Gabriel A; Tamplin, Andrea K; Krawietz, Sabine A

    2010-12-01

    Memory for objects declines when people move from one location to another (the location updating effect). However, it is unclear whether this is attributable to event model updating or to task demands. The focus here was on the degree of integration for probed-for information with the experienced environment. In prior research, the probes were verbal labels of visual objects. Experiment 1 assessed whether this was a consequence of an item-probe mismatch, as with transfer-appropriate processing. Visual probes were used to better coordinate what was seen with the nature of the memory probe. In Experiment 2, people received additional word pairs to remember, which were less well integrated with the environment, to assess whether the probed-for information needed to be well integrated. The results showed location updating effects in both cases. These data are consistent with an event cognition view that mental updating of a dynamic event disrupts memory.

  12. Visual space under free viewing conditions.

    PubMed

    Doumen, Michelle J A; Kappers, Astrid M L; Koenderink, Jan J

    2005-10-01

    Most research on visual space has been done under restricted viewing conditions and in reduced environments. In our experiments, observers performed an exocentric pointing task, a collinearity task, and a parallelity task in an entirely visible room. We varied the relative distances between the objects and the observer and the separation angle between the two objects. We were able to compare our data directly with data from experiments in an environment with less monocular depth information present. We expected that in a richer environment and under less restrictive viewing conditions, the settings would deviate less from the veridical settings. However, large systematic deviations from veridical settings were found for all three tasks. The structure of these deviations was task dependent, and the structure and the deviations themselves were comparable to those obtained under more restricted circumstances. Thus, the additional information was not used effectively by the observers.

  13. Institute for Brain and Neural Systems

    DTIC Science & Technology

    2009-10-06

    to deal with computational complexity when analyzing large amounts of information in visual scenes. It seems natural that in addition to exploring...algorithms using methods from statistical pattern recognition and machine learning. Over the last fifteen years, significant advances had been made in...recognition, robustness to noise and ability to cope with significant variations in lighting conditions. Identifying an occluded target adds another layer of

  14. Skylab explores the Earth

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Data from visual observations are integrated with results of analyses of approximately 600 of the nearly 2000 photographs taken of Earth during the 84-day Skylab 4 mission to provide additional information on (1) Earth features and processes; (2) operational procedures and constraints in observing and photographing the planet; and (3) the use of man in real-time analysis of oceanic and atmospheric phenomena.

  15. Introducing 3D Visualization of Statistical Data in Education Using the i-Use Platform: Examples from Greece

    ERIC Educational Resources Information Center

    Rizou, Ourania; Klonari, Aikaterini

    2016-01-01

    In the 21st century, the age of information and technology, there is an increasing importance to statistical literacy for everyday life. In addition, education innovation and globalisation in the past decade in Europe has resulted in a new perceived complexity of reality that affected the curriculum and statistics education, with a shift from…

  16. Visual Receptive Field Heterogeneity and Functional Connectivity of Adjacent Neurons in Primate Frontoparietal Association Cortices.

    PubMed

    Viswanathan, Pooja; Nieder, Andreas

    2017-09-13

    The basic organization principles of the primary visual cortex (V1) are commonly assumed to also hold in the association cortex such that neurons within a cortical column share functional connectivity patterns and represent the same region of the visual field. We mapped the visual receptive fields (RFs) of neurons recorded at the same electrode in the ventral intraparietal area (VIP) and the lateral prefrontal cortex (PFC) of rhesus monkeys. We report that the spatial characteristics of visual RFs between adjacent neurons differed considerably, with increasing heterogeneity from VIP to PFC. In addition to RF incongruences, we found differential functional connectivity between putative inhibitory interneurons and pyramidal cells in PFC and VIP. These findings suggest that local RF topography vanishes with hierarchical distance from visual cortical input and argue for increasingly modified functional microcircuits in noncanonical association cortices that contrast V1. SIGNIFICANCE STATEMENT Our visual field is thought to be represented faithfully by the early visual brain areas; all the information from a certain region of the visual field is conveyed to neurons situated close together within a functionally defined cortical column. We examined this principle in the association areas, PFC, and ventral intraparietal area of rhesus monkeys and found that adjacent neurons represent markedly different areas of the visual field. This is the first demonstration of such noncanonical organization of these brain areas.

  17. Diversification of visual media retrieval results using saliency detection

    NASA Astrophysics Data System (ADS)

    Muratov, Oleg; Boato, Giulia; De Natale, Francesco G. B.

    2013-03-01

    Diversification of retrieval results allows for better and faster search. Recently, different methods have been proposed for diversifying image retrieval results, mainly utilizing text information and techniques imported from the natural language processing domain. However, images contain visual information that is impossible to describe in text, and the use of visual features is inevitable. Visual saliency is information about the main object of an image, implicitly included by humans while creating visual content. For this reason it is natural to exploit this information for the task of diversifying the content. In this work we study whether visual saliency can be used for the task of diversification and propose a method for re-ranking image retrieval results using saliency. The evaluation has shown that the use of saliency information results in higher diversity of retrieval results.
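Diversity-aware re-ranking of this kind is often cast as a greedy relevance/redundancy trade-off. The sketch below is a generic MMR-style loop, not the authors' algorithm: the saliency "signature" vectors, the cosine-similarity comparison, and the `lam` trade-off weight are all assumptions for illustration.

```python
def rerank_for_diversity(results, lam=0.7):
    """MMR-style greedy re-ranking (a generic diversification sketch).

    `results` is a list of (relevance, signature) pairs, where each
    signature is a feature vector -- in a saliency-based system it
    would summarise the salient region of the image. Each step picks
    the item with the best trade-off between relevance and
    dissimilarity to the items already selected.
    """
    def sim(a, b):
        # Cosine similarity between two signatures.
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    remaining = list(range(len(results)))
    order = []
    while remaining:
        def score(i):
            rel, sig = results[i]
            # Penalise similarity to the most similar already-picked item.
            penalty = max((sim(sig, results[j][1]) for j in order),
                          default=0.0)
            return lam * rel - (1 - lam) * penalty
        best = max(remaining, key=score)
        order.append(best)
        remaining.remove(best)
    return order
```

With two near-duplicate top results, the loop promotes a less relevant but visually distinct item ahead of the duplicate, which is the intended diversification behaviour.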

  18. The multiple sclerosis visual pathway cohort: understanding neurodegeneration in MS.

    PubMed

    Martínez-Lapiscina, Elena H; Fraga-Pumar, Elena; Gabilondo, Iñigo; Martínez-Heras, Eloy; Torres-Torres, Ruben; Ortiz-Pérez, Santiago; Llufriu, Sara; Tercero, Ana; Andorra, Magi; Roca, Marc Figueras; Lampert, Erika; Zubizarreta, Irati; Saiz, Albert; Sanchez-Dalmau, Bernardo; Villoslada, Pablo

    2014-12-15

    Multiple Sclerosis (MS) is an immune-mediated disease of the Central Nervous System with two major underlying etiopathogenic processes: inflammation and neurodegeneration. The latter determines the prognosis of this disease. MS is the main cause of non-traumatic disability in middle-aged populations. The MS-VisualPath Cohort was set up to study the neurodegenerative component of MS using advanced imaging techniques by focusing on analysis of the visual pathway in a middle-aged MS population in Barcelona, Spain. We started the recruitment of patients in the early phase of MS in 2010 and it remains permanently open. All patients undergo a complete neurological and ophthalmological examination including measurements of physical and cognitive disability (Expanded Disability Status Scale, Multiple Sclerosis Functional Composite, and neuropsychological tests), disease activity (relapses) and visual function testing (visual acuity, color vision and visual field). The MS-VisualPath protocol also assesses the presence of anxiety and depressive symptoms (Hospital Anxiety and Depression Scale), general quality of life (SF-36) and visual quality of life (25-Item National Eye Institute Visual Function Questionnaire with the 10-Item Neuro-Ophthalmic Supplement). In addition, the imaging protocol includes both retinal (Optical Coherence Tomography and Wide-Field Fundus Imaging) and brain imaging (Magnetic Resonance Imaging). Finally, multifocal Visual Evoked Potentials are used to perform neurophysiological assessment of the visual pathway. The analysis of the visual pathway with advanced imaging and electrophysiological tools in parallel with clinical information will provide significant new knowledge regarding neurodegeneration in MS and provide new clinical and imaging biomarkers to help monitor disease progression in these patients.

  19. Contribution of intraoperative neuromonitoring to the identification of the external branch of superior laryngeal nerve

    PubMed Central

    Aygün, Nurcihan; Uludağ, Mehmet; İşgör, Adnan

    2017-01-01

    Objective We evaluated the contribution of intraoperative neuromonitoring to the visual and functional identification of the external branch of the superior laryngeal nerve. Material and Methods The prospectively collected data of patients who underwent thyroid surgery with intraoperative neuromonitoring for external branch of the superior laryngeal nerve exploration were assessed retrospectively. The surface endotracheal tube-based Medtronic NIM3 intraoperative neuromonitoring device was used. The external branch of the superior laryngeal nerve function was evaluated by the cricothyroid muscle twitch. In addition, contribution of external branch of the superior laryngeal nerve to the vocal cord adduction was evaluated using electromyographic records. Results The study included data of 126 (female, 103; male, 23) patients undergoing thyroid surgery, with a mean age of 46.2±12.2 years (range, 18–75 years), and 215 neck sides were assessed. Two hundred and one (93.5%) of 215 external branch of the superior laryngeal nerves were identified, of which 60 (27.9%) were identified visually before being stimulated with a monopolar stimulator probe. Eighty-nine (41.4%) external branch of the superior laryngeal nerves were identified visually after being identified with a probe. Although 52 (24.1%) external branch of the superior laryngeal nerves were identified with a probe, they were not visualized. Intraoperative neuromonitoring provided a significant contribution to visual (p<0.001) and functional (p<0.001) identification of external branch of the superior laryngeal nerves. Additionally, positive electromyographic responses were recorded from 160 external branch of the superior laryngeal nerves (74.4%). Conclusion Intraoperative neuromonitoring provides an important contribution to visual and functional identification of external branch of the superior laryngeal nerves. 
We believe that it cannot be predicted whether the external branch of the superior laryngeal nerve is at risk, and the nerve is often invisible; thus, intraoperative neuromonitoring may be used routinely in superior pole dissection. The glottic electromyography response obtained via external branch of the superior laryngeal nerve stimulation provides quantifiable information in addition to simple visualization of the cricothyroid muscle twitch. PMID:28944328

  20. Information Technology and Transcription of Reading Materials for the Visually Impaired Persons in Nigeria

    ERIC Educational Resources Information Center

    Nkiko, Christopher; Atinmo, Morayo I.; Michael-Onuoha, Happiness Chijioke; Ilogho, Julie E.; Fagbohun, Michael O.; Ifeakachuku, Osinulu; Adetomiwa, Basiru; Usman, Kazeem Omeiza

    2018-01-01

    Studies have shown inadequate reading materials for the visually impaired in Nigeria. Information technology has greatly advanced the provision of information to the visually impaired in other industrialized climes. This study investigated the extent of application of information technology to the transcription of reading materials for the…

  1. Some Issues Concerning Access to Information by Blind and Partially Sighted Pupils.

    ERIC Educational Resources Information Center

    Green, Christopher F.

    This paper examines problems faced by visually-impaired secondary pupils in gaining access to information in print. The ever-increasing volume of information available inundates the sighted and is largely inaccessible in print format to the visually impaired. Important issues of availability for the visually impaired include whether information is…

  2. Evolution of attention mechanisms for early visual processing

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Knoll, Alois

    2011-03-01

    Early visual processing as a method to speed up computations on visual input data has long been discussed in the computer vision community. The general target of such approaches is to filter nonrelevant information out before it reaches the costly higher-level visual processing algorithms. By inserting this additional filter layer, the overall approach can be sped up without actually changing the visual processing methodology. Inspired by the layered architecture of the human visual processing apparatus, several approaches for early visual processing have recently been proposed. Most promising in this field is the extraction of a saliency map to determine regions of current attention in the visual field. Such saliency can be computed in a bottom-up manner, i.e., the theory claims that static regions of attention emerge from a certain color footprint, and dynamic regions of attention emerge from connected blobs of texture moving in a uniform way in the visual field. Top-down saliency effects are either unconscious, through inherent mechanisms like inhibition-of-return (i.e., within a period of time the attention level paid to a certain region automatically decreases if the properties of that region do not change), or volitional, through cognitive feedback, e.g., if an object moves consistently in the visual field. These bottom-up and top-down saliency effects have been implemented and evaluated in a previous computer vision system for the project JAST. In this paper an extension applying evolutionary processes is proposed. The prior vision system utilized multiple threads to analyze the regions of attention delivered by the early processing mechanism. Here, in addition, multiple saliency units are used to produce these regions of attention. All of these saliency units have different parameter sets. 
The idea is to let the population of saliency units create regions of attention, then evaluate the results with cognitive feedback, and finally apply the genetic mechanisms: mutation and cloning of the best performers and extinction of the worst performers at computing regions of attention. A fitness function can be derived by evaluating whether relevant objects are found in the regions created. It can be seen from various experiments that the approach significantly speeds up visual processing, especially regarding robust realtime object recognition, compared to an approach not using saliency-based preprocessing. Furthermore, the evolutionary algorithm improves the overall performance of the preprocessing system in terms of quality, as the system automatically and autonomously tunes the saliency parameters. The computational overhead produced by periodical clone/delete/mutation operations can be handled well within the realtime constraints of the experimental computer vision system. Nevertheless, limitations apply whenever the visual field does not contain any significant saliency information for some time but the population still tries to tune the parameters; overfitting prevents generalization in this case, and the evolutionary process may be reset by manual intervention.
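The clone/mutate/extinction loop described above can be sketched as a minimal evolutionary algorithm. This is a generic illustration, not the project's implementation: the two-parameter individuals (`color_w`, `motion_w`), the mutation noise, and the population sizes are all hypothetical.

```python
import random

def evolve_saliency_params(fitness, pop_size=8, generations=20, seed=0):
    """Minimal evolutionary loop of the kind described above (a sketch).

    Each individual is a saliency parameter set; here just a dict with
    a colour weight and a motion weight. Each generation, the best
    performers survive and are cloned with mutation, while the worst
    performers go extinct.
    """
    rng = random.Random(seed)

    def random_params():
        return {"color_w": rng.random(), "motion_w": rng.random()}

    def mutate(p):
        # Gaussian jitter, clipped to the valid [0, 1] range.
        return {k: min(1.0, max(0.0, v + rng.gauss(0, 0.1)))
                for k, v in p.items()}

    population = [random_params() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: pop_size // 2]        # extinction of the worst
        clones = [mutate(p) for p in survivors]    # clone + mutate the best
        population = survivors + clones
    return max(population, key=fitness)
```

In the paper's setting the fitness would score whether relevant objects are found in the regions of attention a parameter set produces; here any callable scoring a parameter dict will do.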

  3. Beyond Control Panels: Direct Manipulation for Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Bradel, Lauren; North, Chris

    2013-07-19

    Information Visualization strives to provide visual representations through which users can think about and gain insight into information. By leveraging the visual and cognitive systems of humans, complex relationships and phenomena occurring within datasets can be uncovered by exploring information visually. Interaction metaphors for such visualizations are designed to give users direct control over the filters, queries, and other parameters controlling how the data is visually represented. Through the evolution of information visualization, more complex mathematical and data analytic models are being used to visualize relationships and patterns in data – creating the field of Visual Analytics. However, the expectations for how users interact with these visualizations have remained largely unchanged – focused primarily on the direct manipulation of parameters of the underlying mathematical models. In this article we present an opportunity to evolve the methodology for user interaction from the direct manipulation of parameters through visual control panels to interactions designed specifically for visual analytic systems. Instead of focusing on traditional direct manipulation of mathematical parameters, the evolution of the field can be realized through direct manipulation within the visual representation – where users can not only gain insight, but also interact. This article describes future directions and research challenges that fundamentally change the meaning of direct manipulation with regard to visual analytics, advancing the Science of Interaction.

  4. The Perceptual Root of Object-Based Storage: An Interactive Model of Perception and Visual Working Memory

    ERIC Educational Resources Information Center

    Gao, Tao; Gao, Zaifeng; Li, Jie; Sun, Zhongqiang; Shen, Mowei

    2011-01-01

    Mainstream theories of visual perception assume that visual working memory (VWM) is critical for integrating online perceptual information and constructing coherent visual experiences in changing environments. Given the dynamic interaction between online perception and VWM, we propose that how visual information is processed during visual…

  5. Auditory, Visual, and Auditory-Visual Perception of Vowels by Hearing-Impaired Children.

    ERIC Educational Resources Information Center

    Hack, Zarita Caplan; Erber, Norman P.

    1982-01-01

    Vowels were presented through auditory, visual, and auditory-visual modalities to 18 hearing impaired children (12 to 15 years old) having good, intermediate, and poor auditory word recognition skills. All the groups had difficulty with acoustic information and visual information alone. The first two groups had only moderate difficulty identifying…

  6. Indicators of suboptimal performance embedded in the Wechsler Memory Scale-Fourth Edition (WMS-IV).

    PubMed

    Bouman, Zita; Hendriks, Marc P H; Schmand, Ben A; Kessels, Roy P C; Aldenkamp, Albert P

    2016-01-01

    Recognition and visual working memory tasks from the Wechsler Memory Scale-Fourth Edition (WMS-IV) have previously been documented as useful indicators for suboptimal performance. The present study examined the clinical utility of the Dutch version of the WMS-IV (WMS-IV-NL) for the identification of suboptimal performance using an analogue study design. The patient group consisted of 59 mixed-etiology patients; the experimental malingerers were 50 healthy individuals who were asked to simulate cognitive impairment as a result of a traumatic brain injury; the last group consisted of 50 healthy controls who were instructed to put forth full effort. Experimental malingerers performed significantly lower on all WMS-IV-NL tasks than did the patients and healthy controls. A binary logistic regression analysis was performed on the experimental malingerers and the patients. The first model contained the visual working memory subtests (Spatial Addition and Symbol Span) and the recognition tasks of the following subtests: Logical Memory, Verbal Paired Associates, Designs, Visual Reproduction. The results showed an overall classification rate of 78.4%, and only Spatial Addition explained a significant amount of variation (p < .001). Subsequent logistic regression analysis and receiver operating characteristic (ROC) analysis supported the discriminatory power of the subtest Spatial Addition. A scaled score cutoff of <4 produced 93% specificity and 52% sensitivity for detection of suboptimal performance. The WMS-IV-NL Spatial Addition subtest may provide clinically useful information for the detection of suboptimal performance.
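The reported 93% specificity and 52% sensitivity at a Spatial Addition scaled-score cutoff of <4 follow the standard definitions, which a short sketch makes concrete. The scores below are illustrative values, not the study's data.

```python
def sensitivity_specificity(malingerer_scores, patient_scores, cutoff):
    """Classification statistics for a 'suboptimal performance' cutoff.

    A score below `cutoff` is flagged as suboptimal. Sensitivity is the
    proportion of (simulated) malingerers correctly flagged; specificity
    is the proportion of genuine patients NOT flagged.
    """
    tp = sum(s < cutoff for s in malingerer_scores)   # true positives
    tn = sum(s >= cutoff for s in patient_scores)     # true negatives
    sens = tp / len(malingerer_scores)
    spec = tn / len(patient_scores)
    return sens, spec
```

Sweeping `cutoff` over the observed score range and plotting sensitivity against 1 − specificity yields the ROC curve the study used to assess the subtest's discriminatory power.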

  7. Manneristic behaviors of visually impaired children.

    PubMed

    Molloy, Alysha; Rowe, Fiona J

    2011-09-01

    To review the literature on visual impairment in children in order to determine which manneristic behaviors are associated with visual impairment, and to establish why these behaviors occur and whether severity of visual impairment influences these behaviors. A literature search utilizing PubMed, OVID, Google Scholar, and Web of Knowledge databases was performed. The University of Liverpool ( www.liv.ac.uk/orthoptics/research ) and local library facilities were also searched. The main manneristic or stereotypic behaviors associated with visual impairment are eye-manipulatory behaviors, such as eye poking and rocking. The degree of visual impairment influences the type of behavior exhibited by visually impaired children. Totally blind children are more likely to adopt body and head movements whereas sight-impaired children tend to adopt eye-manipulatory behaviors and rocking. The mannerisms exhibited most frequently are those that provide a specific stimulation to the child. Theories to explain these behaviors include behavioral, developmental, functional, and neurobiological approaches. Although the precise etiology of these behaviors is unknown, it is recognized that each of the theories is useful in providing some explanation of why certain behaviors may occur. The age at which the frequency of these behaviors decreases is associated with the child's increasing development, thus those visually impaired children with additional disabilities, whose development is impaired, are at an increased risk of developing and maintaining these behaviors. Certain manneristic behaviors of the visually impaired child may also help indicate the cause of visual impairment. There is a wide range of manneristic behaviors exhibited by visually impaired children. Some of these behaviors appear to be particularly associated with certain causes of visual impairment or severity of visual impairment, thus they may supply the practitioner with useful information. 
Further research into the prevalence of these behaviors in the visually impaired child is required in order to provide effective management.

  8. Implications on visual apperception: energy, duration, structure and synchronization.

    PubMed

    Bókkon, I; Vimal, Ram Lakhan Pandey

    2010-07-01

    Although primary visual cortex (V1 or striate) activity per se is not sufficient for visual apperception (normal conscious visual experiences and conscious functions such as detection, discrimination, and recognition), the same is also true for extrastriate visual areas (such as V2, V3, V4/V8/VO, V5/MT/MST, IT, and GF). In the absence of the V1 area, visual signals can still reach several extrastriate parts but appear incapable of generating normal conscious visual experiences. It is scarcely emphasized in the scientific literature that conscious perceptions and representations must also satisfy essential energetic conditions. These energetic conditions are achieved by spatiotemporal networks of dynamic mitochondrial distributions inside neurons. However, the highest density of neurons in neocortex (number of neurons per degree of visual angle) devoted to representing the visual field is found in retinotopic V1. This means that the highest mitochondrial (energetic) activity can be achieved in mitochondrial cytochrome oxidase-rich V1 areas. Thus, V1 bears the highest energy allocation for visual representation. In addition, conscious perceptions also demand structural conditions, the presence of an adequate duration of information representation, and synchronized neural processes and/or 'interactive hierarchical structuralism.' For visual apperception, various visual areas are involved depending on context, such as stimulus characteristics including color, form/shape, motion, and other features. Here, we focus primarily on V1, where specific mitochondrial-rich retinotopic structures are found; we will concisely discuss V2, where these structures are found in smaller concentrations. We also point out that residual brain states are not fully reflected in active neural patterns after visual perception. Namely, after visual perception, subliminal residual states are not reflected by passive neural recording techniques but require active stimulation to be revealed.

  9. A Hybrid Synthetic Vision System for the Tele-operation of Unmanned Vehicles

    NASA Technical Reports Server (NTRS)

    Delgado, Frank; Abernathy, Mike

    2004-01-01

    A system called SmartCam3D (SC3D) has been developed to provide enhanced situational awareness for operators of a remotely piloted vehicle. SC3D is a Hybrid Synthetic Vision System (HSVS) that combines live sensor data with information from a Synthetic Vision System (SVS). By combining the dual information sources, the operators are afforded the advantages of each approach. The live sensor system provides real-time information for the region of interest. The SVS provides information rich visuals that will function under all weather and visibility conditions. Additionally, the combination of technologies allows the system to circumvent some of the limitations from each approach. Video sensor systems are not very useful when visibility conditions are hampered by rain, snow, sand, fog, and smoke, while a SVS can suffer from data freshness problems. Typically, an aircraft or satellite flying overhead collects the data used to create the SVS visuals. The SVS data could have been collected weeks, months, or even years ago. To that extent, the information from an SVS visual could be outdated and possibly inaccurate. SC3D was used in the remote cockpit during flight tests of the X-38 132 and 131R vehicles at the NASA Dryden Flight Research Center. SC3D was also used during the operation of military Unmanned Aerial Vehicles. This presentation will provide an overview of the system, the evolution of the system, the results of flight tests, and future plans. Furthermore, the safety benefits of the SC3D over traditional and pure synthetic vision systems will be discussed.

  10. Securing information display by use of visual cryptography.

    PubMed

    Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo

    2003-09-01

    We propose a secure display technique based on visual cryptography. The proposed technique ensures the security of visual information. The display employs a decoding mask based on visual cryptography. Without the decoding mask, the displayed information cannot be viewed. The viewing zone is limited by the decoding mask so that only one person can view the information. We have developed a set of encryption codes to maintain the designed viewing zone and have demonstrated a display that provides a limited viewing zone.
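The underlying share construction can be illustrated with the classic 2-out-of-2 visual cryptography scheme in the Naor–Shamir style; this is a sketch of the general technique, not the paper's specific display encoding.

```python
import random

def make_shares(secret_bits, seed=0):
    """2-out-of-2 visual cryptography share generation (classic
    Naor-Shamir style construction; an illustrative sketch).

    Each secret bit (1 = black) becomes a pair of subpixels in each
    share. For a white bit both shares get the same random pair; for a
    black bit they get complementary pairs. Either share alone is a
    uniformly random pattern and reveals nothing about the secret.
    """
    rng = random.Random(seed)
    share1, share2 = [], []
    for bit in secret_bits:
        pair = rng.choice([(0, 1), (1, 0)])
        share1.append(pair)
        share2.append(pair if bit == 0 else (1 - pair[0], 1 - pair[1]))
    return share1, share2

def stack(share1, share2):
    # Physically overlaying the transparencies acts like a pixel-wise OR:
    # a black bit yields a fully black pair, a white bit a half-black pair.
    return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(share1, share2)]

def decode(stacked):
    # Fully black pair -> secret bit was black (1); half black -> white (0).
    return [1 if pair == (1, 1) else 0 for pair in stacked]
```

In the display setting described above, one share is shown on the screen and the other is printed on the decoding mask, so only a viewer holding the mask at the designed position perceives the stacked (decoded) image.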

  11. Effects of aging on identifying emotions conveyed by point-light walkers.

    PubMed

    Spencer, Justine M Y; Sekuler, Allison B; Bennett, Patrick J; Giese, Martin A; Pilz, Karin S

    2016-02-01

    The visual system is able to recognize human motion simply from point lights attached to the major joints of an actor. Moreover, it has been shown that younger adults are able to recognize emotions from such dynamic point-light displays. Previous research has suggested that the ability to perceive emotional stimuli changes with age. For example, it has been shown that older adults are impaired in recognizing emotional expressions from static faces. In addition, it has been shown that older adults have difficulties perceiving visual motion, which might be helpful to recognize emotions from point-light displays. In the current study, 4 experiments were completed in which older and younger adults were asked to identify 3 emotions (happy, sad, and angry) displayed by 4 types of point-light walkers: upright and inverted normal walkers, which contained both local motion and global form information; upright scrambled walkers, which contained only local motion information; and upright random-position walkers, which contained only global form information. Overall, emotion discrimination accuracy was lower in older participants compared with younger participants, specifically when identifying sad and angry point-light walkers. In addition, observers in both age groups were able to recognize emotions from all types of point-light walkers, suggesting that both older and younger adults are able to recognize emotions from point-light walkers on the basis of local motion or global form.

  12. The Multisensory Attentional Consequences of Tool Use: A Functional Magnetic Resonance Imaging Study

    PubMed Central

    Holmes, Nicholas P.; Spence, Charles; Hansen, Peter C.; Mackay, Clare E.; Calvert, Gemma A.

    2008-01-01

    Background Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use. PMID:18958150

  13. Absolute Depth Sensitivity in Cat Primary Visual Cortex under Natural Viewing Conditions.

    PubMed

    Pigarev, Ivan N; Levichkina, Ekaterina V

    2016-01-01

    Mechanisms of 3D perception, investigated in many laboratories, have defined depth either relative to the fixation plane or to other objects in the visual scene. It is obvious that for efficient perception of the 3D world, additional mechanisms of depth constancy could operate in the visual system to provide information about absolute distance. Neurons with properties reflecting some features of depth constancy have been described in the parietal and extrastriate occipital cortical areas. It has also been shown that, for some neurons in the visual area V1, responses to stimuli of constant angular size differ at close and remote distances. The present study was designed to investigate whether, in natural free gaze viewing conditions, neurons tuned to absolute depths can be found in the primary visual cortex (area V1). Single-unit extracellular activity was recorded from the visual cortex of waking cats sitting on a trolley in front of a large screen. The trolley was slowly approaching the visual scene, which consisted of stationary sinusoidal gratings of optimal orientation rear-projected over the whole surface of the screen. Each neuron was tested with two gratings, with spatial frequency of one grating being twice as high as that of the other. Assuming that a cell is tuned to a spatial frequency, its maximum response to the grating with a spatial frequency twice as high should be shifted to a distance half way closer to the screen in order to attain the same size of retinal projection. For hypothetical neurons selective to absolute depth, location of the maximum response should remain at the same distance irrespective of the type of stimulus. It was found that about 20% of neurons in our experimental paradigm demonstrated sensitivity to particular distances independently of the spatial frequencies of the gratings. We interpret these findings as an indication of the use of absolute depth information in the primary visual cortex.
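    The tuning argument in the record above rests on simple viewing geometry and can be checked with a short illustrative calculation (not the authors' code): the retinal spatial frequency of a screen grating scales with viewing distance, so a grating of twice the physical frequency projects identically onto the retina at half the distance.

```python
import math

# Retinal spatial frequency of a grating on a screen, as a function of
# viewing distance (small-angle approximation). Doubling the grating's
# physical frequency halves the distance at which a given retinal
# frequency is reached -- the basis of the experimental logic above.

def retinal_freq(cycles_per_cm, distance_cm):
    """Cycles per degree of visual angle at the given distance."""
    cm_per_deg = distance_cm * math.tan(math.radians(1.0))
    return cycles_per_cm * cm_per_deg

f_coarse = retinal_freq(0.5, 100)  # coarse grating viewed at 100 cm
f_fine = retinal_freq(1.0, 50)     # twice the frequency at half the distance
# f_coarse == f_fine: the retinal projections match, so a purely
# spatial-frequency-tuned cell responds maximally at half the distance,
# while a cell tuned to absolute depth would not shift.
```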

  14. High Performance Real-Time Visualization of Voluminous Scientific Data Through the NOAA Earth Information System (NEIS).

    NASA Astrophysics Data System (ADS)

    Stewart, J.; Hackathorn, E. J.; Joyce, J.; Smith, J. S.

    2014-12-01

    Within our community, data volume is rapidly expanding. These data have limited value if one cannot interact with or visualize them in a timely manner. The scientific community needs the ability to dynamically visualize, analyze, and interact with these data along with other environmental data in real time, regardless of physical location or data format. Within the National Oceanic and Atmospheric Administration (NOAA), the Earth System Research Laboratory (ESRL) is actively developing the NOAA Earth Information System (NEIS). Previously, the NEIS team investigated methods of data discovery and interoperability. The recent focus shifted to high performance real-time visualization, allowing NEIS to bring massive amounts of 4-D data, including output from weather forecast models as well as data from different observations (surface observations, upper air, etc.), together in one place. Our server-side architecture provides a real-time stream processing system that uses server-based NVIDIA Graphics Processing Units (GPUs) for data processing, wavelet-based compression, and other preparation techniques for visualization, allowing NEIS to minimize the bandwidth and latency of data delivery to end users. On the client side, users interact with NEIS services through the visualization application developed at ESRL called TerraViz. TerraViz is developed using the Unity game engine and takes advantage of the GPU, allowing a user to interact with large data sets in real time in ways that might not have been possible before. Through these technologies, the NEIS team has improved accessibility to 'Big Data' and provided tools allowing novel visualization and seamless integration of data across time and space regardless of data size, physical location, or data format. These capabilities provide the ability to see global interactions and their importance for weather prediction. Additionally, they allow greater access than currently exists, helping to foster scientific collaboration and new ideas. This presentation will provide an update on recent enhancements of the NEIS architecture and visualization capabilities, challenges faced, and ongoing research activities related to this project.

  15. The Sensory Components of High-Capacity Iconic Memory and Visual Working Memory

    PubMed Central

    Bradley, Claire; Pearson, Joel

    2012-01-01

    Early visual memory can be split into two primary components: a high-capacity, short-lived iconic memory followed by a limited-capacity visual working memory that can last many seconds. Whereas a large number of studies have investigated visual working memory for low-level sensory features, much research on iconic memory has used more “high-level” alphanumeric stimuli such as letters or numbers. These two forms of memory are typically examined separately, despite an intrinsic overlap in their characteristics. Here, we used a purely sensory paradigm to examine visual short-term memory for 10 homogeneous items of three different visual features (color, orientation and motion) across a range of durations from 0 to 6 s. We found that the amount of information stored in iconic memory is smaller for motion than for color or orientation. Performance declined exponentially with longer storage durations and reached chance levels after ∼2 s. Further experiments showed that performance for the 10 items at 1 s was contingent on unperturbed attentional resources. In addition, for orientation stimuli, performance was contingent on the location of stimuli in the visual field, especially for short cue delays. Overall, our results suggest a smooth transition between an automatic, high-capacity, feature-specific sensory-iconic memory, and an effortful “lower-capacity” visual working memory. PMID:23055993

  16. The semantic category-based grouping in the Multiple Identity Tracking task.

    PubMed

    Wei, Liuqing; Zhang, Xuemin; Li, Zhen; Liu, Jingyao

    2018-01-01

    In the Multiple Identity Tracking (MIT) task, categorical distinctions between targets and distractors have been found to facilitate tracking (Wei, Zhang, Lyu, & Li in Frontiers in Psychology, 7, 589, 2016). The purpose of this study was to further investigate the reasons for the facilitation effect, through six experiments. The results of Experiments 1-3 excluded the potential explanations of visual distinctiveness, attentional distribution strategy, and a working memory mechanism, respectively. When objects' visual information was preserved and categorical information was removed, the facilitation effect disappeared, suggesting that the visual distinctiveness between targets and distractors was not the main reason for the facilitation effect. Moreover, the facilitation effect was not the result of strategically shifting the attentional distribution, because the targets received more attention than the distractors in all conditions. Additionally, the facilitation effect did not come about because the identities of targets were encoded and stored in visual working memory to assist in the recovery from tracking errors; when working memory was disturbed by the object identities changing during tracking, the facilitation effect still existed. Experiments 4 and 5 showed that observers grouped targets together and segregated them from distractors on the basis of their categorical information. By doing this, observers could largely avoid distractor interference with tracking and improve tracking performance. Finally, Experiment 6 indicated that category-based grouping is not an automatic, but a goal-directed and effortful, strategy. In summary, the present findings show that a semantic category-based target-grouping mechanism exists in the MIT task, which is likely to be the major reason for the tracking facilitation effect.

  17. A Class of Visual Neurons with Wide-Field Properties Is Required for Local Motion Detection.

    PubMed

    Fisher, Yvette E; Leong, Jonathan C S; Sporar, Katja; Ketkar, Madhura D; Gohl, Daryl M; Clandinin, Thomas R; Silies, Marion

    2015-12-21

    Visual motion cues are used by many animals to guide navigation across a wide range of environments. Long-standing theoretical models have made predictions about the computations that compare light signals across space and time to detect motion. Using connectomic and physiological approaches, candidate circuits that can implement various algorithmic steps have been proposed in the Drosophila visual system. These pathways connect photoreceptors, via interneurons in the lamina and the medulla, to direction-selective cells in the lobula and lobula plate. However, the functional architecture of these circuits remains incompletely understood. Here, we use a forward genetic approach to identify the medulla neuron Tm9 as critical for motion-evoked behavioral responses. Using in vivo calcium imaging combined with genetic silencing, we place Tm9 within motion-detecting circuitry. Tm9 receives functional inputs from the lamina neurons L3 and, unexpectedly, L1 and passes information onto the direction-selective T5 neuron. Whereas the morphology of Tm9 suggested that this cell would inform circuits about local points in space, we found that the Tm9 spatial receptive field is large. Thus, this circuit informs elementary motion detectors about a wide region of the visual scene. In addition, Tm9 exhibits sustained responses that provide a tonic signal about incoming light patterns. Silencing Tm9 dramatically reduces the response amplitude of T5 neurons under a broad range of different motion conditions. Thus, our data demonstrate that sustained and wide-field signals are essential for elementary motion processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Hearing Shapes: Event-related Potentials Reveal the Time Course of Auditory-Visual Sensory Substitution.

    PubMed

    Graulty, Christian; Papaioannou, Orestis; Bauer, Phoebe; Pitts, Michael A; Canseco-Gonzalez, Enriqueta

    2018-04-01

    In auditory-visual sensory substitution, visual information (e.g., shape) can be extracted through strictly auditory input (e.g., soundscapes). Previous studies have shown that image-to-sound conversions that follow simple rules [such as the Meijer algorithm; Meijer, P. B. L. An experimental system for auditory image representation. Transactions on Biomedical Engineering, 39, 111-121, 1992] are highly intuitive and rapidly learned by both blind and sighted individuals. A number of recent fMRI studies have begun to explore the neuroplastic changes that result from sensory substitution training. However, the time course of cross-sensory information transfer in sensory substitution is largely unexplored and may offer insights into the underlying neural mechanisms. In this study, we recorded ERPs to soundscapes before and after sighted participants were trained with the Meijer algorithm. We compared these posttraining versus pretraining ERP differences with those of a control group who received the same set of 80 auditory/visual stimuli but with arbitrary pairings during training. Our behavioral results confirmed the rapid acquisition of cross-sensory mappings, and the group trained with the Meijer algorithm was able to generalize their learning to novel soundscapes at impressive levels of accuracy. The ERP results revealed an early cross-sensory learning effect (150-210 msec) that was significantly enhanced in the algorithm-trained group compared with the control group as well as a later difference (420-480 msec) that was unique to the algorithm-trained group. These ERP modulations are consistent with previous fMRI results and provide additional insight into the time course of cross-sensory information transfer in sensory substitution.
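    The Meijer-style image-to-sound conversion described above can be sketched as follows: the image is scanned column by column from left to right, each pixel row maps to a fixed sine frequency (top rows high-pitched), and pixel brightness sets that sine's amplitude. The parameter values (frequency range, scan rate) and helper name here are illustrative assumptions, not those of the original vOICe system.

```python
import math

# Hedged sketch of a Meijer-style soundscape generator: columns are
# scanned left to right; row -> frequency, brightness -> amplitude.

def soundscape(image, sample_rate=8000, col_dur=0.05,
               f_lo=500.0, f_hi=5000.0):
    """image: rows x cols of brightness values in [0, 1]; row 0 = top.
    Returns a list of audio samples in [-1, 1]."""
    rows = len(image)
    # exponentially spaced frequencies, top row highest
    freqs = [f_hi * (f_lo / f_hi) ** (r / max(rows - 1, 1))
             for r in range(rows)]
    samples = []
    n_per_col = int(sample_rate * col_dur)
    t0 = 0
    for col in range(len(image[0])):
        for n in range(n_per_col):
            t = (t0 + n) / sample_rate
            s = sum(image[r][col] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(rows))
            samples.append(s / rows)   # normalize to [-1, 1]
        t0 += n_per_col
    return samples

# A bright diagonal produces a pitch sweep over the scan duration.
img = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
audio = soundscape(img)
```

    Because the mapping follows such simple rules, shape information survives the conversion, which is what makes the encoding rapidly learnable by both blind and sighted listeners.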

  19. Visual Knowledge in Tactical Planning: Preliminary Knowledge Acquisition Phase 1 Technical Report

    DTIC Science & Technology

    1990-04-05

    MANAGEMENT INFORMATION, COMMUNICATIONS, AND COMPUTER SCIENCES Visual Knowledge in Tactical Planning: Preliminary Knowledge Acquisition Phase I Technical...perceived provides information in multiple modalities and, in fact, we may rely on a non-verbal mode for much of our understanding of the situation...some tasks, almost all the pertinent information is provided via diagrams, maps, and other illustrations. Visual Knowledge Visual experience forms a

  20. Experience and information loss in auditory and visual memory.

    PubMed

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  1. Evidence for Separate Contributions of High and Low Spatial Frequencies during Visual Word Recognition.

    PubMed

    Winsler, Kurt; Holcomb, Phillip J; Midgley, Katherine J; Grainger, Jonathan

    2017-01-01

    Previous studies have shown that different spatial frequency information processing streams interact during the recognition of visual stimuli. However, it is a matter of debate as to the contributions of high and low spatial frequency (HSF and LSF) information for visual word recognition. This study examined the role of different spatial frequencies in visual word recognition using event-related potential (ERP) masked priming. EEG was recorded from 32 scalp sites in 30 English-speaking adults in a go/no-go semantic categorization task. Stimuli were white characters on a neutral gray background. Targets were uppercase five-letter words preceded by a forward-mask (#######) and a 50 ms lowercase prime. Primes were either the same word (repeated) or a different word (un-repeated) than the subsequent target and either contained only high, only low, or full spatial frequency information. Additionally, within each condition, half of the prime-target pairs were high lexical frequency, and half were low. In the full spatial frequency condition, typical ERP masked priming effects were found with an attenuated N250 (sub-lexical) and N400 (lexical-semantic) for repeated compared to un-repeated primes. For HSF primes there was a weaker N250 effect which interacted with lexical frequency, a significant reversal of the effect around 300 ms, and an N400-like effect for only high lexical frequency word pairs. LSF primes did not produce any of the classic ERP repetition priming effects; however, they did elicit a distinct early effect around 200 ms in the opposite direction of typical repetition effects. HSF information accounted for many of the masked repetition priming ERP effects, which suggests that HSFs are more crucial for word recognition. However, LSF primes did produce their own pattern of priming effects, indicating that larger scale information may still play a role in word recognition.

  2. Influences of selective adaptation on perception of audiovisual speech

    PubMed Central

    Dias, James W.; Cook, Theresa C.; Rosenblum, Lawrence D.

    2016-01-01

    Research suggests that selective adaptation in speech is a low-level process dependent on sensory-specific information shared between the adaptor and test-stimuli. However, previous research has only examined how adaptors shift perception of unimodal test stimuli, either auditory or visual. In the current series of experiments, we investigated whether adaptation to cross-sensory phonetic information can influence perception of integrated audio-visual phonetic information. We examined how selective adaptation to audio and visual adaptors shift perception of speech along an audiovisual test continuum. This test-continuum consisted of nine audio-/ba/-visual-/va/ stimuli, ranging in visual clarity of the mouth. When the mouth was clearly visible, perceivers “heard” the audio-visual stimulus as an integrated “va” percept 93.7% of the time (e.g., McGurk & MacDonald, 1976). As visibility of the mouth became less clear across the nine-item continuum, the audio-visual “va” percept weakened, resulting in a continuum ranging in audio-visual percepts from /va/ to /ba/. Perception of the test-stimuli was tested before and after adaptation. Changes in audiovisual speech perception were observed following adaptation to visual-/va/ and audiovisual-/va/, but not following adaptation to auditory-/va/, auditory-/ba/, or visual-/ba/. Adaptation modulates perception of integrated audio-visual speech by modulating the processing of sensory-specific information. The results suggest that auditory and visual speech information are not completely integrated at the level of selective adaptation. PMID:27041781

  3. What Drives Memory-Driven Attentional Capture? The Effects of Memory Type, Display Type, and Search Type

    ERIC Educational Resources Information Center

    Olivers, Christian N. L.

    2009-01-01

    An important question is whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. Some past research has indicated that they do: Singleton distractors interfered more strongly with a visual search task when they…

  4. Supporting Visual Literacy in the School Library Media Center: Developmental, Socio-Cultural, and Experiential Considerations and Scenarios

    ERIC Educational Resources Information Center

    Cooper, Linda Z.

    2008-01-01

    Children are natural visual learners--they have been absorbing information visually since birth. They welcome opportunities to learn via images as well as to generate visual information themselves, and these opportunities present themselves every day. The importance of visual literacy can be conveyed through conversations and the teachable moment,…

  5. Information Visualization in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Bryson, Steve; Kwak, Dochan (Technical Monitor)

    2001-01-01

    Virtual Environments provide a natural setting for a wide range of information visualization applications, particularly when the information to be visualized is defined on a three-dimensional domain (Bryson, 1996). This chapter provides an overview of the issues that arise when designing and implementing an information visualization application in a virtual environment. Many of the design issues that arise, such as issues of display and user tracking, are common to any application of virtual environments. In this chapter we focus on those issues that are specific to information visualization applications, as issues of wider concern are addressed elsewhere in this book.

  6. Processing reafferent and exafferent visual information for action and perception.

    PubMed

    Reichenbach, Alexandra; Diedrichsen, Jörn

    2015-01-01

    A recent study suggests that reafferent hand-related visual information utilizes a privileged, attention-independent processing channel for motor control. This process was termed visuomotor binding to reflect its proposed function: linking visual reafferences to the corresponding motor control centers. Here, we ask whether the advantage of processing reafferent over exafferent visual information is a specific feature of the motor processing stream or whether the improved processing also benefits the perceptual processing stream. Human participants performed a bimanual reaching task in a cluttered visual display, and one of the visual hand cursors could be displaced laterally during the movement. We measured the rapid feedback responses of the motor system as well as matched perceptual judgments of which cursor was displaced. Perceptual judgments were either made by watching the visual scene without moving or made simultaneously to the reaching tasks, such that the perceptual processing stream could also profit from the specialized processing of reafferent information in the latter case. Our results demonstrate that perceptual judgments in the heavily cluttered visual environment were improved when performed based on reafferent information. Even in this case, however, the filtering capability of the perceptual processing stream suffered more from the increasing complexity of the visual scene than the motor processing stream. These findings suggest partly shared and partly segregated processing of reafferent information for vision for motor control versus vision for perception.

  7. Guidance of attention by information held in working memory.

    PubMed

    Calleja, Marissa Ortiz; Rich, Anina N

    2013-05-01

    Information held in working memory (WM) can guide attention during visual search. The authors of recent studies have interpreted the effect of holding verbal labels in WM as guidance of visual attention by semantic information. In a series of experiments, we tested how attention is influenced by visual features versus category-level information about complex objects held in WM. Participants either memorized an object's image or its category. While holding this information in memory, they searched for a target in a four-object search display. On exact-match trials, the memorized item reappeared as a distractor in the search display. On category-match trials, another exemplar of the memorized item appeared as a distractor. On neutral trials, none of the distractors were related to the memorized object. We found attentional guidance in visual search on both exact-match and category-match trials in Experiment 1, in which the exemplars were visually similar. When we controlled for visual similarity among the exemplars by using four possible exemplars (Exp. 2) or by using two exemplars rated as being visually dissimilar (Exp. 3), we found attentional guidance only on exact-match trials when participants memorized the object's image. The same pattern of results held when the target was invariant (Exps. 2-3) and when the target was defined semantically and varied in visual features (Exp. 4). The findings of these experiments suggest that attentional guidance by WM requires active visual information.

  8. WebGIVI: a web-based gene enrichment analysis and visualization tool.

    PubMed

    Sun, Liang; Zhu, Yongnan; Mahmood, A S M Ashique; Tudor, Catalina O; Ren, Jia; Vijay-Shanker, K; Chen, Jian; Schmidt, Carl J

    2017-05-04

    A major challenge of high throughput transcriptome studies is presenting the data to researchers in an interpretable format. In many cases, the outputs of such studies are gene lists which are then examined for enriched biological concepts. One approach to help the researcher interpret large gene datasets is to associate genes and informative terms (iTerms) that are obtained from the biomedical literature using the eGIFT text-mining system. However, examining large lists of iTerm and gene pairs is a daunting task. We have developed WebGIVI, an interactive web-based visualization tool ( http://raven.anr.udel.edu/webgivi/ ), to explore gene:iTerm pairs. WebGIVI was built with the Cytoscape and Data Driven Document JavaScript libraries and can be used to relate genes to iTerms and then visualize gene and iTerm pairs. WebGIVI can accept a gene list that is used to retrieve the gene symbols and corresponding iTerm list. This list can be submitted to visualize the gene iTerm pairs using two distinct methods: a Concept Map or a Cytoscape Network Map. In addition, WebGIVI also supports uploading and visualization of any two-column tab-separated data. WebGIVI provides an interactive and integrated network graph of genes and iTerms that allows filtering, sorting, and grouping, which can aid biologists in developing hypotheses based on the input gene lists. In addition, WebGIVI can visualize hundreds of nodes and generate a high-resolution image that is important for most research publications. The source code can be freely downloaded at https://github.com/sunliang3361/WebGIVI . The WebGIVI tutorial is available at http://raven.anr.udel.edu/webgivi/tutorial.php .

  9. Impact of visual and somatosensory deprivation on dynamic balance in adolescent idiopathic scoliosis.

    PubMed

    Kuo, Fang-Chuan; Wang, Nai-Hwei; Hong, Chang-Zern

    2010-11-01

    A cross-sectional study of balance control in adolescents with idiopathic scoliosis (AIS). To investigate the impact of visual and somatosensory deprivation on the dynamic balance in AIS patients and to discuss electromyographic (EMG) and posture sway findings. Most studies focus on posture sway in quiet standing controls with little effort on examining muscle-activated patterns in dynamic standing controls. Twenty-two AIS patients and 22 age-matched normal subjects were studied. To understand how visual and somatosensory information could modulate standing balance, balance tests with the Biodex stability system were performed on a moving platform under 3 conditions: visual feedback provided (VF), eyes closed (EC), and standing on a sponge pad with visual feedback provided (SV). Muscular activities of bilateral lumbar multifidi, gluteus medii, and gastrocnemii muscles were recorded with a telemetry EMG system. AIS patients had normal balance index and amplitude and duration of EMG similar to those of normal subjects in the balance test. However, the onset latency of right gastrocnemius was earlier in AIS patients than in normal subjects. In addition, body-side asymmetry was noted on muscle strength and onset latency in AIS subjects. Under EC condition, lumbar multifidi, and gluteus medii activities were higher than those under SV and VF conditions (P < 0.05). Under SV condition, the medial-lateral tilting angle was less than that under VF and EC conditions. In addition, the active duration of right gluteus medius was shorter under SV condition (P < 0.05). The dynamic balance control is particularly disruptive under visual deprivation with increasing lumbar multifidi and gluteus medii activities for compensation. Sponge pad can cause decrease in frontal plane tilting and gluteus medii effort. The asymmetric muscle strength and onset timing are attributed to anatomic deformation as opposed to neurologic etiological factors.

  10. An information maximization model of eye movements

    NASA Technical Reports Server (NTRS)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
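    The fixation rule described above ("fixate next at the location that minimizes uncertainty") can be sketched as a greedy loop. The Gaussian acuity fall-off, grid world, and function names are illustrative assumptions, not the authors' exact model.

```python
import math

# Greedy sequential information maximization: after each fixation,
# pick the location whose fixation would extract the most remaining
# uncertainty, where the information a fixation yields about a point
# falls off with eccentricity (fovea -> periphery).

def next_fixation(uncertainty, sigma=2.0):
    """uncertainty: dict {(x, y): remaining uncertainty}.
    Returns the fixation point with the largest expected info gain."""
    def gain(fix):
        fx, fy = fix
        total = 0.0
        for (x, y), u in uncertainty.items():
            d2 = (x - fx) ** 2 + (y - fy) ** 2
            acuity = math.exp(-d2 / (2 * sigma ** 2))  # resolution fall-off
            total += u * acuity
        return total
    return max(uncertainty, key=gain)

def update(uncertainty, fix, sigma=2.0):
    """Reduce uncertainty according to what the fixation revealed."""
    fx, fy = fix
    return {(x, y): u * (1 - math.exp(-((x - fx) ** 2 + (y - fy) ** 2)
                                      / (2 * sigma ** 2)))
            for (x, y), u in uncertainty.items()}

# Uncertainty concentrated at two corners of a 5x5 grid: the greedy
# rule fixates the two high-uncertainty corners in turn.
u = {(x, y): 1.0 if (x, y) in [(0, 0), (4, 4)] else 0.0
     for x in range(5) for y in range(5)}
path = []
for _ in range(2):
    fix = next_fixation(u)
    path.append(fix)
    u = update(u, fix)
```

    Comparing such greedy scanpaths against human fixations (and against saliency or random baselines) is the kind of evaluation the abstract describes.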

  11. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  12. The contribution of visual areas to speech comprehension: a PET study in cochlear implant patients and normal-hearing subjects.

    PubMed

    Giraud, Anne Lise; Truy, Eric

    2002-01-01

    Early visual cortex can be recruited by meaningful sounds in the absence of visual information. This occurs in particular in cochlear implant (CI) patients whose dependency on visual cues in speech comprehension is increased. Such cross-modal interaction mirrors the response of early auditory cortex to mouth movements (speech reading) and may reflect the natural expectancy of the visual counterpart of sounds, lip movements. Here we pursue the hypothesis that visual activations occur specifically in response to meaningful sounds. We performed PET in both CI patients and controls, while subjects listened either to their native language or to a completely unknown language. A recruitment of early visual cortex, the left posterior inferior temporal gyrus (ITG) and the left superior parietal cortex was observed in both groups. While no further activation occurred in the group of normal-hearing subjects, CI patients additionally recruited the right perirhinal/fusiform and mid-fusiform, the right temporo-occipito-parietal (TOP) junction and the left inferior prefrontal cortex (LIPF, Broca's area). This study confirms a participation of visual cortical areas in semantic processing of speech sounds. Observation of early visual activation in normal-hearing subjects shows that auditory-to-visual cross-modal effects can also be recruited under natural hearing conditions. In cochlear implant patients, speech activates the mid-fusiform gyrus in the vicinity of the so-called face area. This suggests that specific cross-modal interaction involving advanced stages in the visual processing hierarchy develops after cochlear implantation and may be the correlate of increased usage of lip-reading.

  13. The informativity of sound modulates crossmodal facilitation of visual discrimination: an fMRI study.

    PubMed

    Li, Qi; Yu, Hongtao; Li, Xiujun; Sun, Hongzan; Yang, Jingjing; Li, Chunlin

    2017-01-18

    Many studies have investigated behavioral crossmodal facilitation when a visual stimulus is accompanied by a concurrent task-irrelevant sound. Lippert and colleagues reported that a concurrent task-irrelevant sound reduced uncertainty about the timing of the visual display and improved perceptual responses (informative sound). However, the neural mechanism by which the informativity of sound affects crossmodal facilitation of visual discrimination remained unclear. In this study, we used event-related functional MRI to investigate the neural mechanisms underlying the role of sound informativity in crossmodal facilitation of visual discrimination. Significantly faster reaction times were observed when there was an informative relationship between the auditory and visual stimuli. The functional MRI results showed informativity-induced activation enhancement in regions including the left fusiform gyrus and the right lateral occipital complex. Further correlation analysis showed that activity in the right lateral occipital complex was significantly correlated with the behavioral benefit in reaction times. This suggests that this region was modulated by the informative relationship within the audiovisual stimuli, learnt during the experiment, resulting in late-stage multisensory integration and enhanced behavioral responses.

  14. Differential processing of binocular and monocular gloss cues in human visual cortex.

    PubMed

    Sun, Hua-Chun; Di Luca, Massimiliano; Ban, Hiroshi; Muryy, Alexander; Fleming, Roland W; Welchman, Andrew E

    2016-06-01

    The visual impression of an object's surface reflectance ("gloss") relies on a range of visual cues, both monocular and binocular. Whereas previous imaging work has identified processing within ventral visual areas as important for monocular cues, little is known about cortical areas involved in processing binocular cues. Here, we used human functional MRI (fMRI) to test for brain areas selectively involved in the processing of binocular cues. We manipulated stereoscopic information to create four conditions that differed in their disparity structure and in the impression of surface gloss that they evoked. We performed multivoxel pattern analysis to find areas whose fMRI responses allow classes of stimuli to be distinguished based on their depth structure vs. material appearance. We show that higher dorsal areas play a role in processing binocular gloss information, in addition to known ventral areas involved in material processing, with ventral area lateral occipital responding to both object shape and surface material properties. Moreover, we tested for similarities between the representation of gloss from binocular cues and monocular cues. Specifically, we tested for transfer in the decoding performance of an algorithm trained on glossy vs. matte objects defined by either binocular or by monocular cues. We found transfer effects from monocular to binocular cues in dorsal visual area V3B/kinetic occipital (KO), suggesting a shared representation of the two cues in this area. These results indicate the involvement of mid- to high-level visual circuitry in the estimation of surface material properties, with V3B/KO potentially playing a role in integrating monocular and binocular cues. Copyright © 2016 the American Physiological Society.
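    The cross-cue transfer analysis described above (train a decoder on glossy-vs-matte response patterns defined by one cue, then test it on patterns defined by the other cue) can be illustrated with a minimal nearest-centroid decoder. The voxel-pattern arrays and class labels below are hypothetical stand-ins; the study itself used multivoxel pattern analysis of fMRI responses:

```python
import numpy as np

def train_centroids(patterns, labels):
    """Fit a nearest-centroid decoder: one mean voxel pattern per class."""
    classes = sorted(set(labels))
    return {c: np.mean([p for p, l in zip(patterns, labels) if l == c], axis=0)
            for c in classes}

def decode(centroids, pattern):
    """Assign a pattern to the class with the nearest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(pattern - centroids[c]))

def transfer_accuracy(train_patterns, train_labels, test_patterns, test_labels):
    """Cross-cue transfer: train on one cue's patterns, test on the other's.
    Above-chance accuracy suggests a representation shared across cues."""
    centroids = train_centroids(train_patterns, train_labels)
    hits = sum(decode(centroids, p) == l
               for p, l in zip(test_patterns, test_labels))
    return hits / len(test_labels)
```

If decoding of the held-out cue stays at chance, the two cues are represented separately; the transfer found in V3B/KO corresponds to above-chance accuracy in this scheme.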

  15. Movement Sonification: Effects on Motor Learning beyond Rhythmic Adjustments.

    PubMed

    Effenberg, Alfred O; Fehse, Ursula; Schmitz, Gerd; Krueger, Bjoern; Mechling, Heinz

    2016-01-01

    Motor learning is based on motor perception and emergent perceptual-motor representations. Much behavioral research has addressed single perceptual modalities, but over the last two decades the contribution of multimodal perception to motor behavior has become increasingly apparent. A growing number of studies indicates an enhanced impact of multimodal stimuli on motor perception, motor control, and motor learning, in terms of better precision and higher reliability of the related actions. The behavioral research is supported by neurophysiological data revealing that multisensory integration supports motor control and learning. However, the overwhelming part of both lines of research is dedicated to basic questions. Outside the domains of music, dance, and motor rehabilitation, there is almost no evidence for an enhanced effectiveness of multisensory information in the learning of gross motor skills. To reduce this gap, movement sonification is used here in applied research on motor learning in sports. Based on current knowledge of the multimodal organization of the perceptual system, we generate additional real-time movement information suitable for integration with the visual and proprioceptive perceptual feedback streams. With ongoing training, synchronously processed auditory information should be integrated into the emerging internal models, enhancing the efficacy of motor learning. This is achieved by a direct mapping of kinematic and dynamic motion parameters to electronic sounds, resulting in continuous auditory and convergent audiovisual or audio-proprioceptive stimulus arrays. In sharp contrast to other approaches that use acoustic information as error feedback in motor learning settings, we try to generate additional movement information suitable for accelerating and enhancing adequate sensorimotor representations, and processable below the level of consciousness.
    In the experimental setting, participants were asked to learn a closed motor skill (technique acquisition in indoor rowing). One group trained with visual information and two groups with audiovisual information (sonification vs. natural sounds). Learning became evident and remained stable in all three groups. Participants who received additional movement sonification performed better than both other groups. The results indicate that movement sonification enhances the learning of a complex gross motor skill, even beyond the usually expected acoustic rhythmic effects on motor learning.
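    The direct parameter-to-sound mapping described in the abstract can be sketched as follows. The choice of handle velocity and handle force as inputs, and the linear mappings to pitch and loudness, are illustrative assumptions rather than the study's actual sonification scheme:

```python
def sonify_frame(velocity, force, v_max=2.0, f_max=600.0):
    """Map one frame of rowing movement data to sound parameters.

    velocity: handle velocity in m/s (kinematic parameter)
    force:    handle force in N (dynamic parameter)
    Returns (frequency_hz, amplitude) for a real-time synthesizer.
    """
    # Clamp inputs to the expected range, then scale linearly.
    v = min(max(velocity, 0.0), v_max)
    f = min(max(force, 0.0), f_max)
    freq = 220.0 + (v / v_max) * (880.0 - 220.0)  # pitch tracks velocity
    amp = f / f_max                               # loudness tracks force
    return freq, amp
```

Applied frame by frame, such a mapping yields the continuous auditory stream described above, synchronous with the visual and proprioceptive feedback of the movement itself.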

  17. Cortical activation during Braille reading is influenced by early visual experience in subjects with severe visual disability: a correlational fMRI study.

    PubMed

    Melzer, P; Morgan, V L; Pickens, D R; Price, R R; Wall, R S; Ebner, F F

    2001-11-01

    Functional magnetic resonance imaging was performed on blind adults resting and reading Braille. The strongest activation was found in primary somatic sensory/motor cortex on both cortical hemispheres. Additional foci of activation were situated in the parietal, temporal, and occipital lobes, where visual information is processed in sighted persons. The regions were differentiated most in the correlation of their time courses of activation with resting and reading; differences in magnitude and expanse of activation were substantially less significant. Among the traditionally visual areas, the strength of correlation was greatest in posterior parietal cortex and moderate in occipitotemporal, lateral occipital, and primary visual cortex. It was low in secondary visual cortex as well as in dorsal and ventral inferior temporal cortex and posterior middle temporal cortex. Visual experience increased the strength of correlation in all regions except dorsal inferior temporal and posterior parietal cortex. The greatest statistically significant increase, approximately 30%, was in ventral inferior temporal and posterior middle temporal cortex. In these regions, words are analyzed semantically, which may be facilitated by visual experience. In contrast, visual experience resulted in a slight, insignificant diminution of the strength of correlation in dorsal inferior temporal cortex, where language is analyzed phonetically. These findings affirm that posterior temporal regions are engaged in the processing of written language and suggest that this function is modified by early visual experience. Furthermore, visual experience significantly strengthened the correlation between activation and Braille reading in occipital regions traditionally involved in the processing of visual features and object recognition, suggesting a role for visual imagery. Copyright 2001 Wiley-Liss, Inc.

  18. Visual and haptic integration in the estimation of softness of deformable objects

    PubMed Central

    Cellini, Cristiano; Kaim, Lukas; Drewing, Knut

    2013-01-01

    Softness perception intrinsically relies on haptic information. However, through everyday experiences we learn correspondences between felt softness and the visual effects of exploratory movements that are executed to feel softness. Here, we studied how visual and haptic information is integrated to assess the softness of deformable objects. Participants discriminated between the softness of two softer or two harder objects using only-visual, only-haptic or both visual and haptic information. We assessed the reliabilities of the softness judgments using the method of constant stimuli. In visuo-haptic trials, discrepancies between the two senses' information allowed us to measure the contribution of the individual senses to the judgments. Visual information (finger movement and object deformation) was simulated using computer graphics; input in visual trials was taken from previous visuo-haptic trials. Participants were able to infer softness from vision alone, and vision considerably contributed to bisensory judgments (∼35%). The visual contribution was higher than predicted from models of optimal integration (senses are weighted according to their reliabilities). Bisensory judgments were less reliable than predicted from optimal integration. We conclude that the visuo-haptic integration of softness information is biased toward vision, rather than being optimal, and might even be guided by a fixed weighting scheme. PMID:25165510
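    The optimal-integration benchmark against which the visuo-haptic judgments were compared is the standard maximum-likelihood cue-combination rule, in which each cue is weighted in proportion to its reliability (inverse variance). A minimal sketch (variable names are illustrative):

```python
def optimal_integration(est_v, var_v, est_h, var_h):
    """Reliability-weighted (MLE) combination of a visual and a haptic
    softness estimate. Weights are normalized inverse variances."""
    r_v, r_h = 1.0 / var_v, 1.0 / var_h  # reliabilities
    w_v = r_v / (r_v + r_h)              # visual weight
    w_h = 1.0 - w_v                      # haptic weight
    combined = w_v * est_v + w_h * est_h
    combined_var = 1.0 / (r_v + r_h)     # never exceeds either cue's variance
    return combined, combined_var
```

Under this rule the bimodal variance can never exceed the better unimodal variance, and the weight on vision is fixed by the measured reliabilities; the findings that the visual weight exceeded this prediction and that bisensory judgments were less reliable than the predicted variance are what mark the observed integration as suboptimal.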

  19. Mandarin Visual Speech Information

    ERIC Educational Resources Information Center

    Chen, Trevor H.

    2010-01-01

    While the auditory-only aspects of Mandarin speech are heavily-researched and well-known in the field, this dissertation addresses its lesser-known aspects: The visual and audio-visual perception of Mandarin segmental information and lexical-tone information. Chapter II of this dissertation focuses on the audiovisual perception of Mandarin…

  20. Sensitivity to timing and order in human visual cortex.

    PubMed

    Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2015-03-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.

Top