Sample records for extended visual system

  1. Invariant visual object recognition: a model, with lighting invariance.

    PubMed

    Rolls, Edmund T; Stringer, Simon M

    2006-01-01

    How are invariant representations of objects formed in the visual cortex? We describe a neurophysiological and computational approach which focusses on a feature hierarchy model in which invariant representations can be built by self-organizing learning based on the statistics of the visual input. The model can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Transformation learning. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, and size, and, as we show in this paper, also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene.
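
    The associative synaptic learning rule with a short-term memory trace mentioned in this record is commonly written as a Hebbian update gated by a decaying trace of the postsynaptic activity. The minimal sketch below shows one common formulation; the learning rate, trace constant, and this exact variant are assumptions for illustration rather than the paper's specific parameters.

      import numpy as np

      def trace_learning_update(w, x, y, y_trace, alpha=0.1, eta=0.8):
          """One step of a trace-rule weight update (hedged sketch).

          w        -- synaptic weight vector of one output neuron
          x        -- presynaptic firing-rate vector at time t
          y        -- postsynaptic firing rate at time t
          y_trace  -- memory trace of postsynaptic activity from time t-1
          alpha    -- learning rate (assumed value)
          eta      -- trace persistence, 0 <= eta < 1 (assumed value)
          """
          # Update the short-term memory trace of the postsynaptic activity.
          y_trace = (1.0 - eta) * y + eta * y_trace
          # Hebbian update gated by the trace rather than the instantaneous rate.
          w = w + alpha * y_trace * x
          return w, y_trace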

  2. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet

    PubMed Central

    Rolls, Edmund T.

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus. PMID:22723777

  3. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.

    PubMed

    Rolls, Edmund T

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.

  4. The Visual System's Intrinsic Bias and Knowledge of Size Mediate Perceived Size and Location in the Dark

    ERIC Educational Resources Information Center

    Zhou, Liu; He, Zijiang J.; Ooi, Teng Leng

    2013-01-01

    Dimly lit targets in the dark are perceived as located about an implicit slanted surface that delineates the visual system's intrinsic bias (Ooi, Wu, & He, 2001). If the intrinsic bias reflects the internal model of visual space--as proposed here--its influence should extend beyond target localization. Our first 2 experiments demonstrated that…

  5. Seeking Information with an Information Visualization System: A Study of Cognitive Styles

    ERIC Educational Resources Information Center

    Yuan, Xiaojun; Zhang, Xiangman; Chen, Chaomei; Avery, Joshua M.

    2011-01-01

    Introduction: This study investigated the effect of cognitive styles on users' information-seeking task performance using a knowledge domain information visualization system called CiteSpace. Method: Sixteen graduate students participated in a user experiment. Each completed an extended cognitive style analysis wholistic-analytic test (the…

  6. Neural substrates of smoking cue reactivity: A meta-analysis of fMRI studies

    PubMed Central

    Engelmann, Jeffrey M.; Versace, Francesco; Robinson, Jason D.; Minnix, Jennifer A.; Lam, Cho Y.; Cui, Yong; Brown, Victoria L.; Cinciripini, Paul M.

    2012-01-01

    Reactivity to smoking-related cues may be an important factor that precipitates relapse in smokers who are trying to quit. The neurobiology of smoking cue reactivity has been investigated in several fMRI studies. We combined the results of these studies using activation likelihood estimation, a meta-analytic technique for fMRI data. Results of the meta-analysis indicated that smoking cues reliably evoke larger fMRI responses than neutral cues in the extended visual system, precuneus, posterior cingulate gyrus, anterior cingulate gyrus, dorsal and medial prefrontal cortex, insula, and dorsal striatum. Subtraction meta-analyses revealed that parts of the extended visual system and dorsal prefrontal cortex are more reliably responsive to smoking cues in deprived smokers than in non-deprived smokers, and that short-duration cues presented in event-related designs produce larger responses in the extended visual system than long-duration cues presented in blocked designs. The areas that were found to be responsive to smoking cues agree with theories of the neurobiology of cue reactivity, with two exceptions. First, there was a reliable cue reactivity effect in the precuneus, which is not typically considered a brain region important to addiction. Second, we found no significant effect in the nucleus accumbens, an area that plays a critical role in addiction, but this effect may have been due to technical difficulties associated with measuring fMRI data in that region. The results of this meta-analysis suggest that the extended visual system should receive more attention in future studies of smoking cue reactivity. PMID:22206965
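
    Activation likelihood estimation, the meta-analytic technique used in this record, models each reported activation focus as a three-dimensional Gaussian probability distribution and combines the resulting per-study maps voxel by voxel. The simplified sketch below shows that core computation; the grid size, smoothing width, and coordinates are illustrative assumptions, not values from the study.

      import numpy as np

      def ale_map(studies, shape=(20, 20, 20), sigma=2.0):
          """Toy activation likelihood estimation (ALE) map.

          studies -- list of arrays of (x, y, z) voxel coordinates, one array per study
          shape   -- voxel grid dimensions (illustrative)
          sigma   -- Gaussian spread in voxels (illustrative)
          """
          grid = np.indices(shape).reshape(3, -1).T          # all voxel coordinates
          ale = np.zeros(np.prod(shape))
          for foci in studies:
              # Modelled activation map for one study: union of Gaussians around its foci.
              study_map = np.zeros(np.prod(shape))
              for focus in foci:
                  d2 = np.sum((grid - np.asarray(focus)) ** 2, axis=1)
                  p = np.exp(-d2 / (2.0 * sigma ** 2))
                  study_map = 1.0 - (1.0 - study_map) * (1.0 - p)   # probabilistic union
              # Combine studies with the same union rule to obtain the ALE score.
              ale = 1.0 - (1.0 - ale) * (1.0 - study_map)
          return ale.reshape(shape)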

  7. Global processing in amblyopia: a review

    PubMed Central

    Hamm, Lisa M.; Black, Joanna; Dai, Shuan; Thompson, Benjamin

    2014-01-01

    Amblyopia is a neurodevelopmental disorder of the visual system that is associated with disrupted binocular vision during early childhood. There is evidence that the effects of amblyopia extend beyond the primary visual cortex to regions of the dorsal and ventral extra-striate visual cortex involved in visual integration. Here, we review the current literature on global processing deficits in observers with either strabismic, anisometropic, or deprivation amblyopia. A range of global processing tasks have been used to investigate the extent of the cortical deficit in amblyopia including: global motion perception, global form perception, face perception, and biological motion. These tasks appear to be differentially affected by amblyopia. In general, observers with unilateral amblyopia appear to show deficits for local spatial processing and global tasks that require the segregation of signal from noise. In bilateral cases, the global processing deficits are exaggerated, and appear to extend to specialized perceptual systems such as those involved in face processing. PMID:24987383

  8. A knowledge based system for scientific data visualization

    NASA Technical Reports Server (NTRS)

    Senay, Hikmet; Ignatius, Eve

    1992-01-01

    A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception, the system also allows users to interactively modify the design, and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques having applicability in earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis and medical imaging.

  9. Scientific Visualization and Computational Science: Natural Partners

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Scientific visualization is developing rapidly, stimulated by computational science, which is gaining acceptance as a third alternative to theory and experiment. Computational science is based on numerical simulations of mathematical models derived from theory. But each individual simulation is like a hypothetical experiment; initial conditions are specified, and the result is a record of the observed conditions. Experiments can be simulated for situations that cannot really be created or controlled. Results impossible to measure can be computed. Even for observable values, computed samples are typically much denser. Numerical simulations also extend scientific exploration where the mathematics is analytically intractable. Numerical simulations are used to study phenomena from subatomic to intergalactic scales and from abstract mathematical structures to pragmatic engineering of everyday objects. But computational science methods would be almost useless without visualization. The obvious reason is that the huge amounts of data produced require the high bandwidth of the human visual system, and interactivity adds to the power. Visualization systems also provide a single context for all the activities involved from debugging the simulations, to exploring the data, to communicating the results. Most of the presentations today have their roots in image processing, where the fundamental task is: Given an image, extract information about the scene. Visualization has developed from computer graphics, and the inverse task: Given a scene description, make an image. Visualization extends the graphics paradigm by expanding the possible input. The goal is still to produce images; the difficulty is that the input is not a scene description displayable by standard graphics methods. Visualization techniques must either transform the data into a scene description or extend graphics techniques to display this odd input. Computational science is a fertile field for visualization research because the results vary so widely and include things that have no known appearance. The amount of data creates additional challenges for both hardware and software systems. Evaluations of visualization should ultimately reflect the insight gained into the scientific phenomena. So making good visualizations requires consideration of characteristics of the user and the purpose of the visualization. Knowledge about human perception and graphic design is also relevant. It is this breadth of knowledge that stimulates proposals for multidisciplinary visualization teams and intelligent visualization assistant software. Visualization is an immature field, but computational science is stimulating research on a broad front.

  10. Extending human perception of electromagnetic radiation to the UV region through biologically inspired photochromic fuzzy logic (BIPFUL) systems.

    PubMed

    Gentili, Pier Luigi; Rightler, Amanda L; Heron, B Mark; Gabbutt, Christopher D

    2016-01-25

    Photochromic fuzzy logic systems have been designed that extend human visual perception into the UV region. The systems are founded on a detailed knowledge of the activation wavelengths and quantum yields of a series of thermally reversible photochromic compounds. By appropriate matching of the photochromic behaviour, unique colour signatures are generated in response to differing UV activation frequencies.

  11. RGB-D SLAM Combining Visual Odometry and Extended Information Filter

    PubMed Central

    Zhang, Heng; Liu, Yanli; Tan, Jindong; Xiong, Naixue

    2015-01-01

    In this paper, we present a novel RGB-D SLAM system based on visual odometry and an extended information filter, which does not require any other sensors or odometry. In contrast to the graph optimization approaches, this is more suitable for online applications. A visual dead reckoning algorithm based on visual residuals is devised, which is used to estimate motion control input. In addition, we use a novel descriptor called binary robust appearance and normals descriptor (BRAND) to extract features from the RGB-D frame and use them as landmarks. Furthermore, considering both the 3D positions and the BRAND descriptors of the landmarks, our observation model avoids explicit data association between the observations and the map by marginalizing the observation likelihood over all possible associations. Experimental validation is provided, which compares the proposed RGB-D SLAM algorithm with just RGB-D visual odometry and a graph-based RGB-D SLAM algorithm using the publicly-available RGB-D dataset. The results of the experiments demonstrate that our system is quicker than the graph-based RGB-D SLAM algorithm. PMID:26263990
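
    The extended information filter used in this record maintains the state estimate in information form (an information matrix and information vector) rather than as a mean and covariance, which makes the measurement update an additive correction. The generic update step is sketched below under assumed variable names; it is not the authors' full SLAM formulation with marginalized data association.

      import numpy as np

      def eif_measurement_update(Omega, xi, z, h, H, R):
          """Generic extended information filter measurement update (sketch).

          Omega -- information matrix (inverse covariance)
          xi    -- information vector, xi = Omega @ mu
          z     -- observation vector
          h     -- predicted observation h(mu) evaluated at the current mean
          H     -- Jacobian of the observation model at the current mean
          R     -- observation noise covariance
          """
          mu = np.linalg.solve(Omega, xi)          # recover the mean when needed
          R_inv = np.linalg.inv(R)
          # In information form the measurement update is additive.
          Omega_new = Omega + H.T @ R_inv @ H
          xi_new = xi + H.T @ R_inv @ (z - h + H @ mu)
          return Omega_new, xi_new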

  12. Value-driven attentional capture in the auditory domain.

    PubMed

    Anderson, Brian A

    2016-01-01

    It is now well established that the visual attention system is shaped by reward learning. When visual features are associated with a reward outcome, they acquire high priority and can automatically capture visual attention. To date, evidence for value-driven attentional capture has been limited entirely to the visual system. In the present study, I demonstrate that previously reward-associated sounds also capture attention, interfering more strongly with the performance of a visual task. This finding suggests that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value-driven attention can bias cross-modal stimulus competition.

  13. Ergodic theory and visualization. II. Fourier mesochronic plots visualize (quasi)periodic sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levnajić, Zoran; Mezić, Igor

    We present an application and analysis of a visualization method for measure-preserving dynamical systems introduced by I. Mezić and A. Banaszuk [Physica D 197, 101 (2004)], based on frequency analysis and Koopman operator theory. This extends our earlier work on visualization of ergodic partition [Z. Levnajić and I. Mezić, Chaos 20, 033114 (2010)]. Our method employs the concept of Fourier time average [I. Mezić and A. Banaszuk, Physica D 197, 101 (2004)], and is realized as a computational algorithm for the visualization of periodic and quasi-periodic sets in the phase space. The complement of the periodic phase space partition contains the chaotic zone, and we show how to identify it. The range of the method's applicability is illustrated using the well-known Chirikov standard map, while its potential in illuminating higher-dimensional dynamics is presented by studying the Froeschlé map and the Extended Standard Map.

  14. Ergodic theory and visualization. II. Fourier mesochronic plots visualize (quasi)periodic sets.

    PubMed

    Levnajić, Zoran; Mezić, Igor

    2015-05-01

    We present an application and analysis of a visualization method for measure-preserving dynamical systems introduced by I. Mezić and A. Banaszuk [Physica D 197, 101 (2004)], based on frequency analysis and Koopman operator theory. This extends our earlier work on visualization of ergodic partition [Z. Levnajić and I. Mezić, Chaos 20, 033114 (2010)]. Our method employs the concept of Fourier time average [I. Mezić and A. Banaszuk, Physica D 197, 101 (2004)], and is realized as a computational algorithm for the visualization of periodic and quasi-periodic sets in the phase space. The complement of the periodic phase space partition contains the chaotic zone, and we show how to identify it. The range of the method's applicability is illustrated using the well-known Chirikov standard map, while its potential in illuminating higher-dimensional dynamics is presented by studying the Froeschlé map and the Extended Standard Map.
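
    The Fourier time average underlying these mesochronic plots is, for an observable f and map T, the limit of (1/N) * sum over n of exp(i*2*pi*omega*n) * f(T^n x) along a trajectory started at x. A small numerical sketch for the Chirikov standard map follows; the observable, map parameter, and iteration count are illustrative choices, not those of the paper.

      import numpy as np

      def standard_map(theta, p, K=0.9):
          """One iteration of the Chirikov standard map on the torus [0, 2*pi)^2."""
          p_next = (p + K * np.sin(theta)) % (2.0 * np.pi)
          theta_next = (theta + p_next) % (2.0 * np.pi)
          return theta_next, p_next

      def fourier_time_average(theta0, p0, omega, n_iter=10000, K=0.9):
          """Finite-time Fourier average of the illustrative observable f = cos(theta)."""
          theta, p = theta0, p0
          total = 0.0 + 0.0j
          for n in range(n_iter):
              total += np.exp(2j * np.pi * omega * n) * np.cos(theta)
              theta, p = standard_map(theta, p, K)
          return total / n_iter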

  15. ParaView visualization of Abaqus output on the mechanical deformation of complex microstructures

    NASA Astrophysics Data System (ADS)

    Liu, Qingbin; Li, Jiang; Liu, Jie

    2017-02-01

    Abaqus® is a popular software suite for finite element analysis. It delivers linear and nonlinear analyses of mechanical and fluid dynamics, including multi-body systems and multi-physics coupling. However, the visualization capability of Abaqus using its CAE module is limited. Models from microtomography have extremely complicated structures, and datasets of Abaqus output are huge, requiring a visualization tool more powerful than Abaqus/CAE. We convert Abaqus output into the XML-based VTK format by developing a Python script and then using ParaView to visualize the results. Such capabilities as volume rendering, tensor glyphs, superior animation and other filters allow ParaView to offer excellent visual presentations. ParaView's parallel visualization makes it possible to visualize very large datasets. To support full parallel visualization, the Python script achieves data partitioning by reorganizing all nodes, elements and the corresponding results on those nodes and elements. The data partition scheme minimizes data redundancy and works efficiently. Given its good readability and extendibility, the script can be extended to process other kinds of problems in Abaqus. We share the script with Abaqus users on GitHub.
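
    The conversion step described in this record amounts to writing nodes, element connectivity, and nodal results into the XML-based VTK unstructured-grid (.vtu) format that ParaView reads. The authors' actual script is published on GitHub; the fragment below is only a minimal illustration of that file structure for linear tetrahedra (VTK cell type 10), with hypothetical input arrays.

      def write_vtu(filename, nodes, elements, displacement):
          """Write nodes, tetrahedral connectivity and a nodal vector field to a .vtu file.

          nodes        -- (n_nodes, 3) float array of coordinates
          elements     -- (n_cells, 4) int array of zero-based node indices (tetrahedra)
          displacement -- (n_nodes, 3) float array of a nodal result, e.g. displacement
          """
          n_nodes, n_cells = len(nodes), len(elements)
          with open(filename, "w") as f:
              f.write('<VTKFile type="UnstructuredGrid" version="0.1" byte_order="LittleEndian">\n')
              f.write('  <UnstructuredGrid>\n')
              f.write(f'    <Piece NumberOfPoints="{n_nodes}" NumberOfCells="{n_cells}">\n')
              f.write('      <Points>\n        <DataArray type="Float64" NumberOfComponents="3" format="ascii">\n')
              for x, y, z in nodes:
                  f.write(f'          {x} {y} {z}\n')
              f.write('        </DataArray>\n      </Points>\n')
              f.write('      <Cells>\n        <DataArray type="Int64" Name="connectivity" format="ascii">\n')
              for cell in elements:
                  f.write('          ' + ' '.join(str(i) for i in cell) + '\n')
              f.write('        </DataArray>\n        <DataArray type="Int64" Name="offsets" format="ascii">\n')
              f.write('          ' + ' '.join(str(4 * (i + 1)) for i in range(n_cells)) + '\n')
              f.write('        </DataArray>\n        <DataArray type="UInt8" Name="types" format="ascii">\n')
              f.write('          ' + ' '.join('10' for _ in range(n_cells)) + '\n')   # 10 = linear tetrahedron
              f.write('        </DataArray>\n      </Cells>\n')
              f.write('      <PointData Vectors="displacement">\n')
              f.write('        <DataArray type="Float64" Name="displacement" NumberOfComponents="3" format="ascii">\n')
              for u in displacement:
                  f.write('          ' + ' '.join(str(v) for v in u) + '\n')
              f.write('        </DataArray>\n      </PointData>\n')
              f.write('    </Piece>\n  </UnstructuredGrid>\n</VTKFile>\n')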

  16. Conjunctive Coding of Complex Object Features

    PubMed Central

    Erez, Jonathan; Cusack, Rhodri; Kendall, William; Barense, Morgan D.

    2016-01-01

    Critical to perceiving an object is the ability to bind its constituent features into a cohesive representation, yet the manner by which the visual system integrates object features to yield a unified percept remains unknown. Here, we present a novel application of multivoxel pattern analysis of neuroimaging data that allows a direct investigation of whether neural representations integrate object features into a whole that is different from the sum of its parts. We found that patterns of activity throughout the ventral visual stream (VVS), extending anteriorly into the perirhinal cortex (PRC), discriminated between the same features combined into different objects. Despite this sensitivity to the unique conjunctions of features comprising objects, activity in regions of the VVS, again extending into the PRC, was invariant to the viewpoints from which the conjunctions were presented. These results suggest that the manner in which our visual system processes complex objects depends on the explicit coding of the conjunctions of features comprising them. PMID:25921583

  17. Understanding visualization: a formal approach using category theory and semiotics.

    PubMed

    Vickers, Paul; Faith, Joe; Rossiter, Nick

    2013-06-01

    This paper combines the vocabulary of semiotics and category theory to provide a formal analysis of visualization. It shows how familiar processes of visualization fit the semiotic frameworks of both Saussure and Peirce, and extends these structures using the tools of category theory to provide a general framework for understanding visualization in practice, including: Relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. This paper generalizes previous work on the formal characterization of visualization by, inter alia, Ziemkiewicz and Kosara and allows us to formally distinguish properties of the visualization process that previous work does not.

  18. Integration of bio-inspired, control-based visual and olfactory data for the detection of an elusive target

    NASA Astrophysics Data System (ADS)

    Duong, Tuan A.; Duong, Nghi; Le, Duong

    2017-01-01

    In this paper, we present an integration technique using a bio-inspired, control-based visual and olfactory receptor system to search for elusive targets in practical environments where the targets cannot be clearly detected from either type of sensory data alone. The bio-inspired visual system is based on a model of the extended visual pathway, which consists of saccadic eye movements and the visual pathway (vertebrate retina, lateral geniculate nucleus and visual cortex), to enable powerful target detection from noisy, partial, and incomplete visual data. The olfactory receptor algorithm, namely spatial invariant independent component analysis, which was developed from olfactory receptor-electronic nose (enose) data from Caltech, is adopted to enable odorant target detection in an unknown environment. The integration of the two systems is a vital approach and sets up a cornerstone for effective, low-cost miniaturized UAVs or fly robots for future DOD and NASA missions, as well as for security systems in Internet of Things environments.

  19. Visualizing driving forces of spatially extended systems using the recurrence plot framework

    NASA Astrophysics Data System (ADS)

    Riedl, Maik; Marwan, Norbert; Kurths, Jürgen

    2017-12-01

    The increasing availability of highly resolved spatio-temporal data leads to new opportunities as well as challenges in many scientific disciplines such as climatology, ecology or epidemiology. This allows more detailed insights into the investigated spatially extended systems. However, this development needs advanced techniques of data analysis which go beyond standard linear tools since the more precise consideration often reveals nonlinear phenomena, for example threshold effects. One of these tools is the recurrence plot approach which has been successfully applied to the description of complex systems. Using this technique's power of visualization, we propose the analysis of the local minima of the underlying distance matrix in order to display driving forces of spatially extended systems. The potential of this novel idea is demonstrated by the analysis of the chlorophyll concentration and the sea surface temperature in the Southern California Bight. We are able not only to confirm the influence of El Niño events on the phytoplankton growth in this region but also to confirm two discussed regime shifts in the California current system. This new finding underlines the power of the proposed approach and promises new insights into other complex systems.
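
    The recurrence plot framework applied in this record starts from the pairwise distance matrix of the observed (possibly embedded) states; recurrences are the entries that fall below a threshold, and the proposed extension analyzes the local minima of that matrix. A minimal sketch of the basic construction is shown below, with the threshold value as an arbitrary assumption.

      import numpy as np

      def recurrence_plot(x, threshold=0.1):
          """Distance matrix and thresholded recurrence plot of a time series.

          x -- array of shape (n_samples,) or (n_samples, dim) of state vectors
          """
          x = np.atleast_2d(np.asarray(x, dtype=float))
          if x.shape[0] == 1:
              x = x.T                                   # treat a 1D series as column vectors
          # Pairwise Euclidean distance matrix D[i, j] = ||x_i - x_j||.
          diff = x[:, None, :] - x[None, :, :]
          D = np.sqrt((diff ** 2).sum(axis=-1))
          # Recurrence matrix: 1 where states are closer than the threshold.
          R = (D <= threshold).astype(int)
          return D, R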

  20. Visual Image Sensor Organ Replacement: Implementation

    NASA Technical Reports Server (NTRS)

    Maluf, A. David (Inventor)

    2011-01-01

    Method and system for enhancing or extending visual representation of a selected region of a visual image, where visual representation is interfered with or distorted, by supplementing a visual signal with at least one audio signal having one or more audio signal parameters that represent one or more visual image parameters, such as vertical and/or horizontal location of the region; region brightness; dominant wavelength range of the region; change in a parameter value that characterizes the visual image, with respect to a reference parameter value; and time rate of change in a parameter value that characterizes the visual image. Region dimensions can be changed to emphasize change with time of a visual image parameter.
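
    The sensory substitution described in this record maps visual parameters of a selected image region onto audio signal parameters. The toy mapping below illustrates the idea only; the specific assignments, ranges, and function names are invented for this sketch and are not the patented method.

      def region_to_audio(x_frac, y_frac, brightness, wavelength_nm):
          """Map visual region parameters onto simple audio parameters (illustrative).

          x_frac, y_frac -- horizontal/vertical location in the image, each in [0, 1]
          brightness     -- region brightness in [0, 1]
          wavelength_nm  -- dominant wavelength of the region, roughly 400-700 nm
          """
          pan = 2.0 * x_frac - 1.0                          # left/right stereo position
          pitch_hz = 220.0 * 2.0 ** (2.0 * (1.0 - y_frac))  # higher in the image -> higher pitch
          loudness = brightness                              # brighter region -> louder tone
          # Shorter (bluer) wavelengths modulate the tone with a faster tremolo.
          tremolo_hz = 1.0 + 10.0 * (700.0 - wavelength_nm) / 300.0
          return {"pan": pan, "pitch_hz": pitch_hz, "loudness": loudness, "tremolo_hz": tremolo_hz}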

  1. Signal-to-noise ratio for the wide field-planetary camera of the Space Telescope

    NASA Technical Reports Server (NTRS)

    Zissa, D. E.

    1984-01-01

    Signal-to-noise ratios for the Wide Field Camera and Planetary Camera of the Space Telescope were calculated as a function of integration time. Models of the optical systems and CCD detector arrays were used with a 27th visual magnitude point source and a 25th visual magnitude per arc-sq. second extended source. A 23rd visual magnitude per arc-sq. second background was assumed. The models predicted signal-to-noise ratios of 10 within 4 hours for the point source centered on a single pixel. Signal-to-noise ratios approaching 10 are estimated for approximately 0.25 x 0.25 arc-second areas within the extended source after 10 hours integration.
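
    Calculations of this kind typically follow the standard CCD noise budget, in which the accumulated signal grows linearly with integration time while shot noise from the source, sky background, and dark current grows as its square root, together with a fixed read-noise term per pixel. The generic relation is sketched below; the variable names and the inclusion of dark current and read noise are assumptions, and the report's exact model may differ.

      import numpy as np

      def ccd_snr(source_rate, background_rate, dark_rate, read_noise, n_pixels, t):
          """Generic CCD signal-to-noise ratio for a source measured over n_pixels.

          source_rate     -- detected source electrons per second (summed over n_pixels)
          background_rate -- sky background electrons per second per pixel
          dark_rate       -- dark current electrons per second per pixel
          read_noise      -- read noise in electrons (rms) per pixel
          n_pixels        -- number of pixels over which the source is measured
          t               -- integration time in seconds
          """
          signal = source_rate * t
          noise = np.sqrt(signal
                          + n_pixels * background_rate * t
                          + n_pixels * dark_rate * t
                          + n_pixels * read_noise ** 2)
          return signal / noise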

  2. Monocular Visual Odometry Based on Trifocal Tensor Constraint

    NASA Astrophysics Data System (ADS)

    Chen, Y. J.; Yang, G. L.; Jiang, Y. X.; Liu, X. Y.

    2018-02-01

    For the problem of real-time precise localization in the urban street, a monocular visual odometry based on extended Kalman fusion of optical-flow tracking and a trifocal tensor constraint is proposed. To diminish the influence of moving objects, such as pedestrians, we estimate the motion of the camera by extracting the features on the ground, which improves the robustness of the system. The observation equation based on the trifocal tensor constraint is derived, which can form the Kalman filter along with the state transition equation. An extended Kalman filter is employed to cope with the nonlinear system. Experimental results demonstrate that, compared with Yu’s 2-step EKF method, the algorithm is more accurate and meets the needs of real-time accurate localization in cities.
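
    The fusion described in this record runs an extended Kalman filter whose observation equation is derived from the trifocal tensor constraint. That constraint is beyond a short sketch, but the generic EKF predict/update cycle the abstract refers to has the familiar form below; all models, Jacobians, and noise terms are placeholders to be supplied by the application.

      import numpy as np

      def ekf_step(mu, P, u, z, f, F, h, H, Q, R):
          """One generic extended Kalman filter predict/update cycle (sketch).

          mu, P -- prior state mean and covariance
          u, z  -- control input and measurement
          f, h  -- nonlinear process and observation models (callables)
          F, H  -- their Jacobians evaluated at the current estimate
          Q, R  -- process and observation noise covariances
          """
          # Predict with the nonlinear process model, propagate covariance linearly.
          mu_pred = f(mu, u)
          P_pred = F @ P @ F.T + Q
          # Update with the nonlinear observation model.
          y = z - h(mu_pred)                       # innovation
          S = H @ P_pred @ H.T + R                 # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
          mu_new = mu_pred + K @ y
          P_new = (np.eye(len(mu)) - K @ H) @ P_pred
          return mu_new, P_new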

  3. Computer-based visual communication in aphasia.

    PubMed

    Steele, R D; Weinrich, M; Wertz, R T; Kleczewska, M K; Carlson, G S

    1989-01-01

    The authors describe their recently developed Computer-aided VIsual Communication (C-VIC) system, and report results of single-subject experimental designs probing its use with five chronic, severely impaired aphasic individuals. Studies replicate earlier results obtained with a non-computerized system, demonstrate patient competence with the computer implementation, extend the system's utility, and identify promising areas of application. Results of the single-subject experimental designs clarify patients' learning, generalization, and retention patterns, and highlight areas of performance difficulties. Future directions for the project are indicated.

  4. Neural Summation in the Hawkmoth Visual System Extends the Limits of Vision in Dim Light.

    PubMed

    Stöckl, Anna Lisa; O'Carroll, David Charles; Warrant, Eric James

    2016-03-21

    Most of the world's animals are active in dim light and depend on good vision for the tasks of daily life. Many have evolved visual adaptations that permit a performance superior to that of manmade imaging devices [1]. In insects, a major model visual system, nocturnal species show impressive visual abilities ranging from flight control [2, 3], to color discrimination [4, 5], to navigation using visual landmarks [6-8] or dim celestial compass cues [9, 10]. In addition to optical adaptations that improve their sensitivity in dim light [11], neural summation of light in space and time-which enhances the coarser and slower features of the scene at the expense of noisier finer and faster features-has been suggested to improve sensitivity in theoretical [12-14], anatomical [15-17], and behavioral [18-20] studies. How these summation strategies function neurally is, however, presently unknown. Here, we quantified spatial and temporal summation in the motion vision pathway of a nocturnal hawkmoth. We show that spatial and temporal summation combine supralinearly to substantially increase contrast sensitivity and visual information rate over four decades of light intensity, enabling hawkmoths to see at light levels 100 times dimmer than without summation. Our results reveal how visual motion is calculated neurally in dim light and how spatial and temporal summation improve sensitivity while simultaneously maximizing spatial and temporal resolution, thus extending models of insect motion vision derived predominantly from diurnal flies. Moreover, the summation strategies we have revealed may benefit manmade vision systems optimized for variable light levels [21]. Copyright © 2016 Elsevier Ltd. All rights reserved.
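
    Spatial and temporal summation of the kind quantified in this study are often modelled as low-pass filtering of the photoreceptor signal: pooling over neighbouring points in space and averaging over recent time, trading resolution for sensitivity in dim light. The toy illustration below follows that idea; the kernel widths and filter form are arbitrary assumptions, not the fitted hawkmoth values.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def summed_response(frames, spatial_sigma=2.0, temporal_tau=3.0):
          """Apply spatial then temporal summation to a noisy image sequence.

          frames        -- array of shape (n_frames, height, width)
          spatial_sigma -- spatial pooling width in pixels (illustrative)
          temporal_tau  -- time constant of the temporal filter in frames (illustrative)
          """
          # Spatial summation: pool signal over neighbouring pixels in every frame.
          pooled = np.stack([gaussian_filter(f, sigma=spatial_sigma) for f in frames])
          # Temporal summation: exponentially weighted running average across frames.
          out = np.empty_like(pooled)
          acc = pooled[0]
          alpha = 1.0 / temporal_tau
          for t in range(len(pooled)):
              acc = (1.0 - alpha) * acc + alpha * pooled[t]
              out[t] = acc
          return out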

  5. FGF/FGFR Signal Induces Trachea Extension in the Drosophila Visual System

    PubMed Central

    Chu, Wei-Chen; Lee, Yuan-Ming; Henry Sun, Yi

    2013-01-01

    The Drosophila compound eye is a large sensory organ that places a high demand on oxygen supplied by the tracheal system. Although the development and function of the Drosophila visual system has been extensively studied, the development and contribution of its tracheal system has not been systematically examined. To address this issue, we studied the tracheal patterns and developmental process in the Drosophila visual system. We found that the retinal tracheae are derived from air sacs in the head, and the ingrowth of the retinal tracheae begins at the mid-pupal stage. The tracheal development has three stages. First, the air sacs form near the optic lobe in 42-47% of pupal development (pd). Second, in 47-52% pd, the air sacs extend branches along the base of the retina following a posterior-to-anterior direction and further form the tracheal network under the fenestrated membrane (TNUFM). Third, the TNUFM extends fine branches into the retina following a proximal-to-distal direction after 60% pd. Furthermore, we found that the trachea extension in both the retina and the TNUFM is dependent on FGF(Bnl)/FGFR(Btl) signaling. Our results also provided strong evidence that the photoreceptors are the source of the Bnl ligand that guides the trachea ingrowth. Our work is the first systematic study of tracheal development in the visual system, and also the first study demonstrating the interactions of two well-studied systems: the eye and the trachea. PMID:23991208

  6. The onset of visual experience gates auditory cortex critical periods

    PubMed Central

    Mowery, Todd M.; Kotak, Vibhakar C.; Sanes, Dan H.

    2016-01-01

    Sensory systems influence one another during development and deprivation can lead to cross-modal plasticity. As auditory function begins before vision, we investigate the effect of manipulating visual experience during auditory cortex critical periods (CPs) by assessing the influence of early, normal and delayed eyelid opening on hearing loss-induced changes to membrane and inhibitory synaptic properties. Early eyelid opening closes the auditory cortex CPs precociously and dark rearing prevents this effect. In contrast, delayed eyelid opening extends the auditory cortex CPs by several additional days. The CP for recovery from hearing loss is also closed prematurely by early eyelid opening and extended by delayed eyelid opening. Furthermore, when coupled with transient hearing loss that animals normally fully recover from, very early visual experience leads to inhibitory deficits that persist into adulthood. Finally, we demonstrate a functional projection from the visual to auditory cortex that could mediate these effects. PMID:26786281

  7. Visual completion from 2D cross-sections: Implications for visual theory and STEM education and practice.

    PubMed

    Gagnier, Kristin Michod; Shipley, Thomas F

    2016-01-01

    Accurately inferring three-dimensional (3D) structure from only a cross-section through that structure is not possible. However, many observers seem to be unaware of this fact. We present evidence for a 3D amodal completion process that may explain this phenomenon and provide new insights into how the perceptual system processes 3D structures. Across four experiments, observers viewed cross-sections of common objects and reported whether regions visible on the surface extended into the object. If they reported that the region extended, they were asked to indicate the orientation of extension or that the 3D shape was unknowable from the cross-section. Across Experiments 1, 2, and 3, participants frequently inferred 3D forms from surface views, showing a specific prior to report that regions in the cross-section extend straight back into the object, with little variance in orientation. In Experiment 3, we examined whether 3D visual inferences made from cross-sections are similar to other cases of amodal completion by examining how the inferences were influenced by observers' knowledge of the objects. Finally, in Experiment 4, we demonstrate that these systematic visual inferences are unlikely to result from demand characteristics or response biases. We argue that these 3D visual inferences have been largely unrecognized by the perception community, and have implications for models of 3D visual completion and science education.

  8. Integrating visual learning within a model-based ATR system

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark; Nebrich, Mark

    2017-05-01

    Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.

  9. NetCDF-CF: Supporting Earth System Science with Data Access, Analysis, and Visualization

    NASA Astrophysics Data System (ADS)

    Davis, E.; Zender, C. S.; Arctur, D. K.; O'Brien, K.; Jelenak, A.; Santek, D.; Dixon, M. J.; Whiteaker, T. L.; Yang, K.

    2017-12-01

    NetCDF-CF is a community-developed convention for storing and describing earth system science data in the netCDF binary data format. It is an OGC-recognized standard, and numerous existing FOSS (Free and Open Source Software) and commercial software tools can explore, analyze, and visualize data that is stored and described as netCDF-CF data. To better support a larger segment of the earth system science community, a number of efforts are underway to extend the netCDF-CF convention with the goal of increasing the types of data that can be represented as netCDF-CF data. This presentation will provide an overview and update of work to extend the existing netCDF-CF convention. It will detail the types of earth system science data currently supported by netCDF-CF and the types of data targeted for support by current netCDF-CF convention development efforts. It will also describe some of the tools that support the use of netCDF-CF compliant datasets, the types of data they support, and efforts to extend them to handle the new data types that netCDF-CF will support.
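
    For readers unfamiliar with the convention, a CF-compliant netCDF file is an ordinary netCDF file whose variables carry standardized metadata (units, standard_name, coordinate variables) plus a global Conventions attribute. A minimal sketch using the netCDF4 Python package follows; the file name, variable names, and values are invented for illustration.

      import numpy as np
      from netCDF4 import Dataset

      # Create a small CF-style file with one time step of sea surface temperature.
      with Dataset("example_sst.nc", "w") as nc:
          nc.Conventions = "CF-1.8"                 # declare the convention version
          nc.title = "Illustrative CF-compliant dataset"

          nc.createDimension("time", None)
          nc.createDimension("lat", 3)
          nc.createDimension("lon", 4)

          time = nc.createVariable("time", "f8", ("time",))
          time.units = "days since 2000-01-01 00:00:00"
          time.standard_name = "time"

          lat = nc.createVariable("lat", "f4", ("lat",))
          lat.units = "degrees_north"
          lat.standard_name = "latitude"

          lon = nc.createVariable("lon", "f4", ("lon",))
          lon.units = "degrees_east"
          lon.standard_name = "longitude"

          sst = nc.createVariable("sst", "f4", ("time", "lat", "lon"))
          sst.units = "K"
          sst.standard_name = "sea_surface_temperature"

          time[:] = [0.0]
          lat[:] = [10.0, 20.0, 30.0]
          lon[:] = [100.0, 110.0, 120.0, 130.0]
          sst[0, :, :] = 290.0 + np.random.rand(3, 4)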

  10. Extended Visual Glances Away from the Roadway are Associated with ADHD- and Texting-Related Driving Performance Deficits in Adolescents.

    PubMed

    Kingery, Kathleen M; Narad, Megan; Garner, Annie A; Antonini, Tanya N; Tamm, Leanne; Epstein, Jeffery N

    2015-08-01

    The purpose of the research study was to determine whether ADHD- and texting-related driving impairments are mediated by extended visual glances away from the roadway. Sixty-one adolescents (ADHD =28, non-ADHD =33; 62% male; 11% minority) aged 16-17 with a valid driver's license were videotaped while engaging in a driving simulation that included a No Distraction, Hands-Free Phone Conversation, and Texting condition. Two indicators of visual inattention were coded: 1) percentage of time with eyes diverted from the roadway; and 2) number of extended (greater than 2 s) visual glances away from the roadway. Adolescents with ADHD displayed significantly more visual inattention to the roadway on both visual inattention measures. Increased lane position variability among adolescents with ADHD compared to those without ADHD during the Hands-Free Phone Conversation and Texting conditions was mediated by an increased number of extended glances away from the roadway. Similarly, texting resulted in decreased visual attention to the roadway. Finally, increased lane position variability during texting was also mediated by the number of extended glances away from the roadway. Both ADHD and texting impair visual attention to the roadway and the consequence of this visual inattention is increased lane position variability. Visual inattention is implicated as a possible mechanism for ADHD- and texting-related deficits and suggests that driving interventions designed to address ADHD- or texting-related deficits in adolescents need to focus on decreasing extended glances away from the roadway.

  11. Object-processing neural efficiency differentiates object from spatial visualizers.

    PubMed

    Motes, Michael A; Malach, Rafael; Kozhevnikov, Maria

    2008-11-19

    The visual system processes object properties and spatial properties in distinct subsystems, and we hypothesized that this distinction might extend to individual differences in visual processing. We conducted a functional MRI study investigating the neural underpinnings of individual differences in object versus spatial visual processing. Nine participants of high object-processing ability ('object' visualizers) and eight participants of high spatial-processing ability ('spatial' visualizers) were scanned, while they performed an object-processing task. Object visualizers showed lower bilateral neural activity in lateral occipital complex and lower right-lateralized neural activity in dorsolateral prefrontal cortex. The data indicate that high object-processing ability is associated with more efficient use of visual-object resources, resulting in less neural activity in the object-processing pathway.

  12. Transient visual responses reset the phase of low-frequency oscillations in the skeletomotor periphery.

    PubMed

    Wood, Daniel K; Gu, Chao; Corneil, Brian D; Gribble, Paul L; Goodale, Melvyn A

    2015-08-01

    We recorded muscle activity from an upper limb muscle while human subjects reached towards peripheral targets. We tested the hypothesis that the transient visual response sweeps not only through the central nervous system, but also through the peripheral nervous system. Like the transient visual response in the central nervous system, stimulus-locked muscle responses (< 100 ms) were sensitive to stimulus contrast, and were temporally and spatially dissociable from voluntary orienting activity. Also, the arrival of visual responses reduced the variability of muscle activity by resetting the phase of ongoing low-frequency oscillations. This latter finding critically extends the emerging evidence that the feedforward visual sweep reduces neural variability via phase resetting. We conclude that, when sensory information is relevant to a particular effector, detailed information about the sensorimotor transformation, even from the earliest stages, is found in the peripheral nervous system. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  13. The visual system in migraine: from the bench side to the office.

    PubMed

    Kowacs, Pedro A; Utiumi, Marco A; Piovesan, Elcio J

    2015-02-01

    Throughout history, migraine-associated visual symptoms have puzzled patients, doctors, and neuroscientists. The visual aspects of migraine extend far beyond the aura phenomena, and have several clinical implications. A narrative review was conducted, beginning with migraine mechanisms, then followed by pertinent aspects of the anatomy of visual pathways, clinical features, implications of the visual system on therapy, migraine in visually impaired populations, treatment of visual auras and ocular (retinal) migraine, effect of prophylactic migraine treatments on visual aura, visual symptoms induced by anti-migraine or anti-headache drugs, and differential diagnosis. A comprehensive narrative review from both basic and clinical standpoints on the visual aspects of migraine was attained; however, the results were biased toward providing useful information for the clinician. This paper achieved its goals of addressing and condensing information on the pathophysiology of the visual aspects of migraine and its clinical aspects, especially with regards to therapy, making it useful not only for those unfamiliar with the theme but to experienced physicians as well. © 2015 American Headache Society.

  14. Extended visual glances away from the roadway are associated with ADHD- and texting-related driving performance deficits in adolescents

    PubMed Central

    Kingery, Kathleen M.; Narad, Megan; Garner, Annie A.; Antonini, Tanya N.; Tamm, Leanne; Epstein, Jeffery N.

    2014-01-01

    The purpose of the research study was to determine whether ADHD- and texting-related driving impairments are mediated by extended visual glances away from the roadway. Sixty-one adolescents (ADHD = 28, non-ADHD = 33; 62% male; 11% minority) aged 16–17 with a valid driver’s license were videotaped while engaging in a driving simulation that included a No Distraction, Hands-Free Phone Conversation, and Texting condition. Two indicators of visual inattention were coded: 1) percentage of time with eyes diverted from the roadway; and 2) number of extended (greater than 2 seconds) visual glances away from the roadway. Adolescents with ADHD displayed significantly more visual inattention to the roadway on both visual inattention measures. Increased lane position variability among adolescents with ADHD compared to those without ADHD during the Hands-Free Phone Conversation and Texting conditions was mediated by an increased number of extended glances away from the roadway. Similarly, texting resulted in decreased visual attention to the roadway. Finally, increased lane position variability during texting was also mediated by the number of extended glances away from the roadway. Both ADHD and texting impair visual attention to the roadway and the consequence of this visual inattention is increased lane position variability. Visual inattention is implicated as a possible mechanism for ADHD- and texting-related deficits and suggests that driving interventions designed to address ADHD- or texting-related deficits in adolescents need to focus on decreasing extended glances away from the roadway. PMID:25416444

  15. Selecting and perceiving multiple visual objects

    PubMed Central

    Xu, Yaoda; Chun, Marvin M.

    2010-01-01

    To explain how multiple visual objects are attended and perceived, we propose that our visual system first selects a fixed number of about four objects from a crowded scene based on their spatial information (object individuation) and then encodes their details (object identification). We describe the involvement of the inferior intra-parietal sulcus (IPS) in object individuation and the superior IPS and higher visual areas in object identification. Our neural object-file theory synthesizes and extends existing ideas in visual cognition and is supported by behavioral and neuroimaging results. It provides a better understanding of the role of the different parietal areas in encoding visual objects and can explain various forms of capacity-limited processing in visual cognition such as working memory. PMID:19269882

  16. Moving to higher ground: The dynamic field theory and the dynamics of visual cognition

    PubMed Central

    Johnson, Jeffrey S.; Spencer, John P.; Schöner, Gregor

    2009-01-01

    In the present report, we describe a new dynamic field theory that captures the dynamics of visuo-spatial cognition. This theory grew out of the dynamic systems approach to motor control and development, and is grounded in neural principles. The initial application of dynamic field theory to issues in visuo-spatial cognition extended concepts of the motor approach to decision making in a sensori-motor context, and, more recently, to the dynamics of spatial cognition. Here we extend these concepts still further to address topics in visual cognition, including visual working memory for non-spatial object properties, the processes that underlie change detection, and the ‘binding problem’ in vision. In each case, we demonstrate that the general principles of the dynamic field approach can unify findings in the literature and generate novel predictions. We contend that the application of these concepts to visual cognition avoids the pitfalls of reductionist approaches in cognitive science, and points toward a formal integration of brains, bodies, and behavior. PMID:19173013
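
    The dynamic field theory outlined in this record rests on an Amari-style neural field equation, in which activation u(x,t) at field site x evolves under a resting level h, external input s(x,t), and lateral interactions: tau * du/dt = -u + h + s + integral of w(x - x') * sigma(u(x')) dx'. One Euler update step of a discretized 1D field is sketched below; the kernel shape and all parameter values are illustrative assumptions.

      import numpy as np

      def field_step(u, s, dx, dt=1.0, tau=10.0, h=-5.0,
                     c_exc=15.0, sigma_exc=5.0, c_inh=0.5):
          """One Euler step of a 1D Amari-style dynamic neural field (sketch).

          u  -- current activation over field positions
          s  -- external input at the same positions
          dx -- spacing between field positions
          """
          n = len(u)
          positions = np.arange(n) * dx
          dist = positions[:, None] - positions[None, :]
          # Local excitation minus global inhibition interaction kernel (assumed shape).
          w = c_exc * np.exp(-dist ** 2 / (2.0 * sigma_exc ** 2)) - c_inh
          rate = 1.0 / (1.0 + np.exp(-u))                # sigmoidal output nonlinearity
          interaction = w @ rate * dx                    # discretized integral term
          du = (-u + h + s + interaction) / tau
          return u + dt * du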

  17. Collaborative volume visualization with applications to underwater acoustic signal processing

    NASA Astrophysics Data System (ADS)

    Jarvis, Susan; Shane, Richard T.

    2000-08-01

    Distributed collaborative visualization systems represent a technology whose time has come. Researchers at the Fraunhofer Center for Research in Computer Graphics have been working in the areas of collaborative environments and high-end visualization systems for several years. The medical application, TeleInVivo, is an example of a system which marries visualization and collaboration. With TeleInVivo, users can exchange and collaboratively interact with volumetric data sets in geographically distributed locations. Since examination of many physical phenomena produces data that are naturally volumetric, the visualization frameworks used by TeleInVivo have been extended for non-medical applications. The system can now be made compatible with almost any dataset that can be expressed in terms of magnitudes within a 3D grid. Coupled with advances in telecommunications, telecollaborative visualization is now possible virtually anywhere. Expert data quality assurance and analysis can occur remotely and interactively without having to send all the experts into the field. Building upon this point-to-point concept of collaborative visualization, one can envision a larger pooling of resources to form a large overview of a region of interest from contributions of numerous distributed members.

  18. Technical note: real-time web-based wireless visual guidance system for radiotherapy.

    PubMed

    Lee, Danny; Kim, Siyong; Palta, Jatinder R; Kim, Taeho

    2017-06-01

    To describe a Web-based wireless visual guidance system that mitigates issues associated with hard-wired audio-visual aided patient interactive motion management systems that are cumbersome to use in routine clinical practice. Web-based wireless visual display duplicates an existing visual display of a respiratory-motion management system for visual guidance. The visual display of the existing system is sent to legacy Web clients over a private wireless network, thereby allowing a wireless setting for real-time visual guidance. In this study, an active breathing coordinator (ABC) trace was used as the input for the visual display, which was captured and transmitted to Web clients. Virtual reality goggles require two (left and right eye view) images for visual display. We investigated the performance of Web-based wireless visual guidance by quantifying (1) the network latency of visual displays between an ABC computer display and the Web clients of a laptop, an iPad mini 2 and an iPhone 6, and (2) the frame rate of visual display on the Web clients in frames per second (fps). The network latency of visual display between the ABC computer and Web clients was about 100 ms and the frame rate was 14.0 fps (laptop), 9.2 fps (iPad mini 2) and 11.2 fps (iPhone 6). In addition, visual display for virtual reality goggles was successfully shown on the iPhone 6 with 100 ms and 11.2 fps. High network security was maintained by utilizing the private network configuration. This study demonstrated that Web-based wireless visual guidance can be a promising technique for clinical motion management systems, which require real-time visual display of their outputs. Based on the results of this study, our approach has the potential to reduce clutter associated with wired systems, reduce space requirements, and extend the use of medical devices from static usage to interactive and dynamic usage in a radiotherapy treatment vault.

  19. Cognitive approaches for patterns analysis and security applications

    NASA Astrophysics Data System (ADS)

    Ogiela, Marek R.; Ogiela, Lidia

    2017-08-01

    In this paper, new opportunities will be presented for developing innovative solutions for semantic pattern classification and visual cryptography, based on cognitive and bio-inspired approaches. Such techniques can be used to evaluate the meaning of analyzed patterns or encrypted information, and allow that meaning to be incorporated into the classification task or encryption process. They also allow crypto-biometric solutions to be used to extend personalized cryptography methodologies based on visual pattern analysis. In particular, the application of cognitive information systems to the semantic analysis of different patterns will be presented, along with a novel application of such systems to visual secret sharing. Visual shares for divided information can be created based on a threshold procedure, which may depend on personal abilities to recognize some image details visible in the divided images.

  20. Comparative analysis of visual outcomes with 4 intraocular lenses: Monofocal, multifocal, and extended range of vision.

    PubMed

    Pedrotti, Emilio; Carones, Francesco; Aiello, Francesco; Mastropasqua, Rodolfo; Bruni, Enrico; Bonacci, Erika; Talli, Pietro; Nucci, Carlo; Mariotti, Cesare; Marchini, Giorgio

    2018-02-01

    To compare the visual acuity, refractive outcomes, and quality of vision in patients with bilateral implantation of 4 intraocular lenses (IOLs). Department of Neurosciences, Biomedicine and Movement Sciences, Eye Clinic, University of Verona, Verona, and Carones Ophthalmology Center, Milano, Italy. Prospective case series. The study included patients who had bilateral cataract surgery with the implantation of 1 of 4 IOLs as follows: Tecnis 1-piece monofocal (monofocal IOL), Tecnis Symfony extended range of vision (extended-range-of-vision IOL), Restor +2.5 diopter (D) (+2.5 D multifocal IOL), and Restor +3.0 D (+3.0 D multifocal IOL). Visual acuity, refractive outcome, defocus curve, objective optical quality, contrast sensitivity, spectacle independence, and glare perception were evaluated 6 months after surgery. The study comprised 185 patients. The extended-range-of-vision IOL (55 patients) showed better distance visual outcomes than the monofocal IOL (30 patients) and high-addition apodized diffractive-refractive multifocal IOLs (P ≤ .002). The +3.0 D multifocal IOL (50 patients) showed the best near visual outcomes (P < .001). The +2.5 D multifocal IOL (50 patients) and extended-range-of-vision IOL provided significantly better intermediate visual outcomes than the other 2 IOLs, with significantly better vision for a defocus level of -1.5 D (P < .001). Better spectacle independence was shown for the +2.5 D multifocal IOL and extended-range-of-vision IOL (P < .001). The extended-range-of-vision IOL and +2.5 D multifocal IOL provided significantly better intermediate visual restoration after cataract surgery than the monofocal IOL and +3.0 D multifocal IOL, with significantly better quality of vision with the extended-range-of-vision IOL. Copyright © 2018 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  1. Two-out-of-two color matching based visual cryptography schemes.

    PubMed

    Machizaud, Jacques; Fournel, Thierry

    2012-09-24

    Visual cryptography, which consists in sharing a secret message between transparencies, has been extended to color prints. In this paper, we propose a new visual cryptography scheme based on color matching. The stacked printed media reveal a uniformly colored message decoded by the human visual system. In contrast with the previous color visual cryptography schemes, the proposed one enables images to be shared without pixel expansion and a forgery to be detected, as the color of the message is kept secret. In order to correctly print the colors on the media and to increase the security of the scheme, we use spectral models developed for color reproduction describing printed colors from an optical point of view.
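
    For context, the classical two-out-of-two visual cryptography scheme of Naor and Shamir encodes each secret pixel as identical or complementary subpixel patterns on two shares, so that stacking the transparencies reveals the message to the eye; the scheme proposed in this record differs in that it works with printed colors and avoids pixel expansion. A sketch of the classical binary scheme, for comparison only:

      import numpy as np

      def make_shares(secret):
          """Classical (2,2) visual cryptography with 1x2 subpixel expansion.

          secret -- 2D array of 0/1 values (1 = black pixel of the secret image)
          Returns two share images, each twice as wide as the secret.
          """
          rng = np.random.default_rng()
          h, w = secret.shape
          share1 = np.zeros((h, 2 * w), dtype=int)
          share2 = np.zeros((h, 2 * w), dtype=int)
          patterns = np.array([[1, 0], [0, 1]])          # the two possible subpixel pairs
          for i in range(h):
              for j in range(w):
                  p = patterns[rng.integers(2)]
                  share1[i, 2 * j:2 * j + 2] = p
                  # White pixel: identical patterns (stack shows one black subpixel).
                  # Black pixel: complementary patterns (stack shows two black subpixels).
                  share2[i, 2 * j:2 * j + 2] = p if secret[i, j] == 0 else 1 - p
          return share1, share2

      def stack(share1, share2):
          """Simulate stacking transparencies: a subpixel is black if black on either share."""
          return np.maximum(share1, share2)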

  2. Models Extracted from Text for System-Software Safety Analyses

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.

    2010-01-01

    This presentation describes extraction and integration of requirements information and safety information in visualizations to support early review of completeness, correctness, and consistency of lengthy and diverse system safety analyses. Software tools have been developed and extended to perform the following tasks: 1) extract model parts and safety information from text in interface requirements documents, failure modes and effects analyses and hazard reports; 2) map and integrate the information to develop system architecture models and visualizations for safety analysts; and 3) provide model output to support virtual system integration testing. This presentation illustrates the methods and products with a rocket motor initiation case.

  3. Better-Than-Visual Technologies for Next Generation Air Transportation System Terminal Maneuvering Area Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Bailey, Randall E.; Shelton, Kevin J.; Jones, Denise R.; Kramer, Lynda J.; Arthur, Jarvis J., III; Williams, Steve P.; Barmore, Bryan E.; Ellis, Kyle E.; Rehfeld, Sherri A.

    2011-01-01

    A consortium of industry, academia and government agencies are devising new concepts for future U.S. aviation operations under the Next Generation Air Transportation System (NextGen). Many key capabilities are being identified to enable NextGen, including the concept of Equivalent Visual Operations (EVO) replicating the capacity and safety of today's visual flight rules (VFR) in all-weather conditions. NASA is striving to develop the technologies and knowledge to enable EVO and to extend EVO towards a Better-Than-Visual (BTV) operational concept. The BTV operational concept uses an electronic means to provide sufficient visual references of the external world and other required flight references on flight deck displays that enable VFR-like operational tempos and maintain and improve the safety of VFR while using VFR-like procedures in all-weather conditions. NASA Langley Research Center (LaRC) research on technologies to enable the concept of BTV is described.

  4. Query2Question: Translating Visualization Interaction into Natural Language.

    PubMed

    Nafari, Maryam; Weaver, Chris

    2015-06-01

    Richly interactive visualization tools are increasingly popular for data exploration and analysis in a wide variety of domains. Existing systems and techniques for recording provenance of interaction focus either on comprehensive automated recording of low-level interaction events or on idiosyncratic manual transcription of high-level analysis activities. In this paper, we present the architecture and translation design of a query-to-question (Q2Q) system that automatically records user interactions and presents them semantically using natural language (written English). Q2Q takes advantage of domain knowledge and uses natural language generation (NLG) techniques to translate and transcribe a progression of interactive visualization states into a visual log of styled text that complements and effectively extends the functionality of visualization tools. We present Q2Q as a means to support a cross-examination process in which questions rather than interactions are the focus of analytic reasoning and action. We describe the architecture and implementation of the Q2Q system, discuss key design factors and variations that affect question generation, and present several visualizations that incorporate Q2Q for analysis in a variety of knowledge domains.
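
    The core translation step can be pictured as template-based natural language generation over a log of interaction events; the sketch below is only illustrative, with hypothetical event types, fields, and question templates rather than the actual Q2Q implementation.

        QUESTION_TEMPLATES = {
            "filter": "Which {entity} records have {attribute} {operator} {value}?",
            "brush":  "What happens to the {entity} records when {attribute} is restricted to {value}?",
            "hover":  "What are the details of the {entity} record '{value}'?",
        }

        def event_to_question(event):
            """Translate one recorded interaction event (a dict) into a question string."""
            template = QUESTION_TEMPLATES.get(event["type"])
            return template.format(**event["params"]) if template else None

        interaction_log = [
            {"type": "filter", "params": {"entity": "flight", "attribute": "delay",
                                          "operator": "greater than", "value": "30 minutes"}},
            {"type": "hover",  "params": {"entity": "flight", "value": "UA1549"}},
        ]
        for question in filter(None, map(event_to_question, interaction_log)):
            print(question)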

  5. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.

    PubMed

    Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (computed tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, and a 3D cursor, and the joystick enabled a fly-through with visualization of the spiculations extending from the breast cancer. The system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be researched further to determine its utility in clinical practice.

  6. Augmentation of Cognition and Perception Through Advanced Synthetic Vision Technology

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Kramer, Lynda J.; Bailey, Randall E.; Arthur, Jarvis J.; Williams, Steve P.; McNabb, Jennifer

    2005-01-01

    Synthetic Vision System technology augments reality and creates a virtual visual meteorological condition that extends a pilot's cognitive and perceptual capabilities during flight operations when outside visibility is restricted. The paper describes the NASA Synthetic Vision System for commercial aviation with an emphasis on how the technology achieves Augmented Cognition objectives.

  7. Essentials of photometry for clinical electrophysiology of vision.

    PubMed

    McCulloch, Daphne L; Hamilton, Ruth

    2010-08-01

    Electrophysiological testing of the visual system requires familiarity with photometry. This technical note outlines the concepts of photometry with a focus on information relevant to clinical ERG and VEP testing. Topics include photometric quantities, consideration of pupil size, specification of brief and extended flash stimuli, and the influence of the spectral composition of visual stimuli. Standard units and terms are explained in the context of the ISCEV standards and guidelines for clinical electrophysiology of vision.
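
    Two of the quantities involved can be illustrated with a short worked example: retinal illuminance in trolands, which folds pupil size into the calculation, and flash strength specified as time-integrated luminance for a brief flash. The values below are illustrative only, not ISCEV standard values.

        import math

        def retinal_illuminance_td(luminance_cd_m2, pupil_diameter_mm):
            """Retinal illuminance (trolands) = luminance (cd/m^2) x pupil area (mm^2)."""
            pupil_area_mm2 = math.pi * (pupil_diameter_mm / 2.0) ** 2
            return luminance_cd_m2 * pupil_area_mm2

        def flash_strength_cd_s_m2(luminance_cd_m2, duration_s):
            """A brief flash is commonly specified by its time-integrated luminance."""
            return luminance_cd_m2 * duration_s

        # The same 100 cd/m^2 stimulus delivers very different retinal illuminance
        # through a 3 mm pupil versus an 8 mm pupil.
        print(retinal_illuminance_td(100, 3), retinal_illuminance_td(100, 8))
        # A 4 ms flash at 750 cd/m^2 corresponds to 3 cd*s/m^2.
        print(flash_strength_cd_s_m2(750, 0.004))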

  8. Applied estimation for hybrid dynamical systems using perceptional information

    NASA Astrophysics Data System (ADS)

    Plotnik, Aaron M.

    This dissertation uses the motivating example of robotic tracking of mobile deep ocean animals to present innovations in robotic perception and estimation for hybrid dynamical systems. An approach to estimation for hybrid systems is presented that utilizes uncertain perceptional information about the system's mode to improve tracking of its mode and continuous states. This results in significant improvements in situations where previously reported methods of estimation for hybrid systems perform poorly due to poor distinguishability of the modes. The specific application that motivates this research is an automatic underwater robotic observation system that follows and films individual deep ocean animals. A first version of such a system has been developed jointly by the Stanford Aerospace Robotics Laboratory and Monterey Bay Aquarium Research Institute (MBARI). This robotic observation system is successfully fielded on MBARI's ROVs, but agile specimens often evade the system. When a human ROV pilot performs this task, one advantage that he has over the robotic observation system in these situations is the ability to use visual perceptional information about the target, immediately recognizing any changes in the specimen's behavior mode. With the approach of the human pilot in mind, a new version of the robotic observation system is proposed which is extended to (a) derive perceptional information (visual cues) about the behavior mode of the tracked specimen, and (b) merge this dissimilar, discrete and uncertain information with more traditional continuous noisy sensor data by extending existing algorithms for hybrid estimation. These performance enhancements are enabled by integrating techniques in hybrid estimation, computer vision and machine learning. First, real-time computer vision and classification algorithms extract a visual observation of the target's behavior mode. Existing hybrid estimation algorithms are extended to admit this uncertain but discrete observation, complementing the information available from more traditional sensors. State tracking is achieved using a new form of Rao-Blackwellized particle filter called the mode-observed Gaussian Particle Filter. Performance is demonstrated using data from simulation and data collected on actual specimens in the ocean. The framework for estimation using both traditional and perceptional information is easily extensible to other stochastic hybrid systems with mode-related perceptional observations available.
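
    The way an uncertain, discrete mode observation can be folded into hybrid state estimation is sketched below: each particle carries a discrete mode and a continuous state, and the visual mode classifier's output enters as an extra likelihood factor during reweighting. This is a simplified illustration under assumed dynamics and classifier accuracies, not the mode-observed Gaussian Particle Filter developed in the dissertation.

        import math
        import random

        MODES = ["drifting", "swimming"]
        MODE_SPEED = {"drifting": 0.0, "swimming": 1.0}        # assumed per-mode dynamics
        P_OBS_GIVEN_TRUE = {("drifting", "drifting"): 0.8,     # assumed classifier confusion matrix,
                            ("swimming", "drifting"): 0.2,     # P(observed mode | true mode)
                            ("swimming", "swimming"): 0.8,
                            ("drifting", "swimming"): 0.2}

        def pf_step(particles, range_meas, observed_mode, meas_std=0.5):
            """particles: list of (mode, position). Returns a resampled particle set."""
            propagated, weights = [], []
            for mode, pos in particles:
                if random.random() < 0.05:                     # small probability of a mode switch
                    mode = random.choice(MODES)
                pos += MODE_SPEED[mode] + random.gauss(0.0, 0.1)                # propagate continuous state
                w_cont = math.exp(-0.5 * ((range_meas - pos) / meas_std) ** 2)  # continuous sensor likelihood
                w_mode = P_OBS_GIVEN_TRUE[(observed_mode, mode)]                # visual mode-cue likelihood
                propagated.append((mode, pos))
                weights.append(w_cont * w_mode)
            return random.choices(propagated, weights=weights, k=len(propagated))

        particles = [(random.choice(MODES), random.gauss(0.0, 1.0)) for _ in range(200)]
        particles = pf_step(particles, range_meas=1.2, observed_mode="swimming")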

  9. A high-quality high-fidelity visualization of the September 11 attack on the World Trade Center.

    PubMed

    Rosen, Paul; Popescu, Voicu; Hoffmann, Christoph; Irfanoglu, Ayhan

    2008-01-01

    In this application paper, we describe the efforts of a multidisciplinary team towards producing a visualization of the September 11 Attack on the North Tower of New York's World Trade Center. The visualization was designed to meet two requirements. First, the visualization had to depict the impact with high fidelity, by closely following the laws of physics. Second, the visualization had to be eloquent to a nonexpert user. This was achieved by first designing and computing a finite-element analysis (FEA) simulation of the impact between the aircraft and the top 20 stories of the building, and then by visualizing the FEA results with a state-of-the-art commercial animation system. The visualization was enabled by an automatic translator that converts the simulation data into an animation system 3D scene. We built upon a previously developed translator. The translator was substantially extended to enable and control visualization of fire and of disintegrating elements, to better scale with the number of nodes and number of states, to handle beam elements with complex profiles, and to handle smoothed particle hydrodynamics liquid representation. The resulting translator is a powerful automatic and scalable tool for high-quality visualization of FEA results.

  10. The Simplest Chronoscope V: A Theory of Dual Primary and Secondary Reaction Time Systems.

    PubMed

    Montare, Alberto

    2016-12-01

    Extending work by Montare, visual simple reaction time, choice reaction time, discriminative reaction time, and overall reaction time scores obtained from college students by the simplest chronoscope (a falling meterstick) method were significantly faster as well as significantly less variable than scores of the same individuals from electromechanical reaction timers (machine method). Results supported the existence of dual reaction time systems: an ancient primary reaction time system theoretically activating the V5 parietal area of the dorsal visual stream that evolved to process significantly faster sensory-motor reactions to sudden stimulations arising from environmental objects in motion, and a secondary reaction time system theoretically activating the V4 temporal area of the ventral visual stream that subsequently evolved to process significantly slower sensory-perceptual-motor reactions to sudden stimulations arising from motionless colored objects. © The Author(s) 2016.

  11. Primary Central Nervous System Lymphoma of Optic Chiasma: Endoscopic Endonasal Treatment.

    PubMed

    Ozdemir, Evin Singar; Yildirim, Ali Erdem; Can, Aslihan Yavas

    2018-01-01

    Isolated primary central nervous system lymphoma arising from the anterior visual pathway is very rare. A 76-year-old immunocompetent, previously healthy man presented with bilaterally decreased visual acuity of 1 month's duration. Pituitary magnetic resonance imaging (MRI) showed a lobulated mass arising from the optic chiasm with homogeneous enhancement after gadolinium administration, suggesting an inflammatory disease or an optic glioma. The patient underwent extended endoscopic endonasal transsphenoidal surgery. The postoperative course and outcomes were excellent. The histopathological diagnosis was diffuse large B-cell lymphoma. Investigations for systemic lymphomatous involvement did not detect any evidence of systemic disease. In this case, differential diagnosis of anterior visual pathway lesions was difficult because such lesions appear similar on clinical and radiological examination. Biopsy is essential for these lesions. As a biopsy technique, the endoscopic endonasal transsphenoidal approach is safer and more effective than open procedures.

  12. Musicians have enhanced audiovisual multisensory binding: experience-dependent effects in the double-flash illusion.

    PubMed

    Bidelman, Gavin M

    2016-10-01

    Musical training is associated with behavioral and neurophysiological enhancements in auditory processing for both musical and nonmusical sounds (e.g., speech). Yet, whether the benefits of musicianship extend beyond enhancements to auditory-specific skills and impact multisensory (e.g., audiovisual) processing has yet to be fully validated. Here, we investigated multisensory integration of auditory and visual information in musicians and nonmusicians using a double-flash illusion, whereby the presentation of multiple auditory stimuli (beeps) concurrent with a single visual object (flash) induces an illusory perception of multiple flashes. We parametrically varied the onset asynchrony between auditory and visual events (leads and lags of ±300 ms) to quantify participants' "temporal window" of integration, i.e., stimuli in which auditory and visual cues were fused into a single percept. Results show that musically trained individuals were both faster and more accurate at processing concurrent audiovisual cues than their nonmusician peers; nonmusicians had a higher susceptibility for responding to audiovisual illusions and perceived double flashes over an extended range of onset asynchronies compared to trained musicians. Moreover, temporal window estimates indicated that musicians' windows (<100 ms) were ~2-3× shorter than nonmusicians' (~200 ms), suggesting more refined multisensory integration and audiovisual binding. Collectively, findings indicate a more refined binding of auditory and visual cues in musically trained individuals. We conclude that experience-dependent plasticity of intensive musical experience extends beyond simple listening skills, improving multimodal processing and the integration of multiple sensory systems in a domain-general manner.

  13. Knowledge Visualizations: A Tool to Achieve Optimized Operational Decision Making and Data Integration

    DTIC Science & Technology

    2015-06-01

    (Only fragments of the thesis text are available in this record.) June 2015 thesis by Paul C. Hudson and Jeffrey A. Rzasa: "Knowledge Visualizations: A Tool to Achieve Optimized Operational Decision Making and Data Integration." The recovered fragments mention the Hadoop Distributed File System (HDFS), Accumulo-based knowledge stores using OWL/RDF, cloud-based approaches built on Apache software, and a cited reference: Godin, A. & Akins, D. (2014), Extending DCGS-N naval tactical clouds from in-storage to in-memory for the integrated fires capability.

  14. cellVIEW: a Tool for Illustrative and Multi-Scale Rendering of Large Biomolecular Datasets

    PubMed Central

    Le Muzic, Mathieu; Autin, Ludovic; Parulek, Julius; Viola, Ivan

    2017-01-01

    In this article we introduce cellVIEW, a new system to interactively visualize large biomolecular datasets at the atomic level. Our tool is unique and has been specifically designed to match the ambitions of our domain experts to model and interactively visualize structures comprising several billion atoms. The cellVIEW system integrates acceleration techniques to allow real-time graphics performance at a 60 Hz display rate on datasets representing large viruses and bacterial organisms. Inspired by the work of scientific illustrators, we propose a level-of-detail scheme whose purpose is twofold: accelerating the rendering and reducing visual clutter. The main part of our datasets consists of macromolecules, but they also comprise nucleic acid strands stored as sets of control points. For that specific case, we extend our rendering method to support the dynamic generation of DNA strands directly on the GPU. It is noteworthy that our tool has been implemented directly inside a game engine. We chose to rely on a third-party engine to reduce the software development workload and to make bleeding-edge graphics techniques more accessible to end users. To our knowledge, cellVIEW is the only suitable solution for interactive visualization of large biomolecular landscapes at the atomic level, and it is freely available to use and extend. PMID:29291131
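
    The rendering side of such a scheme can be pictured as a simple distance-based switch between representations; the thresholds and level names below are illustrative assumptions, not the actual cellVIEW level-of-detail scheme.

        LOD_LEVELS = [
            (25.0, "atoms"),              # close to the camera: draw every atom
            (100.0, "beads"),             # mid range: one bead per residue or coarse cluster
            (float("inf"), "impostor"),   # far away: a single billboard per molecule
        ]

        def select_lod(camera_distance):
            """Pick the cheapest representation that is still acceptable at this distance."""
            for threshold, level in LOD_LEVELS:
                if camera_distance < threshold:
                    return level

        for distance in (5.0, 60.0, 400.0):
            print(distance, "->", select_lod(distance))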

  15. Airflow Hazard Visualization for Helicopter Pilots: Flight Simulation Study Results

    NASA Technical Reports Server (NTRS)

    Aragon, Cecilia R.; Long, Kurtis R.

    2005-01-01

    Airflow hazards such as vortices or low level wind shear have been identified as a primary contributing factor in many helicopter accidents. US Navy ships generate airwakes over their decks, creating potentially hazardous conditions for shipboard rotorcraft launch and recovery. Recent sensor developments may enable the delivery of airwake data to the cockpit, where visualizing the hazard data may improve safety and possibly extend ship/helicopter operational envelopes. A prototype flight-deck airflow hazard visualization system was implemented on a high-fidelity rotorcraft flight dynamics simulator. Experienced helicopter pilots, including pilots from all five branches of the military, participated in a usability study of the system. Data was collected both objectively from the simulator and subjectively from post-test questionnaires. Results of the data analysis are presented, demonstrating a reduction in crash rate and other trends that illustrate the potential of airflow hazard visualization to improve flight safety.

  16. User Centered, Application Independent Visualization of National Airspace Data

    NASA Technical Reports Server (NTRS)

    Murphy, James R.; Hinton, Susan E.

    2011-01-01

    This paper describes an application-independent software tool, IV4D, built to visualize animated and still 3D National Airspace System (NAS) data specifically for aeronautics engineers who research aggregate, as well as single, flight efficiencies and behavior. IV4D was originally developed in a joint effort between the National Aeronautics and Space Administration (NASA) and the Air Force Research Laboratory (AFRL) to support the visualization of air traffic data from the Airspace Concept Evaluation System (ACES) simulation program. The three main challenges tackled by IV4D developers were: 1) determining how to distill multiple NASA data formats into a few minimal dataset types; 2) creating an environment, consisting of a user interface, heuristic algorithms, and retained metadata, that facilitates easy setup and fast visualization; and 3) maximizing the user's ability to utilize the extended range of visualization available with AFRL's existing 3D technologies. IV4D is currently being used by air traffic management researchers at NASA's Ames and Langley Research Centers to support data visualizations.

  17. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data

    PubMed Central

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-01-01

    Background Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. Results We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. Conclusion MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine. PMID:17937818

  18. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data.

    PubMed

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-10-15

    Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.

  19. Precise visual navigation using multi-stereo vision and landmark matching

    NASA Astrophysics Data System (ADS)

    Zhu, Zhiwei; Oskiper, Taragay; Samarasekera, Supun; Kumar, Rakesh

    2007-04-01

    Traditional vision-based navigation systems often drift over time during navigation. In this paper, we propose a set of techniques which greatly reduce the long-term drift and also improve robustness to many failure conditions. In our approach, two pairs of stereo cameras are integrated to form a forward/backward multi-stereo camera system. As a result, the field of view of the system is extended significantly to capture more natural landmarks from the scene. This helps to increase the pose estimation accuracy as well as reduce failure situations. Secondly, a global landmark matching technique is used to recognize previously visited locations during navigation. Using the matched landmarks, a pose correction technique is used to eliminate the accumulated navigation drift. Finally, in order to further improve the robustness of the system, measurements from low-cost Inertial Measurement Unit (IMU) and Global Positioning System (GPS) sensors are integrated with the visual odometry in an extended Kalman filtering framework. Our system is significantly more accurate and robust than previously published techniques (1%-5% localization error) over long-distance navigation both indoors and outdoors. Real-world experiments on a human-worn system show that the location can be estimated within 1 meter over 500 meters (around 0.1% localization error on average) without the use of GPS information.
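
    The benefit of occasional absolute fixes (matched landmarks or GPS) on top of drifting odometry can be illustrated with a one-dimensional toy filter: odometry increments are integrated in a predict step, and an absolute position fix is folded in with a Kalman update. This sketch only illustrates the idea and is not the paper's multi-sensor extended Kalman filter.

        def predict(x, p, odom_delta, odom_var):
            """Integrate one odometry increment; uncertainty (and drift) grow each step."""
            return x + odom_delta, p + odom_var

        def update(x, p, fix, fix_var):
            """Fold in an absolute position fix; uncertainty shrinks after the correction."""
            k = p / (p + fix_var)                 # Kalman gain
            return x + k * (fix - x), (1.0 - k) * p

        x, p = 0.0, 0.0
        true_pos = 0.0
        for step in range(1, 501):
            true_pos += 1.0
            x, p = predict(x, p, odom_delta=1.002, odom_var=0.01)   # odometry with a 0.2% bias
            if step % 100 == 0:                                     # occasional absolute fix
                x, p = update(x, p, fix=true_pos, fix_var=0.25)
        print(f"uncorrected drift: {0.002 * 500:.2f} m, corrected error: {abs(x - true_pos):.2f} m")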

  20. A prototype feature system for feature retrieval using relationships

    USGS Publications Warehouse

    Choi, J.; Usery, E.L.

    2009-01-01

    Using a feature data model, geographic phenomena can be represented effectively by integrating space, theme, and time. This paper extends and implements a feature data model that supports query and visualization of geographic features using their non-spatial and temporal relationships. A prototype feature-oriented geographic information system (FOGIS) is then developed and storage of features named Feature Database is designed. Buildings from the U.S. Marine Corps Base, Camp Lejeune, North Carolina and subways in Chicago, Illinois are used to test the developed system. The results of the applications show the strength of the feature data model and the developed system 'FOGIS' when they utilize non-spatial and temporal relationships in order to retrieve and visualize individual features.

  1. Federated Giovanni: A Distributed Web Service for Analysis and Visualization of Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Lynnes, Chris

    2014-01-01

    The Geospatial Interactive Online Visualization and Analysis Interface (Giovanni) is a popular tool for users of the Goddard Earth Sciences Data and Information Services Center (GES DISC) and has been in use for over a decade. It provides a wide variety of algorithms and visualizations to explore large remote sensing datasets without having to download the data and without having to write readers and visualizers for it. Giovanni is now being extended to enable its capabilities at other data centers within the Earth Observing System Data and Information System (EOSDIS). This Federated Giovanni will allow four other data centers to add and maintain their data within Giovanni on behalf of their user community. Those data centers are the Physical Oceanography Distributed Active Archive Center (PO.DAAC), MODIS Adaptive Processing System (MODAPS), Ocean Biology Processing Group (OBPG), and Land Processes Distributed Active Archive Center (LP DAAC). Three tiers are supported: Tier 1 (GES DISC-hosted) gives the remote data center a data management interface to add and maintain data, which are provided through the Giovanni instance at the GES DISC. Tier 2 packages Giovanni up as a virtual machine for distribution to and deployment by the other data centers. Data variables are shared among data centers by sharing documents from the Solr database that underpins Giovanni's data management capabilities. However, each data center maintains their own instance of Giovanni, exposing the variables of most interest to their user community. Tier 3 is a Shared Source model, in which the data centers cooperate to extend the infrastructure by contributing source code.

  2. Visual System Neural Responses to Laser Exposure from Local Q-Switched Pulses and Extended Source CW Speckle Patterns.

    DTIC Science & Technology

    1985-09-30

    (Only fragments of this report are available in this record; report-form fields have been removed.) Subject terms include retinal damage, center-surround, laser injury, cat retina, visual perception, and anesthesia. The recovered abstract fragments discuss delayed, long-term effects on the layers of the retina, as seen in retinitis pigmentosa (Wolbarsht & Landers, 1980; Stefansson et al., 1981a), and reports of retinal damage from exposure to short-pulse laser energy.

  3. Two memories for geographical slant: separation and interdependence of action and awareness

    NASA Technical Reports Server (NTRS)

    Creem, S. H.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)

    1998-01-01

    The present study extended previous findings of geographical slant perception, in which verbal judgments of the incline of hills were greatly overestimated but motoric (haptic) adjustments were much more accurate. In judging slant from memory following a brief or extended time delay, subjects' verbal judgments were greater than those given when viewing hills. Motoric estimates differed depending on the length of the delay and place of response. With a short delay, motoric adjustments made in the proximity of the hill did not differ from those evoked during perception. When given a longer delay or when taken away from the hill, subjects' motoric responses increased along with the increase in verbal reports. These results suggest two different memorial influences on action. With a short delay at the hill, memory for visual guidance is separate from the explicit memory informing the conscious response. With short or long delays away from the hill, short-term visual guidance memory no longer persists, and both motor and verbal responses are driven by an explicit representation. These results support recent research involving visual guidance from memory, where actions become influenced by conscious awareness, and provide evidence for communication between the "what" and "how" visual processing systems.

  4. Two Adults with Multiple Disabilities Use a Computer-Aided Telephone System to Make Phone Calls Independently

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; O'Reilly, Mark F.; Singh, Nirbhay N.; Sigafoos, Jeff; Oliva, Doretta; Alberti, Gloria; Lang, Russell

    2011-01-01

    This study extended the assessment of a newly developed computer-aided telephone system with two participants (adults) who presented with blindness or severe visual impairment and motor or motor and intellectual disabilities. For each participant, the study was carried out according to an ABAB design, in which the A represented baseline phases and…

  5. Visual acuity and quality of life in dry eye disease: Proceedings of the OCEAN group meeting.

    PubMed

    Benítez-Del-Castillo, José; Labetoulle, Marc; Baudouin, Christophe; Rolando, Maurizio; Akova, Yonca A; Aragona, Pasquale; Geerling, Gerd; Merayo-Lloves, Jesús; Messmer, Elisabeth M; Boboridis, Kostas

    2017-04-01

    Dry eye disease (DED) results in tear film instability and hyperosmolarity, inflammation of the ocular surface and, ultimately, visual disturbance that can significantly impact a patient's quality of life. The effects on visual acuity result in difficulties with driving, reading and computer use and negatively impact psychological health. These effects also extend to the workplace, with a loss of productivity and quality of work causing substantial economic losses. The effects of DED and the impact on vision experienced by patients may not be given sufficient importance by ophthalmologists. Functional visual acuity (FVA) is a measure of visual acuity after sustained eye opening without blinking for at least 10 s and mimics the sustained visual acuity of daily life. Measuring dynamic FVA allows the detection of impaired visual function in patients with DED who may display normal conventional visual acuity. There are currently several tests and methods that can be used to measure dynamic visual function: the SSC-350 FVA measurement system, assessment of best-corrected visual acuity decay using the interblink visual acuity decay test, serial measurements of ocular and corneal higher order aberrations, and measurement of dynamic vision quality using the Optical Quality Analysis System. Although the equipment for these methods may be too large or unaffordable for use in clinical practice, FVA testing is an important assessment for DED. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. The contribution of foveal and peripheral visual information to ensemble representation of face race.

    PubMed

    Jung, Wonmo; Bülthoff, Isabelle; Armann, Regine G M

    2017-11-01

    The brain can only attend to a fraction of all the information that is entering the visual system at any given moment. One way of overcoming the so-called bottleneck of selective attention (e.g., J. M. Wolfe, Võ, Evans, & Greene, 2011) is to make use of redundant visual information and extract summarized statistical information of the whole visual scene. Such ensemble representation occurs for low-level features of textures or simple objects, but it has also been reported for complex high-level properties. While the visual system has, for example, been shown to compute summary representations of facial expression, gender, or identity, it is less clear whether perceptual input from all parts of the visual field contributes equally to the ensemble percept. Here we extend the line of ensemble-representation research into the realm of race and look at the possibility that ensemble perception relies on weighting visual information differently depending on its origin from either the fovea or the visual periphery. We find that observers can judge the mean race of a set of faces, similar to judgments of mean emotion from faces and ensemble representations in low-level domains of visual processing. We also find that while peripheral faces seem to be taken into account for the ensemble percept, far more weight is given to stimuli presented foveally than peripherally. Whether this precision weighting of information stems from differences in the accuracy with which the visual system processes information across the visual field or from statistical inferences about the world needs to be determined by further research.
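
    The weighting result can be pictured as a precision-weighted average over the face set, with foveal items contributing more than peripheral ones; the weights and morph values below are illustrative assumptions, not parameters estimated in the study.

        def ensemble_mean(faces, w_fovea=0.75, w_periphery=0.25):
            """faces: list of (morph_level, location), with morph_level in [0, 1] along a race morph continuum."""
            weights = [w_fovea if loc == "fovea" else w_periphery for _, loc in faces]
            return sum(level * w for (level, _), w in zip(faces, weights)) / sum(weights)

        faces = [(0.2, "fovea"), (0.3, "fovea"), (0.8, "periphery"), (0.9, "periphery")]
        print(ensemble_mean(faces))                            # pulled toward the foveal items
        print(sum(level for level, _ in faces) / len(faces))   # unweighted mean, for comparison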

  7. Identifying and individuating cognitive systems: a task-based distributed cognition alternative to agent-based extended cognition.

    PubMed

    Davies, Jim; Michaelian, Kourken

    2016-08-01

    This article argues for a task-based approach to identifying and individuating cognitive systems. The agent-based extended cognition approach faces a problem of cognitive bloat and has difficulty accommodating both sub-individual cognitive systems ("scaling down") and some supra-individual cognitive systems ("scaling up"). The standard distributed cognition approach can accommodate a wider variety of supra-individual systems but likewise has difficulties with sub-individual systems and faces the problem of cognitive bloat. We develop a task-based variant of distributed cognition designed to scale up and down smoothly while providing a principled means of avoiding cognitive bloat. The advantages of the task-based approach are illustrated by means of two parallel case studies: re-representation in the human visual system and in a biomedical engineering laboratory.

  8. Discovering and visualizing indirect associations between biomedical concepts

    PubMed Central

    Tsuruoka, Yoshimasa; Miwa, Makoto; Hamamoto, Kaisei; Tsujii, Jun'ichi; Ananiadou, Sophia

    2011-01-01

    Motivation: Discovering useful associations between biomedical concepts has been one of the main goals in biomedical text-mining, and understanding their biomedical contexts is crucial in the discovery process. Hence, we need a text-mining system that helps users explore various types of (possibly hidden) associations in an easy and comprehensible manner. Results: This article describes FACTA+, a real-time text-mining system for finding and visualizing indirect associations between biomedical concepts from MEDLINE abstracts. The system can be used as a text search engine like PubMed with additional features to help users discover and visualize indirect associations between important biomedical concepts such as genes, diseases and chemical compounds. FACTA+ inherits all functionality from its predecessor, FACTA, and extends it by incorporating three new features: (i) detecting biomolecular events in text using a machine learning model, (ii) discovering hidden associations using co-occurrence statistics between concepts, and (iii) visualizing associations to improve the interpretability of the output. To the best of our knowledge, FACTA+ is the first real-time web application that offers the functionality of finding concepts involving biomolecular events and visualizing indirect associations of concepts with both their categories and importance. Availability: FACTA+ is available as a web application at http://refine1-nactem.mc.man.ac.uk/facta/, and its visualizer is available at http://refine1-nactem.mc.man.ac.uk/facta-visualizer/. Contact: tsuruoka@jaist.ac.jp PMID:21685059
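
    The co-occurrence idea behind hidden associations can be sketched as scoring two concepts by the support they share through intermediate concepts across documents; the tiny corpus and scoring function below are illustrative only, not FACTA+'s actual statistics.

        from collections import defaultdict
        from itertools import combinations

        documents = [
            {"geneX", "inflammation"},
            {"inflammation", "diseaseY"},
            {"geneX", "oxidative stress"},
            {"oxidative stress", "diseaseY"},
            {"geneX", "diseaseY"},                       # a single direct co-occurrence
        ]

        cooccurrence = defaultdict(int)
        for doc in documents:
            for a, b in combinations(sorted(doc), 2):
                cooccurrence[(a, b)] += 1

        def co(a, b):
            return cooccurrence[tuple(sorted((a, b)))]

        def indirect_score(a, c, concepts):
            """Sum the co-occurrence support routed through every intermediate concept b."""
            return sum(min(co(a, b), co(b, c)) for b in concepts if b not in (a, c))

        concepts = set().union(*documents)
        print("direct support:  ", co("geneX", "diseaseY"))
        print("indirect support:", indirect_score("geneX", "diseaseY", concepts))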

  9. Supplement to photographic catalog of selected planetary size comparisons

    NASA Technical Reports Server (NTRS)

    Meszaros, Stephen Paul

    1991-01-01

    This document updates and extends the photographic catalog of selected planetary size comparisons. It utilizes photographs taken by NASA spacecraft to illustrate size comparisons of planets and moons of the solar system. Global views are depicted at the same scale, within each comparison, allowing size relationships to be studied visually.

  10. The Next Generation of Ground Operations Command and Control; Scripting in C Sharp and Visual Basic

    NASA Technical Reports Server (NTRS)

    Ritter, George; Pedoto, Ramon

    2010-01-01

    This slide presentation reviews the use of scripting languages in Ground Operations Command and Control. It describes the use of scripting languages in a historical context and the advantages and disadvantages of scripts. It describes the Enhanced and Redesigned Scripting (ERS) language, which was designed to combine the graphical and IDE richness of a programming language with the utility of scripting languages. ERS uses the Microsoft Visual Studio programming environment and offers custom controls that enable an ERS developer to extend the Visual Basic and C sharp language interface with the Payload Operations Integration Center (POIC) telemetry and command system.

  11. Extending helicopter operations to meet future integrated transportation needs.

    PubMed

    Stanton, Neville A; Plant, Katherine L; Roberts, Aaron P; Harvey, Catherine; Thomas, T Glyn

    2016-03-01

    Helicopters have the potential to be an integral part of the future transport system. They offer a means of rapid transit in an overly populated transport environment. However, one of the biggest limitations on rotary-wing flight is the inability to fly in degraded visual conditions during the critical phases of approach and landing. This paper presents a study that developed and evaluated a head-up display (HUD) to assist rotary-wing pilots by extending landing to degraded visual conditions. The HUD was developed with the assistance of the Cognitive Work Analysis method as an approach for analysing the cognitive work of landing the helicopter. The HUD was tested in a fixed-base flight simulator with qualified helicopter pilots. A qualitative analysis to assess situation awareness and workload found that the HUD enabled safe landing in degraded conditions whilst simultaneously enhancing situation awareness and reducing workload. Continued development in this area has the potential to extend the operational capability of helicopters in the future. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  12. Intraocular methotrexate can induce extended remission in some patients in noninfectious uveitis.

    PubMed

    Taylor, Simon R J; Banker, Alay; Schlaen, Ariel; Couto, Cristobal; Matthe, Egbert; Joshi, Lavnish; Menezo, Victor; Nguyen, Ethan; Tomkins-Netzer, Oren; Bar, Asaf; Morarji, Jiten; McCluskey, Peter; Lightman, Sue

    2013-01-01

    To assess the outcomes of the intravitreal administration of methotrexate in uveitis. Multicenter, retrospective interventional case series of patients with noninfectious uveitis. Thirty-eight eyes of 30 patients were enrolled, including a total of 54 intravitreal injections of methotrexate at a dose of 400 µg in 0.1 mL. The primary outcome measure was visual acuity. Secondary outcome measures included control of intraocular inflammation and cystoid macular edema, time to relapse, development of adverse events, and levels of systemic corticosteroid and immunosuppressive therapy. Methotrexate proved effective in controlling intraocular inflammation and improving vision in 30 of 38 eyes (79%). The side effect profile was good, with no reported serious ocular adverse events and only one patient having an intraocular pressure of >21 mmHg. Of the 30 eyes that responded to treatment, 8 relapsed, but 22 (73%) entered an extended period of remission, with the Kaplan-Meier estimate of median time to relapse for the whole group being 17 months. The eight eyes that relapsed were reinjected and all responded to treatment. One eye relapsed at 3 months, but 7 eyes again entered extended remission. Of the 14 patients on systemic therapy at the start of the study, 8 (57%) were able to significantly reduce this following intravitreal methotrexate injection. In patients with uveitis and uveitic cystoid macular edema, intravitreal MTX can effectively improve visual acuity and reduce cystoid macular edema and, in some patients, allows the reduction of immunosuppressive therapy. Some patients relapse at 3 to 4 months, but a large proportion (73%) enter an extended period of remission of up to 18 months. This larger study extends the results obtained from previous smaller studies suggesting the viability of intravitreal methotrexate as a treatment option in uveitis.

  13. Vision in the dimmest habitats on earth.

    PubMed

    Warrant, Eric

    2004-10-01

    A very large proportion of the world's animal species are active in dim light, either under the cover of night or in the depths of the sea. The worlds they see can be dim and extended, with light reaching the eyes from all directions at once, or they can be composed of bright point sources, like the multitudes of stars seen in a clear night sky or the rare sparks of bioluminescence that are visible in the deep sea. The eye designs of nocturnal and deep-sea animals have evolved in response to these two very different types of habitats, being optimised for maximum sensitivity to extended scenes, or to point sources, or to both. After describing the many visual adaptations that have evolved across the animal kingdom for maximising sensitivity to extended and point-source scenes, I then use case studies from the recent literature to show how these adaptations have endowed nocturnal animals with excellent vision. Nocturnal animals can see colour and negotiate dimly illuminated obstacles during flight. They can also navigate using learned terrestrial landmarks, the constellations of stars or the dim pattern of polarised light formed around the moon. The conclusion from these studies is clear: nocturnal habitats are just as rich in visual details as diurnal habitats are, and nocturnal animals have evolved visual systems capable of exploiting them. The same is certainly true of deep-sea animals, as future research will no doubt reveal.

  14. Increased Content Knowledge of Students with Visual Impairments as a Result of Extended Descriptions

    ERIC Educational Resources Information Center

    Ely, Richard; Emerson, Robert Wall; Maggiore, Theresa; Rothberg, Madeleine; O'Connell, Trisha; Hudson, Laurel

    2006-01-01

    The National Center for Accessible Media has developed a technology and protocol for inserting extended, enhanced descriptions of visually based concepts into artificially paused digital video. These "eDescriptions" describe material not fully explained by a narrator and provide analogies and explanation specifically designed for…

  15. Three-Dimensional Visualization with Large Data Sets: A Simulation of Spreading Cortical Depression in Human Brain

    PubMed Central

    Ertürk, Korhan Levent; Şengül, Gökhan

    2012-01-01

    We developed 3D simulation software for human organs and tissues, together with a database to store the related data, a data management system to manage the created data, and a metadata system. This approach provides two benefits: first, the developed system does not need to keep the patient's or subject's medical images on the system, reducing memory usage; second, the system provides 3D simulation and modification options, giving clinicians the tools needed for visualization and modification operations. The developed system was tested in a case study in which a 3D human brain model was created and simulated from 2D MRI images of a human brain, and we extended the 3D model to include the spreading cortical depression (SCD) wave front, an electrical phenomenon that is believed to cause migraine. PMID:23258956

  16. Preprocessing of emotional visual information in the human piriform cortex.

    PubMed

    Schulze, Patrick; Bestgen, Anne-Kathrin; Lech, Robert K; Kuchinke, Lars; Suchan, Boris

    2017-08-23

    This study examines the processing of visual information by the olfactory system in humans. Recent data point to the processing of visual stimuli by the piriform cortex, a region mainly known as part of the primary olfactory cortex. Moreover, the piriform cortex generates predictive templates of olfactory stimuli to facilitate olfactory processing. This study fills the gap relating to the question whether this region is also capable of preprocessing emotional visual information. To gain insight into the preprocessing and transfer of emotional visual information into olfactory processing, we recorded hemodynamic responses during affective priming using functional magnetic resonance imaging (fMRI). Odors of different valence (pleasant, neutral and unpleasant) were primed by images of emotional facial expressions (happy, neutral and disgust). Our findings are the first to demonstrate that the piriform cortex preprocesses emotional visual information prior to any olfactory stimulation and that the emotional connotation of this preprocessing is subsequently transferred and integrated into an extended olfactory network for olfactory processing.

  17. How visualization layout relates to locus of control and other personality factors.

    PubMed

    Ziemkiewicz, Caroline; Ottley, Alvitta; Crouser, R Jordan; Yauilla, Ashley Rye; Su, Sara L; Ribarsky, William; Chang, Remco

    2013-07-01

    Existing research suggests that individual personality differences are correlated with a user's speed and accuracy in solving problems with different types of complex visualization systems. We extend this research by isolating factors in personality traits as well as in the visualizations that could have contributed to the observed correlation. We focus on a personality trait known as "locus of control" (LOC), which represents a person's tendency to see themselves as controlled by or in control of external events. To isolate variables of the visualization design, we control extraneous factors such as color, interaction, and labeling. We conduct a user study with four visualizations that gradually shift from a list metaphor to a containment metaphor and compare the participants' speed, accuracy, and preference with their locus of control and other personality factors. Our findings demonstrate that there is indeed a correlation between the two: participants with an internal locus of control perform more poorly with visualizations that employ a containment metaphor, while those with an external locus of control perform well with such visualizations. These results provide evidence for the externalization theory of visualization. Finally, we propose applications of these findings to adaptive visual analytics and visualization evaluation.

  18. Method and system for providing autonomous control of a platform

    NASA Technical Reports Server (NTRS)

    Seelinger, Michael J. (Inventor); Yoder, John-David (Inventor)

    2012-01-01

    The present application provides a system for enabling instrument placement from distances on the order of five meters, for example, and increases accuracy of the instrument placement relative to visually-specified targets. The system provides precision control of a mobile base of a rover and onboard manipulators (e.g., robotic arms) relative to a visually-specified target using one or more sets of cameras. The system automatically compensates for wheel slippage and kinematic inaccuracy ensuring accurate placement (on the order of 2 mm, for example) of the instrument relative to the target. The system provides the ability for autonomous instrument placement by controlling both the base of the rover and the onboard manipulator using a single set of cameras. To extend the distance from which the placement can be completed to nearly five meters, target information may be transferred from navigation cameras (used for long-range) to front hazard cameras (used for positioning the manipulator).

  19. Vision in two cyprinid fish: implications for collective behavior

    PubMed Central

    Moore, Bret A.; Tyrrell, Luke P.; Fernández-Juricic, Esteban

    2015-01-01

    Many species of fish rely on their visual systems to interact with conspecifics and these interactions can lead to collective behavior. Individual-based models have been used to predict collective interactions; however, these models generally make simplistic assumptions about the sensory systems that are applied without proper empirical testing to different species. This could limit our ability to predict (and test empirically) collective behavior in species with very different sensory requirements. In this study, we characterized components of the visual system in two species of cyprinid fish known to engage in visually dependent collective interactions (zebrafish Danio rerio and golden shiner Notemigonus crysoleucas) and derived quantitative predictions about the positioning of individuals within schools. We found that both species had relatively narrow binocular and blind fields and wide visual coverage. However, golden shiners had more visual coverage in the vertical plane (binocular field extending behind the head) and higher visual acuity than zebrafish. The centers of acute vision (areae) of both species projected in the fronto-dorsal region of the visual field, but those of the zebrafish projected more dorsally than those of the golden shiner. Based on this visual sensory information, we predicted that: (a) predator detection time could be increased by >1,000% in zebrafish and >100% in golden shiners with an increase in nearest neighbor distance, (b) zebrafish schools would have a higher roughness value (surface area/volume ratio) than those of golden shiners, (c) and that nearest neighbor distance would vary from 8 to 20 cm to visually resolve conspecific striping patterns in both species. Overall, considering between-species differences in the sensory system of species exhibiting collective behavior could change the predictions about the positioning of individuals in the group as well as the shape of the school, which can have implications for group cohesion. We suggest that more effort should be invested in assessing the role of the sensory system in shaping local interactions driving collective behavior. PMID:26290783

  20. Visuomotor Transformation in the Fly Gaze Stabilization System

    PubMed Central

    Huston, Stephen J; Krapp, Holger G

    2008-01-01

    For sensory signals to control an animal's behavior, they must first be transformed into a format appropriate for use by its motor systems. This fundamental problem is faced by all animals, including humans. Beyond simple reflexes, little is known about how such sensorimotor transformations take place. Here we describe how the outputs of a well-characterized population of fly visual interneurons, lobula plate tangential cells (LPTCs), are used by the animal's gaze-stabilizing neck motor system. The LPTCs respond to visual input arising from both self-rotations and translations of the fly. The neck motor system however is involved in gaze stabilization and thus mainly controls compensatory head rotations. We investigated how the neck motor system is able to selectively extract rotation information from the mixed responses of the LPTCs. We recorded extracellularly from fly neck motor neurons (NMNs) and mapped the directional preferences across their extended visual receptive fields. Our results suggest that—like the tangential cells—NMNs are tuned to panoramic retinal image shifts, or optic flow fields, which occur when the fly rotates about particular body axes. In many cases, tangential cells and motor neurons appear to be tuned to similar axes of rotation, resulting in a correlation between the coordinate systems the two neural populations employ. However, in contrast to the primarily monocular receptive fields of the tangential cells, most NMNs are sensitive to visual motion presented to either eye. This results in the NMNs being more selective for rotation than the LPTCs. Thus, the neck motor system increases its rotation selectivity by a comparatively simple mechanism: the integration of binocular visual motion information. PMID:18651791

  1. Traversing Time and Space from the Blessing Window

    NASA Astrophysics Data System (ADS)

    Huang, Ya-Ling

    2013-02-01

    The visual graphics for the holographic artwork "Blessing Window" were created from observations of Tainan city, with a focus on the beauty of Chinese characters and their typography. The concept of movement in the artwork comes from a traditional Chinese philosophy: "When the mountain does not move, the road extends; when the road does not extend to the destination, the heart will extend". One multiplex hologram and an interactive installation were used to combine the visual concepts of typography and this philosophy.

  2. A new spherical scanning system for infrared reflectography of paintings

    NASA Astrophysics Data System (ADS)

    Gargano, M.; Cavaliere, F.; Viganò, D.; Galli, A.; Ludwig, N.

    2017-03-01

    Infrared reflectography is an imaging technique used to visualize the underdrawings of ancient paintings; it relies on the fact that most pigment layers are quite transparent to infrared radiation in the spectral band between 0.8 μm and 2.5 μm. InGaAs sensor cameras are nowadays the most widely used devices to visualize underdrawings, but due to the small size of the detectors these cameras are usually mounted on scanning systems to record high-resolution reflectograms. This work describes a portable scanning system prototype based on a spherical scanning geometry, built around a lightweight, low-cost motorized head. The motorized head was built to allow the refocusing adjustment needed to compensate for the variable camera-painting distance during the rotation of the camera. The prototype was tested first in the laboratory and then in situ on the Giotto panel "God the Father with Angels", at a resolution of 256 pixels per inch. The system performance is comparable with that of other reflectographic devices, with the advantage of extending the scanned area up to 1 m × 1 m with a 40 min scanning time. The present configuration can easily be modified to increase the resolution up to 560 pixels per inch or to extend the scanned area up to 2 m × 2 m.
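
    The need for refocusing follows from simple geometry: if the camera pivots about a fixed point at distance d0 from a flat painting, the object distance along the optical axis grows as d0 divided by the cosine of the off-normal angle. The sketch below assumes that pivot-in-front-of-a-plane geometry; the distances are illustrative, not the prototype's actual parameters.

        import math

        def object_distance(d0_m, pan_deg, tilt_deg):
            """Distance from the pivot to the painting plane along the optical axis."""
            cos_off_normal = math.cos(math.radians(pan_deg)) * math.cos(math.radians(tilt_deg))
            return d0_m / cos_off_normal

        d0 = 1.0  # assumed 1 m from the pivot to the painting along the surface normal
        for pan in (0, 10, 20, 30):
            print(f"pan {pan:2d} deg -> focus distance {object_distance(d0, pan, 0):.3f} m")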

  3. Program Visualization

    DTIC Science & Technology

    1983-02-22

    (Only fragments of this report are available in this record.) The recovered fragments include a garbled reference to Sutherland's Sketchpad man-machine graphical communication system and to a Scientific Computing Symposium on Man-Machine Communication (1965), pp. 57-71, together with project status notes: an event held at the university's Idylwild Campus; a July 1982 note recording that Craig Fields and Clint Kelly of DARPA visited CCA on July 6; a mention of Christopher Herot; a December 9 visit that included an extended Program Visualization (PV) slide presentation and a demonstration of the system; and a January 13 visit by Clint Kelly of DARPA.

  4. Trifocal Tensor-Based Adaptive Visual Trajectory Tracking Control of Mobile Robots.

    PubMed

    Chen, Jian; Jia, Bingxi; Zhang, Kaixiang

    2017-11-01

    In this paper, a trifocal tensor-based approach is proposed for the visual trajectory tracking task of a nonholonomic mobile robot equipped with a roughly installed monocular camera. The desired trajectory is expressed by a set of prerecorded images, and the robot is regulated to track the desired trajectory using visual feedback. The trifocal tensor is exploited to obtain the orientation and scaled position information used in the control system, and it works for general scenes owing to the generality of the trifocal tensor. In previous works, the start, current, and final images are required to share enough visual information to estimate the trifocal tensor. However, this requirement can easily be violated for perspective cameras with a limited field of view. In this paper, a key frame strategy is proposed to loosen this requirement, extending the workspace of the visual servo system. Considering the unknown depth and extrinsic parameters (the installation position of the camera), an adaptive controller is developed based on Lyapunov methods. The proposed control strategy works for almost all practical circumstances, including both trajectory tracking and pose regulation tasks. Simulations based on the Virtual Robot Experimentation Platform (V-REP) are used to evaluate the effectiveness of the proposed approach.
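
    For orientation, the underlying trajectory-tracking problem for a unicycle-type robot is often handled with a posture-error feedback law of the classic form v = v_r cos(e_theta) + k_x e_x and w = w_r + v_r (k_y e_y + k_theta sin(e_theta)); the sketch below shows that standard kinematic controller with illustrative gains, not the adaptive trifocal-tensor-based control law developed in the paper.

        import math

        def tracking_control(pose, ref_pose, v_ref, w_ref, kx=1.0, ky=4.0, kth=2.0):
            """pose, ref_pose: (x, y, theta). Returns linear and angular velocity commands."""
            x, y, th = pose
            xr, yr, thr = ref_pose
            # tracking error expressed in the robot's body frame
            ex = math.cos(th) * (xr - x) + math.sin(th) * (yr - y)
            ey = -math.sin(th) * (xr - x) + math.cos(th) * (yr - y)
            eth = thr - th
            v = v_ref * math.cos(eth) + kx * ex
            w = w_ref + v_ref * (ky * ey + kth * math.sin(eth))
            return v, w

        print(tracking_control((0.0, 0.1, 0.05), (0.0, 0.0, 0.0), v_ref=0.3, w_ref=0.0))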

  5. Purtscher's retinopathy associated with acute pancreatitis.

    PubMed

    Hamp, Ania M; Chu, Edward; Slagle, William S; Hamp, Robert C; Joy, Jeffrey T; Morris, Robert W

    2014-02-01

    Purtscher's retinopathy is a rare condition that is associated with complement-activating systemic diseases such as acute pancreatitis. After pancreatic injury or inflammation, proteases such as trypsin activate the complement system and can potentially cause coagulation and leukoembolization of retinal precapillary arterioles. Specifically, intermediate-sized emboli are sufficiently small enough to pass through larger arteries yet large enough to remain lodged in precapillary arterioles and cause the clinical appearance of Purtscher's retinopathy. This pathology may present with optic nerve edema, impaired visual acuity, visual field loss, as well as retinal findings such as cotton-wool spots, retinal hemorrhage, artery attenuation, venous dilation, and Purtscher flecken. A 57-year-old white man presented with an acute onset of visual field scotomas and decreased visual acuity 1 week after being hospitalized for acute pancreatitis. The retinal examination revealed multiple regions of discrete retinal whitening surrounding the disk, extending through the macula bilaterally, as well as bilateral optic nerve hemorrhages. The patient identified paracentral bilateral visual field defects on Amsler Grid testing, which was confirmed with subsequent Humphrey visual field analysis. Although the patient presented with an atypical underlying etiology, he exhibited classic retinal findings for Purtscher's retinopathy. After 2 months, best corrected visual acuity improved and the retinal whitening was nearly resolved; however, bilateral paracentral visual field defects remained. Purtscher's retinopathy has a distinctive clinical presentation and is typically associated with thoracic trauma but may be a sequela of nontraumatic systemic disease such as acute pancreatitis. Patients diagnosed with acute pancreatitis should have an eye examination to rule out Purtscher's retinopathy. Although visual improvement is possible, patients should be educated that there may be permanent ocular sequelae.

  6. Compensation for Transport Delays Produced by Computer Image Generation Systems. Cooperative Training Series.

    ERIC Educational Resources Information Center

    Ricard, G. L.; And Others

    The cooperative Navy/Air Force project described is aimed at the problem of image-flutter encountered when visual displays that present computer generated images are used for the simulation of certain flying situations. Two experiments are described which extend laboratory work on delay compensation schemes to the simulation of formation flight in…

  7. A generalized 3D framework for visualization of planetary data.

    NASA Astrophysics Data System (ADS)

    Larsen, K. W.; De Wolfe, A. W.; Putnam, B.; Lindholm, D. M.; Nguyen, D.

    2016-12-01

    As the volume and variety of data returned from planetary exploration missions continue to expand, new tools and technologies are needed to explore the data and answer questions about the formation and evolution of the solar system. We have developed a 3D visualization framework that enables the exploration of planetary data from multiple instruments on the MAVEN mission to Mars. This framework not only provides the opportunity for cross-instrument visualization, but is extended to include model data as well, helping to bridge the gap between theory and observation. This is made possible through the use of new web technologies, namely LATIS, a data server that can stream data and spacecraft ephemerides to a web browser, and Cesium, a JavaScript library for 3D globes. The common visualization framework we have developed is flexible and modular so that it can easily be adapted for additional missions. In addition to demonstrating the combined data and modeling capabilities of the system for the MAVEN mission, we will display the first ever near real-time "QuickLook", interactive, 4D data visualization for the Magnetospheric Multiscale Mission (MMS). In this application, data from all four spacecraft can be manipulated and visualized as soon as the data is ingested into the MMS Science Data Center, less than one day after collection.

  8. Developing Verbal and Visual Literacy through Experiences in the Visual Arts: 25 Tips for Teachers

    ERIC Educational Resources Information Center

    Johnson, Margaret H.

    2008-01-01

    Including talk about art--conversing with children about artwork, their own and others'--as a component of visual art activities extends children's experiences in and understanding of visual messages. Johnson discusses practices that help children develop visual and verbal expression through active experiences with the visual arts. She offers 25…

  9. Hazardous sign detection for safety applications in traffic monitoring

    NASA Astrophysics Data System (ADS)

    Benesova, Wanda; Kottman, Michal; Sidla, Oliver

    2012-01-01

    The transportation of hazardous goods on public street systems can pose severe safety threats in case of accidents. One of the solutions to these problems is the automatic detection and registration of vehicles that are marked with dangerous-goods signs. We present a prototype system which can detect a trained set of signs in high resolution images under real-world conditions. This paper compares two different methods for the detection: the bag of visual words (BoW) procedure and our approach based on pairs of visual words with Hough voting. The results of an extended series of experiments are provided in this paper. The experiments show that the size of the visual vocabulary is crucial and can significantly affect the recognition success rate. Different codebook sizes have been evaluated for this detection task. The best result of the first method (BoW) was 67% of hazardous signs successfully recognized, whereas the second method proposed in this paper - pairs of visual words with Hough voting - reached 94% correctly detected signs. The experiments are designed to verify the usability of the two proposed approaches in a real-world scenario.
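
    As a point of reference for the codebook-size finding, the sketch below shows a conventional bag-of-visual-words pipeline: local descriptors are clustered into a visual vocabulary and each image is quantized into a word histogram. It assumes scikit-learn's KMeans and descriptors already extracted (e.g. SIFT/SURF rows); the Hough-voting word-pair extension is not reproduced.

        import numpy as np
        from sklearn.cluster import KMeans

        def build_vocabulary(training_descriptors, vocab_size=200, seed=0):
            """Cluster local descriptors (one row per keypoint) into visual words."""
            km = KMeans(n_clusters=vocab_size, n_init=10, random_state=seed)
            km.fit(np.vstack(training_descriptors))
            return km

        def bow_histogram(km, image_descriptors):
            """Quantize one image's descriptors into a normalized word histogram."""
            words = km.predict(image_descriptors)
            hist = np.bincount(words, minlength=km.n_clusters).astype(float)
            return hist / max(hist.sum(), 1.0)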

  10. A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements

    PubMed Central

    Mohsenzadeh, Yalda; Dash, Suryadeep; Crawford, J. Douglas

    2016-01-01

    In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a non-linear SSM implemented through a recurrent radial-basis-function neural network in a dual extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks. PMID:27242452
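
    The dual-EKF machinery the abstract refers to rests on the standard extended Kalman filter recursion. The sketch below shows one generic predict/update cycle; the process model f, measurement model h, their Jacobians, and the noise covariances are placeholders standing in for the model's recurrent radial-basis-function network.

        import numpy as np

        def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
            """One predict/update cycle of an extended Kalman filter.

            x, P : state estimate (e.g. remembered target position) and its covariance
            u    : input signal (e.g. efference copy of the eye movement)
            z    : measurement (e.g. retinal feedback about the remembered target)
            f, h : nonlinear process and measurement models; F_jac, H_jac their Jacobians
            """
            x_pred = f(x, u)                          # predict the state forward
            F = F_jac(x, u)
            P_pred = F @ P @ F.T + Q
            H = H_jac(x_pred)
            y = z - h(x_pred)                         # innovation
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new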

  11. PedVizApi: a Java API for the interactive, visual analysis of extended pedigrees.

    PubMed

    Fuchsberger, Christian; Falchi, Mario; Forer, Lukas; Pramstaller, Peter P

    2008-01-15

    PedVizApi is a Java API (application programming interface) for the visual analysis of large and complex pedigrees. It provides all the necessary functionality for the interactive exploration of extended genealogies. While available packages are mostly focused on a static representation or cannot be added to an existing application, PedVizApi is a highly flexible open source library for the efficient construction of visual-based applications for the analysis of family data. An extensive demo application and an R interface are provided. http://www.pedvizapi.org

  12. On a common circle: natural scenes and Gestalt rules.

    PubMed

    Sigman, M; Cecchi, G A; Gilbert, C D; Magnasco, M O

    2001-02-13

    To understand how the human visual system analyzes images, it is essential to know the structure of the visual environment. In particular, natural images display consistent statistical properties that distinguish them from random luminance distributions. We have studied the geometric regularities of oriented elements (edges or line segments) present in an ensemble of visual scenes, asking how much information the presence of a segment in a particular location of the visual scene carries about the presence of a second segment at different relative positions and orientations. We observed strong long-range correlations in the distribution of oriented segments that extend over the whole visual field. We further show that a very simple geometric rule, cocircularity, predicts the arrangement of segments in natural scenes, and that different geometrical arrangements show relevant differences in their scaling properties. Our results show similarities to geometric features of previous physiological and psychophysical studies. We discuss the implications of these findings for theories of early vision.

  13. Extended Hückel Calculations on Solids Using the Avogadro Molecular Editor and Visualizer

    ERIC Educational Resources Information Center

    Avery, Patrick; Ludoweig, Herbert; Autschbach, Jochen; Zurek, Eva

    2018-01-01

    The "Yet Another extended Hückel Molecular Orbital Package" (YAeHMOP) has been merged with the Avogadro open-source molecular editor and visualizer. It is now possible to perform YAeHMOP calculations directly from the Avogadro graphical user interface for materials that are periodic in one, two, or three dimensions, and to visualize…

  14. Regions of mid-level human visual cortex sensitive to the global coherence of local image patches.

    PubMed

    Mannion, Damien J; Kersten, Daniel J; Olman, Cheryl A

    2014-08-01

    The global structural arrangement and spatial layout of the visual environment must be derived from the integration of local signals represented in the lower tiers of the visual system. This interaction between the spatially local and global properties of visual stimulation underlies many of our visual capacities, and how this is achieved in the brain is a central question for visual and cognitive neuroscience. Here, we examine the sensitivity of regions of the posterior human brain to the global coordination of spatially displaced naturalistic image patches. We presented observers with image patches in two circular apertures to the left and right of central fixation, with the patches drawn from either the same (coherent condition) or different (noncoherent condition) extended image. Using fMRI at 7T (n = 5), we find that global coherence affected signal amplitude in regions of dorsal mid-level cortex. Furthermore, we find that extensive regions of mid-level visual cortex contained information in their local activity pattern that could discriminate coherent and noncoherent stimuli. These findings indicate that the global coordination of local naturalistic image information has important consequences for the processing in human mid-level visual cortex.

  15. Comparison of two laboratory-based systems for evaluation of halos in intraocular lenses

    PubMed Central

    Alexander, Elsinore; Wei, Xin; Lee, Shinwook

    2018-01-01

    Purpose Multifocal intraocular lenses (IOLs) can be associated with unwanted visual phenomena, including halos. Predicting potential for halos is desirable when designing new multifocal IOLs. Halo images from six IOL models were compared using the Optikos modulation transfer function bench system and a new high dynamic range (HDR) system. Materials and methods One monofocal, one extended-depth-of-focus, and four multifocal IOLs were evaluated. An off-the-shelf optical bench was used to simulate a distant (>50 m) car headlight and record images. A custom HDR system was constructed using an imaging photometer to simulate headlight images and to measure quantitative halo luminance data. A metric was developed to characterize halo luminance properties. Clinical relevance was investigated by correlating halo measurements with visual outcomes questionnaire data. Results The Optikos system produced halo images useful for visual comparisons; however, measurements were relative and not quantitative. The HDR halo system provided objective and quantitative measurements used to create a metric from the area under the curve (AUC) of the logarithmic normalized halo profile. This proposed metric differentiated between IOL models, and linear regression analysis found strong correlations between AUC and subjective clinical ratings of halos. Conclusion The HDR system produced quantitative, preclinical metrics that correlated to patients' subjective perception of halos. PMID:29503526
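
    A minimal sketch of an AUC-style halo metric of the kind described, assuming a measured radial luminance profile from the imaging photometer: the profile is normalized, log-transformed, and integrated over an eccentricity window. The integration limits and the normalization are illustrative choices, not the published metric.

        import numpy as np

        def halo_auc(radius_deg, luminance, r_min=0.5, r_max=5.0):
            """Area under the log of a normalized halo luminance profile.

            radius_deg : radial distance from the simulated headlight centre (degrees)
            luminance  : photometer luminance measured at each radius
            """
            r = np.asarray(radius_deg, float)
            L = np.asarray(luminance, float)
            keep = (r >= r_min) & (r <= r_max)
            r, L = r[keep], L[keep]
            L_norm = L / L.max()                      # normalize to the in-window peak
            return np.trapz(np.log10(L_norm + 1e-12), r)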

  16. Visual Control for Multirobot Organized Rendezvous.

    PubMed

    Lopez-Nicolas, G; Aranda, M; Mezouar, Y; Sagues, C

    2012-08-01

    This paper addresses the problem of visual control of a set of mobile robots. In our framework, the perception system consists of an uncalibrated flying camera performing an unknown general motion. The robots are assumed to undergo planar motion considering nonholonomic constraints. The goal of the control task is to drive the multirobot system to a desired rendezvous configuration relying solely on visual information given by the flying camera. The desired multirobot configuration is defined with an image of the set of robots in that configuration without any additional information. We propose a homography-based framework relying on the homography induced by the multirobot system that gives a desired homography to be used to define the reference target, and a new image-based control law that drives the robots to the desired configuration by imposing a rigidity constraint. This paper extends our previous work, and the main contributions are that the motion constraints on the flying camera are removed, the control law is improved by reducing the number of required steps, the stability of the new control law is proved, and real experiments are provided to validate the proposal.
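
    The image-based scheme above hinges on estimating the homography induced by the multirobot configuration between the current and desired images. A hedged sketch of that estimation step using OpenCV is shown below; the rigidity-constrained control law itself is not reproduced.

        import numpy as np
        import cv2

        def multirobot_homography(pts_current, pts_desired):
            """Homography mapping the robots' current image positions to the desired image.

            pts_current, pts_desired : Nx2 image coordinates of the robots (N >= 4).
            """
            H, inliers = cv2.findHomography(np.float32(pts_current),
                                            np.float32(pts_desired),
                                            method=cv2.RANSAC,
                                            ransacReprojThreshold=3.0)
            return H, inliers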

  17. Virtual Reality: Visualization in Three Dimensions.

    ERIC Educational Resources Information Center

    McLellan, Hilary

    Virtual reality is a newly emerging tool for scientific visualization that makes possible multisensory, three-dimensional modeling of scientific data. While the emphasis is on visualization, the other senses are added to enhance what the scientist can visualize. Researchers are working to extend the sensory range of what can be perceived in…

  18. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
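
    A hedged sketch of the general idea, pairing image-registration-based observer-motion estimation with Kalman filtering: phase correlation estimates the inter-frame shift caused by camera (wave) motion, and a constant-velocity Kalman filter smooths it before the shift is compensated. The OpenCV calls are standard, but the filter structure and noise tuning are illustrative, not the authors' exact pipeline.

        import numpy as np
        import cv2

        def frame_shift(prev_gray, curr_gray):
            """Inter-frame translation (dx, dy) caused mostly by observer motion."""
            (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev_gray), np.float32(curr_gray))
            return np.float32([dx, dy])

        # Constant-velocity Kalman filter over the observer shift (illustrative tuning).
        kf = cv2.KalmanFilter(4, 2)                   # state: [dx, dy, vx, vy]
        kf.transitionMatrix = np.float32([[1, 0, 1, 0],
                                          [0, 1, 0, 1],
                                          [0, 0, 1, 0],
                                          [0, 0, 0, 1]])
        kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
        kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
        kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)

        def smoothed_observer_shift(prev_gray, curr_gray):
            kf.predict()
            z = frame_shift(prev_gray, curr_gray).reshape(2, 1)
            return kf.correct(z)[:2].ravel()          # smoothed [dx, dy]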

  19. Endoscopic surgical management of sinonasal inverted papilloma extending to frontal sinuses.

    PubMed

    Takahashi, Yukiko; Shoji, Fumi; Katori, Yukio; Hidaka, Hiroshi; Noguchi, Naoya; Abe, Yasuhiro; Kakuta, Risako Kakuta; Suzuki, Takahiro; Suzuki, Yusuke; Ohta, Nobuo; Kakehata, Seiji; Okamoto, Yoshitaka

    2016-11-10

    Sinonasal inverted papilloma has been traditionally managed with external surgical approaches. Advances in imaging guidance systems, surgical instrumentation, and intraoperative multi-visualization have led to a gradual shift from external approaches to endoscopic surgery. However, for anatomical and technical reasons, endoscopic surgery of sinonasal inverted papilloma extending to the frontal sinuses is still challenging. Here, we present our experience in endoscopic surgical management of sinonasal inverted papilloma extending to one or both frontal sinuses. We present 10 cases of sinonasal inverted papilloma extending to the frontal sinuses and successfully removed by endoscopic median drainage (Draf III procedure) under endoscopic guidance without any additional external approach. The whole cavity of the frontal sinuses was easily inspected at the end of the surgical procedure. No early or late complications were observed. No recurrence was identified after an average follow-up period of 39.5 months. Use of an endoscopic median drainage approach to manage sinonasal inverted papilloma extending to one or both frontal sinuses is feasible and seems effective.

  20. Multi-modal demands of a smartphone used to place calls and enter addresses during highway driving relative to two embedded systems.

    PubMed

    Reimer, Bryan; Mehler, Bruce; Reagan, Ian; Kidd, David; Dobres, Jonathan

    2016-12-01

    There is limited research on trade-offs in demand between manual and voice interfaces of embedded and portable technologies. Mehler et al. identified differences in driving performance, visual engagement and workload between two contrasting embedded vehicle system designs (Chevrolet MyLink and Volvo Sensus). The current study extends this work by comparing these embedded systems with a smartphone (Samsung Galaxy S4). None of the voice interfaces eliminated visual demand. Relative to placing calls manually, both embedded voice interfaces resulted in less eyes-off-road time than the smartphone. Errors were most frequent when calling contacts using the smartphone. The smartphone and MyLink allowed addresses to be entered using compound voice commands resulting in shorter eyes-off-road time compared with the menu-based Sensus but with many more errors. Driving performance and physiological measures indicated increased demand when performing secondary tasks relative to 'just driving', but were not significantly different between the smartphone and embedded systems. Practitioner Summary: The findings show that embedded system and portable device voice interfaces place fewer visual demands on the driver than manual interfaces, but they also underscore how differences in system designs can significantly affect not only the demands placed on drivers, but also the successful completion of tasks.

  1. Dual function seal: visualized digital signature for electronic medical record systems.

    PubMed

    Yu, Yao-Chang; Hou, Ting-Wei; Chiang, Tzu-Chiang

    2012-10-01

    Digital signatures are an important cryptographic technology used to provide integrity and non-repudiation in electronic medical record systems (EMRS), and they are required by law. However, digital signatures normally appear in forms unrecognizable to medical staff, which may reduce trust among staff accustomed to handwritten signatures or seals. Therefore, in this paper we propose a dual function seal to extend user trust from a traditional seal to a digital signature. The proposed dual function seal is a prototype that combines the traditional seal with a digital signature. With this prototype, medical personnel can not only put a seal on paper but also generate a visualized digital signature for electronic medical records. Medical personnel can then look at the visualized digital signature and directly know which medical personnel generated it, just as with a traditional seal. Discrete wavelet transform (DWT) is used as the image processing method to generate the visualized digital signature, and the peak signal to noise ratio (PSNR) is calculated to verify that distortions of all converted images are beyond human recognition; the PSNR values of our converted images range from 70 dB to 80 dB. Signature recoverability is also tested to ensure that the visualized digital signature is verifiable. A simulated EMRS is implemented to show how the visualized digital signature can be integrated into an EMRS.
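
    Two measurable ingredients of such a scheme, sketched under stated assumptions: embedding seal bits into one-level DWT detail coefficients (using PyWavelets) and the PSNR check used to confirm the distortion is imperceptible. The embedding band, strength, and bit layout are illustrative, not the authors' exact method.

        import numpy as np
        import pywt

        def embed_seal(cover, seal_bits, alpha=2.0):
            """Hide seal/signature bits in the diagonal detail band of a one-level DWT.

            cover     : 2-D grayscale image as a float array
            seal_bits : flat sequence of 0/1 bits, no longer than the detail band
            alpha     : embedding strength (illustrative; larger is more visible)
            """
            cA, (cH, cV, cD) = pywt.dwt2(np.asarray(cover, float), 'haar')
            flat = cD.ravel().copy()
            flat[:len(seal_bits)] += alpha * (2.0 * np.asarray(seal_bits) - 1.0)
            return pywt.idwt2((cA, (cH, cV, flat.reshape(cD.shape))), 'haar')

        def psnr(original, converted, peak=255.0):
            """Peak signal-to-noise ratio; the study reports 70-80 dB for its images."""
            mse = np.mean((np.asarray(original, float) - np.asarray(converted, float)) ** 2)
            return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)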

  2. Developmental trajectory of neural specialization for letter and number visual processing.

    PubMed

    Park, Joonkoo; van den Berg, Berry; Chiang, Crystal; Woldorff, Marty G; Brannon, Elizabeth M

    2018-05-01

    Adult neuroimaging studies have demonstrated dissociable neural activation patterns in the visual cortex in response to letters (Latin alphabet) and numbers (Arabic numerals), which suggest a strong experiential influence of reading and mathematics on the human visual system. Here, developmental trajectories in the event-related potential (ERP) patterns evoked by visual processing of letters, numbers, and false fonts were examined in four different age groups (7-, 10-, 15-year-olds, and young adults). The 15-year-olds and adults showed greater neural sensitivity to letters over numbers in the left visual cortex and the reverse pattern in the right visual cortex, extending previous findings in adults to teenagers. In marked contrast, 7- and 10-year-olds did not show this dissociable neural pattern. Furthermore, the contrast of familiar stimuli (letters or numbers) versus unfamiliar ones (false fonts) showed stark ERP differences between the younger (7- and 10-year-olds) and the older (15-year-olds and adults) participants. These results suggest that both coarse (familiar versus unfamiliar) and fine (letters versus numbers) tuning for letters and numbers continue throughout childhood and early adolescence, demonstrating a profound impact of uniquely human cultural inventions on visual cognition and its development. © 2017 John Wiley & Sons Ltd.

  3. Annotation Graphs: A Graph-Based Visualization for Meta-Analysis of Data Based on User-Authored Annotations.

    PubMed

    Zhao, Jian; Glueck, Michael; Breslav, Simon; Chevalier, Fanny; Khan, Azam

    2017-01-01

    User-authored annotations of data can support analysts in the activity of hypothesis generation and sensemaking, where it is not only critical to document key observations, but also to communicate insights between analysts. We present annotation graphs, a dynamic graph visualization that enables meta-analysis of data based on user-authored annotations. The annotation graph topology encodes annotation semantics, which describe the content of and relations between data selections, comments, and tags. We present a mixed-initiative approach to graph layout that integrates an analyst's manual manipulations with an automatic method based on similarity inferred from the annotation semantics. Various visual graph layout styles reveal different perspectives on the annotation semantics. Annotation graphs are implemented within C8, a system that supports authoring annotations during exploratory analysis of a dataset. We apply principles of Exploratory Sequential Data Analysis (ESDA) in designing C8, and further link these to an existing task typology in the visualization literature. We develop and evaluate the system through an iterative user-centered design process with three experts, situated in the domain of analyzing HCI experiment data. The results suggest that annotation graphs are effective as a method of visually extending user-authored annotations to data meta-analysis for discovery and organization of ideas.
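
    A minimal sketch of a similarity-driven annotation-graph layout in the spirit described above, assuming networkx: edges are weighted by inferred annotation similarity, and analyst-pinned nodes stay fixed while the rest settle around them (the mixed-initiative behaviour). C8's actual layout machinery and similarity measure are not reproduced.

        import networkx as nx

        def annotation_layout(similarities, pinned=None):
            """Place annotation nodes so that semantically similar ones sit closer together.

            similarities : dict {(annotation_a, annotation_b): similarity in [0, 1]}
            pinned       : optional dict {annotation: (x, y)} of analyst-fixed positions
            """
            g = nx.Graph()
            for (a, b), s in similarities.items():
                g.add_edge(a, b, weight=s)            # heavier edges pull nodes together
            if pinned:
                g.add_nodes_from(pinned)              # make sure pinned nodes exist
            return nx.spring_layout(g, weight='weight',
                                    pos=pinned, fixed=list(pinned) if pinned else None)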

  4. Statecharts Via Process Algebra

    NASA Technical Reports Server (NTRS)

    Luttgen, Gerald; vonderBeeck, Michael; Cleaveland, Rance

    1999-01-01

    Statecharts is a visual language for specifying the behavior of reactive systems. The language extends finite-state machines with concepts of hierarchy, concurrency, and priority. Despite its popularity as a design notation for embedded systems, precisely defining its semantics has proved extremely challenging. In this paper, a simple process algebra, called Statecharts Process Language (SPL), is presented, which is expressive enough for encoding Statecharts in a structure-preserving and semantics-preserving manner. It is established that the behavioral relation bisimulation, when applied to SPL, preserves the Statecharts semantics.

  5. An extended retinotopic map of mouse cortex

    PubMed Central

    Zhuang, Jun; Ng, Lydia; Williams, Derric; Valley, Matthew; Li, Yang; Garrett, Marina; Waters, Jack

    2017-01-01

    Visual perception and behavior are mediated by cortical areas that have been distinguished using architectonic and retinotopic criteria. We employed fluorescence imaging and GCaMP6 reporter mice to generate retinotopic maps, revealing additional regions of retinotopic organization that extend into barrel and retrosplenial cortices. Aligning retinotopic maps to architectonic borders, we found a mismatch in border location, indicating that architectonic borders are not aligned with the retinotopic transition at the vertical meridian. We also assessed the representation of visual space within each region, finding that four visual areas bordering V1 (LM, P, PM and RL) display complementary representations, with overlap primarily at the central hemifield. Our results extend our understanding of the organization of mouse cortex to include up to 16 distinct retinotopically organized regions. DOI: http://dx.doi.org/10.7554/eLife.18372.001 PMID:28059700

  6. Advancements to Visualization Control System (VCS, part of UV-CDAT), a Visualization Package Designed for Climate Scientists

    NASA Astrophysics Data System (ADS)

    Lipsa, D.; Chaudhary, A.; Williams, D. N.; Doutriaux, C.; Jhaveri, S.

    2017-12-01

    Climate Data Analysis Tools (UV-CDAT, https://uvcdat.llnl.gov) is a data analysis and visualization software package developed at Lawrence Livermore National Laboratory and designed for climate scientists. Core components of UV-CDAT include: 1) the Community Data Management System (CDMS), which provides I/O support and a data model for climate data; 2) CDAT Utilities (GenUtil), which processes data using spatial and temporal averaging and statistical functions; and 3) the Visualization Control System (VCS) for interactive visualization of the data. VCS is a Python visualization package built primarily for climate scientists; however, because of its generality and breadth of functionality, it can be a useful tool for other scientific applications. VCS provides 1D, 2D, and 3D visualization functions, such as scatter plots and line graphs for 1D data; boxfill, meshfill, isofill, and isoline for 2D scalar data; vector glyphs and streamlines for 2D vector data; and 3d_scalar and 3d_vector for 3D data. Specifically for climate data, the plotting routines include projections, Skew-T plots, and Taylor diagrams. While VCS provides a user-friendly API, the previous implementation relied on a slow-performing vector-graphics (Cairo) backend suitable only for smaller datasets and non-interactive graphics. The LLNL and Kitware team has added a new backend to VCS that uses the Visualization Toolkit (VTK) as its visualization engine. VTK is one of the most popular open-source, multi-platform scientific visualization libraries, written in C++. Its use of OpenGL and its pipeline-processing architecture result in a highly performant VCS library, and its multitude of supported data formats and visualization algorithms makes it easy to adopt new visualization methods and new data formats in VCS. In this presentation, we describe recent contributions to VCS, including new visualization plots, continuous integration testing using Conda and CircleCI, and tutorials and examples using Jupyter notebooks, as well as upgrades planned for the near future to improve its ease of use and reliability and to extend its capabilities.
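
    For readers unfamiliar with the package, a minimal usage sketch is given below, assuming the standard cdms2/vcs entry points (cdms2.open, vcs.init, createboxfill, plot, png); the file name and variable are placeholders, not data shipped with UV-CDAT.

        import cdms2
        import vcs

        f = cdms2.open('sample_climate_data.nc')      # placeholder NetCDF file
        clt = f('clt')                                # placeholder variable (total cloudiness)
        canvas = vcs.init()                           # VCS canvas (VTK-backed)
        boxfill = canvas.createboxfill()              # 2D boxfill graphics method
        canvas.plot(clt, boxfill)                     # render the field on the canvas
        canvas.png('clt_boxfill')                     # save the rendering to a PNG
        f.close()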

  7. A UML Profile for Developing Databases that Conform to the Third Manifesto

    NASA Astrophysics Data System (ADS)

    Eessaar, Erki

    The Third Manifesto (TTM) presents the principles of a relational database language that is free of deficiencies and ambiguities of SQL. There are database management systems that are created according to TTM. Developers need tools that support the development of databases by using these database management systems. UML is a widely used visual modeling language. It provides built-in extension mechanism that makes it possible to extend UML by creating profiles. In this paper, we introduce a UML profile for designing databases that correspond to the rules of TTM. We created the first version of the profile by translating existing profiles of SQL database design. After that, we extended and improved the profile. We implemented the profile by using UML CASE system StarUML™. We present an example of using the new profile. In addition, we describe problems that occurred during the profile development.

  8. Imaging systems level consolidation of novel associate memories: A longitudinal neuroimaging study

    PubMed Central

    Smith, Jason F; Alexander, Gene E; Chen, Kewei; Husain, Fatima T; Kim, Jieun; Pajor, Nathan; Horwitz, Barry

    2010-01-01

    Previously, a standard theory of systems level memory consolidation was developed to describe how memory recall becomes independent of the medial temporal memory system. More recently, an extended consolidation theory was proposed that predicts seven changes in regional neural activity and inter-regional functional connectivity. Using longitudinal event related functional magnetic resonance imaging of an associate memory task, we simultaneously tested all predictions and additionally tested for consolidation related changes in recall of associate memories at a sub-trial temporal resolution, analyzing cue, delay and target periods of each trial separately. Results consistent with the theoretical predictions were observed though two inconsistent results were also obtained. In particular, while recall-related delay period activity decreased with consolidation as predicted, visual cue activity increased for consolidated memories. Though the extended theory of memory consolidation is largely supported by our study, these results suggest the extended theory needs further refinement and the medial temporal memory system has multiple, temporally distinct roles in associate memory recall. Neuroimaging analysis at a sub-trial temporal resolution, as used here, may further clarify the role of the hippocampal complex in memory consolidation. PMID:19948227

  9. Effects of VR system fidelity on analyzing isosurface visualization of volume datasets.

    PubMed

    Laha, Bireswar; Bowman, Doug A; Socha, John J

    2014-04-01

    Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-μCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.

  10. The importance of ultraviolet and near-infrared sensitivity for visual discrimination in two species of lacertid lizards.

    PubMed

    Martin, Mélissa; Le Galliard, Jean-François; Meylan, Sandrine; Loew, Ellis R

    2015-02-01

    Male and female Lacertid lizards often display conspicuous coloration that is involved in intraspecific communication. However, visual systems of Lacertidae have rarely been studied and the spectral sensitivity of their retinal photoreceptors remains unknown. Here, we characterise the spectral sensitivity of two Lacertid species from contrasting habitats: the wall lizard Podarcis muralis and the common lizard Zootoca vivipara. Both species possess a pure-cone retina with one spectral class of double cones and four spectral classes of single cones. The two species differ in the spectral sensitivity of the LWS cones, the relative abundance of UVS single cones (potentially more abundant in Z. vivipara) and the coloration of oil droplets. Wall lizards have pure vitamin A1-based photopigments, whereas common lizards possess mixed vitamin A1 and A2 photopigments, extending spectral sensitivity into the near infrared, which is a rare feature in terrestrial vertebrates. We found that spectral sensitivity in the UV and near infrared improves discrimination of small variations in throat coloration among Z. vivipara. Thus, retinal specialisations optimise chromatic resolution in common lizards, indicating that the visual system and visual signals might co-evolve. © 2015. Published by The Company of Biologists Ltd.

  11. Plastic reorganization of neural systems for perception of others in the congenitally blind.

    PubMed

    Fairhall, S L; Porter, K B; Bellucci, C; Mazzetti, M; Cipolli, C; Gobbini, M I

    2017-09-01

    Recent evidence suggests that the function of the core system for face perception might extend beyond visual face-perception to a broader role in person perception. To critically test the broader role of core face-system in person perception, we examined the role of the core system during the perception of others in 7 congenitally blind individuals and 15 sighted subjects by measuring their neural responses using fMRI while they listened to voices and performed identity and emotion recognition tasks. We hypothesised that in people who have had no visual experience of faces, core face-system areas may assume a role in the perception of others via voices. Results showed that emotions conveyed by voices can be decoded in homologues of the core face system only in the blind. Moreover, there was a specific enhancement of response to verbal as compared to non-verbal stimuli in bilateral fusiform face areas and the right posterior superior temporal sulcus showing that the core system also assumes some language-related functions in the blind. These results indicate that, in individuals with no history of visual experience, areas of the core system for face perception may assume a role in aspects of voice perception that are relevant to social cognition and perception of others' emotions. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  12. The impact of recreational MDMA 'ecstasy' use on global form processing.

    PubMed

    White, Claire; Edwards, Mark; Brown, John; Bell, Jason

    2014-11-01

    The ability to integrate local orientation information into a global form percept was investigated in long-term ecstasy users. Evidence suggests that ecstasy disrupts the serotonin system, with the visual areas of the brain being particularly susceptible. Previous research has found altered orientation processing in the primary visual area (V1) of users, thought to be due to disrupted serotonin-mediated lateral inhibition. The current study aimed to investigate whether orientation deficits extend to higher visual areas involved in global form processing. Forty-five participants completed a psychophysical (Glass pattern) study allowing an investigation into the mechanisms underlying global form processing and sensitivity to changes in the offset of the stimuli (jitter). A subgroup of polydrug-ecstasy users (n=6) with high ecstasy use had significantly higher thresholds for the detection of Glass patterns than controls (n=21, p=0.039) after Bonferroni correction. There was also a significant interaction between jitter level and drug-group, with polydrug-ecstasy users showing reduced sensitivity to alterations in jitter level (p=0.003). These results extend previous research, suggesting disrupted global form processing and reduced sensitivity to orientation jitter with ecstasy use. Further research is needed to investigate this finding in a larger sample of heavy ecstasy users and to differentiate the effects of other drugs. © The Author(s) 2014.

  13. Universal brain systems for recognizing word shapes and handwriting gestures during reading

    PubMed Central

    Nakamura, Kimihiro; Kuo, Wen-Jui; Pegado, Felipe; Cohen, Laurent; Tzeng, Ovid J. L.; Dehaene, Stanislas

    2012-01-01

    Do the neural circuits for reading vary across culture? Reading of visually complex writing systems such as Chinese has been proposed to rely on areas outside the classical left-hemisphere network for alphabetic reading. Here, however, we show that, once potential confounds in cross-cultural comparisons are controlled for by presenting handwritten stimuli to both Chinese and French readers, the underlying network for visual word recognition may be more universal than previously suspected. Using functional magnetic resonance imaging in a semantic task with words written in cursive font, we demonstrate that two universal circuits, a shape recognition system (reading by eye) and a gesture recognition system (reading by hand), are similarly activated and show identical patterns of activation and repetition priming in the two language groups. These activations cover most of the brain regions previously associated with culture-specific tuning. Our results point to an extended reading network that invariably comprises the occipitotemporal visual word-form system, which is sensitive to well-formed static letter strings, and a distinct left premotor region, Exner’s area, which is sensitive to the forward or backward direction with which cursive letters are dynamically presented. These findings suggest that cultural effects in reading merely modulate a fixed set of invariant macroscopic brain circuits, depending on surface features of orthographies. PMID:23184998

  14. Premotor cortex is sensitive to auditory-visual congruence for biological motion.

    PubMed

    Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F

    2012-03-01

    The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.

  15. Visual Dialect: Ethnovisual and Sociovisual Elements of Design in Public Service Communication.

    ERIC Educational Resources Information Center

    Schiffman, Carole B.

    Graphic design is a form of communication by which visual messages are conveyed to a viewer. Audience needs and views must steer the design process when constructing public service visual messages. Well-educated people may be better able to comprehend visuals which require some level of interpretation or extend beyond their world view. Public…

  16. Improved detection following Neuro-Eye Therapy in patients with post-geniculate brain damage.

    PubMed

    Sahraie, Arash; Macleod, Mary-Joan; Trevethan, Ceri T; Robson, Siân E; Olson, John A; Callaghan, Paula; Yip, Brigitte

    2010-09-01

    Damage to the optic radiation or the occipital cortex results in loss of vision in the contralateral visual field, termed partial cortical blindness or hemianopia. Previously, we have demonstrated that stimulation in the field defect using visual stimuli with optimal properties for blindsight detection can lead to increases in visual sensitivity within the blind field of a group of patients. The present study aimed to extend the previous work by investigating the effect of positive feedback on recovery of visual sensitivity. Patients' ability to detect a range of spatial frequencies within their field defect was determined using a temporal two-alternative forced-choice technique, before and after a period of visual training (n = 4). Patients underwent Neuro-Eye Therapy, which involved detection of temporally modulated spatial grating patches at specific retinal locations within their field defect. Three patients showed improved detection ability following visual training. Based on our previous studies, we had hypothesised that, should the occipital brain lesion extend anteriorly to the thalamus, little recovery would be expected. Here, we describe one such case who showed no improvement after extensive training. The present study provides further evidence that recovery (a) can be gradual and may require a large number of training sessions, (b) can be accelerated using positive feedback, and (c) may be less likely to take place if the occipital damage extends anteriorly to the thalamus.

  17. A highly scalable information system as extendable framework solution for medical R&D projects.

    PubMed

    Holzmüller-Laue, Silke; Göde, Bernd; Stoll, Regina; Thurow, Kerstin

    2009-01-01

    For research projects in preventive medicine, flexible information management is needed that offers free planning and documentation of project-specific examinations. The system should allow simple, preferably automated data acquisition from several distributed sources (e.g., mobile sensors, stationary diagnostic systems, questionnaires, manual inputs) as well as effective data management, data use, and analysis. An information system fulfilling these requirements has been developed at the Center for Life Science Automation (celisca). This system combines data from multiple investigations and multiple devices and displays them on a single screen. The integration of mobile sensor systems for comfortable, location-independent capture of time-based physiological parameters, together with the possibility of observing these measurements directly in the system, enables new scenarios. The web-based information system presented in this paper is configurable through user interfaces. It covers medical process descriptions, operative process data visualizations, user-friendly process data processing, modern online interfaces (databases, web services, XML), as well as comfortable support for extended data analysis with third-party applications.

  18. Extended Plasticity of Visual Cortex in Dark-Reared Animals May Result from Prolonged Expression of cpg15-Like Genes

    PubMed Central

    Lee, Wei-Chung Allen; Nedivi, Elly

    2011-01-01

    cpg15 is an activity-regulated gene that encodes a membrane-bound ligand that coordinately regulates growth of apposing dendritic and axonal arbors and the maturation of their synapses. These properties make it an attractive candidate for participating in plasticity of the mammalian visual system. Here we compare cpg15 expression during normal development of the rat visual system with that seen in response to dark rearing, monocular blockade of retinal action potentials, or monocular deprivation. Our results show that the onset of cpg15 expression in the visual cortex is coincident with eye opening, and it increases until the peak of the critical period at postnatal day 28 (P28). This early expression is independent of both retinal activity and visual experience. After P28, a component of cpg15 expression in the visual cortex, lateral geniculate nucleus (LGN), and superior colliculus (SC) develops a progressively stronger dependence on retinally driven action potentials. Dark rearing does not affect cpg15 mRNA expression in the LGN and SC at any age, but it does significantly affect its expression in the visual cortex from the peak of the critical period and into adulthood. In dark-reared rats, the peak level of cpg15 expression in the visual cortex at P28 is lower than in controls. Rather than showing the normal decline with maturation, these levels are maintained in dark-reared animals. We suggest that the prolonged plasticity in the visual cortex that is seen in dark-reared animals may result from failure to downregulate genes such as cpg15 that could promote structural remodeling and synaptic maturation. PMID:11880509

  19. Dynamical Analysis and Visualization of Tornadoes Time Series

    PubMed Central

    2015-01-01

    In this paper we analyze the behavior of tornado time-series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the events size. First, a collection of time series involving 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, it is adopted the concept of circular time and the collective behavior of tornadoes analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns. PMID:25790281

  20. Dynamical analysis and visualization of tornadoes time series.

    PubMed

    Lopes, António M; Tenreiro Machado, J A

    2015-01-01

    In this paper we analyze the behavior of tornado time-series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the events size. First, a collection of time series involving 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, it is adopted the concept of circular time and the collective behavior of tornadoes analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns.
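
    A hedged sketch of the spectral analysis described in both records: tornadoes are placed on a time grid as impulses weighted by event size, the FFT amplitude spectrum is computed, and a power law |X(f)| ~ c * f^b is fitted in log-log coordinates. The grid resolution and fitting range are illustrative choices.

        import numpy as np

        def spectrum_power_law(event_times, magnitudes, t_max, dt=1.0):
            """Fit |X(f)| ~ c * f**b to the FFT amplitude spectrum of an impulse series.

            event_times : tornado occurrence times (same units as dt)
            magnitudes  : event sizes used as impulse amplitudes
            Returns (b, c), the power-law exponent and prefactor.
            """
            n = int(np.ceil(t_max / dt))
            x = np.zeros(n)
            idx = (np.asarray(event_times) / dt).astype(int).clip(0, n - 1)
            np.add.at(x, idx, magnitudes)             # sequence of weighted impulses
            freqs = np.fft.rfftfreq(n, d=dt)[1:]      # drop the DC component
            amps = np.abs(np.fft.rfft(x))[1:]
            b, log_c = np.polyfit(np.log10(freqs), np.log10(amps + 1e-12), 1)
            return b, 10.0 ** log_c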

  1. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach

    PubMed Central

    Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei

    2017-01-01

    One of the greatest challenges for fixed-wing unmanned aerial vehicles (UAVs) is safe landing. Here, an on-ground deployed visual approach to this problem is developed. The approach is particularly suitable for landing in global navigation satellite system (GNSS)-denied environments. In application, the deployed guidance system makes full use of ground computing resources and feeds back the aircraft's real-time localization to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed that provides an extendable baseline and a wide-angle field of view (FOV), in contrast to traditional fixed-baseline schemes. Furthermore, an accuracy evaluation of the new architecture is conducted through theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach. PMID:28629189

  2. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach.

    PubMed

    Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei

    2017-06-19

    One of the greatest challenges for fixed-wing unmanned aerial vehicles (UAVs) is safe landing. Here, an on-ground deployed visual approach to this problem is developed. The approach is particularly suitable for landing in global navigation satellite system (GNSS)-denied environments. In application, the deployed guidance system makes full use of ground computing resources and feeds back the aircraft's real-time localization to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed that provides an extendable baseline and a wide-angle field of view (FOV), in contrast to traditional fixed-baseline schemes. Furthermore, an accuracy evaluation of the new architecture is conducted through theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach.
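
    A short worked example of why an extendable (long) baseline helps, under the usual rectified-stereo assumptions: depth follows Z = f * B / d, so the depth uncertainty per pixel of disparity error grows as Z^2 / (f * B) and shrinks as the baseline B grows. The focal length, range, and disparity error below are illustrative numbers, not values from the paper.

        def stereo_depth(focal_px, baseline_m, disparity_px):
            """Depth of a point seen by a rectified stereo pair: Z = f * B / d."""
            return focal_px * baseline_m / disparity_px

        def depth_error(focal_px, baseline_m, depth_m, disparity_err_px=0.5):
            """First-order depth uncertainty: dZ ~ Z**2 / (f * B) * delta_d."""
            return depth_m ** 2 / (focal_px * baseline_m) * disparity_err_px

        # Illustrative comparison: a 0.5 m on-board baseline vs. a 20 m separated
        # on-ground baseline for an aircraft 500 m away, with a 2000-pixel focal length.
        for baseline in (0.5, 20.0):
            print(baseline, 'm baseline ->', depth_error(2000.0, baseline, 500.0), 'm error')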

  3. Visualization of a Unidirectional Electromagnetic Waveguide Using Topological Photonic Crystals Made of Dielectric Materials

    NASA Astrophysics Data System (ADS)

    Yang, Yuting; Xu, Yun Fei; Xu, Tao; Wang, Hai-Xiao; Jiang, Jian-Hua; Hu, Xiao; Hang, Zhi Hong

    2018-05-01

    We demonstrate experimentally that a photonic crystal made of Al2O3 cylinders exhibits topological time-reversal symmetric electromagnetic propagation, similar to the quantum spin Hall effect in electronic systems. A pseudospin degree of freedom in the electromagnetic system representing different states of orbital angular momentum arises due to a deformation of the photonic crystal from the ideal honeycomb lattice. It serves as the photonic analogue to the electronic Kramers pair. We visualized qualitatively and measured quantitatively that microwaves of a specific pseudospin propagate only in one direction along the interface between a topological photonic crystal and a trivial one. As only a conventional dielectric material is used and only local real-space manipulations are required, our scheme can be extended to visible light to inspire many future applications in the field of photonics and beyond.

  4. Real-time, interactive, visually updated simulator system for telepresence

    NASA Technical Reports Server (NTRS)

    Schebor, Frederick S.; Turney, Jerry L.; Marzwell, Neville I.

    1991-01-01

    Time delays and limited sensory feedback of remote telerobotic systems tend to disorient teleoperators and dramatically decrease the operator's performance. To remove the effects of time delays, key components of a prototype forward simulation subsystem, the Global-Local Environment Telerobotic Simulator (GLETS), which buffers the operator from the remote task, were designed and developed. GLETS totally immerses an operator in a real-time, interactive, simulated, visually updated artificial environment of the remote telerobotic site. Using GLETS, the operator will, in effect, enter into a telerobotic virtual reality and can easily form a gestalt of the virtual 'local site' that matches the operator's normal interactions with the remote site. In addition to its use in space-based telerobotics, GLETS, due to its extendable architecture, can also be used in other teleoperational environments such as toxic material handling, construction, and undersea exploration.

  5. The Processing of Biologically Plausible and Implausible forms in American Sign Language: Evidence for Perceptual Tuning.

    PubMed

    Almeida, Diogo; Poeppel, David; Corina, David

    The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.

  6. Large-scale functional models of visual cortex for remote sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, which requires ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  7. Tool use and the distalization of the end-effector

    PubMed Central

    Bonaiuto, James B.; Jacobs, Stéphane; Frey, Scott H.

    2009-01-01

    We review recent neurophysiological data from macaques and humans suggesting that the use of tools extends the internal representation of the actor’s hand, and relate it to our modeling of the visual control of grasping. We introduce the idea that, in addition to extending the body schema to incorporate the tool, tool use involves distalization of the end-effector from hand to tool. Different tools extend the body schema in different ways, with a displaced visual target and a novel, task-specific processing of haptic feedback to the hand. This distalization is critical in order to exploit the unique functional capacities engendered by complex tools. PMID:19347356

  8. Integration of a vision-based tracking platform, visual instruction, and error analysis models for an efficient billiard training system

    NASA Astrophysics Data System (ADS)

    Shih, Chihhsiong; Hsiung, Pao-Ann; Wan, Chieh-Hao; Koong, Chorng-Shiuh; Liu, Tang-Kun; Yang, Yuanfan; Lin, Chu-Hsing; Chu, William Cheng-Chung

    2009-02-01

    A billiard ball tracking system is designed to combine with a visual guide interface to instruct users for a reliable strike. The integrated system runs on a PC platform. The system makes use of a vision system for cue ball, object ball and cue stick tracking. A least-squares error calibration process correlates the real-world and the virtual-world pool ball coordinates for a precise guidance line calculation. Users are able to adjust the cue stick on the pool table according to a visual guidance line instruction displayed on a PC monitor. The ideal visual guidance line extended from the cue ball is calculated based on a collision motion analysis. In addition to calculating the ideal visual guide, the factors influencing selection of the best shot among different object balls and pockets are explored. It is found that a tolerance angle around the ideal line for the object ball to roll into a pocket determines the difficulty of a strike. This angle depends in turn on the distance from the pocket to the object, the distance from the object to the cue ball, and the angle between these two vectors. Simulation results for tolerance angles as a function of these quantities are given. A selected object ball was tested extensively with respect to various geometrical parameters with and without using our integrated system. Players with different proficiency levels were selected for the experiment. The results indicate that all players benefit from our proposed visual guidance system in enhancing their skills, while low-skill players show the maximum enhancement in skill with the help of our system. All exhibit enhanced maximum and average hit-in rates. Experimental results on hit-in rates have shown a pattern consistent with that of the analysis. The hit-in rate is thus tightly connected with the analyzed tolerance angles for sinking object balls into a target pocket. These results prove the efficiency of our system, and the analysis results can be used to attain an efficient game-playing strategy.
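
    A simplified sketch of the collision geometry behind such a guidance line: the cue ball is aimed at the "ghost ball" position one ball diameter behind the object ball along the pocket-to-object line, and a tolerance angle can be estimated from the pocket mouth as seen from the object ball. The ball and pocket radii and the tolerance formula are simplified assumptions, not the paper's full error-analysis model.

        import numpy as np

        BALL_R = 0.0286                               # pool ball radius in metres

        def ghost_ball(object_xy, pocket_xy):
            """Aim point for the cue ball: one ball diameter behind the object ball,
            on the line from the pocket through the object ball."""
            u = np.asarray(object_xy, float) - np.asarray(pocket_xy, float)
            u /= np.linalg.norm(u)
            return np.asarray(object_xy, float) + 2.0 * BALL_R * u

        def pocket_tolerance_angle(object_xy, pocket_xy, pocket_r=0.06):
            """Rough angular margin (radians) within which the object ball still drops,
            estimated from the pocket mouth as seen from the object ball."""
            d = np.linalg.norm(np.asarray(pocket_xy, float) - np.asarray(object_xy, float))
            return np.arctan((pocket_r - BALL_R) / d)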

  9. Columnar Segregation of Magnocellular and Parvocellular Streams in Human Extrastriate Cortex

    PubMed Central

    2017-01-01

    Magnocellular versus parvocellular (M-P) streams are fundamental to the organization of macaque visual cortex. Segregated, paired M-P streams extend from retina through LGN into V1. The M stream extends further into area V5/MT, and parts of V2. However, elsewhere in visual cortex, it remains unclear whether M-P-derived information (1) becomes intermixed or (2) remains segregated in M-P-dominated columns and neurons. Here we tested whether M-P streams exist in extrastriate cortical columns, in 8 human subjects (4 female). We acquired high-resolution fMRI at high field (7T), testing for M- and P-influenced columns within each of four cortical areas (V2, V3, V3A, and V4), based on known functional distinctions in M-P streams in macaque: (1) color versus luminance, (2) binocular disparity, (3) luminance contrast sensitivity, (4) peak spatial frequency, and (5) color/spatial interactions. Additional measurements of resting state activity (eyes closed) tested for segregated functional connections between these columns. We found M- and P-like functions and connections within and between segregated cortical columns in V2, V3, and (in most experiments) area V4. Area V3A was dominated by the M stream, without significant influence from the P stream. These results suggest that M-P streams exist, and extend through, specific columns in early/middle stages of human extrastriate cortex. SIGNIFICANCE STATEMENT The magnocellular and parvocellular (M-P) streams are fundamental components of primate visual cortical organization. These streams segregate both anatomical and functional properties in parallel, from retina through primary visual cortex. However, in most higher-order cortical sites, it is unknown whether such M-P streams exist and/or what form those streams would take. Moreover, it is unknown whether M-P streams exist in human cortex. Here, fMRI evidence measured at high field (7T) and high resolution revealed segregated M-P streams in four areas of human extrastriate cortex. These results suggest that M-P information is processed in segregated parallel channels throughout much of human visual cortex; the M-P streams are more than a convenient sorting property in earlier stages of the visual system. PMID:28724749

  10. Flight Deck Technologies to Enable NextGen Low Visibility Surface Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence (Lance) J., III; Arthur, Jarvis (Trey) J.; Kramer, Lynda J.; Norman, Robert M.; Bailey, Randall E.; Jones, Denise R.; Karwac, Jerry R., Jr.; Shelton, Kevin J.; Ellis, Kyle K. E.

    2013-01-01

    Many key capabilities are being identified to enable the Next Generation Air Transportation System (NextGen), including the concept of Equivalent Visual Operations (EVO): replicating the capacity and safety of today's visual flight rules (VFR) in all-weather conditions. NASA is striving to develop the technologies and knowledge to enable EVO and to extend EVO towards a Better-Than-Visual operational concept. This operational concept envisions an 'equivalent visual' paradigm in which an electronic means provides sufficient visual references of the external world, and other required flight references, on flight deck displays to enable VFR-like operational tempos while maintaining and improving on the safety of VFR, using VFR-like procedures in all-weather conditions. The Langley Research Center (LaRC) has recently completed preliminary research on flight deck technologies for low visibility surface operations. The work assessed the potential of enhanced vision and airport moving map displays to achieve levels of safety and performance equivalent to existing low visibility operational requirements. The work has the potential to better enable NextGen by perhaps providing an operational credit for conducting safe low visibility surface operations through use of the flight deck technologies.

  11. Analytical evaluation of two motion washout techniques

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1977-01-01

    Practical tools were developed which extend the state of the art of moving base flight simulation for research and training purposes. The use of visual and vestibular cues to minimize the actual motion of the simulator itself was a primary consideration. The investigation consisted of optimum programming of motion cues based on a physiological model of the vestibular system to yield 'ideal washout logic' for any given simulator constraints.

  12. School Starters' Vision--An Educational Approach

    ERIC Educational Resources Information Center

    Wilhelmsen, Gunvor B

    2016-01-01

    Although good visual capacity is essential for children's learning, we have limited understanding of the various visual functions among school starters. In order to extend this knowledge, a small-scale study was undertaken involving 24 preschool children age 5-6 years who completed a test battery originally designed for visual impairment…

  13. Top-down control of visual perception: attention in natural vision.

    PubMed

    Rolls, Edmund T

    2008-01-01

    Top-down perceptual influences can bias (or pre-empt) perception. In natural scenes, the receptive fields of neurons in the inferior temporal visual cortex (IT) shrink to become close to the size of objects. This facilitates the read-out of information from the ventral visual system, because the information is primarily about the object at the fovea. Top-down attentional influences are much less evident in natural scenes than when objects are shown against blank backgrounds, though are still present. It is suggested that the reduced receptive-field size in natural scenes, and the effects of top-down attention contribute to change blindness. The receptive fields of IT neurons in complex scenes, though including the fovea, are frequently asymmetric around the fovea, and it is proposed that this is the solution the IT uses to represent multiple objects and their relative spatial positions in a scene. Networks that implement probabilistic decision-making are described, and it is suggested that, when in perceptual systems they take decisions (or 'test hypotheses'), they influence lower-level networks to bias visual perception. Finally, it is shown that similar processes extend to systems involved in the processing of emotion-provoking sensory stimuli, in that word-level cognitive states provide top-down biasing that reaches as far down as the orbitofrontal cortex, where, at the first stage of affective representations, olfactory, taste, flavour, and touch processing is biased (or pre-empted) in humans.

  14. Multi-modal demands of a smartphone used to place calls and enter addresses during highway driving relative to two embedded systems

    PubMed Central

    Reimer, Bryan; Mehler, Bruce; Reagan, Ian; Kidd, David; Dobres, Jonathan

    2016-01-01

    Abstract There is limited research on trade-offs in demand between manual and voice interfaces of embedded and portable technologies. Mehler et al. identified differences in driving performance, visual engagement and workload between two contrasting embedded vehicle system designs (Chevrolet MyLink and Volvo Sensus). The current study extends this work by comparing these embedded systems with a smartphone (Samsung Galaxy S4). None of the voice interfaces eliminated visual demand. Relative to placing calls manually, both embedded voice interfaces resulted in less eyes-off-road time than the smartphone. Errors were most frequent when calling contacts using the smartphone. The smartphone and MyLink allowed addresses to be entered using compound voice commands resulting in shorter eyes-off-road time compared with the menu-based Sensus but with many more errors. Driving performance and physiological measures indicated increased demand when performing secondary tasks relative to ‘just driving’, but were not significantly different between the smartphone and embedded systems. Practitioner Summary: The findings show that embedded system and portable device voice interfaces place fewer visual demands on the driver than manual interfaces, but they also underscore how differences in system designs can significantly affect not only the demands placed on drivers, but also the successful completion of tasks. PMID:27110964

  15. Continued use of an interactive computer game-based visual perception learning system in children with developmental delay.

    PubMed

    Lin, Hsien-Cheng; Chiu, Yu-Hsien; Chen, Yenming J; Wuang, Yee-Pay; Chen, Chiu-Ping; Wang, Chih-Chung; Huang, Chien-Ling; Wu, Tang-Meng; Ho, Wen-Hsien

    2017-11-01

    This study developed an interactive computer game-based visual perception learning system for special education children with developmental delay. To investigate whether perceived interactivity affects continued use of the system, this study developed a theoretical model of the process in which learners decide whether to continue using an interactive computer game-based visual perception learning system. The technology acceptance model, which considers perceived ease of use, perceived usefulness, and perceived playfulness, was extended by integrating perceived interaction (i.e., learner-instructor interaction and learner-system interaction) and then analyzing the effects of these perceptions on satisfaction and continued use. Data were collected from 150 participants (rehabilitation therapists, medical paraprofessionals, and parents of children with developmental delay) recruited from a single medical center in Taiwan. Structural equation modeling and partial-least-squares techniques were used to evaluate relationships within the model. The modeling results indicated that both perceived ease of use and perceived usefulness were positively associated with both learner-instructor interaction and learner-system interaction. However, perceived playfulness only had a positive association with learner-system interaction and not with learner-instructor interaction. Moreover, satisfaction was positively affected by perceived ease of use, perceived usefulness, and perceived playfulness. Thus, satisfaction positively affects continued use of the system. The data obtained by this study can be applied by researchers, designers of computer game-based learning systems, special education workers, and medical professionals. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Multiscale Processes of Hurricane Sandy (2012) as Revealed by the CAMVis-MAP

    NASA Astrophysics Data System (ADS)

    Shen, B.; Li, J. F.; Cheung, S.

    2013-12-01

    In late October 2012, Hurricane Sandy made landfall near Brigantine, New Jersey, devastating surrounding areas and causing tremendous economic loss and hundreds of fatalities (Blake et al., 2013). With an estimated $50 billion in damage, Sandy became the second costliest tropical cyclone (TC) in US history, surpassed only by Hurricane Katrina (2005). Central questions to be addressed include (1) to what extent the lead time of severe storm prediction, as for Sandy, can be extended (e.g., Emanuel 2012); and (2) whether and how advanced global models, supercomputing technology, and numerical algorithms can help effectively illustrate the complicated physical processes associated with the evolution of such storms. In this study, the predictability of Sandy is addressed with a focus on short-term (or extended-range) genesis prediction as a first step toward understanding the relationship between extreme events, such as Sandy, and the current climate. The newly deployed Coupled Advanced global mesoscale Modeling (GMM) and concurrent Visualization (CAMVis) system is used for this study. We show remarkable simulations of Hurricane Sandy with the GMM, including a realistic 7-day track and intensity forecast and genesis predictions with a lead time of up to 6 days (e.g., Shen et al., 2013, GRL, submitted). We then discuss the enabling role of high-resolution 4-D (time-X-Y-Z) visualizations in illustrating the TC's transient dynamics and its interaction with tropical waves. In addition, we have completed the parallel implementation of the ensemble empirical mode decomposition (PEEMD; Cheung et al., 2013, AGU13, submitted) method, which will soon be integrated into the multiscale analysis package (MAP) for the analysis of tropical weather systems such as TCs and tropical waves. While the original EEMD has previously shown superior performance in decomposing nonlinear (local) and non-stationary data into intrinsic modes that stay within natural filter period windows, the PEEMD achieves a speedup of over 100 times compared with the original EEMD. The advanced GMM, 4-D visualizations, and PEEMD method are being used to examine the multiscale processes of Sandy and its environmental flows that may contribute to the extended lead-time predictability of Hurricane Sandy. Figure 1: Evolution of Hurricane Sandy (2012) as revealed by the advanced visualization.
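
    The ensemble loop of an EEMD is embarrassingly parallel, which is what a parallel EEMD exploits: each noise-perturbed copy of the signal is decomposed independently, and the resulting intrinsic mode functions (IMFs) are averaged. The sketch below shows only that parallel structure; the single-realization EMD routine `emd` is a caller-supplied placeholder (assumed to return an array of shape (n_imfs, len(signal))), and the noise level and ensemble size are illustrative.

```python
# Minimal sketch of the parallel structure behind an ensemble EMD.
# `emd` must be a top-level (picklable) function supplied by the caller;
# it is an assumed placeholder, not implemented here.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def _decompose_one(args):
    signal, noise_std, n_imfs, seed, emd = args
    rng = np.random.default_rng(seed)
    perturbed = signal + rng.normal(0.0, noise_std, size=signal.shape)
    return emd(perturbed, n_imfs)            # shape: (n_imfs, len(signal))

def parallel_eemd(signal, emd, ensemble_size=100, noise_std=0.2,
                  n_imfs=8, workers=4):
    """Decompose each noisy realization in its own process, then average."""
    tasks = [(signal, noise_std, n_imfs, seed, emd)
             for seed in range(ensemble_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        imf_stack = list(pool.map(_decompose_one, tasks))
    return np.mean(imf_stack, axis=0)        # ensemble-averaged IMFs
```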

  17. Using secure web services to visualize poison center data for nationwide biosurveillance: a case study.

    PubMed

    Savel, Thomas G; Bronstein, Alvin; Duck, William; Rhodes, M Barry; Lee, Brian; Stinn, John; Worthen, Katherine

    2010-01-01

    Real-time surveillance systems are valuable for timely response to public health emergencies. It has been challenging to leverage existing surveillance systems in state and local communities, and, using a centralized architecture, add new data sources and analytical capacity. Because this centralized model has proven to be difficult to maintain and enhance, the US Centers for Disease Control and Prevention (CDC) has been examining the ability to use a federated model based on secure web services architecture, with data stewardship remaining with the data provider. As a case study for this approach, the American Association of Poison Control Centers and the CDC extended an existing data warehouse via a secure web service, and shared aggregate clinical effects and case counts data by geographic region and time period. To visualize these data, CDC developed a web browser-based interface, Quicksilver, which leveraged the Google Maps API and Flot, a javascript plotting library. Two iterations of the NPDS web service were completed in 12 weeks. The visualization client, Quicksilver, was developed in four months. This implementation of web services combined with a visualization client represents incremental positive progress in transitioning national data sources like BioSense and NPDS to a federated data exchange model. Quicksilver effectively demonstrates how the use of secure web services in conjunction with a lightweight, rapidly deployed visualization client can easily integrate isolated data sources for biosurveillance.
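
    The federated pattern described above, in which the data steward exposes only aggregate counts through a secure web service that a lightweight client then visualizes, can be sketched as a minimal read-only HTTP endpoint. The route name, fields, and in-memory data below are hypothetical; they are not the actual NPDS service or its schema.

```python
# Illustrative sketch of an aggregate-counts web service (hypothetical
# endpoint and fields; not the NPDS API).  In practice this would sit
# behind TLS with client authentication, per the federated model above.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for the poison-center data warehouse: aggregates only, no case-level data.
AGGREGATES = [
    {"region": "GA", "date": "2010-01-01", "clinical_effect": "nausea", "cases": 12},
    {"region": "GA", "date": "2010-01-02", "clinical_effect": "nausea", "cases": 9},
    {"region": "CO", "date": "2010-01-01", "clinical_effect": "dizziness", "cases": 4},
]

@app.route("/casecounts")
def case_counts():
    region = request.args.get("region")                 # optional region filter
    start = request.args.get("start", "0000-00-00")     # ISO dates compare lexically
    end = request.args.get("end", "9999-12-31")
    rows = [r for r in AGGREGATES
            if (region is None or r["region"] == region)
            and start <= r["date"] <= end]
    return jsonify({"rows": rows, "total": sum(r["cases"] for r in rows)})

if __name__ == "__main__":
    app.run()
```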

  18. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    By real-time visual feedback of a 3D scatter diagram of pulsatile tissue-motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes fitted with a compact tilt-sensor to measure the probe orientation. Real-time 3D visualization was realized by developing an extended version of the PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissues is continuously generated and visualized as a projection image, together with the ultrasonic movie of the current section, at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the circle of Willis, and the cerebellar arteries, in whose blood flow pediatricians have great interest because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since the pulsatile tissue-motion is due to local blood flow, it can be concluded that the system developed in this work is very useful for assisting freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  19. Natural Inspired Intelligent Visual Computing and Its Application to Viticulture.

    PubMed

    Ang, Li Minn; Seng, Kah Phooi; Ge, Feng Lu

    2017-05-23

    This paper presents an investigation of nature-inspired intelligent computing and its application to visual information processing systems for viticulture. The paper makes three contributions: (1) a review of visual information processing applications for viticulture; (2) the development of nature-inspired computing algorithms, based on artificial immune system (AIS) techniques, for grape berry detection; and (3) the application of the developed algorithms to real-world grape berry images captured under natural conditions in vineyards in Australia. The AIS algorithms in (2) were developed from a nature-inspired clonal selection algorithm (CSA) which detects the arcs in the berry images with precision, based on a fitness model. The detected arcs are then extended through multiple-arc and ring detectors for the berry detection application. The performance of the developed algorithms was compared with traditional image processing algorithms such as the circular Hough transform (CHT) and other well-known circle detection methods. The proposed AIS approach gave an F-score of 0.71, compared with F-scores of 0.28 and 0.30 for the CHT and a parameter-free circle detection technique (RPCD), respectively.
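
    The structure of a clonal-selection detector of this kind (candidate circles as antibodies, a fit-to-edge fitness, cloning of the fittest candidates, and hypermutation inversely related to fitness) can be sketched as below. The fitness model and all parameters are illustrative assumptions rather than the published algorithm.

```python
# Minimal clonal-selection sketch for circle detection on edge points.
# Illustrative only: fitness model and parameters are assumptions.
import numpy as np

def fitness(circle, edge_pts, tol=2.0):
    """Fraction of edge points lying within `tol` pixels of the circle."""
    cx, cy, r = circle
    d = np.abs(np.hypot(edge_pts[:, 0] - cx, edge_pts[:, 1] - cy) - r)
    return np.mean(d < tol)

def clonal_circle_search(edge_pts, n_pop=50, n_gen=60, n_clones=5, rng=None):
    rng = np.random.default_rng(rng)
    lo, hi = edge_pts.min(axis=0), edge_pts.max(axis=0)
    r_max = 0.5 * np.max(hi - lo)
    # initial antibodies: random (cx, cy, r) inside the bounding box
    pop = np.column_stack([rng.uniform(lo[0], hi[0], n_pop),
                           rng.uniform(lo[1], hi[1], n_pop),
                           rng.uniform(3.0, r_max, n_pop)])
    for _ in range(n_gen):
        fit = np.array([fitness(c, edge_pts) for c in pop])
        order = np.argsort(fit)[::-1]
        pop, fit = pop[order], fit[order]
        elite = pop[: n_pop // 5]                          # best 20% are cloned
        clones = np.repeat(elite, n_clones, axis=0)
        # hypermutation: lower-fitness clones are perturbed more strongly
        scale = (1.0 - np.repeat(fit[: n_pop // 5], n_clones))[:, None]
        clones = clones + rng.normal(0.0, 1.0, clones.shape) * (1.0 + 5.0 * scale)
        clones[:, 2] = np.clip(clones[:, 2], 3.0, r_max)
        pool = np.vstack([pop, clones])                    # elitist reselection
        pool_fit = np.array([fitness(c, edge_pts) for c in pool])
        pop = pool[np.argsort(pool_fit)[::-1][:n_pop]]
    best = max(pop, key=lambda c: fitness(c, edge_pts))
    return tuple(float(v) for v in best)

# Toy example: noisy edge points on a circle of radius 20 centred at (50, 40).
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([50 + 20 * np.cos(theta), 40 + 20 * np.sin(theta)])
pts += rng.normal(0, 0.5, pts.shape)
print(clonal_circle_search(pts))   # should be close to (50, 40, 20)
```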

  20. Three-dimensional Talairach-Tournoux brain atlas

    NASA Astrophysics Data System (ADS)

    Fang, Anthony; Nowinski, Wieslaw L.; Nguyen, Bonnie T.; Bryan, R. Nick

    1995-04-01

    The Talairach-Tournoux Stereotaxic Atlas of the human brain is a frequently consulted resource in stereotaxic neurosurgery and computer-based neuroradiology. Its primary application lies in the 2-D analysis and interpretation of neurological images. However, for the analysis and visualization of shapes and forms, accurate mensuration of volumes, or 3-D model matching, a 3-D representation of the atlas is essential. This paper proposes and describes, along with its difficulties, a 3-D geometric extension of the atlas. We introduce a 'zero-potential' surface smoothing technique, along with a space-dependent convolution kernel and space-dependent normalization. The mesh-based atlas structures are hierarchically organized and anatomically conform to the original atlas. Structures and their constituents can be independently selected and manipulated in real time within an integrated system. The extended atlas may be navigated by itself, or interactively registered with patient data via the proportional grid system (piecewise linear) transformation. Visualization of the geometric atlas along with patient data gives a remarkable visual 'feel' for the biological structures, not usually perceivable to the untrained eye in conventional 2-D atlas-to-image analysis.

  1. Manipulating and Visualizing Molecular Interactions in Customized Nanoscale Spaces

    NASA Astrophysics Data System (ADS)

    Stabile, Francis; Henkin, Gil; Berard, Daniel; Shayegan, Marjan; Leith, Jason; Leslie, Sabrina

    We present a dynamically adjustable nanofluidic platform for formatting the conformations of and visualizing the interaction kinetics between biomolecules in solution, offering new time resolution and control of the reaction processes. This platform extends convex lens-induced confinement (CLiC), a technique for imaging molecules under confinement, by introducing a system for in situ modification of the chemical environment; this system uses a deep microchannel to diffusively exchange reagents within the nanoscale imaging region, whose height is fixed by a nanopost array. To illustrate, we visualize and manipulate salt-induced, surfactant-induced, and enzyme-induced reactions between small-molecule reagents and DNA molecules, where the conformations of the DNA molecules are formatted by the imposed nanoscale confinement. By using nanofabricated, nonabsorbing, low-background glass walls to confine biomolecules, our nanofluidic platform facilitates quantitative exploration of physiologically and biotechnologically relevant processes at the nanoscale. This device provides new kinetic information about dynamic chemical processes at the single-molecule level, using advancements in the CLiC design including a microchannel-based diffuser and postarray-based dialysis slit.

  2. Visual system evolution and the nature of the ancestral snake.

    PubMed

    Simões, B F; Sampaio, F L; Jared, C; Antoniazzi, M M; Loew, E R; Bowmaker, J K; Rodriguez, A; Hart, N S; Hunt, D M; Partridge, J C; Gower, D J

    2015-07-01

    The dominant hypothesis for the evolutionary origin of snakes from 'lizards' (non-snake squamates) is that stem snakes acquired many snake features while passing through a profound burrowing (fossorial) phase. To investigate this, we examined the visual pigments and their encoding opsin genes in a range of squamate reptiles, focusing on fossorial lizards and snakes. We sequenced opsin transcripts isolated from retinal cDNA and used microspectrophotometry to measure directly the spectral absorbance of the photoreceptor visual pigments in a subset of samples. In snakes, but not lizards, dedicated fossoriality (as in Scolecophidia and the alethinophidian Anilius scytale) corresponds with loss of all visual opsins other than RH1 (λmax 490-497 nm); all other snakes (including less dedicated burrowers) also have functional sws1 and lws opsin genes. In contrast, the retinas of all lizards sampled, even highly fossorial amphisbaenians with reduced eyes, express functional lws, sws1, sws2 and rh1 genes, and most also express rh2 (i.e. they express all five of the visual opsin genes present in the ancestral vertebrate). Our evidence of visual pigment complements suggests that the visual system of stem snakes was partly reduced, with two (RH2 and SWS2) of the ancestral vertebrate visual pigments being eliminated, but that this did not extend to the extreme additional loss of SWS1 and LWS that subsequently occurred (probably independently) in highly fossorial extant scolecophidians and A. scytale. We therefore consider it unlikely that the ancestral snake was as fossorial as extant scolecophidians, whether or not the latter are para- or monophyletic. © 2015 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2015 European Society For Evolutionary Biology.

  3. Location-Driven Image Retrieval for Images Collected by a Mobile Robot

    NASA Astrophysics Data System (ADS)

    Tanaka, Kanji; Hirayama, Mitsuru; Okada, Nobuhiro; Kondo, Eiji

    Mobile robot teleoperation is a method for a human user to interact with a mobile robot over time and distance. Successful teleoperation depends on how well images taken by the mobile robot are visualized for the user. To enhance the efficiency and flexibility of this visualization, an image retrieval system over the robot's image database would be very useful. The main difference between the robot's image database and standard image databases is that many relevant images exist, owing to the variety of viewing conditions. The main contribution of this paper is an efficient retrieval approach, named the location-driven approach, which exploits the correlation between visual features and the real-world locations of images. Combining the location-driven approach with the conventional feature-driven approach, our goal can be viewed as finding an optimal classifier between relevant and irrelevant feature-location pairs. An active learning technique based on support vector machines is extended for this aim.
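
    One way to realize the feature-location idea is to append the real-world location to each visual descriptor and run uncertainty-sampling active learning with an SVM over the joint vectors. The sketch below assumes a labelling `oracle` callback and an initial batch that contains both relevant and irrelevant examples; the descriptor weighting and kernel choice are illustrative, not the paper's exact formulation.

```python
# Minimal sketch of SVM-based active learning over feature-location pairs.
import numpy as np
from sklearn.svm import SVC

def active_retrieval(features, locations, oracle, n_rounds=5, n_query=10, seed=0):
    """features: (N, d) visual descriptors; locations: (N, 2) world coordinates.
    `oracle(indices)` returns 0/1 relevance labels for the queried images."""
    rng = np.random.default_rng(seed)
    X = np.hstack([features, locations])           # joint feature-location vector
    labelled = list(rng.choice(len(X), size=n_query, replace=False))
    y = list(oracle(labelled))                     # must contain both classes
    clf = SVC(kernel="rbf", gamma="scale")
    for _ in range(n_rounds):
        clf.fit(X[labelled], y)
        margin = np.abs(clf.decision_function(X))  # distance to the boundary
        margin[labelled] = np.inf                  # do not re-query labelled items
        query = list(np.argsort(margin)[:n_query]) # most uncertain samples
        labelled += query
        y += list(oracle(query))
    clf.fit(X[labelled], y)
    scores = clf.decision_function(X)              # final relevance ranking
    return np.argsort(scores)[::-1]
```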

  4. Usability and Visual Communication for Southern California Tsunami Evacuation Information: The importance of information design in disaster risk management

    NASA Astrophysics Data System (ADS)

    Jaenichen, C.; Schandler, S.; Wells, M.; Danielsen, T.

    2015-12-01

    Evacuation behavior, including participation and response, is rarely an individual and isolated process, and the outcomes are usually systemic. Ineffective evacuation information can easily contribute to delayed evacuation response. Delays increase demands on already extended emergency personnel, increase the likelihood of traffic congestion, and can cause harm to self and property. From an information design perspective, addressing issues in cognitive recall and emergency psychology, this case study examines evacuation messaging, including written, audio, and visual presentation of information, and describes the application of design principles and the role of visual communication for Southern California tsunami evacuation outreach. The niche of this project is the inclusion of cognitive processing as the driving influence when making formal design decisions, together with measurable data from a 4-year cognitive recall study to support the solution. The image included shows a tsunami evacuation map before and after the redesign.

  5. The characteristics of low-speed streaks in the near-wall region of a turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Smith, C. R.; Metzler, S. P.

    1983-04-01

    The discovery of an instantaneous spanwise velocity distribution consisting of alternating zones of high- and low-speed fluid, which develop in the viscous sublayer and extend into the logarithmic region, was one of the first clues to the existence of an ordered structure within a turbulent boundary layer. The present investigation is concerned with quantitative flow-visualization results obtained with the aid of a high-speed video flow visualization system, which permits detailed visual examination of both the statistics and the characteristics of low-speed streaks over a much wider range of Reynolds numbers than has been possible before. Attention is given to streak appearance, mean streak spacing, the spanwise distribution of streaks, streak persistence, and aspects of streak merging and intermittency. The results indicate that the statistical characteristics of the spanwise spacing of low-speed streaks are essentially invariant with Reynolds number.

  6. Visualization of Motor Axon Navigation and Quantification of Axon Arborization In Mouse Embryos Using Light Sheet Fluorescence Microscopy.

    PubMed

    Liau, Ee Shan; Yen, Ya-Ping; Chen, Jun-An

    2018-05-11

    Spinal motor neurons (MNs) extend their axons to communicate with their innervating targets, thereby controlling movement and complex tasks in vertebrates. Thus, it is critical to uncover the molecular mechanisms of how motor axons navigate to, arborize, and innervate their peripheral muscle targets during development and degeneration. Although transgenic Hb9::GFP mouse lines have long served to visualize motor axon trajectories during embryonic development, detailed descriptions of the full spectrum of axon terminal arborization remain incomplete due to the pattern complexity and limitations of current optical microscopy. Here, we describe an improved protocol that combines light sheet fluorescence microscopy (LSFM) and robust image analysis to qualitatively and quantitatively visualize developing motor axons. This system can be easily adopted to cross genetic mutants or MN disease models with Hb9::GFP lines, revealing novel molecular mechanisms that lead to defects in motor axon navigation and arborization.

  7. DEVA: An extensible ontology-based annotation model for visual document collections

    NASA Astrophysics Data System (ADS)

    Jelmini, Carlo; Marchand-Maillet, Stephane

    2003-01-01

    The description of visual documents is a fundamental aspect of any efficient information management system, but the process of manually annotating large collections of documents is tedious and far from perfect. The need for a generic and extensible annotation model therefore arises. In this paper, we present DEVA, an open, generic and expressive multimedia annotation framework. DEVA is an extension of the Dublin Core specification. The model can represent the semantic content of any visual document. It is described in the ontology language DAML+OIL and can easily be extended with external specialized ontologies, adapting the vocabulary to the given application domain. In parallel, we present the Magritte annotation tool, an early prototype that validates the DEVA features. Magritte allows manual annotation of image collections. It is designed with a modular and extensible architecture, which enables the user to dynamically adapt the user interface to specialized ontologies merged into DEVA.

  8. Two wrongs make a right: linear increase of accuracy of visually-guided manual pointing, reaching, and height-matching with increase in hand-to-body distance.

    PubMed

    Li, Wenxun; Matin, Leonard

    2005-03-01

    Measurements were made of the accuracy of open-loop manual pointing and height-matching to a visual target whose elevation was perceptually mislocalized. Accuracy increased linearly with distance of the hand from the body, approaching complete accuracy at full extension; with the hand close to the body (within the midfrontal plane), the manual errors equaled the magnitude of the perceptual mislocalization. The visual inducing stimulus responsible for the perceptual errors was a single pitched-from-vertical line that was long (50 degrees), eccentrically located (25 degrees horizontal), and viewed in otherwise total darkness. The line induced perceptual errors in the elevation of a small, circular visual target set to appear at eye level (VPEL), a setting that changed linearly with the change in the line's visual pitch, as has been previously reported (pitch: -30 degrees top-backward to 30 degrees top-forward); the elevation errors measured by VPEL settings varied systematically with pitch through an 18-degree range. In a fourth experiment, the visual inducing stimulus responsible for the perceptual errors was shown to induce separately measured, distance-dependent errors in the manual setting of the arm to feel horizontal. The distance-dependence of the visually induced changes in felt arm position accounts quantitatively for the distance-dependence of the manual errors in pointing/reaching and height-matching to the visual target: the near equality of the changes in felt horizontal and the changes in pointing/reaching with the finger at the end of the fully extended arm is responsible for the manual accuracy of the fully extended point; with the finger in the midfrontal plane, their large difference is responsible for the inaccuracies of the midfrontal-plane point. The results are inconsistent with the widely held but controversial theory that visual spatial information employed for perception and action is dissociated and different, with no illusory visual influence on action. A different two-system theory, the Proximal/Distal model, employing the same signals from vision and from the body-referenced mechanism with different weights for different hand-to-body distances, accounts for both the perceptual and the manual results in the present experiments.
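
    A toy rendering of the Proximal/Distal weighting described above is given below: the visually induced error is combined with a body-referenced signal whose weight grows with hand-to-body distance, so the predicted manual error falls linearly from the full perceptual mislocalization near the body to roughly zero at full extension. The linear weighting is assumed purely for illustration and is not the paper's fitted model.

```python
# Illustrative toy model of distance-dependent weighting of visual and
# body-referenced signals (linear weighting assumed, not the paper's fit).
def predicted_manual_error(visual_error_deg, hand_distance, arm_length):
    # weight of the body-referenced signal: 0 at the body, 1 at full extension
    w_body = min(max(hand_distance / arm_length, 0.0), 1.0)
    # the body-referenced signal is assumed to cancel the induced error
    # in felt arm position by the time the arm is fully extended
    return visual_error_deg * (1.0 - w_body)

for d in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(d, predicted_manual_error(visual_error_deg=9.0,
                                    hand_distance=d, arm_length=1.0))
```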

  9. Perceptual adaptation in the use of night vision goggles

    NASA Technical Reports Server (NTRS)

    Durgin, Frank H.; Proffitt, Dennis R.

    1992-01-01

    The image intensification (I²) systems studied for this report were the biocular AN/PVS-7 (NVG) and the binocular AN/AVS-6 (ANVIS). Both are quite impressive for revealing the structure of the environment in a fairly straightforward way in extremely low-light conditions. But these systems represent an unusual viewing medium. The perceptual information available through I² systems differs in a variety of ways from the typical input of everyday vision, and extensive training and practice are required for optimal use. Using this sort of system involves a kind of perceptual skill learning, but it may also involve visual adaptations that are not simply an extension of normal vision. For example, the visual noise evident in the goggles in very low-light conditions results in unusual statistical properties of the visual input. Because we had recently discovered a strong and enduring aftereffect of perceived texture density which seemed to be sensitive to precisely the sorts of statistical distortions introduced by I² systems, it occurred to us that visual noise of this sort might be a very adapting stimulus for texture density and produce an aftereffect that extended into normal vision once the goggles were removed. We have not found any experimental evidence that I² systems produce texture density aftereffects. The nature of the texture density aftereffect is briefly explained, followed by an account of our studies of I² systems and our most recent work on the texture density aftereffect. A test for spatial frequency adaptation after exposure to NVGs is also reported, as is a study of perceived depth from motion (motion parallax) while wearing the biocular goggles. We conclude with a summary of our findings.

  10. Computer interfaces for the visually impaired

    NASA Technical Reports Server (NTRS)

    Higgins, Gerry

    1991-01-01

    Information access via computer terminals extends to blind and low-vision persons employed in many technical and nontechnical disciplines. Two aspects of providing computer technology for persons with a vision-related handicap are detailed. The first is research into the most effective means of integrating existing adaptive technologies into information systems, conducted so that off-the-shelf products could be combined with adaptive equipment into cohesive, integrated information processing systems. Details are included that describe the type of functionality required in software to facilitate its incorporation into a speech and/or braille system. The second aspect is research into providing audible and tactile interfaces to graphics-based interfaces. Parameters are included for the design and development of the Mercator Project. The project will develop a prototype system for audible access to graphics-based interfaces. The system is being built within the public-domain architecture of X windows to show that it is possible to provide access to text-based applications within a graphical environment. This information will be valuable to suppliers of ADP equipment, since new legislation requires manufacturers to provide electronic access for the visually impaired.

  11. Be the Volume: A Classroom Activity to Visualize Volume Estimation

    ERIC Educational Resources Information Center

    Mikhaylov, Jessica

    2011-01-01

    A hands-on activity can help multivariable calculus students visualize surfaces and understand volume estimation. This activity can be extended to include the concepts of Fubini's Theorem and the visualization of the curves resulting from cross-sections of the surface. This activity uses students as pillars and a sheet or tablecloth for the…

  12. Reading Acquisition Enhances an Early Visual Process of Contour Integration

    ERIC Educational Resources Information Center

    Szwed, Marcin; Ventura, Paulo; Querido, Luis; Cohen, Laurent; Dehaene, Stanislas

    2012-01-01

    The acquisition of reading has an extensive impact on the developing brain and leads to enhanced abilities in phonological processing and visual letter perception. Could this expertise also extend to early visual abilities outside the reading domain? Here we studied the performance of illiterate, ex-illiterate and literate adults closely matched…

  13. Consistent Visual Analyses of Intrasubject Data

    ERIC Educational Resources Information Center

    Kahng, SungWoo; Chung, Kyong-Mee; Gutshall, Katharine; Pitts, Steven C.; Kao, Joyce; Girolami, Kelli

    2010-01-01

    Visual inspection of single-case data is the primary method of interpretation of the effects of an independent variable on a dependent variable in applied behavior analysis. The purpose of the current study was to replicate and extend the results of DeProspero and Cohen (1979) by reexamining the consistency of visual analysis across raters. We…

  14. IP-Based Video Modem Extender Requirements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierson, L G; Boorman, T M; Howe, R E

    2003-12-16

    Visualization is one of the keys to understanding large complex data sets such as those generated by the large computing resources purchased and developed by the Advanced Simulation and Computing program (aka ASCI). In order to be convenient to researchers, visualization data must be distributed to offices and large complex visualization theaters. Currently, local distribution of the visual data is accomplished by distance-limited modems and RGB switches that simply do not scale to hundreds of users across local, metropolitan, and WAN distances without incurring large costs in fiber plant installation and maintenance. Wide-area application over the DOE Complex is infeasible using these limited-distance RGB extenders. On the other hand, Internet Protocol (IP) over Ethernet is a scalable, well-proven technology that can distribute large volumes of data over these distances. Visual data has been distributed at lower resolutions over IP in industrial applications. This document describes the requirements of the ASCI program for visual signal distribution, for the purpose of identifying industrial partners willing to develop products to meet ASCI's needs.

  15. Alterations to global but not local motion processing in long-term ecstasy (MDMA) users.

    PubMed

    White, Claire; Brown, John; Edwards, Mark

    2014-07-01

    Growing evidence indicates that the main psychoactive ingredient in the illegal drug "ecstasy" (methylenedioxymethamphetamine) causes reduced activity in the serotonin and gamma-aminobutyric acid (GABA) systems in humans. On the basis of substantial serotonin input to the occipital lobe, recent research investigated visual processing in long-term users and found a larger magnitude of the tilt aftereffect, interpreted to reflect broadened orientation tuning bandwidths. Further research found higher orientation discrimination thresholds and reduced long-range interactions in the primary visual area of ecstasy users. The aim of the present research was to investigate whether serotonin-mediated V1 visual processing deficits in ecstasy users extend to motion processing mechanisms. Forty-five participants (21 controls, 24 drug users) completed two psychophysical studies: a direction discrimination study directly measured local motion processing in V1, while a motion coherence task tested global motion processing in area V5/MT. "Primary" ecstasy users (n = 18), those without substantial polydrug use, had significantly lower global motion thresholds than controls [p = 0.027, Cohen's d = 0.78 (large)], indicating increased sensitivity to global motion stimuli, but showed no difference in local motion processing (p = 0.365). These results extend previous research investigating the long-term effects of illicit drugs on visual processing. Two possible explanations are explored: diffuse attentional processes may facilitate spatial pooling of motion signals in users; alternatively, a GABA-mediated disruption to V5/MT processing may reduce spatial suppression and therefore improve global motion perception in ecstasy users.

  16. Attention modulates perception of visual space

    PubMed Central

    Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng; He, Zijiang J.

    2017-01-01

    Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one’s ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche. PMID:29177198

  17. Programmable diffractive optical elements for extending the depth of focus in ophthalmic optics

    NASA Astrophysics Data System (ADS)

    Romero, Lenny A.; Millán, María. S.; Jaroszewicz, Zbigniew; Kołodziejczyk, Andrzej

    2015-01-01

    The depth of focus (DOF) defines the axial range of high lateral resolution in the image space for a given object position. Optical devices with a traditional lens system typically have a limited DOF. However, there are applications, such as in ophthalmology, which require a large DOF in comparison with a traditional optical system; this is commonly known as extended DOF (EDOF). In this paper we explore programmable diffractive optical elements (PDOEs) with EDOF as an alternative solution to visual impairments, especially presbyopia. These DOEs were written onto a reflective liquid crystal on silicon (LCoS) spatial light modulator (SLM). Several designs of the elements are analyzed: the Forward Logarithmic Axicon (FLAX), the Axilens (AXL), the Light Sword Optical Element (LSOE), the Peacock Eye Optical Element (PE) and the Double Peacock Eye Optical Element (DPE). These elements focus an incident plane wave into a segment of the optical axis. The performance of the PDOEs is compared with that of multifocal lenses. In all cases, we obtained the point spread function and the image of an extended object. The results are presented and discussed.
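
    As an example of how such an element is defined, the sketch below builds a wrapped phase map for an axilens, a lens whose focal length grows quadratically with radius, f(r) = f0 + (Δf/R²)r², under the usual quadratic-phase approximation. The SLM pitch, wavelength, and focal range are illustrative assumptions, not the parameters used in the paper.

```python
# Illustrative axilens phase map for a phase-only SLM (assumed parameters).
import numpy as np

def axilens_phase(n=512, pitch=8e-6, wavelength=550e-9, f0=0.5, delta_f=0.1):
    """Return an n x n wrapped phase map in radians, values in [0, 2*pi)."""
    k = 2 * np.pi / wavelength
    ax = (np.arange(n) - n / 2) * pitch
    x, y = np.meshgrid(ax, ax)
    r2 = x ** 2 + y ** 2
    R2 = (n / 2 * pitch) ** 2
    f_of_r = f0 + delta_f * r2 / R2              # focal length grows with radius
    phase = -k * r2 / (2.0 * f_of_r)             # radially varying lens phase
    return np.mod(phase, 2 * np.pi)
```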

  18. Dyscalculia and the Calculating Brain.

    PubMed

    Rapin, Isabelle

    2016-08-01

    Dyscalculia, like dyslexia, affects some 5% of school-age children but has received much less investigative attention. In two thirds of affected children, dyscalculia is associated with another developmental disorder like dyslexia, attention-deficit disorder, anxiety disorder, visual and spatial disorder, or cultural deprivation. Infants, primates, some birds, and other animals are born with the innate ability, called subitizing, to tell at a glance whether small sets of scattered dots or other items differ by one or more item. This nonverbal approximate number system extends mostly to single digit sets as visual discrimination drops logarithmically to "many" with increasing numerosity (size effect) and crowding (distance effect). Preschoolers need several years and specific teaching to learn verbal names and visual symbols for numbers and school agers to understand their cardinality and ordinality and the invariance of their sequence (arithmetic number line) that enables calculation. This arithmetic linear line differs drastically from the nonlinear approximate number system mental number line that parallels the individual number-tuned neurons in the intraparietal sulcus in monkeys and overlying scalp distribution of discrete functional magnetic resonance imaging activations by number tasks in man. Calculation is a complex skill that activates both visual and spatial and visual and verbal networks. It is less strongly left lateralized than language, with approximate number system activation somewhat more right sided and exact number and arithmetic activation more left sided. Maturation and increasing number skill decrease associated widespread non-numerical brain activations that persist in some individuals with dyscalculia, which has no single, universal neurological cause or underlying mechanism in all affected individuals. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Game engines and immersive displays

    NASA Astrophysics Data System (ADS)

    Chang, Benjamin; Destefano, Marc

    2014-02-01

    While virtual reality and digital games share many core technologies, the programming environments, toolkits, and workflows for developing games and VR environments are often distinct. VR toolkits designed for applications in visualization and simulation often have a different feature set or design philosophy than game engines, while popular game engines often lack support for VR hardware. Extending a game engine to support systems such as the CAVE gives developers a unified development environment and the ability to easily port projects, but involves challenges beyond just adding stereo 3D visuals. In this paper we outline the issues involved in adapting a game engine for use with an immersive display system including stereoscopy, tracking, and clustering, and present example implementation details using Unity3D. We discuss application development and workflow approaches including camera management, rendering synchronization, GUI design, and issues specific to Unity3D, and present examples of projects created for a multi-wall, clustered, stereoscopic display.

  20. A human visual model-based approach of the visual attention and performance evaluation

    NASA Astrophysics Data System (ADS)

    Le Meur, Olivier; Barba, Dominique; Le Callet, Patrick; Thoreau, Dominique

    2005-03-01

    In this paper, a coherent computational model of visual selective attention for color pictures is described and its performance is precisely evaluated. The model, based on several important behaviours of the human visual system, is composed of four parts: visibility, perception, perceptual grouping and saliency map construction. This paper focuses mainly on performance assessment, carried out through extended subjective and objective comparisons with real fixation points captured by an eye-tracking system while observers viewed the images in a task-free mode. From this ground truth, qualitative and quantitative comparisons have been made in terms of the linear correlation coefficient (CC) and the Kullback-Leibler divergence (KL). On a set of 10 natural color images, the linear correlation coefficient and the Kullback-Leibler divergence are about 0.71 and 0.46, respectively. The CC and KL measures for this model are improved by about 4% and 7%, respectively, compared with the best model proposed by L. Itti. Moreover, by comparing the ability of our model to predict eye movements produced by an average observer, we can conclude that our model succeeds quite well in predicting the spatial locations of the most important areas of the image content.
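
    The two agreement measures used here, the linear correlation coefficient (CC) and the Kullback-Leibler divergence (KL) between a model saliency map and a fixation-density map, can be computed as in the short sketch below, with both maps assumed to be non-negative arrays on the same grid.

```python
# CC and KL between a saliency map and a fixation-density map.
import numpy as np

def cc(saliency, fixation_density):
    """Linear correlation coefficient between the two (z-scored) maps."""
    s = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    f = (fixation_density - fixation_density.mean()) / (fixation_density.std() + 1e-12)
    return float(np.mean(s * f))

def kl_divergence(saliency, fixation_density, eps=1e-12):
    """KL(P || Q) with P the fixation density and Q the model prediction."""
    p = fixation_density / (fixation_density.sum() + eps)
    q = saliency / (saliency.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```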

  1. A transparently scalable visualization architecture for exploring the universe.

    PubMed

    Fu, Chi-Wing; Hanson, Andrew J

    2007-01-01

    Modern astronomical instruments produce enormous amounts of three-dimensional data describing the physical Universe. The currently available data sets range from the solar system to nearby stars and portions of the Milky Way Galaxy, including the interstellar medium and some extrasolar planets, and extend out to include galaxies billions of light years away. Because of its gigantic scale and the fact that it is dominated by empty space, modeling and rendering the Universe is very different from modeling and rendering ordinary three-dimensional virtual worlds at human scales. Our purpose is to introduce a comprehensive approach to an architecture solving this visualization problem that encompasses the entire Universe while seeking to be as scale-neutral as possible. One key element is the representation of model-rendering procedures using power scaled coordinates (PSC), along with various PSC-based techniques that we have devised to generalize and optimize the conventional graphics framework to the scale domains of astronomical visualization. Employing this architecture, we have developed an assortment of scale-independent modeling and rendering methods for a large variety of astronomical models, and have demonstrated scale-insensitive interactive visualizations of the physical Universe covering scales ranging from human scale to the Earth, to the solar system, to the Milky Way Galaxy, and to the entire observable Universe.
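
    One common reading of power scaled coordinates is a direction triple plus a scale exponent, (x, y, z, s) corresponding to (x, y, z)·k^s, so that positions spanning many orders of magnitude keep well-conditioned mantissas. The sketch below uses base k = 10, and its helper names and operations are illustrative assumptions rather than the paper's API.

```python
# Illustrative power-scaled-coordinate helpers (base and names assumed).
import math

K = 10.0  # scaling base (assumption)

def to_psc(x, y, z):
    """Store a point as (mantissa_x, mantissa_y, mantissa_z, exponent)."""
    m = max(abs(x), abs(y), abs(z), 1e-300)
    s = math.floor(math.log(m, K))            # choose exponent so the mantissa is O(1)
    f = K ** (-s)
    return (x * f, y * f, z * f, s)

def from_psc(p):
    x, y, z, s = p
    f = K ** s
    return (x * f, y * f, z * f)

def psc_add(a, b):
    # re-express both points at the larger exponent before adding, so the
    # smaller term degrades gracefully instead of under/overflowing
    s = max(a[3], b[3])
    fa, fb = K ** (a[3] - s), K ** (b[3] - s)
    x, y, z, ds = to_psc(a[0] * fa + b[0] * fb,
                         a[1] * fa + b[1] * fb,
                         a[2] * fa + b[2] * fb)
    return (x, y, z, ds + s)

# Example: one astronomical unit in metres, offset by 1 km.
au = to_psc(1.496e11, 0.0, 0.0)
print(psc_add(au, to_psc(1.0e3, 0.0, 0.0)))
```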

  2. Development of a Web-Based Visualization Platform for Climate Research Using Google Earth

    NASA Technical Reports Server (NTRS)

    Sun, Xiaojuan; Shen, Suhung; Leptoukh, Gregory G.; Wang, Panxing; Di, Liping; Lu, Mingyue

    2011-01-01

    Recently, it has become easier to access climate data from satellites, ground measurements, and models held at various data centers. However, searching, accessing, and processing heterogeneous data from different sources are very time-consuming tasks. There is a lack of a comprehensive visual platform to acquire distributed and heterogeneous scientific data and to render processed images from a single access point for climate studies. This paper documents the design and implementation of a Web-based visual, interoperable, and scalable platform that is able to access climatological fields from models, satellites, and ground stations from a number of data sources, using Google Earth (GE) as a common graphical interface. The development is based on the TCP/IP protocol and various open data-sharing technologies, such as OPeNDAP, GDS, Web Processing Service (WPS), and Web Mapping Service (WMS). The capability of integrating and visualizing various measurements in GE dramatically extends the awareness and visibility of scientific results. Using the geographic information embedded in GE, the designed system improves our understanding of the relationships among different elements in a four-dimensional domain. The system enables easy and convenient synergistic research on a virtual platform for professionals and the general public, greatly advancing global data sharing and scientific research collaboration.
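
    A minimal way to hand a rendered climate field to Google Earth in such a platform is to wrap a WMS GetMap request in a KML GroundOverlay, so that GE fetches and drapes the image over the stated bounding box. The endpoint URL, layer name, and extent in the sketch below are placeholders, not the services used in the paper.

```python
# Wrap a WMS GetMap request in a KML GroundOverlay (placeholder endpoint/layer).
from urllib.parse import urlencode
from xml.sax.saxutils import escape

def wms_groundoverlay_kml(wms_base, layer, bbox, time, width=1024, height=512):
    west, south, east, north = bbox
    params = urlencode({
        "service": "WMS", "version": "1.1.1", "request": "GetMap",
        "layers": layer, "styles": "", "format": "image/png",
        "transparent": "true", "srs": "EPSG:4326", "time": time,
        "bbox": f"{west},{south},{east},{north}",
        "width": width, "height": height,
    })
    href = escape(f"{wms_base}?{params}")     # '&' must be escaped inside XML
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>{layer} {time}</name>
    <Icon><href>{href}</href></Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>"""

# Example (placeholder endpoint): open the resulting file in Google Earth.
kml = wms_groundoverlay_kml("https://example.org/wms", "surface_air_temperature",
                            bbox=(-180, -90, 180, 90), time="2011-07-01")
with open("overlay.kml", "w") as f:
    f.write(kml)
```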

  3. Automatic optimization high-speed high-resolution OCT retinal imaging at 1μm

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Liu, Xiyun; Miao, Dongkai; Lee, Sujin; Lee, Sieun; Bonora, Stefano; Zawadzki, Robert J.; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.

    2015-03-01

    High-resolution OCT retinal imaging is important in providing visualization of various retinal structures to aid researchers in better understanding the pathogenesis of vision-robbing diseases. However, conventional optical coherence tomography (OCT) systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking optical coherence tomography (OCT) system with automatic optimization for high-resolution, extended-focal-range clinical retinal imaging. A variable-focus liquid lens was added to correct for de-focus in real-time. A GPU-accelerated segmentation and optimization was used to provide real-time layer-specific enface visualization as well as depth-specific focus adjustment. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the ONH, from which we extracted clinically-relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.
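
    The focus-stacking step itself can be sketched compactly: given co-registered frames (e.g., en face projections) focused at different depths, keep at each pixel the frame with the highest local sharpness, here estimated as the locally averaged squared Laplacian. This is an illustrative reconstruction, not the authors' GPU pipeline, and the sharpness window size is an assumption.

```python
# Minimal per-pixel focus-stacking sketch (illustrative, not the paper's pipeline).
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(frames, window=9):
    """frames: (n, H, W) array of registered images focused at different depths."""
    frames = np.asarray(frames, dtype=float)
    # local sharpness: smoothed squared Laplacian of each frame
    sharpness = np.stack([uniform_filter(laplace(f) ** 2, size=window)
                          for f in frames])
    best = np.argmax(sharpness, axis=0)            # sharpest frame index per pixel
    return np.take_along_axis(frames, best[None], axis=0)[0]
```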

  4. Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking

    NASA Astrophysics Data System (ADS)

    Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J.; Jian, Yifan; Sarunic, Marinko V.

    2016-02-01

    High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.

  5. Retinal optical coherence tomography at 1 μm with dynamic focus control and axial motion tracking.

    PubMed

    Cua, Michelle; Lee, Sujin; Miao, Dongkai; Ju, Myeong Jin; Mackenzie, Paul J; Jian, Yifan; Sarunic, Marinko V

    2016-02-01

    High-resolution optical coherence tomography (OCT) retinal imaging is important to noninvasively visualize the various retinal structures to aid in better understanding of the pathogenesis of vision-robbing diseases. However, conventional OCT systems have a trade-off between lateral resolution and depth-of-focus. In this report, we present the development of a focus-stacking OCT system with automatic focus optimization for high-resolution, extended-focal-range clinical retinal imaging by incorporating a variable-focus liquid lens into the sample arm optics. Retinal layer tracking and selection was performed using a graphics processing unit accelerated processing platform for focus optimization, providing real-time layer-specific en face visualization. After optimization, multiple volumes focused at different depths were acquired, registered, and stitched together to yield a single, high-resolution focus-stacked dataset. Using this system, we show high-resolution images of the retina and optic nerve head, from which we extracted clinically relevant parameters such as the nerve fiber layer thickness and lamina cribrosa microarchitecture.

  6. Visual performance after bilateral implantation of 2 new presbyopia-correcting intraocular lenses: Trifocal versus extended range of vision.

    PubMed

    Monaco, Gaspare; Gari, Mariangela; Di Censo, Fabio; Poscia, Andrea; Ruggi, Giada; Scialdone, Antonio

    2017-06-01

    To compare the visual outcomes and quality of vision of 2 new diffractive multifocal intraocular lenses (IOLs) with those of a monofocal IOL. Fatebenefratelli e Oftalmico Hospital, Milan, Italy. Prospective case series. Patients had bilateral cataract surgery with implantation of a trifocal IOL (Panoptix), an extended-range-of-vision IOL (Symfony), or a monofocal IOL (SN60WF). Postoperative examinations included assessing distance, intermediate, and near visual acuity; binocular defocus; intraocular and total aberrations; point-spread function (PSF); modulation transfer function (MTF); retinal straylight; and quality-of-vision (QoV) and spectacle-dependence questionnaires. Seventy-six patients (152 eyes) were assessed for study eligibility. Twenty patients (40 eyes) in each arm of the study (60 patients, 120 eyes) completed the outcome assessment. At the 4-month follow-up, the trifocal group had significantly better near visual acuity than the extended-range-of-vision group (P = .005). The defocus curve showed the trifocal IOL had better intermediate/near performance than the extended-range-of-vision IOL and both multifocal IOLs performed better than the monofocal IOL. Intragroup comparison of the total higher-order aberrations, PSF, MTF, and retinal straylight were not statistically different. The QoV questionnaire results showed no differences in dysphotopsia between the multifocal IOL groups; however, the results were significantly higher than in the monofocal IOL group. Both multifocal IOLs seemed to be good options for patients with intermediate-vision requirements, whereas the trifocal IOL might be better for patients with near-vision requirements. The significant perception of visual side effects indicates that patients still must be counseled about these effects before a multifocal IOL is implanted. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  7. Reward associations impact both iconic and visual working memory.

    PubMed

    Infanti, Elisa; Hickey, Clayton; Turatto, Massimo

    2015-02-01

    Reward plays a fundamental role in human behavior. A growing number of studies have shown that stimuli associated with reward become salient and attract attention. The aim of the present study was to extend these results into the investigation of iconic memory and visual working memory. In two experiments, we asked participants to perform a visual-search task where different colors of the target stimuli were paired with high or low reward. We then tested whether the pre-established feature-reward association affected performance on a subsequent visual memory task, in which no reward was provided. In this test phase, participants viewed arrays of 8 objects, one of which had a unique color that could match the color associated with reward during the previous visual-search task. A probe appeared at varying intervals after stimulus offset to identify the to-be-reported item. Our results suggest that reward biases the encoding of visual information such that items characterized by a reward-associated feature interfere with mnemonic representations of other items in the test display. These results extend current knowledge regarding the influence of reward on early cognitive processes, suggesting that feature-reward associations automatically interact with the encoding and storage of visual information, both in iconic memory and in visual working memory. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. A methodology for coupling a visual enhancement device to human visual attention

    NASA Astrophysics Data System (ADS)

    Todorovic, Aleksandar; Black, John A., Jr.; Panchanathan, Sethuraman

    2009-02-01

    The Human Variation Model views disability as simply "an extension of the natural physical, social, and cultural variability of mankind." Given this human variation, it can be difficult to distinguish between a prosthetic device such as a pair of glasses (which extends limited visual abilities into the "normal" range) and a visual enhancement device such as a pair of binoculars (which extends visual abilities beyond the "normal" range). Indeed, there is no inherent reason why the design of visual prosthetic devices should be limited to just providing "normal" vision. One obvious enhancement to human vision would be the ability to visually "zoom" in on objects that are of particular interest to the viewer. Indeed, it could be argued that humans already have a limited zoom capability, which is provided by their high-resolution foveal vision. However, humans still find additional zooming useful, as evidenced by their purchases of binoculars equipped with mechanized zoom features. The fact that these zoom features are manually controlled raises two questions: (1) Could a visual enhancement device be developed to monitor attention and control visual zoom automatically? (2) If such a device were developed, would its use be experienced by users as a simple extension of their natural vision? This paper details the results of work with two research platforms called the Remote Visual Explorer (ReVEx) and the Interactive Visual Explorer (InVEx) that were developed specifically to answer these two questions.

  9. Ensemble: a web-based system for psychology survey and experiment management.

    PubMed

    Tomic, Stefan T; Janata, Petr

    2007-08-01

    We provide a description of Ensemble, a suite of Web-integrated modules for managing and analyzing data associated with psychology experiments in a small research lab. The system delivers interfaces via a Web browser for creating and presenting simple surveys without the need to author Web pages and with little or no programming effort. The surveys may be extended by selecting and presenting auditory and/or visual stimuli with MATLAB and Flash to enable a wide range of psychophysical and cognitive experiments which do not require the recording of precise reaction times. Additionally, one is provided with the ability to administer and present experiments remotely. The software technologies employed by the various modules of Ensemble are MySQL, PHP, MATLAB, and Flash. The code for Ensemble is open source and available to the public, so that its functions can be readily extended by users. We describe the architecture of the system, the functionality of each module, and provide basic examples of the interfaces.

  10. Multi-brain fusion and applications to intelligence analysis

    NASA Astrophysics Data System (ADS)

    Stoica, A.; Matran-Fernandez, A.; Andreou, D.; Poli, R.; Cinel, C.; Iwashita, Y.; Padgett, C.

    2013-05-01

    In a rapid serial visual presentation (RSVP), images are shown at an extremely rapid pace. Yet, the images can still be parsed by the visual system to some extent. In fact, the detection of specific targets in a stream of pictures triggers a characteristic electroencephalography (EEG) response that can be recognized by a brain-computer interface (BCI) and exploited for automatic target detection. Research funded by DARPA's Neurotechnology for Intelligence Analysts program has achieved speed-ups in sifting through satellite images when adopting this approach. This paper extends the use of BCI technology from individual analysts to collaborative BCIs. We show that the integration of information in EEGs collected from multiple operators results in performance improvements compared to the single-operator case.
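    One way to picture this kind of multi-operator integration is score-level fusion: averaging single-trial detector outputs across operators before thresholding. The sketch below uses made-up scores and noise levels purely to illustrate why the fused decision tends to beat a single operator; it is not the specific integration method used in the study.

        import numpy as np

        rng = np.random.default_rng(0)
        n_epochs, n_operators = 400, 4

        # Hypothetical single-trial scores: target epochs (label 1) score higher on
        # average than non-targets (label 0), but with heavy per-operator noise.
        labels = rng.integers(0, 2, n_epochs)
        scores = labels[None, :] + rng.normal(0.0, 1.5, size=(n_operators, n_epochs))

        def accuracy(s, labels, thr=0.5):
            return np.mean((s > thr).astype(int) == labels)

        single = accuracy(scores[0], labels)            # one operator alone
        fused = accuracy(scores.mean(axis=0), labels)   # scores averaged across operators
        print(f"single-operator accuracy: {single:.2f}, fused accuracy: {fused:.2f}")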

  11. Nondestructive Testing of Overhead Transmission Lines—Numerical and Experimental Investigation

    NASA Astrophysics Data System (ADS)

    Kulkarni, S.; Hurlebaus, S.

    2009-03-01

    Overhead transmission lines are periodically inspected using both on-ground and helicopter-aided visual inspection. Factors including sun glare, cloud cover, close proximity to power lines and the rapidly changing visual circumstances make airborne inspection of power lines a particularly hazardous task. In this study, a finite element model is developed that can be used to create the theoretical dispersion curves of an overhead transmission line. The numerical results are then verified with experimental tests using a non-contact and broadband laser detection technique. The methodology developed in this study can be further extended to a continuous monitoring system and be applied to other cable monitoring applications, such as bridge cable monitoring, which would otherwise put human inspectors at risk.

  12. In Situ Visualization of the Growth and Fluctuations of Nanoparticle Superlattice in Liquids

    NASA Astrophysics Data System (ADS)

    Ou, Zihao; Shen, Bonan; Chen, Qian

    We use liquid phase transmission electron microscopy to image and understand the crystal growth front and interfacial fluctuation of a nanoparticle superlattice. With single-particle resolution and hundreds of nanoscale building blocks in view, we are able to identify the interface between the ordered lattice and the disordered structure and to visualize the kinetics of single-building-block attachment at the lattice growth front. The spatial interfacial fluctuation profiles support the capillary wave theory, from which we derive a surface stiffness value consistent with scaling analysis. Our experiments demonstrate the potential of extending model studies of collective systems to the nanoscale with single-particle resolution and of testing fundamental theories of condensed matter at a length scale linking atoms and micron-sized colloids.

  13. Comparison Between RGB and RGB-D Cameras for Supporting Low-Cost GNSS Urban Navigation

    NASA Astrophysics Data System (ADS)

    Rossi, L.; De Gaetani, C. I.; Pagliari, D.; Realini, E.; Reguzzoni, M.; Pinto, L.

    2018-05-01

    Pure GNSS navigation is often unreliable in urban areas because of the presence of obstructions, thus preventing a correct reception of the satellite signal. The bridging between GNSS outages, as well as the vehicle attitude reconstruction, can be recovered by using complementary information, such as visual data acquired by RGB-D or RGB cameras. In this work, the possibility of integrating low-cost GNSS and visual data by means of an extended Kalman filter has been investigated. The focus is on the comparison between the use of RGB-D and RGB cameras. In particular, a Microsoft Kinect device (second generation) and a mirrorless Canon EOS M RGB camera have been compared. The former is an interesting RGB-D camera because of its low cost, ease of use and raw data accessibility. The latter has been selected for the high quality of the acquired images and for the possibility of mounting fixed-focal-length lenses with a lower weight and cost with respect to a reflex camera. The designed extended Kalman filter takes as input the GNSS-only trajectory and the relative orientation between subsequent pairs of images. Depending on the visual data acquisition system, the filter is different because RGB-D cameras acquire both RGB and depth data, which solves the scale problem that is instead typical of image-only solutions. The two systems and filtering approaches were assessed by ad hoc experimental tests, showing that the use of a Kinect device for supporting a u-blox low-cost receiver led to a trajectory with decimeter accuracy, which is 15% better than the one obtained when using the Canon EOS M camera.
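    The predict/update cycle of such a filter can be sketched in a few lines. The version below is the linear, constant-velocity special case driven by GNSS position fixes only; the noise covariances are assumed values, and the relative-orientation measurements from the image pairs that the paper's filter also ingests are omitted.

        import numpy as np

        dt = 1.0
        F = np.block([[np.eye(2), dt * np.eye(2)],
                      [np.zeros((2, 2)), np.eye(2)]])          # state: [x, y, vx, vy]
        H = np.hstack([np.eye(2), np.zeros((2, 2))])           # GNSS observes position only
        Q = 0.01 * np.eye(4)                                   # process noise (assumed)
        R = 4.0 * np.eye(2)                                    # GNSS noise, m^2 (assumed)

        def kf_step(x, P, z):
            x_pred = F @ x                                     # predict
            P_pred = F @ P @ F.T + Q
            y = z - H @ x_pred                                 # innovation from GNSS fix z = [x, y]
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)                # Kalman gain
            return x_pred + K @ y, (np.eye(4) - K @ H) @ P_pred

        x, P = np.zeros(4), np.eye(4)
        for z in [np.array([1.0, 0.5]), np.array([2.1, 0.9]), np.array([3.0, 1.6])]:
            x, P = kf_step(x, P, z)
        print("estimated state [x, y, vx, vy]:", x)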

  14. Visual difference metric for realistic image synthesis

    NASA Astrophysics Data System (ADS)

    Bolin, Mark R.; Meyer, Gary W.

    1999-05-01

    An accurate and efficient model of human perception has been developed to control the placement of samples in a realistic image synthesis algorithm. Previous sampling techniques have sought to spread the error equally across the image plane. However, this approach neglects the fact that the renderings are intended to be displayed for a human observer. The human visual system has a varying sensitivity to error that is based upon the viewing context. This means that equivalent optical discrepancies can be very obvious in one situation and imperceptible in another. It is ultimately the perceptibility of this error that governs image quality and should be used as the basis of a sampling algorithm. This paper focuses on a simplified version of the Lubin Visual Discrimination Metric (VDM) that was developed for insertion into an image synthesis algorithm. The sampling VDM makes use of a Haar wavelet basis for the cortical transform and a less severe spatial pooling operation. The model was extended for color, including the effects of chromatic aberration. Comparisons are made between the execution times and visual difference maps of the original Lubin and simplified visual difference metrics. Results for the realistic image synthesis algorithm are also presented.
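    The wavelet-based flavor of such a metric can be conveyed with a toy sketch: one level of a 2-D Haar decomposition followed by a weighted, energy-normalized difference of the detail bands. The band weights and the normalization below are assumptions; this is neither the Lubin VDM nor the simplified metric of the paper.

        import numpy as np

        def haar_level(img):
            # One level of a 2-D Haar transform: approximation plus three detail bands.
            a, b = img[0::2, 0::2], img[0::2, 1::2]
            c, d = img[1::2, 0::2], img[1::2, 1::2]
            ll = (a + b + c + d) / 4.0
            lh = (a + b - c - d) / 4.0
            hl = (a - b + c - d) / 4.0
            hh = (a - b - c + d) / 4.0
            return ll, (lh, hl, hh)

        def band_difference(ref, test, weights=(1.0, 1.0, 0.5)):
            # Toy visible-difference score: weighted detail-band differences,
            # normalized by local band magnitude as a stand-in for masking.
            _, ref_bands = haar_level(ref)
            _, test_bands = haar_level(test)
            return sum(w * np.mean(np.abs(rb - tb) / (np.abs(rb) + 1e-3))
                       for w, rb, tb in zip(weights, ref_bands, test_bands))

        ref = np.random.rand(64, 64)
        print("difference score:", band_difference(ref, ref + 0.05 * np.random.rand(64, 64)))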

  15. Perceptual response to visual noise and display media

    NASA Technical Reports Server (NTRS)

    Durgin, Frank H.; Proffitt, Dennis R.

    1993-01-01

    The present project was designed to follow up an earlier investigation in which we studied perceptual adaptation in response to the use of Night Vision Goggles, or image intensification (I²) systems, such as those employed in the military. Our chief concern in the earlier studies was with the dynamic visual noise that is a byproduct of the I² technology: under low light conditions, there is a great deal of 'snow' or sporadic 'twinkling' of pixels in the I² display, which becomes more salient as the ambient light level drops. Because prolonged exposure to static visual noise produces strong adaptation responses, we reasoned that the dynamic visual noise of I² displays might have a similar effect, which could have implications for their long-term use. However, in the series of experiments reported last year, no evidence at all of such aftereffects following extended exposure to I² displays was found. This finding surprised us and led us to propose the following studies: (1) an investigation of dynamic visual noise and its capacity to produce aftereffects; and (2) an investigation of the perceptual consequences of characteristics of the display media.

  16. Atoms of recognition in human and computer vision.

    PubMed

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-08

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  17. Deep skin structural and microcirculation imaging with extended-focus OCT

    NASA Astrophysics Data System (ADS)

    Blatter, Cedric; Grajciar, Branislav; Huber, Robert; Leitgeb, Rainer A.

    2012-02-01

    We present an extended-focus OCT system for dermatologic applications that maintains high lateral resolution over a large depth range by using Bessel beam illumination. Moreover, Bessel beams exhibit a self-reconstruction property that is particularly useful to avoid shadowing from surface structures such as hairs. High lateral resolution and high-speed measurement, thanks to a rapidly tuning swept source, allow not only for imaging of small skin structures in depth but also for comprehensive visualization of the small capillary network within the human skin in vivo. We use this information for studying temporal vaso-responses to hypothermia. In contrast to other perfusion imaging methods such as laser Doppler imaging (LDI), OCT gives specific access to vascular responses in different vascular beds in depth.

  18. The neural representation of the gender of faces in the primate visual system: A computer modeling study.

    PubMed

    Minot, Thomas; Dury, Hannah L; Eguchi, Akihiro; Humphreys, Glyn W; Stringer, Simon M

    2017-03-01

    We use an established neural network model of the primate visual system to show how neurons might learn to encode the gender of faces. The model consists of a hierarchy of 4 competitive neuronal layers with associatively modifiable feedforward synaptic connections between successive layers. During training, the network was presented with many realistic images of male and female faces, during which the synaptic connections are modified using biologically plausible local associative learning rules. After training, we found that different subsets of output neurons have learned to respond exclusively to either male or female faces. With the inclusion of short range excitation within each neuronal layer to implement a self-organizing map architecture, neurons representing either male or female faces were clustered together in the output layer. This learning process is entirely unsupervised, as the gender of the face images is not explicitly labeled and provided to the network as a supervisory training signal. These simulations are extended to training the network on rotating faces. It is found that by using a trace learning rule incorporating a temporal memory trace of recent neuronal activity, neurons responding selectively to either male or female faces were also able to learn to respond invariantly over different views of the faces. This kind of trace learning has been previously shown to operate within the primate visual system by neurophysiological and psychophysical studies. The computer simulations described here predict that similar neurons encoding the gender of faces will be present within the primate visual system. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
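    The trace rule referred to here has a compact form: the weight change is proportional to the current presynaptic input times a temporally smoothed trace of the postsynaptic activity. The constants and the weight normalization in the sketch below are illustrative choices, not the exact simulation settings of the paper.

        import numpy as np

        def trace_learning(inputs, w, eta=0.8, alpha=0.05):
            # inputs: (timesteps, n_inputs), e.g. successive transformed views of one face.
            y_trace = 0.0
            for x in inputs:
                y = float(w @ x)                          # current postsynaptic activity
                y_trace = (1 - eta) * y + eta * y_trace   # temporal memory trace
                w = w + alpha * y_trace * x               # Hebb-like update gated by the trace
                w = w / np.linalg.norm(w)                 # keep the weight vector bounded
            return w

        rng = np.random.default_rng(1)
        views = rng.random((10, 50))                      # ten views of the same stimulus
        w = trace_learning(views, rng.random(50))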

  19. Neuroanatomical affiliation visualization-interface system.

    PubMed

    Palombi, Olivier; Shin, Jae-Won; Watson, Charles; Paxinos, George

    2006-01-01

    A number of knowledge management systems have been developed to allow users to access large quantities of neuroanatomical data. The advent of three-dimensional (3D) visualization techniques allows users to interact with complex 3D objects. In order to better understand the structural and functional organization of the brain, we present the Neuroanatomical Affiliations Visualization-Interface System (NAVIS), original software for viewing brain structures and neuroanatomical affiliations in 3D. This version of NAVIS has made use of the fifth edition of "The Rat Brain in Stereotaxic Coordinates" (Paxinos and Watson, 2005). The NAVIS development environment was based on the scripting language Python, using the Visualization Toolkit (VTK) as the 3D library and wxPython for the graphical user interface. The following manuscript is focused on the nucleus of the solitary tract (Sol) and the set of affiliated structures in the brain to illustrate the functionality of NAVIS. The Sol is the primary relay center of visceral and taste information, and consists of 14 distinct subnuclei that differ in cytoarchitecture, chemoarchitecture, connections, and function. In the present study, neuroanatomical projection data for the rat Sol were collected from selected literature in PubMed since 1975. Forty-nine identified projections of the Sol were inserted into NAVIS. The standard XML format used as input for affiliation data allows NAVIS to update data online and/or allows users to manually change or update affiliation data. NAVIS can be extended to nuclei other than the Sol.
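    Because the affiliation data are exchanged as XML, a minimal loading step might look like the sketch below; the element and attribute names (projection, target, reference, type) are hypothetical, since the actual NAVIS schema is not reproduced in the abstract.

        import xml.etree.ElementTree as ET

        # Hypothetical affiliation file for a source nucleus; the reference fields are placeholders.
        xml_text = """
        <affiliations nucleus="Sol">
          <projection target="Parabrachial nucleus" reference="ref-1" type="efferent"/>
          <projection target="Hypothalamus" reference="ref-2" type="efferent"/>
        </affiliations>
        """

        root = ET.fromstring(xml_text)
        print("source nucleus:", root.get("nucleus"))
        for proj in root.findall("projection"):
            # Each record carries the affiliated structure and its literature source.
            print(proj.get("type"), "->", proj.get("target"), "(", proj.get("reference"), ")")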

  20. Visual Analysis of North Atlantic Hurricane Trends Using Parallel Coordinates and Statistical Techniques

    DTIC Science & Technology

    2008-07-07

    analyzing multivariate data sets. The system was developed using the Java Development Kit (JDK) version 1.5; and it yields interactive performance on a... script and captures output from the MATLAB’s “regress” and “stepwisefit” utilities that perform simple and stepwise regression, respectively. The MATLAB...Statistical Association, vol. 85, no. 411, pp. 664–675, 1990. [9] H. Hauser, F. Ledermann, and H. Doleisch, “ Angular brushing of extended parallel coordinates

  1. (DURIP 10) High Speed Intensified Imaging System For Studies Of Mixing And Combustion In Supersonic Flows And Hydrocarbon Flame Structure Measurements At Elevated Pressures

    DTIC Science & Technology

    2016-11-09

    software, and their networking to augment optical diagnostics employed in supersonic reacting and non-reacting flow experiments . A high-speed...facility at Caltech. Experiments to date have made use of this equipment, extending previous capabilities to high-speed schlieren quantitative flow...visualization and image correlation velocimetry, with further experiments currently in progress. 15. SUBJECT TERMS 16. SECURITY CLASSIFICATION OF: 17

  2. Enhancement of vision by monocular deprivation in adult mice.

    PubMed

    Prusky, Glen T; Alam, Nazia M; Douglas, Robert M

    2006-11-08

    Plasticity of vision mediated through binocular interactions has been reported in mammals only during a "critical" period in juvenile life, wherein monocular deprivation (MD) causes an enduring loss of visual acuity (amblyopia) selectively through the deprived eye. Here, we report a different form of interocular plasticity of vision in adult mice in which MD leads to an enhancement of the optokinetic response (OKR) selectively through the nondeprived eye. Over 5 d of MD, the spatial frequency sensitivity of the OKR increased gradually, reaching a plateau of approximately 36% above pre-deprivation baseline. Eye opening initiated a gradual decline, but sensitivity was maintained above pre-deprivation baseline for 5-6 d. Enhanced function was restricted to the monocular visual field, notwithstanding the dependence of the plasticity on binocular interactions. Activity in visual cortex ipsilateral to the deprived eye was necessary for the characteristic induction of the enhancement, and activity in visual cortex contralateral to the deprived eye was necessary for its maintenance after MD. The plasticity also displayed distinct learning-like properties: Active testing experience was required to attain maximal enhancement and for enhancement to persist after MD, and the duration of enhanced sensitivity after MD was extended by increasing the length of MD, and by repeating MD. These data show that the adult mouse visual system maintains a form of experience-dependent plasticity in which the visual cortex can modulate the normal function of subcortical visual pathways.

  3. Altered visual perception in long-term ecstasy (MDMA) users.

    PubMed

    White, Claire; Brown, John; Edwards, Mark

    2013-09-01

    The present study investigated the long-term consequences of ecstasy use on visual processes thought to reflect serotonergic functions in the occipital lobe. Evidence indicates that the main psychoactive ingredient in ecstasy (methylenedioxymethamphetamine) causes long-term changes to the serotonin system in human users. Previous research has found that amphetamine-abstinent ecstasy users have disrupted visual processing in the occipital lobe that relies on serotonin, with researchers concluding that ecstasy broadens orientation tuning bandwidths. However, other processes may have accounted for these results. The aim of the present research was to determine if amphetamine-abstinent ecstasy users have changes in occipital lobe functioning, as revealed by two studies: a masking study that directly measured the width of orientation tuning bandwidths and a contour integration task that measured the strength of long-range connections in the visual cortex of drug users compared to controls. Participants were compared on the width of orientation tuning bandwidths (26 controls, 12 ecstasy users, 10 ecstasy + amphetamine users) and the strength of long-range connections (38 controls, 15 ecstasy users, 12 ecstasy + amphetamine users) in the occipital lobe. Amphetamine-abstinent ecstasy users had significantly broader orientation tuning bandwidths than controls, and significantly lower contour detection thresholds (CDTs) than both controls and ecstasy + amphetamine users, indicating worse performance on the task. These results extend previous research and are consistent with the proposal that ecstasy may damage the serotonin system, resulting in behavioral changes on tests of visual perception processes that are thought to reflect serotonergic functions in the occipital lobe.

  4. Using Secure Web Services to Visualize Poison Center Data for Nationwide Biosurveillance: A Case Study

    PubMed Central

    Savel, Thomas G; Bronstein, Alvin; Duck, William; Rhodes, M. Barry; Lee, Brian; Stinn, John; Worthen, Katherine

    2010-01-01

    Objectives Real-time surveillance systems are valuable for timely response to public health emergencies. It has been challenging to leverage existing surveillance systems in state and local communities, and, using a centralized architecture, add new data sources and analytical capacity. Because this centralized model has proven to be difficult to maintain and enhance, the US Centers for Disease Control and Prevention (CDC) has been examining the ability to use a federated model based on secure web services architecture, with data stewardship remaining with the data provider. Methods As a case study for this approach, the American Association of Poison Control Centers and the CDC extended an existing data warehouse via a secure web service, and shared aggregate clinical effects and case counts data by geographic region and time period. To visualize these data, CDC developed a web browser-based interface, Quicksilver, which leveraged the Google Maps API and Flot, a javascript plotting library. Results Two iterations of the NPDS web service were completed in 12 weeks. The visualization client, Quicksilver, was developed in four months. Discussion This implementation of web services combined with a visualization client represents incremental positive progress in transitioning national data sources like BioSense and NPDS to a federated data exchange model. Conclusion Quicksilver effectively demonstrates how the use of secure web services in conjunction with a lightweight, rapidly deployed visualization client can easily integrate isolated data sources for biosurveillance. PMID:23569581

  5. Structural Integrity and Durability of Reusable Space Propulsion Systems

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A two-day conference on the structural integrity and durability of reusable space propulsion systems was held on 14 to 15 May 1991 at the NASA Lewis Research Center. Presentations were made by industry, university, and government researchers organized into four sessions: (1) aerothermodynamic loads; (2) instrumentation; (3) fatigue, fracture, and constitutive modeling; and (4) structural dynamics. The principle objectives were to disseminate research results and future plans in each of four areas. This publication contains extended abstracts and the visual material presented during the conference. Particular emphasis is placed on the Space Shuttle Main Engine (SSME) and the SSME turbopump.

  6. Crustacean Larvae-Vision in the Plankton.

    PubMed

    Cronin, Thomas W; Bok, Michael J; Lin, Chan

    2017-11-01

    We review the visual systems of crustacean larvae, concentrating on the compound eyes of decapod and stomatopod larvae as well as the functional and behavioral aspects of their vision. Larval compound eyes of these macrurans are all built on fundamentally the same optical plan, the transparent apposition eye, which is eminently suitable for modification into the abundantly diverse optical systems of the adults. Many of these eyes contain a layer of reflective structures overlying the retina that produces a counterilluminating eyeshine, so they are unique in being camouflaged both by their transparency and by their reflection of light spectrally similar to background light to conceal the opaque retina. Besides the pair of compound eyes, at least some crustacean larvae have a non-imaging photoreceptor system based on a naupliar eye and possibly other frontal eyes. Larval compound-eye photoreceptors send axons to a large and well-developed optic lobe consisting of a series of neuropils that are similar to those of adult crustaceans and insects, implying sophisticated analysis of visual stimuli. The visual system fosters a number of advanced and flexible behaviors that permit crustacean larvae to survive extended periods in the plankton and allows them to reach acceptable adult habitats, within which to metamorphose. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  7. Eye-Tracking in the Study of Visual Expertise: Methodology and Approaches in Medicine

    ERIC Educational Resources Information Center

    Fox, Sharon E.; Faulkner-Jones, Beverly E.

    2017-01-01

    Eye-tracking is the measurement of eye motions and point of gaze of a viewer. Advances in this technology have been essential to our understanding of many forms of visual learning, including the development of visual expertise. In recent years, these studies have been extended to the medical professions, where eye-tracking technology has helped us…

  8. Deep recurrent neural network reveals a hierarchy of process memory during dynamic natural vision.

    PubMed

    Shi, Junxing; Wen, Haiguang; Zhang, Yizhen; Han, Kuan; Liu, Zhongming

    2018-05-01

    The human visual cortex extracts both spatial and temporal visual features to support perception and guide behavior. Deep convolutional neural networks (CNNs) provide a computational framework to model cortical representation and organization for spatial visual processing, but are unable to explain how the brain processes temporal information. To overcome this limitation, we extended a CNN by adding recurrent connections to different layers of the CNN to allow spatial representations to be remembered and accumulated over time. The extended model, or the recurrent neural network (RNN), embodied a hierarchical and distributed model of process memory as an integral part of visual processing. Unlike the CNN, the RNN learned spatiotemporal features from videos to enable action recognition. The RNN predicted cortical responses to natural movie stimuli better than the CNN in all visual areas, especially those along the dorsal stream. As a fully observable model of visual processing, the RNN also revealed a cortical hierarchy of temporal receptive windows, dynamics of process memory, and spatiotemporal representations. These results support the hypothesis of process memory, and demonstrate the potential of using the RNN for in-depth computational understanding of dynamic natural vision. © 2018 Wiley Periodicals, Inc.
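    The architectural idea, recurrence added on top of per-frame convolutional features so that spatial representations accumulate over time, can be sketched in a few lines of PyTorch. The layer sizes and the single GRU below are illustrative assumptions; the paper adds recurrent connections to several layers of the CNN.

        import torch
        import torch.nn as nn

        class RecurrentCNN(nn.Module):
            # Toy CNN + recurrent layer for video: per-frame spatial features are
            # accumulated over time by a GRU before classification.
            def __init__(self, n_classes=10):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
                self.classifier = nn.Linear(64, n_classes)

            def forward(self, video):                      # video: (batch, time, 3, H, W)
                b, t = video.shape[:2]
                x = self.features(video.flatten(0, 1))     # (b*t, 32, 1, 1)
                x = x.flatten(1).view(b, t, 32)            # per-frame feature vectors
                out, _ = self.rnn(x)                       # temporal accumulation
                return self.classifier(out[:, -1])         # classify from the last step

        model = RecurrentCNN()
        print(model(torch.randn(2, 8, 3, 64, 64)).shape)   # -> torch.Size([2, 10])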

  9. Error correcting mechanisms during antisaccades: contribution of online control during primary saccades and offline control via secondary saccades.

    PubMed

    Bedi, Harleen; Goltz, Herbert C; Wong, Agnes M F; Chandrakumar, Manokaraananthan; Niechwiej-Szwedo, Ewa

    2013-01-01

    Errors in eye movements can be corrected during the ongoing saccade through in-flight modifications (i.e., online control), or by programming a secondary eye movement (i.e., offline control). In a reflexive saccade task, the oculomotor system can use extraretinal information (i.e., efference copy) online to correct errors in the primary saccade, and offline retinal information to generate a secondary corrective saccade. The purpose of this study was to examine the error correction mechanisms in the antisaccade task. The roles of extraretinal and retinal feedback in maintaining eye movement accuracy were investigated by presenting visual feedback at the spatial goal of the antisaccade. We found that online control for antisaccade is not affected by the presence of visual feedback; that is whether visual feedback is present or not, the duration of the deceleration interval was extended and significantly correlated with reduced antisaccade endpoint error. We postulate that the extended duration of deceleration is a feature of online control during volitional saccades to improve their endpoint accuracy. We found that secondary saccades were generated more frequently in the antisaccade task compared to the reflexive saccade task. Furthermore, we found evidence for a greater contribution from extraretinal sources of feedback in programming the secondary "corrective" saccades in the antisaccade task. Nonetheless, secondary saccades were more corrective for the remaining antisaccade amplitude error in the presence of visual feedback of the target. Taken together, our results reveal a distinctive online error control strategy through an extension of the deceleration interval in the antisaccade task. Target feedback does not improve online control, rather it improves the accuracy of secondary saccades in the antisaccade task.

  10. Error Correcting Mechanisms during Antisaccades: Contribution of Online Control during Primary Saccades and Offline Control via Secondary Saccades

    PubMed Central

    Bedi, Harleen; Goltz, Herbert C.; Wong, Agnes M. F.; Chandrakumar, Manokaraananthan; Niechwiej-Szwedo, Ewa

    2013-01-01

    Errors in eye movements can be corrected during the ongoing saccade through in-flight modifications (i.e., online control), or by programming a secondary eye movement (i.e., offline control). In a reflexive saccade task, the oculomotor system can use extraretinal information (i.e., efference copy) online to correct errors in the primary saccade, and offline retinal information to generate a secondary corrective saccade. The purpose of this study was to examine the error correction mechanisms in the antisaccade task. The roles of extraretinal and retinal feedback in maintaining eye movement accuracy were investigated by presenting visual feedback at the spatial goal of the antisaccade. We found that online control for antisaccade is not affected by the presence of visual feedback; that is whether visual feedback is present or not, the duration of the deceleration interval was extended and significantly correlated with reduced antisaccade endpoint error. We postulate that the extended duration of deceleration is a feature of online control during volitional saccades to improve their endpoint accuracy. We found that secondary saccades were generated more frequently in the antisaccade task compared to the reflexive saccade task. Furthermore, we found evidence for a greater contribution from extraretinal sources of feedback in programming the secondary “corrective” saccades in the antisaccade task. Nonetheless, secondary saccades were more corrective for the remaining antisaccade amplitude error in the presence of visual feedback of the target. Taken together, our results reveal a distinctive online error control strategy through an extension of the deceleration interval in the antisaccade task. Target feedback does not improve online control, rather it improves the accuracy of secondary saccades in the antisaccade task. PMID:23936308

  11. Toward a more embedded/extended perspective on the cognitive function of gestures

    PubMed Central

    Pouw, Wim T. J. L.; de Nooijer, Jacqueline A.; van Gog, Tamara; Zwaan, Rolf A.; Paas, Fred

    2014-01-01

    Gestures are often considered to be demonstrative of the embodied nature of the mind (Hostetter and Alibali, 2008). In this article, we review current theories and research targeted at the intra-cognitive role of gestures. We ask the question how can gestures support internal cognitive processes of the gesturer? We suggest that extant theories are in a sense disembodied, because they focus solely on embodiment in terms of the sensorimotor neural precursors of gestures. As a result, current theories on the intra-cognitive role of gestures are lacking in explanatory scope to address how gestures-as-bodily-acts fulfill a cognitive function. On the basis of recent theoretical appeals that focus on the possibly embedded/extended cognitive role of gestures (Clark, 2013), we suggest that gestures are external physical tools of the cognitive system that replace and support otherwise solely internal cognitive processes. That is gestures provide the cognitive system with a stable external physical and visual presence that can provide means to think with. We show that there is a considerable amount of overlap between the way the human cognitive system has been found to use its environment, and how gestures are used during cognitive processes. Lastly, we provide several suggestions of how to investigate the embedded/extended perspective of the cognitive function of gestures. PMID:24795687

  12. Texture-Based Correspondence Display

    NASA Technical Reports Server (NTRS)

    Gerald-Yamasaki, Michael

    2004-01-01

    Texture-based correspondence display is a methodology for displaying corresponding data elements in visual representations of complex multidimensional, multivariate data. Texture is utilized as a persistent medium to contain a visual representation model and as a means to create multiple renditions of data in which color is used to identify correspondence. Corresponding data elements are displayed over a variety of visual metaphors in a normal rendering process, without the extraneous creation and maintenance of linking metadata. The effectiveness of visual representation for understanding data is extended to the expression of the visual representation model in texture.

  13. A Spiking Neural Network Based Cortex-Like Mechanism and Application to Facial Expression Recognition

    PubMed Central

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as a feedforward, hierarchical simulation of the ventral stream of the visual cortex using a biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering work on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of the visual cortex and from developments in artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy with the computing power of the spiking neuron model, a practical framework is presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of being able to deal with complicated pattern recognition problems. This suggests that, by combining cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanisms has the potential to extend our knowledge of the brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanisms. PMID:23193391
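    The spiking building block of such a system can be conveyed with a single leaky integrate-and-fire neuron; the time constant, threshold, and input drive below are arbitrary illustrative values rather than the parameters of the model described here.

        import numpy as np

        def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
            # Leaky integrate-and-fire dynamics: dv/dt = (v_rest - v + I) / tau,
            # with a spike and reset whenever the membrane potential crosses threshold.
            v, spikes = v_rest, []
            for i_t in input_current:
                v += dt * (v_rest - v + i_t) / tau
                if v >= v_thresh:
                    spikes.append(1)
                    v = v_reset
                else:
                    spikes.append(0)
            return np.array(spikes)

        rng = np.random.default_rng(2)
        drive = 1.2 + 0.2 * rng.standard_normal(1000)   # noisy suprathreshold input
        print("output firing rate per step:", lif_neuron(drive).mean())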

  14. Dynamic facial expressions evoke distinct activation in the face perception network: a connectivity analysis study.

    PubMed

    Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl

    2012-02-01

    Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.

  15. Numerosity as a topological invariant.

    PubMed

    Kluth, Tobias; Zetzsche, Christoph

    2016-01-01

    The ability to quickly recognize the number of objects in our environment is a fundamental cognitive function. However, it is far from clear which computations and which actual neural processing mechanisms are used to provide us with such a skill. Here we try to provide a detailed and comprehensive analysis of this issue, which comprises both the basic mathematical foundations and the peculiarities imposed by the structure of the visual system and by the neural computations provided by the visual cortex. We suggest that numerosity should be considered as a mathematical invariant. Making use of concepts from mathematical topology--like connectedness, Betti numbers, and the Gauss-Bonnet theorem--we derive the basic computations suited for the computation of this invariant. We show that the computation of numerosity is possible in a neurophysiologically plausible fashion using only computational elements which are known to exist in the visual cortex. We further show that a fundamental feature of numerosity perception, its Weber property, arises naturally, assuming noise in the basic neural operations. The model is tested on an extended data set (made publicly available). It is hoped that our results can provide a general framework for future research on the invariance properties of the numerosity system.
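    The connectedness idea can be made concrete: the number of connected components of a binary image (its zeroth Betti number) is a proxy for numerosity that is invariant to the size and shape of the items. The sketch below simply counts components with scipy.ndimage.label; it illustrates the invariant, not the neural implementation proposed in the paper.

        import numpy as np
        from scipy import ndimage

        # Binary image containing three blobs of different sizes and shapes.
        img = np.zeros((64, 64), dtype=int)
        img[5:15, 5:15] = 1       # square
        img[30:34, 40:60] = 1     # elongated bar
        img[50:60, 10:14] = 1     # thin rectangle

        _, n_components = ndimage.label(img)
        print("estimated numerosity:", n_components)   # -> 3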

  16. The Effect of Viewing Eccentricity on Enumeration

    PubMed Central

    Palomares, Melanie; Smith, Paul R.; Pitts, Carole Holley; Carter, Breana M.

    2011-01-01

    Visual acuity and contrast sensitivity progressively diminish with increasing viewing eccentricity. Here we evaluated how visual enumeration is affected by visual eccentricity, and whether subitizing capacity, the accurate enumeration of a small number (∼3) of items, decreases with more eccentric viewing. Participants enumerated gratings whose (1) stimulus size was constant across eccentricity, and (2) whose stimulus size scaled by a cortical magnification factor across eccentricity. While we found that enumeration accuracy and precision decreased with increasing eccentricity, cortical magnification scaling of size neutralized the deleterious effects of increasing eccentricity. We found that size scaling did not affect subitizing capacities, which were nearly constant across all eccentricities. We also found that size scaling modulated the variation coefficients, a normalized metric of enumeration precision, defined as the standard deviation divided by the mean response. Our results show that the inaccuracy and imprecision associated with increasing viewing eccentricity is due to limitations in spatial resolution. Moreover, our results also support the notion that the precise number system is restricted to small numerosities (represented by the subitizing limit), while the approximate number system extends across both small and large numerosities (indexed by variation coefficients) at large eccentricities. PMID:21695212

  17. The effect of viewing eccentricity on enumeration.

    PubMed

    Palomares, Melanie; Smith, Paul R; Pitts, Carole Holley; Carter, Breana M

    2011-01-01

    Visual acuity and contrast sensitivity progressively diminish with increasing viewing eccentricity. Here we evaluated how visual enumeration is affected by visual eccentricity, and whether subitizing capacity, the accurate enumeration of a small number (∼3) of items, decreases with more eccentric viewing. Participants enumerated gratings whose (1) stimulus size was constant across eccentricity, and (2) whose stimulus size scaled by a cortical magnification factor across eccentricity. While we found that enumeration accuracy and precision decreased with increasing eccentricity, cortical magnification scaling of size neutralized the deleterious effects of increasing eccentricity. We found that size scaling did not affect subitizing capacities, which were nearly constant across all eccentricities. We also found that size scaling modulated the variation coefficients, a normalized metric of enumeration precision, defined as the standard deviation divided by the mean response. Our results show that the inaccuracy and imprecision associated with increasing viewing eccentricity is due to limitations in spatial resolution. Moreover, our results also support the notion that the precise number system is restricted to small numerosities (represented by the subitizing limit), while the approximate number system extends across both small and large numerosities (indexed by variation coefficients) at large eccentricities.

  18. The development of contour processing: evidence from physiology and psychophysics

    PubMed Central

    Taylor, Gemma; Hipp, Daniel; Moser, Alecia; Dickerson, Kelly; Gerhardstein, Peter

    2014-01-01

    Object perception and pattern vision depend fundamentally upon the extraction of contours from the visual environment. In adulthood, contour or edge-level processing is supported by the Gestalt heuristics of proximity, collinearity, and closure. Less is known, however, about the developmental trajectory of contour detection and contour integration. Within the physiology of the visual system, long-range horizontal connections in V1 and V2 are the likely candidates for implementing these heuristics. While post-mortem anatomical studies of human infants suggest that horizontal interconnections reach maturity by the second year of life, psychophysical research with infants and children suggests a considerably more protracted development. In the present review, data from infancy to adulthood will be discussed in order to track the development of contour detection and integration. The goal of this review is thus to integrate the development of contour detection and integration with research regarding the development of underlying neural circuitry. We conclude that the ontogeny of this system is best characterized as a developmentally extended period of associative acquisition whereby horizontal connectivity becomes functional over longer and longer distances, thus becoming able to effectively integrate over greater spans of visual space. PMID:25071681

  19. A spiking neural network based cortex-like mechanism and application to facial expression recognition.

    PubMed

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as feedforward, hierarchical simulation of ventral stream of visual cortex using biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of visual cortex and developments on artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy and computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of capability of dealing with complicated pattern recognition problems, suggesting that, by combining the cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanism has the potential to extend our knowledge of brain mechanisms underlying the cognitive analysis and to advance theoretical models of how we recognize face or, more specifically, perceive other people's facial expression in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanism.

  20. Promoting Visualization Skills through Deconstruction Using Physical Models and a Visualization Activity Intervention

    NASA Astrophysics Data System (ADS)

    Schiltz, Holly Kristine

    Visualization skills are important in learning chemistry, as these skills have been shown to correlate with high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: through verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of students taking inorganic chemistry who worked with the physical model systems demonstrated difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations. In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect that instructors' modeled visualization artifacts had on students. No patterns emerged from the passive observation of visualization artifacts in lecture or recitation, but the need to elicit visual information from students was made clear. Deconstruction proved to be a valuable method for instruction and assessment of visual information. Three strategies for using deconstruction in teaching were distilled from the lessons and observations of the student focus groups: begin with observations of what is given in an image and what it is composed of; identify the relationships between components to find additional operations in different environments around the molecule; and deconstruct the steps of challenging questions to reveal mistakes. An intervention was developed to teach students to use deconstruction and verbalization to analyze complex visualization tasks and to employ the principles of the theoretical framework. The activities were scaffolded to introduce increasingly challenging concepts to students while also supporting them as they learned visually demanding chemistry concepts. Several themes were observed in the analysis of the visualization activities. Students used deconstruction by documenting which parts of the images were useful for interpretation of the visual. Students identified valid patterns and rules within the images, which signified understanding of the arrangement of information presented in the representation. Successful strategy communication was identified when students documented personal strategies that allowed them to complete the activity tasks. Finally, students demonstrated the ability to extend symmetry skills to advanced applications they had not previously seen. This work shows how the use of deconstruction and verbalization may have a great impact on how students master difficult topics; combined, they offer students a powerful strategy for approaching visually demanding chemistry problems and give the instructor unique insight into students' mentally constructed strategies.

  1. MetaRep, an extended CMAS 3D program to visualize mafic (CMAS, ACF-S, ACF-N) and pelitic (AFM-K, AFM-S, AKF-S) projections

    NASA Astrophysics Data System (ADS)

    France, Lydéric; Nicollet, Christian

    2010-06-01

    MetaRep is a program based on our earlier program CMAS 3D. It is developed in MATLAB® script. MetaRep's objectives are to visualize and project major element compositions of mafic and pelitic rocks and their minerals in the pseudo-quaternary projections of the ACF-S, ACF-N, CMAS, AFM-K, AFM-S and AKF-S systems. These six systems are commonly used to describe metamorphic mineral assemblages and magmatic evolutions. Each system, made of four apices, can be represented in a tetrahedron that can be visualized in three dimensions with MetaRep; the four tetrahedron apices represent oxides or combinations of oxides that define the composition of the projected rock or mineral. The three-dimensional representation allows one to obtain a better understanding of the topology of the relationships between the rocks and minerals. From these systems, MetaRep can also project data in ternary plots (for example, the ACF, AFM and AKF ternary projections can be generated). A functional interface makes it easy to use and does not require any knowledge of MATLAB® programming. To facilitate its use, MetaRep loads, from the main interface, data compiled in a Microsoft Excel™ spreadsheet. Although useful for scientific research, the program is also a powerful tool for teaching. We propose an application example that, by using two combined systems (ACF-S and ACF-N), provides strong support for the petrological interpretation.
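    MetaRep itself is written in MATLAB; the Python sketch below only illustrates the generic geometric step behind such pseudo-quaternary plots, namely normalizing a four-component composition and mapping it barycentrically onto the apices of a tetrahedron for 3-D display. The apex layout and the example composition are arbitrary, and the oxide combinations defining each apex are omitted.

        import numpy as np

        # Fixed apex positions of a regular tetrahedron (arbitrary orientation).
        APICES = np.array([
            [0.0, 0.0, 0.0],
            [1.0, 0.0, 0.0],
            [0.5, np.sqrt(3) / 2, 0.0],
            [0.5, np.sqrt(3) / 6, np.sqrt(2.0 / 3.0)],
        ])

        def tetra_coords(components):
            # Barycentric mapping: normalize the four components and weight the apices.
            w = np.asarray(components, dtype=float)
            w = w / w.sum()
            return w @ APICES

        # Toy composition expressed in the four apex components of one projection system.
        print(tetra_coords([35.0, 25.0, 20.0, 20.0]))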

  2. Visual short-term memory load reduces retinotopic cortex response to contrast.

    PubMed

    Konstantinou, Nikos; Bahrami, Bahador; Rees, Geraint; Lavie, Nilli

    2012-11-01

    Load Theory of attention suggests that high perceptual load in a task leads to reduced sensory visual cortex response to task-unrelated stimuli resulting in "load-induced blindness" [e.g., Lavie, N. Attention, distraction and cognitive control under load. Current Directions in Psychological Science, 19, 143-148, 2010; Lavie, N. Distracted and confused?: Selective attention under load. Trends in Cognitive Sciences, 9, 75-82, 2005]. Consideration of the findings that visual STM (VSTM) involves sensory recruitment [e.g., Pasternak, T., & Greenlee, M. Working memory in primate sensory systems. Nature Reviews Neuroscience, 6, 97-107, 2005] within Load Theory led us to a new hypothesis regarding the effects of VSTM load on visual processing. If VSTM load draws on sensory visual capacity, then similar to perceptual load, high VSTM load should also reduce visual cortex response to incoming stimuli leading to a failure to detect them. We tested this hypothesis with fMRI and behavioral measures of visual detection sensitivity. Participants detected the presence of a contrast increment during the maintenance delay in a VSTM task requiring maintenance of color and position. Increased VSTM load (manipulated by increased set size) led to reduced retinotopic visual cortex (V1-V3) responses to contrast as well as reduced detection sensitivity, as we predicted. Additional visual detection experiments established a clear tradeoff between the amount of information maintained in VSTM and detection sensitivity, while ruling out alternative accounts for the effects of VSTM load in terms of differential spatial allocation strategies or task difficulty. These findings extend Load Theory to demonstrate a new form of competitive interactions between early visual cortex processing and visual representations held in memory under load and provide a novel line of support for the sensory recruitment hypothesis of VSTM.

  3. Real-space and real-time dynamics of CRISPR-Cas9 visualized by high-speed atomic force microscopy.

    PubMed

    Shibata, Mikihiro; Nishimasu, Hiroshi; Kodera, Noriyuki; Hirano, Seiichi; Ando, Toshio; Uchihashi, Takayuki; Nureki, Osamu

    2017-11-10

    The CRISPR-associated endonuclease Cas9 binds to a guide RNA and cleaves double-stranded DNA with a sequence complementary to the RNA guide. The Cas9-RNA system has been harnessed for numerous applications, such as genome editing. Here we use high-speed atomic force microscopy (HS-AFM) to visualize the real-space and real-time dynamics of CRISPR-Cas9 in action. HS-AFM movies indicate that, whereas apo-Cas9 adopts unexpected flexible conformations, Cas9-RNA forms a stable bilobed structure and interrogates target sites on the DNA by three-dimensional diffusion. These movies also provide real-time visualization of the Cas9-mediated DNA cleavage process. Notably, the Cas9 HNH nuclease domain fluctuates upon DNA binding, and subsequently adopts an active conformation, where the HNH active site is docked at the cleavage site in the target DNA. Collectively, our HS-AFM data extend our understanding of the action mechanism of CRISPR-Cas9.

  4. Griffiths phase and long-range correlations in a biologically motivated visual cortex model

    NASA Astrophysics Data System (ADS)

    Girardi-Schappo, M.; Bortolotto, G. S.; Gonsalves, J. J.; Pinto, L. T.; Tragtenberg, M. H. R.

    2016-07-01

    Activity in the brain propagates as waves of firing neurons, namely avalanches. These waves' size and duration distributions have been experimentally shown to display a stable power-law profile, long-range correlations and a 1/f^b power spectrum in vivo and in vitro. We study an avalanching, biologically motivated model of the mammalian visual cortex and find an extended critical-like region - a Griffiths phase - characterized by divergent susceptibility and zero order parameter. This phase lies close to the expected experimental value of the excitatory postsynaptic potential in the cortex, suggesting that critical behavior may be found in the visual system. Avalanches are not perfectly power-law distributed, but it is possible to collapse the distributions and define a cutoff avalanche size that diverges as the network size is increased inside the critical region. The avalanches present long-range correlations and a 1/f^b power spectrum, matching experiments. The phase transition is analytically determined by a mean-field approximation.
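    The 1/f^b spectral signature referred to above can be estimated from any recorded activity time series by fitting a straight line to the log-log power spectrum. The short sketch below shows one common way to do this with NumPy; the signal is a synthetic placeholder (Brownian noise, for which b is about 2), not output of the avalanche model.

        # Estimate the exponent b of a 1/f^b power spectrum from a time series.
        # The signal here is a synthetic placeholder; in practice it would be the
        # summed network activity of the avalanching model.
        import numpy as np

        rng = np.random.default_rng(0)
        signal = np.cumsum(rng.standard_normal(2 ** 14))  # Brownian noise, b ~ 2

        # One-sided power spectrum, dropping the DC bin.
        freqs = np.fft.rfftfreq(signal.size, d=1.0)[1:]
        power = np.abs(np.fft.rfft(signal - signal.mean()))[1:] ** 2

        # Fit log10(power) = const - b * log10(freq) by least squares.
        slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
        print(f"estimated b = {-slope:.2f}")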

  5. 76 FR 1990 - Airworthiness Directives; Pilatus Aircraft Ltd. Models PC-6, PC-6-H1, PC-6-H2, PC-6/350, PC-6/350...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-12

    ... eddy current and visual inspections of the upper wing strut fitting for evidence of cracks, wear and/or... permitted extending the intervals for the repetitive eddy current and visual inspections from 100 Flight... the applicability and to require repetitive eddy current and visual inspections of the upper wing...

  6. 75 FR 62005 - Airworthiness Directives; Pilatus Aircraft Ltd. Models PC-6, PC-6-H1, PC-6-H2, PC-6/350, PC-6/350...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-07

    ... the applicability and to require repetitive eddy current and visual inspections of the upper wing... the applicability and to require repetitive eddy current and visual inspections of the upper wing... Emergency AD 2007-0241-E to extend the applicability and to require repetitive eddy current and visual...

  7. Schlieren photography on freely flying hawkmoth.

    PubMed

    Liu, Yun; Roll, Jesse; Van Kooten, Stephen; Deng, Xinyan

    2018-05-01

    The aerodynamic force on flying insects results from the vortical flow structures that vary both spatially and temporally throughout flight. Due to these complexities and the inherent difficulties in studying flying insects in a natural setting, a complete picture of the vortical flow has been difficult to obtain experimentally. In this paper, Schlieren photography, a widely used technique for high-speed flow visualization, was adapted to capture the vortex structures around freely flying hawkmoths (Manduca). Flow features such as the leading-edge vortex and trailing-edge vortex, as well as the full vortex system in the wake, were visualized directly. Quantification of the flow from the Schlieren images was then obtained by applying a physics-based optical flow method, extending the potential applications of the method to further studies of flying insects. © 2018 The Author(s).
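    As a generic illustration of turning an image pair into a velocity field, the sketch below runs OpenCV's standard Farneback dense optical flow on two synthetic frames. This is only a stand-in for the idea of flow quantification from image sequences; it is not the physics-based optical flow method used in the paper, and the frames are artificial.

        # Generic dense optical flow on a synthetic image pair (OpenCV Farneback);
        # an illustrative stand-in only, not the paper's physics-based method.
        import numpy as np
        import cv2

        def blob(cx, cy, size=128):
            """Synthetic frame: a bright Gaussian blob centered at (cx, cy)."""
            y, x = np.mgrid[0:size, 0:size]
            return (255 * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)).astype(np.uint8)

        frame0 = blob(60, 60)
        frame1 = blob(62, 61)  # blob shifted by (2, 1) pixels

        flow = cv2.calcOpticalFlowFarneback(frame0, frame1, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Average displacement inside the blob region.
        mask = frame0 > 50
        print("mean flow (dx, dy):", flow[mask].mean(axis=0).round(2))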

  8. Visual evidence of suppressing the ion and electron energy loss on the wall in Hall thrusters

    NASA Astrophysics Data System (ADS)

    Ding, Yongjie; Peng, Wuji; Sun, Hezhi; Wei, Liqiu; Zeng, Ming; Wang, Fufeng; Yu, Daren

    2017-03-01

    A method of pushing down the magnetic field with two permanent magnetic rings is proposed in this paper. It can realize ionization inside the channel and acceleration outside the channel. The wall then suffers only the bombardment of low-energy ions and electrons, which can effectively reduce channel erosion and extend the operational lifetime of thrusters. Furthermore, there is no additional power consumption by coils, which improves system efficiency. We present the newly developed 200 W no-wall-loss Hall thruster (NWLHT-200), which applies the method of pushing down the magnetic field with two permanent magnetic rings; the visual evidence obtained preliminarily confirms that the proposed method can realize discharge without wall energy loss or erosion in Hall thrusters.

  9. Image Analysis of DNA Fiber and Nucleus in Plants.

    PubMed

    Ohmido, Nobuko; Wako, Toshiyuki; Kato, Seiji; Fukui, Kiichi

    2016-01-01

    Advances in cytology have led to the application of a wide range of visualization methods in plant genome studies. Image analysis methods are indispensable tools where morphology, density, and color play important roles in biological systems. Visualization and image analysis methods are useful techniques in the analysis of the detailed structure and function of extended DNA fibers (EDFs) and interphase nuclei. The EDF provides the highest spatial resolving power for revealing genome structure and can be used for physical mapping, especially for closely located genes and tandemly repeated sequences. On the other hand, analyzing nuclear DNA and proteins reveals nuclear structure and function. In this chapter, we describe the image analysis protocol for quantitatively analyzing different types of plant genome material: EDFs and interphase nuclei.

  10. Sparsening Filter Design for Iterative Soft-Input Soft-Output Detectors

    DTIC Science & Technology

    2012-02-29

    filter/detector structure. Since the BP detector itself is unaltered from [1], it can accommodate a system employing channel codes such as LDPC encoding...considered in [1], or can readily be extended to the MIMO case with, for example, space-time coding as in [2,8]. Since our focus is on the design of...simplex method of [15], since it was already available in Matlab , via the “fminsearch” function. 6 Cost surfaces To visualize the cost surfaces, consider

  11. Report on the Installation and Testing of the Advanced Weather Interactive Processing System (AWIPS II) for U.S. Navy Applications

    DTIC Science & Technology

    2018-04-24

    Environment (CAVE). The report also details NRL’s work in extending AWIPS II EDEX to ingest and decode a Navy movement report instructions (MOVREP...phase of this work involved obtaining a copy of the AWIPS II client, the Common Access Visualization Environment (CAVE) as well as a copy of the server...assess the development environment of CAVE for supporting a Navy specific application. In consultation with FWC-San Diego we chose to work with

  12. Distributed telemedicine for the National Information Infrastructure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forslund, D.W.; Lee, Seong H.; Reverbel, F.C.

    1997-08-01

    TeleMed is an advanced system that provides a distributed multimedia electronic medical record available over a wide area network. It uses object-based computing, distributed data repositories, advanced graphical user interfaces, and visualization tools along with innovative concept extraction of image information for storing and accessing medical records developed in a separate project from 1994-5. In 1996, we began the transition to Java, extended the infrastructure, and worked to begin deploying TeleMed-like technologies throughout the nation. Other applications are mentioned.

  13. Light and dark adaptation of visually perceived eye level controlled by visual pitch.

    PubMed

    Matin, L; Li, W

    1995-01-01

    The pitch of a visual field systematically influences the elevation at which a monocularly viewing subject sets a target so as to appear at visually perceived eye level (VPEL). The deviation of the setting from true eye level averages approximately 0.6 times the angle of pitch while the subject views a fully illuminated, complexly structured visual field, and is only slightly less with one or two pitched-from-vertical lines in a dark field (Matin & Li, 1994a). The deviation of VPEL from baseline following 20 min of dark adaptation reaches its full value less than 1 min after the onset of illumination of the pitched visual field and decays exponentially in darkness following 5 min of exposure to visual pitch, either 30 degrees top-backward or 20 degrees top-forward. The magnitude of the VPEL deviation measured with the dark-adapted right eye following left-eye exposure to pitch was 85% of the deviation that followed pitch exposure of the right eye itself. Time constants for VPEL decay to the dark baseline were the same for same-eye and cross-adaptation conditions and averaged about 4 min. The time constants for decay during dark adaptation were somewhat smaller, and the change during dark adaptation extended over a 16% smaller range following the viewing of the dim two-line pitched-from-vertical stimulus than following the viewing of the complex field. The temporal course of light and dark adaptation of VPEL is virtually identical to the course of light and dark adaptation of the scotopic luminance threshold following exposure to the same luminance. We suggest that, following rod stimulation along particular retinal orientations by portions of the pitched visual field, the storage of the adaptation process resides in the retinogeniculate system and is manifested in the focal system as a change in luminance threshold and in the ambient system as a change in VPEL. The linear model previously developed to account for VPEL, which was based on the interaction of influences from the pitched visual field and extraretinal influences from the body-referenced mechanism, was employed to incorporate the effects of adaptation. Connections between VPEL adaptation and other cases of perceptual adaptation of visual direction are described.
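    As a worked illustration of the numbers reported above, the short sketch below computes how a pitch-induced VPEL deviation would relax toward the dark baseline, assuming simple first-order exponential decay; the gain of about 0.6 and the roughly 4 min time constant are taken from the abstract, and the 30 degree pitch is just an example value.

        # Illustrative first-order decay of the VPEL deviation in darkness,
        # using the approximate values quoted in the abstract (gain ~0.6,
        # time constant ~4 min); assumes simple exponential decay.
        import math

        gain = 0.6          # deviation per degree of visual pitch
        pitch_deg = 30.0    # top-backward pitch of the adapting field (example)
        tau_min = 4.0       # decay time constant in darkness

        initial_deviation = gain * pitch_deg   # ~18 degrees at light offset
        for t in range(0, 17, 4):
            deviation = initial_deviation * math.exp(-t / tau_min)
            print(f"t = {t:2d} min: VPEL deviation ~ {deviation:4.1f} deg")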

  14. Visual Thinking and Gender Differences in High School Calculus

    ERIC Educational Resources Information Center

    Haciomeroglu, Erhan Selcuk; Chicken, Eric

    2012-01-01

    This study sought to examine calculus students' mathematical performances and preferences for visual or analytic thinking regarding derivative and antiderivative tasks presented graphically. It extends previous studies by investigating factors mediating calculus students' mathematical performances and their preferred modes of thinking. Data were…

  15. Effects of a School-Based Instrumental Music Program on Verbal and Visual Memory in Primary School Children: A Longitudinal Study

    PubMed Central

    Roden, Ingo; Kreutz, Gunter; Bongard, Stephan

    2012-01-01

    This study examined the effects of a school-based instrumental training program on the development of verbal and visual memory skills in primary school children. Participants either took part in a music program with weekly 45 min sessions of instrumental lessons in small groups at school, or they received extended natural science training. A third group of children did not receive additional training. Each child completed verbal and visual memory tests three times over a period of 18 months. Significant Group by Time interactions were found in the measures of verbal memory. Children in the music group showed greater improvements than children in the control groups after controlling for children’s socio-economic background, age, and IQ. No differences between groups were found in the visual memory tests. These findings are consistent with and extend previous research by suggesting that children receiving music training may benefit from improvements in their verbal memory skills. PMID:23267341

  16. Scientific Visualization Made Easy for the Scientist

    NASA Astrophysics Data System (ADS)

    Westerhoff, M.; Henderson, B.

    2002-12-01

    amira is an application program used in creating 3D visualizations and geometric models of 3D image data sets from various application areas, e.g. medicine, biology, biochemistry, chemistry, physics, and engineering. It has demonstrated significant adoption in the marketplace since becoming commercially available in 2000. The rapid adoption has expanded the features being requested by the user base and broadened the scope of the amira product offering. The amira product offering includes amira Standard; amiraDev, used by users to extend the product's capabilities; amiraMol, used for molecular visualization; amiraDeconv, used to improve the quality of image data; and amiraVR, used in immersive VR environments. amira allows the user to construct a visualization tailored to his or her needs without requiring any programming knowledge. It also allows 3D objects to be represented as grids suitable for numerical simulations, notably as triangular surfaces and volumetric tetrahedral grids. The amira application also provides methods to generate such grids from voxel data representing an image volume, and it includes a general-purpose interactive 3D viewer. amiraDev provides an application-programming interface (API) that allows the user to add new components by C++ programming. amira supports many import formats, including a 'raw' format allowing immediate access to native uniform data sets. amira uses the power and speed of the OpenGL and Open Inventor graphics libraries and 3D graphics accelerators to give access to over 145 modules, enabling the user to process, probe, analyze and visualize data. The amiraMol extension adds powerful tools for molecular visualization to the existing amira platform. amiraMol contains support for standard molecular file formats, and tools for visualization and analysis of static molecules as well as molecular trajectories (time series). amiraDeconv adds tools for the deconvolution of 3D microscopic images. Deconvolution is the process of increasing image quality and resolution by computationally compensating artifacts of the recording process. amiraDeconv supports 3D wide-field microscopy as well as 3D confocal microscopy. It offers both non-blind and blind image deconvolution algorithms. Non-blind deconvolution uses an individually measured point spread function, while blind algorithms work on the basis of only a few recording parameters (such as numerical aperture or zoom factor). amiraVR is a specialized and extended version of the amira visualization system dedicated to use in immersive installations, such as large-screen stereoscopic projections, CAVE or Holobench systems. Among others, it supports multi-threaded multi-pipe rendering, head-tracking, advanced 3D interaction concepts, and 3D menus allowing interaction with any amira object in the same way as on the desktop. With its unique set of features, amiraVR represents both a VR (Virtual Reality) ready application for scientific and medical visualization in immersive environments, and a development platform that allows building VR applications.
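    As an illustration of the non-blind deconvolution idea described above, in which an individually measured point spread function is supplied to the algorithm, here is a minimal Richardson-Lucy iteration written with NumPy and SciPy. It is a generic textbook scheme on a synthetic image and PSF, not amiraDeconv's implementation.

        # Minimal non-blind Richardson-Lucy deconvolution sketch (generic textbook
        # scheme, not amiraDeconv's algorithm); image and PSF are synthetic.
        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(blurred, psf, iterations=30, eps=1e-12):
            """Iteratively estimate the latent image given a measured PSF."""
            estimate = np.full_like(blurred, blurred.mean())
            psf_flipped = psf[::-1, ::-1]
            for _ in range(iterations):
                reblurred = fftconvolve(estimate, psf, mode="same")
                ratio = blurred / (reblurred + eps)
                estimate *= fftconvolve(ratio, psf_flipped, mode="same")
            return estimate

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            truth = np.zeros((64, 64))
            truth[20:28, 30:38] = 1.0                    # toy object
            x, y = np.mgrid[-7:8, -7:8]
            psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
            psf /= psf.sum()                             # "measured" Gaussian PSF
            blurred = fftconvolve(truth, psf, mode="same") \
                      + 0.01 * rng.standard_normal((64, 64))
            restored = richardson_lucy(np.clip(blurred, 0, None), psf)
            print("peak before/after:", blurred.max().round(3), restored.max().round(3))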

  17. Periorbital hemangiomas.

    PubMed

    Goldberg, N S; Rosanova, M A

    1992-10-01

    1. Any hemangioma that involves the upper or lower lid and leads to partial closure in infancy may interfere with or prevent development of normal binocular vision in a matter of days to weeks. 2. Hemangiomas least likely to interfere with vision are lower lid lesions occupying one third of the lid margin or less, not extending beyond the eyelid region, and resolving early. 3. Hemangiomas associated with deprivation amblyopia (with or without anisometropia) are lesions occupying more than one half of the lid margin, extending beyond the eyelid region, resolving late, and obstructing the visual axis. 4. Hemangiomas associated with isolated anisometropic amblyopia are local but bulky lesions that are usually but not always restricted to the upper lid, closing the eye partly and resolving late. 5. The treatment of choice for periorbital hemangiomas is corticosteroids, either systemic or intralesional.

  18. Determination of Orbital Parameters for Visual Binary Stars Using a Fourier-Series Approach

    NASA Astrophysics Data System (ADS)

    Brown, D. E.; Prager, J. R.; DeLeo, G. G.; McCluskey, G. E., Jr.

    2001-12-01

    We expand on the Fourier transform method of Monet (ApJ 234, 275, 1979) to infer the orbital parameters of visual binary stars, and we present results for several systems, both simulated and real. Although the method was originally developed to address binary systems observed through at least one complete period, we have extended it to deal explicitly with cases where the orbital data are less complete. This is especially useful in cases where the period is so long that only a fragment of the orbit has been recorded. We utilize Fourier-series fitting methods appropriate to data sets covering less than one period and containing random measurement errors. In so doing, we address issues of over-determination in fitting the data and the reduction of other deleterious Fourier-series artifacts. We developed our algorithm using the MAPLE mathematical software and tested it on numerous "synthetic" systems and several real binaries, including Xi Boo, 24 Aqr, and Bu 738. This work was supported at Lehigh University by the Delaware Valley Space Grant Consortium and by NSF-REU grant PHY-9820301.
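    A simplified version of the fitting step described above can be posed as a linear least-squares problem: pick a trial period longer than the observed arc, build a design matrix of low-order sine and cosine terms, and solve for the coefficients. The sketch below illustrates this on synthetic data in Python (in place of the MAPLE code described in the abstract); the truncation order and noise level are arbitrary choices.

        # Fit a truncated Fourier series to data covering only part of one period
        # (illustrative only; synthetic data, NumPy instead of the MAPLE code
        # described in the abstract).
        import numpy as np

        def fourier_design_matrix(t, period, order):
            """Columns: 1, cos(k*w*t), sin(k*w*t) for k = 1..order."""
            w = 2 * np.pi / period
            cols = [np.ones_like(t)]
            for k in range(1, order + 1):
                cols += [np.cos(k * w * t), np.sin(k * w * t)]
            return np.column_stack(cols)

        rng = np.random.default_rng(2)
        period_true = 100.0
        t = np.linspace(0.0, 40.0, 60)           # only ~40% of one period observed
        truth = 3.0 + 2.0 * np.cos(2 * np.pi * t / period_true - 0.7)
        data = truth + 0.05 * rng.standard_normal(t.size)

        # A low truncation order keeps the under-determined fit from oscillating wildly.
        A = fourier_design_matrix(t, period_true, order=2)
        coeffs, *_ = np.linalg.lstsq(A, data, rcond=None)
        print("fitted coefficients:", np.round(coeffs, 3))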

  19. Natural language processing and visualization in the molecular imaging domain.

    PubMed

    Tulipano, P Karina; Tao, Ying; Millar, William S; Zanzonico, Pat; Kolbert, Katherine; Xu, Hua; Yu, Hong; Chen, Lifeng; Lussier, Yves A; Friedman, Carol

    2007-06-01

    Molecular imaging is at the crossroads of genomic sciences and medical imaging. Information within the molecular imaging literature could be used to link to genomic and imaging information resources and to organize and index images in a way that is potentially useful to researchers. A number of natural language processing (NLP) systems are available to automatically extract information from genomic literature. One existing NLP system, known as BioMedLEE, automatically extracts biological information consisting of biomolecular substances and phenotypic data. This paper focuses on the adaptation, evaluation, and application of BioMedLEE to the molecular imaging domain. In order to adapt BioMedLEE for this domain, we extend an existing molecular imaging terminology and incorporate it into BioMedLEE. BioMedLEE's performance is assessed with a formal evaluation study. The system's performance, measured as recall and precision, is 0.74 (95% CI: [.70-.76]) and 0.70 (95% CI [.63-.76]), respectively. We adapt a Java viewer known as PGviewer for the simultaneous visualization of images with NLP-extracted information.
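    Recall and precision as reported above are simple ratios over the extracted terms; the few lines below show the arithmetic with hypothetical counts chosen only so that the ratios match the reported 0.74 and 0.70.

        # Recall and precision from counts of extracted terms
        # (hypothetical counts for illustration, not the study's data).
        true_positives = 74    # correct extractions
        false_negatives = 26   # reference terms the system missed
        false_positives = 32   # spurious extractions

        recall = true_positives / (true_positives + false_negatives)
        precision = true_positives / (true_positives + false_positives)
        print(f"recall = {recall:.2f}, precision = {precision:.2f}")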

  20. Retrieving the unretrievable in electronic imaging systems: emotions, themes, and stories

    NASA Astrophysics Data System (ADS)

    Joergensen, Corinne

    1999-05-01

    New paradigms such as 'affective computing' and user-based research are extending the realm of facets traditionally addressed in IR systems. This paper builds on previous research reported to the electronic imaging community concerning the need to provide access to more abstract attributes of images than those currently amenable to a variety of content-based and text-based indexing techniques. Empirical research suggests that, for visual materials, in addition to standard bibliographic data and broad subject, and in addition to such visually perceptual attributes as color, texture, shape, and position or focal point, additional access points such as themes, abstract concepts, emotions, stories, and 'people-related' information such as social status would be useful in image retrieval. More recent research demonstrates that similar results are also obtained with 'fine arts' images, which generally have no access provided for these types of attributes. Current efforts to match image attributes as revealed in empirical research with those addressed in current textual and content-based indexing systems are discussed, as well as the need for new representations for image attributes and for collaboration among diverse communities of researchers.

  1. Extrabulbar olfactory system and nervus terminalis FMRFamide immunoreactive components in Xenopus laevis ontogenesis.

    PubMed

    Pinelli, Claudia; D'Aniello, Biagio; Polese, Gianluca; Rastogi, Rakesh K

    2004-09-01

    The extrabulbar olfactory system (EBOS) is a collection of nerve fibers which originate from primary olfactory receptor-like neurons and penetrate into the brain bypassing the olfactory bulbs. Our description is based upon the application of two neuronal tracers (biocytin, carbocyanine DiI) in the olfactory sac, at the cut end of the olfactory nerve and in the telencephalon of the developing clawed frog. The extrabulbar olfactory system was observed already at stage 45, which is the first developmental stage compatible with our techniques; at this stage, the extrabulbar olfactory system fibers terminated diffusely in the preoptic area. A little later in development, i.e. at stage 50, the extrabulbar olfactory system was maximally developed, extending as far caudally as the rhombencephalon. In the metamorphosing specimens, the extrabulbar olfactory system appeared reduced in extension; caudally, the fiber terminals did not extend beyond the diencephalon. While a substantial overlapping of biocytin/FMRFamide immunoreactivity was observed along the olfactory pathways as well as in the telencephalon, FMRFamide immunoreactivity was never observed to be colocalized in the same cellular or fiber components visualized by tracer molecules. The question whether the extrabulbar olfactory system and the nervus terminalis (NT) are separate anatomical entities or represent an integrated system is discussed.

  2. Towards a Comprehensive Computational Simulation System for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Shih, Ming-Hsin

    1994-01-01

    The objective of this work is to develop algorithms associated with a comprehensive computational simulation system for turbomachinery flow fields. This development is accomplished in a modular fashion. These modules include grid generation, visualization, network, simulation, toolbox, and flow modules. An interactive grid generation module is customized to facilitate the grid generation process associated with complicated turbomachinery configurations. With its user-friendly graphical user interface, the user may interactively manipulate the default settings to obtain a quality grid within a fraction of the time usually required for building a grid about the same geometry with a general-purpose grid generation code. Non-Uniform Rational B-Spline formulations are utilized in the algorithm to maintain geometry fidelity while redistributing grid points on the solid surfaces. The Bezier curve formulation is used to allow interactive construction of inner boundaries. It is also utilized to allow interactive point distribution. Cascade surfaces are transformed from three-dimensional surfaces of revolution into two-dimensional parametric planes for easy manipulation. Such a transformation allows these manipulated plane grids to be mapped to surfaces of revolution by any generatrix definition. A sophisticated visualization module is developed to allow visualization of both the grid and the flow solution, steady or unsteady. A network module is built to allow data transfer in the heterogeneous environment. A flow module is integrated into this system, using an existing turbomachinery flow code. A simulation module is developed to combine the network, flow, and visualization modules to achieve near real-time flow simulation about turbomachinery geometries. A toolbox module is developed to support the overall task. A batch version of the grid generation module is developed to allow portability and has been extended to allow dynamic grid generation for pitch-changing turbomachinery configurations. Various applications with different characteristics are presented to demonstrate the success of this system.
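    The Bezier-curve step mentioned above reduces to evaluating a control polygon at chosen parameter values. A minimal de Casteljau evaluation is sketched below in Python; the control points are placeholders, and this is standard curve mathematics rather than the module's own implementation.

        # De Casteljau evaluation of a Bezier curve defined by control points
        # (standard curve mathematics; placeholder control points, not the
        # grid-generation module's code).
        import numpy as np

        def bezier_point(control_points, t):
            """Evaluate a Bezier curve at parameter t in [0, 1]."""
            pts = np.asarray(control_points, dtype=float)
            while len(pts) > 1:
                pts = (1.0 - t) * pts[:-1] + t * pts[1:]
            return pts[0]

        # Four control points describing a hypothetical inner-boundary segment.
        ctrl = [(0.0, 0.0), (0.3, 0.4), (0.7, 0.4), (1.0, 0.0)]
        curve = np.array([bezier_point(ctrl, t) for t in np.linspace(0.0, 1.0, 11)])
        print(curve.round(3))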

  3. Alterations in task-induced activity and resting-state fluctuations in visual and DMN areas revealed in long-term meditators.

    PubMed

    Berkovich-Ohana, Aviva; Harel, Michal; Hahamy, Avital; Arieli, Amos; Malach, Rafael

    2016-07-15

    Recently we proposed that the information contained in spontaneously emerging (resting-state) fluctuations may reflect individually unique neuro-cognitive traits. One prediction of this conjecture, termed the "spontaneous trait reactivation" (STR) hypothesis, is that resting-state activity patterns could be diagnostic of unique personalities, talents and life-styles of individuals. Long-term meditators could provide a unique experimental group to test this hypothesis. Using fMRI we found that, during resting-state, the amplitude of spontaneous fluctuations in long-term mindfulness meditation (MM) practitioners was enhanced in the visual cortex and significantly reduced in the DMN compared to naïve controls. Importantly, during a visual recognition memory task, the MM group showed heightened visual cortex responsivity, concomitant with weaker negative responses in Default Mode Network (DMN) areas. This effect was also reflected in the behavioral performance, where MM practitioners performed significantly faster than the control group. Thus, our results uncover opposite changes in the visual and default mode systems in long-term meditators which are revealed during both rest and task. The results support the STR hypothesis and extend it to the domain of local changes in the magnitude of the spontaneous fluctuations. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Jewelled spiders manipulate colour-lure geometry to deceive prey

    PubMed Central

    2017-01-01

    Selection is expected to favour the evolution of efficacy in visual communication. This extends to deceptive systems, and predicts functional links between the structure of visual signals and their behavioural presentation. Work to date has primarily focused on colour, however, thereby understating the multicomponent nature of visual signals. Here I examined the relationship between signal structure, presentation behaviour, and efficacy in the context of colour-based prey luring. I used the polymorphic orb-web spider Gasteracantha fornicata, whose yellow- or white-and-black striped dorsal colours have been broadly implicated in prey attraction. In a manipulative assay, I found that spiders actively control the orientation of their conspicuous banded signals in the web, with a distinct preference for near-diagonal bearings. Further field-based study identified a predictive relationship between pattern orientation and prey interception rates, with a local maximum at the spiders' preferred orientation. There were no morph-specific effects on capture success, either singularly or via an interaction with pattern orientation. These results reveal a dynamic element in a traditionally ‘static’ signalling context, and imply differential functions for chromatic and geometric signal components across visual contexts. More broadly, they underscore how multicomponent signal designs and display behaviours may coevolve to enhance efficacy in visual deception. PMID:28356411

  5. Jewelled spiders manipulate colour-lure geometry to deceive prey.

    PubMed

    White, Thomas E

    2017-03-01

    Selection is expected to favour the evolution of efficacy in visual communication. This extends to deceptive systems, and predicts functional links between the structure of visual signals and their behavioural presentation. Work to date has primarily focused on colour, however, thereby understating the multicomponent nature of visual signals. Here I examined the relationship between signal structure, presentation behaviour, and efficacy in the context of colour-based prey luring. I used the polymorphic orb-web spider Gasteracantha fornicata , whose yellow- or white-and-black striped dorsal colours have been broadly implicated in prey attraction. In a manipulative assay, I found that spiders actively control the orientation of their conspicuous banded signals in the web, with a distinct preference for near-diagonal bearings. Further field-based study identified a predictive relationship between pattern orientation and prey interception rates, with a local maximum at the spiders' preferred orientation. There were no morph-specific effects on capture success, either singularly or via an interaction with pattern orientation. These results reveal a dynamic element in a traditionally 'static' signalling context, and imply differential functions for chromatic and geometric signal components across visual contexts. More broadly, they underscore how multicomponent signal designs and display behaviours may coevolve to enhance efficacy in visual deception. © 2017 The Author(s).

  6. SU-E-J-196: Implementation of An In-House Visual Feedback System for Motion Management During Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, V; James, J; Wang, B

    Purpose: To describe an in-house video goggle feedback system for motion management during simulation and treatment of radiation therapy patients. Methods: This video goggle system works by splitting and amplifying the video output signal directly from the Varian Real-Time Position Management (RPM) workstation or TrueBeam imaging workstation into two signals using a Distribution Amplifier. The first signal S[1] gets reconnected back to the monitor. The second signal S[2] gets connected to the input of a Video Scaler. The S[2] signal can be scaled, cropped and panned in real time to display only the relevant information to the patient. The output signal from the Video Scaler gets connected to an HDMI Extender Transmitter via a DVI-D to HDMI converter cable. The S[2] signal can be transported from the HDMI Extender Transmitter to the HDMI Extender Receiver located inside the treatment room via a Cat5e/6 cable. Inside the treatment room, the HDMI Extender Receiver is permanently mounted on the wall near the conduit where the Cat5e/6 cable is located. An HDMI cable is used to connect from the output of the HDMI Receiver to the video goggles. Results: This video goggle feedback system is currently being used at two institutions. At one institution, the system was just recently implemented for simulation and treatments on two breath-hold gated patients with 8+ total fractions over a two month period. At the other institution, the system was used to treat 100+ breath-hold gated patients on three Varian TrueBeam linacs and has been operational for twelve months. The average time to prepare the video goggle system for treatment is less than 1 minute. Conclusion: The video goggle system provides an efficient and reliable method to set up a video feedback signal for radiotherapy patients with motion management.

  7. [Maculopathy caused by Nd:YAG laser accident].

    PubMed

    Blümel, C; Brosig, J

    1999-02-01

    Since the construction of the first laser in the 1960s and its extended use in medicine, technology, and hobby applications, the number of laser accidents has increased. Established therapy concepts are still lacking. A 19-year-old man was hit in the right eye by the pulse of a military hand-held rangefinder (Nd:YAG, wavelength 1064 nm). Visual acuity dropped to 1/35, and a central scotoma with metamorphopsia occurred immediately after the accident. The ophthalmological findings showed a distinct submacular hemorrhage. Therapy with intravenous and daily parabulbar prednisolone, vitamin C, and systemic and local indomethacin resulted in an increase of visual acuity to 0.4 and a reduction of the central scotoma from 8 degrees to 2 degrees. Systemic and local use of antiphlogistic and anti-inflammatory substances may partially reduce vision-limiting scar formation. Antioxidants to neutralize the toxic radicals that arise from tissue decay should be given in addition to the cycloplegic medication. Special attention should be paid to the prevention of such laser accidents.

  8. Combining Multiple Forms Of Visual Information To Specify Contact Relations In Spatial Layout

    NASA Astrophysics Data System (ADS)

    Sedgwick, Hal A.

    1990-03-01

    An expert system, called Layout2, has been described, which models a subset of available visual information for spatial layout. The system is used to examine detailed interactions between multiple, partially redundant forms of information in an environment-centered geometrical model of an environment obeying certain rather general constraints. This paper discusses the extension of Layout2 to include generalized contact relations between surfaces. In an environment-centered model, the representation of viewer-centered distance is replaced by the representation of environmental location. This location information is propagated through the representation of the environment by a network of contact relations between contiguous surfaces. Perspective information interacts with other forms of information to specify these contact relations. The experimental study of human perception of contact relations in extended spatial layouts is also discussed. Differences between human results and Layout2 results reveal limitations in the human ability to register available information; they also point to the existence of certain forms of information not yet formalized in Layout2.
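    The propagation of location information through a network of contact relations, as described above, can be pictured as a traversal of a contact graph. The sketch below propagates a known elevation from a ground plane to the surfaces that rest on it; the surfaces and offsets are hypothetical, and this is not Layout2's internal representation.

        # Toy propagation of location (here, elevation) through contact relations
        # between surfaces; hypothetical scene, not Layout2's actual representation.
        from collections import deque

        # surface -> list of (supported_surface, height_offset_of_contact)
        contacts = {
            "ground": [("table_top", 0.75), ("box", 0.30)],
            "table_top": [("book", 0.03)],
            "box": [],
            "book": [],
        }

        def propagate_elevation(root="ground", root_elevation=0.0):
            """Breadth-first propagation of elevation along contact relations."""
            elevation = {root: root_elevation}
            queue = deque([root])
            while queue:
                surface = queue.popleft()
                for supported, offset in contacts.get(surface, []):
                    elevation[supported] = elevation[surface] + offset
                    queue.append(supported)
            return elevation

        print(propagate_elevation())   # {'ground': 0.0, 'table_top': 0.75, ...}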

  9. Synthetic light-needle photoacoustic microscopy for extended depth of field (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yang, Jiamiao; Gong, Lei; Xu, Xiao; Hai, Pengfei; Suzuki, Yuta; Wang, Lihong V.

    2017-03-01

    Photoacoustic microscopy (PAM) has been extensively applied in biomedical studies because of its ability to visualize tissue morphology and physiology in vivo in three dimensions (3D). However, conventional PAM suffers from a rapidly decreasing resolution away from the focal plane because of the limited depth of focus of an objective lens, which inevitably degrades volumetric imaging quality. Here, we propose a novel method to synthesize an ultra-long light needle that extends a microscope's depth of focus beyond its physical limitations using wavefront engineering. Furthermore, it enables an improved lateral resolution that exceeds the diffraction limit of the objective lens. The virtual light needle can be flexibly synthesized anywhere throughout the imaging volume without mechanical scanning. Benefiting from these advantages, we developed a synthetic light needle photoacoustic microscopy (SLN-PAM) system to achieve extended depth of field (DOF), sub-diffraction and motionless volumetric imaging. The DOF of our SLN-PAM system is up to 1800 µm, more than a 30-fold improvement over that gained by conventional PAM. Our system also achieves a lateral resolution of 1.8 µm (characterized at 532 nm with a 0.1 NA objective), about 50% better than the Rayleigh diffraction limit. Its superior imaging performance was demonstrated by 3D imaging of both non-biological and biological samples. This extended-DOF, sub-diffraction and motionless 3D PAM will open up new opportunities for potential biomedical applications.

  10. Development of 3D interactive visual objects using the Scripps Institution of Oceanography's Visualization Center

    NASA Astrophysics Data System (ADS)

    Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.

    2003-12-01

    Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be used directly by classroom teachers.

  11. The Application of the Montage Image Mosaic Engine To The Visualization Of Astronomical Images

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce; Good, J. C.

    2017-05-01

    The Montage Image Mosaic Engine was designed as a scalable toolkit, written in C for performance and portability across *nix platforms, that assembles FITS images into mosaics. This code is freely available and has been widely used in the astronomy and IT communities for research, product generation, and for developing next-generation cyber-infrastructure. Recently, it has begun finding applicability in the field of visualization. This development has come about because the toolkit design allows easy integration into scalable systems that process data for subsequent visualization in a browser or client. The toolkit includes a visualization tool suitable for automation and for integration into Python: mViewer creates, with a single command, complex multi-color images overlaid with coordinate displays, labels, and observation footprints, and includes an adaptive image histogram equalization method that preserves the structure of a stretched image over its dynamic range. The Montage toolkit contains functionality originally developed to support the creation and management of mosaics, but which also offers value to visualization: a background rectification algorithm that reveals the faint structure in an image; and tools for creating cutout and downsampled versions of large images. Version 5 of Montage offers support for visualizing data written in the HEALPix sky-tessellation scheme, and functionality for processing and organizing images to comply with the TOAST sky-tessellation scheme required for consumption by the World Wide Telescope (WWT). Four online tutorials allow readers to reproduce and extend all the visualizations presented in this paper.

  12. Managing the care of patients who have visual impairment.

    PubMed

    Watkinson, Sue; Scott, Eileen

    An ageing population means that the number of people who are visually impaired will increase. However, extending the role of ophthalmic nurses will promote delivery of a more effective health service for these patients. Using Maslow's hierarchy of needs as a basis for addressing the care of patients with visual impairment is a means of ensuring that they receive high quality, appropriate care at the right time.

  13. Seeing number using texture: How summary statistics account for reductions in perceived numerosity in the visual periphery.

    PubMed

    Balas, Benjamin

    2016-11-01

    Peripheral visual perception is characterized by reduced information about appearance due to constraints on how image structure is represented. Visual crowding is a consequence of excessive integration in the visual periphery. Basic phenomenology of visual crowding and other tasks have been successfully accounted for by a summary-statistic model of pooling, suggesting that texture-like processing is useful for how information is reduced in peripheral vision. I attempt to extend the scope of this model by examining a property of peripheral vision: reduced perceived numerosity in the periphery. I demonstrate that a summary-statistic model of peripheral appearance accounts for reduced numerosity in peripherally viewed arrays of randomly placed dots, but does not account for observed effects of dot clustering within such arrays. The model thus offers a limited account of how numerosity is perceived in the visual periphery. I also demonstrate that the model predicts that numerosity estimation is sensitive to element shape, which represents a novel prediction regarding the phenomenology of peripheral numerosity perception. Finally, I discuss ways to extend the model to a broader range of behavior and the potential for using the model to make further predictions about how number is perceived in untested scenarios in peripheral vision.

  14. Fine and distributed subcellular retinotopy of excitatory inputs to the dendritic tree of a collision-detecting neuron

    PubMed Central

    Zhu, Ying

    2016-01-01

    Individual neurons in several sensory systems receive synaptic inputs organized according to subcellular topographic maps, yet the fine structure of this topographic organization and its relation to dendritic morphology have not been studied in detail. Subcellular topography is expected to play a role in dendritic integration, particularly when dendrites are extended and active. The lobula giant movement detector (LGMD) neuron in the locust visual system is known to receive topographic excitatory inputs on part of its dendritic tree. The LGMD responds preferentially to objects approaching on a collision course and is thought to implement several interesting dendritic computations. To study the fine retinotopic mapping of visual inputs onto the excitatory dendrites of the LGMD, we designed a custom microscope allowing visual stimulation at the native sampling resolution of the locust compound eye while simultaneously performing two-photon calcium imaging on excitatory dendrites. We show that the LGMD receives a distributed, fine retinotopic projection from the eye facets and that adjacent facets activate overlapping portions of the same dendritic branches. We also demonstrate that adjacent retinal inputs most likely make independent synapses on the excitatory dendrites of the LGMD. Finally, we show that the fine topographic mapping can be studied using dynamic visual stimuli. Our results reveal the detailed structure of the dendritic input originating from individual facets on the eye and their relation to that of adjacent facets. The mapping of visual space onto the LGMD's dendrites is expected to have implications for dendritic computation. PMID:27009157

  15. Ultrahigh resolution optical coherence elastography using a Bessel beam for extended depth of field

    NASA Astrophysics Data System (ADS)

    Curatolo, Andrea; Villiger, Martin; Lorenser, Dirk; Wijesinghe, Philip; Fritz, Alexander; Kennedy, Brendan F.; Sampson, David D.

    2016-03-01

    Visualizing stiffness within the local tissue environment at the cellular and sub-cellular level promises to provide insight into the genesis and progression of disease. In this paper, we propose ultrahigh-resolution optical coherence elastography, and demonstrate three-dimensional imaging of local axial strain of tissues undergoing compressive loading. The technique employs a dual-arm extended focus optical coherence microscope to measure tissue displacement under compression. The system uses a broad bandwidth supercontinuum source for ultrahigh axial resolution, Bessel beam illumination and Gaussian beam detection, maintaining sub-2 μm transverse resolution over nearly 100 μm depth of field, and spectral-domain detection allowing high displacement sensitivity. The system produces strain elastograms with a record resolution (x,y,z) of 2×2×15 μm. We benchmark the advances in terms of resolution and strain sensitivity by imaging a suitable inclusion phantom. We also demonstrate this performance on freshly excised mouse aorta and reveal the mechanical heterogeneity of vascular smooth muscle cells and elastin sheets, otherwise unresolved in a typical, lower resolution optical coherence elastography system.
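    Local axial strain in compression elastography of this kind is commonly estimated as the depth-gradient of the measured axial displacement over a short sliding window. The NumPy sketch below illustrates that step on a synthetic displacement profile; it is a generic least-squares gradient estimate, not the authors' processing chain.

        # Local axial strain as the depth-gradient of axial displacement,
        # estimated by straight-line fits over a sliding window (generic
        # illustration with a synthetic profile, not the authors' pipeline).
        import numpy as np

        dz_um = 1.0                        # axial sample spacing, micrometres
        depth = np.arange(0, 300, dz_um)   # depth axis
        # Synthetic displacement: a softer layer (higher strain) above a stiffer one.
        displacement = np.where(depth < 150,
                                0.002 * depth,
                                0.3 + 0.0005 * (depth - 150))

        def local_strain(u, dz, window=15):
            """Slope of a straight-line fit to u over each window of samples."""
            half = window // 2
            strain = np.full(u.size, np.nan)
            z = np.arange(window) * dz
            for i in range(half, u.size - half):
                strain[i] = np.polyfit(z, u[i - half:i + half + 1], 1)[0]
            return strain

        strain = local_strain(displacement, dz_um)
        print("mean strain above/below the interface:",
              np.nanmean(strain[20:130]).round(4), np.nanmean(strain[170:280]).round(4))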

  16. Ultrahigh-resolution optical coherence elastography through a micro-endoscope: towards in vivo imaging of cellular-scale mechanics

    PubMed Central

    Fang, Qi; Curatolo, Andrea; Wijesinghe, Philip; Yeow, Yen Ling; Hamzah, Juliana; Noble, Peter B.; Karnowski, Karol; Sampson, David D.; Ganss, Ruth; Kim, Jun Ki; Lee, Woei M.; Kennedy, Brendan F.

    2017-01-01

    In this paper, we describe a technique capable of visualizing mechanical properties at the cellular scale deep in living tissue, by incorporating a gradient-index (GRIN)-lens micro-endoscope into an ultrahigh-resolution optical coherence elastography system. The optical system, after the endoscope, has a lateral resolution of 1.6 µm and an axial resolution of 2.2 µm. Bessel beam illumination and Gaussian mode detection are used to provide an extended depth-of-field of 80 µm, which is a 4-fold improvement over a fully Gaussian beam case with the same lateral resolution. Using this system, we demonstrate quantitative elasticity imaging of a soft silicone phantom containing a stiff inclusion and a freshly excised malignant murine pancreatic tumor. We also demonstrate qualitative strain imaging below the tissue surface on in situ murine muscle. The approach we introduce here can provide high-quality extended-focus images through a micro-endoscope with potential to measure cellular-scale mechanics deep in tissue. We believe this tool is promising for studying biological processes and disease progression in vivo. PMID:29188108

  17. Extended Wearing Trial of Trifield Lens Device for “Tunnel Vision”

    PubMed Central

    Woods, Russell L.; Giorgi, Robert G.; Berson, Eliot L.; Peli, Eli

    2009-01-01

    Severe visual field constriction (tunnel vision) impairs the ability to navigate and walk safely. We evaluated Trifield glasses as a mobility rehabilitation device for tunnel vision in an extended wearing trial. Twelve patients with tunnel vision (5 to 22 degrees wide) due to retinitis pigmentosa or choroideremia participated in the 5-visit wearing trial. To expand the horizontal visual field, one spectacle lens was fitted with two apex-to-apex prisms that vertically bisected the pupil on primary gaze. This provides visual field expansion at the expense of visual confusion (two objects with the same visual direction). Patients were asked to wear these spectacles as much as possible for the duration of the wearing trial (median 8, range 6 to 60, weeks). Clinical success (continued wear, indicating perceived overall benefit), visual field expansion, perceived direction and perceived visual ability were measured. Of 12 patients, 9 chose to continue wearing the Trifield glasses at the end of the wearing trial. Of those 9 patients, at long-term follow-up (35 to 78 weeks), 3 reported still wearing the Trifield glasses. Visual field expansion (median 18, range 9 to 38, degrees) was demonstrated for all patients. No patient demonstrated adaptation to the change in visual direction produced by the Trifield glasses (prisms). For difficulty with obstacles, some differences between successful and non-successful wearers were found. Trifield glasses provided reported benefits in obstacle avoidance to 7 of the 12 patients completing the wearing trial. Crowded environments were particularly difficult for most wearers. Possible reasons for long-term discontinuation and lack of adaptation to perceived direction are discussed. PMID:20444130

  18. Extended wearing trial of Trifield lens device for 'tunnel vision'.

    PubMed

    Woods, Russell L; Giorgi, Robert G; Berson, Eliot L; Peli, Eli

    2010-05-01

    Severe visual field constriction (tunnel vision) impairs the ability to navigate and walk safely. We evaluated Trifield glasses as a mobility rehabilitation device for tunnel vision in an extended wearing trial. Twelve patients with tunnel vision (5-22 degrees wide) due to retinitis pigmentosa or choroideremia participated in the 5-visit wearing trial. To expand the horizontal visual field, one spectacle lens was fitted with two apex-to-apex prisms that vertically bisected the pupil on primary gaze. This provides visual field expansion at the expense of visual confusion (two objects with the same visual direction). Patients were asked to wear these spectacles as much as possible for the duration of the wearing trial (median 8, range 6-60 weeks). Clinical success (continued wear, indicating perceived overall benefit), visual field expansion, perceived direction and perceived visual ability were measured. Of 12 patients, nine chose to continue wearing the Trifield glasses at the end of the wearing trial. Of those nine patients, at long-term follow-up (35-78 weeks), three reported still wearing the Trifield glasses. Visual field expansion (median 18, range 9-38 degrees) was demonstrated for all patients. No patient demonstrated adaptation to the change in visual direction produced by the Trifield glasses (prisms). For reported difficulty with obstacles, some differences between successful and non-successful wearers were found. Trifield glasses provided reported benefits in obstacle avoidance to 7 of the 12 patients completing the wearing trial. Crowded environments were particularly difficult for most wearers. Possible reasons for long-term discontinuation and lack of adaptation to perceived direction are discussed.

  19. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors.

    PubMed

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-12-22

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it can deal with aggressive 6 DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it should work well both for low- and high-altitude flight. In this paper, we present a state estimation technique by fusing long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts, a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and the relative state measurements (IMU, visual odometry), and is derived and discussed in detail. A long-range stereo visual odometry is proposed for high-altitude MAV odometry calculation by using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurements for the update of the state estimation. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for the aggressive, intermittent GPS and high-altitude MAV flight.
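    The loosely coupled fusion described above follows the standard Kalman predict/update pattern: the state is propagated with the relative measurements (IMU integral, visual odometry) and corrected with the absolute ones (GPS, barometer). The sketch below shows that pattern for a deliberately reduced one-dimensional altitude example with synthetic numbers; it is not the authors' stochastic-cloning EKF, which handles full 6-DOF states.

        # Minimal 1-D Kalman predict/update cycle illustrating loosely coupled
        # fusion: relative (odometry-like) measurements drive the prediction,
        # absolute (barometer-like) measurements drive the update. Synthetic
        # numbers; not the paper's full stochastic-cloning 6-DOF EKF.
        import numpy as np

        x = np.array([0.0])        # state: altitude (m)
        P = np.array([[1.0]])      # state covariance
        Q = np.array([[0.05]])     # process noise per prediction step
        R = np.array([[0.5]])      # barometric measurement noise
        H = np.array([[1.0]])      # measurement model: barometer observes altitude

        relative_climbs = [0.9, 1.1, 1.0, 0.8]     # odometry-style altitude increments
        barometer_readings = [1.2, 2.3, 2.9, 3.9]  # absolute altitude measurements

        for climb, z in zip(relative_climbs, barometer_readings):
            # Predict with the relative measurement.
            x = x + climb
            P = P + Q
            # Update with the absolute measurement.
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.array([z]) - H @ x)
            P = (np.eye(1) - K @ H) @ P
            print(f"altitude estimate: {x[0]:.2f} m (variance {P[0, 0]:.3f})")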

  20. Are evoked potentials in patients with adult-onset pompe disease indicative of clinically relevant central nervous system involvement?

    PubMed

    Wirsching, Andreas; Müller-Felber, Wolfgang; Schoser, Benedikt

    2014-08-01

    Pompe disease is a multisystem autosomal recessive glycogen storage disease. Autoptic findings in patients with classic infantile and late-onset Pompe disease have proven that accumulation of glycogen can also be found in the peripheral and central nervous system. To assess the functional role of these pathologic findings, multimodal sensory evoked potentials were analyzed. Serial recordings for brainstem auditory, visual, and somatosensory evoked potentials of 11 late-onset Pompe patients were reviewed. Data at the onset of the enzyme replacement therapy with alglucosidase alfa were compared with follow-up recordings at 12 and 24 months. Brainstem auditory evoked potentials showed a delayed peak I in 1/10 patients and an increased I-III and I-V interpeak latency in 1/10 patients, respectively. The III-V interpeak latencies were in the normal range. Visual evoked potentials were completely normal. Median somatosensory evoked potentials showed an extended interpeak latency in 3/9 patients. Wilcoxon tests comparing age-matched subgroups found significant differences in brainstem auditory evoked potentials and visual evoked potentials. We found that the majority of recordings for evoked potentials were within the ranges for standard values, therefore reflecting the lack of clinically relevant central nervous system involvement. Regular surveillance by means of evoked potentials does not seem to be appropriate in late-onset Pompe patients.

  1. OpenControl: a free opensource software for video tracking and automated control of behavioral mazes.

    PubMed

    Aguiar, Paulo; Mendonça, Luís; Galhardo, Vasco

    2007-10-15

    Operant animal behavioral tests require the interaction of the subject with sensors and actuators distributed in the experimental environment of the arena. In order to provide user-independent, reliable results and versatile control of these devices it is vital to use an automated control system. Commercial systems for control of animal mazes are usually based on software implementations that restrict their application to the proprietary hardware of the vendor. In this paper we present OpenControl: an open-source Visual Basic software package that permits a Windows-based computer to function as a system to run fully automated behavioral experiments. OpenControl integrates video-tracking of the animal, definition of zones from the video signal for real-time assignment of animal position in the maze, control of the maze actuators from either hardware sensors or from the online video tracking, and recording of experimental data. Bidirectional communication with the maze hardware is achieved through the parallel-port interface, without the need for expensive AD-DA cards, while video tracking is attained using an inexpensive Firewire digital camera. The OpenControl Visual Basic code is structurally general and versatile, allowing it to be easily modified or extended to fulfill specific experimental protocols and custom hardware configurations. The Visual Basic environment was chosen in order to allow experimenters to easily adapt the code and expand it to meet their own needs.
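    The real-time zone assignment step described above amounts to testing the tracked animal coordinates against user-defined regions on every frame. A minimal Python version of that logic is sketched below with rectangular zones and hypothetical coordinates; OpenControl itself is written in Visual Basic and additionally drives the maze hardware through the parallel port.

        # Assigning a tracked animal position to user-defined maze zones on each
        # frame (rectangular zones and hypothetical coordinates for illustration;
        # OpenControl itself is Visual Basic and also drives the maze hardware).
        # zone name -> (x_min, y_min, x_max, y_max) in image pixels
        ZONES = {
            "start_box":   (0, 0, 100, 100),
            "open_arm":    (100, 0, 400, 100),
            "reward_area": (400, 0, 500, 100),
        }

        def zone_of(x, y):
            """Return the first zone containing the tracked point, or None."""
            for name, (x0, y0, x1, y1) in ZONES.items():
                if x0 <= x < x1 and y0 <= y < y1:
                    return name
            return None

        # Hypothetical per-frame tracking output (x, y centroid of the animal).
        track = [(20, 50), (150, 60), (420, 40)]
        for frame, (x, y) in enumerate(track):
            print(f"frame {frame}: position ({x}, {y}) -> zone {zone_of(x, y)}")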

  2. A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study

    PubMed Central

    Weis, Susanne; Kircher, Tilo

    2012-01-01

    In a natural setting, speech is often accompanied by gestures. As language, speech-accompanying iconic gestures to some extent convey semantic information. However, if comprehension of the information contained in both the auditory and visual modality depends on same or different brain-networks is quite unknown. In this fMRI study, we aimed at identifying the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips showing an actor who either performed speech (S, acoustic) or gestures (G, visual) in more (+) or less (−) meaningful varieties. In the experimental conditions familiar speech or isolated iconic gestures were presented; during the visual control condition the volunteers watched meaningless gestures (G−), while during the acoustic control condition a foreign language was presented (S−). The conjunction of the visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus, and included bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network the brain's core language system is to take too narrow a view. Our results rather indicate that these regions constitute a supramodal semantic processing network. PMID:23226488

  3. Hypothalamic Projections to the Optic Tectum in Larval Zebrafish

    PubMed Central

    Heap, Lucy A.; Vanwalleghem, Gilles C.; Thompson, Andrew W.; Favre-Bulle, Itia; Rubinsztein-Dunlop, Halina; Scott, Ethan K.

    2018-01-01

    The optic tectum of larval zebrafish is an important model for understanding visual processing in vertebrates. The tectum has been traditionally viewed as dominantly visual, with a majority of studies focusing on the processes by which tectal circuits receive and process retinally-derived visual information. Recently, a handful of studies have shown a much more complex role for the optic tectum in larval zebrafish, and anatomical and functional data from these studies suggest that this role extends beyond the visual system, and beyond the processing of exclusively retinal inputs. Consistent with this evolving view of the tectum, we have used a Gal4 enhancer trap line to identify direct projections from rostral hypothalamus (RH) to the tectal neuropil of larval zebrafish. These projections ramify within the deepest laminae of the tectal neuropil, the stratum album centrale (SAC)/stratum griseum periventriculare (SPV), and also innervate strata distinct from those innervated by retinal projections. Using optogenetic stimulation of the hypothalamic projection neurons paired with calcium imaging in the tectum, we find rebound firing in tectal neurons consistent with hypothalamic inhibitory input. Our results suggest that tectal processing in larval zebrafish is modulated by hypothalamic inhibitory inputs to the deep tectal neuropil. PMID:29403362

  4. Hypothalamic Projections to the Optic Tectum in Larval Zebrafish.

    PubMed

    Heap, Lucy A; Vanwalleghem, Gilles C; Thompson, Andrew W; Favre-Bulle, Itia; Rubinsztein-Dunlop, Halina; Scott, Ethan K

    2017-01-01

    The optic tectum of larval zebrafish is an important model for understanding visual processing in vertebrates. The tectum has been traditionally viewed as dominantly visual, with a majority of studies focusing on the processes by which tectal circuits receive and process retinally-derived visual information. Recently, a handful of studies have shown a much more complex role for the optic tectum in larval zebrafish, and anatomical and functional data from these studies suggest that this role extends beyond the visual system, and beyond the processing of exclusively retinal inputs. Consistent with this evolving view of the tectum, we have used a Gal4 enhancer trap line to identify direct projections from rostral hypothalamus (RH) to the tectal neuropil of larval zebrafish. These projections ramify within the deepest laminae of the tectal neuropil, the stratum album centrale (SAC)/stratum griseum periventriculare (SPV), and also innervate strata distinct from those innervated by retinal projections. Using optogenetic stimulation of the hypothalamic projection neurons paired with calcium imaging in the tectum, we find rebound firing in tectal neurons consistent with hypothalamic inhibitory input. Our results suggest that tectal processing in larval zebrafish is modulated by hypothalamic inhibitory inputs to the deep tectal neuropil.

  5. The strength of attentional biases reduces as visual short-term memory load increases

    PubMed Central

    Shimi, A.

    2013-01-01

    Despite our visual system receiving irrelevant input that competes with task-relevant signals, we are able to pursue our perceptual goals. Attention enhances our visual processing by biasing the processing of the input that is relevant to the task at hand. The top-down signals enabling these biases are therefore important for regulating lower level sensory mechanisms. In three experiments, we examined whether we apply similar biases to successfully maintain information in visual short-term memory (VSTM). We presented participants with targets alongside distracters and we graded their perceptual similarity to vary the extent to which they competed. Experiments 1 and 2 showed that the more items held in VSTM before the onset of the distracters, the more perceptually distinct the distracters needed to be for participants to retain the target accurately. Experiment 3 extended these behavioral findings by demonstrating that the perceptual similarity between target and distracters exerted a significantly greater effect on occipital alpha amplitudes, depending on the number of items already held in VSTM. The trade-off between VSTM load and target-distracter competition suggests that VSTM and perceptual competition share a partially overlapping mechanism, namely top-down inputs into sensory areas. PMID:23576694

  6. Coherent Motion Sensitivity Predicts Individual Differences in Subtraction

    ERIC Educational Resources Information Center

    Boets, Bart; De Smedt, Bert; Ghesquiere, Pol

    2011-01-01

    Recent findings suggest deficits in coherent motion sensitivity, an index of visual dorsal stream functioning, in children with poor mathematical skills or dyscalculia, a specific learning disability in mathematics. We extended these data using a longitudinal design to unravel whether visual dorsal stream functioning is able to "predict"…

  7. Extending Our Vision: Access to Inclusive Dance Education for People with Visual Impairment

    ERIC Educational Resources Information Center

    Seham, Jenny; Yeo, Anna J.

    2015-01-01

    Environmental, organizational and attitudinal obstacles continue to prevent people with vision loss from meaningfully engaging in dance education and performance. This article addresses the societal disabilities that handicap access to dance education for the blind. Although much of traditional dance instruction relies upon visual cuing and…

  8. Visual Servoing-Based Nanorobotic System for Automated Electrical Characterization of Nanotubes inside SEM.

    PubMed

    Ding, Huiyang; Shi, Chaoyang; Ma, Li; Yang, Zhan; Wang, Mingyu; Wang, Yaqiong; Chen, Tao; Sun, Lining; Toshio, Fukuda

    2018-04-08

    The maneuvering and electrical characterization of nanotubes inside a scanning electron microscope (SEM) has historically been time-consuming and laborious for operators. Before the development of automated nanomanipulation-enabled techniques, the pick-and-place and characterization of nano-objects were largely incomplete and performed manually. In this paper, a dual-probe nanomanipulation system with vision-based feedback was demonstrated to automatically perform 3D nanomanipulation tasks and to investigate the electrical characterization of nanotubes. The XY-positions of Atomic Force Microscope (AFM) cantilevers and individual carbon nanotubes (CNTs) were precisely recognized via a series of image processing operations. A coarse-to-fine positioning strategy in the Z-direction was applied through a combination of a sharpness-based depth estimation method and a contact-detection method. Nanorobotic motion speed regulated by the imaging magnification helped to improve working efficiency and reliability. Additionally, we proposed automated alignment of the manipulator axes by visually tracking the movement trajectory of the end effector. The experimental results indicate the system's capability for automated measurement of the electrical characteristics of CNTs. Furthermore, the automated nanomanipulation system has the potential to be extended to other nanomanipulation tasks.
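
    The coarse-to-fine Z-positioning described above relies on an image-sharpness (focus) measure evaluated while sweeping the manipulator in Z. The sketch below illustrates that idea under stated assumptions: the gradient-energy sharpness metric, the step sizes, and the synthetic capture_image() camera are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch of sharpness-based coarse-to-fine Z positioning.
# The gradient-energy sharpness metric, step sizes, and the synthetic
# capture_image() camera are assumptions, not the authors' implementation.
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Gradient-energy focus measure: in-focus images score higher."""
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def best_z(capture_image, z_lo, z_hi, coarse_step, fine_step):
    """Sweep Z coarsely, then refine around the sharpest coarse position."""
    coarse = np.arange(z_lo, z_hi, coarse_step)
    z0 = coarse[int(np.argmax([sharpness(capture_image(z)) for z in coarse]))]
    fine = np.arange(z0 - coarse_step, z0 + coarse_step, fine_step)
    return float(fine[int(np.argmax([sharpness(capture_image(z)) for z in fine]))])

# Demo with a toy "camera": image contrast (and hence sharpness) falls off
# as the probe moves away from the focus plane at z = 12.3.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))

def capture_image(z: float, z_focus: float = 12.3) -> np.ndarray:
    contrast = np.exp(-abs(z - z_focus))
    return contrast * scene + (1.0 - contrast) * scene.mean()

print(round(best_z(capture_image, 0.0, 25.0, coarse_step=2.0, fine_step=0.2), 1))
```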

  9. Cardiovascular System Sonographic Evaluation Algorithm: A New Sonographic Algorithm for Evaluation of the Fetal Cardiovascular System in the Second Trimester.

    PubMed

    De León-Luis, Juan; Bravo, Coral; Gámez, Francisco; Ortiz-Quintana, Luis

    2015-07-01

    To evaluate the reproducibility and feasibility of the new cardiovascular system sonographic evaluation algorithm for studying the extended fetal cardiovascular system, including the portal, thymic, and supra-aortic areas, in the second trimester of pregnancy (19-22 weeks). We performed a cross-sectional study of pregnant women with healthy fetuses (singleton and twin pregnancies) attending our center from March to August 2011. The extended fetal cardiovascular system was evaluated by following the new algorithm, a sequential acquisition of axial views comprising the following (caudal to cranial): I, portal sinus; II, ductus venosus; III, hepatic veins; IV, 4-chamber view; V, left ventricular outflow tract; VI, right ventricular outflow tract; VII, 3-vessel and trachea view; VIII, thy-box; and IX, subclavian arteries. Interobserver agreement on the feasibility and exploration time was estimated in a subgroup of patients. The feasibility and exploration time were determined for the main cohort. Maternal, fetal, and sonographic factors affecting both features were evaluated. Interobserver agreement was excellent for all views except view VIII; the difference in the mean exploration time between observers was 1.5 minutes (95% confidence interval, 0.7-2.1 minutes; P < .05). In 184 fetuses (mean gestational age ± SD, 20 ± 0.6 weeks), the feasibility of all views was close to 99% except view VIII (88.7%). The complete feasibility of the algorithm was 81.5%. The mean exploration time was 5.6 ± 4.2 minutes. Only the occiput anterior fetal position was associated with a lower frequency of visualization and a longer exploration time (P < .05). The cardiovascular system sonographic evaluation algorithm is a reproducible and feasible approach for exploration of the extended fetal cardiovascular system in a second-trimester scan. It can be used to explore these areas in normal and abnormal conditions and provides an integrated image of extended fetal cardiovascular anatomy. © 2015 by the American Institute of Ultrasound in Medicine.

  10. Systematic analysis of signaling pathways using an integrative environment.

    PubMed

    Visvanathan, Mahesh; Breit, Marc; Pfeifer, Bernhard; Baumgartner, Christian; Modre-Osprian, Robert; Tilg, Bernhard

    2007-01-01

    Understanding the biological processes of signaling pathways as a whole system requires an integrative software environment with comprehensive capabilities. The environment should combine tools for pathway design, visualization, and simulation with a knowledge base concerning signaling pathways. In this paper we introduce a new integrative environment for the systematic analysis of signaling pathways. This system includes environments for pathway design, visualization, and simulation, together with a knowledge base that combines biological and modeling information concerning signaling pathways, providing a basic understanding of the biological system, its structure, and its functioning. The system is designed with a client-server architecture. It contains a pathway-design environment and a simulation environment as upper layers, with a relational knowledge base as the underlying layer. The TNFa-mediated NF-kB signal transduction pathway model was designed and tested using our integrative framework, and was also useful for defining the structure of the knowledge base. Sensitivity analysis of this specific pathway was performed, providing simulation data; the model was then extended, showing promising initial results. The proposed system offers a holistic view of pathways containing both biological and modeling data. It will help us to perform biological interpretation of the simulation results and thus contribute to a better understanding of the biological system for drug identification.
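
    As a rough illustration of the kind of sensitivity analysis mentioned above, the sketch below perturbs each rate constant of a toy two-species activation/degradation ODE model and reports the normalized change in a model output. The model, parameter values, and output are invented for the example and are unrelated to the TNFa-mediated NF-kB model used in the described system.

```python
# Toy illustration of local, finite-difference parameter sensitivity analysis
# on an ODE pathway model. The two-species model and all values are
# hypothetical, not the TNFa/NF-kB model used by the described system.
import numpy as np
from scipy.integrate import odeint

def model(y, t, k_act, k_deg):
    signal, response = y
    d_signal = -k_act * signal
    d_response = k_act * signal - k_deg * response
    return [d_signal, d_response]

def peak_response(params, y0=(1.0, 0.0), t=np.linspace(0, 20, 400)):
    traj = odeint(model, y0, t, args=tuple(params))
    return traj[:, 1].max()          # output of interest: peak of the response species

params = np.array([0.8, 0.3])        # k_act, k_deg
base = peak_response(params)

for i, name in enumerate(["k_act", "k_deg"]):
    bumped = params.copy()
    bumped[i] *= 1.01                 # +1% perturbation
    sens = (peak_response(bumped) - base) / (0.01 * base)   # normalized sensitivity
    print(f"sensitivity of peak response to {name}: {sens:+.2f}")
```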

  11. Features of the Retinotopic Representation in the Visual Wulst of a Laterally Eyed Bird, the Zebra Finch (Taeniopygia guttata)

    PubMed Central

    Michael, Neethu; Löwel, Siegrid; Bischof, Hans-Joachim

    2015-01-01

    The visual wulst of the zebra finch comprises at least two retinotopic maps of the contralateral eye. As yet, it is not known how much of the visual field is represented in the wulst neuronal maps, how the organization of the maps is related to the retinal architecture, and how information from the ipsilateral eye is involved in the activation of the wulst. Here, we have used autofluorescent flavoprotein imaging and classical anatomical methods to investigate such characteristics of the most posterior map of the multiple retinotopic representations. We found that the visual wulst can be activated by visual stimuli from a large part of the visual field of the contralateral eye. Horizontally, the visual field representation extended from -5° beyond the beak tip up to +125° laterally. Vertically, a small strip from -10° below to about +25° above the horizon activated the visual wulst. Although retinal ganglion cells had a much higher density around the fovea and along a strip extending from the fovea towards the beak tip, these areas were not overrepresented in the wulst map. The wulst area activated from the foveal region of the ipsilateral eye, overlapped substantially with the middle of the three contralaterally activated regions in the visual wulst, and partially with the other two. Visual wulst activity evoked by stimulation of the frontal visual field was stronger with contralateral than with binocular stimulation. This confirms earlier electrophysiological studies indicating an inhibitory influence of the activation of the ipsilateral eye on wulst activity elicited by stimulating the contralateral eye. The lack of a foveal overrepresentation suggests that identification of objects may not be the primary task of the zebra finch visual wulst. Instead, this brain area may be involved in the processing of visual information necessary for spatial orientation. PMID:25853253

  12. Optic nerve dysfunction during gravity inversion. Visual field abnormalities.

    PubMed

    Sanborn, G E; Friberg, T R; Allen, R

    1987-06-01

    Inversion in a head-down position (gravity inversion) results in an intraocular pressure of 35 to 40 mm Hg in normal subjects. We used computerized static perimetry to measure the visual fields of normal subjects during gravity inversion. There were no visual field changes in the central 6 degrees of the visual field compared with the baseline (preinversion) values. However, when the central 30 degrees of the visual field was tested, reversible visual field defects were found in 11 of 19 eyes. We believe that the substantial elevation of intraocular pressure during gravity inversion may pose potential risks to the eyes, and we recommend that inversion for extended periods of time be avoided.

  13. Line Scanning Thermography for Rapid Nondestructive Inspection of Large Scale Composites

    NASA Astrophysics Data System (ADS)

    Chung, S.; Ley, O.; Godinez, V.; Bandos, B.

    2011-06-01

    As next generation structures are utilizing larger amounts of composite materials, a rigorous and reliable method is needed to inspect these structures in order to prevent catastrophic failure and extend service life. Current inspection methods, such as ultrasonic, generally require extended down time and man hours as they are typically carried out via point-by-point measurements. A novel Line Scanning Thermography (LST) System has been developed for the non-contact, large-scale field inspection of composite structures with faster scanning times than conventional thermography systems. LST is a patented dynamic thermography technique where the heat source and thermal camera move in tandem, which allows the continuous scan of long surfaces without the loss of resolution. The current system can inspect an area of 10 in² per second, and has a resolution of 0.05 × 0.03 in². Advanced data gathering protocols have been implemented for near-real time damage visualization and post-analysis algorithms for damage interpretation. The system has been used to successfully detect defects (delamination, dry areas) in fiber-reinforced composite sandwich panels for Navy applications, as well as impact damage in composite missile cases and armor ceramic panels.

  14. A pose estimation method for unmanned ground vehicles in GPS denied environments

    NASA Astrophysics Data System (ADS)

    Tamjidi, Amirhossein; Ye, Cang

    2012-06-01

    This paper presents a pose estimation method based on the 1-Point RANSAC EKF (Extended Kalman Filter) framework. The method fuses depth data from a LIDAR and visual data from a monocular camera to estimate the pose of an Unmanned Ground Vehicle (UGV) in a GPS-denied environment. Its estimation framework continuously updates the vehicle's 6D pose state and temporary estimates of the extracted visual features' 3D positions. In contrast to conventional EKF-SLAM (Simultaneous Localization And Mapping) frameworks, the proposed method discards feature estimates from the extended state vector once they are no longer observed for several steps. As a result, the extended state vector always maintains a reasonable size that is suitable for online calculation. The fusion of laser and visual data is performed both in the feature-initialization part of the EKF-SLAM process and in the motion-prediction stage. A RANSAC pose calculation procedure is devised to produce a pose estimate for the motion model. The proposed method has been successfully tested on the Ford campus LIDAR-Vision dataset. The results are compared with the ground truth data of the dataset, and the estimation error is ~1.9% of the path length.
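
    The distinctive state-management step described above (dropping features from the extended state vector once they go unobserved for several steps) can be sketched as follows. The block sizes follow the description (a 6D pose plus 3D feature positions); the bookkeeping details are illustrative rather than the paper's exact implementation.

```python
# Sketch of the state-management idea described above: features that have not
# been observed for several steps are dropped from the extended EKF state
# vector and covariance so the filter stays a bounded, online-friendly size.
# The bookkeeping details are illustrative, not the paper's implementation.
import numpy as np

POSE_DIM, FEAT_DIM, MAX_MISSED = 6, 3, 5

def prune_features(x, P, missed_counts):
    """Remove 3-D feature blocks whose 'missed' count exceeds MAX_MISSED.

    x: state vector  [pose(6), f1(3), f2(3), ...]
    P: covariance matrix matching x
    missed_counts: one entry per feature
    """
    keep = np.arange(POSE_DIM).tolist()
    kept_counts = []
    for i, missed in enumerate(missed_counts):
        if missed <= MAX_MISSED:
            start = POSE_DIM + i * FEAT_DIM
            keep.extend(range(start, start + FEAT_DIM))
            kept_counts.append(missed)
    keep = np.array(keep)
    return x[keep], P[np.ix_(keep, keep)], kept_counts

# Example: a pose plus three features, the second unseen for too long.
n = POSE_DIM + 3 * FEAT_DIM
x = np.arange(n, dtype=float)
P = np.eye(n)
x2, P2, counts = prune_features(x, P, missed_counts=[1, 9, 0])
print(x2.shape, P2.shape, counts)   # (12,) (12, 12) [1, 0]
```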

  15. Interdigitated Color- and Disparity-Selective Columns within Human Visual Cortical Areas V2 and V3

    PubMed Central

    Polimeni, Jonathan R.; Tootell, Roger B.H.

    2016-01-01

    In nonhuman primates (NHPs), secondary visual cortex (V2) is composed of repeating columnar stripes, which are evident in histological variations of cytochrome oxidase (CO) levels. Distinctive “thin” and “thick” stripes of dark CO staining reportedly respond selectively to stimulus variations in color and binocular disparity, respectively. Here, we first tested whether similar color-selective or disparity-selective stripes exist in human V2. If so, available evidence predicts that such stripes should (1) radiate “outward” from the V1–V2 border, (2) interdigitate, (3) differ from each other in both thickness and length, (4) be spaced ∼3.5–4 mm apart (center-to-center), and, perhaps, (5) have segregated functional connections. Second, we tested whether analogous segregated columns exist in a “next-higher” tier area, V3. To answer these questions, we used high-resolution fMRI (1 × 1 × 1 mm3) at high field (7 T), presenting color-selective or disparity-selective stimuli, plus extensive signal averaging across multiple scan sessions and cortical surface-based analysis. All hypotheses were confirmed. V2 stripes and V3 columns were reliably localized in all subjects. The two stripe/column types were largely interdigitated (e.g., nonoverlapping) in both V2 and V3. Color-selective stripes differed from disparity-selective stripes in both width (thickness) and length. Analysis of resting-state functional connections (eyes closed) showed a stronger correlation between functionally alike (compared with functionally unlike) stripes/columns in V2 and V3. These results revealed a fine-scale segregation of color-selective or disparity-selective streams within human areas V2 and V3. Together with prior evidence from NHPs, this suggests that two parallel processing streams extend from visual subcortical regions through V1, V2, and V3. SIGNIFICANCE STATEMENT In current textbooks and reviews, diagrams of cortical visual processing highlight two distinct neural-processing streams within the first and second cortical areas in monkeys. Two major streams consist of segregated cortical columns that are selectively activated by either color or ocular interactions. Because such cortical columns are so small, they were not revealed previously by conventional imaging techniques in humans. Here we demonstrate that such segregated columnar systems exist in humans. We find that, in humans, color versus binocular disparity columns extend one full area further, into the third visual area. Our approach can be extended to reveal and study additional types of columns in human cortex, perhaps including columns underlying more cognitive functions. PMID:26865609

  16. A new version of Visual tool for estimating the fractal dimension of images

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Felea, D.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Stan, E.; Esanu, T.

    2010-04-01

    This work presents a new version of a Visual Basic 6.0 application for estimating the fractal dimension of images (Grossu et al., 2009 [1]). The earlier version was limited to bi-dimensional sets of points stored in bitmap files. The application was extended to work also with comma-separated-values files and three-dimensional images.

    New version program summary
    Program title: Fractal Analysis v02
    Catalogue identifier: AEEG_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEG_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 9999
    No. of bytes in distributed program, including test data, etc.: 4 366 783
    Distribution format: tar.gz
    Programming language: MS Visual Basic 6.0
    Computer: PC
    Operating system: MS Windows 98 or later
    RAM: 30 M
    Classification: 14
    Catalogue identifier of previous version: AEEG_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 1999
    Does the new version supersede the previous version?: Yes
    Nature of problem: Estimating the fractal dimension of 2D and 3D images.
    Solution method: Optimized implementation of the box-counting algorithm.
    Reasons for new version: The previous version was limited to bitmap image files. The new application was extended in order to work with objects stored in comma-separated-values (csv) files. The main advantages are: easier integration with other applications (csv is a widely used, simple text file format); fewer resources consumed and improved performance (only the information of interest, the "black points", is stored); higher resolution (the point coordinates are loaded into Visual Basic double variables [2]); and the possibility of storing three-dimensional objects (e.g. the 3D Sierpinski gasket). In this version the optimized box-counting algorithm [1] was extended to the three-dimensional case.
    Summary of revisions: The application interface was changed from SDI (single document interface) to MDI (multi-document interface). One form was added in order to provide a graphical user interface for the new functionalities (fractal analysis of 2D and 3D images stored in csv files).
    Additional comments: User-friendly graphical interface; easy deployment mechanism.
    Running time: In the first approximation, the algorithm is linear.
    References: [1] I.V. Grossu, C. Besliu, M.V. Rusu, Al. Jipa, C.C. Bordeianu, D. Felea, Comput. Phys. Comm. 180 (2009) 1999-2001. [2] F. Balena, Programming Microsoft Visual Basic 6.0, Microsoft Press, US, 1999.
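
    The solution method named above is the box-counting algorithm. As a minimal illustration (in Python rather than the tool's Visual Basic 6.0), the sketch below counts occupied boxes of a 2D or 3D point set, as it would look after loading a csv file, over several box sizes and estimates the fractal dimension from the slope of log N(ε) versus log(1/ε); the box sizes and the Sierpinski-triangle test set are illustrative.

```python
# Minimal NumPy sketch of the box-counting method named above (the actual tool
# is written in Visual Basic 6.0). Points are rows of coordinates, as they
# would be after loading a csv file; box sizes are illustrative.
import numpy as np

def box_counting_dimension(points: np.ndarray, n_scales: int = 6) -> float:
    """Estimate the fractal dimension of a 2D or 3D point set."""
    pts = points - points.min(axis=0)                    # shift into the positive orthant
    extent = pts.max()                                   # normalize by the largest extent
    sizes = extent / (2 ** np.arange(1, n_scales + 1))   # box edge lengths

    counts = []
    for eps in sizes:
        boxes = np.floor(pts / eps).astype(int)          # box index of each point
        counts.append(len(np.unique(boxes, axis=0)))     # occupied boxes N(eps)

    # Fractal dimension = slope of log N(eps) versus log(1/eps).
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return float(slope)

# Example: Sierpinski triangle (expected dimension log 3 / log 2, about 1.58).
rng = np.random.default_rng(1)
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
p = np.array([0.1, 0.1])
pts = []
for _ in range(20000):
    p = (p + vertices[rng.integers(3)]) / 2   # chaos-game iteration
    pts.append(p)
print(round(box_counting_dimension(np.array(pts)), 2))
```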

  17. Concept, design and analysis of a large format autostereoscopic display system

    NASA Astrophysics Data System (ADS)

    Knocke, F.; de Jongh, R.; Frömel, M.

    2005-09-01

    Autostereoscopic display devices with large visual field are of importance in a number of applications such as computer aided design projects, technical education, and military command systems. Typical requirements for such systems are, aside from the large visual field, a large viewing zone, a high level of image brightness, and an extended depth of field. Additional appliances such as specialized eyeglasses or head-trackers are disadvantageous for the aforementioned applications. We report on the design and prototyping of an autostereoscopic display system on the basis of projection-type one-step unidirectional holography. The prototype consists of a hologram holder, an illumination unit, and a special direction-selective screen. Reconstruction light is provided by a 2W frequency-doubled Nd:YVO4 laser. The production of stereoscopic hologram stripes on photopolymer is carried out on a special origination setup. The prototype has a screen size of 180cm × 90cm and provides a visual field of 29° when viewed from 3.6 meters. Due to the coherent reconstruction, a depth of field of several meters is achievable. Up to 18 hologram stripes can be arranged on the holder to permit a rapid switch between a series of motifs or views. Both computer generated image sequences and digital camera photos may serve as input frames. However, a comprehensive pre-distortion must be performed in order to account for optical distortion and several other geometrical factors. The corresponding computations are briefly summarized below. The performance of the system is analyzed, aspects of beam-shaping and mechanical design are discussed and photographs of early reconstructions are presented.

  18. An autism-associated serotonin transporter variant disrupts multisensory processing.

    PubMed

    Siemann, J K; Muller, C L; Forsberg, C G; Blakely, R D; Veenstra-VanderWeele, J; Wallace, M T

    2017-03-21

    Altered sensory processing is observed in many children with autism spectrum disorder (ASD), with growing evidence that these impairments extend to the integration of information across the different senses (that is, multisensory function). The serotonin system has an important role in sensory development and function, and alterations of serotonergic signaling have been suggested to have a role in ASD. A gain-of-function coding variant in the serotonin transporter (SERT) associates with sensory aversion in humans, and when expressed in mice produces traits associated with ASD, including disruptions in social and communicative function and repetitive behaviors. The current study set out to test whether these mice also exhibit changes in multisensory function when compared with wild-type (WT) animals on the same genetic background. Mice were trained to respond to auditory and visual stimuli independently before being tested under visual, auditory and paired audiovisual (multisensory) conditions. WT mice exhibited significant gains in response accuracy under audiovisual conditions. In contrast, although the SERT mutant animals learned the auditory and visual tasks comparably to WT littermates, they failed to show behavioral gains under multisensory conditions. We believe these results provide the first behavioral evidence of multisensory deficits in a genetic mouse model related to ASD and implicate the serotonin system in multisensory processing and in the multisensory changes seen in ASD.

  19. Hierarchical neural network model of the visual system determining figure/ground relation

    NASA Astrophysics Data System (ADS)

    Kikuchi, Masayuki

    2017-07-01

    One of the most important functions of visual perception in the brain is figure/ground interpretation of input images. Figural regions in a 2D image, which correspond to objects in 3D space, are distinguished from the background region extending behind the object. Previously the author proposed a neural network model of figure/ground separation built on the principle that local geometric features such as curvatures and outer angles at corners are extracted and propagated along the input contour in a single-layer network (Kikuchi & Akashi, 2001). However, this processing principle has the defect that signal propagation requires many iterations, despite the fact that the actual visual system determines the figure/ground relation within a short period (Zhou et al., 2000). In order to speed up the determination of figure/ground, this study incorporates a hierarchical architecture into the previous model. The effect of this hierarchization on computation time was confirmed by simulation: as the number of layers increased, the required computation time decreased. However, the speed-up saturated once the number of layers reached a certain point. This study attempted to explain the saturation using the notion of the average distance between vertices from complex network theory, and succeeded in reproducing the saturation effect in computer simulation.

  20. Multispectral photoacoustic tomography for detection of small tumors inside biological tissues

    NASA Astrophysics Data System (ADS)

    Hirasawa, Takeshi; Okawa, Shinpei; Tsujita, Kazuhiro; Kushibiki, Toshihiro; Fujita, Masanori; Urano, Yasuteru; Ishihara, Miya

    2018-02-01

    Visualization of small tumors inside biological tissue is important in cancer treatment because it promotes accurate surgical resection and enables monitoring of therapeutic effect. For sensitive detection of tumors, we have been developing a photoacoustic (PA) imaging technique to visualize tumor-specific contrast agents, and have already succeeded in imaging a subcutaneous tumor in a mouse using the contrast agents. To image tumors inside biological tissues, an extension of imaging depth and an improvement in sensitivity were required. In this study, to extend imaging depth, we developed a PA tomography (PAT) system that can image the entire cross section of a mouse. To improve sensitivity, we considered the use of a P(VDF-TrFE) linear-array acoustic sensor that can detect PA signals over a wide range of frequencies. Because PA signals produced by optical absorbers of low absorbance shift to lower frequencies, we hypothesized that detection of low-frequency PA signals improves sensitivity to low-absorbance optical absorbers. We developed a PAT system with both a PZT linear-array acoustic sensor and the P(VDF-TrFE) sensor, and performed experiments using tissue-mimicking phantoms to evaluate the lower detection limits of absorbance. As a result, PAT images calculated from the low-frequency components of PA signals detected by the P(VDF-TrFE) sensor could visualize optical absorbers with lower absorbance.

  1. Effects of action video game training on visual working memory.

    PubMed

    Blacker, Kara J; Curby, Kim M; Klobusicky, Elizabeth; Chein, Jason M

    2014-10-01

    The ability to hold visual information in mind over a brief delay is critical for acquiring information and navigating a complex visual world. Despite the ubiquitous nature of visual working memory (VWM) in our everyday lives, this system is fundamentally limited in capacity. Therefore, the potential to improve VWM through training is a growing area of research. An emerging body of literature suggests that extensive experience playing action video games yields a myriad of perceptual and attentional benefits. Several lines of converging work suggest that action video game play may influence VWM as well. The current study utilized a training paradigm to examine whether action video games cause improvements to the quantity and/or the quality of information stored in VWM. The results suggest that VWM capacity, as measured by a change detection task, is increased after action video game training, as compared with training on a control game, and that some improvement to VWM precision occurs with action game training as well. However, these findings do not appear to extend to a complex span measure of VWM, which is often thought to tap into higher-order executive skills. The VWM improvements seen in individuals trained on an action video game cannot be accounted for by differences in motivation or engagement, differential expectations, or baseline differences in demographics as compared with the control group used. In sum, action video game training represents a potentially unique and engaging platform by which this severely capacity-limited VWM system might be enhanced.

  2. ETE: a python Environment for Tree Exploration.

    PubMed

    Huerta-Cepas, Jaime; Dopazo, Joaquín; Gabaldón, Toni

    2010-01-13

    Many bioinformatics analyses, ranging from gene clustering to phylogenetics, produce hierarchical trees as their main result. These are used to represent the relationships among different biological entities, thus facilitating their analysis and interpretation. A number of standalone programs are available that focus on tree visualization or that perform specific analyses on them. However, such applications are rarely suitable for large-scale surveys, in which a higher level of automation is required. Currently, many genome-wide analyses rely on tree-like data representation and hence there is a growing need for scalable tools to handle tree structures at large scale. Here we present the Environment for Tree Exploration (ETE), a python programming toolkit that assists in the automated manipulation, analysis and visualization of hierarchical trees. ETE libraries provide a broad set of tree handling options as well as specific methods to analyze phylogenetic and clustering trees. Among other features, ETE allows for the independent analysis of tree partitions, has support for the extended newick format, provides an integrated node annotation system and permits to link trees to external data such as multiple sequence alignments or numerical arrays. In addition, ETE implements a number of built-in analytical tools, including phylogeny-based orthology prediction and cluster validation techniques. Finally, ETE's programmable tree drawing engine can be used to automate the graphical rendering of trees with customized node-specific visualizations. ETE provides a complete set of methods to manipulate tree data structures that extends current functionality in other bioinformatic toolkits of a more general purpose. ETE is free software and can be downloaded from http://ete.cgenomics.org.
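
    A small usage sketch of the tree-handling API described above is given below. It is written against the later ete3 package naming (the article distributes ETE from http://ete.cgenomics.org), and while the methods shown exist in ete3, exact signatures in the original release may differ.

```python
# A small usage sketch of ETE's tree-handling API, written against the later
# "ete3" package naming; method names below exist in ete3, but signatures in
# the original release distributed with the paper may differ.
from ete3 import Tree

# Load a tree from a Newick string (files and extended Newick are also supported).
t = Tree("((Hsa_A:1,Ptr_A:1):2,(Mmu_B:1.5,Rno_B:1.5):1.5);")

# Annotate nodes using ETE's feature system, e.g. flag primate leaves.
for leaf in t.get_leaves():
    leaf.add_feature("is_primate", leaf.name.startswith(("Hsa", "Ptr")))

# Independent analysis of tree partitions: traverse and inspect each subtree.
for node in t.traverse("postorder"):
    if not node.is_leaf():
        print("clade with leaves:", node.get_leaf_names())

# Text rendering and round-tripping back to Newick.
print(t.get_ascii(show_internal=False))
print(t.write())
```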

  3. Dose-Response Calculator for ArcGIS

    USGS Publications Warehouse

    Hanser, Steven E.; Aldridge, Cameron L.; Leu, Matthias; Nielsen, Scott E.

    2011-01-01

    The Dose-Response Calculator for ArcGIS is a tool that extends the Environmental Systems Research Institute (ESRI) ArcGIS 10 Desktop application to aid with the visualization of relationships between two raster GIS datasets. A dose-response curve is a line graph commonly used in medical research to examine the effects of different dosage rates of a drug or chemical (for example, carcinogen) on an outcome of interest (for example, cell mutations) (Russell and others, 1982). Dose-response curves have recently been used in ecological studies to examine the influence of an explanatory dose variable (for example, percentage of habitat cover, distance to disturbance) on a predicted response (for example, survival, probability of occurrence, abundance) (Aldridge and others, 2008). These dose curves have been created by calculating the predicted response value from a statistical model at different levels of the explanatory dose variable while holding values of other explanatory variables constant. Curves (plots) developed using the Dose-Response Calculator overcome the need to hold variables constant by using values extracted from the predicted response surface of a spatially explicit statistical model fit in a GIS, which include the variation of all explanatory variables, to visualize the univariate response to the dose variable. Application of the Dose-Response Calculator can be extended beyond the assessment of statistical model predictions and may be used to visualize the relationship between any two raster GIS datasets (see example in tool instructions). This tool generates tabular data for use in further exploration of dose-response relationships and a graph of the dose-response curve.
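
    The underlying idea of the tool, pairing each cell of a "dose" raster with the co-registered predicted "response" raster, binning by dose, and plotting the mean response per bin, can be sketched outside ArcGIS as below. The NumPy/Matplotlib code uses synthetic rasters as stand-ins and is not part of the published tool.

```python
# Library-agnostic sketch of the dose-response idea described above: pair each
# cell of a co-registered "dose" raster with the model's predicted "response"
# raster, bin by dose, and plot the mean response per bin. The rasters here
# are synthetic stand-ins, not output of the ArcGIS tool.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Synthetic co-registered rasters: dose = % habitat cover, response = predicted occurrence.
dose = rng.uniform(0, 100, size=(200, 200))
response = 1 / (1 + np.exp(-(dose - 50) / 12)) + rng.normal(0, 0.05, size=dose.shape)

# Bin the dose values and average the response within each bin.
edges = np.linspace(0, 100, 21)
centers = 0.5 * (edges[:-1] + edges[1:])
bin_idx = np.digitize(dose.ravel(), edges) - 1
mean_resp = [response.ravel()[bin_idx == i].mean() for i in range(len(centers))]

plt.plot(centers, mean_resp, marker="o")
plt.xlabel("dose (e.g., % habitat cover)")
plt.ylabel("mean predicted response")
plt.title("Dose-response curve from two co-registered rasters")
plt.savefig("dose_response.png", dpi=150)
```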

  4. ETE: a python Environment for Tree Exploration

    PubMed Central

    2010-01-01

    Background Many bioinformatics analyses, ranging from gene clustering to phylogenetics, produce hierarchical trees as their main result. These are used to represent the relationships among different biological entities, thus facilitating their analysis and interpretation. A number of standalone programs are available that focus on tree visualization or that perform specific analyses on them. However, such applications are rarely suitable for large-scale surveys, in which a higher level of automation is required. Currently, many genome-wide analyses rely on tree-like data representation and hence there is a growing need for scalable tools to handle tree structures at large scale. Results Here we present the Environment for Tree Exploration (ETE), a python programming toolkit that assists in the automated manipulation, analysis and visualization of hierarchical trees. ETE libraries provide a broad set of tree handling options as well as specific methods to analyze phylogenetic and clustering trees. Among other features, ETE allows for the independent analysis of tree partitions, has support for the extended newick format, provides an integrated node annotation system and permits to link trees to external data such as multiple sequence alignments or numerical arrays. In addition, ETE implements a number of built-in analytical tools, including phylogeny-based orthology prediction and cluster validation techniques. Finally, ETE's programmable tree drawing engine can be used to automate the graphical rendering of trees with customized node-specific visualizations. Conclusions ETE provides a complete set of methods to manipulate tree data structures that extends current functionality in other bioinformatic toolkits of a more general purpose. ETE is free software and can be downloaded from http://ete.cgenomics.org. PMID:20070885

  5. Optimal estimator model for human spatial orientation

    NASA Technical Reports Server (NTRS)

    Borah, J.; Young, L. R.; Curry, R. E.

    1979-01-01

    A model is being developed to predict pilot dynamic spatial orientation in response to multisensory stimuli. Motion stimuli are first processed by dynamic models of the visual, vestibular, tactile, and proprioceptive sensors. Central nervous system function is then modeled as a steady-state Kalman filter which blends information from the various sensors to form an estimate of spatial orientation. Where necessary, this linear central estimator has been augmented with nonlinear elements to reflect more accurately some highly nonlinear human response characteristics. Computer implementation of the model has shown agreement with several important qualitative characteristics of human spatial orientation, and it is felt that with further modification and additional experimental data the model can be improved and extended. Possible means are described for extending the model to better represent the active pilot with varying skill and work load levels.
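
    The central-estimator idea described above can be caricatured with a fixed-gain (steady-state) filter that blends two of the sensor channels into one orientation estimate. The scalar dynamics, noise levels, and gain in the sketch below are illustrative and are not the parameters of the published model.

```python
# Toy sketch of the central-estimator idea described above: a fixed-gain
# (steady-state) Kalman-style filter blending a noisy angle sensor ("visual")
# with a noisy rate sensor ("vestibular") into one orientation estimate.
# The scalar dynamics, noise levels, and gain are illustrative only.
import numpy as np

rng = np.random.default_rng(7)
dt, n_steps = 0.01, 1000

true_angle = np.cumsum(0.5 * np.sin(np.linspace(0, 6, n_steps)) * dt)  # slow tilt (rad)

# Sensor models: vision reads the angle (noisy); the vestibular channel reads angular rate.
visual = true_angle + rng.normal(0, 0.05, n_steps)
rate = np.gradient(true_angle, dt) + rng.normal(0, 0.2, n_steps)

# Fixed (steady-state) gain for the correction toward the visual measurement.
K = 0.02
estimate = np.zeros(n_steps)
for k in range(1, n_steps):
    predicted = estimate[k - 1] + rate[k] * dt             # predict from rate sensing
    estimate[k] = predicted + K * (visual[k] - predicted)  # correct toward vision

rmse_est = np.sqrt(np.mean((estimate - true_angle) ** 2))
rmse_vis = np.sqrt(np.mean((visual - true_angle) ** 2))
print(f"RMSE of blended estimate: {rmse_est:.4f} rad")
print(f"RMSE of vision alone:     {rmse_vis:.4f} rad")
```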

  6. Audiovisual Perception of Congruent and Incongruent Dutch Front Vowels

    ERIC Educational Resources Information Center

    Valkenier, Bea; Duyne, Jurriaan Y.; Andringa, Tjeerd C.; Baskent, Deniz

    2012-01-01

    Purpose: Auditory perception of vowels in background noise is enhanced when combined with visually perceived speech features. The objective of this study was to investigate whether the influence of visual cues on vowel perception extends to incongruent vowels, in a manner similar to the McGurk effect observed with consonants. Method:…

  7. Principal Component Analysis Study of Visual and Verbal Metaphoric Comprehension in Children with Autism and Learning Disabilities

    ERIC Educational Resources Information Center

    Mashal, Nira; Kasirer, Anat

    2012-01-01

    This research extends previous studies regarding the metaphoric competence of autistic and learning disabled children on different measures of visual and verbal non-literal language comprehension, as well as cognitive abilities that include semantic knowledge, executive functions, similarities, and reading fluency. Thirty seven children with…

  8. A Rapid Assessment of Instructional Strategies to Teach Auditory-Visual Conditional Discriminations to Children with Autism

    ERIC Educational Resources Information Center

    Kodak, Tiffany; Clements, Andrea; LeBlanc, Brittany

    2013-01-01

    The purpose of the present investigation was to evaluate a rapid assessment procedure to identify effective instructional strategies to teach auditory-visual conditional discriminations to children diagnosed with autism. We replicated and extended previous rapid skills assessments (Lerman, Vorndran, Addison, & Kuhn, 2004) by evaluating the effects…

  9. Mental Visualization of Objects from Cross-Sectional Images

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George D.

    2012-01-01

    We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object…

  10. Tone Series and the Nature of Working Memory Capacity Development

    ERIC Educational Resources Information Center

    Clark, Katherine M.; Hardman, Kyle O.; Schachtman, Todd R.; Saults, J. Scott; Glass, Bret A.; Cowan, Nelson

    2018-01-01

    Recent advances in understanding visual working memory, the limited information held in mind for use in ongoing processing, are extended here to examine auditory working memory development. Research with arrays of visual objects has shown how to distinguish the capacity, in terms of the "number" of objects retained, from the…

  11. Detecting Disease Specific Pathway Substructures through an Integrated Systems Biology Approach

    PubMed Central

    Alaimo, Salvatore; Marceca, Gioacchino Paolo; Ferro, Alfredo; Pulvirenti, Alfredo

    2017-01-01

    In the era of network medicine, pathway analysis methods play a central role in the prediction of phenotype from high throughput experiments. In this paper, we present a network-based systems biology approach capable of extracting disease-perturbed subpathways within pathway networks in connection with expression data taken from The Cancer Genome Atlas (TCGA). Our system extends pathways with missing regulatory elements, such as microRNAs, and their interactions with genes. The framework enables the extraction, visualization, and analysis of statistically significant disease-specific subpathways through an easy to use web interface. Our analysis shows that the methodology is able to fill the gap in current techniques, allowing a more comprehensive analysis of the phenomena underlying disease states. PMID:29657291
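
    A schematic sketch of the subpathway-extraction idea follows: extend a pathway graph with miRNA-gene interactions, keep nodes whose expression change passes a threshold, and report the connected perturbed substructures. The toy pathway, fold-change values, and threshold are invented and do not correspond to the framework's actual method or data.

```python
# Schematic sketch of the subpathway-extraction idea described above: extend a
# pathway graph with miRNA-gene interactions, keep nodes whose expression
# change exceeds a threshold, and report the connected perturbed substructures.
# The toy pathway, fold-change values, and threshold are invented.
import networkx as nx

pathway = nx.Graph()
pathway.add_edges_from([
    ("TF1", "GENE_A"), ("GENE_A", "GENE_B"), ("GENE_B", "GENE_C"),
    ("GENE_C", "GENE_D"), ("TF1", "GENE_E"),
])
# Extend the pathway with (hypothetical) miRNA regulators of its genes.
pathway.add_edges_from([("miR-x", "GENE_A"), ("miR-y", "GENE_D")])

# log2 fold-changes from a tumor-vs-normal comparison (toy values).
log2fc = {"TF1": 0.1, "GENE_A": 2.3, "GENE_B": 1.9, "GENE_C": 0.2,
          "GENE_D": -2.1, "GENE_E": 0.3, "miR-x": -1.8, "miR-y": 1.5}

THRESHOLD = 1.0
perturbed = [n for n in pathway if abs(log2fc.get(n, 0.0)) >= THRESHOLD]
subgraph = pathway.subgraph(perturbed)

for component in nx.connected_components(subgraph):
    print("perturbed subpathway:", sorted(component))
```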

  12. Study of Hemolysis During Storage of Blood in the Blood Bank of a Tertiary Health Care Centre.

    PubMed

    Arif, Sayeedul Hasan; Yadav, Neha; Rehman, Suhailur; Mehdi, Ghazala

    2017-12-01

    The aim of the RBC storage system in a blood bank is to counteract damage to the metabolic machinery and the membrane, so as to improve post-transfusion viability. In recent years, the need for strict control over the quality of blood has been emphasised. Such quality indicators include the extent of hemolysis and morphological changes of RBCs during storage. This study was designed to assess the extent of hemolysis and the levels of plasma lactate dehydrogenase (LDH) and plasma potassium during processing and storage at different intervals under blood bank conditions. Forty-six donors were selected and blood units were collected and stored under blood bank conditions. Mean plasma haemoglobin of stored blood was estimated by the tetramethylbenzidine (TMB) method and percentage hemolysis was calculated on days 0, 1, 7, 21, 28, 35 and 42. Plasma LDH and plasma potassium levels were likewise assessed during storage. It was noted that the free haemoglobin level and percentage hemolysis increased progressively with storage, along with the levels of LDH and potassium. However, the extent of hemolysis did not exceed the permissible limit of 0.8% up to 42 days of storage. The 15 blood bags that showed visual hemolysis on day 28 did not exceed the threshold of 0.8% hemolysis when assessed by the TMB method. It was concluded that the TMB method is better than the visual method for determination of hemolysis. The reduced hemolysis at this centre may be accounted for by the use of the additive solution SAGM (Saline, Adenine, Glucose, Mannitol) and of DEHP (di-2-ethyl hexyl phthalate) as plasticizer in blood bags for storage.

  13. Earth Adventure: Virtual Globe-based Suborbital Atmospheric Greenhouse Gases Exploration

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Landolt, K.; Boyer, A.; Santhana Vannan, S. K.; Wei, Z.; Wang, E.

    2016-12-01

    The Earth Venture Suborbital (EVS) mission is an important component of NASA's Earth System Science Pathfinder program that aims at making substantial advances in Earth system science through measurements from suborbital platforms and modeling research. For example, the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) project of EVS-1 collected measurements of greenhouse gases (GHG) on local to regional scales in the Alaskan Arctic. The Atmospheric Carbon and Transport - America (ACT-America) project of EVS-2 will provide advanced, high-resolution measurements of atmospheric profiles and horizontal gradients of CO2 and CH4. As the long-term archival center for CARVE and the future ACT-America data, the Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) has been developing a versatile data management system for CARVE data to maximize their usability. One of these efforts is the virtual-globe-based Suborbital Atmospheric GHG Exploration application. It leverages Google Earth to simulate the 185 flights flown by the C-23 Sherpa aircraft in 2012-2015 for the CARVE project. Based on Google Earth's 3D modeling capability and the precise coordinate, altitude, pitch, roll, and heading information of the aircraft recorded every second during each flight, the application provides users with an accurate and vivid simulation of the flight experience, with an active 3D visualization of a C-23 Sherpa aircraft in view. The application provides dynamic visualization of the GHG, including CO2, CO, H2O, and CH4, captured during the flights, at the same pace as the flight simulation in Google Earth. Photos taken during those flights are also properly displayed along the flight paths. In the future, this application will be extended to incorporate more complicated GHG measurements (e.g. vertical profiles) from the ACT-America project. The application leverages virtual globe technology to provide users an integrated framework to interactively explore information about GHG measurements and to link scientific measurements to the rich virtual planet environment provided by Google Earth. Positive feedback has been received from users. It provides a good example of extending basic data visualization into a knowledge-discovery experience and maximizing the usability of Earth science observations.
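
    The flight simulation described above is driven by per-second aircraft positions loaded into Google Earth. The sketch below shows one minimal way to produce such an overlay, by writing a track as a KML LineString; the coordinates are invented, and the real application additionally uses pitch, roll, and heading to orient a 3D aircraft model, which a plain LineString does not capture.

```python
# Minimal sketch of turning per-second aircraft positions into a Google Earth
# overlay: write the track as a KML LineString with absolute altitudes. The
# coordinates below are invented; the real application additionally uses
# pitch/roll/heading to orient a 3D aircraft model, which this simple
# LineString does not capture.
positions = [            # (longitude, latitude, altitude in meters), one per second
    (-147.72, 64.82, 300.0),
    (-147.70, 64.83, 450.0),
    (-147.68, 64.84, 600.0),
]

coordinate_text = "\n      ".join(f"{lon},{lat},{alt}" for lon, lat, alt in positions)

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>CARVE-style flight segment (synthetic)</name>
    <LineString>
      <altitudeMode>absolute</altitudeMode>
      <coordinates>
      {coordinate_text}
      </coordinates>
    </LineString>
  </Placemark>
</kml>
"""

with open("flight_track.kml", "w") as f:
    f.write(kml)
print("wrote flight_track.kml (open in Google Earth)")
```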

  14. Invisible marker based augmented reality system

    NASA Astrophysics Data System (ADS)

    Park, Hanhoon; Park, Jong-Il

    2005-07-01

    Augmented reality (AR) has recently gained significant attention. Previous AR techniques usually require a fiducial marker with known geometry, or objects whose structure can be easily estimated, such as a cube. Placing a marker in the workspace of the user can be intrusive. To overcome this limitation, we present an AR system using invisible markers which are created/drawn with an infrared (IR) fluorescent pen. Two cameras are used: an IR camera and a visible camera, positioned on either side of a cold mirror so that their optical centers coincide. We track the invisible markers using the IR camera and visualize the AR overlay in the view of the visible camera. Additional algorithms are employed so that the system performs reliably against cluttered backgrounds. Experimental results are given to demonstrate the viability of the proposed system. As an application of the proposed system, the invisible marker can act as a Vision-Based Identity and Geometry (VBIG) tag, which can significantly extend the functionality of RFID. The invisible tag is like RFID in that it is not perceivable, while more powerful in that the tag information can be presented to the user by direct projection using a mobile projector or by visualizing AR on the screen of a mobile PDA.
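
    The core processing loop described above, detecting bright fluorescent markers in the IR frame and overlaying graphics at the same pixel locations in the visible frame (possible because the cold-mirror setup makes the two views share pixel coordinates), can be sketched with OpenCV as below. This is an illustration rather than the authors' code; it assumes OpenCV 4.x and uses synthetic frames in place of real camera input.

```python
# Illustrative sketch (not the authors' code) of the two-camera idea described
# above: detect bright fluorescent markers in the infrared frame, then overlay
# graphics at the same pixel locations in the visible frame, relying on the
# cold-mirror setup that makes the two views share pixel coordinates.
# Requires OpenCV 4.x; the synthetic frames stand in for real camera input.
import cv2
import numpy as np

# Synthetic co-registered frames: a dark IR frame with two bright marker blobs,
# and an arbitrary visible frame of the same size.
ir = np.zeros((240, 320), dtype=np.uint8)
cv2.circle(ir, (80, 120), 6, 255, -1)
cv2.circle(ir, (230, 60), 6, 255, -1)
visible = np.full((240, 320, 3), 90, dtype=np.uint8)

# 1) Detect markers in the IR frame.
_, binary = cv2.threshold(ir, 200, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# 2) Overlay augmentation at the corresponding visible-frame positions.
for c in contours:
    m = cv2.moments(c)
    if m["m00"] == 0:
        continue
    cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    cv2.circle(visible, (cx, cy), 12, (0, 0, 255), 2)
    cv2.putText(visible, "marker", (cx + 15, cy), cv2.FONT_HERSHEY_SIMPLEX,
                0.4, (0, 255, 0), 1)

cv2.imwrite("augmented_view.png", visible)
print(f"detected {len(contours)} invisible markers")
```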

  15. Interpersonal motor resonance in autism spectrum disorder: evidence against a global "mirror system" deficit.

    PubMed

    Enticott, Peter G; Kennedy, Hayley A; Rinehart, Nicole J; Bradshaw, John L; Tonge, Bruce J; Daskalakis, Zafiris J; Fitzgerald, Paul B

    2013-01-01

    The mirror neuron hypothesis of autism is highly controversial, in part because there are conflicting reports as to whether putative indices of mirror system activity are actually deficient in autism spectrum disorder (ASD). Recent evidence suggests that a typical putative mirror system response may be seen in people with an ASD when there is a degree of social relevance to the visual stimuli used to elicit that response. Individuals with ASD (n = 32) and matched neurotypical controls (n = 32) completed a transcranial magnetic stimulation (TMS) experiment in which the left primary motor cortex (M1) was stimulated during the observation of static hands, individual (i.e., one person) hand actions, and interactive (i.e., two person) hand actions. Motor-evoked potentials (MEP) were recorded from the contralateral first dorsal interosseous, and used to generate an index of interpersonal motor resonance (IMR; a putative measure of mirror system activity) during action observation. There was no difference between ASD and NT groups in the level of IMR during the observation of these actions. These findings provide evidence against a global mirror system deficit in ASD, and this evidence appears to extend beyond stimuli that have social relevance. Attentional and visual processing influences may be important for understanding the apparent role of IMR in the pathophysiology of ASD.

  16. An Innovate Robotic Endoscope Guidance System for Transnasal Sinus and Skull Base Surgery: Proof of Concept.

    PubMed

    Friedrich, D T; Sommer, F; Scheithauer, M O; Greve, J; Hoffmann, T K; Schuler, P J

    2017-12-01

    Objective  Advanced transnasal sinus and skull base surgery remains a challenging discipline for head and neck surgeons. Restricted access and space for instrumentation can impede advanced interventions. Thus, we present the combination of an innovative robotic endoscope guidance system and a specific endoscope with adjustable viewing angle to facilitate transnasal surgery in a human cadaver model. Materials and Methods  The applicability of the robotic endoscope guidance system with custom foot pedal controller was tested for advanced transnasal surgery on a fresh frozen human cadaver head. Visualization was enabled using a commercially available endoscope with adjustable viewing angle (15-90 degrees). Results  Visualization and instrumentation of all paranasal sinuses, including the anterior and middle skull base, were feasible with the presented setup. Controlling the robotic endoscope guidance system was effectively precise, and the adjustable endoscope lens extended the view in the surgical field without the common change of fixed viewing angle endoscopes. Conclusion  The combination of a robotic endoscope guidance system and an advanced endoscope with adjustable viewing angle enables bimanual surgery in transnasal interventions of the paranasal sinuses and the anterior skull base in a human cadaver model. The adjustable lens allows for the abandonment of fixed-angle endoscopes, saving time and resources, without reducing the quality of imaging.

  17. A Low-Cost EEG System-Based Hybrid Brain-Computer Interface for Humanoid Robot Navigation and Recognition

    PubMed Central

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady-state visually evoked potential (SSVEP), and event-related de-synchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition, using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and to recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes whether the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to allow the surrogate robot to recognize their favorites. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work has an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system. PMID:24023953

  18. A low-cost EEG system-based hybrid brain-computer interface for humanoid robot navigation and recognition.

    PubMed

    Choi, Bongjae; Jo, Sungho

    2013-01-01

    This paper describes a hybrid brain-computer interface (BCI) technique that combines the P300 potential, the steady-state visually evoked potential (SSVEP), and event-related de-synchronization (ERD) to solve a complicated multi-task problem consisting of humanoid robot navigation and control along with object recognition, using a low-cost BCI system. Our approach enables subjects to control the navigation and exploration of a humanoid robot and to recognize a desired object among candidates. This study aims to demonstrate the possibility of a hybrid BCI based on a low-cost system for a realistic and complex task. It also shows that the use of a simple image processing technique, combined with BCI, can further aid in making these complex tasks simpler. An experimental scenario is proposed in which a subject remotely controls a humanoid robot in a properly sized maze. The subject sees what the surrogate robot sees through visual feedback and can navigate the surrogate robot. While navigating, the robot encounters objects located in the maze. It then recognizes whether the encountered object is of interest to the subject. The subject communicates with the robot through SSVEP- and ERD-based BCIs to navigate and explore with the robot, and through a P300-based BCI to allow the surrogate robot to recognize their favorites. Using several evaluation metrics, the performances of five subjects navigating the robot were quite comparable to manual keyboard control. During object recognition mode, favorite objects were successfully selected from two to four choices. Subjects conducted humanoid navigation and recognition tasks as if they embodied the robot. Analysis of the data supports the potential usefulness of the proposed hybrid BCI system for extended applications. This work has an important implication for future work: a hybridization of simple BCI protocols provides extended controllability to carry out complicated tasks even with a low-cost system.
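
    One ingredient of the hybrid pipeline described above is SSVEP detection: identifying which flicker frequency the subject attends to by comparing EEG spectral power at the candidate frequencies. The sketch below illustrates this on a synthetic epoch; the candidate frequencies, sampling rate, and signal are illustrative and are not the study's acquisition settings.

```python
# Sketch of one ingredient of the hybrid pipeline described above: classify an
# SSVEP command by comparing EEG power at the candidate flicker frequencies.
# The synthetic signal, candidate frequencies, and sampling rate are
# illustrative, not the study's acquisition settings.
import numpy as np

FS = 250                        # sampling rate (Hz)
CANDIDATES = [7.0, 11.0, 13.0]  # flicker frequencies mapped to robot commands

def ssvep_frequency(eeg: np.ndarray) -> float:
    """Return the candidate frequency with the largest spectral power."""
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in CANDIDATES]
    return CANDIDATES[int(np.argmax(powers))]

# Synthetic 4-second epoch: attention to the 11 Hz stimulus plus broadband noise.
rng = np.random.default_rng(3)
t = np.arange(0, 4, 1.0 / FS)
eeg = 1.0 * np.sin(2 * np.pi * 11.0 * t) + rng.normal(0, 1.5, t.size)

print("detected SSVEP frequency:", ssvep_frequency(eeg), "Hz")
```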

  19. When a loved one feels unfamiliar: a case study on the neural basis of Capgras delusion.

    PubMed

    Thiel, Christiane M; Studte, Sara; Hildebrandt, Helmut; Huster, Rene; Weerda, Riklef

    2014-03-01

    Perception of familiar faces depends on a core system analysing visual appearance and an extended system dealing with inference of mental states and emotional responses. Damage to the core system impairs face perception as seen in prosopagnosia. In contrast, patients with Capgras delusion show intact face perception but believe that closely related persons are impostors. It has been suggested that two deficits are necessary for the delusion, an aberrant perceptual or affective experience that leads to a bizarre belief as well as an impaired ability to evaluate beliefs. Using functional magnetic resonance imaging, we compared neural activity to familiar and unfamiliar faces in a patient with Capgras delusion and an age matched control group. We provide evidence that Capgras delusion is related to dysfunctional activity in the extended face processing system. The patient, who developed the delusion for the partner after a large right prefrontal lesion sparing the ventromedial and medial orbitofrontal cortex, lacked neural activity to the partner's face in left posterior cingulate cortex and left posterior superior temporal sulcus. Further, we found impaired functional connectivity of the latter region with the left superior frontal gyrus and to a lesser extent with the right superior frontal sulcus/middle frontal gyrus. The findings of this case study suggest that the first factor in Capgras delusion may be reduced neural activity in the extended face processing system that deals with inference of mental states while the second factor may be due to a lesion in the right middle frontal gyrus. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Age-related macular degeneration: using morphological predictors to modify current treatment protocols.

    PubMed

    Ashraf, Mohammed; Souka, Ahmed; Adelman, Ron A

    2018-03-01

    To assess predictors of treatment response in neovascular age-related macular degeneration (AMD) in an attempt to develop a patient-centric treatment algorithm. We conducted a systematic search using PubMed, EMBASE and Web of Science for prognostic indicators/predictive factors with the key words: 'age related macular degeneration', 'neovascular AMD', 'choroidal neovascular membrane (CNV)', 'anti-vascular endothelial growth factor (anti-VEGF)', 'aflibercept', 'ranibizumab', 'bevacizumab', 'randomized clinical trials', 'post-hoc', 'prognostic', 'predictive', 'response', 'injection frequency', 'treat and extend (TAE)', 'pro re nata (PRN)', 'bi-monthly' and 'quarterly'. We only included studies that had an adequate period of follow-up (>1 year), a single predefined treatment regimen with predetermined re-injection criteria, an adequate number of patients, specific morphological [optical coherence tomography (OCT)] criteria that predicted final visual outcomes and injection frequency, and that did not include switching from one drug to the other. We identified seven prospective studies and 16 retrospective studies meeting our inclusion criteria. There are several morphological and demographic prognostic indicators that can predict response to therapy in wet AMD. Smaller CNV size, subretinal fluid (SRF), retinal angiomatous proliferation (RAP) and response to therapy at 12 weeks (visual, angiographic or OCT) can all predict good visual outcomes in patients receiving anti-VEGF therapy. Patients with larger CNV, older age, pigment epithelial detachment (PED), intraretinal cysts (IRC) and vitreomacular adhesion (VMA) achieved smaller visual gains. Patients with VMA or vitreomacular traction (VMT) required more intensive treatment with increased treatment frequency. Patients with both posterior vitreous detachment (PVD) and SRF require infrequent injections. Patients with PED are prone to recurrences of fluid activity with a reduction in visual acuity (VA). A regimen that involves less intensive therapy and extended follow-up intervals (4-weekly) can be suggested for patients who show an adequate visual response and have both SRF and PVD at baseline. In addition, patients with poor prognostic indicators such as IRC, VMA, large CNV size, older age and poor response at 12 weeks should be extended very cautiously, with the possibility of fixed monthly/bimonthly (every 2 months) treatments if they fail to achieve dryness. Patients with PED at baseline should receive monthly/bimonthly injections of anti-VEGF therapy or can be extended very cautiously (2-week intervals) using a TAE protocol. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
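
    Purely as an illustration of how the review's regimen suggestions could be written down as decision rules, the Python sketch below restates them in code. The function name and boolean inputs are hypothetical, and this is not a validated clinical algorithm; it only mirrors the suggestions summarized above.

        def suggest_regimen(srf, pvd, ped, irc, vma, large_cnv, older_age, good_12wk_response):
            """Schematic restatement of the review's suggestions; booleans encode baseline
            and 12-week findings. Illustrative only, not clinical guidance."""
            if ped:
                return "monthly/bimonthly anti-VEGF, or TAE extended very cautiously in 2-week steps"
            if srf and pvd and good_12wk_response:
                return "less intensive therapy with extended (4-weekly) follow-up intervals"
            if irc or vma or large_cnv or older_age or not good_12wk_response:
                return "extend very cautiously; consider fixed monthly/bimonthly dosing if not dry"
            return "standard treat-and-extend"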

  1. Differentiating Visual from Response Sequencing during Long-term Skill Learning.

    PubMed

    Lynch, Brighid; Beukema, Patrick; Verstynen, Timothy

    2017-01-01

    The dual-system model of sequence learning posits that during early learning there is an advantage for encoding sequences in sensory frames; however, it remains unclear whether this advantage extends to long-term consolidation. Using the serial RT task, we set out to distinguish the dynamics of learning sequential orders of visual cues from learning sequential responses. On each day, most participants learned a new mapping between a set of symbolic cues and responses made with one of four fingers, after which they were exposed to trial blocks of either randomly ordered cues or deterministic ordered cues (12-item sequence). Participants were randomly assigned to one of four groups (n = 15 per group): Visual sequences (same sequence of visual cues across training days), Response sequences (same order of key presses across training days), Combined (same serial order of cues and responses on all training days), and a Control group (a novel sequence each training day). Across 5 days of training, sequence-specific measures of response speed and accuracy improved faster in the Visual group than any of the other three groups, despite no group differences in explicit awareness of the sequence. The two groups that were exposed to the same visual sequence across days showed a marginal improvement in response binding that was not found in the other groups. These results indicate that there is an advantage, in terms of rate of consolidation across multiple days of training, for learning sequences of actions in a sensory representational space, rather than as motoric representations.

  2. KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery

    NASA Astrophysics Data System (ADS)

    Fraser, Joshua; Haridas, Anoop; Seetharaman, Guna; Rao, Raghuveer M.; Palaniappan, Kannappan

    2013-05-01

    KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi-scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high-throughput, wide-format video, also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to potentially petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools to the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms is available to assist the analyst and increase human effectiveness.
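
    As a rough illustration of the dual spatial/temporal tile-caching idea (not KOLAM's actual implementation), the Python sketch below keeps recently used pyramid tiles, keyed by frame and pyramid level, in a small LRU cache. The class name and the load_tile callback are assumptions standing in for whatever decodes a tile from the on-disk tiled pyramid.

        from collections import OrderedDict

        class SpatiotemporalTileCache:
            """Minimal LRU cache keyed by (frame, pyramid_level, tile_x, tile_y)."""
            def __init__(self, capacity, load_tile):
                self.capacity = capacity          # maximum number of resident tiles
                self.load_tile = load_tile        # callback that decodes a tile from disk
                self._tiles = OrderedDict()

            def get(self, frame, level, tx, ty):
                key = (frame, level, tx, ty)
                if key in self._tiles:
                    self._tiles.move_to_end(key)  # mark as most recently used
                    return self._tiles[key]
                tile = self.load_tile(*key)       # cache miss: decode from the tiled pyramid
                self._tiles[key] = tile
                if len(self._tiles) > self.capacity:
                    self._tiles.popitem(last=False)  # evict the least recently used tile
                return tile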

  3. Visual Memory in Methamphetamine Dependent Individuals: Deficient Strategic Control of Encoding and Retrieval

    PubMed Central

    Morgan, Erin E.; Woods, Steven Paul; Poquette, Amelia J.; Vigil, Ofilio; Heaton, Robert K.; Grant, Igor

    2012-01-01

    Objective Chronic use of methamphetamine (MA) has moderate effects on neurocognitive functions associated with frontal systems, including the executive aspects of verbal episodic memory. Extending this literature, the current study examined the effects of MA on visual episodic memory with the hypothesis that a profile of deficient strategic encoding and retrieval processes would be revealed for visuospatial information (i.e., simple geometric designs), including possible differential effects on source versus item recall. Method The sample comprised 114 MA-dependent (MA+) and 110 demographically-matched MA-nondependent comparison participants (MA−) who completed the Brief Visuospatial Memory Test – Revised (BVMT-R), which was scored for standard learning and memory indices, as well as novel item (i.e., figure) and source (i.e., location) memory indices. Results Results revealed a profile of impaired immediate and delayed free recall (p < .05) in the context of preserved learning slope, retention, and recognition discriminability in the MA+ group. The MA+ group also performed more poorly than MA− participants on Item visual memory (p < .05) but not Source visual memory (p > .05), and no group by task-type interaction was observed (p > .05). Item visual memory demonstrated significant associations with executive dysfunction, deficits in working memory, and shorter length of abstinence from MA use (p < 0.05). Conclusions These visual memory findings are commensurate with studies reporting deficient strategic verbal encoding and retrieval in MA users that are posited to reflect the vulnerability of frontostriatal circuits to the neurotoxic effects of MA. Potential clinical implications of these visual memory deficits are discussed. PMID:22311530

  4. Common and distinct brain networks underlying verbal and visual creativity.

    PubMed

    Zhu, Wenfeng; Chen, Qunlin; Xia, Lingxiang; Beaty, Roger E; Yang, Wenjing; Tian, Fang; Sun, Jiangzhou; Cao, Guikang; Zhang, Qinglin; Chen, Xu; Qiu, Jiang

    2017-04-01

    Creativity is imperative to the progression of human civilization, prosperity, and well-being. Past creativity research has tended to emphasize the default mode network (DMN) or the frontoparietal network (FPN) somewhat exclusively. However, little is known about how these networks interact to contribute to creativity and whether common or distinct brain networks are responsible for visual and verbal creativity. Here, we use functional connectivity analysis of resting-state functional magnetic resonance imaging data to investigate visual and verbal creativity-related regions and networks in 282 healthy subjects. We found that functional connectivity within the bilateral superior parietal cortex of the FPN was negatively associated with visual and verbal creativity. The strength of connectivity between the DMN and FPN was positively related to both creative domains. Visual creativity was negatively correlated with functional connectivity within the precuneus of the pDMN and right middle frontal gyrus of the FPN, and verbal creativity was negatively correlated with functional connectivity within the medial prefrontal cortex of the aDMN. Critically, the FPN mediated the relationship between the aDMN and verbal creativity, and it also mediated the relationship between the pDMN and visual creativity. Taken together, decreased within-network connectivity of the FPN and DMN may allow for flexible between-network coupling in the highly creative brain. These findings provide indirect evidence for the cooperative role of the default and executive control networks in creativity, extending past research by revealing common and distinct brain systems underlying verbal and visual creative cognition. Hum Brain Mapp 38:2094-2111, 2017. © 2017 Wiley Periodicals, Inc.

  5. Halftone visual cryptography.

    PubMed

    Zhou, Zhi; Arce, Gonzalo R; Di Crescenzo, Giovanni

    2006-08-01

    Visual cryptography encodes a secret binary image (SI) into n shares of random binary patterns. If the shares are xeroxed onto transparencies, the secret image can be visually decoded by superimposing a qualified subset of transparencies, but no secret information can be obtained from the superposition of a forbidden subset. The binary patterns of the n shares, however, have no visual meaning and hinder the objectives of visual cryptography. Extended visual cryptography [1] was proposed recently to construct meaningful binary images as shares using hypergraph colourings, but the visual quality is poor. In this paper, a novel technique named halftone visual cryptography is proposed to achieve visual cryptography via halftoning. Based on blue-noise dithering principles, the proposed method utilizes the void and cluster algorithm [2] to encode a secret binary image into n halftone shares (images) carrying significant visual information. The simulation shows that the visual quality of the obtained halftone shares is observably better than that attained by any available visual cryptography method known to date.
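
    For readers unfamiliar with the underlying construction, the Python sketch below implements the basic (2, 2) visual cryptography scheme with 2x2 subpixel expansion that halftone visual cryptography builds on. It is not the void and cluster construction described in the paper, and the function name is illustrative.

        import numpy as np

        PATTERNS = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]  # 1 = black subpixel

        def make_shares(secret):
            """Basic (2,2) scheme: secret is a 2-D array with 1 = black, 0 = white.
            Each share is twice the size of the secret and looks like random noise."""
            h, w = secret.shape
            s1 = np.zeros((2 * h, 2 * w), dtype=int)
            s2 = np.zeros_like(s1)
            rng = np.random.default_rng()
            for i in range(h):
                for j in range(w):
                    p = PATTERNS[rng.integers(2)]
                    s1[2*i:2*i+2, 2*j:2*j+2] = p
                    # white pixel: identical patterns; black pixel: complementary patterns
                    s2[2*i:2*i+2, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p
            return s1, s2

        # Stacking transparencies corresponds to a pixel-wise OR of the black subpixels:
        # reconstructed = np.maximum(*make_shares(secret))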

  6. Global-local visual biases correspond with visual-spatial orientation.

    PubMed

    Basso, Michael R; Lowery, Natasha

    2004-02-01

    Within the past decade, numerous investigations have demonstrated reliable associations of global-local visual processing biases with right and left hemisphere function, respectively (cf. Van Kleeck, 1989). Yet the relevance of these biases to other cognitive functions is not well understood. Towards this end, the present research examined the relationship between global-local visual biases and perception of visual-spatial orientation. Twenty-six women and 23 men completed a global-local judgment task (Kimchi and Palmer, 1982) and the Judgment of Line Orientation Test (JLO; Benton, Sivan, Hamsher, Varney, and Spreen, 1994), a measure of visual-spatial orientation. As expected, men had better performance on JLO. Extending previous findings, global biases were related to better visual-spatial acuity on JLO. The findings suggest that global-local biases and visual-spatial orientation may share underlying cerebral mechanisms. Implications of these findings for other visually mediated cognitive outcomes are discussed.

  7. Inspection of Pole-Like Structures Using a Visual-Inertial Aided VTOL Platform with Shared Autonomy

    PubMed Central

    Sa, Inkyu; Hrabar, Stefan; Corke, Peter

    2015-01-01

    This paper presents an algorithm and a system for vertical infrastructure inspection using a vertical take-off and landing (VTOL) unmanned aerial vehicle and shared autonomy. Inspecting vertical structures such as light and power distribution poles is a difficult task that is time-consuming, dangerous and expensive. Recently, micro VTOL platforms (i.e., quad-, hexa- and octa-rotors) have been rapidly gaining interest in research, military and even public domains. The unmanned, low-cost and VTOL properties of these platforms make them ideal for situations where inspection would otherwise be time-consuming and/or hazardous to humans. There are, however, challenges involved with developing such an inspection system, for example, flying in close proximity to a target while maintaining a fixed stand-off distance from it, being immune to wind gusts and exchanging useful information with the remote user. To overcome these challenges, we require accurate, high-update-rate state estimation and high-performance controllers to be implemented onboard the vehicle. Ease of control and a live video feed are required for the human operator. We demonstrate a VTOL platform that can operate at close quarters, whilst maintaining a safe stand-off distance and rejecting environmental disturbances. Two approaches are presented: Position-Based Visual Servoing (PBVS) using an Extended Kalman Filter (EKF) and estimator-free Image-Based Visual Servoing (IBVS). Both use monocular visual, inertial and sonar data, allowing the approaches to be applied in indoor or GPS-impaired environments. We extensively compare the performances of PBVS and IBVS in terms of accuracy, robustness and computational costs. Results from simulations and indoor/outdoor (day and night) flight experiments demonstrate the system is able to successfully inspect and circumnavigate a vertical pole. PMID:26340631
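
    The estimator-free IBVS approach mentioned above is, in its textbook form, a proportional law on the image-feature error, v = -gain * pinv(L) @ (s - s_desired). The Python sketch below shows only that generic law, not the authors' onboard controller; the interaction matrix is assumed to be supplied by the caller.

        import numpy as np

        def ibvs_velocity(features, desired, interaction_matrix, gain=0.5):
            """Classic image-based visual servoing law. features/desired are stacked
            image-plane feature coordinates; interaction_matrix is L evaluated at the
            current (or desired) feature depths. Returns a 6-vector camera velocity."""
            error = np.asarray(features, dtype=float) - np.asarray(desired, dtype=float)
            return -gain * np.linalg.pinv(interaction_matrix) @ error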

  8. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    PubMed

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
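
    The paper's temporal restricted Boltzmann machine and temporal autoencoding procedure are not reproduced here, but the sparse-coding idea they build on can be illustrated compactly. The Python sketch below encodes image patches on a fixed dictionary with an L1 penalty using plain ISTA iterations; dictionary learning and the temporal extension are deliberately omitted, and all names are illustrative.

        import numpy as np

        def sparse_codes(X, D, lam=0.1, n_iter=100):
            """Encode patches X (n_patches x n_pixels) on a fixed dictionary D
            (n_atoms x n_pixels) by minimizing 0.5*||X - A D||^2 + lam*||A||_1 with ISTA."""
            L = np.linalg.norm(D @ D.T, 2)            # Lipschitz constant of the quadratic term
            A = np.zeros((X.shape[0], D.shape[0]))    # sparse coefficients, one row per patch
            for _ in range(n_iter):
                grad = (A @ D - X) @ D.T              # gradient of the quadratic term w.r.t. A
                A = A - grad / L                      # gradient step
                A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft threshold
            return A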

  9. Role of parafovea in blur perception.

    PubMed

    Venkataraman, Abinaya Priya; Radhakrishnan, Aiswaryah; Dorronsoro, Carlos; Lundström, Linda; Marcos, Susana

    2017-09-01

    The blur experienced by our visual system is not uniform across the visual field. Additionally, lens designs with variable power profiles, such as contact lenses used in presbyopia correction and to control myopia progression, create variable blur from the fovea to the periphery. The perceptual changes associated with a varying blur profile across the visual field are unclear. We therefore measured the perceived neutral focus with images of different angular subtense (from 4° to 20°) and found that the amount of blur for which focus is perceived as neutral increases when the stimulus is extended to cover the parafovea. We also studied the changes in central perceived neutral focus after adaptation to images with a similar magnitude of optical blur across the image or varying blur from center to periphery. Altering the blur in the periphery had little or no effect on the shift of perceived neutral focus following adaptation to normal/blurred central images. These perceptual outcomes should be considered while designing bifocal optical solutions for myopia or presbyopia. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  10. A Review on Stereoscopic 3D: Home Entertainment for the Twenty First Century

    NASA Astrophysics Data System (ADS)

    Karajeh, Huda; Maqableh, Mahmoud; Masa'deh, Ra'ed

    2014-12-01

    In the last few years, stereoscopic technology has developed very rapidly and has been employed in many different fields, such as entertainment. Given the importance of the entertainment aspect of stereoscopic 3D (S3D) applications, a review of the current state of S3D development in entertainment technology is conducted. In this paper, a survey of stereoscopic entertainment is presented, discussing the significant development of 3D cinema, the major developments in 3DTV, and the issues related to 3D video content and 3D video games. Moreover, we review some problems that watching stereoscopic content can cause in the viewers' visual system. Some stereoscopic viewers are not satisfied because they are frustrated by wearing glasses, experience visual fatigue, complain about the unavailability of 3D content, and/or suffer from sickness. Therefore, we discuss stereoscopic visual discomfort and the extent to which viewers experience eye fatigue while watching 3D content or playing 3D games. The solutions suggested in the literature for this problem are discussed.

  11. Visualizing the shape of soft solid and fluid contacts between two surfaces

    NASA Astrophysics Data System (ADS)

    Pham, Jonathan; Schellenberger, Frank; Kappl, Michael; Vollmer, Doris; Butt, Hans-Jürgen

    The soft contact between two surfaces is fundamentally interesting for soft materials and fluid mechanics and relevant for friction and wear. The deformation of soft solid interfaces has received much interest because it reveals similarities to fluid wetting. We present an experimental route towards visualizing the three-dimensional contact geometry of either liquid-solid (i.e., oil and glass) or solid-solid (i.e., elastomer and glass) interfaces using a home-built combination of confocal microscopy and atomic force microscopy. We monitor the shape of a fluid capillary bridge and the depth of indentation in 3D while simultaneously measuring the force. In agreement with theoretical predictions, the height of the capillary bridge depends on the interfacial tensions. By using a slowly evaporating solvent, we quantify the temporal evolution of the capillary bridge and visualize the influence of pinning points on its shape. The position dependence of the advancing and receding contact angle along the three-phase contact line, particle-liquid-air, is resolved. Extending our system, we explore the contact deformation of soft solids, where elasticity, in addition to surface tension, becomes an important factor.

  12. Analysis by gender and Visual Imagery Reactivity of conventional and imagery Rorschach.

    PubMed

    Yanovski, A; Menduke, H; Albertson, M G

    1995-06-01

    Examined here are the effects of gender and Visual Imagery Reactivity in 80 consecutively selected psychiatric outpatients. The participants were grouped by gender and by the amount of responsiveness to preceding therapy work using imagery (Imagery Nonreactors and Reactors). The group of Imagery Nonreactors comprised 13 men and 22 women, and the Reactor group comprised 17 men and 28 women. Responses to the standard Rorschach (Conventional condition) were compared with visual associations to memory images of Rorschach inkblots (Imagery condition). Responses were scored using the Visual Imagery Reactivity (VIR) scoring system, a general, test-nonspecific scoring method. Nonparametric statistical analysis showed that critical indicators of Imagery Reactivity, encoded as the High Affect/Conflict score and its derivatives associated with sexual or bizarre content, were not significantly associated with gender; neither was the Neutral Content score, which categorizes "non-Reactivity." These results support the notion that the system's criteria of Visual Imagery Reactivity can be applied equally to both men and women for the classification of Imagery Reactors and Nonreactors. Also discussed are the speculative consequences of extending the tolerance range of significance levels for the interaction between Reactivity and sex above the customary limit of p < .05 in borderline cases. The results of such an analysis may imply a trend towards more rigid defensiveness under Imagery and toward lesser verbal productivity in response to either the Conventional or the Imagery task among women who are Nonreactors. Among Reactors, men produced significantly more Sexual Reference scores (in the subcategory not associated with High Affect/Conflict) than women, but this could be attributed to the combined effect of the tester's and subjects' gender.

  13. Through-Focus Vision Performance and Light Disturbances of 3 New Intraocular Lenses for Presbyopia Correction

    PubMed Central

    Escandón-García, Santiago; Ribeiro, Filomena J.; McAlinden, Colm

    2018-01-01

    Purpose To compare the through-focus visual performance in a clinical population of pseudophakic patients implanted with two new trifocal intraocular lenses (IOLs) and one extended depth of focus IOL. Methods Prospective, nonrandomized, examiner-masked case series. Twenty-three patients received the FineVision® and seven patients received the PanOptix™ trifocal IOLs. Fifteen patients received the Symfony extended depth of focus IOL. Mean age of patients was 63 ± 8 years. Through-focus visual acuity was measured from –3.00 to +1.00 D vergences. Contrast sensitivity was measured with and without a source of glare. Light disturbances were evaluated with the Light Distortion Analyzer. Results Through-focus evaluation showed that the trifocal IOLs performed significantly better at near distance (33 and 40 cm), and the extended depth of focus IOL performed significantly better at intermediate distance (1.0 m). Contrast sensitivity with glare and dysphotopsia were similar between the three IOLs, and subjective responses to the questionnaire showed a significantly higher score (worse performance) for the extended depth of focus IOL compared to both trifocal IOLs in the bothersome subscale (p < 0.05). Conclusions Trifocal IOLs provide better performance at near distance, while the extended depth of focus IOL performs better at intermediate distance. Objective dysphotopsia measured with the Light Distortion Analyzer is not reduced in the extended depth of focus IOL compared to trifocal IOLs. PMID:29651343

  14. Realization of the ergonomics design and automatic control of the fundus cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Chi-liang; Xiao, Ze-xin; Deng, Shi-chao; Yu, Xin-ye

    2012-12-01

    The principles of ergonomic design in fundus cameras should extend user comfort through automatic control. Firstly, a 3D positional numerical control system is designed for positioning the eye pupils of patients undergoing fundus examinations. This system consists of an electronically controlled chin bracket that moves up and down, lateral movement of the binocular assembly with the detector, and automatic refocusing on the edges of the eye pupils. Secondly, an auto-focusing device for the object plane of the patient's fundus is designed, which collects the patient's fundus images automatically whether or not their eyes are ametropic. Finally, a moving visual target is developed for expanding the fields of the fundus images.

  15. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  16. Three-Month-Olds' Visual Preference for Faces and Its Underlying Visual Processing Mechanisms

    ERIC Educational Resources Information Center

    Turati, C.; Valenza, E.; Leo, I.; Simion, F.

    2005-01-01

    This study was aimed at investigating the face preference phenomenon and its underlying mechanisms at 3 months of age. Using an eye-tracker apparatus, Experiment 1 demonstrated that 3-month-olds prefer natural face images to unnatural ones, replicating and extending previous evidence obtained with schematic facelike stimuli. Experiments 2 and 3…

  17. Visual Short-Term Memory for Complex Objects in 6- and 8-Month-Old Infants

    ERIC Educational Resources Information Center

    Kwon, Mee-Kyoung; Luck, Steven J.; Oakes, Lisa M.

    2014-01-01

    Infants' visual short-term memory (VSTM) for simple objects undergoes dramatic development: Six-month-old infants can store in VSTM information about only a simple object presented in isolation, whereas 8-month-old infants can store information about simple objects presented in multiple-item arrays. This study extended this work to examine…

  18. Options for NDE Assessment of Heat and Fire Damaged Wood

    Treesearch

    Robert H. White; Brian Kukay; James P. Wacker

    2013-01-01

    Depending on the duration and temperature, heat can adversely affect structural properties of wood. While severe temperatures will result in damage that is visually obvious, damage to wood in terms of structural performance extends to wood that visually appears to be unaffected or only mildly affected. The loss in structural capacity includes both reductions for the...

  19. An Assessment of the Tinder Mobile Dating Application for Individuals Who Are Visually Impaired

    ERIC Educational Resources Information Center

    Kapperman, Gaylen; Kelly, Stacy M.; Kilmer, Kylie; Smith, Thomas J.

    2017-01-01

    People with visual impairments (that is, those who are blind or have low vision) have a disadvantage in the process of being selected as a romantic partner. It is further underscored that these difficulties with dating and fitting in among sighted individuals extend beyond formative years into adulthood (Sacks & Wolffe, 2006). Thus, the…

  20. On the Efficacy of a Computer-Based Program to Teach Visual Braille Reading

    ERIC Educational Resources Information Center

    Scheithauer, Mindy C.; Tiger, Jeffrey H.; Miller, Sarah J.

    2013-01-01

    Scheithauer and Tiger (2012) created an efficient computerized program that taught 4 sighted college students to select text letters when presented with visual depictions of braille alphabetic characters and resulted in the emergence of some braille reading. The current study extended these results to a larger sample (n = 81) and compared the…

  1. Integration of visual and motion cues for simulator requirements and ride quality investigation

    NASA Technical Reports Server (NTRS)

    Young, L. R.

    1976-01-01

    Practical tools which can extend the state of the art of moving-base flight simulation for research and training are developed. Main approaches to this research effort include: (1) application of the vestibular model for perception of orientation based on motion cues; optimum simulator motion controls; and (2) visual cues in landing.

  2. Visual Servoing-Based Nanorobotic System for Automated Electrical Characterization of Nanotubes inside SEM

    PubMed Central

    Ding, Huiyang; Shi, Chaoyang; Ma, Li; Yang, Zhan; Wang, Mingyu; Wang, Yaqiong; Chen, Tao; Sun, Lining; Toshio, Fukuda

    2018-01-01

    The maneuvering and electrical characterization of nanotubes inside a scanning electron microscope (SEM) has historically been time-consuming and laborious for operators. Before the development of automated nanomanipulation-enabled techniques for performing pick-and-place and characterization of nanoobjects, these functions were incomplete and largely operated manually. In this paper, a dual-probe nanomanipulation system with vision-based feedback was demonstrated to automatically perform 3D nanomanipulation tasks for investigating the electrical characterization of nanotubes. The XY-positions of Atomic Force Microscope (AFM) cantilevers and individual carbon nanotubes (CNTs) were precisely recognized via a series of image processing operations. A coarse-to-fine positioning strategy in the Z-direction was applied through the combination of a sharpness-based depth estimation method and a contact-detection method. The use of nanorobotic magnification-regulated speed aided in improving working efficiency and reliability. Additionally, we proposed automated alignment of the manipulator axes by visually tracking the movement trajectory of the end effector. The experimental results indicate the system's capability for automated electrical characterization of CNTs. Furthermore, the automated nanomanipulation system has the potential to be extended to other nanomanipulation tasks. PMID:29642495
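
    Sharpness-based depth estimation of the kind named above is commonly implemented with a focus measure such as the variance of the Laplacian. The Python sketch below shows that generic idea for a coarse-to-fine Z sweep; the capture callback is a hypothetical stand-in that moves the stage and returns an image, and none of this is the authors' exact implementation.

        import numpy as np
        from scipy import ndimage

        def sharpness(image):
            """Variance-of-Laplacian focus measure: larger when the image is in focus."""
            return float(np.var(ndimage.laplace(image.astype(float))))

        def coarse_to_fine_z(capture, z_range, coarse_step, fine_step):
            """Sweep the stage in Z with a coarse step, then refine around the sharpest
            position. capture(z) is assumed to move the stage and return an image at z."""
            coarse = np.arange(z_range[0], z_range[1], coarse_step)
            z0 = max(coarse, key=lambda z: sharpness(capture(z)))
            fine = np.arange(z0 - coarse_step, z0 + coarse_step, fine_step)
            return max(fine, key=lambda z: sharpness(capture(z)))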

  3. A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors

    PubMed Central

    Song, Yu; Nuske, Stephen; Scherer, Sebastian

    2016-01-01

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it can deal with aggressive 6 DOF (Degree Of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) (even GPS-denied) situations; (3) it should work well both for low- and high-altitude flight. In this paper, we present a state estimation technique by fusing long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The new estimation system has two main parts, a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and the relative state measurements (IMU, visual odometry), and is derived and discussed in detail. A long-range stereo visual odometry is proposed for high-altitude MAV odometry calculation by using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurements for the update of the state estimation. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for the aggressive, intermittent GPS and high-altitude MAV flight. PMID:28025524
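
    The stochastic-cloning EKF itself is too involved for a short example, but the loose-fusion idea of combining relative and absolute measurements can be shown with a deliberately simplified one-dimensional linear Kalman filter (Python; the noise values are arbitrary assumptions): odometry increments drive the prediction step and GPS fixes drive the update step.

        class LooseFusion1D:
            """Simplified 1-D Kalman filter, not the paper's stochastic-cloning EKF."""
            def __init__(self, x0=0.0, p0=1.0, q=0.05, r=4.0):
                self.x, self.p = x0, p0   # state estimate and its variance
                self.q, self.r = q, r     # process (odometry) and measurement (GPS) noise variances

            def predict(self, odom_delta):
                self.x += odom_delta      # propagate with the relative measurement
                self.p += self.q

            def update(self, gps_position):
                k = self.p / (self.p + self.r)         # Kalman gain
                self.x += k * (gps_position - self.x)  # correct with the absolute measurement
                self.p *= (1.0 - k)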

  4. Visual motor response of crewmen during a simulated 90 day space mission as measured by the critical task battery

    NASA Technical Reports Server (NTRS)

    Allen, R. W.; Jex, H. R.

    1972-01-01

    In order to test various components of a regenerative life support system and to obtain data on the physiological and psychological effects of long-duration exposure to confinement in a space station atmosphere, four carefully screened young men were sealed in a space station simulator for 90 days. A tracking test battery was administered during the above experiment. The battery included a clinical test (critical instability task) related to the subject's dynamic time delay, and a conventional steady tracking task, during which dynamic response (describing functions) and performance measures were obtained. Good correlation was noted between the clinical critical instability scores and more detailed tracking parameters such as dynamic time delay and gain-crossover frequency. The comprehensive database on human operator tracking behavior obtained in this study demonstrates that sophisticated visual-motor response properties can be efficiently and reliably measured over extended periods of time.

  5. Optical, analog and digital domain architectural considerations for visual communications

    NASA Astrophysics Data System (ADS)

    Metz, W. A.

    2008-01-01

    The end of the performance entitlement historically achieved by classic scaling of CMOS devices is within sight, driven ultimately by fundamental limits. Performance entitlements predicted by classic CMOS scaling have progressively failed to be realized in recent process generations due to excessive leakage, increasing interconnect delays and scaling of gate dielectrics. Prior to reaching fundamental limits, trends in technology, architecture and economics will pressure the industry to adopt new paradigms. A likely response is to repartition system functions away from digital implementations and into new architectures. Future architectures for visual communications will require extending the implementation into the optical and analog processing domains. The fundamental properties of these domains will in turn give rise to new architectural concepts. The limits of CMOS scaling and impact on architectures will be briefly reviewed. Alternative approaches in the optical, electronic and analog domains will then be examined for advantages, architectural impact and drawbacks.

  6. WebGL-enabled 3D visualization of a Solar Flare Simulation

    NASA Astrophysics Data System (ADS)

    Chen, A.; Cheung, C. M. M.; Chintzoglou, G.

    2016-12-01

    The visualization of magnetohydrodynamic (MHD) simulations of astrophysical systems such as solar flares often requires specialized software packages (e.g. ParaView and VAPOR). A shortcoming of using such software packages is the inability to share our findings with the public and scientific community in an interactive and engaging manner. By using the JavaScript-based WebGL application programming interface (API) and the three.js JavaScript package, we create an online, in-browser experience for rendering solar flare simulations that will be interactive and accessible to the general public. The WebGL renderer displays objects such as vector flow fields, streamlines and textured isosurfaces. This allows the user to explore the spatial relation between the solar coronal magnetic field and the thermodynamic structure of the plasma in which the magnetic field is embedded. Plans for extending the features of the renderer will also be presented.

  7. A unified account of gloss and lightness perception in terms of gamut relativity.

    PubMed

    Vladusich, Tony

    2013-08-01

    A recently introduced computational theory of visual surface representation, termed gamut relativity, overturns the classical assumption that brightness, lightness, and transparency constitute perceptual dimensions corresponding to the physical dimensions of luminance, diffuse reflectance, and transmittance, respectively. Here I extend the theory to show how surface gloss and lightness can be understood in a unified manner in terms of the vector computation of "layered representations" of surface and illumination properties, rather than as perceptual dimensions corresponding to diffuse and specular reflectance, respectively. The theory simulates the effects of image histogram skewness on surface gloss/lightness and lightness constancy as a function of specular highlight intensity. More generally, gamut relativity clarifies, unifies, and generalizes a wide body of previous theoretical and experimental work aimed at understanding how the visual system parses the retinal image into layered representations of surface and illumination properties.

  8. The DES Bright Arcs Survey: Hundreds of Candidate Strongly Lensed Galaxy Systems from the Dark Energy Survey Science Verification and Year 1 Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diehl, H. T.; Buckley-Geer, E. J.; Lindgren, K. A.

    We report the results of searches for strong gravitational lens systems in the Dark Energy Survey (DES) Science Verification and Year 1 observations. The Science Verification data span approximately 250 sq. deg. with a median i-band limiting magnitude for extended objects (10σ) of 23.0. The Year 1 data span approximately 2000 sq. deg. and have an i-band limiting magnitude for extended objects (10σ) of 22.9. As these data sets are both wide and deep, they are particularly useful for identifying strong gravitational lens candidates. Potential strong gravitational lens candidate systems were initially identified based on a color and magnitude selection in the DES object catalogs or because the system is at the location of a previously identified galaxy cluster. Cutout images of potential candidates were then visually scanned using an object viewer and numerically ranked according to whether or not we judged them to be likely strong gravitational lens systems. Having scanned nearly 400,000 cutouts, we present 374 candidate strong lens systems, of which 348 are identified for the first time. We provide the R.A. and decl., the magnitudes and photometric properties of the lens and source objects, and the distance (radius) of the source(s) from the lens center for each system.

  9. The DES Bright Arcs Survey: Hundreds of Candidate Strongly Lensed Galaxy Systems from the Dark Energy Survey Science Verification and Year 1 Observations

    NASA Astrophysics Data System (ADS)

    Diehl, H. T.; Buckley-Geer, E. J.; Lindgren, K. A.; Nord, B.; Gaitsch, H.; Gaitsch, S.; Lin, H.; Allam, S.; Collett, T. E.; Furlanetto, C.; Gill, M. S. S.; More, A.; Nightingale, J.; Odden, C.; Pellico, A.; Tucker, D. L.; da Costa, L. N.; Fausti Neto, A.; Kuropatkin, N.; Soares-Santos, M.; Welch, B.; Zhang, Y.; Frieman, J. A.; Abdalla, F. B.; Annis, J.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; Desai, S.; Dietrich, J. P.; Drlica-Wagner, A.; Evrard, A. E.; Finley, D. A.; Flaugher, B.; García-Bellido, J.; Gerdes, D. W.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; James, D. J.; Kuehn, K.; Kuhlmann, S.; Lahav, O.; Li, T. S.; Lima, M.; Maia, M. A. G.; Marshall, J. L.; Menanteau, F.; Miquel, R.; Nichol, R. C.; Nugent, P.; Ogando, R. L. C.; Plazas, A. A.; Reil, K.; Romer, A. K.; Sako, M.; Sanchez, E.; Santiago, B.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Sheldon, E.; Smith, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.; DES Collaboration

    2017-09-01

    We report the results of searches for strong gravitational lens systems in the Dark Energy Survey (DES) Science Verification and Year 1 observations. The Science Verification data span approximately 250 sq. deg. with a median i-band limiting magnitude for extended objects (10σ) of 23.0. The Year 1 data span approximately 2000 sq. deg. and have an i-band limiting magnitude for extended objects (10σ) of 22.9. As these data sets are both wide and deep, they are particularly useful for identifying strong gravitational lens candidates. Potential strong gravitational lens candidate systems were initially identified based on a color and magnitude selection in the DES object catalogs or because the system is at the location of a previously identified galaxy cluster. Cutout images of potential candidates were then visually scanned using an object viewer and numerically ranked according to whether or not we judged them to be likely strong gravitational lens systems. Having scanned nearly 400,000 cutouts, we present 374 candidate strong lens systems, of which 348 are identified for the first time. We provide the R.A. and decl., the magnitudes and photometric properties of the lens and source objects, and the distance (radius) of the source(s) from the lens center for each system.

  10. The visual system of male scale insects

    NASA Astrophysics Data System (ADS)

    Buschbeck, Elke K.; Hauser, Martin

    2009-03-01

    Animal eyes generally fall into two categories: (1) their photoreceptive array is convex, as is typical for camera eyes, including the human eye, or (2) their photoreceptive array is concave, as is typical for the compound eye of insects. There are a few rare examples of the latter eye type having secondarily evolved into the former one. When viewed in a phylogenetic framework, the head morphology of a variety of male scale insects suggests that this group could be one such example. In the Margarodidae (Hemiptera, Coccoidea), males have been described as having compound eyes, while males of some more derived groups only have two single-chamber eyes on each side of the head. Those eyes are situated in the place occupied by the compound eye of other insects. Since male scale insects tend to be rare, little is known about how their visual systems are organized, and what anatomical traits are associated with this evolutionary transition. In adult male Margarodidae, one single-chamber eye (stemmateran ocellus) is present in addition to a compound eye-like region. Our histological investigation reveals that the stemmateran ocellus has an extended retina which is formed by concrete clusters of receptor cells that connect to its own first-order neuropil. In addition, we find that the ommatidia of the compound eyes also share several anatomical characteristics with simple camera eyes. These include shallow units with extended retinas, each of which is connected by its own small nerve to the lamina. These anatomical changes suggest that the margarodid compound eye represents a transitional form to the giant unicornal eyes that have been described in more derived species.

  11. Linguistic Layering: Social Language Development in the Context of Multimodal Design and Digital Technologies

    ERIC Educational Resources Information Center

    Domingo, Myrrh

    2012-01-01

    In our contemporary society, digital texts circulate more readily and extend beyond page-bound formats to include interactive representations such as online newsprint with hyperlinks to audio and video files. This is to say that multimodality combined with digital technologies extends grammar to include voice, visual, and music, among other modes…

  12. Color extended visual cryptography using error diffusion.

    PubMed

    Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu

    2011-01-01

    Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or gray-scale VC schemes; however, they are not sufficient to be applied directly to color shares due to different color structures. Some methods for color visual cryptography are not satisfactory in terms of producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of the original images throughout the color channels, and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
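
    The VIP-constrained error diffusion itself is not reproduced here, but the standard Floyd-Steinberg kernel on which such halftoning rests is short enough to sketch in Python; a colour VIP scheme would run a constrained variant of this per colour channel, which this sketch does not attempt.

        import numpy as np

        def error_diffuse(gray):
            """Floyd-Steinberg error diffusion: gray is a float array in [0, 1];
            returns a binary halftone of the same shape."""
            img = gray.astype(float).copy()
            h, w = img.shape
            out = np.zeros_like(img)
            for y in range(h):
                for x in range(w):
                    out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
                    err = img[y, x] - out[y, x]
                    # spread the quantization error onto unprocessed neighbours
                    if x + 1 < w:
                        img[y, x + 1] += err * 7 / 16
                    if y + 1 < h and x > 0:
                        img[y + 1, x - 1] += err * 3 / 16
                    if y + 1 < h:
                        img[y + 1, x] += err * 5 / 16
                    if y + 1 < h and x + 1 < w:
                        img[y + 1, x + 1] += err * 1 / 16
            return out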

  13. The role of colour in signalling and male choice in the agamid lizard Ctenophorus ornatus.

    PubMed Central

    LeBas, N R; Marshall, N J

    2000-01-01

    Bright coloration and complex visual displays are frequent and well described in many lizard families. Reflectance spectrometry which extends into the ultraviolet (UV) allows measurement of such coloration independent of our visual system. We examined the role of colour in signalling and mate choice in the agamid lizard Ctenophorus ornatus. We found that throat reflectance strongly contrasted against the granite background of the lizards' habitat. The throat may act as a signal via the head-bobbing and push-up displays of C. ornatus. Dorsal coloration provided camouflage against the granite background, particularly in females. C. ornatus was sexually dichromatic for all traits examined including throat UV reflectance which is beyond human visual perception. Female throats were highly variable in spectral reflectance and males preferred females with higher throat chroma between 370 and 400 nm. However, female throat UV chroma is strongly correlated to both throat brightness and chest UV chroma and males may choose females on a combination of these colour variables. There was no evidence that female throat or chest coloration was an indicator of female quality. However, female brightness significantly predicted a female's laying date and, thus, may signal receptivity. One function of visual display in this species appears to be intersexual signalling, resulting in male choice of females. PMID:10737400

  14. The association between reading abilities and visual-spatial attention in Hong Kong Chinese children.

    PubMed

    Liu, Sisi; Liu, Duo; Pan, Zhihui; Xu, Zhengye

    2018-03-25

    A growing body of research suggests that visual-spatial attention is important for reading achievement. However, few studies have been conducted in non-alphabetic orthographies. This study extended the current research to reading development in Chinese, a logographic writing system known for its visual complexity. Eighty Hong Kong Chinese children were selected and divided into poor reader and typical reader groups, based on their performance on measures of reading fluency, Chinese character reading, and reading comprehension. The poor and typical readers were matched on age and nonverbal intelligence. A Posner spatial cueing task was adopted to measure the exogenous and endogenous orienting of visual-spatial attention. Although the typical readers showed the cueing effect in the central cue condition (i.e., responses to targets following valid cues were faster than those to targets following invalid cues), the poor readers did not respond differently in the valid and invalid conditions, suggesting an impairment of the endogenous orienting of attention. The two groups, however, showed a similar cueing effect in the peripheral cue condition, indicating intact exogenous orienting in the poor readers. These findings generally support a link between the orienting of covert attention and Chinese reading, providing evidence for the attentional-deficit theory of dyslexia. Copyright © 2018 John Wiley & Sons, Ltd.
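
    The cueing effect referred to above is simply the mean reaction-time difference between invalid- and valid-cue trials. A minimal Python sketch of that computation, with made-up reaction times in the usage comment:

        import numpy as np

        def cueing_effect(rt_valid, rt_invalid):
            """Spatial cueing effect in ms: mean RT on invalid-cue trials minus mean RT on
            valid-cue trials. A positive value indicates orienting by the cue."""
            return float(np.mean(rt_invalid) - np.mean(rt_valid))

        # e.g. cueing_effect([412, 398, 405], [446, 431, 450]) -> about 37 ms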

  15. Topographic organization, number, and laminar distribution of callosal cells connecting visual cortical areas 17 and 18 of normally pigmented and Siamese cats.

    PubMed

    Berman, N E; Grant, S

    1992-07-01

    The callosal connections between visual cortical areas 17 and 18 in adult normally pigmented and "Boston" Siamese cats were studied using degeneration methods, and by transport of WGA-HRP combined with electrophysiological mapping. In normal cats, over 90% of callosal neurons were located in the supragranular layers. The supragranular callosal cell zone spanned the area 17/18 border and extended, on average, some 2-3 mm into both areas to occupy a territory which was roughly co-extensive with the distribution of callosal terminations in these areas. The region of the visual field adjoining the vertical meridian that was represented by neurons in the supragranular callosal cell zone was shown to increase systematically with decreasing visual elevation. Thus, close to the area centralis, receptive-field centers recorded from within this zone extended only up to 5 deg into the contralateral hemifield but at elevations of -10 deg and -40 deg they extended as far as 8 deg and 14 deg, respectively, into this hemifield. This suggests an element of visual non-correspondence in the callosal pathway between these cortical areas, which may be an essential substrate for "coarse" stereopsis at the visual midline. In the Siamese cats, the callosal cell and termination zones in areas 17 and 18 were expanded in width compared to the normal animals, but the major components were less robust. The area 17/18 border was often devoid of callosal axons and, in particular, the number of supragranular layer neurons participating in the pathway were drastically reduced, to only about 25% of those found in the normally pigmented adults. The callosal zones contained representations of the contralateral and ipsilateral hemifields that were roughly mirror-symmetric about the vertical meridian, and both hemifield representations increased with decreasing visual elevation. The extent and severity of the anomalies observed were similar across individual cats, regardless of whether a strabismus was also present. The callosal pathway between these visual cortical areas in the Siamese cat has been considered "silent," since nearly all neurons within its territory are activated only by the contralateral eye. The paucity of supragranular pyramidal neurons involved in the pathway may explain this silence.

  16. In situ gold nanoparticles formation: contrast agent for dental optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Braz, Ana K. S.; Araujo, Renato E. de; Ohulchanskyy, Tymish Y.; Shukla, Shoba; Bergey, Earl J.; Gomes, Anderson S. L.; Prasad, Paras N.

    2012-06-01

    In this work we demonstrate the potential use of gold nanoparticles as contrast agents for the optical coherence tomography (OCT) imaging technique in dentistry. Here, a new in situ photothermal reduction procedure was developed, producing spherical gold nanoparticles inside dentinal layers and tubules. Gold ions were dispersed in the primer of commercially available dental bonding systems. After the application and permeation in dentin by the modified adhesive systems, the dental bonding materials were photopolymerized concurrently with the formation of gold nanoparticles. The gold nanoparticles were visualized by scanning electron microscopy (SEM). The SEM images show the presence of gold nanospheres in the hybrid layer and dentinal tubules. The diameter of the gold nanoparticles was determined to be in the range of 40 to 120 nm. Optical coherence tomography images were obtained in two- and three-dimensions. The distribution of nanoparticles was analyzed and the extended depth of nanosphere production was determined. The results show that the OCT technique, using in situ formed gold nanoparticles as contrast enhancers, can be used to visualize dentin structures in a non-invasive and non-destructive way.

  17. Immunolocalization of choline acetyltransferase of common type in the central brain mass of Octopus vulgaris

    PubMed Central

    Casini, A.; Vaccaro, R.; D'Este, L.; Sakaue, Y.; Bellier, J.P.; Kimura, H.; Renda, T.G.

    2012-01-01

    Acetylcholine, the first neurotransmitter to be identified in the vertebrate frog, is widely distributed throughout the animal kingdom. The presence of a large amount of acetylcholine in the nervous system of cephalopods is well known from several biochemical and physiological studies. However, little is known about the precise distribution of cholinergic structures due to the lack of a suitable histochemical technique for detecting acetylcholine. The most reliable method to visualize cholinergic neurons is the immunohistochemical localization of the enzyme choline acetyltransferase, the synthetic enzyme of acetylcholine. Following our previous study on the distribution patterns of cholinergic neurons in the Octopus vulgaris visual system, using a novel antibody that recognizes choline acetyltransferase of the common type (cChAT), we now extend our investigation to the octopus central brain mass. When applied to sections of octopus central ganglia, immunoreactivity for cChAT was detected in cell bodies of all central brain mass lobes with the notable exception of the subfrontal and subvertical lobes. Positive varicose nerve fibers were observed in the neuropil of all central brain mass lobes. PMID:23027350

  18. Immunolocalization of choline acetyltransferase of common type in the central brain mass of Octopus vulgaris.

    PubMed

    Casini, A; Vaccaro, R; D'Este, L; Sakaue, Y; Bellier, J P; Kimura, H; Renda, T G

    2012-07-19

    Acetylcholine, the first neurotransmitter to be identified in the vertebrate frog, is widely distributed throughout the animal kingdom. The presence of a large amount of acetylcholine in the nervous system of cephalopods is well known from several biochemical and physiological studies. However, little is known about the precise distribution of cholinergic structures due to the lack of a suitable histochemical technique for detecting acetylcholine. The most reliable method to visualize cholinergic neurons is the immunohistochemical localization of the enzyme choline acetyltransferase, the synthetic enzyme of acetylcholine. Following our previous study on the distribution patterns of cholinergic neurons in the Octopus vulgaris visual system, using a novel antibody that recognizes choline acetyltransferase of the common type (cChAT), we now extend our investigation to the octopus central brain mass. When applied to sections of octopus central ganglia, immunoreactivity for cChAT was detected in cell bodies of all central brain mass lobes with the notable exception of the subfrontal and subvertical lobes. Positive varicose nerve fibers were observed in the neuropil of all central brain mass lobes.

  19. Building an Open-source Simulation Platform of Acoustic Radiation Force-based Breast Elastography

    PubMed Central

    Wang, Yu; Peng, Bo; Jiang, Jingfeng

    2017-01-01

    Ultrasound-based elastography including strain elastography (SE), acoustic radiation force Impulse (ARFI) imaging, point shear wave elastography (pSWE) and supersonic shear imaging (SSI) have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. “ground truth”) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity – one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data, were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% compared to the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported. The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and what have been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments. PMID:28075330

  20. Building an open-source simulation platform of acoustic radiation force-based breast elastography

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Peng, Bo; Jiang, Jingfeng

    2017-03-01

    Ultrasound-based elastography techniques, including strain elastography, acoustic radiation force impulse (ARFI) imaging, point shear wave elastography and supersonic shear imaging (SSI), have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels against known underlying soft tissue properties (i.e. ‘ground truth’) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models of increasing complexity (one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data) were used to demonstrate the capabilities of this extended virtual platform. Overall, simulation results were compared with published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% of the predetermined SWS values. In the simple inclusion and complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported. The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and with what has been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments.

  1. Switching from pro re nata to treat-and-extend regimen improves visual acuity in patients with neovascular age-related macular degeneration.

    PubMed

    Kvannli, Line; Krohn, Jørgen

    2017-11-01

    To evaluate the visual outcome after transitioning from a pro re nata (PRN) intravitreal injection regimen to a treat-and-extend (TAE) regimen for patients with neovascular age-related macular degeneration (AMD). A retrospective review of patients who were switched from a PRN regimen with intravitreal injections of bevacizumab, ranibizumab or aflibercept to a TAE regimen. The best corrected visual acuity (BCVA), central retinal thickness (CRT) and type of medication used at baseline, at the time of changing treatment regimen and at the end of the study were analysed. Twenty-one eyes of 21 patients met the inclusion criteria. Prior to the switch, the patients received a mean of 13.8 injections (median, 10; range, 3-39 injections) with the PRN regimen for 44 months (range, 3-100 months), which improved the visual acuity in five patients (24%). After a mean of 6.1 injections (median, 5; range, 3-14 injections) with the TAE regimen over 8 months (range, 2-16 months), the visual acuity improved in 12 patients (57%). The improvement in visual acuity during treatment with the TAE regimen was statistically significant (p = 0.005). The proportion of patients with a visual acuity of 0.2 or better was significantly higher after treatment with the TAE regimen than after treatment with the PRN regimen (p = 0.048). No significant differences in CRT were found between the two treatment regimens. Even after prolonged treatment and a high number of intravitreal injections, switching AMD patients from a PRN regimen to a strict TAE regimen significantly improves visual acuity. © 2017 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  2. Late maturation of visual spatial integration in humans

    PubMed Central

    Kovács, Ilona; Kozma, Petra; Fehér, Ákos; Benedek, György

    1999-01-01

    Visual development is thought to be completed at an early age. We suggest that the maturation of the visual brain is not homogeneous: functions with greater need for early availability, such as visuomotor control, mature earlier, and the development of other visual functions may extend well into childhood. We found significant improvement in children between 5 and 14 years in visual spatial integration by using a contour-detection task. The data show that long-range spatial interactions—subserving the integration of orientational information across the visual field—span a shorter spatial range in children than in adults. Performance in the task improves in a cue-specific manner with practice, which indicates the participation of fairly low-level perceptual mechanisms. We interpret our findings in terms of a protracted development of ventral visual-stream function in humans. PMID:10518600

  3. Real-time Position Based Population Data Analysis and Visualization Using Heatmap for Hazard Emergency Response

    NASA Astrophysics Data System (ADS)

    Ding, R.; He, T.

    2017-12-01

    With the increased popularity of mobile applications and services, there has been a growing demand for more advanced mobile technologies that utilize real-time Location Based Services (LBS) data to support natural hazard response efforts. Compared to traditional sources such as the census bureau, which often can provide only historical and static data, an LBS service can supply more current data to drive a real-time natural hazard response system that more accurately processes and assesses issues such as population density in areas impacted by a hazard. However, manually preparing or preprocessing the data to suit the needs of a particular application would be time-consuming. This research aims to implement a population heatmap visual analytics system based on real-time data for natural disaster emergency management. The system comprises a three-layered architecture, consisting of data collection, data processing, and visual analysis layers. Real-time, location-based data meeting certain aggregation conditions are collected from multiple sources across the Internet, then processed and stored in a cloud-based data store. Parallel computing is utilized to provide fast and accurate access to the pre-processed population data based on criteria such as the disaster event, and to generate a location-based population heatmap as well as other types of visual digital outputs using auxiliary analysis tools. At present, a prototype system has been developed that geographically covers the entire region of China and combines the population heatmap with data from the Earthquake Catalogs database. Preliminary results indicate that the generation of dynamic population density heatmaps by the prototype system has effectively supported rapid earthquake emergency rescue and evacuation efforts, as well as helping responders and decision makers to evaluate and assess earthquake damage. Correlation analyses revealed that the aggregation and movement of people depended on various factors, including the earthquake occurrence time and the location of the epicenter. This research aims to build upon the success of the prototype system in order to improve and extend it to support the analysis of earthquakes and other types of natural hazard events.
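    As a rough illustration of the heatmap-generation step in the visual analysis layer, the sketch below bins location points into a grid and smooths it; the coordinates, grid size and smoothing width are illustrative assumptions, not parameters of the prototype system.

```python
# Minimal sketch: aggregate location points (lon, lat) into a gridded
# population-density heatmap around an epicentre. Field names, grid size
# and the Gaussian smoothing are illustrative assumptions.
import numpy as np


def population_heatmap(lons, lats, bbox, nbins=200, sigma_bins=2.0):
    """Return a 2D density grid for points inside bbox = (lon0, lon1, lat0, lat1)."""
    lon0, lon1, lat0, lat1 = bbox
    grid, xedges, yedges = np.histogram2d(
        lons, lats, bins=nbins, range=[[lon0, lon1], [lat0, lat1]]
    )
    # Cheap separable Gaussian blur so isolated reports still form a visible hot spot.
    k = np.arange(-3 * int(sigma_bins), 3 * int(sigma_bins) + 1)
    kernel = np.exp(-0.5 * (k / sigma_bins) ** 2)
    kernel /= kernel.sum()
    grid = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, grid)
    grid = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, grid)
    return grid, xedges, yedges


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lons = 103.8 + 0.2 * rng.standard_normal(5000)   # synthetic points near an epicentre
    lats = 31.0 + 0.2 * rng.standard_normal(5000)
    heat, _, _ = population_heatmap(lons, lats, bbox=(103.0, 104.6, 30.2, 31.8))
    print(heat.shape, round(float(heat.max()), 2))
```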

  4. Brain-Computer Interfaces With Multi-Sensory Feedback for Stroke Rehabilitation: A Case Study.

    PubMed

    Irimia, Danut C; Cho, Woosang; Ortner, Rupert; Allison, Brendan Z; Ignat, Bogdan E; Edlinger, Guenter; Guger, Christoph

    2017-11-01

    Conventional therapies do not provide paralyzed patients with closed-loop sensorimotor integration for motor rehabilitation. This work presents the recoveriX system, a hardware and software platform that combines a motor imagery (MI)-based brain-computer interface (BCI), functional electrical stimulation (FES), and visual feedback technologies for a complete sensorimotor closed-loop therapy system for poststroke rehabilitation. The proposed system was tested on two chronic stroke patients in a clinical environment. The patients were instructed to imagine the movement of either the left or right hand in random order. During these two MI tasks, two types of feedback were provided: a bar extending to the left or right side of a monitor as visual feedback and passive hand opening stimulated from FES as proprioceptive feedback. Both types of feedback relied on the BCI classification result achieved using common spatial patterns and a linear discriminant analysis classifier. After 10 sessions of recoveriX training, one patient partially regained control of wrist extension in her paretic wrist and the other patient increased the range of middle finger movement by 1 cm. A controlled group study is planned with a new version of the recoveriX system, which will have several improvements. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
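    The classification step named in the abstract, common spatial patterns (CSP) followed by linear discriminant analysis (LDA), can be sketched as below for band-pass-filtered, epoched EEG; this is a generic illustration under those assumptions, not the recoveriX implementation.

```python
# Minimal sketch of CSP + LDA classification of two-class motor imagery EEG.
# Epochs are assumed already band-pass filtered; array shape (trials, channels, samples).
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def csp_filters(epochs_a, epochs_b, n_pairs=3):
    def mean_cov(epochs):
        covs = [np.cov(tr) for tr in epochs]            # channel x channel covariance per trial
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(epochs_a), mean_cov(epochs_b)
    # Generalized eigenvalue problem: directions maximizing the variance ratio between classes.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]    # most discriminative filters from both ends
    return vecs[:, picks].T                             # (2*n_pairs, channels)


def csp_features(epochs, filters):
    proj = np.einsum("fc,tcs->tfs", filters, epochs)    # spatially filter every trial
    var = proj.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True)) # normalized log-variance features


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    left = rng.standard_normal((40, 16, 250))           # synthetic "left hand" epochs
    right = 1.5 * rng.standard_normal((40, 16, 250))    # synthetic "right hand" epochs
    W = csp_filters(left, right)
    X = np.vstack([csp_features(left, W), csp_features(right, W)])
    y = np.r_[np.zeros(40), np.ones(40)]
    clf = LinearDiscriminantAnalysis().fit(X, y)
    print("training accuracy:", clf.score(X, y))
```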

  5. Retinal ganglion cell maps in the brain: implications for visual processing.

    PubMed

    Dhande, Onkar S; Huberman, Andrew D

    2014-02-01

    Everything the brain knows about the content of the visual world is built from the spiking activity of retinal ganglion cells (RGCs). As the output neurons of the eye, RGCs include ∼20 different subtypes, each responding best to a specific feature in the visual scene. Here we discuss recent advances in identifying where different RGC subtypes route visual information in the brain, including which targets they connect to and how their organization within those targets influences visual processing. We also highlight examples where causal links have been established between specific RGC subtypes, their maps of central connections and defined aspects of light-mediated behavior and we suggest the use of techniques that stand to extend these sorts of analyses to circuits underlying visual perception. Copyright © 2013. Published by Elsevier Ltd.

  6. EarthLabs Modules: Engaging Students In Extended, Rigorous Investigations Of The Ocean, Climate and Weather

    NASA Astrophysics Data System (ADS)

    Manley, J.; Chegwidden, D.; Mote, A. S.; Ledley, T. S.; Lynds, S. E.; Haddad, N.; Ellins, K.

    2016-02-01

    EarthLabs, envisioned as a national model for high school Earth or Environmental Science lab courses, is adaptable for both undergraduate and middle school students. The collection includes ten online modules that together present a global view of our planet as a dynamic, interconnected system by engaging learners in extended investigations. EarthLabs supports state and national guidelines for science content, including the NGSS. Four modules directly guide students to discover vital aspects of the oceans, while five other modules incorporate ocean sciences to complete an understanding of Earth's climate system. By interacting with scientific research data, satellite imagery, numerical data, computer visualizations, experiments, and video tutorials, students gain a broad perspective on the key role oceans play in the fishing industry, droughts, coral reefs, hurricanes, the carbon cycle, and life on land and in the seas, and in driving our changing climate. Students explore Earth system processes and build quantitative skills that enable them to objectively evaluate scientific findings for themselves as they move through ordered sequences that guide the learning. As a robust collection, the EarthLabs modules engage students in extended, rigorous investigations that allow a deeper understanding of the ocean, climate and weather. This presentation provides an overview of the ten curriculum modules that comprise the EarthLabs collection developed by TERC, available at http://serc.carleton.edu/earthlabs/index.html. Evaluation data on the modules' effectiveness and use in secondary education classrooms will be summarized.

  7. On Visualizing Mixed-Type Data: A Joint Metric Approach to Profile Construction and Outlier Detection

    ERIC Educational Resources Information Center

    Grané, Aurea; Romera, Rosario

    2018-01-01

    Survey data are usually of mixed type (quantitative, multistate categorical, and/or binary variables). Multidimensional scaling (MDS) is one of the most widely used methodologies for visualizing the profile structure of the data. MDS methods have been introduced in the literature since the 1960s, initially in publications in the psychometrics area.…

  8. The Effect of Perceptual Load on Attention-Induced Motion Blindness: The Efficiency of Selective Inhibition

    ERIC Educational Resources Information Center

    Hay, Julia L.; Milders, Maarten M.; Sahraie, Arash; Niedeggen, Michael

    2006-01-01

    Recent visual marking studies have shown that the carry-over of distractor inhibition can impair the ability of singletons to capture attention if the singleton and distractors share features. The current study extends this finding to first-order motion targets and distractors, clearly separated in time by a visual cue (the letter X). Target…

  9. Using Social Network Graphs as Visualization Tools to Influence Peer Selection Decision-Making Strategies to Access Information about Complex Socioscientific Issues

    ERIC Educational Resources Information Center

    Yoon, Susan A.

    2011-01-01

    This study extends previous research that explores how visualization affordances that computational tools provide and social network analyses that account for individual- and group-level dynamic processes can work in conjunction to improve learning outcomes. The study's main hypothesis is that when social network graphs are used in instruction,…

  10. Psychometric Validation and Normative Data of a Second Chinese Version of the Hooper Visual Organization Test in Children

    ERIC Educational Resources Information Center

    Lin, Yueh-Hsien; Su, Chwen-Yng; Guo, Wei-Yuan; Wuang, Yee-Pay

    2012-01-01

    The Hooper Visual Organization Test (HVOT) is a measure of visuosynthetic ability. Previously, the psychometric properties of the HVOT have been evaluated for Chinese-speaking children aged 5-11 years. This study reports development and further evidence of reliability and validity for a second version involving an extended age range of healthy…

  11. Honeybees can discriminate between Monet and Picasso paintings.

    PubMed

    Wu, Wen; Moreno, Antonio M; Tangen, Jason M; Reinhard, Judith

    2013-01-01

    Honeybees (Apis mellifera) have remarkable visual learning and discrimination abilities that extend beyond learning simple colours, shapes or patterns. They can discriminate landscape scenes, types of flowers, and even human faces. This suggests that in spite of their small brain, honeybees have a highly developed capacity for processing complex visual information, comparable in many respects to vertebrates. Here, we investigated whether this capacity extends to complex images that humans distinguish on the basis of artistic style: Impressionist paintings by Monet and Cubist paintings by Picasso. We show that honeybees learned to simultaneously discriminate between five different Monet and Picasso paintings, and that they do not rely on luminance, colour, or spatial frequency information for discrimination. When presented with novel paintings of the same style, the bees even demonstrated some ability to generalize. This suggests that honeybees are able to discriminate Monet paintings from Picasso ones by extracting and learning the characteristic visual information inherent in each painting style. Our study further suggests that discrimination of artistic styles is not a higher cognitive function that is unique to humans, but simply due to the capacity of animals-from insects to humans-to extract and categorize the visual characteristics of complex images.

  12. The Gestalt Principle of Similarity Benefits Visual Working Memory

    PubMed Central

    Peterson, Dwight J.; Berryhill, Marian E.

    2013-01-01

    Visual working memory (VWM) is essential for many cognitive processes yet it is notably limited in capacity. Visual perception processing is facilitated by Gestalt principles of grouping, such as connectedness, similarity, and proximity. This introduces the question: do these perceptual benefits extend to VWM? If so, can this be an approach to enhance VWM function by optimizing the processing of information? Previous findings demonstrate that several Gestalt principles (connectedness, common region, and spatial proximity) do facilitate VWM performance in change detection tasks (Woodman, Vecera, & Luck, 2003; Xu, 2002a, 2006; Xu & Chun, 2007; Jiang, Olson & Chun, 2000). One prevalent Gestalt principle, similarity, has not been examined with regard to facilitating VWM. Here, we investigated whether grouping by similarity benefits VWM. Experiment 1 established the basic finding that VWM performance could benefit from grouping. Experiment 2 replicated and extended this finding by showing that similarity was only effective when the similar stimuli were proximal. In short, the VWM performance benefit derived from similarity was constrained by spatial proximity such that similar items need to be near each other. Thus, the Gestalt principle of similarity benefits visual perception, but it can provide benefits to VWM as well. PMID:23702981

  13. The Gestalt principle of similarity benefits visual working memory.

    PubMed

    Peterson, Dwight J; Berryhill, Marian E

    2013-12-01

    Visual working memory (VWM) is essential for many cognitive processes, yet it is notably limited in capacity. Visual perception processing is facilitated by Gestalt principles of grouping, such as connectedness, similarity, and proximity. This introduces the question, do these perceptual benefits extend to VWM? If so, can this be an approach to enhance VWM function by optimizing the processing of information? Previous findings have demonstrated that several Gestalt principles (connectedness, common region, and spatial proximity) do facilitate VWM performance in change detection tasks (Jiang, Olson, & Chun, 2000; Woodman, Vecera, & Luck, 2003; Xu, 2002, 2006; Xu & Chun, 2007). However, one prevalent Gestalt principle, similarity, has not been examined with regard to facilitating VWM. Here, we investigated whether grouping by similarity benefits VWM. Experiment 1 established the basic finding that VWM performance could benefit from grouping. Experiment 2 replicated and extended this finding by showing that similarity was only effective when the similar stimuli were proximal. In short, the VWM performance benefit derived from similarity was constrained by spatial proximity, such that similar items need to be near each other. Thus, the Gestalt principle of similarity benefits visual perception, but it can provide benefits to VWM as well.

  14. Visualization of planetary subsurface radar sounder data in three dimensions using stereoscopy

    NASA Astrophysics Data System (ADS)

    Frigeri, A.; Federico, C.; Pauselli, C.; Ercoli, M.; Coradini, A.; Orosei, R.

    2010-12-01

    Planetary subsurface sounding radar data extend our knowledge of planetary surfaces into a third dimension: depth. Interpreting radar echo delays converted into depth often requires comparative analysis with other data, mainly topography, and radar data from different orbits can be used to investigate the spatial continuity of signals from subsurface geologic features. This scenario requires taking into account spatially referenced information in three dimensions. Three-dimensional objects are generally easier to understand if represented in a three-dimensional space, and this representation can be improved by stereoscopic vision. Since its invention in the first half of the 19th century, stereoscopy has been used in a broad range of applications, including scientific visualization. The rapid improvement of computer graphics and the spread of graphics rendering hardware have made it possible to apply the basic principles of stereoscopy in the digital domain, allowing the stereoscopic projection of complex models. Specialized systems for the stereoscopic viewing of scientific data have been available commercially, but proprietary solutions were affordable only to large research institutions. In the last decade, thanks to the GeoWall Consortium, the basics of stereoscopy have been applied to set up stereoscopic viewers based on off-the-shelf hardware, and GeoWalls are now used by several geoscience research institutes and universities. We are exploring techniques for visualizing planetary subsurface sounding radar data in three dimensions and are developing a hardware system for rendering them in a stereoscopic vision system. Several Free Open Source Software tools and libraries are being used, as their level of interoperability is typically high and their licensing allows new functionality to be implemented quickly to meet specific needs as the project progresses. Visualization of planetary radar data in three dimensions is a challenging task, and the exploration of different strategies will lead to the selection of the most appropriate ones for meaningful extraction of information from the products of these innovative instruments.
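    The core geometric idea behind such viewers can be illustrated with a toy example: project the same 3D points from two horizontally offset viewpoints and inspect the resulting disparity, which is what a GeoWall-style display presents separately to each eye. The camera parameters and synthetic "reflector" points below are illustrative assumptions.

```python
# Minimal sketch of parallel-camera stereo projection: two horizontally offset
# pinhole cameras view the same points; nearer points get larger disparity.
import numpy as np


def project(points, eye_x, focal=1.0):
    """Pinhole projection of Nx3 points (camera looks down +z) for a camera at (eye_x, 0, 0)."""
    rel = points - np.array([eye_x, 0.0, 0.0])
    return focal * rel[:, :2] / rel[:, 2:3]


def stereo_pair(points, eye_separation=0.06):
    left = project(points, -eye_separation / 2.0)    # e.g. red channel of an anaglyph
    right = project(points, +eye_separation / 2.0)   # e.g. cyan channel of an anaglyph
    return left, right


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Synthetic "subsurface reflectors": a shallow layer and a deeper layer of points.
    shallow = np.c_[rng.uniform(-1, 1, (50, 2)), np.full(50, 2.0)]
    deep = np.c_[rng.uniform(-1, 1, (50, 2)), np.full(50, 4.0)]
    left, right = stereo_pair(np.vstack([shallow, deep]))
    disparity = left[:, 0] - right[:, 0]
    print("mean disparity, shallow vs deep:",
          round(float(disparity[:50].mean()), 4), round(float(disparity[50:].mean()), 4))
```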

  15. Extending the Cortical Grasping Network: Pre-supplementary Motor Neuron Activity During Vision and Grasping of Objects

    PubMed Central

    Lanzilotto, Marco; Livi, Alessandro; Maranesi, Monica; Gerbella, Marzio; Barz, Falk; Ruther, Patrick; Fogassi, Leonardo; Rizzolatti, Giacomo; Bonini, Luca

    2016-01-01

    Grasping relies on a network of parieto-frontal areas lying on the dorsolateral and dorsomedial parts of the hemispheres. However, the initiation and sequencing of voluntary actions also requires the contribution of mesial premotor regions, particularly the pre-supplementary motor area F6. We recorded 233 F6 neurons from 2 monkeys with chronic linear multishank neural probes during reaching–grasping visuomotor tasks. We showed that F6 neurons play a role in the control of forelimb movements and some of them (26%) exhibit visual and/or motor specificity for the target object. Interestingly, area F6 neurons form 2 functionally distinct populations, showing either visually-triggered or movement-related bursts of activity, in contrast to the sustained visual-to-motor activity displayed by ventral premotor area F5 neurons recorded in the same animals and with the same task during previous studies. These findings suggest that F6 plays a role in object grasping and extend existing models of the cortical grasping network. PMID:27733538

  16. Comparison of 3D cellular imaging techniques based on scanned electron probes: Serial block face SEM vs. Axial bright-field STEM tomography.

    PubMed

    McBride, E L; Rao, A; Zhang, G; Hoyne, J D; Calco, G N; Kuo, B C; He, Q; Prince, A A; Pokrovskaya, I D; Storrie, B; Sousa, A A; Aronova, M A; Leapman, R D

    2018-06-01

    Microscopies based on focused electron probes allow the cell biologist to image the 3D ultrastructure of eukaryotic cells and tissues extending over large volumes, thus providing new insight into the relationship between cellular architecture and function of organelles. Here we compare two such techniques: electron tomography in conjunction with axial bright-field scanning transmission electron microscopy (BF-STEM), and serial block face scanning electron microscopy (SBF-SEM). The advantages and limitations of each technique are illustrated by their application to determining the 3D ultrastructure of human blood platelets, by considering specimen geometry, specimen preparation, beam damage and image processing methods. Many features of the complex membranes composing the platelet organelles can be determined from both approaches, although STEM tomography offers a higher ∼3 nm isotropic pixel size, compared with ∼5 nm for SBF-SEM in the plane of the block face and ∼30 nm in the perpendicular direction. In this regard, we demonstrate that STEM tomography is advantageous for visualizing the platelet canalicular system, which consists of an interconnected network of narrow (∼50-100 nm) membranous cisternae. In contrast, SBF-SEM enables visualization of complete platelets, each of which extends ∼2 µm in minimum dimension, whereas BF-STEM tomography can typically only visualize approximately half of the platelet volume due to a rapid non-linear loss of signal in specimens of thickness greater than ∼1.5 µm. We also show that the limitations of each approach can be ameliorated by combining 3D and 2D measurements using a stereological approach. Copyright © 2018. Published by Elsevier Inc.

  17. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a purely web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To match the extendibility of current local medical image processing software, each layer is highly independent of the others. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources on the client side. The user interface is designed such that users can select appropriate parameters for practical research and clinical studies. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
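    The layering described above can be illustrated with a deliberately simplified sketch: an algorithm layer with no web dependencies, a wrapper layer that converts between transportable types and image arrays, and a server-communication layer that decodes a request and dispatches it. The thresholding operation and the JSON request format are illustrative assumptions, not the software's actual API.

```python
# Minimal sketch of a layered separation between algorithm, wrapper and
# server-communication layers; the web-UI layer would issue the JSON request
# below via AJAX. All names and formats are illustrative.
import json
import numpy as np


# --- algorithm layer: pure image processing, no web dependencies -------------
def threshold_segment(image: np.ndarray, level: float) -> np.ndarray:
    return (image >= level).astype(np.uint8)


# --- wrapper layer: adapts algorithm inputs/outputs to transportable types ---
class SegmentationWrapper:
    def run(self, pixels: list, width: int, height: int, level: float) -> list:
        image = np.asarray(pixels, dtype=float).reshape(height, width)
        return threshold_segment(image, level).ravel().tolist()


# --- server-communication layer: decodes a request, dispatches, encodes ------
class RequestHandler:
    def __init__(self):
        self.wrapper = SegmentationWrapper()

    def handle(self, request_json: str) -> str:
        req = json.loads(request_json)
        mask = self.wrapper.run(req["pixels"], req["width"], req["height"], req["level"])
        return json.dumps({"mask": mask})


if __name__ == "__main__":
    handler = RequestHandler()
    request = json.dumps({"pixels": [0, 10, 200, 255], "width": 2, "height": 2, "level": 100})
    print(handler.handle(request))
```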

  18. Perception of straightness and parallelism with minimal distance information.

    PubMed

    Rogers, Brian; Naumenko, Olga

    2016-07-01

    The ability of human observers to judge the straightness and parallelism of extended lines has been a neglected topic of study since von Helmholtz's initial observations 150 years ago. He showed that there were significant misperceptions of the straightness of extended lines seen in the peripheral visual field. The present study focused on the perception of extended lines (spanning 90° visual angle) that were directly fixated in the visual environment of a planetarium where there was only minimal information about the distance to the lines. Observers were asked to vary the curvature of 1 or more lines until they appeared to be straight and/or parallel, ignoring any perceived curvature in depth. When the horizon between the ground and the sky was visible, the results showed that observers' judgements of the straightness of a single line were significantly biased away from the veridical, great circle locations, and towards equal elevation settings. Similar biases can be seen in the jet trails of aircraft flying across the sky and in Rogers and Anstis's new moon illusion (Perception, 42(Abstract supplement) 18, 2013, 2016). The biasing effect of the horizon was much smaller when observers were asked to judge the straightness and parallelism of 2 or more extended lines. We interpret the results as showing that, in the absence of adequate distance information, observers tend to perceive the projected lines as lying on an approximately equidistant, hemispherical surface and that their judgements of straightness and parallelism are based on the perceived separation of the lines superimposed on that surface.

  19. CellMap visualizes protein-protein interactions and subcellular localization

    PubMed Central

    Dallago, Christian; Goldberg, Tatyana; Andrade-Navarro, Miguel Angel; Alanis-Lobato, Gregorio; Rost, Burkhard

    2018-01-01

    Many tools visualize protein-protein interaction (PPI) networks. The tool introduced here, CellMap, adds one crucial novelty by visualizing PPI networks in the context of subcellular localization, i.e. the location in the cell or cellular component in which a PPI happens. Users can upload images of cells and define areas of interest against which PPIs for selected proteins are displayed (by default on a cartoon of a cell). Annotations of localization are provided by the user or through our in-house database. The visualizer and server are written in JavaScript, making CellMap easy to customize and to extend by researchers and developers. PMID:29497493

  20. Development and implementation of Inflight Neurosensory Training for Adaptation/Readaptation (INSTAR)

    NASA Technical Reports Server (NTRS)

    Harm, D. L.; Guedry, F. E.; Parker, Donald E.; Reschke, M. F.

    1993-01-01

    Resolution of space motion sickness, and improvements in spatial orientation, posture and motion control, and compensatory eye movements occur as a function of neurosensory and sensorimotor adaptation to microgravity. These adaptive responses, however, are inappropriate for return to Earth. Even following relatively brief space Shuttle missions, significant re-adaptation disturbances related to visual performance, locomotion, and perceived self-motion have been observed. Russian reports suggest that these disturbances increase with mission duration and may be severe following landing after prolonged microgravity exposure such as during a voyage to Mars. Consequently, there is a need to enable the astronauts to be prepared for and more quickly re-adapt to a gravitational environment following extended space missions. Several devices to meet this need are proposed including a virtual environment - centrifuge device (VECD). A short-arm centrifuge will provide centripetal acceleration parallel to the astronaut's longitudinal body axis and a restraint system will be configured to permit head movements only in the plane of rotation (to prevent 'cross-coupling'). A head-mounted virtual environment system will be used to develop appropriate 'calibration' between visual motion/orientation signals and inertial motion/orientation signals generated by the centrifuge. This will permit vestibular, visual and somatosensory signal matches to bias central interpretation of otolith signals toward the 'position' responses and to recalibrate the vestibulo-ocular reflex (VOR).

  1. Visualized analysis of mixed numeric and categorical data via extended self-organizing map.

    PubMed

    Hsu, Chung-Chian; Lin, Shu-Han

    2012-01-01

    Many real-world datasets are of mixed types, having numeric and categorical attributes. Even though difficult, analyzing mixed-type datasets is important. In this paper, we propose an extended self-organizing map (SOM), called MixSOM, which utilizes a data structure distance hierarchy to facilitate the handling of numeric and categorical values in a direct, unified manner. Moreover, the extended model regularizes the prototype distance between neighboring neurons in proportion to their map distance so that structures of the clusters can be portrayed better on the map. Extensive experiments on several synthetic and real-world datasets are conducted to demonstrate the capability of the model and to compare MixSOM with several existing models including Kohonen's SOM, the generalized SOM and visualization-induced SOM. The results show that MixSOM is superior to the other models in reflecting the structure of the mixed-type data and facilitates further analysis of the data such as exploration at various levels of granularity.
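    The distance-hierarchy idea for handling numeric and categorical attributes in one distance measure can be illustrated with a small concept tree; the toy hierarchy, equal link weights and range-normalized numeric distance below are simplifying assumptions, and the actual MixSOM model is richer (prototypes can lie anywhere along a hierarchy path and are updated there during training).

```python
# Simplified sketch of a mixed-type distance: categorical values are leaves of
# a small concept tree and their distance is the weighted tree-path length;
# numeric attributes use a range-normalized absolute difference.
import numpy as np

# Toy concept hierarchy for a "beverage" attribute: value -> parent.
PARENT = {"coke": "soft_drink", "pepsi": "soft_drink", "tea": "hot_drink",
          "coffee": "hot_drink", "soft_drink": "drink", "hot_drink": "drink",
          "drink": None}


def path_to_root(value):
    path = []
    while value is not None:
        path.append(value)
        value = PARENT[value]
    return path


def categorical_distance(a, b, link_weight=1.0):
    """Weighted tree-path distance between two leaf values."""
    pa, pb = path_to_root(a), path_to_root(b)
    common = len(set(pa) & set(pb))           # shared ancestors (including the root)
    return link_weight * (len(pa) + len(pb) - 2 * common)


def mixed_distance(x, y, num_ranges):
    """x, y = (numeric_vector, categorical_value); numeric part is range-normalized."""
    num_d = np.abs(np.asarray(x[0]) - np.asarray(y[0])) / np.asarray(num_ranges)
    cat_d = categorical_distance(x[1], y[1])
    return float(np.sqrt(np.sum(num_d ** 2) + cat_d ** 2))


if __name__ == "__main__":
    a = ([25.0, 3.2], "coke")
    b = ([31.0, 2.9], "tea")
    c = ([26.0, 3.1], "pepsi")
    ranges = [60.0, 5.0]
    print(mixed_distance(a, b, ranges), mixed_distance(a, c, ranges))  # b is farther from a than c
```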

  2. 3D Printed Molecules and Extended Solid Models for Teaching Symmetry and Point Groups

    ERIC Educational Resources Information Center

    Scalfani, Vincent F.; Vaid, Thomas P.

    2014-01-01

    Tangible models help students and researchers visualize chemical structures in three dimensions (3D). 3D printing offers a unique and straightforward approach to fabricate plastic 3D models of molecules and extended solids. In this article, we prepared a series of digital 3D design files of molecular structures that will be useful for teaching…

  3. Three-Dimensional User Interfaces for Immersive Virtual Reality

    NASA Technical Reports Server (NTRS)

    vanDam, Andries

    1997-01-01

    The focus of this grant was to experiment with novel user interfaces for immersive Virtual Reality (VR) systems, and thus to advance the state of the art of user interface technology for this domain. Our primary test application was a scientific visualization application for viewing Computational Fluid Dynamics (CFD) datasets. This technology has been transferred to NASA via periodic status reports and papers relating to this grant that have been published in conference proceedings. This final report summarizes the research completed over the past year, and extends last year's final report of the first three years of the grant.

  4. Network propagation in the cytoscape cyberinfrastructure.

    PubMed

    Carlin, Daniel E; Demchak, Barry; Pratt, Dexter; Sage, Eric; Ideker, Trey

    2017-10-01

    Network propagation is an important and widely used algorithm in systems biology, with applications in protein function prediction, disease gene prioritization, and patient stratification. However, up to this point it has required significant expertise to run. Here we extend the popular network analysis program Cytoscape to perform network propagation as an integrated function. Such integration greatly increases the access to network propagation by putting it in the hands of biologists and linking it to the many other types of network analysis and visualization available through Cytoscape. We demonstrate the power and utility of the algorithm by identifying mutations conferring resistance to Vemurafenib.
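    Network propagation itself is commonly implemented as a random walk with restart over a normalized adjacency matrix; the sketch below shows that core iteration on a toy graph. The graph, seed scores and restart parameter are illustrative, not part of the Cytoscape implementation.

```python
# Minimal sketch of network propagation (random walk with restart) on a
# protein-protein interaction graph. The toy graph and alpha value are illustrative.
import numpy as np


def propagate(adj, seed_scores, alpha=0.8, tol=1e-9, max_iter=1000):
    """Iterate F <- alpha * Wnorm @ F + (1 - alpha) * F0 until convergence."""
    deg = adj.sum(axis=1)
    # Symmetric degree normalization: W_ij / sqrt(d_i * d_j).
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    w = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    f = seed_scores.astype(float).copy()
    for _ in range(max_iter):
        f_next = alpha * w @ f + (1.0 - alpha) * seed_scores
        if np.linalg.norm(f_next - f, 1) < tol:
            return f_next
        f = f_next
    return f


if __name__ == "__main__":
    # 5-node toy network: nodes 0, 1, 2 form a triangle; 3 hangs off 2; 4 hangs off 3.
    adj = np.array([[0, 1, 1, 0, 0],
                    [1, 0, 1, 0, 0],
                    [1, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 0]], dtype=float)
    seeds = np.array([1.0, 0, 0, 0, 0])     # node 0 carries the prior signal
    print(np.round(propagate(adj, seeds), 3))
```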

  5. Extended endoscopic transsphenoidal approach infrachiasmatic corridor.

    PubMed

    Ceylan, Savas; Anik, Ihsan; Koc, Kenan; Cabuk, Burak

    2015-01-01

    An extended endoscopic transsphenoidal approach is required for skull base lesions extending into the suprasellar area. An inferior approach using the infrachiasmatic corridor allows access to such lesions along the direction of tumor growth, which is favorable for extended transsphenoidal approaches. The infrachiasmatic corridor is a safer route for inferior approaches; it is bounded by the basal arachnoid membrane and Liliequist's membrane with its diencephalic and mesencephalic leaves, and extends from the optic canal and tuberculum sellae to the corpus mamillare. We performed an extended endoscopic approach using the infrachiasmatic corridor in 52 cases, including tuberculum sellae meningiomas (n = 23), craniopharyngiomas (n = 16), suprasellar Rathke's cleft cysts (n = 6), pituitary adenomas (n = 2), fibrous dysplasia (n = 1), infundibular granulosa cell tumors (n = 2), and epidermoid tumors (n = 2). Total resection was achieved in 17 of 23 cases (74%) of tuberculum sellae meningioma using the infrachiasmatic approach. Twenty of these patients presented with visual disorders, and 14 of them improved. There were two postoperative cerebrospinal fluid (CSF) leaks, one case of transient diabetes insipidus and one of permanent diabetes insipidus. Sixteen patients were operated on by the infrachiasmatic approach for craniopharyngiomas; improvement was achieved in seven of the eight patients who presented with visual disorders. Complete tumor resection was performed in 10 of these 16 cases and cyst aspiration in 4 cases, and remnants were left in two cases. Postoperative CSF leakage was seen in two patients. The infrachiasmatic corridor provides an easier and safer inferior route for the removal of middle midline skull base lesions in selected cases.

  6. Evaluating Alignment of Shapes by Ensemble Visualization

    PubMed Central

    Raj, Mukund; Mirzargar, Mahsa; Preston, J. Samuel; Kirby, Robert M.; Whitaker, Ross T.

    2016-01-01

    The visualization of variability in surfaces embedded in 3D, which is a type of ensemble uncertainty visualization, provides a means of understanding the underlying distribution of a collection or ensemble of surfaces. Although ensemble visualization for isosurfaces has been described in the literature, we conduct an expert-based evaluation of various ensemble visualization techniques in a particular medical imaging application: the construction of atlases or templates from a population of images. In this work, we extend contour boxplot to 3D, allowing us to evaluate it against an enumeration-style visualization of the ensemble members and other conventional visualizations used by atlas builders, namely examining the atlas image and the corresponding images/data provided as part of the construction process. We present feedback from domain experts on the efficacy of contour boxplot compared to other modalities when used as part of the atlas construction and analysis stages of their work. PMID:26186768

  7. Are Deaf Students Visual Learners?

    PubMed Central

    Marschark, Marc; Morrison, Carolyn; Lukomski, Jennifer; Borgna, Georgianna; Convertino, Carol

    2013-01-01

    It is frequently assumed that by virtue of their hearing losses, deaf students are visual learners. Deaf individuals have some visual-spatial advantages relative to hearing individuals, but most are linked to the use of sign language rather than to auditory deprivation. How such cognitive differences might affect academic performance has been investigated only rarely. This study examined relations among deaf college students’ language and visual-spatial abilities, mathematics problem solving, and hearing thresholds. Results extended some previous findings and clarified others. Contrary to what might be expected, hearing students exhibited visual-spatial skills equal to or better than those of deaf students. Scores on a Spatial Relations task were associated with better mathematics problem solving. Relations among the several variables, however, suggested that deaf students are no more likely to be visual learners than hearing students and that their visual-spatial skill may be related more to their hearing than to sign language skills. PMID:23750095

  8. General visual robot controller networks via artificial evolution

    NASA Astrophysics Data System (ADS)

    Cliff, David; Harvey, Inman; Husbands, Philip

    1993-08-01

    We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.
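    A minimal sketch of the evolutionary loop described above, under toy assumptions: a genetic algorithm mutates and selects the weights of a small recurrent network that maps sensor activations to motor-like outputs. The network size, fitness function and GA settings are placeholders, not the authors' genetic encoding or robot task.

```python
# Toy sketch of evolving recurrent-network controller weights with a simple
# mutation-plus-truncation-selection genetic algorithm.
import numpy as np

N_SENSORS, N_NEURONS = 4, 6                         # low-resolution inputs, tiny recurrent net
GENOME_LEN = N_NEURONS * (N_SENSORS + N_NEURONS)    # input weights + recurrent weights


def controller_step(genome, state, sensors):
    w_in = genome[:N_NEURONS * N_SENSORS].reshape(N_NEURONS, N_SENSORS)
    w_rec = genome[N_NEURONS * N_SENSORS:].reshape(N_NEURONS, N_NEURONS)
    return np.tanh(w_in @ sensors + w_rec @ state)


def fitness(genome, rng, steps=50):
    """Toy task: the first two neuron outputs (the 'motors') should track the mean sensor value."""
    state = np.zeros(N_NEURONS)
    score = 0.0
    for _ in range(steps):
        sensors = rng.uniform(0, 1, N_SENSORS)
        state = controller_step(genome, state, sensors)
        score -= abs(state[:2].mean() - sensors.mean())
    return score


def evolve(pop_size=30, generations=40, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.standard_normal((pop_size, GENOME_LEN))
    for _ in range(generations):
        scores = np.array([fitness(g, rng) for g in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]               # truncation selection
        children = parents + sigma * rng.standard_normal(parents.shape)  # mutation only
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(g, rng) for g in pop])]


if __name__ == "__main__":
    best = evolve()
    print("best genome norm:", round(float(np.linalg.norm(best)), 2))
```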

  9. Visual perceptual load induces inattentional deafness.

    PubMed

    Macdonald, James S P; Lavie, Nilli

    2011-08-01

    In this article, we establish a new phenomenon of "inattentional deafness" and highlight the level of load on visual attention as a critical determinant of this phenomenon. In three experiments, we modified an inattentional blindness paradigm to assess inattentional deafness. Participants made either a low- or high-load visual discrimination concerning a cross shape (respectively, a discrimination of line color or of line length with a subtle length difference). A brief pure tone was presented simultaneously with the visual task display on a final trial. Failures to notice the presence of this tone (i.e., inattentional deafness) reached a rate of 79% in the high-visual-load condition, significantly more than in the low-load condition. These findings establish the phenomenon of inattentional deafness under visual load, thereby extending the load theory of attention (e.g., Lavie, Journal of Experimental Psychology. Human Perception and Performance, 25, 596-616, 1995) to address the cross-modal effects of visual perceptual load.

  10. A Joint Gaussian Process Model for Active Visual Recognition with Expertise Estimation in Crowdsourcing

    PubMed Central

    Long, Chengjiang; Hua, Gang; Kapoor, Ashish

    2015-01-01

    We present a noise resilient probabilistic model for active learning of a Gaussian process classifier from crowds, i.e., a set of noisy labelers. It explicitly models both the overall label noise and the expertise level of each individual labeler with two levels of flip models. Expectation propagation is adopted for efficient approximate Bayesian inference of our probabilistic model for classification, based on which, a generalized EM algorithm is derived to estimate both the global label noise and the expertise of each individual labeler. The probabilistic nature of our model immediately allows the adoption of the prediction entropy for active selection of data samples to be labeled, and active selection of high quality labelers based on their estimated expertise to label the data. We apply the proposed model for four visual recognition tasks, i.e., object category recognition, multi-modal activity recognition, gender recognition, and fine-grained classification, on four datasets with real crowd-sourced labels from the Amazon Mechanical Turk. The experiments clearly demonstrate the efficacy of the proposed model. In addition, we extend the proposed model with the Predictive Active Set Selection Method to speed up the active learning system, whose efficacy is verified by conducting experiments on the first three datasets. The results show our extended model can not only preserve a higher accuracy, but also achieve a higher efficiency. PMID:26924892
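    The active-selection criterion mentioned in the abstract, prediction entropy, can be sketched independently of the Gaussian process machinery: compute the entropy of each sample's predicted class distribution and query the most uncertain ones. The probabilities below are arbitrary stand-ins for the model's posterior predictions.

```python
# Minimal sketch of entropy-based active sample selection.
import numpy as np


def predictive_entropy(probs):
    """Shannon entropy per row of an (n_samples, n_classes) probability matrix."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)


def select_most_uncertain(probs, k=2):
    """Indices of the k samples with the highest predictive entropy."""
    return np.argsort(-predictive_entropy(probs))[:k]


if __name__ == "__main__":
    probs = np.array([[0.98, 0.02],
                      [0.55, 0.45],    # near the decision boundary -> high entropy
                      [0.70, 0.30],
                      [0.50, 0.50]])
    print(select_most_uncertain(probs, k=2))   # expected: samples 3 and 1
```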

  11. Object grouping based on real-world regularities facilitates perception by reducing competitive interactions in visual cortex

    PubMed Central

    Kaiser, Daniel; Stein, Timo; Peelen, Marius V.

    2014-01-01

    In virtually every real-life situation humans are confronted with complex and cluttered visual environments that contain a multitude of objects. Because of the limited capacity of the visual system, objects compete for neural representation and cognitive processing resources. Previous work has shown that such attentional competition is partly object based, such that competition among elements is reduced when these elements perceptually group into an object based on low-level cues. Here, using functional MRI (fMRI) and behavioral measures, we show that the attentional benefit of grouping extends to higher-level grouping based on the relative position of objects as experienced in the real world. An fMRI study designed to measure competitive interactions among objects in human visual cortex revealed reduced neural competition between objects when these were presented in commonly experienced configurations, such as a lamp above a table, relative to the same objects presented in other configurations. In behavioral visual search studies, we then related this reduced neural competition to improved target detection when distracter objects were shown in regular configurations. Control studies showed that low-level grouping could not account for these results. We interpret these findings as reflecting the grouping of objects based on higher-level spatial-relational knowledge acquired through a lifetime of seeing objects in specific configurations. This interobject grouping effectively reduces the number of objects that compete for representation and thereby contributes to the efficiency of real-world perception. PMID:25024190

  12. Student Visual Communication of Evolution

    NASA Astrophysics Data System (ADS)

    Oliveira, Alandeom W.; Cook, Kristin

    2017-06-01

    Despite growing recognition of the importance of visual representations to science education, previous research has given attention mostly to verbal modalities of evolution instruction. Visual aspects of classroom learning of evolution are yet to be systematically examined by science educators. The present study attends to this issue by exploring the types of evolutionary imagery deployed by secondary students. Our visual design analysis revealed that students resorted to two larger categories of images when visually communicating evolution: spatial metaphors (images that provided a spatio-temporal account of human evolution as a metaphorical "walk" across time and space) and symbolic representations ("icons of evolution" such as personal portraits of Charles Darwin that simply evoked evolutionary theory rather than metaphorically conveying its conceptual contents). It is argued that students need opportunities to collaboratively critique evolutionary imagery and to extend their visual perception of evolution beyond dominant images.

  13. Visual hallucinatory syndromes and the anatomy of the visual brain.

    PubMed

    Santhouse, A M; Howard, R J; ffytche, D H

    2000-10-01

    We have set out to identify phenomenological correlates of cerebral functional architecture within Charles Bonnet syndrome (CBS) hallucinations by looking for associations between specific hallucination categories. Thirty-four CBS patients were examined with a structured interview/questionnaire to establish the presence of 28 different pathological visual experiences. Associations between categories of pathological experience were investigated by an exploratory factor analysis. Twelve of the pathological experiences partitioned into three segregated syndromic clusters. The first cluster consisted of hallucinations of extended landscape scenes and small figures in costumes with hats; the second, hallucinations of grotesque, disembodied and distorted faces with prominent eyes and teeth; and the third, visual perseveration and delayed palinopsia. The three visual psycho-syndromes mirror the segregation of hierarchical visual pathways into streams and suggest a novel theoretical framework for future research into the pathophysiology of neuropsychiatric syndromes.

  14. 2D microwave imaging reflectometer electronics.

    PubMed

    Spear, A G; Domier, C W; Hu, X; Muscatello, C M; Ren, X; Tobias, B J; Luhmann, N C

    2014-11-01

    A 2D microwave imaging reflectometer system has been developed to visualize electron density fluctuations on the DIII-D tokamak. Simultaneously illuminated at four probe frequencies, large aperture optics image reflections from four density-dependent cutoff surfaces in the plasma over an extended region of the DIII-D plasma. Localized density fluctuations in the vicinity of the plasma cutoff surfaces modulate the plasma reflections, yielding a 2D image of electron density fluctuations. Details are presented of the receiver down conversion electronics that generate the in-phase (I) and quadrature (Q) reflectometer signals from which 2D density fluctuation data are obtained. Also presented are details on the control system and backplane used to manage the electronics as well as an introduction to the computer based control program.
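    The in-phase/quadrature (I and Q) signal generation referred to above amounts to mixing the received intermediate-frequency signal with cosine and sine references and low-pass filtering; the sketch below shows that step in software under illustrative assumptions about the IF frequency, filter and test signal (the actual system performs this in analog receiver electronics).

```python
# Minimal sketch of I/Q demodulation: mix with cos/sin references, low-pass
# filter, and recover the fluctuation-induced phase from atan2(Q, I).
import numpy as np


def iq_demodulate(signal, fs, f_if, taps=201):
    t = np.arange(signal.size) / fs
    i_mix = signal * np.cos(2 * np.pi * f_if * t)
    q_mix = -signal * np.sin(2 * np.pi * f_if * t)
    # Crude low-pass: a moving average long compared with one IF period.
    kernel = np.ones(taps) / taps
    i = 2 * np.convolve(i_mix, kernel, mode="same")
    q = 2 * np.convolve(q_mix, kernel, mode="same")
    return i, q


if __name__ == "__main__":
    fs, f_if = 1.0e6, 50.0e3                      # 1 MHz sampling, 50 kHz IF (illustrative)
    t = np.arange(4000) / fs
    phase = 0.4 * np.sin(2 * np.pi * 300 * t)     # slow fluctuation-induced phase modulation
    s = np.cos(2 * np.pi * f_if * t + phase)
    i, q = iq_demodulate(s, fs, f_if)
    est = np.unwrap(np.arctan2(q, i))[500:-500]   # discard filter edge effects
    print("max phase error:", float(np.max(np.abs(est - phase[500:-500]))))
```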

  15. Privacy Law As It Affected Journalism, 1890-1978: Privacy Is a Visual Tort.

    ERIC Educational Resources Information Center

    Dow, Caroline

    To determine the treatment of visual journalism by privacy law from the origins of privacy law in 1890 until 1978, an analysis was made of the mass media legal cases occurring between those years. The cases were subjectively divided into three categories: those that established or extended a freedom of the press or the right of a defendant to…

  16. Extended depth of focus contact lenses vs. two commercial multifocals: Part 2. Visual performance after 1 week of lens wear.

    PubMed

    Bakaraju, Ravi C; Tilia, Daniel; Sha, Jennifer; Diec, Jennie; Chung, Jiyoon; Kho, Danny; Delaney, Shona; Munro, Anna; Thomas, Varghese

    To compare the visual performance of prototype contact lenses designed via deliberate manipulation of higher-order spherical aberrations to extend-depth-of-focus with two commercial multifocals, after 1 week of lens wear. In a prospective, participant-masked, cross-over, randomized, 1-week dispensing clinical-trial, 43 presbyopes [age: 42-63 years] each wore AIROPTIX Aqua multifocal (AOMF), ACUVUE OASYS for presbyopia (AOP) and extended-depth-of-focus prototypes (EDOF) appropriate to their add requirements. Measurements comprised high-contrast-visual-acuity (HCVA) at 6m, 70cm, 50cm and 40cm; low-contrast-visual-acuity (LCVA) and contrast-sensitivity (CS) at 6m and stereopsis at 40cm. A self-administered questionnaire on a numeric-rating-scale (1-10) assessed subjective visual performance comprising clarity-of-vision and lack-of-ghosting at various distances during day/night-viewing conditions and overall-vision-satisfaction. EDOF was significantly better than AOMF and AOP for HCVA averaged across distances (p≤0.038); significantly worse than AOMF for LCVA (p=0.021) and significantly worse than AOMF for CS in medium and high add-groups (p=0.006). None of these differences were clinically significant (≤2 letters). EDOF was significantly better than AOMF and AOP for mean stereoacuity (36 and 13 seconds-of-arc, respectively: p≤0.05). For clarity-of-vision, EDOF was significantly better than AOP at all distances and AOMF at intermediate and near (p≤0.028). For lack-of-ghosting averaged across distances, EDOF was significantly better than AOP (p<0.001) but not AOMF (p=0.186). EDOF was significantly better than AOMF and AOP for overall-vision-satisfaction (p≤0.024). EDOF provides better intermediate and near vision performance than either AOMF or AOP with no difference for distance vision after 1 week of lens wear. Copyright © 2017 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  17. Comparison of visual outcomes after bilateral implantation of extended range of vision and trifocal intraocular lenses.

    PubMed

    Ruiz-Mesa, Ramón; Abengózar-Vela, Antonio; Aramburu, Ana; Ruiz-Santos, María

    2017-06-26

    To compare visual outcomes after cataract surgery with bilateral implantation of 2 intraocular lenses (IOLs): extended range of vision and trifocal. Each group of this prospective study comprised 40 eyes (20 patients). Phacoemulsification followed by bilateral implantation of a FineVision IOL (group 1) or a Symfony IOL (group 2) was performed. The following outcomes were assessed up to 1 year postoperatively: binocular uncorrected distance visual acuity (UDVA), binocular uncorrected intermediate visual acuity (UIVA) at 60 cm, binocular uncorrected near visual acuity (UNVA) at 40 cm, spherical equivalent (SE) refraction, defocus curves, mesopic and photopic contrast sensitivity, halometry, posterior capsule opacification (PCO), and responses to a patient questionnaire. The mean binocular values in group 1 and group 2, respectively, were SE -0.15 ± 0.25 D and -0.19 ± 0.18 D; UDVA 0.01 ± 0.03 logMAR and 0.01 ± 0.02 logMAR; UIVA 0.11 ± 0.08 logMAR and 0.09 ± 0.08 logMAR; UNVA 0.06 ± 0.07 logMAR and 0.17 ± 0.06 logMAR. Difference in UNVA between IOLs (p<0.05) was statistically significant. There were no significant differences in contrast sensitivity, halometry, or PCO between groups. Defocus curves were similar between groups from 0 D to -2 D, but showed significant differences from -2.50 D to -4.00 D (p<0.05). Both IOLs provided excellent distance and intermediate visual outcomes. The FineVision IOL showed better near visual acuity. Predictability of the refractive results and optical performance were excellent; all patients achieved spectacle independence. The 2 IOLs gave similar and good contrast sensitivity in photopic and mesopic conditions and low perception of halos by patients.

  18. Sensor fusion of phase measuring profilometry and stereo vision for three-dimensional inspection of electronic components assembled on printed circuit boards.

    PubMed

    Hong, Deokhwa; Lee, Hyunki; Kim, Min Young; Cho, Hyungsuck; Moon, Jeon Il

    2009-07-20

    Automatic optical inspection (AOI) for printed circuit board (PCB) assembly plays a very important role in modern electronics manufacturing industries. Well-developed inspection machines in each assembly process are required to ensure the manufacturing quality of electronics products. However, almost all AOI machines are based on 2D image-analysis technology. In this paper, a 3D-measurement-based AOI system is proposed, consisting of a phase shifting profilometer and a stereo vision system, for inspecting electronic components assembled on a PCB after component mounting and the reflow process. In this system, information from the two visual systems is fused to extend the shape measurement range, which is otherwise limited by the 2π phase ambiguity of the phase shifting profilometer, while maintaining the fine measurement resolution and high accuracy of the profilometer within the range extended by the stereo vision. The main purpose is to overcome the low inspection reliability of 2D-based inspection machines by using 3D information about the components. The 3D shape measurement results on PCB-mounted electronic components are shown and compared with results from contact and noncontact 3D measuring machines. Based on a series of experiments, the usefulness of the proposed sensor system and its fusion technique are discussed and analyzed in detail.
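    The fusion principle can be sketched as follows: the coarse but unambiguous height from stereo vision selects the integer fringe order, and the wrapped phase from phase measuring profilometry supplies the fine height within that fringe. The height-per-fringe value and the synthetic data below are illustrative assumptions, not the calibrated parameters of the proposed sensor.

```python
# Minimal sketch of resolving the 2*pi phase ambiguity with a coarse stereo height.
import numpy as np


def fuse_heights(wrapped_phase, stereo_height, height_per_fringe):
    """Resolve the ambiguity of wrapped_phase (radians) using stereo_height (same height units)."""
    frac = wrapped_phase / (2.0 * np.pi)                         # fractional fringe from profilometry
    order = np.round(stereo_height / height_per_fringe - frac)   # integer fringe order from stereo
    return (order + frac) * height_per_fringe


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    true_height = np.array([0.12, 0.48, 0.91, 1.37])          # mm, e.g. component heights
    period = 0.25                                             # mm of height per 2*pi of phase
    wrapped = 2 * np.pi * ((true_height / period) % 1.0)      # what the profilometer measures
    stereo = true_height + 0.04 * rng.standard_normal(4)      # coarse but unambiguous estimate
    print(np.round(fuse_heights(wrapped, stereo, period), 3)) # recovers ~true_height
```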

  19. Unipro UGENE: a unified bioinformatics toolkit.

    PubMed

    Okonechnikov, Konstantin; Golosova, Olga; Fursov, Mikhail

    2012-04-15

    Unipro UGENE is a multiplatform open-source software with the main goal of assisting molecular biologists without much expertise in bioinformatics to manage, analyze and visualize their data. UGENE integrates widely used bioinformatics tools within a common user interface. The toolkit supports multiple biological data formats and allows the retrieval of data from remote data sources. It provides visualization modules for biological objects such as annotated genome sequences, Next Generation Sequencing (NGS) assembly data, multiple sequence alignments, phylogenetic trees and 3D structures. Most of the integrated algorithms are tuned for maximum performance by the usage of multithreading and special processor instructions. UGENE includes a visual environment for creating reusable workflows that can be launched on local resources or in a High Performance Computing (HPC) environment. UGENE is written in C++ using the Qt framework. The built-in plugin system and structured UGENE API make it possible to extend the toolkit with new functionality. UGENE binaries are freely available for MS Windows, Linux and Mac OS X at http://ugene.unipro.ru/download.html. UGENE code is licensed under the GPLv2; the information about the code licensing and copyright of integrated tools can be found in the LICENSE.3rd_party file provided with the source bundle.

  20. CoryneRegNet 3.0--an interactive systems biology platform for the analysis of gene regulatory networks in corynebacteria and Escherichia coli.

    PubMed

    Baumbach, Jan; Wittkop, Tobias; Rademacher, Katrin; Rahmann, Sven; Brinkrolf, Karina; Tauch, Andreas

    2007-04-30

    CoryneRegNet is an ontology-based data warehouse for the reconstruction and visualization of transcriptional regulatory interactions in prokaryotes. To extend the biological content of CoryneRegNet, we added comprehensive data on transcriptional regulations in the model organism Escherichia coli K-12, originally deposited in the international reference database RegulonDB. The enhanced web interface of CoryneRegNet offers several types of search options. The results of a search are displayed in a table-based style and include a visualization of the genetic organization of the respective gene region. Information on DNA binding sites of transcriptional regulators is depicted by sequence logos. The results can also be displayed by several layouters implemented in the graphical user interface GraphVis, allowing, for instance, the visualization of genome-wide network reconstructions and the homology-based inter-species comparison of reconstructed gene regulatory networks. In an application example, we compare the composition of the gene regulatory networks involved in the SOS response of E. coli and Corynebacterium glutamicum. CoryneRegNet is available at the following URL: http://www.cebitec.uni-bielefeld.de/groups/gi/software/coryneregnet/.

  1. Performance degradation of grid-tied photovoltaic modules in a hot-dry climatic condition

    NASA Astrophysics Data System (ADS)

    Suleske, Adam; Singh, Jaspreet; Kuitche, Joseph; Tamizh-Mani, Govindasamy

    2011-09-01

    Crystalline silicon photovoltaic (PV) modules under open-circuit conditions typically degrade at a rate of about 0.5% per year. However, it is suspected that modules at the array level may degrade at higher rates, depending on equipment/frame grounding and array grounding, because of higher string voltage and increased module mismatch over the years of operation in the field. This paper compares and analyzes the degradation rates of grid-tied photovoltaic modules operating over 10-17 years in the desert climate of Arizona. The nameplate open-circuit voltages of the arrays ranged between 400 and 450 V. Six different types/models of crystalline silicon modules with glass/glass and glass/polymer constructions were evaluated. About 1865 modules were inspected using an extended visual inspection checklist and infrared (IR) scanning. The visual inspection checklist included encapsulant discoloration, cell/interconnect cracks, delamination and corrosion. Based on the visual inspection and IR studies, a large fraction of these modules were classified as apparently healthy or unhealthy, and these modules were electrically isolated from the system for current-voltage (I-V) measurements of individual modules. The annual degradation rate for each module type is determined from the I-V measurements.
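
    As a rough illustration of the last step, an annual degradation rate can be estimated by extracting the maximum power from each module's I-V curve, normalizing to the nameplate rating, and regressing against years of field exposure. The sketch below assumes a simple linear-loss model and uses invented example numbers; the paper's exact procedure may differ.

```python
import numpy as np

def annual_degradation_rate(years_fielded, pmax_measured, pmax_nameplate):
    """Estimate the annual degradation rate (%/year) of a module group from
    maximum-power values extracted from I-V measurements.
    years_fielded : years of field exposure for each measured module
    pmax_measured : measured maximum power of each module (W)
    pmax_nameplate: nameplate maximum power (W)
    (A simple linear-loss model is assumed for illustration.)"""
    # Normalize to the nameplate rating so the slope is a fractional loss per year.
    normalized = np.asarray(pmax_measured) / pmax_nameplate
    slope, intercept = np.polyfit(np.asarray(years_fielded), normalized, 1)
    return -slope * 100.0   # percent of nameplate power lost per year

# Hypothetical example: modules fielded 10-17 years, 75 W nameplate rating.
years = np.array([10, 12, 15, 17])
pmax = np.array([68.0, 66.5, 64.0, 62.5])   # W
print(round(annual_degradation_rate(years, pmax, 75.0), 2), "%/year")
```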

  2. Cue-recruitment for extrinsic signals after training with low information stimuli.

    PubMed

    Jain, Anshul; Fuller, Stuart; Backus, Benjamin T

    2014-01-01

    Cue-recruitment occurs when a previously ineffective signal comes to affect the perceptual appearance of a target object, in a manner similar to the trusted cues with which the signal was put into correlation during training. Jain, Fuller and Backus reported that extrinsic signals, those not carried by the target object itself, were not recruited even after extensive training. However, recent studies have shown that training using weakened trusted cues can facilitate recruitment of intrinsic signals. The current study was designed to examine whether extrinsic signals can be recruited by putting them in correlation with weakened trusted cues. Specifically, we tested whether an extrinsic visual signal, the rotary motion direction of an annulus of random dots, and an extrinsic auditory signal, direction of an auditory pitch glide, can be recruited as cues for the rotation direction of a Necker cube. We found learning, albeit weak, for visual but not for auditory signals. These results extend the generality of the cue-recruitment phenomenon to an extrinsic signal and provide further evidence that the visual system learns to use new signals most quickly when other, long-trusted cues are unavailable or unreliable.

  3. Cognitive Virtualization: Combining Cognitive Models and Virtual Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuan Q. Tran; David I. Gertman; Donald D. Dudenhoeffer

    2007-08-01

    3D manikins are often used in visualizations to model human activity in complex settings. Manikins assist in developing an understanding of human actions, movements and routines in a variety of different environments representing new conceptual designs. One such environment is a nuclear power plant control room, where manikins have the potential to be used for more precise ergonomic assessments of human work stations. Next generation control rooms will pose numerous challenges for system designers. The manikin modeling approach by itself, however, may be insufficient for dealing with the desired technical advancements and challenges of next generation automated systems. Uncertainty regarding effective staffing levels and the potential for negative human performance consequences in the presence of advanced automated systems (e.g., reduced vigilance, poor situation awareness, mistrust or blind faith in automation, higher information load and increased complexity) call for further research. Baseline assessments of novel control room equipment and configurations need to be conducted. These design uncertainties can be reduced through complementary analysis that merges ergonomic manikin models with models of higher cognitive functions, such as attention, memory, decision-making, and problem-solving. This paper will discuss recent advancements in merging a theory-driven cognitive modeling framework with a 3D visualization modeling tool to evaluate next generation control room human factors and ergonomics. Though this discussion primarily focuses on control room design, such a merger between 3D visualization and cognitive modeling can be extended to other areas such as training and scenario planning.

  4. A basis for a visual language for describing, archiving and analyzing functional models of complex biological systems

    PubMed Central

    Cook, Daniel L; Farley, Joel F; Tapscott, Stephen J

    2001-01-01

    Background: We propose that a computerized, internet-based graphical description language for systems biology will be essential for describing, archiving and analyzing complex problems of biological function in health and disease. Results: We outline here a conceptual basis for designing such a language and describe BioD, a prototype language that we have used to explore the utility and feasibility of this approach to functional biology. Using example models, we demonstrate that a rather limited lexicon of icons and arrows suffices to describe complex cell-biological systems as discrete models that can be posted and linked on the internet. Conclusions: Given available computer and internet technology, BioD may be implemented as an extensible, multidisciplinary language that can be used to archive functional systems knowledge and be extended to support both qualitative and quantitative functional analysis. PMID:11305940

  5. Dissociating Medial Temporal and Striatal Memory Systems With a Same/Different Matching Task: Evidence for Two Neural Systems in Human Recognition.

    PubMed

    Sinha, Neha; Glass, Arnold Lewis

    2017-01-01

    The medial temporal lobe and striatum have both been implicated as brain substrates of memory and learning. Here, we show a dissociation between these two memory systems using a same/different matching task, in which subjects judged whether four-letter strings were the same or different. RT for different responses was determined by the left-to-right position of the first letter that differed between the study and test strings, consistent with a left-to-right comparison of the study and test strings that terminates when a difference is found. Such a comparison process should make same responses slower than different responses. Nevertheless, same responses were faster than different responses. Same responses were associated with hippocampus activation. Different responses were associated with both caudate and hippocampus activation. These findings are consistent with the dual-system hypothesis of mammalian memory and extend the model to human visual recognition.

  6. The OMG Modelling Language (SYSML)

    NASA Astrophysics Data System (ADS)

    Hause, M.

    2007-08-01

    On July 6th 2006, the Object Management Group (OMG) announced the adoption of the OMG Systems Modeling Language (OMG SysML). The SysML specification was in response to the joint Request for Proposal issued by the OMG and INCOSE (the International Council on Systems Engineering) for a customized version of UML 2, designed to address the specific needs of system engineers. SysML is a visual modeling language that extends UML 2 in order to support the specification, analysis, design, verification and validation of complex systems. This paper will look at the background of SysML and summarize the SysML specification including the modifications to UML 2.0, along with the new requirement and parametric diagrams. It will also show how SysML artifacts can be used to specify the requirements for other solution spaces such as software and hardware to provide handover to other disciplines.

  7. Introducing the VISAGE project - Visualization for Integrated Satellite, Airborne, and Ground-based data Exploration

    NASA Astrophysics Data System (ADS)

    Gatlin, P. N.; Conover, H.; Berendes, T.; Maskey, M.; Naeger, A. R.; Wingo, S. M.

    2017-12-01

    A key component of NASA's Earth observation system is its field experiments, for intensive observation of particular weather phenomena, or for ground validation of satellite observations. These experiments collect data from a wide variety of airborne and ground-based instruments, on different spatial and temporal scales, often in unique formats. The field data are often used with high volume satellite observations that have very different spatial and temporal coverage. The challenges inherent in working with such diverse datasets make it difficult for scientists to rapidly collect and analyze the data for physical process studies and validation of satellite algorithms. The newly-funded VISAGE project will address these issues by combining and extending nascent efforts to provide on-line data fusion, exploration, analysis and delivery capabilities. A key building block is the Field Campaign Explorer (FCX), which allows users to examine data collected during field campaigns and simplifies data acquisition for event-based research. VISAGE will extend FCX's capabilities beyond interactive visualization and exploration of coincident datasets, to provide interrogation of data values and basic analyses such as ratios and differences between data fields. The project will also incorporate new, higher level fused and aggregated analysis products from the System for Integrating Multi-platform data to Build the Atmospheric column (SIMBA), which combines satellite and ground-based observations into a common gridded atmospheric column data product; and the Validation Network (VN), which compiles a nationwide database of coincident ground- and satellite-based radar measurements of precipitation for larger scale scientific analysis. The VISAGE proof-of-concept will target "golden cases" from Global Precipitation Measurement Ground Validation campaigns. This presentation will introduce the VISAGE project, initial accomplishments and near term plans.

  8. Influence of combined visual and vestibular cues on human perception and control of horizontal rotation

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1981-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation is modeled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A dual-input describing function analysis supports the complementary model; vestibular cues dominate sensation at higher frequencies. The describing function model is extended by the proposal of a nonlinear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
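
    A minimal way to picture the proposed parallel-channel model is as a complementary filter: a low-pass visual pathway and a high-pass vestibular pathway whose outputs sum to the perceived self-rotation velocity. The sketch below uses first-order filters and invented parameter values purely for illustration; it is not the fitted model from the paper.

```python
import numpy as np

def perceived_rotation(visual, vestibular, dt, tau=1.0):
    """Complementary-filter sketch of the parallel-channel model: the visual
    cue is low-pass filtered, the vestibular cue is high-pass filtered with
    the same time constant, and the two sum to the perceived velocity.
    (First-order filters and tau = 1 s are illustrative assumptions.)"""
    alpha = dt / (tau + dt)            # first-order filter coefficient
    lp_vis, lp_vest = 0.0, 0.0
    perceived = np.zeros(len(visual))
    for k in range(len(visual)):
        lp_vis += alpha * (visual[k] - lp_vis)        # low-passed visual cue
        lp_vest += alpha * (vestibular[k] - lp_vest)  # used to form the high-pass
        perceived[k] = lp_vis + (vestibular[k] - lp_vest)
    return perceived

# Example: constant 10 deg/s rotation; the vestibular cue washes out over time,
# the visual cue persists, and the fused estimate still tracks the true velocity.
dt, t = 0.01, np.arange(0, 20, 0.01)
visual = np.full_like(t, 10.0)
vestibular = 10.0 * np.exp(-t / 6.0)    # assumed canal washout dynamics
print(perceived_rotation(visual, vestibular, dt)[-1])   # close to 10 deg/s
```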

  9. SimCheck: An Expressive Type System for Simulink

    NASA Technical Reports Server (NTRS)

    Roy, Pritam; Shankar, Natarajan

    2010-01-01

    MATLAB Simulink is a member of a class of visual languages that are used for modeling and simulating physical and cyber-physical systems. A Simulink model consists of blocks with input and output ports connected using links that carry signals. We extend the type system of Simulink with annotations and dimensions/units associated with ports and links. These types can capture invariants on signals as well as relations between signals. We define a type-checker that checks the well-formedness of Simulink blocks with respect to these type annotations. The type checker generates proof obligations that are solved by SRI's Yices solver for satisfiability modulo theories (SMT). This translation can be used to detect type errors, demonstrate counterexamples, generate test cases, or prove the absence of type errors. Our work is an initial step toward the symbolic analysis of MATLAB Simulink models.
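
    SimCheck itself translates the annotations into SMT proof obligations discharged by Yices; as a much simpler illustration of the underlying idea, the sketch below checks that unit annotations agree across the links of a toy block diagram. All names and the data layout are invented for illustration.

```python
# Toy unit-consistency check on annotated ports and links, in the spirit of
# dimension/unit checking for a block-diagram language (illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Port:
    block: str
    name: str
    unit: str   # e.g. "m/s", annotated by the modeler

def check_links(links):
    """Each link connects an output port to an input port; flag links whose
    unit annotations disagree."""
    errors = []
    for src, dst in links:
        if src.unit != dst.unit:
            errors.append(f"{src.block}.{src.name} [{src.unit}] -> "
                          f"{dst.block}.{dst.name} [{dst.unit}]")
    return errors

# Example: an integrator output in metres feeding a gain that expects m/s.
links = [(Port("Integrator", "out", "m"), Port("Gain", "in", "m/s")),
         (Port("Sensor", "out", "deg"), Port("Display", "in", "deg"))]
for err in check_links(links):
    print("unit mismatch:", err)
```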

  10. A preliminary test of the application of the Lightning Detection and Ranging System (LDAR) as a thunderstorm warning and location device for the FAA including a correlation with updrafts, turbulence, and radar precipitation echoes

    NASA Technical Reports Server (NTRS)

    Poehler, H. A.

    1978-01-01

    Results of a test of the use of a Lightning Detection and Ranging (LDAR) remote display in the Patrick AFB RAPCON facility are presented. Agreement between LDAR data and the precipitation echoes of the RAPCON radar was observed, as well as agreement between LDAR and pilots' visual observations of lightning flashes. A more precise comparison is achieved by superimposing LDAR data on precipitation echoes from the KSC-based radars. Airborne measurements of updrafts and turbulence by an armored T-28 aircraft flying through the thunderclouds are correlated with LDAR along the flight path. Calibration and measurements of the accuracy of the LDAR system are discussed, and the extended range of the system is illustrated.

  11. Cheating prevention in visual cryptography.

    PubMed

    Hu, Chih-Ming; Tzeng, Wen-Guey

    2007-01-01

    Visual cryptography (VC) is a method of encrypting a secret image into shares such that stacking a sufficient number of shares reveals the secret image. Shares are usually presented as transparencies, and each participant holds one transparency. Most previous research on VC focuses on improving two parameters: pixel expansion and contrast. In this paper, we study the cheating problem in VC and extended VC. We consider attacks by malicious adversaries who may deviate from the scheme in any way. We present three cheating methods and apply them to attack existing VC and extended VC schemes. We also improve one cheat-preventing scheme and propose a generic method that converts a VCS into another VCS that has the cheating-prevention property. The overhead of the conversion is near optimal in both contrast degradation and pixel expansion.
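
    For readers unfamiliar with the basic construction that the cheating analysis builds on, the sketch below generates the shares of a standard (2,2) visual cryptography scheme with 2x pixel expansion. It is shown only as background and is not one of the paper's cheating or cheat-prevention constructions.

```python
import numpy as np

def make_shares(secret, rng=np.random.default_rng(0)):
    """Basic (2,2) visual cryptography with horizontal 2x pixel expansion:
    each secret pixel becomes a [black, white] subpixel pair in each share.
    Stacking (OR of black subpixels) reveals the secret: white pixels stay
    half black, black pixels become fully black. (Textbook scheme, shown
    only as background.)"""
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=np.uint8)
    s2 = np.zeros((h, 2 * w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            pattern = rng.permutation([0, 1])   # one black, one white subpixel
            s1[i, 2 * j:2 * j + 2] = pattern
            # white secret pixel: identical patterns; black pixel: complementary
            s2[i, 2 * j:2 * j + 2] = pattern if secret[i, j] == 0 else 1 - pattern
    return s1, s2

secret = np.array([[0, 1], [1, 0]], dtype=np.uint8)   # 1 = black pixel
sh1, sh2 = make_shares(secret)
stacked = np.maximum(sh1, sh2)   # stacking transparencies = OR of black subpixels
print(stacked)
```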

  12. Interference within the focus of attention: working memory tasks reflect more than temporary maintenance.

    PubMed

    Shipstead, Zach; Engle, Randall W

    2013-01-01

    One approach to understanding working memory (WM) holds that individual differences in WM capacity arise from the amount of information a person can store in WM over short periods of time. This view is especially prevalent in WM research conducted with the visual arrays task. Within this tradition, many researchers have concluded that the average person can maintain approximately 4 items in WM. The present study challenges this interpretation by demonstrating that performance on the visual arrays task is subject to time-related factors that are associated with retrieval from long-term memory. Experiment 1 demonstrates that memory for an array does not decay as a product of absolute time, which is consistent with both maintenance- and retrieval-based explanations of visual arrays performance. Experiment 2 introduced a manipulation of temporal discriminability by varying the relative spacing of trials in time. We found that memory for a target array was significantly influenced by its temporal compression with, or isolation from, a preceding trial. Subsequent experiments extend these effects to sub-capacity set sizes and demonstrate that changes in the size of k are meaningful to prediction of performance on other measures of WM capacity as well as general fluid intelligence. We conclude that performance on the visual arrays task does not reflect a multi-item storage system but instead measures a person's ability to accurately retrieve information in the face of proactive interference.

  13. Visual Detection Under Uncertainty Operates Via an Early Static, Not Late Dynamic, Non-Linearity

    PubMed Central

    Neri, Peter

    2010-01-01

    Signals in the environment are rarely specified exactly: our visual system may know what to look for (e.g., a specific face), but not its exact configuration (e.g., where in the room, or in what orientation). Uncertainty, and the ability to deal with it, is a fundamental aspect of visual processing. The MAX model is the current gold standard for describing how human vision handles uncertainty: of all possible configurations for the signal, the observer chooses the one corresponding to the template associated with the largest response. We propose an alternative model in which the MAX operation, which is a dynamic non-linearity (depends on multiple inputs from several stimulus locations) and happens after the input stimulus has been matched to the possible templates, is replaced by an early static non-linearity (depends only on one input corresponding to one stimulus location) which is applied before template matching. By exploiting an integrated set of analytical and experimental tools, we show that this model is able to account for a number of empirical observations otherwise unaccounted for by the MAX model, and is more robust with respect to the realistic limitations imposed by the available neural hardware. We then discuss how these results, currently restricted to a simple visual detection task, may extend to a wider range of problems in sensory processing. PMID:21212835
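
    The contrast between the two accounts can be illustrated with a toy simulation of detection under location uncertainty: the MAX rule takes the largest template response across locations, while the alternative applies an accelerating static nonlinearity to each location before pooling. The sketch below assumes Gaussian noise and a cubic nonlinearity purely for illustration; it is not the authors' analytical framework.

```python
import numpy as np

rng = np.random.default_rng(1)
n_locations, n_trials, signal = 8, 20000, 1.0

def simulate(rule):
    """Two-interval detection with the signal at one unknown location.
    'max': MAX rule over template responses. 'static': an early accelerating
    nonlinearity (|x|**3, an assumed form) applied per location, then summed."""
    noise_only = rng.normal(size=(n_trials, n_locations))
    sig_int = rng.normal(size=(n_trials, n_locations))
    sig_int[:, 0] += signal                       # signal at one location
    if rule == "max":
        d_noise, d_sig = noise_only.max(axis=1), sig_int.max(axis=1)
    else:
        d_noise = (np.abs(noise_only) ** 3).sum(axis=1)
        d_sig = (np.abs(sig_int) ** 3).sum(axis=1)
    return np.mean(d_sig > d_noise)               # proportion correct (ties ignored)

print("MAX rule            :", round(simulate("max"), 3))
print("early static nonlin.:", round(simulate("static"), 3))
```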

  14. Divergent receiver responses to components of multimodal signals in two foot-flagging frog species.

    PubMed

    Preininger, Doris; Boeckle, Markus; Sztatecsny, Marc; Hödl, Walter

    2013-01-01

    Multimodal communication of acoustic and visual signals serves a vital role in the mating system of anuran amphibians. To understand signal evolution and function in multimodal signal design it is critical to test receiver responses to unimodal signal components versus multimodal composite signals. We investigated two anuran species displaying a conspicuous foot-flagging behavior in addition to or in combination with advertisement calls while announcing their signaling sites to conspecifics. To investigate the conspicuousness of the foot-flagging signals, we measured and compared spectral reflectance of foot webbings of Micrixalus saxicola and Staurois parvus using a spectrophotometer. We performed behavioral field experiments using a model frog including an extendable leg combined with acoustic playbacks to test receiver responses to acoustic, visual and combined audio-visual stimuli. Our results indicated that the foot webbings of S. parvus achieved a 13 times higher contrast against their visual background than feet of M. saxicola. The main response to all experimental stimuli in S. parvus was foot flagging, whereas M. saxicola responded primarily with calls but never foot flagged. Together these across-species differences suggest that in S. parvus foot-flagging behavior is applied as a salient and frequently used communicative signal during agonistic behavior, whereas we propose it constitutes an evolutionary nascent state in ritualization of the current fighting behavior in M. saxicola.

  15. Visual tasks and postural sway in children with and without autism spectrum disorders.

    PubMed

    Chang, Chih-Hui; Wade, Michael G; Stoffregen, Thomas A; Hsu, Chin-Yu; Pan, Chien-Yu

    2010-01-01

    We investigated the influences of two different suprapostural visual tasks, visual searching and visual inspection, on the postural sway of children with and without autism spectrum disorder (ASD). Sixteen ASD children (age=8.75±1.34 years; height=130.34±11.03 cm) were recruited from a local support group. Individuals with an intellectual disability as a co-occurring condition and those with severe behavior problems that required formal intervention were excluded. Twenty-two sex- and age-matched typically developing (TD) children (age=8.93±1.39 years; height=133.47±8.21 cm) were recruited from a local public elementary school. Postural sway was recorded using a magnetic tracking system (Flock of Birds, Ascension Technologies, Inc., Burlington, VT). Results indicated that the ASD children exhibited greater sway than the TD children. Despite this difference, both TD and ASD children showed reduced sway during the search task, relative to sway during the inspection task. These findings replicate those of Stoffregen et al. (2000), Stoffregen, Giveans, et al. (2009), Stoffregen, Villard, et al. (2009) and Prado et al. (2007) and extend them to TD children as well as ASD children. Both TD and ASD children were able to functionally modulate postural sway to facilitate the performance of a task that required higher perceptual effort. Copyright © 2010 Elsevier Ltd. All rights reserved.

  16. AppEEARS: A Simple Tool that Eases Complex Data Integration and Visualization Challenges for Users

    NASA Astrophysics Data System (ADS)

    Maiersperger, T.

    2017-12-01

    The Application for Extracting and Exploring Analysis-Ready Samples (AppEEARS) offers a simple and efficient way to perform discovery, processing, visualization, and acquisition across large quantities and varieties of Earth science data. AppEEARS brings significant value to a very broad array of user communities by 1) significantly reducing data volumes, at-archive, based on user-defined space-time-variable subsets, 2) promoting interoperability across a wide variety of datasets via format and coordinate reference system harmonization, 3) increasing the velocity of both data analysis and insight by providing analysis-ready data packages and by allowing interactive visual exploration of those packages, and 4) ensuring veracity by making data quality measures more apparent and usable and by providing standards-based metadata and processing provenance. Development and operation of AppEEARS is led by the National Aeronautics and Space Administration (NASA) Land Processes Distributed Active Archive Center (LP DAAC). The LP DAAC also partners with several other archives to extend the capability across a larger federation of geospatial data providers. Over one hundred datasets are currently available, covering a diversity of variables including land cover, population, elevation, vegetation indices, and land surface temperature. Many hundreds of users have already used this new web-based capability to make the complex tasks of data integration and visualization much simpler and more efficient.

  17. TU-FG-BRB-12: Real-Time Visualization of Discrete Spot Scanning Proton Therapy Beam for Quality Assurance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuzaki, Y; Jenkins, C; Yang, Y

    Purpose: With the growing adoption of proton beam therapy there is an increasing need for effective and user-friendly tools for performing quality assurance (QA) measurements. The speed and versatility of spot-scanning proton beam (PB) therapy systems present unique challenges for traditional QA tools. To address these challenges a proof-of-concept system was developed to visualize, in real-time, the delivery of individual spots from a spot-scanning PB in order to perform QA measurements. Methods: The PB is directed toward a custom phantom with planar faces coated with a radioluminescent phosphor (Gd2O2S:Tb). As the proton beam passes through the phantom, visible light is emitted from the coating and collected by a nearby CMOS camera. The images are processed to determine the locations at which the beam impinges on each face of the phantom. By so doing, the location of each beam can be determined relative to the phantom. The cameras are also used to capture images of the laser alignment system. The phantom contains x-ray fiducials so that it can be easily located with kV imagers. Using these data, several quality assurance parameters can be evaluated. Results: The proof-of-concept system was able to visualize discrete PB spots with energies ranging from 70 MeV to 220 MeV. Images were obtained with integration times ranging from 20 to 0.019 milliseconds. If not limited by data transmission, this would correspond to a frame rate of 52,000 fps. Such frame rates enabled visualization of individual spots in real time. Spot locations were found to be highly correlated (R² = 0.99) with the nozzle-mounted spot position monitor, indicating excellent spot positioning accuracy. Conclusion: The system was shown to be capable of imaging individual spots for all clinical beam energies. Future development will focus on extending the image processing software to provide automated results for a variety of QA tests.

  18. An Experimental Analysis of Memory Processing

    PubMed Central

    Wright, Anthony A

    2007-01-01

    Rhesus monkeys were trained and tested in visual and auditory list-memory tasks with sequences of four travel pictures or four natural/environmental sounds followed by single test items. Acquisitions of the visual list-memory task are presented. Visual recency (last item) memory diminished with retention delay, and primacy (first item) memory strengthened. Capuchin monkeys, pigeons, and humans showed similar visual-memory changes. Rhesus learned an auditory memory task and showed octave generalization for some lists of notes—tonal, but not atonal, musical passages. In contrast with visual list memory, auditory primacy memory diminished with delay and auditory recency memory strengthened. Manipulations of interitem intervals, list length, and item presentation frequency revealed proactive and retroactive inhibition among items of individual auditory lists. Repeating visual items from prior lists produced interference (on nonmatching tests) revealing how far back memory extended. The possibility of using the interference function to separate familiarity vs. recollective memory processing is discussed. PMID:18047230

  19. Declarative language design for interactive visualization.

    PubMed

    Heer, Jeffrey; Bostock, Michael

    2010-01-01

    We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.

  20. Simulations and Visualizations of Hurricane Sandy (2012) as Revealed by the NASA CAMVis

    NASA Technical Reports Server (NTRS)

    Shen, Bo-Wen

    2013-01-01

    Storm Sandy first appeared as a tropical storm in the southern Caribbean Sea on Oct. 22, 2012, moved northeastward, turned northwestward, and made landfall near Brigantine, New Jersey in late October. Sandy devastated surrounding areas, caused an estimated $50 billion in damage, and became the second costliest tropical cyclone (TC) in U.S. history, surpassed only by Hurricane Katrina (2005). To save lives and mitigate economic damage, a central question to be addressed is to what extent the lead time of severe storm predictions such as those for Sandy can be extended (e.g., Emanuel 2012; Kerr 2012). In this study, we present 10 numerical experiments initialized at 0000 and 1200 UTC Oct. 22-26, 2012, with the NASA coupled advanced global modeling and visualization systems (CAMVis). All of the predictions realistically capture Sandy's movement with the northwestward turn prior to its landfall. However, three experiments (initialized at 0000 UTC Oct. 22 and 24 and 1200 UTC Oct. 22) produce larger errors. Among the 10 experiments, the control run initialized at 0000 UTC Oct. 23 produces a remarkable 7-day forecast. To illustrate the impact of environmental flows on the predictability of Sandy, we produce and discuss four-dimensional (4-D) visualizations of the control run. The 4-D visualizations clearly demonstrate the following multiscale processes that led to the sinuous track of Sandy: the initial steering impact of an upper-level trough (appearing over the northwestern Caribbean Sea and Gulf of Mexico), the blocking impact of systems to the northeast of Sandy, and the binary interaction with a mid-latitude, upper-level trough that appeared at 130 degrees west longitude on Oct. 23, moved to the East Coast and intensified during the period of Oct. 29-30 prior to Sandy's landfall.

  1. Auditory cross-modal reorganization in cochlear implant users indicates audio-visual integration.

    PubMed

    Stropahl, Maren; Debener, Stefan

    2017-01-01

    There is clear evidence for cross-modal cortical reorganization in the auditory system of post-lingually deafened cochlear implant (CI) users. A recent report suggests that moderate sensori-neural hearing loss is already sufficient to initiate corresponding cortical changes. To what extent these changes are deprivation-induced or related to sensory recovery is still debated. Moreover, the influence of cross-modal reorganization on CI benefit is also still unclear. While reorganization during deafness may impede speech recovery, reorganization also has beneficial influences on face recognition and lip-reading. As CI users were observed to show differences in multisensory integration, the question arises whether cross-modal reorganization is related to audio-visual integration skills. The current electroencephalography study investigated cortical reorganization in experienced post-lingually deafened CI users (n = 18), untreated mild to moderately hearing impaired individuals (n = 18) and normal hearing controls (n = 17). Cross-modal activation of the auditory cortex, assessed by means of EEG source localization in response to human faces, and audio-visual integration, quantified with the McGurk illusion, were measured. CI users revealed stronger cross-modal activations compared to age-matched normal hearing individuals. Furthermore, CI users showed a relationship between cross-modal activation and audio-visual integration strength. This may further support a beneficial relationship between cross-modal activation and daily-life communication skills that may not be fully captured by laboratory-based speech perception tests. Interestingly, hearing impaired individuals showed behavioral and neurophysiological results that were numerically between those of the other two groups, and they showed a moderate relationship between cross-modal activation and the degree of hearing loss. This further supports the notion that auditory deprivation evokes a reorganization of the auditory system even at early stages of hearing loss.

  2. Focal and Ambient Processing of Built Environments: Intellectual and Atmospheric Experiences of Architecture

    PubMed Central

    Rooney, Kevin K.; Condia, Robert J.; Loschky, Lester C.

    2017-01-01

    Neuroscience has well established that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one’s fist at arm’s length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams, respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision, and ambient processing of peripheral vision. Thus, our visual processing of and attention to objects and scenes depends on how and where these stimuli fall on the retina. The built environment is no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create different types of experiences we have with built environments. We argue that these foundational mechanisms of the eye and the visual stream are limiting parameters of architectural experience. We hypothesize that people experience architecture in two basic ways based on these visual limitations; by intellectually assessing architecture consciously through focal object processing and assessing architecture in terms of atmosphere through pre-conscious ambient spatial processing. Furthermore, these separate ways of processing architectural stimuli operate in parallel throughout the visual perceptual system. Thus, a more comprehensive understanding of architecture must take into account that built environments are stimuli that are treated differently by focal and ambient vision, which enable intellectual analysis of architectural experience versus the experience of architectural atmosphere, respectively. We offer this theoretical model to help advance a more precise understanding of the experience of architecture, which can be tested through future experimentation. (298 words) PMID:28360867

  3. Focal and Ambient Processing of Built Environments: Intellectual and Atmospheric Experiences of Architecture.

    PubMed

    Rooney, Kevin K; Condia, Robert J; Loschky, Lester C

    2017-01-01

    Neuroscience has well established that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one's fist at arm's length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams, respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision, and ambient processing of peripheral vision. Thus, our visual processing of and attention to objects and scenes depends on how and where these stimuli fall on the retina. The built environment is no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create different types of experiences we have with built environments. We argue that these foundational mechanisms of the eye and the visual stream are limiting parameters of architectural experience. We hypothesize that people experience architecture in two basic ways based on these visual limitations; by intellectually assessing architecture consciously through focal object processing and assessing architecture in terms of atmosphere through pre-conscious ambient spatial processing. Furthermore, these separate ways of processing architectural stimuli operate in parallel throughout the visual perceptual system. Thus, a more comprehensive understanding of architecture must take into account that built environments are stimuli that are treated differently by focal and ambient vision, which enable intellectual analysis of architectural experience versus the experience of architectural atmosphere, respectively. We offer this theoretical model to help advance a more precise understanding of the experience of architecture, which can be tested through future experimentation. (298 words).

  4. Interpersonal motor resonance in autism spectrum disorder: evidence against a global “mirror system” deficit

    PubMed Central

    Enticott, Peter G.; Kennedy, Hayley A.; Rinehart, Nicole J.; Bradshaw, John L.; Tonge, Bruce J.; Daskalakis, Zafiris J.; Fitzgerald, Paul B.

    2013-01-01

    The mirror neuron hypothesis of autism is highly controversial, in part because there are conflicting reports as to whether putative indices of mirror system activity are actually deficient in autism spectrum disorder (ASD). Recent evidence suggests that a typical putative mirror system response may be seen in people with an ASD when there is a degree of social relevance to the visual stimuli used to elicit that response. Individuals with ASD (n = 32) and matched neurotypical controls (n = 32) completed a transcranial magnetic stimulation (TMS) experiment in which the left primary motor cortex (M1) was stimulated during the observation of static hands, individual (i.e., one person) hand actions, and interactive (i.e., two person) hand actions. Motor-evoked potentials (MEP) were recorded from the contralateral first dorsal interosseous, and used to generate an index of interpersonal motor resonance (IMR; a putative measure of mirror system activity) during action observation. There was no difference between ASD and NT groups in the level of IMR during the observation of these actions. These findings provide evidence against a global mirror system deficit in ASD, and this evidence appears to extend beyond stimuli that have social relevance. Attentional and visual processing influences may be important for understanding the apparent role of IMR in the pathophysiology of ASD. PMID:23734121

  5. Case studies in machine vision integration

    NASA Astrophysics Data System (ADS)

    Ahlers, Rolf-Juergen

    1991-09-01

    Many countries in the world, e.g. Germany and Japan, depend on high export rates. It is therefore necessary for them to strive for a high degree of quality in the products and processes exported. The example of Japan clearly shows that a competitor is not to be feared merely for offering cheaper products; such products become a 'source of danger' when they also achieve a high degree of quality. Thus, survival in the market depends on the ability to recognize the implications of technical and economic developments, to draw the perhaps unpopular conclusions for production, and to make the right decisions. This particularly applies to measurement and inspection equipment for quality control. Here, besides electro-optical sensors in general, image processing systems play an important role because they can emulate the conventional form of visual inspection by a human operator, i.e., the methods used in industry when dealing with quality inspection and control. In combination with precision indexing tables and industrial robots, image processing systems can be extended to new fields of application. The great awareness of the potential applications of vision and image processing systems has led to a variety of realized applications, some of which are described below under three topics: • electro-optical measurement systems, • automation of visual inspection tasks, and • robot guidance.

  6. A sensitive colorimetric assay system for nucleic acid detection based on isothermal signal amplification technology.

    PubMed

    Hu, Bo; Guo, Jing; Xu, Ying; Wei, Hua; Zhao, Guojie; Guan, Yifu

    2017-08-01

    Rapid and accurate detection of microRNAs in biological systems is of great importance. Here, we report the development of a visual colorimetric assay that combines the high amplification capability and high selectivity of the rolling circle amplification (RCA) method with the simplicity and convenience of gold nanoparticles used as a signal indicator. The designed padlock probe recognizes the target miRNA and is circularized, and then acts as the template to extend the target miRNA into a long single-stranded nucleotide chain of many tandem sequence repeats. Next, the RCA product is hybridized with oligonucleotides tagged onto gold nanoparticles. This interaction leads to the aggregation of gold nanoparticles, and the color of the system changes from wine red to dark blue according to the abundance of miRNA. A linear correlation between response and target oligonucleotide content was obtained in the range 0.3-300 pM, along with a detection limit of 0.13 pM (n = 7) and an RSD of 3.9% (30 pM, n = 9). The present approach provides a simple, rapid, and accurate visual colorimetric assay that allows sensitive biodetection and bioanalysis of DNA and RNA nucleotides of interest in biologically important samples. Graphical abstract: The colorimetric assay system for analyzing target oligonucleotides.
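
    As an illustration of how a calibration line and a detection limit of this kind are commonly derived, the sketch below fits a linear response-versus-concentration model and applies a 3-sigma-of-blank criterion. The data and the exact criterion are assumptions; the abstract does not state the authors' procedure.

```python
import numpy as np

def linear_calibration_and_lod(conc_pM, response, blank_replicates):
    """Fit a linear calibration (response vs. concentration) and estimate a
    detection limit as 3 x (s.d. of blank) / slope, a common convention.
    (Illustrative only; not the paper's stated method.)"""
    slope, intercept = np.polyfit(conc_pM, response, 1)
    lod = 3.0 * np.std(blank_replicates, ddof=1) / slope
    return slope, intercept, lod

# Synthetic example spanning the reported 0.3-300 pM linear range.
conc = np.array([0.3, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
resp = 0.004 * conc + 0.02 + np.random.default_rng(2).normal(0, 5e-4, conc.size)
blanks = np.random.default_rng(3).normal(0.02, 2e-4, 7)
slope, intercept, lod = linear_calibration_and_lod(conc, resp, blanks)
print(f"slope = {slope:.4f}, LOD ~ {lod:.2f} pM")
```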

  7. Nebula: reconstruction and visualization of scattering data in reciprocal space.

    PubMed

    Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H

    2015-04-01

    Two-dimensional solid-state X-ray detectors can now operate at considerable data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space on second-to-minute timescales. For such experiments, simultaneous analysis and visualization allows for remeasurements and a more dynamic measurement strategy. A new software package, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware.
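
    The core geometric step in this kind of reconstruction is mapping each detector pixel to a scattering vector Q via the Ewald construction. The sketch below shows that mapping for a flat detector normal to the beam, ignoring sample orientation and goniometer angles; it is illustrative only and is not Nebula's code or API.

```python
import numpy as np

def pixels_to_q(px, py, det_dist, pixel_size, wavelength):
    """Map flat-detector pixel offsets (from the direct-beam position) to
    scattering vectors Q = k_out - k_in, with |k| = 2*pi / wavelength.
    Geometry: beam along +z, detector normal to the beam at distance det_dist;
    pixel_size and det_dist must use the same length unit, and Q comes out in
    inverse units of the wavelength. (Illustrative geometry only; sample and
    goniometer angles are omitted.)"""
    k = 2.0 * np.pi / wavelength
    x, y = px * pixel_size, py * pixel_size
    norm = np.sqrt(x**2 + y**2 + det_dist**2)
    k_out = k * np.stack([x / norm, y / norm, det_dist / norm], axis=-1)
    k_in = np.array([0.0, 0.0, k])
    return k_out - k_in

# Example: a pixel 200 px off-axis, 100 mm detector distance, 75 um pixels, 1 A X-rays.
print(pixels_to_q(np.array([200.0]), np.array([0.0]), 100.0, 0.075, 1.0))
```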

  8. Nebula: reconstruction and visualization of scattering data in reciprocal space

    PubMed Central

    Reiten, Andreas; Chernyshov, Dmitry; Mathiesen, Ragnvald H.

    2015-01-01

    Two-dimensional solid-state X-ray detectors can now operate at considerable data throughput rates that allow full three-dimensional sampling of scattering data from extended volumes of reciprocal space on second-to-minute timescales. For such experiments, simultaneous analysis and visualization allows for remeasurements and a more dynamic measurement strategy. A new software package, Nebula, is presented. It efficiently reconstructs X-ray scattering data, generates three-dimensional reciprocal space data sets that can be visualized interactively, and aims to enable real-time processing in high-throughput measurements by employing parallel computing on commodity hardware. PMID:25844083

  9. How mutation alters the evolutionary dynamics of cooperation on networks

    NASA Astrophysics Data System (ADS)

    Ichinose, Genki; Satotani, Yoshiki; Sayama, Hiroki

    2018-05-01

    Cooperation is ubiquitous at every level of living organisms. It is known that spatial (network) structure is a viable mechanism for cooperation to evolve. A recently proposed numerical metric, average gradient of selection (AGoS), a useful tool for interpreting and visualizing evolutionary dynamics on networks, allows simulation results to be visualized on a one-dimensional phase space. However, stochastic mutation of strategies was not considered in the analysis of AGoS. Here we extend AGoS so that it can analyze the evolution of cooperation where mutation may alter strategies of individuals on networks. We show that our extended AGoS correctly visualizes the final states of cooperation with mutation in the individual-based simulations. Our analyses revealed that mutation always has a negative effect on the evolution of cooperation regardless of the payoff functions, fraction of cooperators, and network structures. Moreover, we found that scale-free networks are the most vulnerable to mutation and thus the dynamics of cooperation are altered from bistability to coexistence on those networks, undergoing an imperfect pitchfork bifurcation.
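
    The qualitative effect of mutation can be reproduced with a much simpler stand-in than the paper's model: a prisoner's dilemma on a ring network with imitation updates and a per-update mutation probability. The sketch below is such a stand-in (payoffs, update rule and parameter values are all assumptions) and does not compute AGoS itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def final_cooperation(mu, n=200, k=4, b=1.3, rounds=500, beta=5.0):
    """Weak prisoner's dilemma on a ring (each node linked to its k nearest
    neighbours), Fermi imitation updates, and strategy mutation with
    probability mu per update. Returns the final fraction of cooperators.
    (A minimal illustrative stand-in for a networked cooperation model.)"""
    strat = rng.integers(0, 2, n)   # 1 = cooperate, 0 = defect
    nbrs = [[(i + d) % n for d in list(range(-k // 2, 0)) + list(range(1, k // 2 + 1))]
            for i in range(n)]

    def payoff(i):
        # Payoff against each neighbour: CC -> 1, D vs C -> b, otherwise 0.
        return sum(1.0 if strat[i] and strat[j]
                   else (b if strat[j] and not strat[i] else 0.0)
                   for j in nbrs[i])

    for _ in range(rounds * n):
        i = rng.integers(n)
        j = nbrs[i][rng.integers(len(nbrs[i]))]
        # Fermi rule: imitate neighbour j with probability rising in payoff difference.
        if rng.random() < 1.0 / (1.0 + np.exp(-beta * (payoff(j) - payoff(i)))):
            strat[i] = strat[j]
        if rng.random() < mu:       # mutation step
            strat[i] = rng.integers(0, 2)
    return strat.mean()

for mu in (0.0, 0.01, 0.1):
    print(f"mu={mu:>5}: final cooperation ~ {final_cooperation(mu):.2f}")
```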

  10. Extending the Cortical Grasping Network: Pre-supplementary Motor Neuron Activity During Vision and Grasping of Objects.

    PubMed

    Lanzilotto, Marco; Livi, Alessandro; Maranesi, Monica; Gerbella, Marzio; Barz, Falk; Ruther, Patrick; Fogassi, Leonardo; Rizzolatti, Giacomo; Bonini, Luca

    2016-12-01

    Grasping relies on a network of parieto-frontal areas lying on the dorsolateral and dorsomedial parts of the hemispheres. However, the initiation and sequencing of voluntary actions also requires the contribution of mesial premotor regions, particularly the pre-supplementary motor area F6. We recorded 233 F6 neurons from 2 monkeys with chronic linear multishank neural probes during reaching-grasping visuomotor tasks. We showed that F6 neurons play a role in the control of forelimb movements and some of them (26%) exhibit visual and/or motor specificity for the target object. Interestingly, area F6 neurons form 2 functionally distinct populations, showing either visually-triggered or movement-related bursts of activity, in contrast to the sustained visual-to-motor activity displayed by ventral premotor area F5 neurons recorded in the same animals and with the same task during previous studies. These findings suggest that F6 plays a role in object grasping and extend existing models of the cortical grasping network. © The Author 2016. Published by Oxford University Press.

  11. Task alters category representations in prefrontal but not high-level visual cortex.

    PubMed

    Bugatus, Lior; Weiner, Kevin S; Grill-Spector, Kalanit

    2017-07-15

    A central question in neuroscience is how cognitive tasks affect category representations across the human brain. Regions in lateral occipito-temporal cortex (LOTC), ventral temporal cortex (VTC), and ventro-lateral prefrontal cortex (VLFPC) constitute the extended "what" pathway, which is considered instrumental for visual category processing. However, it is unknown (1) whether distributed responses across LOTC, VTC, and VLPFC explicitly represent category, task, or some combination of both, and (2) in what way representations across these subdivisions of the extended 'what' pathway may differ. To fill these gaps in knowledge, we scanned 12 participants using fMRI to test the effect of category and task on distributed responses across LOTC, VTC, and VLPFC. Results reveal that task and category modulate responses in both high-level visual regions, as well as prefrontal cortex. However, we found fundamentally different types of representations across the brain. Distributed responses in high-level visual regions are more strongly driven by category than task, and exhibit task-independent category representations. In contrast, distributed responses in prefrontal cortex are more strongly driven by task than category, and contain task-dependent category representations. Together, these findings of differential representations across the brain support a new idea that LOTC and VTC maintain stable category representations allowing efficient processing of visual information, while prefrontal cortex contains flexible representations in which category information may emerge only when relevant to the task. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Visualizing Viral Infection In Vivo by Multi-Photon Intravital Microscopy.

    PubMed

    Sewald, Xaver

    2018-06-20

    Viral pathogens have adapted to the host organism to exploit the cellular machinery for virus replication and to modulate the host cells for efficient systemic dissemination and immune evasion. Much of our knowledge of the effects that virus infections have on cells originates from in vitro imaging studies using experimental culture systems consisting of cell lines and primary cells. Recently, intravital microscopy using multi-photon excitation of fluorophores has been applied to observe virus dissemination and pathogenesis in real-time under physiological conditions in living organisms. Critical steps during viral infection and pathogenesis could be studied by direct visualization of fluorescent virus particles, virus-infected cells, and the immune response to viral infection. In this review, I summarize the latest research on in vivo studies of viral infections using multi-photon intravital microscopy (MP-IVM). Initially, the underlying principle of multi-photon microscopy is introduced and experimental challenges during microsurgical animal preparation and fluorescent labeling strategies for intravital imaging are discussed. I will further highlight recent studies that combine MP-IVM with optogenetic tools and transcriptional analysis as a powerful approach to extend the significance of in vivo imaging studies of viral pathogens.

  13. Virtual surgery in a (tele-)radiology framework.

    PubMed

    Glombitza, G; Evers, H; Hassfeld, S; Engelmann, U; Meinzer, H P

    1999-09-01

    This paper presents telemedicine as an extension of a teleradiology framework through tools for virtual surgery. To classify the described methods and applications, the research field of virtual reality (VR) is broadly reviewed. Differences with respect to technical equipment, methodological requirements and areas of application are pointed out. Desktop VR, augmented reality, and virtual reality are differentiated and discussed in some typical contexts of diagnostic support, surgical planning, therapeutic procedures, simulation and training. Visualization techniques are compared as a prerequisite for virtual reality and assigned to distinct levels of immersion. The advantage of a hybrid visualization kernel is emphasized with respect to the desktop VR applications that are subsequently shown. Moreover, software design aspects are considered by outlining functional openness in the architecture of the host system. Here, a teleradiology workstation was extended by dedicated tools for surgical planning through a plug-in mechanism. Examples of recent areas of application are introduced such as liver tumor resection planning, diagnostic support in heart surgery, and craniofacial surgery planning. In the future, surgical planning systems will become more important. They will benefit from improvements in image acquisition and communication, new image processing approaches, and techniques for data presentation. This will facilitate preoperative planning and intraoperative applications.

  14. Introducing visual participatory methods to develop local knowledge on HIV in rural South Africa.

    PubMed

    Brooks, Chloe; D'Ambruoso, Lucia; Kazimierczak, Karolina; Ngobeni, Sizzy; Twine, Rhian; Tollman, Stephen; Kahn, Kathleen; Byass, Peter

    2017-01-01

    South Africa is a country faced with complex health and social inequalities, in which HIV/AIDS has had devastating impacts. The study aimed to gain insights into the perspectives of rural communities on HIV-related mortality. A participatory action research (PAR) process, inclusive of a visual participatory method (Photovoice), was initiated to elicit and organise local knowledge and to identify priorities for action in a rural subdistrict underpinned by the Agincourt Health and Socio-Demographic Surveillance System (HDSS). We convened three village-based discussion groups, presented HDSS data on HIV-related mortality, elicited subjective perspectives on HIV/AIDS, systematised these into collective accounts and identified priorities for action. Framework analysis was performed on narrative and visual data, and practice theory was used to interpret the findings. A range of social and health systems factors were identified as causes and contributors of HIV mortality. These included alcohol use/abuse, gender inequalities, stigma around disclosure of HIV status, problems with informal care, poor sanitation, harmful traditional practices, delays in treatment, problems with medications and problematic staff-patient relationships. To address these issues, developing youth facilities in communities, improving employment opportunities, timely treatment and extending community outreach for health education and health promotion were identified. Addressing social practices of blame, stigma and mistrust around HIV-related mortality may be a useful focus for policy and planning. Research that engages communities and authorities to coproduce evidence can capture these practices, improve communication and build trust. Actions to reduce HIV should go beyond individual agency and structural forces to focus on how social practices embody these elements. Initiating PAR inclusive of visual methods can build shared understandings of disease burdens in social and health systems contexts. This can develop shared accountability and improve staff-patient relationships, which, over time, may address the issues identified, here related to stigma and blame.

  15. Introducing visual participatory methods to develop local knowledge on HIV in rural South Africa

    PubMed Central

    Brooks, Chloe; Kazimierczak, Karolina; Ngobeni, Sizzy; Twine, Rhian; Tollman, Stephen; Kahn, Kathleen; Byass, Peter

    2017-01-01

    Introduction South Africa is a country faced with complex health and social inequalities, in which HIV/AIDS has had devastating impacts. The study aimed to gain insights into the perspectives of rural communities on HIV-related mortality. Methods A participatory action research (PAR) process, inclusive of a visual participatory method (Photovoice), was initiated to elicit and organise local knowledge and to identify priorities for action in a rural subdistrict underpinned by the Agincourt Health and Socio-Demographic Surveillance System (HDSS). We convened three village-based discussion groups, presented HDSS data on HIV-related mortality, elicited subjective perspectives on HIV/AIDS, systematised these into collective accounts and identified priorities for action. Framework analysis was performed on narrative and visual data, and practice theory was used to interpret the findings. Findings A range of social and health systems factors were identified as causes and contributors of HIV mortality. These included alcohol use/abuse, gender inequalities, stigma around disclosure of HIV status, problems with informal care, poor sanitation, harmful traditional practices, delays in treatment, problems with medications and problematic staff–patient relationships. To address these issues, developing youth facilities in communities, improving employment opportunities, timely treatment and extending community outreach for health education and health promotion were identified. Discussion Addressing social practices of blame, stigma and mistrust around HIV-related mortality may be a useful focus for policy and planning. Research that engages communities and authorities to coproduce evidence can capture these practices, improve communication and build trust. Conclusion Actions to reduce HIV should go beyond individual agency and structural forces to focus on how social practices embody these elements. Initiating PAR inclusive of visual methods can build shared understandings of disease burdens in social and health systems contexts. This can develop shared accountability and improve staff–patient relationships, which, over time, may address the issues identified, here related to stigma and blame. PMID:29071128

  16. Imagining the truth and the moon: an electrophysiological study of abstract and concrete word processing.

    PubMed

    Gullick, Margaret M; Mitra, Priya; Coch, Donna

    2013-05-01

    Previous event-related potential studies have indicated that both a widespread N400 and an anterior N700 index differential processing of concrete and abstract words, but the nature of these components in relation to concreteness and imagery has been unclear. Here, we separated the effects of word concreteness and task demands on the N400 and N700 in a single word processing paradigm with a within-subjects, between-tasks design and carefully controlled word stimuli. The N400 was larger to concrete words than to abstract words, and larger in the visualization task condition than in the surface task condition, with no interaction. A marked anterior N700 was elicited only by concrete words in the visualization task condition, suggesting that this component indexes imagery. These findings are consistent with a revised or extended dual coding theory according to which concrete words benefit from greater activation in both verbal and imagistic systems. Copyright © 2013 Society for Psychophysiological Research.

  17. Reaction times to weak test lights. [psychophysics biological model]

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.; Ahumada, P.; Welsh, D.

    1984-01-01

    Maloney and Wandell (1984) describe a model of the response of a single visual channel to weak test lights. The initial channel response is a linearly filtered version of the stimulus. The filter output is randomly sampled over time. Each time a sample occurs, there is some probability, increasing with the magnitude of the sampled response, that a discrete detection event is generated. Maloney and Wandell derive the statistics of the detection events. In this paper, a test is conducted of the hypothesis that reaction-time responses to the presence of a weak test light are initiated at the first detection event. This makes it possible to extend the application of the model to lights that are slightly above threshold, but still within the linear operating range of the visual system. A parameter-free prediction of the model proposed by Maloney and Wandell for lights detected by this statistic is tested. The data are in agreement with the prediction.
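
    As a rough illustration of the kind of model being tested here, the sketch below simulates a reaction time as the time of the first detection event generated by Poisson-sampled responses to a linearly filtered step stimulus, plus a fixed motor delay. The filter shape, sampling rate, detection-probability function and all parameter values are assumptions made for this demonstration, not the values used by Maloney and Wandell.

      # Hedged sketch of a first-detection-event reaction-time simulation in the
      # spirit of the single-channel model described above; all parameters are
      # illustrative assumptions, not the authors' choices.
      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_rt(intensity, duration=1.0, dt=1e-3,
                      sample_rate=200.0, tau=0.05, motor_delay=0.25):
          """Return one simulated reaction time (s) to a step of test light."""
          t = np.arange(0.0, duration, dt)
          stimulus = intensity * np.ones_like(t)             # step onset at t = 0
          # Linear temporal filter: first-order low-pass with time constant tau
          kernel = np.exp(-t / tau) * dt / tau
          response = np.convolve(stimulus, kernel)[:t.size]
          # Random sampling times (Poisson process with the given mean rate)
          n = rng.poisson(sample_rate * duration)
          sample_times = np.sort(rng.uniform(0.0, duration, n))
          sampled = np.interp(sample_times, t, response)
          # Detection probability increases with the sampled response magnitude
          p_detect = 1.0 - np.exp(-sampled)
          hits = rng.uniform(size=n) < p_detect
          if not hits.any():
              return np.nan                                  # no detection: a miss
          return sample_times[hits.argmax()] + motor_delay   # first detection event

      rts = np.array([simulate_rt(0.5) for _ in range(500)])
      print("mean simulated RT (s):", round(float(np.nanmean(rts)), 3))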

  18. Boom, Doom and Rocks - The Intersection of Physics, Video Games and Geology

    NASA Astrophysics Data System (ADS)

    McBride, J. H.; Keach, R. W.

    2008-12-01

    Geophysics is a field that incorporates the rigor of physics with the field methods of geology. The onset and rapid development of the computer games that students play bring new hardware and software technologies that significantly improve our understanding and research capabilities. Together they provide unique insights to the subsurface of the earth in ways only imagined just a few short years ago. 3D geological visualization has become an integral part of many petroleum industry exploration efforts. This technology is now being extended to increasing numbers of universities through grants from software vendors. This talk will explore 3D visualization techniques and how they can be used for both teaching and research. Come see examples of 3D geophysical techniques used to: image the geology of ancient river systems off the coast of Brazil and in the Uinta Basin of Utah, guide archaeological excavations on the side of Mt. Vesuvius, Italy, and to study how volcanoes were formed off the coast of New Zealand.

  19. Positive visualization of implanted devices with susceptibility gradient mapping using the original resolution.

    PubMed

    Varma, Gopal; Clough, Rachel E; Acher, Peter; Sénégas, Julien; Dahnke, Hannes; Keevil, Stephen F; Schaeffter, Tobias

    2011-05-01

    In magnetic resonance imaging, implantable devices are usually visualized with a negative contrast. Recently, positive contrast techniques have been proposed, such as susceptibility gradient mapping (SGM). However, SGM reduces the spatial resolution making positive visualization of small structures difficult. Here, a development of SGM using the original resolution (SUMO) is presented. For this, a filter is applied in k-space and the signal amplitude is analyzed in the image domain to determine quantitatively the susceptibility gradient for each pixel. It is shown in simulations and experiments that SUMO results in a better visualization of small structures in comparison to SGM. SUMO is applied to patient datasets for visualization of stent and prostate brachytherapy seeds. In addition, SUMO also provides quantitative information about the number of prostate brachytherapy seeds. The method might be extended to application for visualization of other interventional devices, and, like SGM, it might also be used to visualize magnetically labelled cells. Copyright © 2010 Wiley-Liss, Inc.

  20. Towards a Visual Quality Metric for Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1998-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.
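
    A minimal sketch of the kind of sensitivity-weighted error measure this line of work builds on is given below: differences between a reference and a test frame are weighted in the spatial-frequency domain by a contrast sensitivity curve before being pooled into a single number. The CSF approximation, viewing geometry and pooling exponent are illustrative assumptions and do not reproduce the metrics discussed in the presentation.

      # Hedged sketch of a contrast-sensitivity-weighted error metric for a single
      # frame pair; the CSF shape, viewing geometry and pooling are assumptions.
      import numpy as np

      def csf(f_cpd):
          """Approximate spatial contrast sensitivity (cycles/degree), band-pass."""
          return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

      def weighted_error(reference, test, pixels_per_degree=32.0):
          diff = np.fft.fft2(test - reference)
          fy = np.fft.fftfreq(reference.shape[0]) * pixels_per_degree
          fx = np.fft.fftfreq(reference.shape[1]) * pixels_per_degree
          f = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))
          weighted = np.abs(diff) * csf(f)                 # weight errors by visibility
          return float((weighted ** 2).mean() ** 0.5)      # Minkowski pooling, beta = 2

      rng = np.random.default_rng(1)
      ref = rng.random((64, 64))
      print(weighted_error(ref, ref + 0.01 * rng.random((64, 64))))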

  1. Automated Assessment of Visual Quality of Digital Video

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ellis, Stephen R. (Technical Monitor)

    1997-01-01

    The advent of widespread distribution of digital video creates a need for automated methods for evaluating visual quality of digital video. This is particularly so since most digital video is compressed using lossy methods, which involve the controlled introduction of potentially visible artifacts. Compounding the problem is the bursty nature of digital video, which requires adaptive bit allocation based on visual quality metrics. In previous work, we have developed visual quality metrics for evaluating, controlling, and optimizing the quality of compressed still images[1-4]. These metrics incorporate simplified models of human visual sensitivity to spatial and chromatic visual signals. The challenge of video quality metrics is to extend these simplified models to temporal signals as well. In this presentation I will discuss a number of the issues that must be resolved in the design of effective video quality metrics. Among these are spatial, temporal, and chromatic sensitivity and their interactions, visual masking, and implementation complexity. I will also touch on the question of how to evaluate the performance of these metrics.

  2. Early esophageal cancer detection using RF classifiers

    NASA Astrophysics Data System (ADS)

    Janse, Markus H. A.; van der Sommen, Fons; Zinger, Svitlana; Schoon, Erik J.; de With, Peter H. N.

    2016-03-01

    Esophageal cancer is one of the fastest rising forms of cancer in the Western world. Using High-Definition (HD) endoscopy, gastroenterology experts can identify esophageal cancer at an early stage. Recent research shows that early cancer can be found using a state-of-the-art computer-aided detection (CADe) system based on analyzing static HD endoscopic images. Our research aims at extending this system by applying Random Forest (RF) classification, which introduces a confidence measure for detected cancer regions. To visualize this data, we propose a novel automated annotation system, employing the unique characteristics of the previous confidence measure. This approach allows reliable modeling of multi-expert knowledge and provides essential data for real-time video processing, to enable future use of the system in a clinical setting. The performance of the CADe system is evaluated on a 39-patient dataset, containing 100 images annotated by 5 expert gastroenterologists. The proposed system reaches a precision of 75% and recall of 90%, thereby improving the state-of-the-art results by 11 and 6 percentage points, respectively.
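
    The confidence measure described above can be illustrated with a random forest's class-probability output, which is simply the fraction of trees voting for the cancer class. The sketch below is a hedged stand-in using scikit-learn with synthetic features; the real CADe system's feature extraction, training data and decision threshold are not reproduced here.

      # Hedged sketch: a random forest's class-probability output used as a
      # per-patch confidence measure; features, labels and threshold are synthetic
      # placeholders, not the paper's CADe pipeline.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(2)
      X_train = rng.normal(size=(400, 16))             # stand-in image-patch features
      y_train = rng.integers(0, 2, size=400)           # 1 = early-cancer region
      X_test = rng.normal(size=(50, 16))

      forest = RandomForestClassifier(n_estimators=200, random_state=0)
      forest.fit(X_train, y_train)

      confidence = forest.predict_proba(X_test)[:, 1]  # fraction of trees voting "cancer"
      flagged = confidence > 0.7                       # annotate only confident regions
      print(int(flagged.sum()), "of", len(flagged), "patches flagged for annotation")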

  3. Two adults with multiple disabilities use a computer-aided telephone system to make phone calls independently.

    PubMed

    Lancioni, Giulio E; O'Reilly, Mark F; Singh, Nirbhay N; Sigafoos, Jeff; Oliva, Doretta; Alberti, Gloria; Lang, Russell

    2011-01-01

    This study extended the assessment of a newly developed computer-aided telephone system with two participants (adults) who presented with blindness or severe visual impairment and motor or motor and intellectual disabilities. For each participant, the study was carried out according to an ABAB design, in which the A represented baseline phases and the B represented intervention phases, during which the special telephone system was available. The system involved, among other components, a netbook computer provided with specific software, a global system for mobile communication (GSM) modem, and a microswitch. Both participants learned to use the system very rapidly and managed to make phone calls independently to a variety of partners such as family members, friends and staff personnel. The results were discussed in terms of the technology under investigation (its advantages, drawbacks, and need for improvement) and the social-communication impact it can make for persons with multiple disabilities. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Interactive MPEG-4 low-bit-rate speech/audio transmission over the Internet

    NASA Astrophysics Data System (ADS)

    Liu, Fang; Kim, JongWon; Kuo, C.-C. Jay

    1999-11-01

    The recently developed MPEG-4 technology enables the coding and transmission of natural and synthetic audio-visual data in the form of objects. In an effort to extend the object-based functionality of MPEG-4 to real-time Internet applications, architectural prototypes of the multiplex layer and transport layer tailored for transmission of MPEG-4 data over IP are under debate among the Internet Engineering Task Force (IETF) and the MPEG-4 Systems Ad Hoc group. In this paper, we present an architecture for an interactive MPEG-4 speech/audio transmission system over the Internet. It utilizes a framework of Real Time Streaming Protocol (RTSP) over Real-time Transport Protocol (RTP) to provide controlled, on-demand delivery of real-time speech/audio data. Based on a client-server model, a couple of low bit-rate bit streams (real-time speech/audio, pre-encoded speech/audio) are multiplexed and transmitted via a single RTP channel to the receiver. The MPEG-4 Scene Description (SD) and Object Descriptor (OD) bit streams are securely sent through the RTSP control channel. Upon receiving, an initial MPEG-4 audio-visual scene is constructed after de-multiplexing, decoding of bit streams, and scene composition. A receiver is allowed to manipulate the initial audio-visual scene presentation locally, or interactively arrange scene changes by sending requests to the server. A server may also choose to update the client with new streams and a list of contents for user selection.

  5. Making journals accessible to the visually impaired: the future is near

    PubMed Central

    GARDNER, John; BULATOV, Vladimir; KELLY, Robert

    2010-01-01

    The American Physical Society (APS) has been a leader in using markup languages for publishing. ViewPlus has led development of innovative technologies for graphical information accessibility by people with print disabilities. APS, ViewPlus, and other collaborators in the Enhanced Reading Project are working together to develop the necessary technology and infrastructure for APS to publish its journals in the DAISY (Digital Accessible Information SYstem) Extensible Markup Language (XML) format, in which all text, math, and figures would be accessible to people who are blind or have other print disabilities. The first APS DAISY XML publications are targeted for late 2010. PMID:20676358

  6. Visual Astronomy; A guide to understanding the night sky

    NASA Astrophysics Data System (ADS)

    Photinos, Panos

    2015-03-01

    This book introduces the basics of observational astronomy. It explains the essentials of time and coordinate systems, and their use in basic observations of the night sky. The fundamental concepts and terminology are introduced in simple language making very little use of equations and math. Examples illustrate how to select the relevant information from widely accessible resources, and how to use the information to determine what is visible and when it is visible from the reader's particular location. Particular attention is paid to the dependence of the appearance and motion on the observer's location, by extending the discussion to include various latitudes in both North and South hemispheres.

  7. Multi-particle three-dimensional coordinate estimation in real-time optical manipulation

    NASA Astrophysics Data System (ADS)

    Dam, J. S.; Perch-Nielsen, I.; Palima, D.; Gluckstad, J.

    2009-11-01

    We have previously shown how stereoscopic images can be obtained in our three-dimensional optical micromanipulation system [J. S. Dam et al., Opt. Express 16, 7244 (2008)]. Here, we present an extension and application of this principle to automatically gather the three-dimensional coordinates for all trapped particles with high tracking range and high reliability without requiring user calibration. By deconvolving the red, green, and blue colour planes to correct for bleeding between them, we show that we can extend the system to also utilize green illumination, in addition to the blue and red. Applying the green colour as on-axis illumination yields redundant information for enhanced error correction, which is used to verify the gathered data, resulting in reliable coordinates as well as producing visually attractive images.
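
    A minimal sketch of the colour-plane unmixing step is shown below: if the bleeding between the red, green and blue planes can be summarised by a 3x3 crosstalk matrix, applying its inverse recovers the separate illumination channels. The matrix values here are invented for illustration; in the actual system they would come from a calibration of the camera and illumination.

      # Hedged sketch of colour-plane unmixing with a 3x3 crosstalk matrix; the
      # matrix values are invented for illustration and would be calibrated in practice.
      import numpy as np

      # crosstalk[i, j] = fraction of channel j's light recorded in sensor plane i
      crosstalk = np.array([[0.90, 0.08, 0.02],
                            [0.06, 0.88, 0.06],
                            [0.02, 0.10, 0.88]])
      correction = np.linalg.inv(crosstalk)

      def unmix(image_rgb):
          """Recover the true R, G, B illumination planes from a bleeding image."""
          h, w, _ = image_rgb.shape
          flat = image_rgb.reshape(-1, 3).T                # 3 x (h*w) pixel matrix
          return (correction @ flat).T.reshape(h, w, 3)

      mixed = np.random.default_rng(3).random((4, 4, 3))
      print(unmix(mixed).shape)                            # (4, 4, 3), unmixed planes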

  8. The role of spatial integration in the perception of surface orientation with active touch.

    PubMed

    Giachritsis, Christos D; Wing, Alan M; Lovell, Paul G

    2009-10-01

    Vision research has shown that perception of line orientation, in the fovea area, improves with line length (Westheimer & Ley, 1997). This suggests that the visual system may use spatial integration to improve perception of orientation. In the present experiments, we investigated the role of spatial integration in the perception of surface orientation using kinesthetic and proprioceptive information from shoulder and elbow. With their left index fingers, participants actively explored virtual slanted surfaces of different lengths and orientations, and were asked to reproduce an orientation or discriminate between two orientations. Results showed that reproduction errors and discrimination thresholds improve with surface length. This suggests that the proprioceptive shoulder-elbow system may integrate redundant spatial information resulting from extended arm movements to improve orientation judgments.

  9. The wisdom of crowds for visual search

    PubMed Central

    Juni, Mordechai Z.; Eckstein, Miguel P.

    2017-01-01

    Decision-making accuracy typically increases through collective integration of people’s judgments into group decisions, a phenomenon known as the wisdom of crowds. For simple perceptual laboratory tasks, classic signal detection theory specifies the upper limit for collective integration benefits obtained by weighted averaging of people’s confidences, and simple majority voting can often approximate that limit. Life-critical perceptual decisions often involve searching large image data (e.g., medical, security, and aerial imagery), but the expected benefits and merits of using different pooling algorithms are unknown for such tasks. Here, we show that expected pooling benefits are significantly greater for visual search than for single-location perceptual tasks and the prediction given by classic signal detection theory. In addition, we show that simple majority voting obtains inferior accuracy benefits for visual search relative to averaging and weighted averaging of observers’ confidences. Analysis of gaze behavior across observers suggests that the greater collective integration benefits for visual search arise from an interaction between the foveated properties of the human visual system (high foveal acuity and low peripheral acuity) and observers’ nonexhaustive search patterns, and can be predicted by an extended signal detection theory framework with trial to trial sampling from a varying mixture of high and low target detectabilities across observers (SDT-MIX). These findings advance our theoretical understanding of how to predict and enhance the wisdom of crowds for real world search tasks and could apply more generally to any decision-making task for which the minority of group members with high expertise varies from decision to decision. PMID:28490500
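
    The pooling rules compared above can be illustrated with a toy equal-variance signal detection simulation, sketched below: each simulated observer receives Gaussian evidence about target presence, and the group decision is formed either by majority vote over individual yes/no responses or by thresholding the average evidence. The d' value, group size and criteria are arbitrary assumptions; the demo only shows that averaging typically outperforms majority voting and does not reproduce the SDT-MIX framework.

      # Hedged toy simulation contrasting majority voting with confidence averaging
      # under an equal-variance Gaussian signal-detection setup; d', group size and
      # criteria are arbitrary assumptions, not the SDT-MIX model of the paper.
      import numpy as np

      rng = np.random.default_rng(4)
      n_trials, n_observers, dprime = 2000, 7, 1.0
      target_present = rng.integers(0, 2, size=n_trials).astype(bool)

      # Each observer's evidence: N(d', 1) when the target is present, N(0, 1) otherwise
      evidence = rng.normal(0.0, 1.0, size=(n_trials, n_observers)) \
               + dprime * target_present[:, None]

      votes = evidence > dprime / 2                     # individual yes/no decisions
      majority = votes.sum(axis=1) > n_observers / 2    # pooled by majority vote
      averaged = evidence.mean(axis=1) > dprime / 2     # pooled by averaging confidences

      print("majority vote accuracy :", (majority == target_present).mean())
      print("confidence average acc.:", (averaged == target_present).mean())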

  10. Adaptive Kalman filtering for real-time mapping of the visual field

    PubMed Central

    Ward, B. Douglas; Janik, John; Mazaheri, Yousef; Ma, Yan; DeYoe, Edgar A.

    2013-01-01

    This paper demonstrates the feasibility of real-time mapping of the visual field for clinical applications. Specifically, three aspects of this problem were considered: (1) experimental design, (2) statistical analysis, and (3) display of results. Proper experimental design is essential to achieving a successful outcome, particularly for real-time applications. A random-block experimental design was shown to have less sensitivity to measurement noise, as well as greater robustness to error in modeling of the hemodynamic impulse response function (IRF) and greater flexibility than common alternatives. In addition, random encoding of the visual field allows for the detection of voxels that are responsive to multiple, not necessarily contiguous, regions of the visual field. Due to its recursive nature, the Kalman filter is ideally suited for real-time statistical analysis of visual field mapping data. An important feature of the Kalman filter is that it can be used for nonstationary time series analysis. The capability of the Kalman filter to adapt, in real time, to abrupt changes in the baseline arising from subject motion inside the scanner and other external system disturbances is important for the success of clinical applications. The clinician needs real-time information to evaluate the success or failure of the imaging run and to decide whether to extend, modify, or terminate the run. Accordingly, the analytical software provides real-time displays of (1) brain activation maps for each stimulus segment, (2) voxel-wise spatial tuning profiles, (3) time plots of the variability of response parameters, and (4) time plots of activated volume. PMID:22100663
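
    A minimal sketch of the recursive estimation idea is given below: a two-state Kalman filter tracks a voxel's stimulus-locked response amplitude together with a drifting baseline, updating both after every new scan. The random-walk state model, toy regressor and noise variances are assumptions made for this illustration and are far simpler than the full analysis described in the paper.

      # Hedged sketch of a recursive (Kalman-filter) estimate of a voxel's response
      # amplitude that adapts to a drifting baseline; the state model, regressor and
      # noise levels are toy assumptions, far simpler than the paper's analysis.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 300
      regressor = (np.arange(n) % 20 < 10).astype(float)   # toy on/off stimulus
      baseline = np.cumsum(rng.normal(0, 0.05, n))         # slow scanner drift
      signal = 2.0 * regressor + baseline + rng.normal(0, 0.5, n)

      x = np.zeros(2)               # state: [response amplitude, baseline]
      P = np.eye(2) * 10.0          # state covariance
      Q = np.diag([1e-4, 1e-3])     # process noise (lets the baseline drift)
      R = 0.25                      # measurement noise variance

      for t in range(n):
          H = np.array([regressor[t], 1.0])       # measurement model for this scan
          P = P + Q                                # predict
          K = P @ H / (H @ P @ H + R)              # Kalman gain
          x = x + K * (signal[t] - H @ x)          # update with the new scan
          P = (np.eye(2) - np.outer(K, H)) @ P

      print("final amplitude estimate:", round(float(x[0]), 2))   # near the true 2.0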

  11. Reshaping the brain after stroke: The effect of prismatic adaptation in patients with right brain damage.

    PubMed

    Crottaz-Herbette, Sonia; Fornari, Eleonora; Notter, Michael P; Bindschaedler, Claire; Manzoni, Laura; Clarke, Stephanie

    2017-09-01

    Prismatic adaptation has been repeatedly reported to alleviate neglect symptoms; in normal subjects, it was shown to enhance the representation of the left visual space within the left inferior parietal cortex. Our study aimed to determine in humans whether similar compensatory mechanisms underlie the beneficial effect of prismatic adaptation in neglect. Fifteen patients with right hemispheric lesions and 11 age-matched controls underwent a prismatic adaptation session which was preceded and followed by fMRI using a visual detection task. In patients, the prismatic adaptation session improved the accuracy of target detection in the left and central space and enhanced the representation of this visual space within the left hemisphere in parts of the temporal convexity, inferior parietal lobule and prefrontal cortex. Across patients, the increase in neuronal activation within the temporal regions correlated with performance improvements in this visual space. In control subjects, prismatic adaptation enhanced the representation of the left visual space within the left inferior parietal lobule and decreased it within the left temporal cortex. Thus, a brief exposure to prismatic adaptation enhances, both in patients and in control subjects, the competence of the left hemisphere for the left space, but the regions extended beyond the inferior parietal lobule to the temporal convexity in patients. These results suggest that the left hemisphere provides compensatory mechanisms in neglect by assuming the representation of the whole space within the ventral attentional system. The rapidity of the change suggests that the underlying mechanism relies on uncovering pre-existing synaptic connections. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. FluoRender: joint freehand segmentation and visualization for many-channel fluorescence data analysis.

    PubMed

    Wan, Yong; Otsuna, Hideo; Holman, Holly A; Bagley, Brig; Ito, Masayoshi; Lewis, A Kelsey; Colasanto, Mary; Kardon, Gabrielle; Ito, Kei; Hansen, Charles

    2017-05-26

    Image segmentation and registration techniques have enabled biologists to place large amounts of volume data from fluorescence microscopy, morphed three-dimensionally, onto a common spatial frame. Existing tools built on volume visualization pipelines for single channel or red-green-blue (RGB) channels have become inadequate for the new challenges of fluorescence microscopy. For a three-dimensional atlas of the insect nervous system, hundreds of volume channels are rendered simultaneously, whereas fluorescence intensity values from each channel need to be preserved for versatile adjustment and analysis. Although several existing tools have incorporated support of multichannel data using various strategies, the lack of a flexible design has made true many-channel visualization and analysis unavailable. The most common practice for many-channel volume data presentation is still converting and rendering pseudosurfaces, which are inaccurate for both qualitative and quantitative evaluations. Here, we present an alternative design strategy that accommodates the visualization and analysis of about 100 volume channels, each of which can be interactively adjusted, selected, and segmented using freehand tools. Our multichannel visualization includes a multilevel streaming pipeline plus a triple-buffer compositing technique. Our method also preserves original fluorescence intensity values on graphics hardware, a crucial feature that allows graphics-processing-unit (GPU)-based processing for interactive data analysis, such as freehand segmentation. We have implemented the design strategies as a thorough restructuring of our original tool, FluoRender. The redesign of FluoRender not only maintains the existing multichannel capabilities for a greatly extended number of volume channels, but also enables new analysis functions for many-channel data from emerging biomedical-imaging techniques.
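
    As a loose illustration of many-channel compositing with the raw intensities kept intact, the sketch below additively blends a stack of intensity channels into an RGB preview using per-channel colours and visibility toggles. It is a CPU-side toy in NumPy; the channel count, colours and blend rule are assumptions and do not reflect FluoRender's GPU streaming or triple-buffer compositing.

      # Hedged CPU-side sketch of additive many-channel compositing with the raw
      # intensities kept untouched; channel count, colours and blend rule are
      # assumptions and do not reflect FluoRender's GPU streaming pipeline.
      import numpy as np

      rng = np.random.default_rng(6)
      n_channels, shape = 20, (32, 32)
      channels = rng.random((n_channels, *shape))          # raw intensity slices
      colors = rng.random((n_channels, 3))                 # user-assigned colours
      visible = np.ones(n_channels, dtype=bool)            # interactive toggles

      rgb = np.zeros((*shape, 3))
      for ch, col, on in zip(channels, colors, visible):
          if on:
              rgb += ch[..., None] * col                   # additive blend per channel
      rgb /= rgb.max()                                     # normalise for display only
      print(rgb.shape, channels.dtype)                     # original intensities intact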

  13. Analyzing Living Surveys: Visualization Beyond the Data Release

    NASA Astrophysics Data System (ADS)

    Buddelmeijer, H.; Noorishad, P.; Williams, D.; Ivanova, M.; Roerdink, J. B. T. M.; Valentijn, E. A.

    2015-09-01

    Surveys need to provide more than periodic data releases. Science often requires data that is not captured in such releases. This mismatch between the constraints set by a fixed data release and the needs of the scientists is solved in the Astro-WISE information system by extending its request-driven data handling into the analysis domain. This leads to Query-Driven Visualization, where all data handling is automated and scalable by exploiting the strengths of data pulling. Astro-WISE is data-centric: new data creates itself automatically, if no suitable existing data can be found to fulfill a request. This approach allows scientists to visualize exactly the data they need, without any manual data management, freeing their time for research. The benefits of query-driven visualization are highlighted by searching for distant quasars in KiDS, a 1500 square degree optical survey. KiDS needs to be treated as a living survey to minimize the time between observation and (spectral) follow-up. The first window of opportunity would be missed if it were necessary to wait for data releases. The results from the default processing pipelines are used for a quick and broad selection of quasar candidates. More precise measurements of source properties can subsequently be requested to downsize the candidate set, requiring partial reprocessing of the images. Finally, the raw and reduced pixels themselves are inspected by eye to rank the final candidate list. The quality of the resulting candidate list and the speed of its creation were only achievable due to query-driven visualization of the living archive.

  14. Learning Visualization Strategies: A qualitative investigation

    NASA Astrophysics Data System (ADS)

    Halpern, Daniel; Oh, Kyong Eun; Tremaine, Marilyn; Chiang, James; Bemis, Karen; Silver, Deborah

    2015-12-01

    The following study investigates the range of strategies individuals develop to infer and interpret cross-sections of three-dimensional objects. We focus on the identification of mental representations and problem-solving processes made by 11 individuals with the goal of building training applications that integrate the strategies developed by the participants in our study. Our results suggest that although spatial transformation and perspective-taking techniques are useful for visualizing cross-section problems, these visual processes are augmented by analytical thinking. Further, our study shows that participants employ general analytic strategies for extended periods which evolve through practice into a set of progressively more expert strategies. Theoretical implications are discussed and five main findings are recommended for integration into the design of education software that facilitates visual learning and comprehension.

  15. Darkfield Adapter for Whole Slide Imaging: Adapting a Darkfield Internal Reflection Illumination System to Extend WSI Applications

    PubMed Central

    Kawano, Yoshihiro; Higgins, Christopher; Yamamoto, Yasuhito; Nyhus, Julie; Bernard, Amy; Dong, Hong-Wei; Karten, Harvey J.; Schilling, Tobias

    2013-01-01

    We present a new method for whole slide darkfield imaging. Whole Slide Imaging (WSI), also sometimes called virtual slide or virtual microscopy technology, produces images that simultaneously provide high resolution and a wide field of observation that can encompass the entire section, extending far beyond any single field of view. For example, a brain slice can be imaged so that both overall morphology and individual neuronal detail can be seen. We extended the capabilities of traditional whole slide systems and developed a prototype system for darkfield internal reflection illumination (DIRI). Our darkfield system uses an ultra-thin light-emitting diode (LED) light source to illuminate slide specimens from the edge of the slide. We used a new type of side illumination, a variation on the internal reflection method, to illuminate the specimen and create a darkfield image. This system has four main advantages over traditional darkfield: (1) no oil condenser is required for high-resolution imaging; (2) there is less scatter from dust and dirt on the slide specimen; (3) there is less halo, providing a more natural darkfield contrast image; and (4) the motorized system produces darkfield, brightfield and fluorescence images. The WSI method sometimes allows us to image using fewer stains. For instance, diaminobenzidine (DAB) and fluorescent staining are helpful tools for observing protein localization and volume in tissues. However, these methods usually require counter-staining in order to visualize tissue structure, limiting the accuracy of localization of labeled cells within the complex multiple regions of typical neurohistological preparations. Darkfield imaging works on the basis of light scattering from refractive index mismatches in the sample. It is a label-free method of producing contrast in a sample. We propose that adapting darkfield imaging to WSI is very useful, particularly when researchers require additional structural information without the use of further staining. PMID:23520500

  16. Clinical outcome and higher order aberrations after bilateral implantation of an extended depth of focus intraocular lens.

    PubMed

    Pilger, Daniel; Homburg, David; Brockmann, Tobias; Torun, Necip; Bertelmann, Eckart; von Sonnleithner, Christoph

    2018-04-01

    The purpose of this study was to assess the clinical outcome after bilateral implantation of an extended depth of focus intraocular lens in comparison to a monofocal intraocular lens. The study was conducted at the Department of Ophthalmology, Charité-Medical University Berlin, Germany. A total of 60 eyes of 30 patients were enrolled in this prospective, single-center study. The cataract patients underwent phacoemulsification with bilateral implantation of a TECNIS® Symfony (Abbott Medical Optics, Santa Ana, CA, USA, 15 patients) or a TECNIS Monofocal ZCB00 (Abbott Medical Optics, Santa Ana, CA, USA, 15 patients). Postoperative evaluations were performed after 1 and 3 months, including visual acuities at far, intermediate, and near distance. Mesopic and scotopic vision and contrast sensitivity were investigated. Aberrometry was performed using an iTrace aberrometer with a pupil scan size of 5.0 mm. After 3 months, the TECNIS Symfony group reached an uncorrected visual acuity at far distance of -0.02 logMAR compared to -0.06 logMAR in the TECNIS Monofocal group (p = 0.03). Regarding uncorrected vision at intermediate and near distance, the following values were obtained: intermediate visual acuity -0.13 versus 0.0 logMAR (TECNIS Symfony vs TECNIS Monofocal, p = 0.001) and near visual acuity 0.11 versus 0.26 logMAR (TECNIS Symfony vs TECNIS Monofocal, p = 0.001). Low-contrast visual acuities were 0.27 versus 0.20 logMAR (TECNIS Symfony vs TECNIS Monofocal, p = 0.023). The TECNIS Symfony intraocular lens can be considered an appropriate alternative to multifocal intraocular lenses because of good visual results at far, intermediate, and near distance as well as in low-contrast vision.

  17. Enhancing sensitivity of high resolution optical coherence tomography using an optional spectrally encoded extended source (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yu, Xiaojun; Liu, Xinyu; Chen, Si; Wang, Xianghong; Liu, Linbo

    2016-03-01

    High-resolution optical coherence tomography (OCT) is of critical importance to disease diagnosis because it is capable of providing detailed microstructural information on biological tissues. However, a compromise usually has to be made between its spatial resolutions and sensitivity due to the suboptimal spectral response of the system components, such as the linear camera, the dispersion grating, and the focusing lenses. In this study, we demonstrate an OCT system that achieves both high spatial resolutions and enhanced sensitivity by utilizing a spectrally encoded source. The system achieves a lateral resolution of 3.1 μm and an axial resolution of 2.3 μm in air; with a simple dispersive prism placed in the infinity space of the sample arm optics, the illumination beam on the sample is transformed into a line source with a visual angle of 10.3 mrad. Such an extended source technique allows a ~4 times larger maximum permissible exposure (MPE) than its point source counterpart, which improves the system sensitivity by ~6 dB. In addition, the dispersive prism can be conveniently switched to a reflector. Such flexibility helps increase the penetration depth of the system without increasing the complexity of current point source devices. We conducted experiments to characterize the system's imaging capability using the human fingertip in vivo and the swine eye optic nerve disc ex vivo. The higher penetration depth of such a system over the conventional point source OCT system is also demonstrated in these two tissues.
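
    As a quick check of the quoted figures, a roughly 4-fold increase in permissible sample exposure corresponds to about 6 dB, assuming sensitivity scales linearly with sample power as in a shot-noise-limited OCT system (an assumption made here, not a statement from the paper).

      import math
      print(f"{10 * math.log10(4):.1f} dB")   # ~6.0 dB gain from a 4x exposure increase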

  18. Predictive Coding: A Fresh View of Inhibition in the Retina

    NASA Astrophysics Data System (ADS)

    Srinivasan, M. V.; Laughlin, S. B.; Dubs, A.

    1982-11-01

    Interneurons exhibiting centre-surround antagonism within their receptive fields are commonly found in peripheral visual pathways. We propose that this organization enables the visual system to encode spatial detail in a manner that minimizes the deleterious effects of intrinsic noise, by exploiting the spatial correlation that exists within natural scenes. The antagonistic surround takes a weighted mean of the signals in neighbouring receptors to generate a statistical prediction of the signal at the centre. The predicted value is subtracted from the actual centre signal, thus minimizing the range of outputs transmitted by the centre. In this way the entire dynamic range of the interneuron can be devoted to encoding a small range of intensities, thus rendering fine detail detectable against intrinsic noise injected at later stages in processing. This predictive encoding scheme also reduces spatial redundancy, thereby enabling the array of interneurons to transmit a larger number of distinguishable images, taking into account the expected structure of the visual world. The profile of the required inhibitory field is derived from statistical estimation theory. This profile depends strongly upon the signal-to-noise ratio and weakly upon the extent of lateral spatial correlation. The receptive fields that are quantitatively predicted by the theory resemble those of X-type retinal ganglion cells and show that the inhibitory surround should become weaker and more diffuse at low intensities. The latter property is unequivocally demonstrated in the first-order interneurons of the fly's compound eye. The theory is extended to the time domain to account for the phasic responses of fly interneurons. These comparisons suggest that, in the early stages of processing, the visual system is concerned primarily with coding the visual image to protect against subsequent intrinsic noise, rather than with reconstructing the scene or extracting specific features from it. The treatment emphasizes that a neuron's dynamic range should be matched to both its receptive field and the statistical properties of the visual pattern expected within this field. Finally, the analysis is synthetic because it is an extension of the background suppression hypothesis (Barlow & Levick 1976), satisfies the redundancy reduction hypothesis (Barlow 1961a, b) and is equivalent to deblurring under certain conditions (Ratliff 1965).
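
    The core operation described above, subtracting a surround-weighted prediction from each centre signal, can be sketched in a few lines. The Gaussian surround weights and the test image below are illustrative assumptions rather than the statistically derived profile of the paper; the point is only that the transmitted residual has a much smaller range than the raw image.

      # Hedged sketch of predictive coding with a centre-surround unit: each output
      # is the centre value minus a surround-weighted prediction. The Gaussian
      # weights and test image are illustrative, not the paper's derived profile.
      import numpy as np

      def center_surround_encode(image, radius=2, sigma=1.5):
          """Return centre minus surround-predicted value for each pixel."""
          ax = np.arange(-radius, radius + 1)
          xx, yy = np.meshgrid(ax, ax)
          w = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
          w[radius, radius] = 0.0                  # the surround excludes the centre
          w /= w.sum()
          padded = np.pad(image, radius, mode="edge")
          pred = np.zeros_like(image, dtype=float)
          h, wid = image.shape
          for i in range(h):
              for j in range(wid):
                  pred[i, j] = (w * padded[i:i + 2*radius + 1, j:j + 2*radius + 1]).sum()
          return image - pred                      # only the unpredicted part is sent

      rng = np.random.default_rng(7)
      x = np.linspace(0, np.pi, 32)
      img = np.outer(np.sin(x), np.cos(x)) + 0.05 * rng.normal(size=(32, 32))
      print("output range is compressed:", center_surround_encode(img).std() < img.std())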

  19. Extending the Lunar Mapping and Modeling Portal - New Capabilities and New Worlds

    NASA Technical Reports Server (NTRS)

    Day, B.; Law, E.; Arevalo, E.; Bui, B.; Chang, G.; Dodge, K.; Kim, R.; Malhotra, S.; Sadaqathullah, S.; Schmidt, G.

    2015-01-01

    NASA's Lunar Mapping and Modeling Portal (LMMP) provides a web-based Portal and a suite of interactive visualization and analysis tools to enable mission planners, lunar scientists, and engineers to access mapped lunar data products from past and current lunar missions (http://lmmp.nasa.gov). During the past year, the capabilities and data served by LMMP have been significantly expanded. New interfaces are providing improved ways to access and visualize data. At the request of NASA's Science Mission Directorate, LMMP's technology and capabilities are now being extended to additional planetary bodies. New portals for Vesta and Mars are the first of these new products to be released. This presentation will provide an overview of LMMP, Vesta Trek, and Mars Trek, demonstrate their uses and capabilities, highlight new features, and preview coming enhancements.

  20. Breaking the cycle: extending the persistent pain cycle diagram using an affective pictorial metaphor.

    PubMed

    Stones, Catherine; Cole, Frances

    2014-01-01

    The persistent pain cycle diagram is a common feature of pain management literature, but how is it designed, and is it fulfilling its potential in terms of providing information to motivate behavioral change? This article examines on-line persistent pain diagrams and critically discusses their purpose and design approach. By using broad information design theories by Karabeg and particular approaches to dialogic visual communications in business, this article argues the need for motivational as well as cognitive diagrams. It also outlines the design of a new persistent pain cycle that is currently being used with chronic pain patients in NHS Bradford, UK. This new cycle adopts and then visually extends an established verbal metaphor within acceptance and commitment therapy (ACT) in an attempt to increase the motivational aspects of the vicious circle diagram format.

  1. Extending the Lunar Mapping and Modeling Portal - New Capabilities and New Worlds

    NASA Astrophysics Data System (ADS)

    Day, B.; Law, E.; Arevalo, E.; Bui, B.; Chang, G.; Dodge, K.; Kim, R.; Malhotra, S.; Sadaqathullah, S.; Schmidt, G.; Bailey, B.

    2015-10-01

    NASA's Lunar Mapping and Modeling Portal (LMMP) provides a web-based Portal and a suite of interactive visualization and analysis tools to enable mission planners, lunar scientists, and engineers to access mapped lunar data products from past and current lunar missions (http://lmmp.nasa.gov). During the past year, the capabilities and data served by LMMP have been significantly expanded. New interfaces are providing improved ways to access and visualize data. At the request of NASA's Science Mission Directorate, LMMP's technology and capabilities are now being extended to additional planetary bodies. New portals for Vesta and Mars are the first of these new products to be released. This presentation will provide an overview of LMMP, Vesta Trek, and Mars Trek, demonstrate their uses and capabilities, highlight new features, and preview coming enhancements.

  2. SU-E-J-167: Improvement of Time-Ordered Four Dimensional Cone-Beam CT; Image Mosaicing with Real and Virtual Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakano, M; Kida, S; Masutani, Y

    2014-06-01

    Purpose: In the previous study, we developed a time-ordered four-dimensional (4D) cone-beam CT (CBCT) technique to visualize nonperiodic organ motion, such as peristaltic motion of gastrointestinal organs and adjacent areas, using a half-scan reconstruction method. One important obstacle was that truncation of the projections was caused by the asymmetric location of the flat-panel detector (FPD), needed to cover the whole abdomen or pelvis in one rotation. In this study, we propose image mosaicing to extend the projection data and make it possible to reconstruct a full field-of-view (FOV) image using half-scan reconstruction. Methods: The projections of prostate cancer patients were acquired using the X-ray Volume Imaging system (XVI, version 4.5) on a Synergy linear accelerator system (Elekta, UK). The XVI system has three FOV options, S, M and L; the M FOV was chosen for pelvic CBCT acquisition, with the FPD panel offset by 11.5 cm. The method to produce extended projections consists of three main steps: first, a normal three-dimensional (3D) reconstruction containing the whole pelvis was computed from the real projections. Second, virtual projections were produced by reprojection of the reconstructed 3D image. Third, the real and virtual projections at each angle were combined into one extended mosaic projection. Then, 4D CBCT images were reconstructed using our in-house reconstruction software based on the Feldkamp-Davis-Kress algorithm. The angular range of each reconstruction phase in the 4D reconstruction was 180 degrees, and the range moved as time progressed. Results: Projection data were successfully extended without a discontinuous boundary between the real and virtual projections. Using the mosaic projections, 4D CBCT image sets were reconstructed without truncation artifacts, and thus the whole pelvis was clearly visible. Conclusion: The present method provides extended projections that contain the whole pelvis. The presented reconstruction method also enables time-ordered 4D CBCT reconstruction of organs with non-periodic motion with full FOV and without projection-truncation artifacts. This work was partly supported by the JSPS Core-to-Core Program (No. 23003). This work was partly supported by JSPS KAKENHI 24234567.
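
    The mosaicing step can be pictured, per detector row and per gantry angle, as filling the truncated part of the measured projection with the corresponding virtual (reprojected) values. The sketch below is a simplified stand-in with NaNs marking the truncated region; the actual geometry, seam handling and data formats of the method are not reproduced.

      # Hedged sketch of the per-row mosaicing idea: the truncated part of a measured
      # detector row (NaN here) is filled from the corresponding virtual reprojection.
      import numpy as np

      def mosaic_row(real_row, virtual_row):
          """Combine a truncated real projection row with a full virtual row."""
          out = real_row.copy()
          missing = np.isnan(out)
          out[missing] = virtual_row[missing]
          return out

      rng = np.random.default_rng(8)
      real = np.concatenate([np.full(64, np.nan), rng.random(192)])   # truncated side
      virtual = rng.random(256)                                       # full-width reprojection
      print(np.isnan(mosaic_row(real, virtual)).any())                # False: row is complete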

  3. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation.

    PubMed

    Arujuna, Aruna V; Housden, R James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'Neill, Mark D; Razavi, Reza; Rhode, Kawal S

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and the X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical to determine function and validate accuracy; and 2) in the clinical setting to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Samuel; Patterson, David; Oliker, Leonid

    This article consists of a collection of slides from the authors' conference presentation. The Roofline model is a visually intuitive figure for kernel analysis and optimization. We believe undergraduates will find it useful in assessing performance and scalability limitations. It is easily extended to other architectural paradigms. It is easily extendable to other metrics: performance (sort, graphics, crypto, ...) and bandwidth (L2, PCIe, ...). Furthermore, performance counters could be used to generate a runtime-specific roofline that would greatly aid the optimization.
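
    The Roofline bound itself is a one-line formula: attainable performance is the lesser of peak compute and peak memory bandwidth multiplied by the kernel's arithmetic intensity. Below is a hedged sketch with made-up machine numbers, not measurements of any real system.

      # Hedged sketch of the Roofline bound: attainable performance is the minimum of
      # peak compute and peak bandwidth times arithmetic intensity.
      def roofline(arithmetic_intensity, peak_gflops=100.0, peak_gbps=25.0):
          """Attainable GFLOP/s for a kernel with the given FLOP/byte ratio."""
          return min(peak_gflops, peak_gbps * arithmetic_intensity)

      for ai in (0.25, 1.0, 4.0, 16.0):
          print(f"AI = {ai:5.2f} flop/byte -> {roofline(ai):6.1f} GFLOP/s attainable")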

  5. An OpenEarth Framework (OEF) for Integrating and Visualizing Earth Science Data

    NASA Astrophysics Data System (ADS)

    Moreland, J. L.; Nadeau, D. R.; Baru, C.; Crosby, C. J.

    2009-12-01

    The integration of data is essential to make transformative progress in understanding the complex processes operating at the Earth’s surface and within its interior. While our current ability to collect massive amounts of data, develop structural models, and generate high-resolution dynamics models is well developed, our ability to quantitatively integrate these data and models into holistic interpretations of Earth systems is poorly developed. We lack the basic tools to realize a first-order goal in Earth science of developing integrated 4D models of Earth structure and processes using a complete range of available constraints, at a time when the research agenda of major efforts such as EarthScope demand such a capability. Among the challenges to 3D data integration are data that may be in different coordinate spaces, units, value ranges, file formats, and data structures. While several file format standards exist, they are infrequently or incorrectly used. Metadata is often missing, misleading, or relegated to README text files along side the data. This leaves much of the work to integrate data bogged down by simple data management tasks. The OpenEarth Framework (OEF) being developed by GEON addresses these data management difficulties. The software incorporates file format parsers, data interpretation heuristics, user interfaces to prompt for missing information, and visualization techniques to merge data into a common visual model. The OEF’s data access libraries parse formal and de facto standard file formats and map their data into a common data model. The software handles file format quirks, storage details, caching, local and remote file access, and web service protocol handling. Heuristics are used to determine coordinate spaces, units, and other key data features. Where multiple data structure, naming, and file organization conventions exist, those heuristics check for each convention’s use to find a high confidence interpretation of the data. When no convention or embedded data yields a suitable answer, the user is prompted to fill in the blanks. The OEF’s interaction libraries assist in the construction of user interfaces for data management. These libraries support data import, data prompting, data introspection, the management of the contents of a common data model, and the creation of derived data to support visualization. Finally, visualization libraries provide interactive visualization using an extended version of NASA WorldWind. The OEF viewer supports visualization of terrains, point clouds, 3D volumes, imagery, cutting planes, isosurfaces, and more. Data may be color coded, shaded, and displayed above, or below the terrain, and always registered into a common coordinate space. The OEF architecture is open and cross-platform software libraries are available separately for use with other software projects, while modules from other projects may be integrated into the OEF to extend its features. The OEF is currently being used to visualize data from EarthScope-related research in the Western US.

  6. Comparison of visual outcomes after bilateral implantation of a diffractive trifocal intraocular lens and blended implantation of an extended depth of focus intraocular lens with a diffractive bifocal intraocular lens

    PubMed Central

    de Medeiros, André Lins; de Araújo Rolim, André Gustavo; Motta, Antonio Francisco Pimenta; Ventura, Bruna Vieira; Vilar, César; Chaves, Mário Augusto Pereira Dias; Carricondo, Pedro Carlos; Hida, Wilson Takashi

    2017-01-01

    Purpose The purpose of this study was to compare the visual outcomes and subjective visual quality between bilateral implantation of a diffractive trifocal intraocular lens, Alcon Acrysof IQ® PanOptix® TNFT00 (group A), and blended implantation of an extended depth of focus lens, J&J Tecnis Symfony® ZXR00 with a diffractive bifocal intraocular lens, J&J Vision Tecnis® ZMB00 (group B). Methods This prospective, nonrandomized, consecutive, comparative study included the assessment of 40 eyes in 20 patients implanted with multifocal intraocular lens. Exclusion criteria were existence of any corneal, retina, or optic nerve disease, previous eye surgery, illiteracy, previous refractive surgery, high axial myopia, expected postoperative corneal astigmatism of >1.00 cylindrical diopter (D), and intraoperative or postoperative complications. Binocular visual acuity was tested in all cases. Ophthalmological evaluation included the measurement of uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), uncorrected near visual acuity (UNVA), and uncorrected intermediate visual acuity (UIVA), with the analysis of contrast sensitivity (CS), and visual defocus curve. Results Postoperative UDVA was 0.01 and −0.096 logMAR (p<0.01) in groups A and B, respectively; postoperative CDVA was −0.07 and −0.16 logMAR (p<0.01) in groups A and B, respectively; UIVA was 0.14 and 0.20 logMAR (p<0.01) in groups A and B, respectively; UNVA was −0.03 and 0.11 logMAR (p<0.01) in groups A and B, respectively. Under photopic conditions group B had better CS at low frequencies with and without glare. Conclusion Both groups promoted good quality of vision for long, intermediate, and short distances. Group B exhibited a better performance for very short distances and for intermediate and long distances ≥−1.50 D of vergence. Group A exhibited a better performance for UIVA at 60 cm and for UNVA at 40 cm. PMID:29138533

  7. Simultaneous acquisition of 3D shape and deformation by combination of interferometric and correlation-based laser speckle metrology.

    PubMed

    Dekiff, Markus; Berssenbrügge, Philipp; Kemper, Björn; Denz, Cornelia; Dirksen, Dieter

    2015-12-01

    A metrology system combining three laser speckle measurement techniques for simultaneous determination of 3D shape and micro- and macroscopic deformations is presented. While microscopic deformations are determined by a combination of Digital Holographic Interferometry (DHI) and Digital Speckle Photography (DSP), macroscopic 3D shape, position and deformation are retrieved by photogrammetry based on digital image correlation of a projected laser speckle pattern. The photogrammetrically obtained data extend the measurement range of the DHI-DSP system and also increase the accuracy of the calculation of the sensitivity vector. Furthermore, a precise assignment of microscopic displacements to the object's macroscopic shape for enhanced visualization is achieved. The approach allows for fast measurements with a simple setup. Key parameters of the system are optimized, and its precision and measurement range are demonstrated. As application examples, the deformation of a mandible model and the shrinkage of dental impression material are measured.

  8. [Design and Realization of Personalized Corneal Analysis Software Based on Corneal Topography System].

    PubMed

    Huang, Xueping; Xie, Zhonghao; Cen, Qin; Zheng, Suilian

    2016-08-01

    As the most important refractive element of the eye's optical system, the cornea possesses characteristics that are important parameters in clinical ophthalmic surgery. During corneal measurement in our study, we acquired data from the Orbscan Ⅱ corneal topographer in real time using Hook technology under Windows and then imported the data into the corneal analysis software. We then analyzed the data further to obtain individual Q-values for all 360 corneal semi-meridians. The corneal analysis software used Visual C++ 6.0 as the development environment and OpenGL graphics technology to draw three-dimensional individual corneal morphological maps and the distribution curve of the Q-value, and achieved real-time corneal data query. It can be concluded that this analysis further extends the function of the corneal topography system and provides a solid foundation for further study of the automatic screening of corneal diseases.
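
    One common way to obtain a per-semi-meridian Q value, used here only as a hedged illustration of what such software might compute, is to fit the standard conic sag equation z(r) = r^2 / (R (1 + sqrt(1 - (1+Q) r^2/R^2))) to elevation data along that semi-meridian. The synthetic data, SciPy fit and parameter bounds below are assumptions, not the paper's algorithm.

      # Hedged illustration of estimating a semi-meridian's asphericity Q by fitting
      # the standard conic sag equation to elevation data.
      import numpy as np
      from scipy.optimize import curve_fit

      def conic_sag(r, R, Q):
          """Sag z(r) of a conic surface with apical radius R and asphericity Q."""
          return r**2 / (R * (1.0 + np.sqrt(1.0 - (1.0 + Q) * r**2 / R**2)))

      rng = np.random.default_rng(9)
      r = np.linspace(0.0, 3.0, 40)                       # mm from the corneal apex
      z_true = conic_sag(r, 7.8, -0.26)                   # typical human-cornea values
      z_meas = z_true + rng.normal(0.0, 0.002, r.size)    # add measurement noise

      (R_fit, Q_fit), _ = curve_fit(conic_sag, r, z_meas, p0=(7.5, -0.1),
                                    bounds=([5.0, -1.0], [12.0, 1.0]))
      print(f"fitted R = {R_fit:.2f} mm, Q = {Q_fit:.2f}")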

  9. Committor of elementary reactions on multistate systems

    NASA Astrophysics Data System (ADS)

    Király, Péter; Kiss, Dóra Judit; Tóth, Gergely

    2018-04-01

    In our study, we extend the committor concept to multi-minima systems, where more than one reaction may proceed but feasible data evaluation requires projection onto the partial reactions. The elementary reaction committor and the corresponding probability density of the reactive trajectories are defined and calculated on a three-hole two-dimensional model system explored by single-particle Langevin dynamics. We propose a method to visualize several elementary reaction committor functions or probability densities of reactive trajectories on a single plot, which helps to identify the most important reaction channels and the nonreactive domains simultaneously. We suggest a weighting for the energy-committor plots that correctly shows the limits of both the minimal energy path and the average energy concepts. The methods also performed well in the analysis of molecular dynamics trajectories of 2-chlorobutane, where an elementary reaction committor, the probability densities, the potential energy/committor, and the free-energy/committor curves are presented.
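
    The committor itself has a simple trajectory-based estimator: launch many Langevin trajectories from a point and record the fraction that reach the chosen product basin before the reactant basin. The sketch below does this for a one-dimensional double well; the potential, temperature and basin definitions are toy assumptions, not the three-hole model system of the paper.

      # Hedged sketch of a trajectory-based committor estimate for a 1D double well
      # explored by overdamped Langevin dynamics.
      import numpy as np

      rng = np.random.default_rng(10)

      def force(x):                       # -dV/dx for V(x) = (x^2 - 1)^2
          return -4.0 * x * (x * x - 1.0)

      def committor(x0, n_traj=200, dt=1e-3, gamma=1.0, kT=0.4, a=-1.0, b=1.0):
          """Fraction of trajectories from x0 reaching basin B (x >= b) before A (x <= a)."""
          hits_b = 0
          noise = np.sqrt(2.0 * kT * dt / gamma)
          for _ in range(n_traj):
              x = x0
              while a < x < b:
                  x += force(x) * dt / gamma + noise * rng.normal()
              hits_b += x >= b
          return hits_b / n_traj

      for x0 in (-0.5, 0.0, 0.5):
          print(f"p_B({x0:+.1f}) ~ {committor(x0):.2f}")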

  10. Handling of huge multispectral image data volumes from a spectral hole burning device (SHBD)

    NASA Astrophysics Data System (ADS)

    Graff, Werner; Rosselet, Armel C.; Wild, Urs P.; Gschwind, Rudolf; Keller, Christoph U.

    1995-06-01

    We use chlorin-doped polymer films at low temperatures as the primary imaging detector. Based on the principles of persistent spectral hole burning, this system is capable of storing spatial and spectral information simultaneously in one exposure with extremely high resolution. The sun, as an extended light source, has been imaged onto the film. The information recorded amounts to tens of GBytes. This data volume is read out by scanning the frequency of a tunable dye laser and reading the images with a digital CCD camera. For acquisition, archival, processing, and visualization, we use MUSIC (MUlti processor System with Intelligent Communication), a single instruction multiple data parallel processor system equipped with the necessary I/O facilities. The huge amount of data requires the development of sophisticated algorithms to efficiently calibrate the data and to extract useful and new information for solar physics.

  11. Person and gesture tracking with smart stereo cameras

    NASA Astrophysics Data System (ADS)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint, and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.

  12. A technique system for the measurement, reconstruction and character extraction of rice plant architecture

    PubMed Central

    Li, Xumeng; Wang, Xiaohui; Wei, Hailin; Zhu, Xinguang; Peng, Yulin; Li, Ming; Li, Tao; Huang, Huang

    2017-01-01

    This study developed a technique system for the measurement, reconstruction, and trait extraction of rice canopy architectures, which have challenged functional–structural plant modeling for decades and have become the foundation of the design of ideo-plant architectures. The system uses the location-separation-measurement method (LSMM) for the collection of data on the canopy architecture and the analytic geometry method for the reconstruction and visualization of the three-dimensional (3D) digital architecture of the rice plant. It also uses the virtual clipping method for extracting the key traits of the canopy architecture such as the leaf area, inclination, and azimuth distribution in spatial coordinates. To establish the technique system, we developed (i) simple tools to measure the spatial position of the stem axis and azimuth of the leaf midrib and to capture images of tillers and leaves; (ii) computer software programs for extracting data on stem diameter, leaf nodes, and leaf midrib curves from the tiller images and data on leaf length, width, and shape from the leaf images; (iii) a database of digital architectures that stores the measured data and facilitates the reconstruction of the 3D visual architecture and the extraction of architectural traits; and (iv) computation algorithms for virtual clipping to stratify the rice canopy, to extend the stratified surface from the horizontal plane to a general curved surface (including a cylindrical surface), and to implement it in silico. Each component of the technique system was quantitatively validated and visually compared to images, and the sensitivity of the virtual clipping algorithms was analyzed. This technique is inexpensive and accurate and provides high throughput for the measurement, reconstruction, and trait extraction of rice canopy architectures. The technique provides a more practical method of data collection to serve functional–structural plant models of rice and for the optimization of rice canopy types. Moreover, the technique can be easily adapted for other cereal crops such as wheat, which has numerous stems and leaves sheltering each other. PMID:28558045

  13. The sophisticated visual system of a tiny Cambrian crustacean: analysis of a stalked fossil compound eye

    PubMed Central

    Schoenemann, Brigitte; Castellani, Christopher; Clarkson, Euan N. K.; Haug, Joachim T.; Maas, Andreas; Haug, Carolin; Waloszek, Dieter

    2012-01-01

    Fossilized compound eyes from the Cambrian, isolated and three-dimensionally preserved, provide remarkable insights into the lifestyle and habitat of their owners. The tiny stalked compound eyes described here probably possessed too few facets to form a proper image, but they represent a sophisticated system for detecting moving objects. The eyes are preserved as almost solid, mace-shaped blocks of phosphate, in which the original positions of the rhabdoms in one specimen are retained as deep cavities. Analysis of the optical axes reveals four visual areas, each with different properties in acuity of vision. They are surveyed by lenses directed forwards, laterally, backwards and inwards, respectively. The most intriguing of these is the putatively inwardly orientated zone, where the optical axes, like those orientated to the front, interfere with axes of the other eye of the contralateral side. The result is a three-dimensional visual net that covers not only the front, but extends also far laterally to either side. Thus, a moving object could be perceived by a two-dimensional coordinate (which is formed by two axes of those facets, one of the left and one of the right eye, which are orientated towards the moving object) in a wide three-dimensional space. This compound eye system enables small arthropods equipped with an eye of low acuity to estimate velocity, size or distance of possible food items efficiently. The eyes are interpreted as having been derived from individuals of the early crustacean Henningsmoenicaris scutula, pointing to the existence of highly developed, efficient eyes in the early evolutionary lineage leading towards the modern Crustacea. PMID:22048954

  14. The role of sensorimotor learning in the perception of letter-like forms: tracking the causes of neural specialization for letters.

    PubMed

    James, Karin H; Atwood, Thea P

    2009-02-01

    Functional specialization in the brain is considered a hallmark of efficient processing. It is therefore not surprising that there are brain areas specialized for processing letters. To better understand the causes of functional specialization for letters, we explore the emergence of this pattern of response in the ventral processing stream through a training paradigm. Previously, we hypothesized that the specialized response pattern seen during letter perception may be due in part to our experience in writing letters. The work presented here investigates whether or not this aspect of letter processing, the integration of sensorimotor systems through writing, leads to functional specialization in the visual system. To test this idea, we investigated whether or not different types of experiences with letter-like stimuli ("pseudoletters") led to functional specialization similar to that which exists for letters. Neural activation patterns were measured using functional magnetic resonance imaging (fMRI) before and after three different types of training sessions. Participants were trained to recognize pseudoletters by writing, typing, or purely visual practice. Results suggested that only after writing practice did neural activation patterns to pseudoletters resemble patterns seen for letters. That is, neural activation in the left fusiform and dorsal precentral gyrus was greater when participants viewed pseudoletters than other, similar stimuli, but only after writing experience. Neural activation also increased after typing practice in the right fusiform and left precentral gyrus, suggesting that in some areas, any motor experience may change visual processing. The results of this experiment suggest an intimate interaction among perceptual and motor systems during pseudoletter perception that may be extended to everyday letter perception.

  15. Visualization of Flow Alternatives, Lower Missouri River

    USGS Publications Warehouse

    Jacobson, Robert B.; Heuser, Jeanne

    2002-01-01

    Background The U.S. Army Corps of Engineers (COE) 'Missouri River Master Water Control Manual' (Master Manual) review has resulted in consideration of many flow alternatives for managing the water in the river (COE, 2001; 1998a). The purpose of this report is to present flow-management alternative model results in a way that can be easily visualized and understood. This report was updated in October 2001 to focus on the specific flow-management alternatives presented by the COE in the 'Master Manual Revised Draft Environmental Impact Statement' (RDEIS; COE, 2001). The original version of this report was released in February 2000. The COE, U.S. Fish and Wildlife Service (FWS), Missouri River states, and Missouri River basin tribes have been participating in discussions concerning water management of the Missouri River mainstem reservoir system (MRMRS), the Missouri River Bank Stabilization and Navigation Project, and the Kansas River reservoir system since 1986. These discussions include general input to the revision of the Master Manual as well as formal consultation under Section 7 of the Endangered Species Act. In 2000, the FWS issued a Biological Opinion that prescribed changes to reservoir management on the Missouri River that were believed to be necessary to preclude jeopardy to three endangered species, the pallid sturgeon, piping plover, and interior least tern (USFWS, 2000). The combined Missouri River system is large and complex, including many reservoirs, control structures, and free-flowing reaches extending over a broad region. The ability to assess future impacts of altered management scenarios necessarily involves complex, computational models that attempt to integrate physical, chemical, biological, and economic effects. Graphical visualization of the model output is intended to improve understanding of the differences among flow-management alternatives.

  16. Global Coastal and Marine Spatial Planning (CMSP) from Space Based AIS Ship Tracking

    NASA Astrophysics Data System (ADS)

    Schwehr, K. D.; Foulkes, J. A.; Lorenzini, D.; Kanawati, M.

    2011-12-01

    All nations need to be developing long-term integrated strategies for how to use and preserve our natural resources. As a part of these strategies, we must evaluate how communities of users react to changes in the rules and regulations of ocean use. Global characterization of the vessel traffic on our Earth's oceans is essential to understanding existing uses in order to develop international Coastal and Marine Spatial Planning (CMSP). Ship traffic within 100-200 km of shore is beginning to be effectively covered at low latitudes by ground-based receivers collecting position reports from the maritime Automatic Identification System (AIS). Unfortunately, remote islands, high latitudes, and open-ocean Marine Protected Areas (MPA) are not covered by these ground systems. Deploying enough autonomous airborne (UAV) and surface (USV) vessels and buoys to provide adequate coverage is a difficult task. While the individual device costs are plummeting, a large fleet of AIS receivers is expensive to maintain. The global AIS coverage from SpaceQuest's low Earth orbit satellite receivers, combined with the visualization and data storage infrastructure of Google (e.g., Maps, Earth, and Fusion Tables), provides a platform that enables researchers and resource managers to begin to answer the question of how ocean resources are being utilized. Near real-time vessel traffic data will allow managers of marine resources to understand how changes to education, enforcement, rules, and regulations alter usage and compliance patterns. We will demonstrate the potential for this system using a sample SpaceQuest data set processed with libais, which stores the results in a Fusion Table. From there, the data are imported into PyKML and visualized in Google Earth with a custom gx:Track visualization that uses KML's extended data functionality to facilitate ship track interrogation. Analysts can then annotate and discuss vessel tracks in Fusion Tables.
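
    A minimal sketch of the back end of such a pipeline, assuming position reports have already been decoded (for example with libais) into (timestamp, longitude, latitude) tuples; the function name, field layout, and example values below are hypothetical, not the authors' code:

```python
# Turn decoded AIS position reports for one vessel into a KML gx:Track string.
from datetime import datetime, timezone

def track_kml(mmsi, fixes):
    """fixes: iterable of (unix_time, lon_deg, lat_deg) tuples for one vessel."""
    whens, coords = [], []
    for t, lon, lat in sorted(fixes):                      # chronological order
        stamp = datetime.fromtimestamp(t, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
        whens.append(f"      <when>{stamp}</when>")
        coords.append(f"      <gx:coord>{lon} {lat} 0</gx:coord>")
    head = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"\n'
        '     xmlns:gx="http://www.google.com/kml/ext/2.2">\n'
        f'  <Placemark>\n    <name>MMSI {mmsi}</name>\n    <gx:Track>\n'
    )
    tail = '    </gx:Track>\n  </Placemark>\n</kml>\n'
    return head + "\n".join(whens + coords) + "\n" + tail

# Hypothetical MMSI and fixes, purely for illustration.
print(track_kml(366123456, [(1309478400, -70.1, 42.3), (1309482000, -70.0, 42.4)]))
```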

  17. Efficiency of electronically monitored amblyopia treatment between 5 and 16 years of age: new insight into declining susceptibility of the visual system.

    PubMed

    Fronius, Maria; Cirina, Licia; Ackermann, Hanns; Kohnen, Thomas; Diehl, Corinna M

    2014-10-01

    The notion of a limited, early period of plasticity of the visual system has been challenged by more recent research demonstrating functional enhancement even into adulthood. In amblyopia ("lazy eye") it is still unclear to what extent the reduced effect of treatment after early childhood is due to declining plasticity or to lower compliance with prescribed patching. The aim of this study was to determine the dose-response relationship and treatment efficiency from acuity gain and electronically recorded patching dose rates, and to infer from these parameters a facet of the age dependence of functional plasticity related to occlusion therapy for amblyopia. The Occlusion Dose Monitor was used to record occlusion in 27 participants with previously untreated strabismic and/or anisometropic amblyopia aged between 5.4 and 15.8 (mean 9.2) years during 4 months of conventional treatment. Group data showed improvement of acuity throughout the age span, but significantly more in patients younger than 7 years despite comparable patching dosages. Treatment efficiency declined with age, with the most pronounced effects before the age of 7 years. Thus, electronic recording allowed this first quantitative insight into occlusion treatment spanning the age range from within to beyond the conventional age for patching. Though demonstrating improvement in patients older than 7 years, it confirmed the importance of early detection and treatment of amblyopia. Treatment efficiency is presented as a tool extending insight into age-dependent functional plasticity of the visual system, and providing a basis for comparisons of the effects of patching vs. emerging alternative treatment approaches for amblyopia. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Information-Theoretic Assessment of Sample Imaging Systems

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Alter-Gartenberg, Rachel; Park, Stephen K.; Rahman, Zia-ur

    1999-01-01

    By rigorously extending modern communication theory to the assessment of sampled imaging systems, we develop the formulations that are required to optimize the performance of these systems within the critical constraints of image gathering, data transmission, and image display. The goal of this optimization is to produce images with the best possible visual quality for the wide range of statistical properties of the radiance field of natural scenes that one normally encounters. Extensive computational results are presented to assess the performance of sampled imaging systems in terms of information rate, theoretical minimum data rate, and fidelity. Comparisons of this assessment with perceptual and measurable performance demonstrate that (1) the information rate that a sampled imaging system conveys from the captured radiance field to the observer is closely correlated with the fidelity, sharpness and clarity with which the observed images can be restored and (2) the associated theoretical minimum data rate is closely correlated with the lowest data rate with which the acquired signal can be encoded for efficient transmission.
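
    A schematic form of the Shannon-type information rate used in this line of work is given below; the notation is illustrative, and the report's full formulation treats the image gathering, transmission, and display constraints in more detail.

```latex
\[
  \mathcal{H} \;=\; \tfrac{1}{2}\int_{\hat{B}}
  \log_{2}\!\left[\,1+\frac{|\hat{\tau}(\boldsymbol{\nu})|^{2}\,\Phi_{L}(\boldsymbol{\nu})}
  {\Phi_{a}(\boldsymbol{\nu})+\Phi_{N}(\boldsymbol{\nu})}\right]\mathrm{d}\boldsymbol{\nu},
\]
% where \hat{B} is the sampling passband, \hat{\tau} the spatial frequency
% response of image gathering, \Phi_L the power spectral density of the
% radiance field, \Phi_a the aliased-signal PSD, and \Phi_N the noise PSD.
```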

  19. Low-cost telepresence for collaborative virtual environments.

    PubMed

    Rhee, Seon-Min; Ziegler, Remo; Park, Jiyoung; Naef, Martin; Gross, Markus; Kim, Myoung-Hee

    2007-01-01

    We present a novel low-cost method for visual communication and telepresence in a CAVE-like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven, efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. The system was designed to extend existing projection systems with acquisition capabilities requiring minimal hardware modifications and cost. It uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background, and consists of two color cameras and two additional black-and-white cameras used for segmentation in the near-IR spectrum. There is no need for special optics, as the mask and color image are merged using image warping based on depth estimation. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.

  20. Spatio-temporal assessment of food safety risks in Canadian food distribution systems using GIS.

    PubMed

    Hashemi Beni, Leila; Villeneuve, Sébastien; LeBlanc, Denyse I; Côté, Kevin; Fazil, Aamir; Otten, Ainsley; McKellar, Robin; Delaquis, Pascal

    2012-09-01

    While geographic information systems (GIS) are widely applied in public health, there have been comparatively few examples of applications that extend to the assessment of risks in food distribution systems. GIS can provide decision makers with strong computing platforms for spatial data management, integration, analysis, querying, and visualization. The present report addresses spatial analyses in a complex food distribution system and defines influence areas as travel-time zones generated through road network analysis on a national scale rather than on a community scale. In addition, a dynamic risk index is defined to translate a contamination event into a public health risk as time progresses. More specifically, in this research, GIS is used to map the Canadian produce distribution system, analyze accessibility to contaminated product by consumers, and estimate the level of risk associated with a contamination event over time, as illustrated in a scenario. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  1. Mira: Libro de apresto (Look: Preparatory Book).

    ERIC Educational Resources Information Center

    Martinez, Emiliano; And Others

    This primer picture book may be used in various games and activities to extend the child's vocabulary and to provide pre-reading practice in letter and sound identification, categorization, and audio-visual discrimination. (Author/SK)

  2. Anatomical Analysis of the Retinal Specializations to a Crypto-Benthic, Micro-Predatory Lifestyle in the Mediterranean Triplefin Blenny Tripterygion delaisi

    PubMed Central

    Fritsch, Roland; Collin, Shaun P.; Michiels, Nico K.

    2017-01-01

    The environment and lifestyle of a species are known to exert selective pressure on the visual system, often demonstrating a tight link between visual morphology and ecology. Many studies have predicted the visual requirements of a species by examining the anatomical features of the eye. However, among the vast number of studies on visual specializations in aquatic animals, only a few have focused on small benthic fishes that occupy a heterogeneous and spatially complex visual environment. This study investigates the general retinal anatomy including the topography of both the photoreceptor and ganglion cell populations and estimates the spatial resolving power (SRP) of the eye of the Mediterranean triplefin Tripterygion delaisi. Retinal wholemounts were prepared to systematically and quantitatively analyze photoreceptor and retinal ganglion cell (RGC) densities using design-based stereology. To further examine the retinal structure, we also used magnetic resonance imaging (MRI) and histological examination of retinal cross sections. Observations of the triplefin’s eyes revealed them to be highly mobile, allowing them to view the surroundings without body movements. A rostral aphakic gap and the elliptical shape of the eye extend its visual field rostrally and allow for a rostro-caudal accommodatory axis, enabling this species to focus on prey at close range. Single and twin cones dominate the retina and are consistently arranged in one of two regular patterns, which may enhance motion detection and color vision. The retina features a prominent, dorso-temporal, convexiclivate fovea with an average density of 104,400 double and 30,800 single cones per mm2, and 81,000 RGCs per mm2. Based on photoreceptor spacing, SRP was calculated to be between 6.7 and 9.0 cycles per degree. Location and resolving power of the fovea would benefit the detection and identification of small prey in the lower frontal region of the visual field. PMID:29311852
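
    For reference, a common anatomical route from cell density to spatial resolving power is sketched below as a generic textbook estimate (Matthiessen's ratio for the focal length, hexagonal mosaic, Nyquist limit); it is not necessarily the exact procedure used in this study.

```latex
\[
  f \;\approx\; 2.55\, r_{\mathrm{lens}}, \qquad
  \nu_{N} \;=\; \frac{1}{2}\left(\frac{\pi f}{180}\right)\sqrt{\frac{2D}{\sqrt{3}}}
  \ \ \text{cycles/degree},
\]
% where r_lens is the lens radius, D the peak cell density (cells/mm^2),
% \pi f/180 the retinal distance subtending one degree of visual angle, and
% \nu_N the Nyquist-limited spatial resolving power.
```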

  3. Anatomical Analysis of the Retinal Specializations to a Crypto-Benthic, Micro-Predatory Lifestyle in the Mediterranean Triplefin Blenny Tripterygion delaisi.

    PubMed

    Fritsch, Roland; Collin, Shaun P; Michiels, Nico K

    2017-01-01

    The environment and lifestyle of a species are known to exert selective pressure on the visual system, often demonstrating a tight link between visual morphology and ecology. Many studies have predicted the visual requirements of a species by examining the anatomical features of the eye. However, among the vast number of studies on visual specializations in aquatic animals, only a few have focused on small benthic fishes that occupy a heterogeneous and spatially complex visual environment. This study investigates the general retinal anatomy including the topography of both the photoreceptor and ganglion cell populations and estimates the spatial resolving power (SRP) of the eye of the Mediterranean triplefin Tripterygion delaisi. Retinal wholemounts were prepared to systematically and quantitatively analyze photoreceptor and retinal ganglion cell (RGC) densities using design-based stereology. To further examine the retinal structure, we also used magnetic resonance imaging (MRI) and histological examination of retinal cross sections. Observations of the triplefin's eyes revealed them to be highly mobile, allowing them to view the surroundings without body movements. A rostral aphakic gap and the elliptical shape of the eye extend its visual field rostrally and allow for a rostro-caudal accommodatory axis, enabling this species to focus on prey at close range. Single and twin cones dominate the retina and are consistently arranged in one of two regular patterns, which may enhance motion detection and color vision. The retina features a prominent, dorso-temporal, convexiclivate fovea with an average density of 104,400 double and 30,800 single cones per mm2, and 81,000 RGCs per mm2. Based on photoreceptor spacing, SRP was calculated to be between 6.7 and 9.0 cycles per degree. Location and resolving power of the fovea would benefit the detection and identification of small prey in the lower frontal region of the visual field.

  4. fMRI evidence for sensorimotor transformations in human cortex during smooth pursuit eye movements.

    PubMed

    Kimmig, H; Ohlendorf, S; Speck, O; Sprenger, A; Rutschmann, R M; Haller, S; Greenlee, M W

    2008-01-01

    Smooth pursuit eye movements (SP) are driven by moving objects. The pursuit system processes the visual input signals and transforms this information into an oculomotor output signal. Despite the object's movement on the retina and the eyes' movement in the head, we are able to locate the object in space implying coordinate transformations from retinal to head and space coordinates. To test for the visual and oculomotor components of SP and the possible transformation sites, we investigated three experimental conditions: (I) fixation of a stationary target with a second target moving across the retina (visual), (II) pursuit of the moving target with the second target moving in phase (oculomotor), (III) pursuit of the moving target with the second target remaining stationary (visuo-oculomotor). Precise eye movement data were simultaneously measured with the fMRI data. Visual components of activation during SP were located in the motion-sensitive, temporo-parieto-occipital region MT+ and the right posterior parietal cortex (PPC). Motor components comprised more widespread activation in these regions and additional activations in the frontal and supplementary eye fields (FEF, SEF), the cingulate gyrus and precuneus. The combined visuo-oculomotor stimulus revealed additional activation in the putamen. Possible transformation sites were found in MT+ and PPC. The MT+ activation evoked by the motion of a single visual dot was very localized, while the activation of the same single dot motion driving the eye was rather extended across MT+. The eye movement information appeared to be dispersed across the visual map of MT+. This could be interpreted as a transfer of the one-dimensional eye movement information into the two-dimensional visual map. Potentially, the dispersed information could be used to remap MT+ to space coordinates rather than retinal coordinates and to provide the basis for a motor output control. A similar interpretation holds for our results in the PPC region.

  5. Differential verbal, visual, and spatial working memory in written language production.

    PubMed

    Raulerson, Bascom A; Donovan, Michael J; Whiteford, Alison P; Kellogg, Ronald T

    2010-02-01

    The contributions of verbal, visual, and spatial working memory to written language production were investigated. Participants composed definitions for nouns while concurrently performing a task which required updating, storing, and retrieving information coded either verbally, visually, or spatially. The present study extended past findings by showing that the linguistic encoding of planned conceptual content makes its largest demand on verbal working memory for both low- and high-frequency nouns. Kellogg, Olive, and Piolat in 2007 found that concrete nouns place substantial demands on visual working memory when imaging the nouns' referents during planning, whereas abstract nouns make no such demand. The current study further showed that this pattern was not an artifact of visual working memory being sensitive to the manipulation of just any lexical property of the noun prompts. In contrast to past results, writing made a small but detectable demand on spatial working memory.

  6. Visualizing Article Similarities via Sparsified Article Network and Map Projection for Systematic Reviews.

    PubMed

    Ji, Xiaonan; Machiraju, Raghu; Ritter, Alan; Yen, Po-Yin

    2017-01-01

    Systematic Reviews (SRs) of biomedical literature summarize evidence from high-quality studies to inform clinical decisions, but are time and labor intensive due to the large number of article collections. Article similarities established from textual features have been shown to assist in the identification of relevant articles, thus facilitating an efficient article screening process. In this study, we visualized article similarities to extend their utilization in practical settings for SR researchers, aiming to promote human comprehension of article distributions and hidden patterns. To produce an effective visualization in an interpretable, intuitive, and scalable way, we implemented a graph-based network visualization with three network sparsification approaches and a distance-based map projection via dimensionality reduction. We evaluated and compared the three network sparsification approaches and the two visualization types (article network vs. article map). We demonstrated their effectiveness in revealing article distribution and exhibiting clustering patterns of relevant articles with practical meanings for SRs.
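
    A compact sketch of the two visualization routes named above (k-nearest-neighbour sparsification of a textual similarity network, and a distance-based 2D article map) is given below, using scikit-learn with illustrative parameters and texts rather than the study's settings or data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances
from sklearn.neighbors import kneighbors_graph
from sklearn.manifold import MDS

abstracts = [
    "randomized trial of drug A for hypertension",
    "cohort study of drug A safety outcomes",
    "meta-analysis of hypertension treatments",
    "unrelated imaging physics methods paper",
]
X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)

# (1) Sparsified article network: keep only each article's k nearest neighbours.
knn = kneighbors_graph(X, n_neighbors=2, metric="cosine", mode="distance")
rows, cols = knn.nonzero()
edges = [(int(i), int(j), 1.0 - knn[i, j]) for i, j in zip(rows, cols)]

# (2) Article map: embed pairwise cosine distances into 2D coordinates.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(cosine_distances(X))

print(edges)
print(coords)
```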

  7. BIOLOGICAL NETWORK EXPLORATION WITH CYTOSCAPE 3

    PubMed Central

    Su, Gang; Morris, John H.; Demchak, Barry; Bader, Gary D.

    2014-01-01

    Cytoscape is one of the most popular open-source software tools for the visual exploration of biomedical networks composed of protein, gene and other types of interactions. It offers researchers a versatile and interactive visualization interface for exploring complex biological interconnections supported by diverse annotation and experimental data, thereby facilitating research tasks such as predicting gene function and pathway construction. Cytoscape provides core functionality to load, visualize, search, filter and save networks, and hundreds of Apps extend this functionality to address specific research needs. The latest generation of Cytoscape (version 3.0 and later) has substantial improvements in function, user interface and performance relative to previous versions. This protocol aims to jump-start new users with specific protocols for basic Cytoscape functions, such as installing Cytoscape and Cytoscape Apps, loading data, visualizing and navigating the network, visualizing network associated data (attributes) and identifying clusters. It also highlights new features that benefit experienced users. PMID:25199793

  8. Neural Integration in Body Perception.

    PubMed

    Ramsey, Richard

    2018-06-19

    The perception of other people is instrumental in guiding social interactions. For example, the appearance of the human body cues a wide range of inferences regarding sex, age, health, and personality, as well as emotional state and intentions, which influence social behavior. To date, most neuroscience research on body perception has aimed to characterize the functional contribution of segregated patches of cortex in the ventral visual stream. In light of the growing prominence of network architectures in neuroscience, the current article reviews neuroimaging studies that measure functional integration between different brain regions during body perception. The review demonstrates that body perception is not restricted to processing in the ventral visual stream but instead reflects a functional alliance between the ventral visual stream and extended neural systems associated with action perception, executive functions, and theory of mind. Overall, these findings demonstrate how body percepts are constructed through interactions in distributed brain networks and underscore that functional segregation and integration should be considered together when formulating neurocognitive theories of body perception. Insight from such an updated model of body perception generalizes to inform the organizational structure of social perception and cognition more generally and also informs disorders of body image, such as anorexia nervosa, which may rely on atypical integration of body-related information.

  9. UPIOM: a new tool of MFA and its application to the flow of iron and steel associated with car production.

    PubMed

    Nakamura, Shinichiro; Kondo, Yasushi; Matsubae, Kazuyo; Nakajima, Kenichi; Nagasaka, Tetsuya

    2011-02-01

    Identification of the flow of materials and substances associated with a product system provides useful information for Life Cycle Analysis (LCA), and contributes to extending the scope of complementarity between LCA and Materials Flow Analysis/Substances Flow Analysis (MFA/SFA), the two major tools of industrial ecology. This paper proposes a new methodology based on input-output analysis for identifying the physical input-output flow of individual materials that is associated with the production of a unit of a given product, the unit physical input-output by materials (UPIOM). While the Sankey diagram has been a standard tool for the visualization of MFA/SFA, with an increase in the complexity of the flows under consideration, which will be the case when economy-wide intersectoral flows of materials are involved, the Sankey diagram may become too complex for effective visualization. An alternative way to visually represent material flows is proposed which makes use of triangulation of the flow matrix based on degrees of fabrication. The proposed methodology is applied to the flow of pig iron and iron and steel scrap that is associated with the production of a passenger car in Japan. Its usefulness in identifying a specific MFA pattern from the original IO table is demonstrated.
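
    A toy Leontief-style calculation of the kind that underlies tracing material flows induced by one unit of a final product is sketched below; the coefficient matrix, sector names, and iron intensities are invented for illustration and this is not the paper's UPIOM formalism or data.

```python
import numpy as np

sectors = ["iron & steel", "parts", "car assembly"]
A = np.array([[0.05, 0.20, 0.15],      # monetary input coefficients a_ij:
              [0.10, 0.05, 0.30],      # input from sector i per unit output of j
              [0.00, 0.00, 0.00]])
f = np.array([0.0, 0.0, 1.0])          # final demand: one unit of "car assembly"

x = np.linalg.solve(np.eye(3) - A, f)  # total sector outputs induced by f
Z = A * x                              # inter-sector flows z_ij = a_ij * x_j

iron_content = np.array([1.2, 0.4, 0.0])    # tonnes of iron per unit output (toy)
iron_flows = Z * iron_content[:, None]      # iron embodied in each flow i -> j

for i, s in enumerate(sectors):
    print(f"{s:14s} output {x[i]:.3f}, iron delivered {iron_flows[i].sum():.3f} t")
```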

  10. Changes in glance behaviour when using a visual eco-driving system - A field study.

    PubMed

    Ahlstrom, Christer; Kircher, Katja

    2017-01-01

    While in-vehicle eco-driving support systems have the potential to reduce greenhouse gas emissions and save fuel, they may also distract drivers, especially if the system makes use of a visual interface. The objective of this study is to investigate the visual behaviour of drivers interacting with such a system, implemented on a five-inch screen mounted above the middle console. Ten drivers participated in a real-world, on-road driving study where they drove a route nine times (2 pre-baseline drives, 5 treatment drives, 2 post-baseline drives). The route was 96 km long and consisted of rural roads, urban roads and a dual-lane motorway. The results show that drivers look at the system for 5-8% of the time, depending on road type, with a glance duration of about 0.6 s, and with 0.05% long glances (>2 s) per kilometre. These figures are comparable to what was found for glances to the speedometer in this study. Glance behaviour away from the windscreen is slightly increased in treatment as compared to pre- and post-baseline, mirror glances decreased in treatment and post-baseline compared to pre-baseline, and speedometer glances increased compared to pre-baseline. The eco-driving support system provided continuous information interspersed with additional advice pop-ups (announced by a beep) and feedback pop-ups (no auditory cue). About 20% of sound-initiated advice pop-ups were disregarded, and the remaining cases were usually looked at within the first two seconds. About 40% of the feedback pop-ups were disregarded. The number of glances to the system immediately before the onset of a pop-up was clearly higher for feedback than for advice. All in all, the eco-driving support system under investigation is not likely to have a strong negative impact on glance behaviour. However, there is room for improvement. We recommend that eco-driving information be integrated with the speedometer, that optional activation of sound alerts for intermittent information be made available, and that the pop-up duration be extended to facilitate self-regulation of information intake. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Code C# for chaos analysis of relativistic many-body systems

    NASA Astrophysics Data System (ADS)

    Grossu, I. V.; Besliu, C.; Jipa, Al.; Bordeianu, C. C.; Felea, D.; Stan, E.; Esanu, T.

    2010-08-01

    This work presents a new Microsoft Visual C# .NET code library, conceived as a general object oriented solution for chaos analysis of three-dimensional, relativistic many-body systems. In this context, we implemented the Lyapunov exponent and the “fragmentation level” (defined using graph theory and the Shannon entropy). Inspired by existing studies on billiard nuclear models and clusters of galaxies, we tried to apply the virial theorem for a simplified many-body system composed of nucleons. A possible application of the “virial coefficient” to the stability analysis of chaotic systems is also discussed.
    Catalogue identifier: AEGH_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGH_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 30 053
    No. of bytes in distributed program, including test data, etc.: 801 258
    Distribution format: tar.gz
    Programming language: Visual C# .NET 2005
    Computer: PC
    Operating system: .Net Framework 2.0 running on MS Windows
    Has the code been vectorized or parallelized?: Each many-body system is simulated on a separate execution thread
    RAM: 128 Megabytes
    Classification: 6.2, 6.5
    External routines: .Net Framework 2.0 Library
    Nature of problem: Chaos analysis of three-dimensional, relativistic many-body systems.
    Solution method: Second order Runge-Kutta algorithm for simulating relativistic many-body systems. Object oriented solution, easy to reuse, extend and customize, in any development environment which accepts .Net assemblies or COM components. Implementation of: Lyapunov exponent, “fragmentation level”, “average system radius”, “virial coefficient”, and energy conservation precision test.
    Additional comments: Easy copy/paste based deployment method.
    Running time: Quadratic complexity.
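
    An illustrative Benettin-style estimate of the largest Lyapunov exponent with a second-order Runge-Kutta (midpoint) integrator is given below, written in Python rather than C# and using the Lorenz system as a stand-in for the relativistic many-body equations:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = s
    return np.array([sigma*(y - x), x*(rho - z) - y, x*y - beta*z])

def rk2_step(f, s, dt):
    k1 = f(s)
    return s + dt * f(s + 0.5*dt*k1)        # midpoint (second-order) step

def lyapunov(f, s0, dt=0.01, steps=50000, d0=1e-8):
    s, sp = s0, s0 + np.array([d0, 0.0, 0.0])   # reference + perturbed trajectory
    acc = 0.0
    for _ in range(steps):
        s, sp = rk2_step(f, s, dt), rk2_step(f, sp, dt)
        d = np.linalg.norm(sp - s)
        acc += np.log(d / d0)
        sp = s + (sp - s) * (d0 / d)            # renormalize the separation
    return acc / (steps * dt)

print(lyapunov(lorenz, np.array([1.0, 1.0, 1.0])))  # ~0.9 for these parameters
```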

  12. Sustained Attention in Real Classroom Settings: An EEG Study.

    PubMed

    Ko, Li-Wei; Komarov, Oleksii; Hairston, W David; Jung, Tzyy-Ping; Lin, Chin-Teng

    2017-01-01

    Sustained attention is a process that enables the maintenance of response persistence and continuous effort over extended periods of time. Performing attention-related tasks in real life involves the need to ignore a variety of distractions and inhibit attention shifts to irrelevant activities. This study investigates electroencephalography (EEG) spectral changes during a sustained attention task within a real classroom environment. Eighteen healthy students were instructed to recognize as fast as possible special visual targets that were displayed during regular university lectures. Sorting their EEG spectra with respect to response times, which indicated the level of visual alertness to randomly introduced visual stimuli, revealed significant changes in the brain oscillation patterns. The results of power-frequency analysis demonstrated a relationship between variations in the EEG spectral dynamics and impaired performance in the sustained attention task. Across subjects and sessions, prolongation of the response time was preceded by an increase in the delta and theta EEG powers over the occipital region, and decrease in the beta power over the occipital and temporal regions. Meanwhile, implementation of the complex attention task paradigm into a real-world classroom setting makes it possible to investigate specific mutual links between brain activities and factors that cause impaired behavioral performance, such as development and manifestation of classroom mental fatigue. The findings of the study set a basis for developing a system capable of estimating the level of visual attention during real classroom activities by monitoring changes in the EEG spectra.
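
    A sketch of the kind of band-power computation that underlies such spectral analyses (Welch periodogram per channel, then mean power in the delta, theta, and beta bands) is shown below; the sampling rate, band limits, and data are placeholders, not the study's recordings.

```python
import numpy as np
from scipy.signal import welch

fs = 250.0                                  # sampling rate in Hz (assumed)
eeg = np.random.randn(4, int(30 * fs))      # 4 channels, 30 s of placeholder data

bands = {"delta": (1, 4), "theta": (4, 8), "beta": (13, 30)}
f, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)   # PSD per channel

for name, (lo, hi) in bands.items():
    mask = (f >= lo) & (f < hi)
    power = np.trapz(psd[:, mask], f[mask], axis=-1)       # per-channel band power
    print(name, power)
```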

  13. Sustained Attention in Real Classroom Settings: An EEG Study

    PubMed Central

    Ko, Li-Wei; Komarov, Oleksii; Hairston, W. David; Jung, Tzyy-Ping; Lin, Chin-Teng

    2017-01-01

    Sustained attention is a process that enables the maintenance of response persistence and continuous effort over extended periods of time. Performing attention-related tasks in real life involves the need to ignore a variety of distractions and inhibit attention shifts to irrelevant activities. This study investigates electroencephalography (EEG) spectral changes during a sustained attention task within a real classroom environment. Eighteen healthy students were instructed to recognize as fast as possible special visual targets that were displayed during regular university lectures. Sorting their EEG spectra with respect to response times, which indicated the level of visual alertness to randomly introduced visual stimuli, revealed significant changes in the brain oscillation patterns. The results of power-frequency analysis demonstrated a relationship between variations in the EEG spectral dynamics and impaired performance in the sustained attention task. Across subjects and sessions, prolongation of the response time was preceded by an increase in the delta and theta EEG powers over the occipital region, and decrease in the beta power over the occipital and temporal regions. Meanwhile, implementation of the complex attention task paradigm into a real-world classroom setting makes it possible to investigate specific mutual links between brain activities and factors that cause impaired behavioral performance, such as development and manifestation of classroom mental fatigue. The findings of the study set a basis for developing a system capable of estimating the level of visual attention during real classroom activities by monitoring changes in the EEG spectra. PMID:28824396

  14. An integrated domain specific language for post-processing and visualizing electrophysiological signals in Java.

    PubMed

    Strasser, T; Peters, T; Jagle, H; Zrenner, E; Wilke, R

    2010-01-01

    Electrophysiology of vision - especially the electroretinogram (ERG) - is used as a non-invasive way of functionally testing the visual system. The ERG is a combined electrical response generated by neural and non-neuronal cells in the retina in response to light stimulation. This response can be recorded and used for the diagnosis of numerous disorders. For both clinical practice and clinical trials it is important to process these signals in an accurate and fast way and to provide the results as structured, consistent reports. Therefore, we developed a freely available and open-source framework in Java (http://www.eye.uni-tuebingen.de/project/idsI4sigproc). The framework is focused on easy integration with existing applications. By leveraging well-established software patterns such as pipes-and-filters and fluent interfaces, as well as by designing the application programming interface (API) as an integrated domain specific language (DSL), the overall framework provides a smooth learning curve. Additionally, it already contains several processing methods and visualization features and can be extended easily by implementing the provided interfaces. In this way, not only can new processing methods be added but the framework can also be adopted for other areas of signal processing. This article describes in detail the structure and implementation of the framework and demonstrates its application through the software package used in clinical practice and clinical trials at the University Eye Hospital Tuebingen, one of the largest departments in the field of visual electrophysiology in Europe.
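
    A toy fluent pipes-and-filters pipeline illustrating the pattern the framework builds on is sketched below; the real framework is Java, and the class and method names here are invented purely for illustration.

```python
class SignalPipeline:
    """Each filter returns self, so processing steps chain fluently."""

    def __init__(self, samples):
        self._samples = list(samples)

    def detrend(self):
        mean = sum(self._samples) / len(self._samples)
        self._samples = [s - mean for s in self._samples]
        return self

    def smooth(self, width=3):
        s = self._samples
        self._samples = [sum(s[max(0, i - width + 1):i + 1]) /
                         len(s[max(0, i - width + 1):i + 1]) for i in range(len(s))]
        return self

    def result(self):
        return self._samples

erg = SignalPipeline([0.1, 0.4, 5.0, 0.3, 0.2]).detrend().smooth().result()
print(erg)
```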

  15. Escher: A Web Application for Building, Sharing, and Embedding Data-Rich Visualizations of Biological Pathways

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Zachary A.; Drager, Andreas; Ebrahim, Ali

    Escher is a web application for visualizing data on biological pathways. Three key features make Escher a uniquely effective tool for pathway visualization. First, users can rapidly design new pathway maps. Escher provides pathway suggestions based on user data and genome-scale models, so users can draw pathways in a semi-automated way. Second, users can visualize data related to genes or proteins on the associated reactions and pathways, using rules that define which enzymes catalyze each reaction. Thus, users can identify trends in common genomic data types (e.g. RNA-Seq, proteomics, ChIP)—in conjunction with metabolite- and reaction-oriented data types (e.g. metabolomics, fluxomics). Third, Escher harnesses the strengths of web technologies (SVG, D3, developer tools) so that visualizations can be rapidly adapted, extended, shared, and embedded. This paper provides examples of each of these features and explains how the development approach used for Escher can be used to guide the development of future visualization tools.

  16. Escher: A Web Application for Building, Sharing, and Embedding Data-Rich Visualizations of Biological Pathways

    PubMed Central

    King, Zachary A.; Dräger, Andreas; Ebrahim, Ali; Sonnenschein, Nikolaus; Lewis, Nathan E.; Palsson, Bernhard O.

    2015-01-01

    Escher is a web application for visualizing data on biological pathways. Three key features make Escher a uniquely effective tool for pathway visualization. First, users can rapidly design new pathway maps. Escher provides pathway suggestions based on user data and genome-scale models, so users can draw pathways in a semi-automated way. Second, users can visualize data related to genes or proteins on the associated reactions and pathways, using rules that define which enzymes catalyze each reaction. Thus, users can identify trends in common genomic data types (e.g. RNA-Seq, proteomics, ChIP)—in conjunction with metabolite- and reaction-oriented data types (e.g. metabolomics, fluxomics). Third, Escher harnesses the strengths of web technologies (SVG, D3, developer tools) so that visualizations can be rapidly adapted, extended, shared, and embedded. This paper provides examples of each of these features and explains how the development approach used for Escher can be used to guide the development of future visualization tools. PMID:26313928

  17. Escher: A Web Application for Building, Sharing, and Embedding Data-Rich Visualizations of Biological Pathways

    DOE PAGES

    King, Zachary A.; Drager, Andreas; Ebrahim, Ali; ...

    2015-08-27

    Escher is a web application for visualizing data on biological pathways. Three key features make Escher a uniquely effective tool for pathway visualization. First, users can rapidly design new pathway maps. Escher provides pathway suggestions based on user data and genome-scale models, so users can draw pathways in a semi-automated way. Second, users can visualize data related to genes or proteins on the associated reactions and pathways, using rules that define which enzymes catalyze each reaction. Thus, users can identify trends in common genomic data types (e.g. RNA-Seq, proteomics, ChIP)—in conjunction with metabolite- and reaction-oriented data types (e.g. metabolomics, fluxomics). Third, Escher harnesses the strengths of web technologies (SVG, D3, developer tools) so that visualizations can be rapidly adapted, extended, shared, and embedded. This paper provides examples of each of these features and explains how the development approach used for Escher can be used to guide the development of future visualization tools.

  18. Manual control of yaw motion with combined visual and vestibular cues

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.; Young, L. R.

    1977-01-01

    Measurements are made of manual control performance in the closed-loop task of nulling perceived self-rotation velocity about an earth-vertical axis. Self-velocity estimation was modelled as a function of the simultaneous presentation of vestibular and peripheral visual field motion cues. Based on measured low-frequency operator behavior in three visual field environments, a parallel channel linear model is proposed which has separate visual and vestibular pathways summing in a complementary manner. A correction to the frequency responses is provided by a separate measurement of manual control performance in an analogous visual pursuit nulling task. The resulting dual-input describing function for motion perception dependence on combined cue presentation supports the complementary model, in which vestibular cues dominate sensation at frequencies above 0.05 Hz. The describing function model is extended by the proposal of a non-linear cue conflict model, in which cue weighting depends on the level of agreement between visual and vestibular cues.
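
    A discrete-time sketch of the complementary-cue idea is given below: the visual-field cue is low-pass filtered, the vestibular cue high-pass filtered with the same pole, and the two are summed; with these first-order filters the two channels sum exactly to unity, so consistent cues pass through undistorted. The filter order and the placement of the ~0.05 Hz crossover are illustrative, not the paper's fitted model.

```python
import numpy as np

fs = 100.0                               # sample rate, Hz
fc = 0.05                                # crossover frequency, Hz
alpha = 1.0 / (1.0 + 2.0*np.pi*fc/fs)    # shared one-pole coefficient

def perceived_yaw_rate(vestibular, visual):
    """Combine yaw-rate cues sampled at fs into a perceived yaw-rate estimate."""
    est = np.zeros(len(vestibular))
    lp, hp, prev_vest = 0.0, 0.0, vestibular[0]
    for k in range(len(vestibular)):
        lp = alpha*lp + (1.0 - alpha)*visual[k]        # low-pass: visual channel
        hp = alpha*(hp + vestibular[k] - prev_vest)    # high-pass: vestibular channel
        prev_vest = vestibular[k]
        est[k] = lp + hp                               # complementary sum
    return est

t = np.arange(0.0, 120.0, 1.0/fs)
true_rate = np.sin(2*np.pi*0.01*t) + np.sin(2*np.pi*0.5*t)
err = perceived_yaw_rate(true_rate, true_rate)[5000:] - true_rate[5000:]
print(np.max(np.abs(err)))               # ~0 once the filter transient has decayed
```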

  19. [Review of visual display system in flight simulator].

    PubMed

    Xie, Guang-hui; Wei, Shao-ning

    2003-06-01

    Visual display system is the key part and plays a very important role in flight simulators and flight training devices. The developing history of visual display system is recalled and the principle and characters of some visual display systems including collimated display systems and back-projected collimated display systems are described. The future directions of visual display systems are analyzed.

  20. Towards systems neuroscience of ADHD: A meta-analysis of 55 fMRI studies

    PubMed Central

    Cortese, Samuele; Kelly, Clare; Chabernaud, Camille; Proal, Erika; Di Martino, Adriana; Milham, Michael P.; Castellanos, F. Xavier

    2013-01-01

    Objective To perform a comprehensive meta-analysis of task-based functional MRI studies of Attention-Deficit/Hyperactivity Disorder (ADHD). Method PubMed, Ovid, EMBASE, Web of Science, ERIC, CINHAL, and NeuroSynth were searched for studies published through 06/30/2011. Significant differences in activation of brain regions between individuals with ADHD and comparisons were detected using activation likelihood estimation meta-analysis (p<0.05, corrected). Dysfunctional regions in ADHD were related to seven reference neuronal systems. We performed a set of meta-analyses focused on age groups (children; adults), clinical characteristics (history of stimulant treatment; presence of psychiatric comorbidities), and specific neuropsychological tasks (inhibition; working memory; vigilance/attention). Results Fifty-five studies were included (39 in children, 16 in adults). In children, hypoactivation in ADHD vs. comparisons was found mostly in systems involved in executive functions (frontoparietal network) and attention (ventral attentional network). Significant hyperactivation in ADHD vs. comparisons was observed predominantly within the default, ventral attention, and somatomotor networks. In adults, ADHD-related hypoactivation was predominant in the frontoparietal system, while ADHD-related hyperactivation was present in the visual, dorsal attention, and default networks. Significant ADHD-related dysfunction largely reflected task features and was detected even in the absence of comorbid mental disorders or history of stimulant treatment. Conclusions A growing literature provides evidence of ADHD-related dysfunction within multiple neuronal systems involved in higher-level cognitive functions but also in sensorimotor processes, including the visual system, and in the default network. This meta-analytic evidence extends early models of ADHD pathophysiology focused on prefrontal-striatal circuits. PMID:22983386

  1. Author’s response: A universal approach to modeling visual word recognition and reading: not only possible, but also inevitable.

    PubMed

    Frost, Ram

    2012-10-01

    I have argued that orthographic processing cannot be understood and modeled without considering the manner in which orthographic structure represents phonological, semantic, and morphological information in a given writing system. A reading theory, therefore, must be a theory of the interaction of the reader with his/her linguistic environment. This outlines a novel approach to studying and modeling visual word recognition, an approach that focuses on the common cognitive principles involved in processing printed words across different writing systems. These claims were challenged by several commentaries that contested the merits of my general theoretical agenda, the relevance of the evolution of writing systems, and the plausibility of finding commonalities in reading across orthographies. Other commentaries extended the scope of the debate by bringing into the discussion additional perspectives. My response addresses all these issues. By considering the constraints of neurobiology on modeling reading, developmental data, and a large scope of cross-linguistic evidence, I argue that front-end implementations of orthographic processing that do not stem from a comprehensive theory of the complex information conveyed by writing systems do not present a viable approach for understanding reading. The common principles by which writing systems have evolved to represent orthographic, phonological, and semantic information in a language reveal the critical distributional characteristics of orthographic structure that govern reading behavior. Models of reading should thus be learning models, primarily constrained by cross-linguistic developmental evidence that describes how the statistical properties of writing systems shape the characteristics of orthographic processing. When this approach is adopted, a universal model of reading is possible.

  2. Celeris: A GPU-accelerated open source software with a Boussinesq-type wave solver for real-time interactive simulation and visualization

    NASA Astrophysics Data System (ADS)

    Tavakkol, Sasan; Lynett, Patrick

    2017-08-01

    In this paper, we introduce an interactive coastal wave simulation and visualization software package called Celeris. Celeris is open-source software that needs minimal preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications, and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.
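
    A heavily reduced sketch of the finite-volume core of such a solver is shown below: a 1D shallow-water step with a Lax-Friedrichs flux, omitting the Boussinesq dispersive terms, moving-shoreline treatment, and GPU implementation that Celeris actually provides.

```python
import numpy as np

g = 9.81

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu*u + 0.5*g*h*h])        # shallow-water fluxes

def step(h, hu, dx, dt):
    U = np.array([h, hu])                          # conserved variables, shape (2, N)
    F = flux(h, hu)
    # Lax-Friedrichs numerical flux at the interfaces i+1/2
    Fi = 0.5*(F[:, :-1] + F[:, 1:]) - 0.5*dx/dt*(U[:, 1:] - U[:, :-1])
    Unew = U.copy()
    Unew[:, 1:-1] -= dt/dx * (Fi[:, 1:] - Fi[:, :-1])   # conservative update
    return Unew[0], Unew[1]

# Small Gaussian hump of water; boundary cells are simply left unchanged.
x = np.linspace(0.0, 100.0, 201); dx = x[1] - x[0]
h = 1.0 + 0.1*np.exp(-0.05*(x - 50.0)**2); hu = np.zeros_like(x)
for _ in range(200):
    h, hu = step(h, hu, dx, dt=0.02)               # CFL well below 1 for these values
```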

  3. FERN - a Java framework for stochastic simulation and evaluation of reaction networks.

    PubMed

    Erhard, Florian; Friedel, Caroline C; Zimmer, Ralf

    2008-08-29

    Stochastic simulation can be used to illustrate the development of biological systems over time and the stochastic nature of these processes. Currently available programs for stochastic simulation, however, are limited in that they either (a) do not provide the most efficient simulation algorithms and are difficult to extend, (b) cannot be easily integrated into other applications, or (c) do not allow the user to monitor and intervene in the simulation process in an easy and intuitive way. Thus, in order to use stochastic simulation in innovative high-level modeling and analysis approaches, more flexible tools are necessary. In this article, we present FERN (Framework for Evaluation of Reaction Networks), a Java framework for the efficient simulation of chemical reaction networks. FERN is subdivided into three layers for network representation, simulation, and visualization of the simulation results, each of which can be easily extended. It provides efficient and accurate state-of-the-art stochastic simulation algorithms for well-mixed chemical systems and a powerful observer system, which makes it possible to track and control the simulation progress on every level. To illustrate how FERN can be easily integrated into other systems biology applications, plugins to Cytoscape and CellDesigner are included. These plugins make it possible to run simulations and to observe the simulation progress in a reaction network in real time from within the Cytoscape or CellDesigner environment. FERN addresses shortcomings of currently available stochastic simulation programs in several ways. First, it provides a broad range of efficient and accurate algorithms both for exact and approximate stochastic simulation and a simple interface for extending to new algorithms. FERN's implementations are considerably faster than the C implementations of gillespie2 or the Java implementations of ISBJava. Second, it can be used in a straightforward way both as a stand-alone program and within new systems biology applications. Finally, complex scenarios requiring intervention during the simulation progress can be modelled easily with FERN.
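
    A minimal direct-method Gillespie simulation of a reversible reaction A <-> B is shown below as a standalone illustration of the kind of exact stochastic simulation algorithm FERN implements; FERN itself is Java, and this Python sketch is not its API.

```python
import random

def gillespie(a0=100, b0=0, k1=1.0, k2=0.5, t_end=10.0):
    t, A, B = 0.0, a0, b0
    trace = [(t, A, B)]
    while t < t_end:
        rates = [k1 * A, k2 * B]           # propensities of A->B and B->A
        total = sum(rates)
        if total == 0.0:
            break
        t += random.expovariate(total)     # exponential waiting time to next event
        if random.random() * total < rates[0]:
            A, B = A - 1, B + 1            # fire A -> B
        else:
            A, B = A + 1, B - 1            # fire B -> A
        trace.append((t, A, B))
    return trace

print(gillespie()[-1])                     # final (time, A, B) state
```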

  4. An Information Management System for CHIKYU Operation and its Future

    NASA Astrophysics Data System (ADS)

    Kuramoto, S.; Matsuda, S.; Ito, H.

    2005-12-01

    CDEX (Center for Deep Earth Exploration, JAMSTEC) is the implementing organization for the riser drilling vessel CHIKYU ("Earth"). CHIKYU is capable of producing a wide variety of data: core measurement data, logging data, mud logging data, cuttings data, borehole monitoring data, etc. CDEX also conducts site surveys before and after cruises for drilling safety and publication. It is critical that these diverse data be managed using a unified, coherent method, and that they be organized and provided to users in an intuitive, clearly understandable way that reflects the aims and underlying philosophies of the IODP and JAMSTEC. It is crucial that these data are accessible to users through an integrated interface in which all data formats, management tools, and procedures are standardized. Meeting these goals will assure total usability for scientists, administrators, and the public, from data creation to uploading and cataloging, to end use and publication. CDEX is developing an integrated information management system called "SIO7" (Scientific Information from 7 Oceans) for CHIKYU operation, and intends to extend it to accommodate various information-handling systems in the geosciences. SIO7 is composed of two major systems, J-CORES (JAMSTEC Core Systematics) and DEXIS (Deep Earth Exploration Information System) (see http://sio7.jamstec.go.jp/ for details). J-CORES is a database system designed to manage all aspects of core data. The system is modeled on the JANUS system developed by and for ODP, but implements an extended, somewhat modified data model. The functions that support onboard and real-time data input operations have also been strengthened. A variety of data visualization and visual core description functions have been added, and data loading from those applications has been automated, making the system as a whole both powerful and easy to use. DEXIS, on the other hand, is built by combining and integrating existing off-the-shelf application software that has been tuned and optimized. DEXIS comprises two main functions: data browsing and data interpretation. These functions are available over the Internet at any time and from anywhere. Most standard data formats are accepted for site survey and logging data, and GIS functions are included. We will coordinate more data items in SIO7 with other JAMSTEC data that are archived and served by different systems. We will also try to provide data tools and/or applications to support international colleagues working in the geosciences. J-CORES is an open-source application, and we encourage users to learn how to use the tools.

  5. aGEM: an integrative system for analyzing spatial-temporal gene-expression information

    PubMed Central

    Jiménez-Lozano, Natalia; Segura, Joan; Macías, José Ramón; Vega, Juanjo; Carazo, José María

    2009-01-01

    Motivation: The work presented here describes the ‘anatomical Gene-Expression Mapping (aGEM)’ Platform, a development conceived to integrate phenotypic information with the spatial and temporal distributions of genes expressed in the mouse. The aGEM Platform has been built by extending the Distributed Annotation System (DAS) protocol, which was originally designed to share genome annotations over the WWW. DAS is a client-server system in which a single client integrates information from multiple distributed servers. Results: The aGEM Platform provides information to answer three main questions. (i) Which genes are expressed in a given mouse anatomical component? (ii) In which mouse anatomical structures are a given gene or set of genes expressed? And (iii) is there any correlation among these findings? Currently, this Platform includes several well-known mouse resources (EMAGE, GXD and GENSAT), hosting gene-expression data mostly obtained from in situ techniques together with a broad set of image-derived annotations. Availability: The Platform is optimized for Firefox 3.0 and it is accessed through a friendly and intuitive display: http://agem.cnb.csic.es Contact: natalia@cnb.csic.es Supplementary information: Supplementary data are available at http://bioweb.cnb.csic.es/VisualOmics/aGEM/home.html and http://bioweb.cnb.csic.es/VisualOmics/index_VO.html and Bioinformatics online. PMID:19592395

  6. Behavioral consequences of dopamine deficiency in the Drosophila central nervous system

    PubMed Central

    Riemensperger, Thomas; Isabel, Guillaume; Coulom, Hélène; Neuser, Kirsa; Seugnet, Laurent; Kume, Kazuhiko; Iché-Torres, Magali; Cassar, Marlène; Strauss, Roland; Preat, Thomas; Hirsh, Jay; Birman, Serge

    2011-01-01

    The neuromodulatory function of dopamine (DA) is an inherent feature of nervous systems of all animals. To learn more about the function of neural DA in Drosophila, we generated mutant flies that lack tyrosine hydroxylase, and thus DA biosynthesis, selectively in the nervous system. We found that DA is absent or below detection limits in the adult brain of these flies. Despite this, they have a lifespan similar to WT flies. These mutants show reduced activity, extended sleep time, locomotor deficits that increase with age, and they are hypophagic. Whereas odor and electrical shock avoidance are not affected, aversive olfactory learning is abolished. Instead, DA-deficient flies have an apparently “masochistic” tendency to prefer the shock-associated odor 2 h after conditioning. Similarly, sugar preference is absent, whereas sugar stimulation of foreleg taste neurons induces normal proboscis extension. Feeding the DA precursor l-DOPA to adults substantially rescues the learning deficit as well as other impaired behaviors that were tested. DA-deficient flies are also defective in positive phototaxis, without alteration in visual perception and optomotor response. Surprisingly, visual tracking is largely maintained, and these mutants still possess an efficient spatial orientation memory. Our findings show that flies can perform complex brain functions in the absence of neural DA, whereas specific behaviors involving, in particular, arousal and choice require normal levels of this neuromodulator. PMID:21187381

  7. Extending Beyond Qualitative Interviewing to Illuminate the Tacit Nature of Everyday Occupation: Occupational Mapping and Participatory Occupation Methods.

    PubMed

    Huot, Suzanne; Rudman, Debbie Laliberte

    2015-07-01

    The study of human occupation requires a variety of methods to fully elucidate its complex, multifaceted nature. Although qualitative approaches have commonly been used within occupational therapy and occupational science, we contend that such qualitative research must extend beyond the sole use of interviews. Drawing on qualitative methodological literature, we discuss the limits of interview methods and outline other methods, particularly visual methods, as productive means to enhance qualitative research. We then provide an overview of our critical ethnographic study that used narrative, visual, and observational methods to explore the occupational transitions experienced by immigrants to Canada. We describe our use of occupational mapping and participatory occupation methods and the contributions of these combined methods. We conclude that adopting a variety of methods can enable a deeper understanding of the tacit nature of everyday occupation, and is key to advancing knowledge regarding occupation and to informing occupational therapy practice.

  8. Postural response to predictable and nonpredictable visual flow in children and adults.

    PubMed

    Schmuckler, Mark A

    2017-11-01

    Children's (3-5 years) and adults' postural reactions to different conditions of visual flow information varying in frequency content were examined using a moving room apparatus. Both groups experienced four conditions of visual input: low-frequency (0.20 Hz) visual oscillations, high-frequency (0.60 Hz) oscillations, multifrequency nonpredictable visual input, and no imposed visual information. Analyses of the frequency content of anterior-posterior (AP) sway revealed that postural reactions to the single-frequency conditions replicated previous findings: children were responsive to low- and high-frequency oscillations, whereas adults were responsive to low-frequency information. Extending previous work, AP sway in response to the nonpredictable condition revealed that both groups were responsive to the different components contained in the multifrequency visual information, although adults retained their frequency selectivity to low-frequency versus high-frequency content. These findings are discussed in relation to work examining feedback versus feedforward control of posture, and the reweighting of sensory inputs for postural control, as a function of development and task context. Copyright © 2017 Elsevier Inc. All rights reserved.
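    The abstract does not describe the analysis code; as a generic illustration only, the sketch below quantifies the power of an AP sway record at the imposed oscillation frequencies via an FFT (the sampling rate, duration and signal are invented, not the study's data).

    ```python
    import numpy as np

    def sway_power_at(frequencies_hz, ap_sway, fs):
        """Spectral power of an anterior-posterior sway record at given frequencies.

        ap_sway -- 1-D array of AP positions sampled at fs Hz
        Returns a dict mapping each requested frequency to power in the nearest FFT bin.
        """
        n = len(ap_sway)
        spectrum = np.abs(np.fft.rfft(ap_sway - np.mean(ap_sway))) ** 2 / n
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        return {f: spectrum[np.argmin(np.abs(freqs - f))] for f in frequencies_hz}

    # Hypothetical example: 60 s of sway sampled at 50 Hz, driven mostly at 0.20 Hz
    fs = 50.0
    t = np.arange(0, 60, 1.0 / fs)
    sway = 0.8 * np.sin(2 * np.pi * 0.20 * t) + 0.1 * np.random.randn(t.size)
    print(sway_power_at([0.20, 0.60], sway, fs))
    ```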

  9. The cognitive science of visual-spatial displays: implications for design.

    PubMed

    Hegarty, Mary

    2011-07-01

    This paper reviews cognitive science perspectives on the design of visual-spatial displays and introduces the other papers in this topic. It begins by classifying different types of visual-spatial displays, followed by a discussion of ways in which visual-spatial displays augment cognition and an overview of the perceptual and cognitive processes involved in using displays. The paper then argues for the importance of cognitive science methods to the design of visual displays and reviews some of the main principles of display design that have emerged from these approaches to date. Cognitive scientists have had good success in characterizing the performance of well-defined tasks with relatively simple visual displays, but many challenges remain in understanding the use of complex displays for ill-defined tasks. Current research exemplified by the papers in this topic extends empirical approaches to new displays and domains, informs the development of general principles of graphic design, and addresses current challenges in display design raised by the recent explosion in availability of complex data sets and new technologies for visualizing and interacting with these data. Copyright © 2011 Cognitive Science Society, Inc.

  10. Visual Associative Learning in Restrained Honey Bees with Intact Antennae

    PubMed Central

    Dobrin, Scott E.; Fahrbach, Susan E.

    2012-01-01

    A restrained honey bee can be trained to extend its proboscis in response to the pairing of an odor with a sucrose reward, a form of olfactory associative learning referred to as the proboscis extension response (PER). Although the ability of flying honey bees to respond to visual cues is well-established, associative visual learning in restrained honey bees has been challenging to demonstrate. Those few groups that have documented vision-based PER have reported that removing the antennae prior to training is a prerequisite for learning. Here we report, for a simple visual learning task, the first successful performance by restrained honey bees with intact antennae. Honey bee foragers were trained on a differential visual association task by pairing the presentation of a blue light with a sucrose reward and leaving the presentation of a green light unrewarded. A negative correlation was found between age of foragers and their performance in the visual PER task. Using the adaptations to the traditional PER task outlined here, future studies can exploit pharmacological and physiological techniques to explore the neural circuit basis of visual learning in the honey bee. PMID:22701575

  11. Fire protection for launch facilities using machine vision fire detection

    NASA Astrophysics Data System (ADS)

    Schwartz, Douglas B.

    1993-02-01

    Fire protection of critical space assets, including launch and fueling facilities and manned flight hardware, demands automatic sensors for continuous monitoring, and in certain high-threat areas, fast-reacting automatic suppression systems. Perhaps the most essential characteristic for these fire detection and suppression systems is high reliability; in other words, fire detectors should alarm only on actual fires and not be falsely activated by extraneous sources. Existing types of fire detectors have been greatly improved in the past decade; however, fundamental limitations of their method of operation leave open a significant possibility of false alarms and restrict their usefulness. At the Civil Engineering Laboratory at Tyndall Air Force Base in Florida, a new type of fire detector is under development which 'sees' a fire visually, like a human being, and makes a reliable decision based on known visual characteristics of flames. Hardware prototypes of the Machine Vision (MV) Fire Detection System have undergone live fire tests and demonstrated extremely high accuracy in discriminating actual fires from false alarm sources. In fact, this technology promises to virtually eliminate false activations. This detector could be used to monitor fueling facilities, launch towers, clean rooms, and other high-value and high-risk areas. Applications can extend to space station and in-flight shuttle operations as well; fiber optics and remote camera heads enable the system to see around obstructed areas and crew compartments. The capability of the technology to distinguish fires means that fire detection can be provided even during maintenance operations, such as welding.

  12. Fire protection for launch facilities using machine vision fire detection

    NASA Technical Reports Server (NTRS)

    Schwartz, Douglas B.

    1993-01-01

    Fire protection of critical space assets, including launch and fueling facilities and manned flight hardware, demands automatic sensors for continuous monitoring, and in certain high-threat areas, fast-reacting automatic suppression systems. Perhaps the most essential characteristic for these fire detection and suppression systems is high reliability; in other words, fire detectors should alarm only on actual fires and not be falsely activated by extraneous sources. Existing types of fire detectors have been greatly improved in the past decade; however, fundamental limitations of their method of operation leave open a significant possibility of false alarms and restrict their usefulness. At the Civil Engineering Laboratory at Tyndall Air Force Base in Florida, a new type of fire detector is under development which 'sees' a fire visually, like a human being, and makes a reliable decision based on known visual characteristics of flames. Hardware prototypes of the Machine Vision (MV) Fire Detection System have undergone live fire tests and demonstrated extremely high accuracy in discriminating actual fires from false alarm sources. In fact, this technology promises to virtually eliminate false activations. This detector could be used to monitor fueling facilities, launch towers, clean rooms, and other high-value and high-risk areas. Applications can extend to space station and in-flight shuttle operations as well; fiber optics and remote camera heads enable the system to see around obstructed areas and crew compartments. The capability of the technology to distinguish fires means that fire detection can be provided even during maintenance operations, such as welding.
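    The two records above do not specify which visual characteristics of flames the detector evaluates. Purely as an assumption, the sketch below uses one commonly cited signature, low-frequency intensity flicker within a candidate region, to show how a simple machine-vision discrimination rule might be structured; the band limits, frame rate and data are hypothetical and are not taken from the MV system.

    ```python
    import numpy as np

    def flame_likelihood(roi_intensity, fs, band=(1.0, 15.0)):
        """Crude flame-likeness score for one candidate region of interest.

        roi_intensity -- 1-D array: mean pixel intensity of the ROI over consecutive frames
        fs            -- frame rate in Hz
        band          -- assumed flame-flicker band in Hz (hypothetical values)
        Returns the fraction of fluctuation power falling inside the flicker band.
        """
        x = roi_intensity - np.mean(roi_intensity)
        power = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        total = power[1:].sum()                      # ignore the DC bin
        return float(power[in_band].sum() / total) if total > 0 else 0.0

    # Hypothetical 10 s clip at 60 fps: a flickering source versus a steady lamp
    fs = 60.0
    t = np.arange(0, 10, 1.0 / fs)
    flame = 100 + 10 * np.sin(2 * np.pi * 10.0 * t) + np.random.randn(t.size)
    lamp = 100 + np.random.randn(t.size)
    print(flame_likelihood(flame, fs), flame_likelihood(lamp, fs))
    ```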

  13. Congruence as a measurement of extended haplotype structure across the genome

    PubMed Central

    2012-01-01

    Background Historically, extended haplotypes have been defined using only a few data points, such as alleles for several HLA genes in the MHC. High-density SNP data, and the increasing affordability of whole genome SNP typing, creates the opportunity to define higher resolution extended haplotypes. This drives the need for new tools that support quantification and visualization of extended haplotypes as defined by as many as 2000 SNPs. Confronted with high-density SNP data across the major histocompatibility complex (MHC) for 2,300 complete families, compiled by the Type 1 Diabetes Genetics Consortium (T1DGC), we developed software for studying extended haplotypes. Methods The software, called ExHap (Extended Haplotype), uses a similarity measurement we term congruence to identify and quantify long-range allele identity. Using ExHap, we analyzed congruence in both the T1DGC data and family-phased data from the International HapMap Project. Results Congruent chromosomes from the T1DGC data have between 96.5% and 99.9% allele identity over 1,818 SNPs spanning 2.64 megabases of the MHC (HLA-DRB1 to HLA-A). Thirty-three of 132 DQ-DR-B-A defined haplotype groups have > 50% congruent chromosomes in this region. For example, 92% of chromosomes within the DR3-B8-A1 haplotype are congruent from HLA-DRB1 to HLA-A (99.8% allele identity). We also applied ExHap to all 22 autosomes for both CEU and YRI cohorts from the International HapMap Project, identifying multiple candidate extended haplotypes. Conclusions Long-range congruence is not unique to the MHC region. Patterns of allele identity on phased chromosomes provide a simple, straightforward approach to visually and quantitatively inspect complex long-range structural patterns in the genome. Such patterns aid the biologist in appreciating genetic similarities and differences across cohorts, and can lead to hypothesis generation for subsequent studies. PMID:22369243
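    ExHap's exact congruence measure is defined in the paper itself; the sketch below only illustrates the underlying idea, pairwise long-range allele identity between phased chromosomes, using a toy threshold and invented data rather than ExHap's implementation.

    ```python
    import numpy as np

    def allele_identity(chrom_a, chrom_b):
        """Fraction of SNP positions at which two phased chromosomes carry the same allele.

        chrom_a, chrom_b -- equal-length sequences of allele codes (e.g. 0/1 per SNP)
        """
        return float(np.mean(np.asarray(chrom_a) == np.asarray(chrom_b)))

    def congruent_pairs(chromosomes, threshold=0.965):
        """Index pairs of chromosomes whose long-range allele identity meets the threshold."""
        pairs = []
        for i in range(len(chromosomes)):
            for j in range(i + 1, len(chromosomes)):
                if allele_identity(chromosomes[i], chromosomes[j]) >= threshold:
                    pairs.append((i, j))
        return pairs

    # Toy example with three phased chromosomes over ten SNPs
    chroms = [[0, 1, 1, 0, 1, 0, 0, 1, 1, 0],
              [0, 1, 1, 0, 1, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]]
    print(congruent_pairs(chroms, threshold=0.9))
    ```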

  14. Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2016-05-01

    Motion video analysis is a challenging task, particularly if real-time analysis is required. An important issue is therefore how to provide suitable assistance for the human operator. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, for example, real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface can help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine the qualities of the human observer's perception and of the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues in analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how best to relaunch tracking in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.

  15. Choosing MUSE: Validation of a Low-Cost, Portable EEG System for ERP Research.

    PubMed

    Krigolson, Olave E; Williams, Chad C; Norton, Angela; Hassall, Cameron D; Colino, Francisco L

    2017-01-01

    In recent years there has been an increase in the number of portable low-cost electroencephalographic (EEG) systems available to researchers. However, to date the validation of low-cost EEG systems has focused on continuous recording of EEG data and/or the replication of large-system EEG setups reliant on event markers to afford examination of event-related brain potentials (ERP). Here, we demonstrate that it is possible to conduct ERP research without being reliant on event markers using a portable MUSE EEG system and a single computer. Specifically, we report the results of two experiments using data collected with the MUSE EEG system: one using the well-known visual oddball paradigm and the other using a standard reward-learning task. Our results demonstrate that we could observe and quantify the N200 and P300 ERP components in the visual oddball task and the reward positivity (the mirror opposite component to the feedback-related negativity) in the reward-learning task. Specifically, single-sample t-tests of component existence (all p's < 0.05), computation of Bayesian credible intervals, and 95% confidence intervals all statistically verified the existence of the N200, P300, and reward positivity in all analyses. With this research paper we provide an open-source website with all the instructions, methods, and software needed to replicate our findings and to give researchers an easy way to use the MUSE EEG system for ERP research. Importantly, our work highlights that with a single computer and a portable EEG system such as the MUSE, one can conduct ERP research with ease, thus greatly extending the possible use of the ERP methodology to a variety of novel contexts.
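    The authors provide their own open-source analysis materials on the website mentioned above; the following is merely a generic illustration, with simulated data, of measuring a component's mean amplitude in a time window and testing it against zero with a single-sample t-test.

    ```python
    import numpy as np
    from scipy import stats

    def component_amplitude(epochs, times, window):
        """Mean amplitude of an ERP component for each participant.

        epochs -- array (participants, trials, samples) of baseline-corrected EEG
        times  -- 1-D array of sample times in ms, aligned with the last axis
        window -- (start_ms, end_ms) measurement window, e.g. roughly (250, 500) for a P300
        """
        mask = (times >= window[0]) & (times <= window[1])
        # average over trials first (the ERP), then over the measurement window
        return epochs.mean(axis=1)[:, mask].mean(axis=1)

    # Hypothetical data: 20 participants, 40 difference-wave trials, 1 s at 256 Hz
    rng = np.random.default_rng(0)
    times = np.linspace(0, 1000, 256)
    epochs = rng.normal(0.0, 5.0, (20, 40, 256))
    epochs[:, :, (times >= 250) & (times <= 500)] += 2.0   # injected "component"
    amps = component_amplitude(epochs, times, (250, 500))
    t, p = stats.ttest_1samp(amps, 0.0)   # does the component differ from zero?
    print(t, p)
    ```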

  16. The time course of auditory-visual processing of speech and body actions: evidence for the simultaneous activation of an extended neural network for semantic processing.

    PubMed

    Meyer, Georg F; Harrison, Neil R; Wuerger, Sophie M

    2013-08-01

    An extensive network of cortical areas is involved in multisensory object and action recognition. This network draws on inferior frontal, posterior temporal, and parietal areas; activity is modulated by familiarity and the semantic congruency of auditory and visual component signals, even if semantic incongruences are created by combining visual and auditory signals representing very different signal categories, such as speech and whole body actions. Here we present results from a high-density ERP study designed to examine the time course and source location of responses to semantically congruent and incongruent audiovisual speech and body actions, to explore whether the network involved in action recognition consists of a hierarchy of sequentially activated processing modules or a network of simultaneously active processing sites. We report two main results: 1) There are no significant early differences in the processing of congruent and incongruent audiovisual action sequences. The earliest difference between congruent and incongruent audiovisual stimuli occurs between 240 and 280 ms after stimulus onset in the left temporal region. Between 340 and 420 ms, semantic congruence modulates responses in central and right frontal areas. Late differences (after 460 ms) occur bilaterally in frontal areas. 2) Source localisation (dipole modelling and LORETA) reveals that an extended network encompassing inferior frontal, temporal, parasagittal, and superior parietal sites is simultaneously active between 180 and 420 ms during the processing of auditory–visual action sequences. Early activation (before 120 ms) can be explained mainly by activity in sensory cortices. The simultaneous activation of an extended network between 180 and 420 ms is consistent with models that posit parallel processing of complex action sequences in frontal, temporal and parietal areas rather than models that postulate hierarchical processing in a sequence of brain regions. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Real-time visual mosaicking and navigation on the seafloor

    NASA Astrophysics Data System (ADS)

    Richmond, Kristof

    Remote robotic exploration holds vast potential for gaining knowledge about extreme environments accessible to humans only with great difficulty. Robotic explorers have been sent to other solar system bodies, and on this planet into inaccessible areas such as caves and volcanoes. In fact, the largest unexplored land area on earth lies hidden in the airless cold and intense pressure of the ocean depths. Exploration in the oceans is further hindered by water's high absorption of electromagnetic radiation, which both inhibits remote sensing from the surface, and limits communications with the bottom. The Earth's oceans thus provide an attractive target for developing remote exploration capabilities. As a result, numerous robotic vehicles now routinely survey this environment, from remotely operated vehicles piloted over tethers from the surface to torpedo-shaped autonomous underwater vehicles surveying the mid-waters. However, these vehicles are limited in their ability to navigate relative to their environment. This limits their ability to return to sites with precision without the use of external navigation aids, and to maneuver near and interact with objects autonomously in the water and on the sea floor. The enabling of environment-relative positioning on fully autonomous underwater vehicles will greatly extend their power and utility for remote exploration in the furthest reaches of the Earth's waters---even under ice and under ground---and eventually in extraterrestrial liquid environments such as Europa's oceans. This thesis presents an operational, fielded system for visual navigation of underwater robotic vehicles in unexplored areas of the seafloor. The system does not depend on external sensing systems, using only instruments on board the vehicle. As an area is explored, a camera is used to capture images and a composite view, or visual mosaic, of the ocean bottom is created in real time. Side-to-side visual registration of images is combined with dead-reckoned navigation information in a framework allowing the creation and updating of large, locally consistent mosaics. These mosaics are used as maps in which the vehicle can navigate and localize itself with respect to points in the environment. The system achieves real-time performance in several ways. First, wherever possible, direct sensing of motion parameters is used in place of extracting them from visual data. Second, trajectories are chosen to enable a hierarchical search for side-to-side links which limits the amount of searching performed without sacrificing robustness. Finally, the map estimation is formulated as a sparse, linear information filter allowing rapid updating of large maps. The visual navigation enabled by the work in this thesis represents a new capability for remotely operated vehicles, and an enabling capability for a new generation of autonomous vehicles which explore and interact with remote, unknown and unstructured underwater environments. The real-time mosaic can be used on current tethered vehicles to create pilot aids and provide a vehicle user with situational awareness of the local environment and the position of the vehicle within it. For autonomous vehicles, the visual navigation system enables precise environment-relative positioning and mapping, without requiring external navigation systems, opening the way for ever-expanding autonomous exploration capabilities. 
The utility of this system was demonstrated in the field at sites of scientific interest using the ROVs Ventana and Tiburon operated by the Monterey Bay Aquarium Research Institute. A number of sites in and around Monterey Bay, California were mosaicked using the system, culminating in a complete imaging of the wreck site of the USS Macon, where real-time visual mosaics containing thousands of images were generated while navigating using only sensor systems on board the vehicle.
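    The thesis formulates map estimation as a sparse, linear information filter, but the abstract gives no implementation detail. The sketch below therefore shows only the generic additive information-form measurement update that such a formulation relies on, with a one-dimensional toy example; all names and numbers are invented and are not the thesis implementation.

    ```python
    import numpy as np

    def information_update(Lambda, eta, H, R, z):
        """One measurement update of a linear information filter.

        The state is kept in information form: Lambda = Sigma^-1 and eta = Lambda @ mu.
        H -- measurement Jacobian (sparse in practice: an image-to-image registration
             touches only the two poses it links), R -- measurement noise covariance,
        z -- measured relative offset between two mosaic poses.
        """
        R_inv = np.linalg.inv(R)
        Lambda_new = Lambda + H.T @ R_inv @ H      # information matrix grows additively
        eta_new = eta + H.T @ R_inv @ z            # so does the information vector
        return Lambda_new, eta_new

    # Toy example: two 1-D poses, a registration says pose1 - pose0 is about 2.0
    Lambda = np.diag([1.0, 1e-6])                  # pose0 well known, pose1 unconstrained
    eta = Lambda @ np.array([0.0, 0.0])
    H = np.array([[-1.0, 1.0]])                    # relative measurement model
    Lambda, eta = information_update(Lambda, eta, H, np.array([[0.01]]), np.array([2.0]))
    print(np.linalg.solve(Lambda, eta))            # recovered mean: pose1 pulled toward 2.0
    ```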

  18. Ultrastructural aspects of feeding and secretion-excretion by the equine parasite Strongylus vulgaris.

    PubMed

    Mobarak, M S; Ryan, M F

    1999-06-01

    Light, scanning, and transmission electron microscopy were employed to provide further data on the putative origins of the immunogenic secretory-excretory product (ESP) of Strongylus vulgaris (Looss 1900). The sharply delineated but superficial attachment to the equine caecum by the mouth leaves behind an oval area devoid of epithelial cells. Attachment does not extend deeply enough to reach the muscularis mucosa layer of the equine intestine. The progressive digestion of the ingested plug of tissue (epithelial cells, blood cells, and mucus) was visualized. The coelomocytes, floating cells and membranous structures located in the pseudocoelom and intimately associated with the digestive, excretory and reproductive systems, and with the somatic muscles, are described. The secretory-excretory system comprises two ventrally located secretory-excretory glands connected to tubular elements. These glands synthesize granules of various sizes and densities that are delineated.

  19. A system for respiratory motion detection using optical fibers embedded into textiles.

    PubMed

    D'Angelo, L T; Weber, S; Honda, Y; Thiel, T; Narbonneau, F; Luth, T C

    2008-01-01

    In this contribution, a first prototype for mobile respiratory motion detection using optical fibers embedded into textiles is presented. The developed system consists of a T-shirt with an integrated fiber sensor and a portable monitoring unit with a wireless communication link enabling data analysis and visualization on a PC. Great efforts are being made worldwide to develop mobile solutions for monitoring the vital signs of patients who need continuous medical care. Wearable, comfortable, smart textiles incorporating sensors are a good approach to this problem. In most cases, however, the integrated sensors are electrical, which imposes significant limits, for example for the monitoring of anaesthetized patients during Magnetic Resonance Imaging (MRI). OFSETH (Optical Fibre Embedded into technical Textile for Healthcare) uses optical sensor technologies to extend the current capabilities of medical technical textiles.

  20. DCO-VIVO: A Collaborative Data Platform for the Deep Carbon Science Communities

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.; West, P.; Erickson, J. S.; Ma, X.; Fox, P. A.

    2014-12-01

    Deep Carbon Observatory (DCO) is a decade-long scientific endeavor to understand carbon in the complex deep Earth system. Thousands of DCO scientists from institutions across the globe are organized into communities representing four domains of exploration: Extreme Physics and Chemistry, Reservoirs and Fluxes, Deep Energy, and Deep Life. Cross-community and cross-disciplinary collaboration is one of the most distinctive features of DCO's flexible research framework. VIVO is an open-source Semantic Web platform that facilitates cross-institutional researcher and research discovery. It includes a number of standard ontologies that interconnect people, organizations, publications, activities, locations, and other entities of research interest to enable browsing, searching, visualizing, and generating Linked Open (research) Data. The DCO-VIVO solution expedites research collaboration between DCO scientists and communities. Based on DCO's specific requirements, the DCO Data Science team developed a series of extensions to the VIVO platform, including extensions to the VIVO information model, extended querying over the semantic information within VIVO, integration with other open-source collaborative environments and data management systems, single sign-on, assignment of unique Handles to DCO objects, and publication and dataset ingestion extensions built on existing publication systems. We present here the iterative development of these extensions, which are now in daily use by the DCO community of scientists for research reporting, information sharing, and resource discovery in support of research activities and program management.

  1. A coaxially focused multi-mode beam for optical coherence tomography imaging with extended depth of focus (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yin, Biwei; Liang, Chia-Pin; Vuong, Barry; Tearney, Guillermo J.

    2017-02-01

    Conventional OCT images, obtained using a focused Gaussian beam, have a lateral resolution of approximately 30 μm and a depth of focus (DOF) of 2-3 mm, defined as the confocal parameter (twice the Gaussian beam Rayleigh range). Improving lateral resolution without sacrificing imaging range requires techniques that can extend the DOF. Previously, we described a self-imaging wavefront division optical system that provided an estimated one order of magnitude DOF extension. In this study, we further investigate the properties of the coaxially focused multi-mode (CAFM) beam created by this self-imaging wavefront division optical system and demonstrate its feasibility for real-time biological tissue imaging. Gaussian beam and CAFM beam fiber optic probes with similar numerical apertures (objective NA≈0.5) were fabricated, providing lateral resolutions of approximately 2 μm. Rigorous characterization of lateral resolution over depth was performed for both probes. The CAFM beam probe was found to provide a DOF approximately one order of magnitude greater than that of the Gaussian beam probe. By incorporating the CAFM beam fiber optic probe into a μOCT system with 1.5 μm axial resolution, we were able to acquire cross-sectional images of swine small intestine ex vivo, enabling the visualization of subcellular structures and providing high-quality OCT images over more than a 300 μm depth range.
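    For orientation, the relations below are standard Gaussian-beam optics, not taken from the paper: they show why lateral resolution and DOF are coupled for a conventional probe. The symbols w_0 (beam waist radius) and λ (wavelength) are generic, and the remark about scaling is only illustrative.

    ```latex
    % Rayleigh range and confocal parameter (DOF) of a focused Gaussian beam.
    % Halving the focused spot size w_0 cuts the DOF by a factor of four, which is
    % why micrometre-scale lateral resolution normally implies a very short DOF
    % unless a DOF-extension scheme (such as the CAFM beam) is used.
    \[
      z_R \;=\; \frac{\pi w_0^{2}}{\lambda},
      \qquad
      \mathrm{DOF} \;=\; 2\,z_R \;=\; \frac{2\pi w_0^{2}}{\lambda}.
    \]
    ```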

  2. Cell-selective metabolic labeling of biomolecules with bioorthogonal functionalities.

    PubMed

    Xie, Ran; Hong, Senlian; Chen, Xing

    2013-10-01

    Metabolic labeling of biomolecules with bioorthogonal functionalities enables visualization, enrichment, and analysis of the biomolecules of interest in their physiological environments. This versatile strategy has found utility in probing various classes of biomolecules in a broad range of biological processes. On the other hand, metabolic labeling is nonselective with respect to cell type, which imposes limitations for studies performed in complex biological systems. Herein, we review the recent methodological developments aiming to endow metabolic labeling strategies with cell-type selectivity. The cell-selective metabolic labeling strategies have emerged from protein and glycan labeling. We envision that these strategies can be readily extended to labeling of other classes of biomolecules. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Top-down predictions in the cognitive brain

    PubMed Central

    Kveraga, Kestutis; Ghuman, Avniel S.; Bar, Moshe

    2007-01-01

    The human brain is not a passive organ simply waiting to be activated by external stimuli. Instead, it is proposed that the brain continuously employs memory of past experiences to interpret sensory information and predict the immediately relevant future. This review concentrates on visual recognition as the model system for developing and testing ideas about the role and mechanisms of top-down predictions in the brain. We cover relevant behavioral, computational and neural aspects. These ideas are then extended to other domains. The basic elements of this proposal include analogical mapping, associative representations and the generation of predictions. Connections to a host of cognitive processes will be made and implications for several mental disorders will be proposed. PMID:17923222

  4. High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras.

    PubMed

    Lapray, Pierre-Jean; Thomas, Jean-Baptiste; Gouton, Pierre

    2017-06-03

    Spectral filter array imaging exhibits a strong similarity with color filter array imaging. This permits us to embed the technology in practical vision systems with little adaptation of existing solutions. In this communication, we define an imaging pipeline that permits high dynamic range (HDR) spectral imaging, extended from color filter array pipelines. We propose an implementation of this pipeline on a prototype sensor and evaluate the quality of our implementation on real data with objective metrics and visual examples. We demonstrate that we reduce noise and, in particular, solve the problem of noise generated by the lack of energy balance. Data are provided to the community in an image database for further research.
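    The paper's actual pipeline is not reproduced here; as a hedged illustration of the HDR-merging step that such a pipeline extends from colour filter arrays, the sketch below performs a per-sensel, exposure-weighted merge of raw mosaicked frames. The weighting rule, saturation threshold and data are assumptions, chosen only to show the idea that each sensel is merged independently before demosaicking.

    ```python
    import numpy as np

    def merge_hdr(exposures, times, saturation=0.95):
        """Merge multiple exposures of the same raw, mosaicked frame into a radiance map.

        exposures -- list of 2-D arrays with values in [0, 1], one per exposure
        times     -- matching exposure times in seconds
        Uses a weighted average of (pixel / time), down-weighting clipped pixels.
        """
        num = np.zeros_like(exposures[0], dtype=float)
        den = np.zeros_like(exposures[0], dtype=float)
        for img, t in zip(exposures, times):
            w = np.where(img < saturation, 1.0 - np.abs(2.0 * img - 1.0), 0.0)  # hat weight
            num += w * img / t
            den += w
        return num / np.maximum(den, 1e-9)

    # Hypothetical two-exposure example
    rng = np.random.default_rng(1)
    scene = rng.uniform(0.0, 4.0, (4, 4))                       # "true" radiance
    short = np.clip(scene * 0.1, 0, 1)                          # 0.1 s exposure
    long = np.clip(scene * 0.4, 0, 1)                           # 0.4 s exposure (clips)
    print(merge_hdr([short, long], [0.1, 0.4]))
    ```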

  5. Analysis of single quantum-dot mobility inside 1D nanochannel devices

    NASA Astrophysics Data System (ADS)

    Hoang, H. T.; Segers-Nolten, I. M.; Tas, N. R.; van Honschoten, J. W.; Subramaniam, V.; Elwenspoek, M. C.

    2011-07-01

    We visualized individual quantum dots using a combination of a confining nanochannel and an ultra-sensitive microscope system equipped with a high numerical aperture lens and a highly sensitive camera. The diffusion coefficients of the confined quantum dots were determined from the experimentally recorded trajectories according to the classical diffusion theory for Brownian motion in two dimensions. The calculated diffusion coefficients were three times smaller than those in bulk solution. These observations confirm and extend the results of Eichmann et al (2008 Langmuir 24 714-21) to smaller particle diameters and narrower confinement. A detailed analysis shows that the observed reduction in mobility cannot be explained by conventional hydrodynamic theory.
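    The abstract cites "classical diffusion theory for Brownian motion in two dimensions"; a minimal illustration of that estimate, fitting the mean squared displacement MSD(τ) = 4Dτ of a recorded trajectory, is sketched below with simulated data (the trajectory and parameters are invented, not the authors').

    ```python
    import numpy as np

    def diffusion_coefficient_2d(track, dt, max_lag=10):
        """Estimate D from a 2-D trajectory via the mean squared displacement.

        For free 2-D Brownian motion, MSD(tau) = 4 * D * tau, so D comes from a
        linear fit of MSD against lag time.
        track -- array of shape (n_frames, 2) with x, y positions
        dt    -- time between frames in seconds
        """
        lags = np.arange(1, max_lag + 1)
        msd = np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                        for lag in lags])
        slope = np.polyfit(lags * dt, msd, 1)[0]
        return slope / 4.0

    # Hypothetical track: simulated Brownian motion with D = 1.0 um^2/s, dt = 10 ms
    rng = np.random.default_rng(2)
    D_true, dt, n = 1.0, 0.01, 2000
    steps = rng.normal(0.0, np.sqrt(2 * D_true * dt), (n, 2))
    track = np.cumsum(steps, axis=0)
    print(diffusion_coefficient_2d(track, dt))   # should be close to 1.0
    ```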

  6. Advanced information society(2)

    NASA Astrophysics Data System (ADS)

    Masuyama, Keiichi

    Modern life is full of information, and information permeates our daily lives. Telecommunication networks now extend to the societal, corporate, and individual levels. Although we have only just entered the advanced information society, the business world and our daily lives have already been steadily transformed by the advancement of information networks. This advancement of information strongly influences the economy and will play the main role in the expansion of domestic demand. This paper sketches the image of the coming advanced information society, focusing on the transformation of business life and of daily life, which has been enriched by the spread of everyday information and of visual information delivered by satellite systems, in the development of the intelligent city.

  7. PRay - A graphical user interface for interactive visualization and modification of rayinvr models

    NASA Astrophysics Data System (ADS)

    Fromm, T.

    2016-01-01

    PRay is a graphical user interface for interactive displaying and editing of velocity models for seismic refraction. It is optimized for editing rayinvr models but can also be used as a dynamic viewer for ray tracing results from other software. The main features are the graphical editing of nodes and fast adjusting of the display (stations and phases). It can be extended by user-defined shell scripts and links to phase picking software. PRay is open source software written in the scripting language Perl, runs on Unix-like operating systems including Mac OS X and provides a version controlled source code repository for community development (https://sourceforge.net/projects/pray-plot-rayinvr/).

  8. Real-time image mosaicing for medical applications.

    PubMed

    Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth

    2007-01-01

    In this paper we describe the development of a robotically-assisted image mosaicing system for medical applications. The processing occurs in real-time due to a fast initial image alignment provided by robotic position sensing. Near-field imaging, defined by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.

  9. Documentation for the machine-readable version of the SAO-HD-GC-DM cross index version 1983

    NASA Technical Reports Server (NTRS)

    Roman, N. G.; Warren, W. H., Jr.; Schofield, N., Jr.

    1983-01-01

    An updated and extended machine-readable version of the Smithsonian Astrophysical Observatory star catalog (SAO) is described. Corrections are included for all errors found since preparation of the original catalog that resulted from misidentifications, omissions of components in multiple star systems, and missing Durchmusterung numbers (the common identifier) in the SAO Catalog. Component identifications from the Index of Visual Double Stars (IDS) are appended to all multiple SAO entries with the same DM numbers, and lower-case letter identifiers for supplemental BD stars are added. A total of 11,398 individual corrections and data additions is incorporated into the present version of the cross index.

  10. Theoretical tuning of the firefly bioluminescence spectra by the modification of oxyluciferin

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan-Yuan; Zhu, Jia; Liu, Ya-Jun

    2014-01-01

    Extending the firefly bioluminescence is of practical significance for the improved visualization of living cells and the development of a multicolor reporter. Tuning the color of bioluminescence in fireflies mainly involves the modification of luciferase and luciferin. In this Letter, we theoretically studied the emission spectra of 9 firefly oxyluciferin analogs in the gas phase and in solutions. Three density functionals, including B3LYP, CAM-B3LYP and M06-2X, were employed to theoretically predict the efficiently luminescent analogs. The reliable functionals for calculating the targeted systems were suggested. The luminescence efficiency, solvent effects, and substituent effects are discussed based on the calculated results.

  11. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System

    PubMed Central

    Qian, Jun; Zi, Bin; Ma, Yangang; Zhang, Dan

    2017-01-01

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields. PMID:28891964

  12. The Design and Development of an Omni-Directional Mobile Robot Oriented to an Intelligent Manufacturing System.

    PubMed

    Qian, Jun; Zi, Bin; Wang, Daoming; Ma, Yangang; Zhang, Dan

    2017-09-10

    In order to transport materials flexibly and smoothly in a tight plant environment, an omni-directional mobile robot based on four Mecanum wheels was designed. The mechanical system of the mobile robot is made up of three separable layers so as to simplify its combination and reorganization. Each modularized wheel was installed on a vertical suspension mechanism, which ensures the moving stability and keeps the distances of four wheels invariable. The control system consists of two-level controllers that implement motion control and multi-sensor data processing, respectively. In order to make the mobile robot navigate in an unknown semi-structured indoor environment, the data from a Kinect visual sensor and four wheel encoders were fused to localize the mobile robot using an extended Kalman filter with specific processing. Finally, the mobile robot was integrated in an intelligent manufacturing system for material conveying. Experimental results show that the omni-directional mobile robot can move stably and autonomously in an indoor environment and in industrial fields.
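    The two records above fuse Kinect visual data with four wheel encoders using an extended Kalman filter "with specific processing" that the abstracts do not detail. The sketch below is therefore only a textbook planar-robot EKF, with odometry-driven prediction and a direct pose observation standing in for the visual measurement; all models and noise values are assumptions.

    ```python
    import numpy as np

    def ekf_predict(mu, Sigma, v, w, dt, Q):
        """EKF prediction for a planar robot state [x, y, theta] from odometry (v, w)."""
        x, y, th = mu
        mu_pred = np.array([x + v * dt * np.cos(th),
                            y + v * dt * np.sin(th),
                            th + w * dt])
        F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                      [0.0, 1.0,  v * dt * np.cos(th)],
                      [0.0, 0.0,  1.0]])
        return mu_pred, F @ Sigma @ F.T + Q

    def ekf_update(mu, Sigma, z, R):
        """EKF update with a direct pose observation z = [x, y, theta] from a visual sensor."""
        H = np.eye(3)                                  # identity measurement model (assumed)
        S = H @ Sigma @ H.T + R
        K = Sigma @ H.T @ np.linalg.inv(S)             # Kalman gain
        innovation = z - H @ mu
        innovation[2] = (innovation[2] + np.pi) % (2 * np.pi) - np.pi   # wrap heading
        return mu + K @ innovation, (np.eye(3) - K @ H) @ Sigma

    # Toy step: drive forward for 0.1 s at 1 m/s, then fuse a noisy visual pose fix
    mu, Sigma = np.zeros(3), np.eye(3) * 0.1
    Q, R = np.eye(3) * 1e-3, np.eye(3) * 5e-2
    mu, Sigma = ekf_predict(mu, Sigma, v=1.0, w=0.0, dt=0.1, Q=Q)
    mu, Sigma = ekf_update(mu, Sigma, z=np.array([0.12, 0.01, 0.02]), R=R)
    print(mu)
    ```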

  13. Functional MRI Representational Similarity Analysis Reveals a Dissociation between Discriminative and Relative Location Information in the Human Visual System.

    PubMed

    Roth, Zvi N

    2016-01-01

    Neural responses in visual cortex are governed by a topographic mapping from retinal locations to cortical responses. Moreover, at the voxel population level early visual cortex (EVC) activity enables accurate decoding of stimuli locations. However, in many cases information enabling one to discriminate between locations (i.e., discriminative information) may be less relevant than information regarding the relative location of two objects (i.e., relative information). For example, when planning to grab a cup, determining whether the cup is located at the same retinal location as the hand is hardly relevant, whereas the location of the cup relative to the hand is crucial for performing the action. We have previously used multivariate pattern analysis techniques to measure discriminative location information, and found the highest levels in EVC, in line with other studies. Here we show, using representational similarity analysis, that availability of discriminative information in fMRI activation patterns does not entail availability of relative information. Specifically, we find that relative location information can be reliably extracted from activity patterns in posterior intraparietal sulcus (pIPS), but not from EVC, where we find the spatial representation to be warped. We further show that this variability in relative information levels between regions can be explained by a computational model based on an array of receptive fields. Moreover, when the model's receptive fields are extended to include inhibitory surround regions, the model can account for the spatial warping in EVC. These results demonstrate how size and shape properties of receptive fields in human visual cortex contribute to the transformation of discriminative spatial representations into relative spatial representations along the visual stream.

  14. Functional MRI Representational Similarity Analysis Reveals a Dissociation between Discriminative and Relative Location Information in the Human Visual System

    PubMed Central

    Roth, Zvi N.

    2016-01-01

    Neural responses in visual cortex are governed by a topographic mapping from retinal locations to cortical responses. Moreover, at the voxel population level early visual cortex (EVC) activity enables accurate decoding of stimuli locations. However, in many cases information enabling one to discriminate between locations (i.e., discriminative information) may be less relevant than information regarding the relative location of two objects (i.e., relative information). For example, when planning to grab a cup, determining whether the cup is located at the same retinal location as the hand is hardly relevant, whereas the location of the cup relative to the hand is crucial for performing the action. We have previously used multivariate pattern analysis techniques to measure discriminative location information, and found the highest levels in EVC, in line with other studies. Here we show, using representational similarity analysis, that availability of discriminative information in fMRI activation patterns does not entail availability of relative information. Specifically, we find that relative location information can be reliably extracted from activity patterns in posterior intraparietal sulcus (pIPS), but not from EVC, where we find the spatial representation to be warped. We further show that this variability in relative information levels between regions can be explained by a computational model based on an array of receptive fields. Moreover, when the model's receptive fields are extended to include inhibitory surround regions, the model can account for the spatial warping in EVC. These results demonstrate how size and shape properties of receptive fields in human visual cortex contribute to the transformation of discriminative spatial representations into relative spatial representations along the visual stream. PMID:27242455
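    Representational similarity analysis as used in these two records involves its own preprocessing and statistics; the sketch below shows only the core comparison, correlating a region's neural dissimilarity matrix with a model dissimilarity matrix (here a hypothetical "relative location" model), using simulated data rather than the authors' fMRI patterns.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rsa_score(patterns, model_rdm):
        """Compare a region's neural RDM with a model RDM.

        patterns  -- array (n_conditions, n_voxels): one activation pattern per stimulus
        model_rdm -- condensed-form model dissimilarity vector for the same conditions
        Returns the Spearman correlation between neural and model dissimilarities.
        """
        neural_rdm = pdist(patterns, metric='correlation')   # 1 - Pearson r between patterns
        rho, _ = spearmanr(neural_rdm, model_rdm)
        return rho

    # Hypothetical example: six stimulus locations on a line, 50 voxels
    rng = np.random.default_rng(3)
    locations = np.arange(6, dtype=float).reshape(-1, 1)
    model_rdm = pdist(locations, metric='euclidean')          # relative-location model
    patterns = locations @ rng.normal(size=(1, 50)) + rng.normal(scale=2.0, size=(6, 50))
    print(rsa_score(patterns, model_rdm))
    ```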

  15. Target-responsive DNAzyme cross-linked hydrogel for visual quantitative detection of lead.

    PubMed

    Huang, Yishun; Ma, Yanli; Chen, Yahong; Wu, Xuemeng; Fang, Luting; Zhu, Zhi; Yang, Chaoyong James

    2014-11-18

    Because of the severe health risks associated with lead pollution, rapid, sensitive, and portable detection of low levels of Pb(2+) in biological and environmental samples is of great importance. In this work, a Pb(2+)-responsive hydrogel was prepared using a DNAzyme and its substrate as cross-linker for rapid, sensitive, portable, and quantitative detection of Pb(2+). Gold nanoparticles (AuNPs) were first encapsulated in the hydrogel as an indicator for colorimetric analysis. In the absence of lead, the DNAzyme is inactive, and the substrate cross-linker maintains the hydrogel in the gel form. In contrast, the presence of lead activates the DNAzyme to cleave the substrate, decreasing the cross-linking density of the hydrogel and resulting in dissolution of the hydrogel and release of AuNPs for visual detection. As low as 10 nM Pb(2+) can be detected by the naked eye. Furthermore, to realize quantitative visual detection, a volumetric bar-chart chip (V-chip) was used for quantitative readout of the hydrogel system by replacing AuNPs with gold-platinum core-shell nanoparticles (Au@PtNPs). The Au@PtNPs released from the hydrogel upon target activation can efficiently catalyze the decomposition of H2O2 to generate a large volume of O2. The gas pressure moves an ink bar in the V-chip for portable visual quantitative detection of lead with a detection limit less than 5 nM. The device was able to detect lead in digested blood with excellent accuracy. The method developed can be used for portable lead quantitation in many applications. Furthermore, the method can be further extended to portable visual quantitative detection of a variety of targets by replacing the lead-responsive DNAzyme with other DNAzymes.

  16. A Visualization Tool for Integrating Research Results at an Underground Mine

    NASA Astrophysics Data System (ADS)

    Boltz, S.; Macdonald, B. D.; Orr, T.; Johnson, W.; Benton, D. J.

    2016-12-01

    Researchers with the National Institute for Occupational Safety and Health are conducting research at a deep, underground metal mine in Idaho to develop improvements in ground control technologies that reduce the effects of dynamic loading on mine workings, thereby decreasing the risk to miners. This research is multifaceted and includes: photogrammetry, microseismic monitoring, geotechnical instrumentation, and numerical modeling. When managing research involving such a wide range of data, understanding how the data relate to each other and to the mining activity quickly becomes a daunting task. In an effort to combine this diverse research data into a single, easy-to-use system, a three-dimensional visualization tool was developed. The tool was created using the Unity3d video gaming engine and includes the mine development entries, production stopes, important geologic structures, and user-input research data. The tool provides the user with a first-person, interactive experience where they are able to walk through the mine as well as navigate the rock mass surrounding the mine to view and interpret the imported data in the context of the mine and as a function of time. The tool was developed using data from a single mine; however, it is intended to be a generic tool that can be easily extended to other mines. For example, a similar visualization tool is being developed for an underground coal mine in Colorado. The ultimate goal is for NIOSH researchers and mine personnel to be able to use the visualization tool to identify trends that may not otherwise be apparent when viewing the data separately. This presentation highlights the features and capabilities of the mine visualization tool and explains how it may be used to more effectively interpret data and reduce the risk of ground fall hazards to underground miners.

  17. Interaction between vibration-evoked proprioceptive illusions and mirror-evoked visual illusions in an arm-matching task.

    PubMed

    Tsuge, Mikio; Izumizaki, Masahiko; Kigawa, Kazuyoshi; Atsumi, Takashi; Homma, Ikuo

    2012-12-01

    We studied the influence of false proprioceptive information generated by arm vibration and false visual information provided by a mirror in which subjects saw a reflection of another arm on perception of arm position, in a forearm position-matching task in right-handed subjects (n = 17). The mirror was placed between left and right arms, and arranged so that the reflected left arm appeared to the subjects to be their unseen right (reference) arm. The felt position of the right arm, indicated with a paddle, was influenced by vision of the mirror image of the left arm. If the left arm appeared flexed in the mirror, subjects felt their right arm to be more flexed than it was. Conversely, if the left arm was extended, they felt their right arm to be more extended than it was. When reference elbow flexors were vibrated at 70-80 Hz, an illusion of extension of the vibrated arm was elicited. The illusion of a more flexed reference arm evoked by seeing a mirror image of the flexed left arm was reduced by vibration. However, the illusion of extension of the right arm evoked by seeing a mirror image of the extended left arm was increased by vibration. That is, when the mirror and vibration illusions were in the same direction, they reinforced each other. However, when they were in opposite directions, they tended to cancel one another. The present study shows the interaction between proprioceptive and visual information in perception of arm position.

  18. PathVisio 3: an extendable pathway analysis toolbox.

    PubMed

    Kutmon, Martina; van Iersel, Martijn P; Bohler, Anwesha; Kelder, Thomas; Nunes, Nuno; Pico, Alexander R; Evelo, Chris T

    2015-02-01

    PathVisio is a commonly used pathway editor, visualization and analysis software package. Biological pathways have been used by biologists for many years to describe the detailed steps in biological processes. Those powerful, visual representations help researchers to better understand, share and discuss knowledge. Since the first publication of PathVisio in 2008, the original paper has been cited more than 170 times and PathVisio has been used in many different biological studies. As an online editor, PathVisio is also integrated into the community-curated pathway database WikiPathways. Here we present the third version of PathVisio with the newest additions and improvements to the application. The core features of PathVisio are pathway drawing, advanced data visualization and pathway statistics. Additionally, PathVisio 3 introduces a powerful new extension system that allows other developers to contribute additional functionality in the form of plugins without changing the core application. PathVisio can be downloaded from http://www.pathvisio.org and in 2014 PathVisio 3 was downloaded over 5,500 times. There are already more than 15 plugins available in the central plugin repository. PathVisio is a freely available, open-source tool published under the Apache 2.0 license (http://www.apache.org/licenses/LICENSE-2.0). It is implemented in Java and thus runs on all major operating systems. The code repository is available at http://svn.bigcat.unimaas.nl/pathvisio. The support mailing list for users is available on https://groups.google.com/forum/#!forum/wikipathways-discuss and for developers on https://groups.google.com/forum/#!forum/wikipathways-devel.

  19. Chromatic and achromatic visual fields in relation to choroidal thickness in patients with high myopia: A pilot study.

    PubMed

    García-Domene, M C; Luque, M J; Díez-Ajenjo, M A; Desco-Esteban, M C; Artigas, J M

    2018-02-01

    To analyse the relationship between choroidal thickness and the visual perception of patients with high myopia but without retinal damage. All patients underwent ophthalmic evaluation including a slit lamp examination and dilated ophthalmoscopy, subjective refraction, best corrected visual acuity, axial length, optical coherence tomography, contrast sensitivity function and sensitivity of the visual pathways. We included eleven eyes of subjects with high myopia. There are statistical correlations between choroidal thickness and almost all the contrast sensitivity values. The sensitivity of the magnocellular and koniocellular pathways is the most affected, and the homogeneity of the sensitivity of the magnocellular pathway depends on the choroidal thickness; when the thickness decreases, the sensitivity impairment extends from the center to the periphery of the visual field. Patients with high myopia without any fundus changes have visual impairments. We have found that choroidal thickness correlates with perceptual parameters such as contrast sensitivity or the mean defect and pattern standard deviation of the visual fields of some visual pathways. Our study shows that the magnocellular and koniocellular pathways are the most affected, so these patients have impairments in motion perception and blue-yellow contrast perception. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  20. In-vivo imaging of the palisades of Vogt and the limbal crypts with sub-micrometer axial resolution optical coherence tomography

    PubMed Central

    Bizheva, Kostadinka; Tan, Bingyao; MacLellan, Benjamin; Hosseinaee, Zohreh; Mason, Erik; Hileeto, Denise; Sorbara, Luigina

    2017-01-01

    A research-grade OCT system was used to image, in vivo and without contact with the tissue, the cellular structure and microvasculature of the healthy human corneo-scleral limbus. The OCT system provided 0.95 µm axial and 4 µm (2 µm) lateral resolution in biological tissue, depending on the magnification of the imaging objective. Cross-sectional OCT images acquired tangentially from the inferior limbus showed reflective, loop-like features that correspond to the fibrous folds of the palisades of Vogt (POV). The high OCT resolution allowed for visualization of individual cells inside the limbal crypts, of capillaries extending from the inside of the POV’s fibrous folds and connecting to a lateral grid of micro-vessels located in the connective tissue directly below the POV, as well as of reflections from individual red blood cells inside the capillaries. Differences in the reflective properties of the POV were observed among subjects with various levels of POV pigmentation. Morphological features observed in the high resolution OCT images correlated well with histology. The ability to visualize the limbal morphology and microvasculature in vivo at the cellular level can aid the diagnosis and treatment of limbal stem cell dysfunction and dystrophies. PMID:28966853
