Sample records for system visualization project

  1. A low-cost and versatile system for projecting wide-field visual stimuli within fMRI scanners

    PubMed Central

    Greco, V.; Frijia, F.; Mikellidou, K.; Montanaro, D.; Farini, A.; D’Uva, M.; Poggi, P.; Pucci, M.; Sordini, A.; Morrone, M. C.; Burr, D. C.

    2016-01-01

    We have constructed and tested a custom-made magnetic-imaging-compatible visual projection system designed to project on a very wide visual field (~80°). A standard projector was modified with a coupling lens, projecting images into the termination of an image fiber. The other termination of the fiber was placed in the 3-T scanner room with a projection lens, which projected the images relayed by the fiber onto a screen over the head coil, viewed by a participant wearing magnifying goggles. To validate the system, wide-field stimuli were presented in order to identify retinotopic visual areas. The results showed that this low-cost and versatile optical system may be a valuable tool to map visual areas in the brain that process peripheral receptive fields. PMID:26092392

  2. Differential expression of vesicular glutamate transporters 1 and 2 may identify distinct modes of glutamatergic transmission in the macaque visual system

    PubMed Central

    Balaram, Pooja; Hackett, Troy A.; Kaas, Jon H.

    2013-01-01

    Glutamate is the primary neurotransmitter utilized by the mammalian visual system for excitatory neurotransmission. The sequestration of glutamate into synaptic vesicles, and the subsequent transport of filled vesicles to the presynaptic terminal membrane, is regulated by a family of proteins known as vesicular glutamate transporters (VGLUTs). Two VGLUT proteins, VGLUT1 and VGLUT2, characterize distinct sets of glutamatergic projections between visual structures in rodents and prosimian primates, yet little is known about their distributions in the visual system of anthropoid primates. We have examined the mRNA and protein expression patterns of VGLUT1 and VGLUT2 in the visual system of macaque monkeys, an Old World anthropoid primate, in order to determine their relative distributions in the superior colliculus, lateral geniculate nucleus, pulvinar complex, V1 and V2. Distinct expression patterns for both VGLUT1 and VGLUT2 identified architectonic boundaries in all structures, as well as anatomical subdivisions of the superior colliculus, pulvinar complex, and V1. These results suggest that VGLUT1 and VGLUT2 clearly identify regions of glutamatergic input in visual structures, and may identify common architectonic features of visual areas and nuclei across the primate radiation. Additionally, we find that VGLUT1 and VGLUT2 characterize distinct subsets of glutamatergic projections in the macaque visual system; VGLUT2 predominates in driving or feedforward projections from lower order to higher order visual structures while VGLUT1 predominates in modulatory or feedback projections from higher order to lower order visual structures. The distribution of these two proteins suggests that VGLUT1 and VGLUT2 may identify class 1 and class 2 type glutamatergic projections within the primate visual system (Sherman and Guillery, 2006). PMID:23524295

  3. Differential expression of vesicular glutamate transporters 1 and 2 may identify distinct modes of glutamatergic transmission in the macaque visual system.

    PubMed

    Balaram, Pooja; Hackett, Troy A; Kaas, Jon H

    2013-05-01

    Glutamate is the primary neurotransmitter utilized by the mammalian visual system for excitatory neurotransmission. The sequestration of glutamate into synaptic vesicles, and the subsequent transport of filled vesicles to the presynaptic terminal membrane, is regulated by a family of proteins known as vesicular glutamate transporters (VGLUTs). Two VGLUT proteins, VGLUT1 and VGLUT2, characterize distinct sets of glutamatergic projections between visual structures in rodents and prosimian primates, yet little is known about their distributions in the visual system of anthropoid primates. We have examined the mRNA and protein expression patterns of VGLUT1 and VGLUT2 in the visual system of macaque monkeys, an Old World anthropoid primate, in order to determine their relative distributions in the superior colliculus, lateral geniculate nucleus, pulvinar complex, V1 and V2. Distinct expression patterns for both VGLUT1 and VGLUT2 identified architectonic boundaries in all structures, as well as anatomical subdivisions of the superior colliculus, pulvinar complex, and V1. These results suggest that VGLUT1 and VGLUT2 clearly identify regions of glutamatergic input in visual structures, and may identify common architectonic features of visual areas and nuclei across the primate radiation. Additionally, we find that VGLUT1 and VGLUT2 characterize distinct subsets of glutamatergic projections in the macaque visual system; VGLUT2 predominates in driving or feedforward projections from lower order to higher order visual structures while VGLUT1 predominates in modulatory or feedback projections from higher order to lower order visual structures. The distribution of these two proteins suggests that VGLUT1 and VGLUT2 may identify class 1 and class 2 type glutamatergic projections within the primate visual system (Sherman and Guillery, 2006). Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Solar System Visualization (SSV) Project

    NASA Technical Reports Server (NTRS)

    Todd, Jessida L.

    2005-01-01

    The Solar System Visualization (SSV) project aims to enhance scientific and public understanding through visual representations and modeling procedures. The SSV project's objectives are to (1) create new visualization technologies, (2) organize science observations and models, and (3) visualize science results and mission plans. The SSV project currently supports the Mars Exploration Rovers (MER) mission, the Mars Reconnaissance Orbiter (MRO), and Cassini. In support of these missions, the SSV team has produced pan-and-zoom animations of large mosaics to reveal details of surface features and topography, created 3-D animations of science instruments and procedures, formed 3-D anaglyphs from left and right stereo pairs, and animated registered multi-resolution mosaics to provide context for microscopic images.
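    The anaglyph step mentioned in this record is a standard channel-combination technique: take the red channel from the left-eye image and the green/blue channels from the right-eye image. A minimal sketch in NumPy, assuming 8-bit RGB arrays; the function name `make_anaglyph` is illustrative, not from the SSV project.

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine a left/right stereo pair into a red-cyan anaglyph.

    Takes the red channel from the left image and the green/blue
    channels from the right image (the classic red-cyan scheme).
    Both inputs are H x W x 3 uint8 RGB arrays of the same shape.
    """
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]      # red from the left eye
    anaglyph[..., 1:] = right[..., 1:]   # green/blue from the right eye
    return anaglyph
```

    Viewed through red-cyan glasses, each eye then sees (approximately) only its own image, producing the depth effect.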

  5. Neural network based visualization of collaborations in a citizen science project

    NASA Astrophysics Data System (ADS)

    Morais, Alessandra M. M.; Santos, Rafael D. C.; Raddick, M. Jordan

    2014-05-01

    Citizen science projects are those in which volunteers are asked to collaborate in scientific projects, usually by volunteering idle computer time for distributed data processing efforts or by actively labeling or classifying information - shapes of galaxies, whale sounds, and historical records are all examples of citizen science projects in which users access a data collection system to label or classify images and sounds. In order to be successful, a citizen science project must captivate users and keep them interested in the project and in the science behind it, thereby increasing the time users spend collaborating with the project. Understanding the behavior of citizen scientists and their interaction with the data collection systems may help increase the involvement of the users, categorize them according to different parameters, facilitate their collaboration with the systems, support the design of better user interfaces, and allow better planning and deployment of similar projects and systems. User behavior can be actively monitored or derived from interaction with the data collection systems. Records of the interactions can be analyzed using visualization techniques to identify patterns and outliers. In this paper we present some results on the visualization of more than 80 million interactions of almost 150 thousand users with the Galaxy Zoo I citizen science project. Visualization of the attributes extracted from their behaviors was done with a clustering neural network (the Self-Organizing Map) and a selection of icon- and pixel-based techniques. These techniques allow the visual identification of groups of similar behavior in several different ways.
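    The clustering step described in this record uses a Self-Organizing Map to place similar behavior vectors near each other on a 2-D grid. A minimal SOM training loop sketched in NumPy, with assumed defaults (grid size, decay schedules); this is not the authors' implementation.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map: maps feature vectors onto a 2-D grid.

    data: (n_samples, n_features) array of behavior attributes.
    Returns the trained weight grid of shape (rows, cols, n_features);
    nearby grid cells end up representing similar feature vectors.
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))
    # Grid coordinates of each unit, used by the neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5  # decaying neighborhood width
        for x in rng.permutation(data):
            # Best-matching unit: the cell whose weights are closest to x.
            d = np.linalg.norm(w - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull the BMU and its grid neighbors toward the sample.
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
            w += lr * g[..., None] * (x - w)
    return w
```

    After training, each user's behavior vector can be assigned to its best-matching unit, and the occupancy of the grid visualized with icon- or pixel-based techniques as in the paper.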

  6. [Review of visual display system in flight simulator].

    PubMed

    Xie, Guang-hui; Wei, Shao-ning

    2003-06-01

    The visual display system is a key component of flight simulators and flight training devices and plays a very important role in them. The development history of visual display systems is recalled, and the principles and characteristics of several visual display systems, including collimated display systems and back-projected collimated display systems, are described. Future directions for visual display systems are analyzed.

  7. Stereoscopic applications for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2007-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE- an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  8. Cortico-fugal output from visual cortex promotes plasticity of innate motor behaviour.

    PubMed

    Liu, Bao-Hua; Huberman, Andrew D; Scanziani, Massimo

    2016-10-20

    The mammalian visual cortex massively innervates the brainstem, a phylogenetically older structure, via cortico-fugal axonal projections. Many cortico-fugal projections target brainstem nuclei that mediate innate motor behaviours, but the function of these projections remains poorly understood. A prime example of such behaviours is the optokinetic reflex (OKR), an innate eye movement mediated by the brainstem accessory optic system, that stabilizes images on the retina as the animal moves through the environment and is thus crucial for vision. The OKR is plastic, allowing the amplitude of this reflex to be adaptively adjusted relative to other oculomotor reflexes and thereby ensuring image stability throughout life. Although the plasticity of the OKR is thought to involve subcortical structures such as the cerebellum and vestibular nuclei, cortical lesions have suggested that the visual cortex might also be involved. Here we show that projections from the mouse visual cortex to the accessory optic system promote the adaptive plasticity of the OKR. OKR potentiation, a compensatory plastic increase in the amplitude of the OKR in response to vestibular impairment, is diminished by silencing visual cortex. Furthermore, targeted ablation of a sparse population of cortico-fugal neurons that specifically project to the accessory optic system severely impairs OKR potentiation. Finally, OKR potentiation results from an enhanced drive exerted by the visual cortex onto the accessory optic system. Thus, cortico-fugal projections to the brainstem enable the visual cortex, an area that has been principally studied for its sensory processing function, to plastically adapt the execution of innate motor behaviours.

  9. Visualizing Terrestrial and Aquatic Systems in 3D

    EPA Science Inventory

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  10. Stereoscopic display of 3D models for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2006-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE- an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  11. Cortico-fugal output from visual cortex promotes plasticity of innate motor behaviour

    PubMed Central

    Liu, Bao-hua; Huberman, Andrew D.; Scanziani, Massimo

    2017-01-01

    The mammalian visual cortex massively innervates the brainstem, a phylogenetically older structure, via cortico-fugal axonal projections. Many cortico-fugal projections target brainstem nuclei that mediate innate motor behaviours, but the function of these projections remains poorly understood. A prime example of such behaviours is the optokinetic reflex (OKR), an innate eye movement mediated by the brainstem accessory optic system, that stabilizes images on the retina as the animal moves through the environment and is thus crucial for vision. The OKR is plastic, allowing the amplitude of this reflex to be adaptively adjusted relative to other oculomotor reflexes and thereby ensuring image stability throughout life. Although the plasticity of the OKR is thought to involve subcortical structures such as the cerebellum and vestibular nuclei, cortical lesions have suggested that the visual cortex might also be involved. Here we show that projections from the mouse visual cortex to the accessory optic system promote the adaptive plasticity of the OKR. OKR potentiation, a compensatory plastic increase in the amplitude of the OKR in response to vestibular impairment, is diminished by silencing visual cortex. Furthermore, targeted ablation of a sparse population of cortico-fugal neurons that specifically project to the accessory optic system severely impairs OKR potentiation. Finally, OKR potentiation results from an enhanced drive exerted by the visual cortex onto the accessory optic system. Thus, cortico-fugal projections to the brainstem enable the visual cortex, an area that has been principally studied for its sensory processing function, to plastically adapt the execution of innate motor behaviours. PMID:27732573

  12. Visualizing Terrestrial and Aquatic Systems in 3D - in IEEE VisWeek 2014

    EPA Science Inventory

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  13. Research on Multimedia Access to Microcomputers for Visually Impaired Youth.

    ERIC Educational Resources Information Center

    Ashcroft, S. C.

    This final report discusses the outcomes of a federally funded project that studied visual, auditory, and tactual methods designed to give youth with visual impairments access to microcomputers for curricular, prevocational, and avocational purposes. The objectives of the project were: (1) to research microcomputer systems that could be made…

  14. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 1 : outreach and commercialization of IRSV prototype.

    DOT National Transportation Integrated Search

    2012-03-01

    The Integrated Remote Sensing and Visualization System (IRSV) was developed in Phase One of this project in order to accommodate the needs of today's bridge engineers at the state and local level. Overall goals of this project are: Better u...

  15. Shape equivalence under perspective and projective transformations.

    PubMed

    Wagemans, J; Lamote, C; Van Gool, L

    1997-06-01

    When a planar shape is viewed obliquely, it is deformed by a perspective deformation. If the visual system were to pick up geometrical invariants from such projections, these would necessarily be invariant under the wider class of projective transformations. To what extent can the visual system tell the difference between perspective and nonperspective but still projective deformations of shapes? To investigate this, observers were asked to indicate which of two test patterns most resembled a standard pattern. The test patterns were related to the standard pattern by a perspective or projective transformation, or they were completely unrelated. Performance was slightly better in a matching task with perspective and unrelated test patterns (92.6%) than in a projective-random matching task (88.8%). In a direct comparison, participants had a small preference (58.5%) for the perspectively related patterns over the projectively related ones. Preferences were based on the values of the transformation parameters (slant and shear). Hence, perspective and projective transformations yielded perceptual differences, but they were not treated in a categorically different manner by the human visual system.
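    The projective transformations compared in this record can be expressed as 3x3 homographies acting on homogeneous coordinates; a perspective deformation is the special case produced by a pinhole view of a plane, while a general homography has eight free parameters. A small sketch of applying such a transformation to a planar shape, assuming NumPy; the helper name `apply_homography` is illustrative, not from the paper.

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 projective transformation to 2-D points.

    pts: (n, 2) array. Points are lifted to homogeneous coordinates
    [x, y, 1], multiplied by H, and divided by the third coordinate
    (the perspective division that makes the map nonlinear in general).
    """
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]
```

    With the bottom row of H fixed to [0, 0, 1] the map stays affine; nonzero entries there introduce the foreshortening that distinguishes perspective and projective deformations of a shape.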

  16. Automated Visual Cognitive Tasks for Recording Neural Activity Using a Floor Projection Maze

    PubMed Central

    Kent, Brendon W.; Yang, Fang-Chi; Burwell, Rebecca D.

    2014-01-01

    Neuropsychological tasks used in primates to investigate mechanisms of learning and memory are typically visually guided cognitive tasks. We have developed visual cognitive tasks for rats using the Floor Projection Maze that are optimized for visual abilities of rats, permitting stronger comparisons of experimental findings with other species. In order to investigate neural correlates of learning and memory, we have integrated electrophysiological recordings into fully automated cognitive tasks on the Floor Projection Maze. Behavioral software interfaced with an animal tracking system allows monitoring of the animal's behavior with precise control of image presentation and reward contingencies for better trained animals. Integration with an in vivo electrophysiological recording system enables examination of behavioral correlates of neural activity at selected epochs of a given cognitive task. We describe protocols for a model system that combines automated visual presentation of information to rodents and intracranial reward with electrophysiological approaches. Our model system offers a sophisticated set of tools as a framework for other cognitive tasks to better isolate and identify specific mechanisms contributing to particular cognitive processes. PMID:24638057

  17. Visual examination apparatus

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)

    1976-01-01

    An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location, including a projection system for displaying to a patient a series of visual stimuli, a response switch enabling the patient to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system thereby provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.

  18. Envision: An interactive system for the management and visualization of large geophysical data sets

    NASA Technical Reports Server (NTRS)

    Searight, K. R.; Wojtowicz, D. P.; Walsh, J. E.; Pathi, S.; Bowman, K. P.; Wilhelmson, R. B.

    1995-01-01

    Envision is a software project at the University of Illinois and Texas A&M, funded by NASA's Applied Information Systems Research Project. It provides researchers in the geophysical sciences convenient ways to manage, browse, and visualize large observed or model data sets. Envision integrates data management, analysis, and visualization of geophysical data in an interactive environment. It employs commonly used standards in data formats, operating systems, networking, and graphics. It also attempts, wherever possible, to integrate with existing scientific visualization and analysis software. Envision has an easy-to-use graphical interface, distributed process components, and an extensible design. It is a public domain package, freely available to the scientific community.

  19. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase two, volume 4 : web-based bridge information database--visualization analytics and distributed sensing.

    DOT National Transportation Integrated Search

    2012-03-01

    This report introduces the design and implementation of a Web-based bridge information visual analytics system. This project integrates Internet, multiple databases, remote sensing, and other visualization technologies. The result combines a GIS ...

  20. Visualizing Terrestrial and Aquatic Systems in 3-D

    EPA Science Inventory

    The environmental modeling community has a long-standing need for affordable, easy-to-use tools that support 3-D visualization of complex spatial and temporal model output. The Visualization of Terrestrial and Aquatic Systems project (VISTAS) aims to help scientists produce effe...

  1. Visual examination apparatus

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)

    1973-01-01

    An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location is described. The apparatus includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.

  2. Retinal projections in the electric catfish (Malapterurus electricus).

    PubMed

    Ebbesson, S O; O'Donnel, D

    1980-01-01

    The poorly developed visual system of the electric catfish was studied with silver-degeneration methods. Retinal projections were entirely contralateral to the hypothalamic optic nucleus, the lateral geniculate nucleus, the dorsomedial optic nucleus, the pretectal nuclei including the cortical nucleus, and the optic tectum. The small size and lack of differentiation of the visual system in the electric catfish suggest a relatively small role for this sensory system in this species.

  3. Vision

    NASA Technical Reports Server (NTRS)

    Taylor, J. H.

    1973-01-01

    Some data on human vision, important in present and projected space activities, are presented. Visual environment and performance and structure of the visual system are also considered. Visual perception during stress is included.

  4. Integrated remote sensing and visualization (IRSV) system for transportation infrastructure operations and management, phase one, volume 4 : use of knowledge integrated visual analytics system in supporting bridge management.

    DOT National Transportation Integrated Search

    2009-12-01

    The goals of integration should be: supporting domain-oriented data analysis through the use of a knowledge-augmented visual analytics system. In this project, we focus on: providing interactive data exploration for bridge management. ...

  5. An Updated Account of the WISELAV Project: A Visual Construction of the English Verb System

    ERIC Educational Resources Information Center

    Pablos, Andrés Palacios

    2016-01-01

    This article presents the state of the art in WISELAV, an on-going research project based on the metaphor Languages Are (like) Visuals (LAV) and its mapping Words-In-Shapes Exchange (WISE). First, the cognitive premises that motivate the proposal are recalled: the power of images, students' increasingly visual cognitive learning style, and the…

  6. A quantitative comparison of the hemispheric, areal, and laminar origins of sensory and motor cortical projections to the superior colliculus of the cat.

    PubMed

    Butler, Blake E; Chabot, Nicole; Lomber, Stephen G

    2016-09-01

    The superior colliculus (SC) is a midbrain structure central to orienting behaviors. The organization of descending projections from sensory cortices to the SC has garnered much attention; however, rarely have projections from multiple modalities been quantified and contrasted, allowing for meaningful conclusions within a single species. Here, we examine corticotectal projections from visual, auditory, somatosensory, motor, and limbic cortices via retrograde pathway tracers injected throughout the superficial and deep layers of the cat SC. As anticipated, the majority of cortical inputs to the SC originate in the visual cortex. In fact, each field implicated in visual orienting behavior makes a substantial projection. Conversely, only one area of the auditory orienting system, the auditory field of the anterior ectosylvian sulcus (fAES), and no area involved in somatosensory orienting, shows significant corticotectal inputs. Although small relative to visual inputs, the projection from the fAES is of particular interest, as it represents the only bilateral cortical input to the SC. This detailed, quantitative study allows for comparison across modalities in an animal that serves as a useful model for both auditory and visual perception. Moreover, the differences in patterns of corticotectal projections between modalities inform the ways in which orienting systems are modulated by cortical feedback. J. Comp. Neurol. 524:2623-2642, 2016. © 2016 Wiley Periodicals, Inc.

  7. Software for Scientists Facing Wicked Problems: Lessons from the VISTAS Project

    EPA Science Inventory

    The Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to help scientists produce effective environmental science visualizations for their own use and for use in presenting their work to a wide range of stakeholders (including other scientists, decision maker...

  8. Augmented reality-guided neurosurgery: accuracy and intraoperative application of an image projection technique.

    PubMed

    Besharati Tabrizi, Leila; Mahvash, Mehran

    2015-07-01

    An augmented reality system has been developed for image-guided neurosurgery to project images with regions of interest onto the patient's head, skull, or brain surface in real time. The aim of this study was to evaluate system accuracy and to perform the first intraoperative application. Images of segmented brain tumors in different localizations and sizes were created in 10 cases and were projected to a head phantom using a video projector. Registration was performed using 5 fiducial markers. After each registration, the distance of the 5 fiducial markers from the visualized tumor borders was measured on the virtual image and on the phantom. The difference was considered a projection error. Moreover, the image projection technique was intraoperatively applied in 5 patients and was compared with a standard navigation system. Augmented reality visualization of the tumors succeeded in all cases. The mean time for registration was 3.8 minutes (range 2-7 minutes). The mean projection error was 0.8 ± 0.25 mm. There were no significant differences in accuracy according to the localization and size of the tumor. Clinical feasibility and reliability of the augmented reality system could be proved intraoperatively in 5 patients (projection error 1.2 ± 0.54 mm). The augmented reality system is accurate and reliable for the intraoperative projection of images to the head, skull, and brain surface. The ergonomic advantage of this technique improves the planning of neurosurgical procedures and enables the surgeon to use direct visualization for image-guided neurosurgery.
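    The projection error reported in this record is the distance between corresponding fiducial-marker positions on the virtual image and on the phantom, averaged over markers. A minimal sketch of that computation, assuming NumPy; the function name is hypothetical, not from the paper.

```python
import numpy as np

def mean_projection_error(virtual_pts, phantom_pts):
    """Mean Euclidean distance between corresponding fiducial positions.

    virtual_pts, phantom_pts: (n, 2) or (n, 3) arrays of marker
    coordinates measured on the projected (virtual) image and on the
    phantom, in the same units (e.g. mm). Returns (mean, std) of the
    per-marker distances.
    """
    d = np.linalg.norm(np.asarray(virtual_pts) - np.asarray(phantom_pts),
                       axis=1)
    return d.mean(), d.std()
```

    Reporting both mean and standard deviation, as the study does (e.g. 0.8 ± 0.25 mm), captures the typical error and its spread across the five markers.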

  9. A neural-visualization IDS for honeynet data.

    PubMed

    Herrero, Álvaro; Zurutuza, Urko; Corchado, Emilio

    2012-04-01

    Neural intelligent systems can provide a visualization of the network traffic for security staff, in order to reduce the widely known high false-positive rate associated with misuse-based Intrusion Detection Systems (IDSs). Unlike previous work, this study proposes unsupervised neural models that generate an intuitive visualization of the captured traffic, rather than network statistics. These snapshots of network events are immensely useful for security personnel that monitor network behavior. The system is based on the use of different neural projection and unsupervised methods for the visual inspection of honeypot data, and may be seen as a complementary network security tool that sheds light on internal data structures through visual inspection of the traffic itself. Furthermore, it is intended to facilitate verification and assessment of Snort performance (a well-known and widely-used misuse-based IDS), through the visualization of attack patterns. Empirical verification and comparison of the proposed projection methods are performed in a real domain, where two different case studies are defined and analyzed.

  10. [A technological device for optimizing the time taken for blind people to learn Braille].

    PubMed

    Hernández, Cesar; Pedraza, Luis F; López, Danilo

    2011-10-01

    This project was aimed at designing and putting into practice an electronic prototype for reducing the initial time taken by visually handicapped people, especially children, to learn Braille. The project was mainly based on a prototype digital electronic device which identifies material written by a user in Braille and translates it through a voice synthesis system, producing spoken words to indicate whether the person's writing in Braille has been correct. A global system for mobile communications (GSM) module was also incorporated into the device, allowing it to send text messages, an innovation in the field of aids for visually handicapped people. This project's main result was an easily accessed and understandable prototype device which improved visually handicapped people's initial learning of Braille. The time taken for visually handicapped people to learn Braille was significantly reduced, while their interest increased, as did their concentration during such learning.

  11. Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

    NASA Technical Reports Server (NTRS)

    Domik, Gitta; Alam, Salim; Pinkney, Paul

    1992-01-01

    This report describes our project activities for the period Sep. 1991 - Oct. 1992. Our activities included stabilizing the software system STAR, porting STAR to IDL/widgets (improved user interface), targeting new visualization techniques for multi-dimensional data visualization (emphasizing 3D visualization), and exploring leading-edge 3D interface devices. During the past project year we emphasized high-end visualization techniques, by exploring new tools offered by state-of-the-art visualization software (such as AVS and IDL/widgets), by experimenting with tools still under research at the Department of Computer Science (e.g., use of glyphs for multidimensional data visualization), and by researching current 3D input/output devices as they could be used to explore 3D astrophysical data. As always, any project activity is driven by the need to interpret astrophysical data more effectively.

  12. Distributions of vesicular glutamate transporters 1 and 2 in the visual system of tree shrews (Tupaia belangeri)

    PubMed Central

    Balaram, P; Isaamullah, M; Petry, HM; Bickford, ME; Kaas, JH

    2014-01-01

    Vesicular glutamate transporter (VGLUT) proteins regulate the storage and release of glutamate from synapses of excitatory neurons. Two isoforms, VGLUT1 and VGLUT2, are found in most glutamatergic projections across the mammalian visual system, and appear to differentially identify subsets of excitatory projections between visual structures. To expand current knowledge on the distribution of VGLUT isoforms in highly visual mammals, we examined the mRNA and protein expression patterns of VGLUT1 and VGLUT2 in the lateral geniculate nucleus (LGN), superior colliculus, pulvinar complex, and primary visual cortex (V1) in tree shrews (Tupaia belangeri), which are closely related to primates but classified as a separate order (Scandentia). We found that VGLUT1 was distributed in intrinsic and corticothalamic connections, whereas VGLUT2 was predominantly distributed in subcortical and thalamocortical connections. VGLUT1 and VGLUT2 were coexpressed in the LGN and in the pulvinar complex, as well as in restricted layers of V1, suggesting a greater heterogeneity in the range of efferent glutamatergic projections from these structures. These findings provide further evidence that VGLUT1 and VGLUT2 identify distinct populations of excitatory neurons in visual brain structures across mammals. Observed variations in individual projections may highlight the evolution of these connections through the mammalian lineage. PMID:25521420

  13. Parallel Visualization Co-Processing of Overnight CFD Propulsion Applications

    NASA Technical Reports Server (NTRS)

    Edwards, David E.; Haimes, Robert

    1999-01-01

    An interactive visualization system, pV3, is being developed for the investigation of advanced computational methodologies employing visualization and parallel processing for the extraction of information contained in large-scale transient engineering simulations. Visual techniques for extracting information from the data in terms of cutting planes, iso-surfaces, particle tracing and vector fields are included in this system. This paper discusses improvements to the pV3 system developed under NASA's Affordable High Performance Computing project.

  14. Development of a geographic visualization and communications systems (GVCS) for monitoring remote vehicles

    DOT National Transportation Integrated Search

    1998-03-30

    The purpose of this project is to integrate a variety of geographic information systems : capabilities and telecommunication technologies for potential use in geographic network and : visualization applications. The specific technical goals of the pr...

  15. Dynamic registration of an optical see-through HMD into a wide field-of-view rotorcraft flight simulation environment

    NASA Astrophysics Data System (ADS)

    Viertler, Franz; Hajek, Manfred

    2015-05-01

    To overcome the challenge of helicopter flight in degraded visual environments, current research considers head-mounted displays with 3D-conformal (scene-linked) visual cues the most promising display technology. For pilot-in-the-loop simulations with HMDs, highly accurate registration of the augmented visual system is required. In rotorcraft flight simulators the outside visual cues are usually provided by a dome projection system, since a wide field-of-view (e.g. horizontally > 200° and vertically > 80°) is required, which can hardly be achieved with collimated viewing systems. But optical see-through HMDs mostly do not have a focus equivalent to the distance from the pilot's eye-point position to the curved screen, which also depends on head motion. Hence, a dynamic vergence correction has been implemented to avoid binocular disparity. In addition, the parallax error induced by even small translational head motions is corrected with a head-tracking system so that the imagery remains registered to the projected screen. For this purpose, two options are presented. The correction can be achieved by rendering the view with yaw and pitch offset angles that depend on the deviation of the head position from the design eye-point of the spherical projection system. Alternatively, it can be solved by implementing a dynamic eye-point in the multi-channel projection system for the outside visual cues. Both options have been investigated for the integration of a binocular HMD into the Rotorcraft Simulation Environment (ROSIE) at the Technische Universitaet Muenchen. Pros and cons of both possibilities with regard to integration issues and usability in flight simulations are discussed.
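
    The first correction option, rendering with yaw and pitch offsets that depend on head displacement, can be sketched in a few lines. This is a generic geometric illustration that assumes the design eye-point sits at the dome center; the simulator's actual registration pipeline is not described in this record.

```python
import math

def screen_point(yaw, pitch, radius):
    """Point on a spherical screen, design eye-point at the origin."""
    return (radius * math.cos(pitch) * math.sin(yaw),
            radius * math.cos(pitch) * math.cos(yaw),
            radius * math.sin(pitch))

def view_angles(p):
    """Yaw/pitch of a 3-D point as seen from the origin."""
    x, y, z = p
    return math.atan2(x, y), math.atan2(z, math.hypot(x, y))

def parallax_offsets(yaw, pitch, radius, head):
    """Yaw/pitch offsets needed to re-aim a scene-linked symbol when the
    head is displaced by `head` from the design eye-point."""
    px, py, pz = screen_point(yaw, pitch, radius)
    hx, hy, hz = head
    yaw2, pitch2 = view_angles((px - hx, py - hy, pz - hz))
    return yaw2 - yaw, pitch2 - pitch

# A 5 cm sideways head shift against a 3 m dome: small negative yaw offset.
dyaw, dpitch = parallax_offsets(0.0, 0.0, 3.0, (0.05, 0.0, 0.0))
```

    For small shifts the offset is approximately the displacement divided by the dome radius, which is why even centimetre-scale head motion matters for conformal symbology.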

  16. Holodeck: Telepresence Dome Visualization System Simulations

    NASA Technical Reports Server (NTRS)

    Hite, Nicolas

    2012-01-01

    This paper explores the simulation and consideration of different image-projection strategies for the Holodeck, a dome that will be used for highly immersive telepresence operations in future endeavors of the National Aeronautics and Space Administration (NASA). Its visualization system will include a full 360-degree projection onto the dome's interior walls in order to display video streams from both simulations and recorded video. Because humans innately trust their vision to precisely report their surroundings, the Holodeck's visualization system is crucial to its realism. This system will be built on an integrated hardware and software infrastructure, namely a system of projectors that will work with a Graphics Processing Unit (GPU) and computer to both project images onto the dome and correct warping in those projections in real time. Using both Computer-Aided Design (CAD) and ray-tracing software, virtual models of various dome/projector geometries were created and simulated via tracking and analysis of virtual light sources, leading to the selection of two possible configurations for installation. Research into image warping and the generation of dome-ready video content was also conducted, including the generation of fisheye images, distortion correction, and the creation of a reliable content-generation pipeline.
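
    Fisheye-image generation of the kind mentioned above is commonly based on the equidistant model, where image radius grows linearly with the angle off the optical axis. The sketch below illustrates that mapping; it is a generic model, not the Holodeck's actual warping code.

```python
import math

def equidistant_fisheye(direction, image_radius):
    """Map a 3-D view direction to fisheye image coordinates using the
    equidistant model r = f * theta, with the optical axis along +z."""
    x, y, z = direction
    theta = math.atan2(math.hypot(x, y), z)   # angle off the optical axis
    phi = math.atan2(y, x)                    # azimuth around the axis
    # Scale so theta = pi/2 (90° off-axis, a 180° FOV) lands on the rim.
    r = image_radius * theta / (math.pi / 2)
    return r * math.cos(phi), r * math.sin(phi)

center = equidistant_fisheye((0.0, 0.0, 1.0), 512)  # optical axis -> center
rim = equidistant_fisheye((1.0, 0.0, 0.0), 512)     # 90° off-axis -> rim
```

    Inverting this mapping per projector, given the dome/projector geometry from the CAD models, is one way real-time warp correction can be derived.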

  17. A system to program projects to meet visual quality objectives

    Treesearch

    Fred L. Henley; Frank L. Hunsaker

    1979-01-01

    The U. S. Forest Service has established Visual Quality Objectives for National Forest lands and determined a method to ascertain the Visual Absorption Capability of those lands. Combining the two mapping inventories has allowed the Forest Service to retain the visual quality while managing natural resources.

  18. Mars @ ASDC

    NASA Astrophysics Data System (ADS)

    Carraro, Francesco

    "Mars @ ASDC" is a project born with the goal of using new web technologies to assist researchers involved in the study of Mars. The project employs the Mars map and JavaScript APIs provided by Google to visualize data acquired by space missions at the planet. So far, visualization of tracks acquired by MARSIS and regions observed by VIRTIS-Rosetta has been implemented. The main reason for creating this kind of tool is the difficulty of handling hundreds or thousands of acquisitions, like the ones from MARSIS, and the consequent difficulty of finding observations related to a particular region. This led to the development of a tool that allows users to search for acquisitions either by defining the region of interest through a set of geometrical parameters or by manually selecting the region on the map with a few mouse clicks. The system allows the visualization of tracks (acquired by MARSIS) or regions (acquired by VIRTIS-Rosetta) that intersect the user-defined region. MARSIS tracks can be visualized both in Mercator and polar projections, while the regions observed by VIRTIS can presently be visualized only in Mercator projection. The Mercator projection is the standard map provided by Google; the polar projections are provided by NASA and have been developed to be used in combination with the Google APIs. The whole project has been developed following the "open source" philosophy: the client-side code that handles the functioning of the web page is written in JavaScript, the server-side code that executes the searches for tracks or regions is written in PHP, and the database underlying the system is MySQL.
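
    The two map operations the record relies on, projecting coordinates into the Mercator map and testing whether acquisitions intersect a user-defined region, can be sketched as follows. The `intersects` helper is a hypothetical stand-in for the server-side PHP/MySQL search, and real MARSIS footprints are tracks rather than simple boxes.

```python
import math

def web_mercator(lat_deg, lon_deg):
    """Project latitude/longitude (degrees) onto the unit-square
    Web-Mercator plane used by standard Google-style base maps."""
    x = (lon_deg + 180.0) / 360.0
    lat = math.radians(lat_deg)
    y = 0.5 - math.log(math.tan(math.pi / 4 + lat / 2)) / (2 * math.pi)
    return x, y

def intersects(box_a, box_b):
    """True if two (min_lon, min_lat, max_lon, max_lat) boxes overlap."""
    min_lon_a, min_lat_a, max_lon_a, max_lat_a = box_a
    min_lon_b, min_lat_b, max_lon_b, max_lat_b = box_b
    return (min_lon_a <= max_lon_b and min_lon_b <= max_lon_a and
            min_lat_a <= max_lat_b and min_lat_b <= max_lat_a)

x, y = web_mercator(0.0, 0.0)  # equator/prime meridian -> map center
hit = intersects((10, -5, 20, 5), (15, 0, 30, 10))
```

    A bounding-box prefilter like this is cheap to index in MySQL; candidate tracks can then be tested exactly against the user's polygon.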

  19. Distributions of vesicular glutamate transporters 1 and 2 in the visual system of tree shrews (Tupaia belangeri).

    PubMed

    Balaram, P; Isaamullah, M; Petry, H M; Bickford, M E; Kaas, J H

    2015-08-15

    Vesicular glutamate transporter (VGLUT) proteins regulate the storage and release of glutamate from synapses of excitatory neurons. Two isoforms, VGLUT1 and VGLUT2, are found in most glutamatergic projections across the mammalian visual system, and appear to differentially identify subsets of excitatory projections between visual structures. To expand current knowledge on the distribution of VGLUT isoforms in highly visual mammals, we examined the mRNA and protein expression patterns of VGLUT1 and VGLUT2 in the lateral geniculate nucleus (LGN), superior colliculus, pulvinar complex, and primary visual cortex (V1) in tree shrews (Tupaia belangeri), which are closely related to primates but classified as a separate order (Scandentia). We found that VGLUT1 was distributed in intrinsic and corticothalamic connections, whereas VGLUT2 was predominantly distributed in subcortical and thalamocortical connections. VGLUT1 and VGLUT2 were coexpressed in the LGN and in the pulvinar complex, as well as in restricted layers of V1, suggesting a greater heterogeneity in the range of efferent glutamatergic projections from these structures. These findings provide further evidence that VGLUT1 and VGLUT2 identify distinct populations of excitatory neurons in visual brain structures across mammals. Observed variations in individual projections may highlight the evolution of these connections through the mammalian lineage. © 2015 Wiley Periodicals, Inc.

  20. Selective binocular vision loss in two subterranean caviomorph rodents: Spalacopus cyanus and Ctenomys talarum

    PubMed Central

    Vega-Zuniga, T.; Medina, F. S.; Marín, G.; Letelier, J. C.; Palacios, A. G.; Němec, P.; Schleich, C. E.; Mpodozis, J.

    2017-01-01

    To what extent can the mammalian visual system be shaped by visual behavior? Here we analyze the shape of the visual fields, the densities and distribution of cells in the retinal ganglion-cell layer and the organization of the visual projections in two species of facultative non-strictly subterranean rodents, Spalacopus cyanus and Ctenomys talarum, aiming to compare these traits with those of phylogenetically closely related species possessing contrasting diurnal/nocturnal visual habits. S. cyanus shows a definite zone of frontal binocular overlap and a corresponding area centralis, but a highly reduced amount of ipsilateral retinal projections. The situation in C. talarum is more extreme, as it lacks a fronto-ventral area of binocular superposition, has no recognizable area centralis and shows no ipsilateral retinal projections except to the suprachiasmatic nucleus. In both species, the extension of the monocular visual field and of the dorsal region of binocular overlap, as well as the whole set of contralateral visual projections, appears well-developed. We conclude that these subterranean rodents exhibit, paradoxically, diurnal instead of nocturnal visual specializations, but at the same time suffer a specific regression of the anatomical substrate for stereopsis. We discuss these findings in light of the visual ecology of subterranean lifestyles. PMID:28150809

  1. A Visual Servoing-Based Method for ProCam Systems Calibration

    PubMed Central

    Berry, Francois; Aider, Omar Ait; Mosnier, Jeremie

    2013-01-01

    Projector-camera systems are currently used in a wide field of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements, depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty in obtaining a set of 2D–3D points to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems that consists in projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes. Thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods considering both usability and accuracy. PMID:24084121

  2. MetaRep, an extended CMAS 3D program to visualize mafic (CMAS, ACF-S, ACF-N) and pelitic (AFM-K, AFM-S, AKF-S) projections

    NASA Astrophysics Data System (ADS)

    France, Lydéric; Nicollet, Christian

    2010-06-01

    MetaRep is a program based on our earlier program CMAS 3D. It is developed in MATLAB® script. MetaRep's objectives are to visualize and project major element compositions of mafic and pelitic rocks and their minerals in the pseudo-quaternary projections of the ACF-S, ACF-N, CMAS, AFM-K, AFM-S and AKF-S systems. These six systems are commonly used to describe metamorphic mineral assemblages and magmatic evolutions. Each system, made of four apices, can be represented as a tetrahedron that can be visualized in three dimensions with MetaRep; the four tetrahedron apices represent oxides, or combinations of oxides, that define the composition of the projected rock or mineral. The three-dimensional representation allows one to better understand the topology of the relationships between the rocks and minerals. From these systems, MetaRep can also project data in ternary plots (for example, the ACF, AFM and AKF ternary projections can be generated). A functional interface makes it easy to use and does not require any knowledge of MATLAB® programming. To facilitate its use, MetaRep loads, from the main interface, data compiled in a Microsoft Excel™ spreadsheet. Although useful for scientific research, the program is also a powerful tool for teaching. We propose an application example that, by using two combined systems (ACF-S and ACF-N), provides strong confirmation of the petrological interpretation.
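
    The pseudo-quaternary projection described above amounts to placing a normalized four-component composition at its barycentric position inside a tetrahedron. A minimal sketch follows, with an arbitrary regular-tetrahedron embedding for the apices; MetaRep's own apex definitions depend on the chosen system (e.g. ACF-S) and are not reproduced here.

```python
# One arbitrary embedding of a regular tetrahedron's four apices in 3-D.
APICES = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]

def to_tetrahedron(components):
    """Place a 4-component composition (one value per apex) at its
    barycentric position inside the tetrahedron."""
    total = sum(components)
    weights = [c / total for c in components]      # barycentric weights
    return tuple(sum(w * a[k] for w, a in zip(weights, APICES))
                 for k in range(3))

centroid = to_tetrahedron([1, 1, 1, 1])  # equal parts -> tetrahedron center
apex_a = to_tetrahedron([1, 0, 0, 0])    # pure first component -> its apex
```

    Rotating this 3-D point cloud interactively is what makes the topology of rock/mineral relationships easier to grasp than in flat ternary plots.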

  3. JUICE: a data management system that facilitates the analysis of large volumes of information in an EST project workflow.

    PubMed

    Latorre, Mariano; Silva, Herman; Saba, Juan; Guziolowski, Carito; Vizoso, Paula; Martinez, Veronica; Maldonado, Jonathan; Morales, Andrea; Caroca, Rodrigo; Cambiazo, Veronica; Campos-Vargas, Reinaldo; Gonzalez, Mauricio; Orellana, Ariel; Retamales, Julio; Meisel, Lee A

    2006-11-23

    Expressed sequence tag (EST) analyses provide a rapid and economical means to identify candidate genes that may be involved in a particular biological process. These ESTs are useful in many Functional Genomics studies. However, the large quantity and complexity of the data generated during an EST sequencing project can make the analysis of this information a daunting task. In an attempt to make this task friendlier, we have developed JUICE, an open source data management system (Apache + PHP + MySQL on Linux), which enables the user to easily upload, organize, visualize and search the different types of data generated in an EST project pipeline. In contrast to other systems, the JUICE data management system allows a branched pipeline to be established, modified and expanded, during the course of an EST project. The web interfaces and tools in JUICE enable the users to visualize the information in a graphical, user-friendly manner. The user may browse or search for sequences and/or sequence information within all the branches of the pipeline. The user can search using terms associated with the sequence name, annotation or other characteristics stored in JUICE and associated with sequences or sequence groups. Groups of sequences can be created by the user, stored in a clipboard and/or downloaded for further analyses. Different user profiles restrict the access of each user depending upon their role in the project. The user may have access exclusively to visualize sequence information, access to annotate sequences and sequence information, or administrative access. JUICE is an open source data management system that has been developed to aid users in organizing and analyzing the large amount of data generated in an EST Project workflow. JUICE has been used in one of the first functional genomics projects in Chile, entitled "Functional Genomics in nectarines: Platform to potentiate the competitiveness of Chile in fruit exportation". 
However, due to its ability to organize and visualize data from external pipelines, JUICE is a flexible data management system that should be useful for other EST/Genome projects. The JUICE data management system is released under the Open Source GNU Lesser General Public License (LGPL). JUICE may be downloaded from http://genoma.unab.cl/juice_system/ or http://www.genomavegetal.cl/juice_system/.

  4. NASA's Global Imagery Browse Services - Technologies for Visualizing Earth Science Data

    NASA Astrophysics Data System (ADS)

    Cechini, M. F.; Boller, R. A.; Baynes, K.; Schmaltz, J. E.; Thompson, C. K.; Roberts, J. T.; Rodriguez, J.; Wong, M. M.; King, B. A.; King, J.; De Luca, A. P.; Pressley, N. N.

    2017-12-01

    For more than 20 years, the NASA Earth Observing System (EOS) has collected earth science data for thousands of scientific parameters, now totaling nearly 15 petabytes of data. In 2013, NASA's Global Imagery Browse Services (GIBS) formed its vision to "transform how end users interact and discover [EOS] data through visualizations." This vision included leveraging scientific and community best practices and standards to provide a scalable, compliant, and authoritative source for EOS earth science data visualizations. Since that time, GIBS has grown quickly and now services millions of daily requests for over 500 imagery layers representing hundreds of earth science parameters to a broad community of users. For many of these parameters, visualizations are available within hours of acquisition from the satellite; for others, visualizations are available for the entire mission of the satellite. The GIBS system is built upon the OnEarth and MRF open source software projects, which are provided by the GIBS team. This software facilitates standards-based access for compliance with existing GIS tools. The GIBS imagery layers are predominantly rasterized images represented in two-dimensional coordinate systems, though multiple projections are supported. The OnEarth software also supports the GIBS ingest pipeline to facilitate low-latency updates to new or updated visualizations. This presentation will focus on the following topics: an overview of GIBS visualizations and the user community; current benefits and limitations of the OnEarth and MRF software projects and related standards; GIBS access methods and their (in)compatibilities with existing GIS libraries and applications; considerations for visualization accuracy and understandability; and future plans for more advanced visualization concepts (including vertical profiles and vector-based representations) and for Amazon Web Services support and deployments.
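
    GIBS imagery is typically fetched through standards-based WMTS tile requests. The sketch below assembles such a request from the commonly documented GIBS REST pattern; the URL template, layer name, and tile matrix set used here are assumptions and should be checked against the current GIBS documentation before use.

```python
# Assumed GIBS WMTS REST template (verify against GIBS documentation).
GIBS_TEMPLATE = ("https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
                 "{layer}/default/{date}/{matrix_set}/{z}/{row}/{col}.jpg")

def tile_url(layer, date, matrix_set, z, row, col):
    """Build one WMTS tile request for a dated GIBS imagery layer."""
    return GIBS_TEMPLATE.format(layer=layer, date=date,
                                matrix_set=matrix_set, z=z, row=row, col=col)

url = tile_url("MODIS_Terra_CorrectedReflectance_TrueColor",
               "2017-12-01", "250m", 2, 1, 3)
```

    Because the date is part of the path, GIS clients can page through the full mission archive simply by varying one URL segment.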

  5. Head mounted DMD based projection system for natural and prosthetic visual stimulation in freely moving rats.

    PubMed

    Arens-Arad, Tamar; Farah, Nairouz; Ben-Yaish, Shai; Zlotnik, Alex; Zalevsky, Zeev; Mandel, Yossi

    2016-10-12

    Novel technologies are constantly under development for vision restoration in blind patients. Many of these emerging technologies are based on the projection of high intensity light patterns at specific wavelengths, raising the need for the development of specialized projection systems. Here we present and characterize a novel projection system that meets the requirements for artificial retinal stimulation in rats and enables the recording of cortical responses. The system is based on a customized miniature Digital Mirror Device (DMD) for pattern projection, in both visible (525 nm) and NIR (915 nm) wavelengths, and a lens periscope for relaying the pattern directly onto the animal's retina. Thorough system characterization and the investigation of the effect of various parameters on obtained image quality were performed using ZEMAX. Simulation results revealed that images with an MTF higher than 0.8 were obtained with little effect of the vertex distance. Increased image quality was obtained at an optimal pupil diameter and smaller field of view. Visual cortex activity data was recorded simultaneously with pattern projection, further highlighting the importance of the system for prosthetic vision studies. This novel head mounted projection system may prove to be a vital tool in studying natural and artificial vision in behaving animals.

  6. Head mounted DMD based projection system for natural and prosthetic visual stimulation in freely moving rats

    PubMed Central

    Arens-Arad, Tamar; Farah, Nairouz; Ben-Yaish, Shai; Zlotnik, Alex; Zalevsky, Zeev; Mandel, Yossi

    2016-01-01

    Novel technologies are constantly under development for vision restoration in blind patients. Many of these emerging technologies are based on the projection of high intensity light patterns at specific wavelengths, raising the need for the development of specialized projection systems. Here we present and characterize a novel projection system that meets the requirements for artificial retinal stimulation in rats and enables the recording of cortical responses. The system is based on a customized miniature Digital Mirror Device (DMD) for pattern projection, in both visible (525 nm) and NIR (915 nm) wavelengths, and a lens periscope for relaying the pattern directly onto the animal’s retina. Thorough system characterization and the investigation of the effect of various parameters on obtained image quality were performed using ZEMAX. Simulation results revealed that images with an MTF higher than 0.8 were obtained with little effect of the vertex distance. Increased image quality was obtained at an optimal pupil diameter and smaller field of view. Visual cortex activity data was recorded simultaneously with pattern projection, further highlighting the importance of the system for prosthetic vision studies. This novel head mounted projection system may prove to be a vital tool in studying natural and artificial vision in behaving animals. PMID:27731346

  7. Head mounted DMD based projection system for natural and prosthetic visual stimulation in freely moving rats

    NASA Astrophysics Data System (ADS)

    Arens-Arad, Tamar; Farah, Nairouz; Ben-Yaish, Shai; Zlotnik, Alex; Zalevsky, Zeev; Mandel, Yossi

    2016-10-01

    Novel technologies are constantly under development for vision restoration in blind patients. Many of these emerging technologies are based on the projection of high intensity light patterns at specific wavelengths, raising the need for the development of specialized projection systems. Here we present and characterize a novel projection system that meets the requirements for artificial retinal stimulation in rats and enables the recording of cortical responses. The system is based on a customized miniature Digital Mirror Device (DMD) for pattern projection, in both visible (525 nm) and NIR (915 nm) wavelengths, and a lens periscope for relaying the pattern directly onto the animal’s retina. Thorough system characterization and the investigation of the effect of various parameters on obtained image quality were performed using ZEMAX. Simulation results revealed that images with an MTF higher than 0.8 were obtained with little effect of the vertex distance. Increased image quality was obtained at an optimal pupil diameter and smaller field of view. Visual cortex activity data was recorded simultaneously with pattern projection, further highlighting the importance of the system for prosthetic vision studies. This novel head mounted projection system may prove to be a vital tool in studying natural and artificial vision in behaving animals.

  8. Modeling human comprehension of data visualizations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzen, Laura E.; Haass, Michael Joseph; Divis, Kristin Marie

    This project was inspired by two needs. The first is a need for tools to help scientists and engineers to design effective data visualizations for communicating information, whether to the user of a system, an analyst who must make decisions based on complex data, or in the context of a technical report or publication. Most scientists and engineers are not trained in visualization design, and they could benefit from simple metrics to assess how well their visualization's design conveys the intended message. In other words, will the most important information draw the viewer's attention? The second is the need for more cognition-based metrics for evaluating new types of visualizations created by researchers in the information visualization and visual analytics communities. Evaluating visualizations is difficult even for experts. However, all visualization methods and techniques are intended to exploit the properties of the human visual system to convey information efficiently to a viewer. Thus, developing evaluation methods that are rooted in the scientific knowledge of the human visual system could be a useful approach. In this project, we conducted fundamental research on how humans make sense of abstract data visualizations, and how this process is influenced by their goals and prior experience. We then used that research to develop a new model, the Data Visualization Saliency Model, that can make accurate predictions about which features in an abstract visualization will draw a viewer's attention. The model is an evaluation tool that can address both of the needs described above, supporting both visualization research and Sandia mission needs.
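
    As a rough illustration of the underlying saliency idea (a generic center-surround contrast sketch, not the Data Visualization Saliency Model itself), the following shows why a single bright mark on a uniform chart background draws attention:

```python
def center_surround_saliency(img):
    """Toy bottom-up saliency proxy: how much each pixel's intensity
    differs from the mean of its 3x3 neighbourhood."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            nb = [img[i + di][j + dj]
                  for di in (-1, 0, 1) for dj in (-1, 0, 1)
                  if 0 <= i + di < h and 0 <= j + dj < w]
            out[i][j] = abs(img[i][j] - sum(nb) / len(nb))
    return out

# A uniform background with one bright mark: the mark pops out.
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 9, 0],
       [0, 0, 0, 0]]
sal = center_surround_saliency(img)
```

    A designer-facing metric could report whether the highest-saliency region coincides with the visualization's intended message.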

  9. Models of Speed Discrimination

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most research efforts in the study of the visual system have been focused in two almost non-overlapping directions. One research focus has been low-level perception as studied by psychophysics. The other has been the study of high-level vision, exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing the analyzers that decompose images into useful components. Various models are then compared to physiological measurements performed on neurons in the sensory systems. In the study of higher-level perception, the work has focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D from a variety of physical measurements, including motion, shading and other physical phenomena. With few exceptions, there has been very limited development of theories describing how the visual system might combine the output of the analyzers to form the representation of visual objects. Therefore, the processes underlying the integration of information over space represent critical aspects of the vision system. Understanding these processes will have implications for our expectations of the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: 1. Modeling visual search for the detection of speed deviation. 2. Perception of moving objects. 3. Exploring the role of eye movements in various visual tasks.

  10. The visual information system

    Treesearch

    Merlyn J. Paulson

    1979-01-01

    This paper outlines a project level process (V.I.S.) which utilizes very accurate and flexible computer algorithms in combination with contemporary site analysis and design techniques for visual evaluation, design and management. The process provides logical direction and connecting bridges through problem identification, information collection and verification, visual...

  11. Spherical Coordinate Systems for Streamlining Suited Mobility Analysis

    NASA Technical Reports Server (NTRS)

    Benson, Elizabeth; Cowley, Matthew; Harvill, Lauren; Rajulu, Sudhakar

    2015-01-01

    Introduction: When describing human motion, biomechanists generally report joint angles in terms of Euler angle rotation sequences. However, there are known limitations in using this method to describe complex motions such as the shoulder joint during a baseball pitch. Euler angle notation uses a series of three rotations about an axis where each rotation is dependent upon the preceding rotation. As such, the Euler angles need to be regarded as a set to get accurate angle information. Unfortunately, it is often difficult to visualize and understand these complex motion representations. It has been shown that using a spherical coordinate system allows Anthropometry and Biomechanics Facility (ABF) personnel to increase their ability to transmit important human mobility data to engineers, in a format that is readily understandable and directly translatable to their design efforts. Objectives: The goal of this project was to use innovative analysis and visualization techniques to aid in the examination and comprehension of complex motions. Methods: This project consisted of a series of small sub-projects, meant to validate and verify a new method before it was implemented in the ABF's data analysis practices. A mechanical test rig was built and tracked in 3D using an optical motion capture system. Its position and orientation were reported in both Euler and spherical reference systems. In the second phase of the project, the ABF estimated the error inherent in a spherical coordinate system, and evaluated how this error would vary within the reference frame. This stage also involved expanding a kinematic model of the shoulder to include the rest of the joints of the body. The third stage of the project involved creating visualization methods to assist in interpreting motion in a spherical frame. These visualization methods will be incorporated in a tool to evaluate a database of suited mobility data, which is currently in development. 
Results: Initial results demonstrated that a spherical coordinate system is helpful in describing and visualizing the motion of a space suit. The system is particularly useful in describing the motion of the shoulder, where multiple degrees of freedom can lead to very complex motion paths.
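
The advantage of the spherical representation described in this abstract can be illustrated with a short sketch (illustrative only; the function name and axis conventions are assumptions, not taken from the ABF tool):

```python
import math

def to_spherical(x, y, z):
    # Convert a direction vector (e.g., the long axis of the upper arm,
    # an assumed convention) to spherical coordinates: azimuth in the x-y
    # plane and elevation from that plane. Unlike an Euler angle sequence,
    # the two angles can be read independently of any rotation order.
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(y, x))
    elevation = math.degrees(math.asin(z / r))
    return azimuth, elevation

print(to_spherical(1.0, 0.0, 0.0))  # arm straight forward: (0.0, 0.0)
print(to_spherical(1.0, 0.0, 1.0))  # raised 45 deg in the x-z plane: ~(0.0, 45.0)
```

Each angle here is directly meaningful to a designer ("how far around, how far up"), which is the readability benefit the abstract describes.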

  12. SDDOT's enhanced pavement management system: visual distress survey manual [2017]

    DOT National Transportation Integrated Search

    2017-05-11

    In 1993, the South Dakota Department of Transportation initiated the Research Project SD93-14, Enhancement of South Dakota's Pavement Management System. As the Research Project progressed, it was determined that to better evaluate the condition of ...

  13. Visual exploration of high-dimensional data through subspace analysis and dynamic projections

    DOE PAGES

    Liu, S.; Wang, B.; Thiagarajan, J. J.; ...

    2015-06-01

    Here, we introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
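
The "dynamic projections" idea, smoothly animating between two 2D linear views, can be approximated with a naive sketch (linear interpolation of the projection bases with QR re-orthonormalization; the authors' actual transition scheme may differ, e.g. geodesics on the Grassmannian):

```python
import numpy as np

def transition_frames(A, B, n_frames=10):
    # Interpolate between two d x 2 orthonormal projection bases A and B,
    # re-orthonormalizing each intermediate basis with QR so that every
    # frame is a valid 2D linear projection of the data.
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        M = (1.0 - t) * A + t * B
        Q, _ = np.linalg.qr(M)  # columns of Q are orthonormal
        frames.append(Q)
    return frames

rng = np.random.default_rng(0)
d = 5
A, _ = np.linalg.qr(rng.normal(size=(d, 2)))  # start view
B, _ = np.linalg.qr(rng.normal(size=(d, 2)))  # end view
X = rng.normal(size=(100, d))                 # stand-in high-dimensional data
frames = transition_frames(A, B)
for Q in frames:
    Y = X @ Q  # 100 x 2 scatter-plot coordinates for this animation frame
```

Rendering each `Y` in sequence gives the animated transition between the two subspace viewpoints.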

  14. Visual Exploration of High-Dimensional Data through Subspace Analysis and Dynamic Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, S.; Wang, B.; Thiagarajan, Jayaraman J.

    2015-06-01

    We introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.

  15. Prototype crawling robotics system for remote visual inspection of high-mast light poles.

    DOT National Transportation Integrated Search

    1997-01-01

    This report presents the results of a project to develop a crawling robotics system for the remote visual inspection of high-mast light poles in Virginia. The first priority of this study was to develop a simple robotics application that would reduce...

  16. Priority Determination for AVC Funded R&D Projects.

    ERIC Educational Resources Information Center

    Wilkinson, Gene L.

    As an extension of ideas suggested in an earlier paper which proposed a project control system for Indiana University's Audio-Visual Center (see EM 010 306), this paper examines the establishment of project legitimacy and priority within the system and reviews the need to stimulate specific research proposals as well as generating a matrix of…

  17. Patient-Clinician Encounter Information Modeling Through Web Based Intelligent 3D Visual Interface

    DTIC Science & Technology

    2002-09-01

    ...the system must allow immediate access to the lab data without the need to abort the evaluation process, and (5) must apply visual thinking principles. (Report excerpt, Sigma Systems Research, Inc.; Figure 1 of the report shows the two major elements of the developed medical data visualization framework.)

  18. Planetary Surface Visualization and Analytics

    NASA Astrophysics Data System (ADS)

    Law, E. S.; Solar System Treks Team

    2018-04-01

    An introduction and update of the Solar System Treks Project which provides a suite of interactive visualization and analysis tools to enable users (engineers, scientists, public) to access large amounts of mapped planetary data products.

  19. JUICE: a data management system that facilitates the analysis of large volumes of information in an EST project workflow

    PubMed Central

    Latorre, Mariano; Silva, Herman; Saba, Juan; Guziolowski, Carito; Vizoso, Paula; Martinez, Veronica; Maldonado, Jonathan; Morales, Andrea; Caroca, Rodrigo; Cambiazo, Veronica; Campos-Vargas, Reinaldo; Gonzalez, Mauricio; Orellana, Ariel; Retamales, Julio; Meisel, Lee A

    2006-01-01

    Background Expressed sequence tag (EST) analyses provide a rapid and economical means to identify candidate genes that may be involved in a particular biological process. These ESTs are useful in many Functional Genomics studies. However, the large quantity and complexity of the data generated during an EST sequencing project can make the analysis of this information a daunting task. Results In an attempt to make this task friendlier, we have developed JUICE, an open source data management system (Apache + PHP + MySQL on Linux), which enables the user to easily upload, organize, visualize and search the different types of data generated in an EST project pipeline. In contrast to other systems, the JUICE data management system allows a branched pipeline to be established, modified and expanded, during the course of an EST project. The web interfaces and tools in JUICE enable the users to visualize the information in a graphical, user-friendly manner. The user may browse or search for sequences and/or sequence information within all the branches of the pipeline. The user can search using terms associated with the sequence name, annotation or other characteristics stored in JUICE and associated with sequences or sequence groups. Groups of sequences can be created by the user, stored in a clipboard and/or downloaded for further analyses. Different user profiles restrict the access of each user depending upon their role in the project. The user may have access exclusively to visualize sequence information, access to annotate sequences and sequence information, or administrative access. Conclusion JUICE is an open source data management system that has been developed to aid users in organizing and analyzing the large amount of data generated in an EST Project workflow. JUICE has been used in one of the first functional genomics projects in Chile, entitled "Functional Genomics in nectarines: Platform to potentiate the competitiveness of Chile in fruit exportation". 
However, due to its ability to organize and visualize data from external pipelines, JUICE is a flexible data management system that should be useful for other EST/Genome projects. The JUICE data management system is released under the Open Source GNU Lesser General Public License (LGPL). JUICE may be downloaded from or . PMID:17123449

  20. A vision fusion treatment system based on ATtiny26L

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqing; Zhang, Chunxi; Wang, Jiqiang

    2006-11-01

    Vision fusion treatment is an important and effective therapy for children with strabismus. We first present a vision fusion treatment system based on the principle of the eyes following a moving visual survey pole. In this system, the visual survey pole starts about 35 centimeters from the patient's face and moves toward the midpoint between the two eyes. The patient's eyes follow the movement of the pole; when they can no longer follow it, one or both eyes turn away from the pole, and this displacement is recorded on each trial. A popular single-chip microcomputer, the ATtiny26L, is used in the system; its PWM output signal drives the visual survey pole at a continuously variable speed, so that the pole's movement accords with the modulation law by which the eyes follow it.

  1. Visualization and characterization of users in a citizen science project

    NASA Astrophysics Data System (ADS)

    Morais, Alessandra M. M.; Raddick, Jordan; Coelho dos Santos, Rafael D.

    2013-05-01

    Recent technological advances have allowed the creation and use of internet-based systems where many users can collaborate, gathering and sharing information for specific or general purposes: social networks, e-commerce review systems, collaborative knowledge systems, etc. Since most of the data collected in these systems is user-generated, understanding the motivations and general behavior of users is a very important issue. Of particular interest are citizen science projects, where users without scientific training are asked to collaborate by labeling and classifying information (either automatically, by giving away idle computer time, or manually, by actually viewing data and providing information about it). Understanding the behavior of users of these types of data collection systems may help increase the involvement of the users, categorize users according to different parameters, facilitate their collaboration with the systems, support the design of better user interfaces, and allow better planning and deployment of similar projects and systems. The behavior of these users can be estimated through analysis of their collaboration track: records of which user did what and when can be easily and unobtrusively collected in several different ways, the simplest being a log of activities. In this paper we present some results on the visualization and characterization of almost 150,000 users with more than 80,000,000 collaborations with a citizen science project, Galaxy Zoo I, which asked users to classify galaxy images. Basic visualization techniques are not applicable due to the number of users, so techniques to characterize users' behavior based on feature extraction and clustering are used.
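
The feature-extraction-plus-clustering step mentioned at the end can be sketched as follows (a minimal k-means over invented per-user features; the paper does not commit to this particular algorithm or feature set):

```python
import numpy as np

def kmeans(X, k=3, iters=20):
    # Deterministic farthest-point initialization, then standard Lloyd steps.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # assign each user's feature vector to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Hypothetical per-user features: total classifications, active days,
# mean days between sessions (invented numbers, not Galaxy Zoo data).
X = np.array([[5, 1, 30.0], [7, 2, 25.0], [900, 200, 0.5],
              [850, 180, 0.7], [120, 30, 3.0], [100, 25, 4.0]])
labels, _ = kmeans(X, k=3)  # e.g. casual, heavy, and moderate contributors
```

The resulting cluster labels are what get visualized in place of 150,000 individual user traces.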

  2. Geowall: Investigations into low-cost stereo display technologies

    USGS Publications Warehouse

    Steinwand, Daniel R.; Davis, Brian; Weeks, Nathan

    2003-01-01

    Recently, the combination of new projection technology, fast, low-cost graphics cards, and Linux-powered personal computers has made it possible to provide a stereoprojection and stereoviewing system that is much more affordable than previous commercial solutions. These Geowall systems are low-cost visualization systems built with commodity off-the-shelf components, running on open-source (and other) operating systems and using open-source applications software. In short, they are "Beowulf-class" visualization systems that provide a cost-effective way for the U.S. Geological Survey to broaden participation in the visualization community and view stereoimagery and three-dimensional models.

  3. Using Visual Simulation Tools And Learning Outcomes-Based Curriculum To Help Transportation Engineering Students And Practitioners To Better Understand And Design Traffic Signal Control Systems

    DOT National Transportation Integrated Search

    2012-06-01

    The use of visual simulation tools to convey complex concepts has become a useful tool in education as well as in research. : This report describes a project that developed curriculum and visualization tools to train transportation engineering studen...

  4. Functional localization in the nucleus rotundus.

    DOT National Transportation Integrated Search

    1977-10-01

    Work has suggested that the effects of psychoactive drugs on visual performance may best be understood, and/or predicted, by studying differential effects of the drugs on functionally differentiated sets of neurones in visual projection systems in th...

  5. Looking like Limulus? – Retinula axons and visual neuropils of the median and lateral eyes of scorpions

    PubMed Central

    2013-01-01

    Background Despite ongoing interest in the neurophysiology of visual systems in scorpions, aspects of their neuroanatomy have received little attention. Lately sets of neuroanatomical characters have contributed important arguments to the discussion of arthropod ground patterns and phylogeny. In various attempts to reconstruct phylogeny (from morphological, morphological + molecular, or molecular data) scorpions were placed either as basalmost Arachnida, or within Arachnida with changing sister-group relationships, or grouped with the extinct Eurypterida and Xiphosura inside the Merostomata. Thus, the position of scorpions is a key to understanding chelicerate evolution. To shed more light on this, the present study for the first time combines various techniques (Cobalt fills, DiI / DiO labelling, osmium-ethyl gallate procedure, and AMIRA 3D-reconstruction) to explore central projections and visual neuropils of median and lateral eyes in Euscorpius italicus (Herbst, 1800) and E. hadzii Di Caporiacco, 1950. Results Scorpion median eye retinula cells are linked to a first and a second visual neuropil, while some fibres additionally connect the median eyes with the arcuate body. The lateral eye retinula cells are linked to a first and a second visual neuropil as well, with the second neuropil being partly shared by projections from both eyes. Conclusions Comparing these results to previous studies on the visual systems of scorpions and other chelicerates, we found striking similarities to the innervation pattern in Limulus polyphemus for both median and lateral eyes. This supports from a visual system point of view at least a phylogenetically basal position of Scorpiones in Arachnida, or even a close relationship to Xiphosura. In addition, we propose a ground pattern for the central projections of chelicerate median eyes. PMID:23842208

  6. Evaluating visual discomfort in stereoscopic projection-based CAVE system with a close viewing distance

    NASA Astrophysics Data System (ADS)

    Song, Weitao; Weng, Dongdong; Feng, Dan; Li, Yuqian; Liu, Yue; Wang, Yongtian

    2015-05-01

    As one of the most popular immersive Virtual Reality (VR) systems, a stereoscopic cave automatic virtual environment (CAVE) typically consists of 4 to 6 rear-projected screens forming the 3 m-by-3 m sides of a room. While many endeavors have been made to reduce the size of projection-based CAVE systems, the problem of asthenopia caused by lengthy exposure to stereoscopic images in such a CAVE at a close viewing distance has seldom been tackled. In this paper, we propose a lightweight approach that uses a convex eyepiece to reduce the visual discomfort induced by stereoscopic vision. An empirical experiment was conducted to examine the feasibility of the convex eyepiece over a large depth of field (DOF) at a close viewing distance, both objectively and subjectively. The results show the positive effect of the convex eyepiece on the relief of eyestrain.
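
The optical effect of a convex eyepiece can be worked through with the standard thin-lens equation (the focal length and screen distance below are illustrative, not values from the paper):

```python
def virtual_image_distance(f, d_o):
    # Thin-lens relation 1/f = 1/d_o + 1/d_i: for a screen inside the
    # focal length (d_o < f) the image distance d_i is negative, i.e. a
    # virtual image forms at f*d_o/(f - d_o), moving the accommodation
    # distance farther from the eye and easing close-viewing eyestrain.
    return f * d_o / (f - d_o)

# Illustrative numbers: a screen 1.0 m away, viewed through a 2.0 m
# focal-length eyepiece, appears at 2.0 m.
print(virtual_image_distance(2.0, 1.0))  # 2.0
```

Pushing the apparent screen distance outward reduces the vergence-accommodation mismatch that drives the discomfort studied in the paper.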

  7. The manager's guide to NASA graphics standards

    NASA Technical Reports Server (NTRS)

    1980-01-01

    NASA managers have the responsibility to initiate and carry out communication projects with a degree of sophistication that properly reflects the agency's substantial work. Over the course of the last decade, it has become more important to clearly communicate NASA's objectives in aeronautical research, space exploration, and related sciences. Many factors come into play when preparing communication materials for internal and external use. Three overriding factors are: producing the materials by the most cost-efficient method; ensuring that each item reflects the vitality, knowledge, and precision of NASA; and portraying all visual materials with a unified appearance. This guide will serve as the primary tool in meeting these criteria. This publication spells out the many benefits inherent in the Unified Visual Communication System and describes how the system was developed. The last section lists the graphic coordinators at headquarters and the centers who can assist with graphic projects. By understanding the Unified Visual Communication System, NASA managers will be able to manage a project from inception through production in the most cost-effective manner while maintaining the quality of NASA communications.

  8. Three-dimensional user interfaces for scientific visualization

    NASA Technical Reports Server (NTRS)

    Vandam, Andries

    1995-01-01

    The main goal of this project is to develop novel and productive user interface techniques for creating and managing visualizations of computational fluid dynamics (CFD) datasets. We have implemented an application framework in which we can build user interfaces for visualizing CFD datasets. This UI technology allows users to interactively place visualization probes in a dataset and modify some of their parameters. We have also implemented a time-critical scheduling system which strives to maintain a constant frame rate regardless of the number of visualization techniques in use. In the past year, we have published parts of this research at two conferences: the research annotation system at Visualization 1994, and the 3D user interface at UIST 1994. The real-time scheduling system has been submitted to the SIGGRAPH 1995 conference. Copies of these documents are included with this report.

  9. Structural organization of parallel information processing within the tectofugal visual system of the pigeon.

    PubMed

    Hellmann, B; Güntürkün, O

    2001-01-01

    Visual information processing within the ascending tectofugal pathway to the forebrain undergoes essential rearrangements between the mesencephalic tectum opticum and the diencephalic nucleus rotundus of birds. The outer tectal layers constitute a two-dimensional map of the visual surrounding, whereas nucleus rotundus is characterized by functional domains in which different visual features such as movement, color, or luminance are processed in parallel. Morphologic correlates of this reorganization were investigated by means of focal injections of the neuronal tracer choleratoxin subunit B into different regions of the nuclei rotundus and triangularis of the pigeon. Dependent on the thalamic injection site, variations in the retrograde labeling pattern of ascending tectal efferents were observed. All rotundal projecting neurons were located within the deep tectal layer 13. Five different cell populations were distinguished that could be differentiated according to their dendritic ramifications within different retinorecipient laminae and their axons projecting to different subcomponents of the nucleus rotundus. Because retinorecipient tectal layers differ in their input from distinct classes of retinal ganglion cells, each tectorotundal cell type probably processes different aspects of the visual surrounding. Therefore, the differential input/output connections of the five tectorotundal cell groups might constitute the structural basis for spatially segregated parallel information processing of different stimulus aspects within the tectofugal visual system. Because two of five rotundal projecting cell groups additionally exhibited quantitative shifts along the dorsoventral extension of the tectum, data also indicate visual field-dependent alterations in information processing for particular visual features. Copyright 2001 Wiley-Liss, Inc.

  10. Visualizing speciation in artificial cichlid fish.

    PubMed

    Clement, Ross

    2006-01-01

    The Cichlid Speciation Project (CSP) is an ALife simulation system for investigating open problems in the speciation of African cichlid fish. The CSP can be used to perform a wide range of experiments that show that speciation is a natural consequence of certain biological systems. A visualization system capable of extracting the history of speciation from low-level trace data and creating a phylogenetic tree has been implemented. Unlike previous approaches, this visualization system presents a concrete trace of speciation, rather than a summary of low-level information from which the viewer can make subjective decisions on how speciation progressed. The phylogenetic trees are a more objective visualization of speciation, and enable automated collection and summarization of the results of experiments. The visualization system is used to create a phylogenetic tree from an experiment that models sympatric speciation.

  11. Automatic transfer function design for medical visualization using visibility distributions and projective color mapping.

    PubMed

    Cai, Lile; Tay, Wei-Liang; Nguyen, Binh P; Chui, Chee-Kong; Ong, Sim-Heng

    2013-01-01

    Transfer functions play a key role in volume rendering of medical data, but transfer function manipulation is unintuitive and can be time-consuming; achieving an optimal visualization of patient anatomy or pathology is difficult. To overcome this problem, we present a system for automatic transfer function design based on visibility distribution and projective color mapping. Instead of assigning opacity directly based on voxel intensity and gradient magnitude, the opacity transfer function is automatically derived by matching the observed visibility distribution to a target visibility distribution. An automatic color assignment scheme based on projective mapping is proposed to assign colors that allow for the visual discrimination of different structures, while also reflecting the degree of similarity between them. When our method was tested on several medical volumetric datasets, the key structures within the volume were clearly visualized with minimal user intervention. Copyright © 2013 Elsevier Ltd. All rights reserved.
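
The visibility-matching idea can be caricatured in a few lines, if visibility is crudely modeled as opacity times voxel count with occlusion ignored (the paper's actual visibility computation integrates along viewing rays; this is only a sketch, and all names and numbers here are invented):

```python
import numpy as np

def match_visibility(counts, target, iters=20):
    # Crude model: visibility of intensity bin i is opacity[i] * counts[i],
    # normalized, with occlusion ignored. Multiplicatively update the
    # opacities until that distribution matches the target distribution,
    # renormalizing so the largest opacity is 1.
    opacity = np.full_like(target, 0.5, dtype=float)
    for _ in range(iters):
        vis = opacity * counts
        vis = vis / vis.sum()
        opacity *= target / np.maximum(vis, 1e-12)
        opacity = np.clip(opacity / opacity.max(), 0.0, 1.0)
    return opacity

counts = np.array([1000.0, 200.0, 50.0])  # e.g. air, soft tissue, bone voxels
target = np.array([0.1, 0.3, 0.6])        # want bone to dominate the image
opacity = match_visibility(counts, target)
```

Abundant but uninteresting material (air) ends up nearly transparent, while rare structures of interest (bone) get high opacity, which is the intuition behind visibility-driven transfer function design.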

  12. GROTTO visualization for decision support

    NASA Astrophysics Data System (ADS)

    Lanzagorta, Marco O.; Kuo, Eddy; Uhlmann, Jeffrey K.

    1998-08-01

    In this paper we describe the GROTTO visualization projects being carried out at the Naval Research Laboratory. GROTTO is a CAVE-like system, that is, a surround-screen, surround-sound, immersive virtual reality device. We have explored GROTTO visualization in a variety of scientific areas including oceanography, meteorology, chemistry, biochemistry, computational fluid dynamics and space sciences. Research has emphasized the applications of GROTTO visualization to military, land- and sea-based command and control. Examples include the visualization of ocean current models for the simulation and study of mine drifting and, within our computational steering project, the effects of electro-magnetic radiation on missile defense satellites. We discuss plans to apply this technology to decision support applications involving the deployment of autonomous vehicles into contaminated battlefield environments, fire fighter control and hostage rescue operations.

  13. Market basket analysis visualization on a spherical surface

    NASA Astrophysics Data System (ADS)

    Hao, Ming C.; Hsu, Meichun; Dayal, Umeshwar; Wei, Shu F.; Sprenger, Thomas; Holenstein, Thomas

    2001-05-01

    This paper discusses the visualization of relationships in e-commerce transactions. To date, many practical research projects have shown the usefulness of a physics-based mass-spring technique for laying out closely related data items on a graph. We describe a market basket analysis visualization system using this technique. The system: (1) integrates a physics-based engine into a visual data mining platform; (2) uses a 3D spherical surface to visualize clusters of related data items; and (3) for large volumes of transactions, uses hidden structures to unclutter the display. Several examples of market basket analysis are also provided.
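
A minimal sketch of a mass-spring layout constrained to a spherical surface, with invented force constants and a made-up similarity matrix (the system described above is certainly more elaborate):

```python
import numpy as np

def sphere_layout(sim, steps=200, lr=0.05, seed=1):
    # Force-directed layout on the unit sphere: similar items attract
    # (spring term), all pairs repel (inverse-square term), and positions
    # are projected back onto the sphere after every step.
    n = len(sim)
    P = np.random.default_rng(seed).normal(size=(n, 3))
    P /= np.linalg.norm(P, axis=1, keepdims=True)
    for _ in range(steps):
        F = np.zeros_like(P)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                d = P[j] - P[i]
                dist = np.linalg.norm(d) + 1e-9
                F[i] += sim[i, j] * d          # pull related items together
                F[i] -= 0.1 * d / dist ** 3    # push every pair apart
        P += lr * np.clip(F, -10.0, 10.0)
        P /= np.linalg.norm(P, axis=1, keepdims=True)  # back onto the sphere
    return P

# Two pairs of strongly co-purchased items (invented similarity matrix).
sim = np.array([[0, 1, 0, 0], [1, 0, 0, 0],
                [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
P = sphere_layout(sim)  # related items end up near each other on the sphere
```

The per-step renormalization is what keeps the mass-spring equilibrium on the spherical display surface rather than in free 3D space.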

  14. Visual processing in the central bee brain.

    PubMed

    Paulk, Angelique C; Dacks, Andrew M; Phillips-Portillo, James; Fellous, Jean-Marc; Gronenberg, Wulfila

    2009-08-12

    Visual scenes comprise enormous amounts of information from which nervous systems extract behaviorally relevant cues. In most model systems, little is known about the transformation of visual information as it occurs along visual pathways. We examined how visual information is transformed physiologically as it is communicated from the eye to higher-order brain centers using bumblebees, which are known for their visual capabilities. We recorded intracellularly in vivo from 30 neurons in the central bumblebee brain (the lateral protocerebrum) and compared these neurons to 132 neurons from more distal areas along the visual pathway, namely the medulla and the lobula. In these three brain regions (medulla, lobula, and central brain), we examined correlations between the neurons' branching patterns and their responses primarily to color, but also to motion stimuli. Visual neurons projecting to the anterior central brain were generally color sensitive, while neurons projecting to the posterior central brain were predominantly motion sensitive. The temporal response properties differed significantly between these areas, with an increase in spike time precision across trials and a decrease in average reliable spiking as visual information processing progressed from the periphery to the central brain. These data suggest that neurons along the visual pathway to the central brain not only are segregated with regard to the physical features of the stimuli (e.g., color and motion), but also differ in the way they encode stimuli, possibly to allow for efficient parallel processing to occur.

  15. Mobile Monitoring Stations and Web Visualization of Biotelemetric System - Guardian II

    NASA Astrophysics Data System (ADS)

    Krejcar, Ondrej; Janckulik, Dalibor; Motalova, Leona; Kufel, Jan

    The main area of interest of our project is to provide a solution that can be used in different areas of health care and that will be available through PDAs (Personal Digital Assistants), web browsers or desktop clients. The realized system deals with an ECG sensor connected to mobile equipment, such as a PDA or embedded device, based on the Microsoft Windows Mobile operating system. The whole system is based on the architecture of the .NET Compact Framework and Microsoft SQL Server. Visualization possibilities for the web interface and ECG data are also discussed, and a final recommendation is made for a Microsoft Silverlight solution, along with current screenshots of the implemented solution. The project was successfully tested in a real environment in a cryogenic room (-136 °C).

  16. Organization of the Drosophila larval visual circuit

    PubMed Central

    Gendre, Nanae; Neagu-Maier, G Larisa; Fetter, Richard D; Schneider-Mizell, Casey M; Truman, James W; Zlatic, Marta; Cardona, Albert

    2017-01-01

    Visual systems transduce, process and transmit light-dependent environmental cues. Computation of visual features depends on photoreceptor neuron types (PR) present, organization of the eye and wiring of the underlying neural circuit. Here, we describe the circuit architecture of the visual system of Drosophila larvae by mapping the synaptic wiring diagram and neurotransmitters. By contacting different targets, the two larval PR-subtypes create two converging pathways potentially underlying the computation of ambient light intensity and temporal light changes already within this first visual processing center. Locally processed visual information then signals via dedicated projection interneurons to higher brain areas including the lateral horn and mushroom body. The stratified structure of the larval optic neuropil (LON) suggests common organizational principles with the adult fly and vertebrate visual systems. The complete synaptic wiring diagram of the LON paves the way to understanding how circuits with reduced numerical complexity control wide ranges of behaviors.

  17. A GeoWall with Physics and Astronomy Applications

    NASA Astrophysics Data System (ADS)

    Dukes, Phillip; Bruton, Dan

    2008-03-01

    A GeoWall is a passive stereoscopic projection system that can be used by students, teachers, and researchers for visualization of the structure and dynamics of three-dimensional systems and data. The type of system described here adequately provides 3-D visualization in natural color for large or small groups of viewers. The name "GeoWall" derives from its initial development to visualize data in the geosciences. An early GeoWall system was developed by Paul Morin at the electronic visualization laboratory at the University of Minnesota and was applied in an introductory geology course in spring of 2001. Since that time, several stereoscopic media, which are applicable to introductory-level physics and astronomy classes, have been developed and released into the public domain. In addition to the GeoWall's application in the classroom, there is considerable value in its use as part of a general science outreach program. In this paper we briefly describe the theory of operation of stereoscopic projection and the basic necessary components of a GeoWall system. Then we briefly describe how we are using a GeoWall as an instructional tool for the classroom and informal astronomy education and in research. Finally, we list sources for several of the free software media in physics and astronomy available for use with a GeoWall system.

  18. Molecular Dynamics Visualization (MDV): Stereoscopic 3D Display of Biomolecular Structure and Interactions Using the Unity Game Engine.

    PubMed

    Wiebrands, Michael; Malajczuk, Chris J; Woods, Andrew J; Rohl, Andrew L; Mancera, Ricardo L

    2018-06-21

    Molecular graphics systems are visualization tools which, upon integration into a 3D immersive environment, provide a unique virtual reality experience for research and teaching of biomolecular structure, function and interactions. We have developed a molecular structure and dynamics application, the Molecular Dynamics Visualization tool, that uses the Unity game engine combined with large scale, multi-user, stereoscopic visualization systems to deliver an immersive display experience, particularly with a large cylindrical projection display. The application is structured to separate the biomolecular modeling and visualization systems. The biomolecular model loading and analysis system was developed as a stand-alone C# library and provides the foundation for the custom visualization system built in Unity. All visual models displayed within the tool are generated using Unity-based procedural mesh building routines. A 3D user interface was built to allow seamless dynamic interaction with the model while being viewed in 3D space. Biomolecular structure analysis and display capabilities are exemplified with a range of complex systems involving cell membranes, protein folding and lipid droplets.

  19. Hypothalamic Projections to the Optic Tectum in Larval Zebrafish

    PubMed Central

    Heap, Lucy A.; Vanwalleghem, Gilles C.; Thompson, Andrew W.; Favre-Bulle, Itia; Rubinsztein-Dunlop, Halina; Scott, Ethan K.

    2018-01-01

    The optic tectum of larval zebrafish is an important model for understanding visual processing in vertebrates. The tectum has been traditionally viewed as dominantly visual, with a majority of studies focusing on the processes by which tectal circuits receive and process retinally-derived visual information. Recently, a handful of studies have shown a much more complex role for the optic tectum in larval zebrafish, and anatomical and functional data from these studies suggest that this role extends beyond the visual system, and beyond the processing of exclusively retinal inputs. Consistent with this evolving view of the tectum, we have used a Gal4 enhancer trap line to identify direct projections from rostral hypothalamus (RH) to the tectal neuropil of larval zebrafish. These projections ramify within the deepest laminae of the tectal neuropil, the stratum album centrale (SAC)/stratum griseum periventriculare (SPV), and also innervate strata distinct from those innervated by retinal projections. Using optogenetic stimulation of the hypothalamic projection neurons paired with calcium imaging in the tectum, we find rebound firing in tectal neurons consistent with hypothalamic inhibitory input. Our results suggest that tectal processing in larval zebrafish is modulated by hypothalamic inhibitory inputs to the deep tectal neuropil. PMID:29403362

  20. Hypothalamic Projections to the Optic Tectum in Larval Zebrafish.

    PubMed

    Heap, Lucy A; Vanwalleghem, Gilles C; Thompson, Andrew W; Favre-Bulle, Itia; Rubinsztein-Dunlop, Halina; Scott, Ethan K

    2017-01-01

    The optic tectum of larval zebrafish is an important model for understanding visual processing in vertebrates. The tectum has been traditionally viewed as dominantly visual, with a majority of studies focusing on the processes by which tectal circuits receive and process retinally-derived visual information. Recently, a handful of studies have shown a much more complex role for the optic tectum in larval zebrafish, and anatomical and functional data from these studies suggest that this role extends beyond the visual system, and beyond the processing of exclusively retinal inputs. Consistent with this evolving view of the tectum, we have used a Gal4 enhancer trap line to identify direct projections from rostral hypothalamus (RH) to the tectal neuropil of larval zebrafish. These projections ramify within the deepest laminae of the tectal neuropil, the stratum album centrale (SAC)/stratum griseum periventriculare (SPV), and also innervate strata distinct from those innervated by retinal projections. Using optogenetic stimulation of the hypothalamic projection neurons paired with calcium imaging in the tectum, we find rebound firing in tectal neurons consistent with hypothalamic inhibitory input. Our results suggest that tectal processing in larval zebrafish is modulated by hypothalamic inhibitory inputs to the deep tectal neuropil.

  1. Interactive Learning to Stimulate the Brain's Visual Center and to Enhance Memory Retention

    ERIC Educational Resources Information Center

    Yun, Yang H.; Allen, Philip A.; Chaumpanich, Kritsakorn; Xiao, Yingcai

    2014-01-01

    This short paper describes an ongoing NSF-funded project on enhancing science and engineering education using the latest technology. More specifically, the project aims at developing an interactive learning system with Microsoft Kinect™ and Unity3D game engine. This system promotes active, rather than passive, learning by employing embodied…

  2. Project ME: A Report on the Learning Wall System.

    ERIC Educational Resources Information Center

    Heilig, Morton L.

    The learning wall system, which consists primarily of a special wall used instead of a screen for a variety of projection purposes, is described, shown diagrammatically, and pictured. Designed to provide visual perceptual motor training on a level that would fall between gross and fine motor performance for perceptually handicapped children, the…

  3. Dynamic Interactions for Network Visualization and Simulation

    DTIC Science & Technology

    2009-03-01

    projects.htm, Site accessed January 5, 2009. 12. John S. Weir, Major, USAF, Mediated User-Simulator Interactive Command with Visualization (MUSIC-V). Master's...Computing Sciences in Colleges, December 2005). 14. Enrique Campos-Nanez, “nscript user manual,” Department of System Engineering, University of

  4. Vision in two cyprinid fish: implications for collective behavior

    PubMed Central

    Moore, Bret A.; Tyrrell, Luke P.; Fernández-Juricic, Esteban

    2015-01-01

    Many species of fish rely on their visual systems to interact with conspecifics, and these interactions can lead to collective behavior. Individual-based models have been used to predict collective interactions; however, these models generally make simplistic assumptions about the sensory systems that are applied to different species without proper empirical testing. This could limit our ability to predict (and test empirically) collective behavior in species with very different sensory requirements. In this study, we characterized components of the visual system in two species of cyprinid fish known to engage in visually dependent collective interactions (zebrafish Danio rerio and golden shiner Notemigonus crysoleucas) and derived quantitative predictions about the positioning of individuals within schools. We found that both species had relatively narrow binocular and blind fields and wide visual coverage. However, golden shiners had more visual coverage in the vertical plane (binocular field extending behind the head) and higher visual acuity than zebrafish. The centers of acute vision (areae) of both species projected in the fronto-dorsal region of the visual field, but those of the zebrafish projected more dorsally than those of the golden shiner. Based on this visual sensory information, we predicted that: (a) predator detection time could be increased by >1,000% in zebrafish and >100% in golden shiners with an increase in nearest neighbor distance, (b) zebrafish schools would have a higher roughness value (surface area/volume ratio) than those of golden shiners, and (c) that nearest neighbor distance would vary from 8 to 20 cm to visually resolve conspecific striping patterns in both species.
Overall, considering between-species differences in the sensory system of species exhibiting collective behavior could change the predictions about the positioning of individuals in the group as well as the shape of the school, which can have implications for group cohesion. We suggest that more effort should be invested in assessing the role of the sensory system in shaping local interactions driving collective behavior. PMID:26290783
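    The geometry behind the third prediction — the distance at which a striping pattern of a given spatial period can just be resolved by an eye of a given acuity — can be sketched as follows. The stripe period and acuity values here are illustrative placeholders, not the species-specific measurements used in the study:

```python
import math

def max_resolvable_distance(stripe_period_m, acuity_cyc_per_deg):
    """Distance at which one stripe cycle subtends the eye's minimum
    resolvable angle (1 / acuity degrees per cycle)."""
    min_angle_rad = math.radians(1.0 / acuity_cyc_per_deg)
    # A feature of size s at distance d subtends ~2*atan(s / (2d));
    # solving for d at the resolution limit gives:
    return stripe_period_m / (2.0 * math.tan(min_angle_rad / 2.0))

# Hypothetical numbers: a 5 mm stripe period seen by eyes of
# 1 vs 4 cycles/degree acuity.
d_low = max_resolvable_distance(0.005, 1.0)
d_high = max_resolvable_distance(0.005, 4.0)
print(f"{d_low:.2f} m vs {d_high:.2f} m")
```

    The higher-acuity eye resolves the same pattern from proportionally farther away, which is why between-species acuity differences translate directly into different predicted nearest-neighbor distances.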

  5. ML-o-Scope: A Diagnostic Visualization System for Deep Machine Learning Pipelines

    DTIC Science & Technology

    2014-05-16

    ML-o-scope: a diagnostic visualization system for deep machine learning pipelines. Daniel Bruckner, Electrical Engineering and Computer Sciences...the system as a support for tuning large-scale object-classification pipelines. 1 Introduction. A new generation of pipelined machine learning models

  6. ExoMars VisLoc - The Visual Localisation System for the ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Ward, R.; Hamilton, W.; Silva, N.; Pereira, V.

    2016-08-01

    Maintaining accurate knowledge of the current position of vehicles on the surface of Mars is a considerable problem. The lack of an orbital GPS means that the absolute position of a rover at any instant is very difficult to determine, and with that it is difficult to accurately and safely plan hazard avoidance manoeuvres. Some on-board methods of determining the evolving pose of a rover are well known, such as using wheel odometry to keep a log of the distance travelled. However, there are associated problems - wheels can slip in the Martian soil, producing odometry readings which can mislead navigation algorithms. One solution is to use a visual localisation system, which uses cameras to determine the actual rover motion from images of the terrain. By measuring movement against the terrain, an independent measure of the actual motion can be obtained to a high degree of accuracy. This paper presents the progress of the project to develop the Visual Localisation system for the ExoMars rover (VisLoc). The core algorithm used in the system is known as OVO (Oxford Visual Odometry), developed by the Mobile Robotics Group at the University of Oxford. Over a number of projects this system has been adapted from its original purpose (navigation systems for autonomous vehicles) into a viable system for the unique challenges associated with extra-terrestrial use.
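    At the core of any visual-odometry pipeline is the estimation of rigid camera motion between two frames from matched terrain points. A minimal illustrative sketch of that step — the SVD-based (Kabsch) rigid alignment of two matched 3-D point sets, not the actual OVO algorithm — looks like this:

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares rotation R and translation t with Q ~= P @ R.T + t,
    for matched Nx3 point sets (Kabsch algorithm)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: recover a known 10-degree yaw and a 0.5 m advance.
rng = np.random.default_rng(0)
P = rng.uniform(-1, 1, (50, 3))              # terrain points seen at frame k
a = np.radians(10)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, 0.0, 0.0])
Q = P @ R_true.T + t_true                    # same points at frame k+1
R, t = rigid_motion(P, Q)
```

    Chaining such frame-to-frame estimates gives the evolving pose; the practical difficulty, which a full system like OVO must address, lies in obtaining reliable point matches from imagery of low-texture terrain.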

  7. ESIF 2016: Modernizing Our Grid and Energy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Becelaere, Kimberly

    This 2016 annual report highlights work conducted at the Energy Systems Integration Facility (ESIF) in FY 2016, including grid modernization, high-performance computing and visualization, and INTEGRATE projects.

  8. Simulation and Visualization of Chaos in a Driven Nonlinear Pendulum -- An Aid to Introducing Chaotic Systems in Physics

    NASA Astrophysics Data System (ADS)

    Akpojotor, Godfrey; Ehwerhemuepha, Louis; Amromanoh, Ogheneriobororue

    2013-03-01

    The presence of physical systems whose characteristics change in a seemingly erratic manner gives rise to the study of chaotic systems. The characteristics of these systems are due to their hypersensitivity to changes in initial conditions. In order to understand chaotic systems, some sort of simulation and visualization is pertinent. Consequently, in this work, we have simulated and graphically visualized chaos in a driven nonlinear pendulum as a means of introducing chaotic systems. The results obtained, which highlight the hypersensitivity of the pendulum, are used to discuss the effectiveness of teaching and learning the physics of chaotic systems using Python. This study is one of the many studies under the African Computational Science and Engineering Tour Project (PASET), which is using Python to model, simulate and visualize concepts, laws and phenomena in science and engineering to complement the teaching/learning of theory and experiment.
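    A minimal Python sketch of the kind of simulation described — two driven, damped pendulums started a hair apart in a chaotic parameter regime, integrated with fourth-order Runge-Kutta — makes the hypersensitivity visible directly. The parameter values are standard textbook choices, not necessarily those used in the project:

```python
import math

def deriv(theta, omega, t, q=0.5, F=1.2, Wd=2/3):
    """Driven, damped pendulum: theta'' = -sin(theta) - q*theta' + F*sin(Wd*t)."""
    return omega, -math.sin(theta) - q * omega + F * math.sin(Wd * t)

def rk4_step(theta, omega, t, dt):
    k1 = deriv(theta, omega, t)
    k2 = deriv(theta + dt/2*k1[0], omega + dt/2*k1[1], t + dt/2)
    k3 = deriv(theta + dt/2*k2[0], omega + dt/2*k2[1], t + dt/2)
    k4 = deriv(theta + dt*k3[0], omega + dt*k3[1], t + dt)
    theta += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    omega += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    # keep the angle in (-pi, pi] so trajectories stay comparable
    theta = (theta + math.pi) % (2 * math.pi) - math.pi
    return theta, omega

dt, steps = 0.01, 20000                    # integrate to t = 200
a, b = (0.2, 0.0), (0.2 + 1e-6, 0.0)       # initial angles differ by 1e-6 rad
max_sep = 0.0
for n in range(steps):
    t = n * dt
    a = rk4_step(*a, t, dt)
    b = rk4_step(*b, t, dt)
    max_sep = max(max_sep, abs(a[0] - b[0]))
print(f"max angular separation: {max_sep:.3f} rad")
```

    With the driving amplitude F = 1.2 the motion is chaotic and the microscopic initial offset grows to an order-one angular separation; dropping F to ~0.5 makes the two trajectories stay locked together.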

  9. Macroscopic features of quantum fluctuations in large-N qubit systems

    NASA Astrophysics Data System (ADS)

    Klimov, Andrei B.; Muñoz, Carlos

    2014-05-01

    We introduce a discrete Q function of an N-qubit system projected into the space of symmetric measurements as a tool for analyzing general properties of quantum systems in the macroscopic limit. For known states the projected Q function helps to visualize the results of collective measurements, and for unknown states it can be approximately reconstructed by measuring the lowest moments of the collective variables.
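    A continuous-sphere analogue of such a Q function — a sketch under my own conventions, not the authors' discrete construction — evaluates, for a pure N-qubit state restricted to the symmetric (spin j = N/2) subspace, the squared overlap with a spin coherent state:

```python
import math

def q_function(c, theta, phi):
    """Husimi-style Q for a symmetric N-qubit (spin j = N/2) pure state.

    c : sequence of 2j+1 complex amplitudes, c[k] for m = j - k.
    Returns (2j+1)/(4*pi) * |<theta,phi|psi>|^2.
    """
    two_j = len(c) - 1
    overlap = 0j
    for k, ck in enumerate(c):                      # k = j - m
        amp = (math.comb(two_j, k) ** 0.5
               * math.cos(theta / 2) ** (two_j - k)
               * math.sin(theta / 2) ** k
               * complex(math.cos(k * phi), math.sin(k * phi)))
        overlap += amp.conjugate() * ck
    return (two_j + 1) / (4 * math.pi) * abs(overlap) ** 2

# The m = +j coherent state of N = 4 qubits peaks at the north pole:
c = [1, 0, 0, 0, 0]                                 # j = 2, state |j, j>
print(q_function(c, 0.0, 0.0))                      # maximum, (2j+1)/(4*pi)
print(q_function(c, math.pi, 0.0))                  # ~0 at the south pole
```

    Sampling this function over a (theta, phi) grid is what produces the macroscopic-limit visualizations of collective measurements the abstract refers to.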

  10. IN13B-1660: Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX

    NASA Technical Reports Server (NTRS)

    Chaudhary, Aashish; Votava, Petr; Nemani, Ramakrishna R.; Michaelis, Andrew; Kotfila, Chris

    2016-01-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging HPC and cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both HPC and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines for both the production process and the data products, and to share results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics and visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD) project, for which we are developing a new QA pipeline for the 25 PB system.

  11. Analytics and Visualization Pipelines for Big ­Data on the NASA Earth Exchange (NEX) and OpenNEX

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Votava, P.; Nemani, R. R.; Michaelis, A.; Kotfila, C.

    2016-12-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging HPC and cloud is a fairly new concept under active research, and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both HPC and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines for both the production process and the data products, and to share results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics and visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD) project, for which we are developing a new QA pipeline for the 25 PB system.

  12. Looking above the prairie: localized and upward acute vision in a native grassland bird.

    PubMed

    Tyrrell, Luke P; Moore, Bret A; Loftis, Christopher; Fernández-Juricic, Esteban

    2013-12-02

    Visual systems of open habitat vertebrates are predicted to have a band of acute vision across the retina (visual streak) and wide visual coverage to gather information along the horizon. We tested whether the eastern meadowlark (Sturnella magna) had this visual configuration given that it inhabits open grasslands. Contrary to our expectations, the meadowlark retina has a localized spot of acute vision (fovea) and relatively narrow visual coverage. The fovea projects above rather than towards the horizon with the head at rest, and individuals modify their body posture in tall grass to maintain a similar foveal projection. Meadowlarks have relatively large binocular fields and can see their bill tips, which may help with their probe-foraging technique. Overall, meadowlark vision does not fit the profile of vertebrates living in open habitats. The binocular field may control foraging while the fovea may be used for detecting and tracking aerial stimuli (predators, conspecifics).

  13. The Integration of Multi-State Clarus Data into Data Visualization Tools

    DOT National Transportation Integrated Search

    2011-12-20

    This project focused on the integration of all Clarus Data into the Regional Integrated Transportation Information System (RITIS) for real-time situational awareness and historical safety data analysis. The initial outcomes of this project are the fu...

  14. Integrated Data Visualization and Virtual Reality Tool

    NASA Technical Reports Server (NTRS)

    Dryer, David A.

    1998-01-01

    The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design and changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.

  15. CYCLOPS-3 System Research.

    ERIC Educational Resources Information Center

    Marill, Thomas; And Others

    The aim of the CYCLOPS Project research is the development of techniques for allowing computers to perform visual scene analysis, pre-processing of visual imagery, and perceptual learning. Work on scene analysis and learning has previously been described. The present report deals with research on pre-processing and with further work on scene…

  16. Cloud Based Resource for Data Hosting, Visualization and Analysis Using UCSC Cancer Genomics Browser | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    The Cancer Analysis Virtual Machine (CAVM) project will leverage cloud technology, the UCSC Cancer Genomics Browser, and the Galaxy analysis workflow system to provide investigators with a flexible, scalable platform for hosting, visualizing and analyzing their own genomic data.

  17. Learning Science Through Visualization

    NASA Technical Reports Server (NTRS)

    Chaudhury, S. Raj

    2005-01-01

    In the context of an introductory physical science course for non-science majors, I have been trying to understand how scientific visualizations of natural phenomena can constructively impact student learning. I have also necessarily been concerned with the instructional and assessment approaches that need to be considered when focusing on learning science through visually rich information sources. The overall project can be broken down into three distinct segments: (i) comparing students' abilities to demonstrate proportional reasoning competency on visual and verbal tasks; (ii) decoding and deconstructing visualizations of an object falling under gravity; (iii) the role of directed instruction in eliciting alternate, valid scientific visualizations of the structure of the solar system. Evidence of student learning was collected in multiple forms for this project: quantitative analysis of student performance on written, graded assessments (tests and quizzes), and qualitative analysis of videos of student 'think aloud' sessions. The results indicate that there are significant barriers for non-science majors to succeed in mastering the content of science courses, but with informed approaches to instruction and assessment, these barriers can be overcome.

  18. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

    The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the user's choice; implement functions so the user can derive diagnostic values; animate the data to see the time evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high-performance computer systems.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu

    In this project, we have developed techniques for visualizing large-scale time-varying multivariate particle and field data produced by the GPS_TTBP team. Our basic approach to particle data visualization is to provide the user with an intuitive interactive interface for exploring the data. We have designed a multivariate filtering interface for scientists to effortlessly isolate particles of interest, revealing structures in densely packed particles as well as the temporal behaviors of selected particles. With such a visualization system, scientists on the GPS-TTBP project can validate known relationships and temporal trends, and possibly gain new insights into their simulations. We have tested the system using several million particles on a single PC. We will also need to address the scalability of the system to handle billions of particles using a cluster of PCs. To visualize the field data, we chose direct volume rendering. Because the data provided by PPPL is on a curvilinear mesh, several processing steps have to be taken. The mesh is curvilinear in nature, following the shape of a deformed torus. Additionally, in order to properly interpolate between the given slices we cannot use simple linear interpolation in Cartesian space but instead have to interpolate along the magnetic field lines given to us by the scientists. With these limitations, building a system that can provide an accurate visualization of the dataset is quite a challenge. In the end we use a combination of deformation methods, such as deformation textures, to fit a normal torus into the deformed torus, allowing us to store the data in toroidal coordinates and take advantage of modern GPUs to perform the interpolation along the field lines. The resulting new rendering capability produces visualizations at a quality and detail level previously not available to the scientists at the PPPL. 
In summary, in this project we have successfully created new capabilities for the scientists to visualize their 3D data at higher accuracy and quality, enhancing their ability to evaluate the simulations and understand the modeled phenomena.

  20. Projection-type see-through holographic three-dimensional display

    NASA Astrophysics Data System (ADS)

    Wakunami, Koki; Hsieh, Po-Yuan; Oi, Ryutaro; Senoh, Takanori; Sasaki, Hisayuki; Ichihashi, Yasuyuki; Okui, Makoto; Huang, Yi-Pai; Yamamoto, Kenji

    2016-10-01

    Owing to the limited spatio-temporal resolution of display devices, dynamic holographic three-dimensional displays suffer from a critical trade-off between the display size and the visual angle. Here we show a projection-type holographic three-dimensional display, in which a digitally designed holographic optical element and a digital holographic projection technique are combined to increase both factors at the same time. In the experiment, the enlarged holographic image, which is twice as large as the original display device, projected on the screen of the digitally designed holographic optical element was concentrated at the target observation area so as to increase the visual angle, which is six times as large as that for a general holographic display. Because the display size and the visual angle can be designed independently, the proposed system will accelerate the adoption of holographic three-dimensional displays in industrial applications, such as digital signage, in-car head-up displays, smart-glasses and head-mounted displays.
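    The trade-off described here follows from grating diffraction: a pixelated hologram of pitch p can deflect light of wavelength lambda by at most theta = arcsin(lambda / 2p), so for a fixed pixel count, enlarging the image shrinks the viewing angle. A quick illustrative calculation, using generic SLM values rather than the paper's hardware:

```python
import math

def max_diffraction_half_angle_deg(wavelength_m, pixel_pitch_m):
    """Largest half-angle a pixelated hologram can steer light to,
    set by the Nyquist grating period of two pixels."""
    return math.degrees(math.asin(wavelength_m / (2 * pixel_pitch_m)))

# 633 nm laser light on an 8 um pitch spatial light modulator:
half = max_diffraction_half_angle_deg(633e-9, 8e-6)
print(f"viewing half-angle ~ {half:.2f} deg (full angle ~ {2 * half:.1f} deg)")
```

    A full viewing angle of only a few degrees for a typical SLM is exactly the limitation that the holographic-optical-element screen in this paper works around, by redirecting the projected light toward the observation area.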

  1. RAVEL: retrieval and visualization in ELectronic health records.

    PubMed

    Thiessard, Frantz; Mougin, Fleur; Diallo, Gayo; Jouhet, Vianney; Cossin, Sébastien; Garcelon, Nicolas; Campillo, Boris; Jouini, Wassim; Grosjean, Julien; Massari, Philippe; Griffon, Nicolas; Dupuch, Marie; Tayalati, Fayssal; Dugas, Edwige; Balvet, Antonio; Grabar, Natalia; Pereira, Suzanne; Frandji, Bruno; Darmoni, Stefan; Cuggia, Marc

    2012-01-01

    Because of the ever-increasing amount of information in patients' EHRs, healthcare professionals may face difficulties in making diagnoses and/or therapeutic decisions. Moreover, patients may misunderstand their health status. These medical practitioners need effective tools to locate relevant elements within the patient's EHR in real time and visualize them according to synthetic and intuitive presentation models. The RAVEL project aims at achieving this goal by performing a high-profile industrial research and development program on the EHR covering the following areas: (i) semantic indexing, (ii) information retrieval, and (iii) data visualization. The RAVEL project is expected to implement a generic prototype, loosely coupled to its data sources, so that it can be transposed to different university hospital information systems.

  2. Updating and improving methodology for prioritizing highway project locations on the strategic intermodal system : [summary].

    DOT National Transportation Integrated Search

    2016-05-01

    Florida International University researchers examined the existing performance measures and the project prioritization method in the CMP and updated them to better reflect the current conditions and strategic goals of FDOT. They also developed visual...

  3. Pseudohaptic interaction with knot diagrams

    NASA Astrophysics Data System (ADS)

    Weng, Jianguang; Zhang, Hui

    2012-07-01

    To make progress in understanding knot theory, we need to interact with the projected representations of mathematical knots, which are continuous in three dimensions (3-D) but significantly interrupted in the projective images. One way to achieve such a goal is to design an interactive system that allows us to sketch two-dimensional (2-D) knot diagrams by taking advantage of a collision-sensing controller and explore their underlying smooth structures through a continuous motion. Recent advances of interaction techniques have been made that allow progress in this direction. Pseudohaptics that simulate haptic effects using pure visual feedback can be used to develop such an interactive system. We outline one such pseudohaptic knot diagram interface. Our interface derives from the familiar pencil-and-paper process of drawing 2-D knot diagrams and provides haptic-like sensations to facilitate the creation and exploration of knot diagrams. A centerpiece of the interaction model simulates a physically reactive mouse cursor, which is exploited to resolve the apparent conflict between the continuous structure of the actual smooth knot and the visual discontinuities in the knot diagram representation. Another value in exploiting pseudohaptics is that an acceleration (or deceleration) of the mouse cursor (or surface locator) can be used to indicate the slope of the curve (or surface) of which the projective image is being explored. By exploiting these additional visual cues, we proceed to a full-featured extension to a pseudohaptic four-dimensional (4-D) visualization system that simulates the continuous navigation on 4-D objects and allows us to sense the bumps and holes in the fourth dimension. Preliminary tests of the software show that main features of the interface overcome some expected perceptual limitations in our interaction with 2-D knot diagrams of 3-D knots and 3-D projective images of 4-D mathematical objects.
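    The "physically reactive cursor" idea can be caricatured with a control-to-display gain that drops when the cursor crosses a strand of the diagram, so the cursor feels as if it drags. This is a toy one-dimensional sketch, not the authors' actual interaction model:

```python
def reactive_cursor_step(cursor_x, device_dx, on_strand, drag_gain=0.3):
    """Advance a 1-D pseudohaptic cursor: full gain in free space,
    reduced gain (simulated friction) while crossing a knot strand."""
    gain = drag_gain if on_strand else 1.0
    return cursor_x + gain * device_dx

x = 0.0
x = reactive_cursor_step(x, 10.0, on_strand=False)  # free space: moves 10
x = reactive_cursor_step(x, 10.0, on_strand=True)   # over a strand: moves 3
print(x)
```

    Because the visible cursor lags the physical mouse while over a strand, the user perceives resistance purely from visual feedback, which is the essence of pseudohaptics.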

  4. A comparative examination of neural circuit and brain patterning between the lamprey and amphioxus reveals the evolutionary origin of the vertebrate visual center.

    PubMed

    Suzuki, Daichi G; Murakami, Yasunori; Escriva, Hector; Wada, Hiroshi

    2015-02-01

    Vertebrates are equipped with so-called camera eyes, which provide them with image-forming vision. Vertebrate image-forming vision evolved independently from that of other animals and is regarded as a key innovation for enhancing predatory ability and ecological success. Evolutionary changes in the neural circuits, particularly the visual center, were central for the acquisition of image-forming vision. However, the evolutionary steps, from protochordates to jaw-less primitive vertebrates and then to jawed vertebrates, remain largely unknown. To bridge this gap, we present the detailed development of retinofugal projections in the lamprey, the neuroarchitecture in amphioxus, and the brain patterning in both animals. Both the lateral eye in larval lamprey and the frontal eye in amphioxus project to a light-detecting visual center in the caudal prosencephalic region marked by Pax6, which possibly represents the ancestral state of the chordate visual system. Our results indicate that the visual system of the larval lamprey represents an evolutionarily primitive state, forming a link from protochordates to vertebrates and providing a new perspective of brain evolution based on developmental mechanisms and neural functions. © 2014 Wiley Periodicals, Inc.

  5. The dorsal "action" pathway.

    PubMed

    Gallivan, Jason P; Goodale, Melvyn A

    2018-01-01

    In 1992, Goodale and Milner proposed a division of labor in the visual pathways of the primate cerebral cortex. According to their account, the ventral pathway, which projects to occipitotemporal cortex, constructs our visual percepts, while the dorsal pathway, which projects to posterior parietal cortex, mediates the visual control of action. Although the framing of the two-visual-system hypothesis has not been without controversy, it is clear that vision for action and vision for perception have distinct computational requirements, and significant support for the proposed neuroanatomic division has continued to emerge over the last two decades from human neuropsychology, neuroimaging, behavioral psychophysics, and monkey neurophysiology. In this chapter, we review much of this evidence, with a particular focus on recent findings from human neuroimaging and monkey neurophysiology, demonstrating a specialized role for parietal cortex in visually guided behavior. But even though the available evidence suggests that dedicated circuits mediate action and perception, in order to produce adaptive goal-directed behavior there must be a close coupling and seamless integration of information processing across these two systems. We discuss such ventral-dorsal-stream interactions and argue that the two pathways play different, yet complementary, roles in the production of skilled behavior. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Projector primary-based optimization for superimposed projection mappings

    NASA Astrophysics Data System (ADS)

    Ahmed, Bilal; Lee, Jong Hun; Lee, Yong Yi; Lee, Kwan H.

    2018-01-01

    Recently, many researchers have focused on fully overlapping projections for three-dimensional (3-D) projection mapping systems, but reproducing a high-quality appearance with this technology remains a challenge. On top of existing color compensation-based methods, much effort is still required to faithfully reproduce an appearance that is free from artifacts, colorimetric inconsistencies, and inappropriate illuminance over the 3-D projection surface. According to our observations, this is because overlapping projections are treated as an additive-linear mixture of color, an assumption that does not hold in practice. We propose a method that enables us to use high-quality appearance data measured from original objects and regenerate the same appearance by projecting optimized images using multiple projectors, ensuring that the projection-rendered results look visually close to the real object. We prepare our target appearances by photographing original objects. Then, using calibrated projector-camera pairs, we compensate for missing geometric correspondences to make our method robust against noise. The heart of our method is a target appearance-driven adaptive sampling of the projection surface, followed by a representation of overlapping projections in terms of the projector-primary response. This yields projector-primary weights that facilitate blending, subject to constraints. These samples populate a light-transport-based system, which is then solved by minimizing error, exploiting inter-sample overlaps to obtain noise-free projection images. We make the best use of available hardware resources to recreate projection-mapped appearances that look as close to the original object as possible. Our experiments show compelling results in terms of visual similarity and colorimetric error.
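    Under the additive-linear assumption that this paper argues breaks down, compensation for two overlapping projectors reduces to a constrained least-squares solve per surface sample. The sketch below shows that simplified baseline, not the proposed primary-response method; the 3x3 mixing matrices are toy values:

```python
import numpy as np

def compensate(M1, M2, target_rgb, ambient_rgb):
    """Solve for projector inputs x1, x2 (clipped to [0, 1]) so that
    M1 @ x1 + M2 @ x2 + ambient ~= target, assuming additive-linear mixing."""
    A = np.hstack([M1, M2])                        # 3x6 combined response
    x, *_ = np.linalg.lstsq(A, target_rgb - ambient_rgb, rcond=None)
    x = np.clip(x, 0.0, 1.0)                       # respect projector gamut
    return x[:3], x[3:], A @ x + ambient_rgb       # inputs, predicted output

M1 = M2 = 0.5 * np.eye(3)                          # toy color-mixing matrices
target = np.array([0.6, 0.6, 0.6])
ambient = np.array([0.1, 0.1, 0.1])
x1, x2, predicted = compensate(M1, M2, target, ambient)
```

    The clipping step is where the linear model starts to fail: targets outside the combined gamut cannot be reached, and any nonlinearity in how the projections actually mix makes even in-gamut predictions drift from the measured appearance.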

  7. Visualization techniques to aid in the analysis of multispectral astrophysical data sets

    NASA Technical Reports Server (NTRS)

    Brugel, E. W.; Domik, Gitta O.; Ayres, T. R.

    1993-01-01

    The goal of this project was to support the scientific analysis of multi-spectral astrophysical data by means of scientific visualization. Scientific visualization offers its greatest value if it is used not as a method separate from or alternative to other data analysis methods, but in addition to them. Together with quantitative analysis of data, such as that offered by statistical analysis and image or signal processing, visualization attempts to explore all information inherent in astrophysical data in the most effective way. Data visualization is one aspect of data analysis. Our taxonomy, as developed in Section 2, includes identification of and access to existing information, preprocessing and quantitative analysis of data, visual representation, and the user interface as major components of the software environment for astrophysical data analysis. In pursuing our goal to provide methods and tools for scientific visualization of multi-spectral astrophysical data, we therefore treated scientific data analysis as one whole process, adding visualization tools to an already existing environment and integrating the various components that define a scientific data analysis environment. As long as the software development process of each component is separate from all the others, users of data analysis software are constantly interrupted in their scientific work in order to convert from one data format to another, to move from one storage medium to another, or to switch from one user interface to another. We also took an in-depth look at scientific visualization and its underlying concepts, and at current visualization systems, their contributions, and their shortcomings. The role of data visualization is to stimulate mental processes different from quantitative data analysis, such as the perception of spatial relationships or the discovery of patterns or anomalies while browsing through large data sets. Visualization often leads to an intuitive understanding of the meaning of data values and their relationships, at the cost of some accuracy in interpreting those values. To be accurate in the interpretation, data values need to be measured, computed upon, and compared with theoretical or empirical models (quantitative analysis). If visualization software hampers quantitative analysis (as happens with some commercial visualization products), its usefulness for astrophysical data analysis is greatly diminished. The software system STAR (Scientific Toolkit for Astrophysical Research) was developed as a prototype during the course of the project to better understand the pragmatic concerns the project raised. STAR led to a better understanding of the importance of collaboration between astrophysicists and computer scientists. Twenty-one examples of the use of visualization for astrophysical data are included with this report. Sixteen publications related to efforts performed during, or initiated through, work on this project are listed at the end of this report.

  8. Visualizing projected Climate Changes - the CMIP5 Multi-Model Ensemble

    NASA Astrophysics Data System (ADS)

    Böttinger, Michael; Eyring, Veronika; Lauer, Axel; Meier-Fleischer, Karin

    2017-04-01

    Large ensembles add an additional dimension to climate model simulations. Internal variability of the climate system can be assessed for example by multiple climate model simulations with small variations in the initial conditions or by analyzing the spread in large ensembles made by multiple climate models under common protocols. This spread is often used as a measure of uncertainty in climate projections. In the context of the fifth phase of the WCRP's Coupled Model Intercomparison Project (CMIP5), more than 40 different coupled climate models were employed to carry out a coordinated set of experiments. Time series of the development of integral quantities such as the global mean temperature change for all models visualize the spread in the multi-model ensemble. A similar approach can be applied to 2D-visualizations of projected climate changes such as latitude-longitude maps showing the multi-model mean of the ensemble by adding a graphical representation of the uncertainty information. This has been demonstrated for example with static figures in chapter 12 of the last IPCC report (AR5) using different so-called stippling and hatching techniques. In this work, we focus on animated visualizations of multi-model ensemble climate projections carried out within CMIP5 as a way of communicating climate change results to the scientific community as well as to the public. We take a closer look at measures of robustness or uncertainty used in recent publications suitable for animated visualizations. Specifically, we use the ESMValTool [1] to process and prepare the CMIP5 multi-model data in combination with standard visualization tools such as NCL and the commercial 3D visualization software Avizo to create the animations. We compare different visualization techniques such as height fields or shading with transparency for creating animated visualization of ensemble mean changes in temperature and precipitation including corresponding robustness measures. 
[1] Eyring, V., Righi, M., Lauer, A., Evaldsson, M., Wenzel, S., Jones, C., Anav, A., Andrews, O., Cionni, I., Davin, E. L., Deser, C., Ehbrecht, C., Friedlingstein, P., Gleckler, P., Gottschaldt, K.-D., Hagemann, S., Juckes, M., Kindermann, S., Krasting, J., Kunert, D., Levine, R., Loew, A., Mäkelä, J., Martin, G., Mason, E., Phillips, A. S., Read, S., Rio, C., Roehrig, R., Senftleben, D., Sterl, A., van Ulft, L. H., Walton, J., Wang, S., and Williams, K. D.: ESMValTool (v1.0) - a community diagnostic and performance metrics tool for routine evaluation of Earth system models in CMIP, Geosci. Model Dev., 9, 1747-1802, doi:10.5194/gmd-9-1747-2016, 2016.
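
    A minimal sketch of the kind of robustness measure discussed above (the 80% sign-agreement threshold is an illustrative assumption, not the exact AR5 stippling recipe): compute the multi-model mean change and flag grid cells where most ensemble members agree on its sign.

```python
import numpy as np

# Illustrative robustness mask for a multi-model ensemble: a cell is "robust"
# where at least `agree_frac` of the members agree on the sign of the mean change.
def ensemble_mean_and_robustness(changes, agree_frac=0.8):
    """changes: array of shape (n_models, ny, nx) of projected changes."""
    mean = changes.mean(axis=0)
    sign_agreement = (np.sign(changes) == np.sign(mean)).mean(axis=0)
    return mean, sign_agreement >= agree_frac

# Tiny synthetic ensemble: 5 models on a 1x2 grid (values are made up).
changes = np.array([
    [[1.0, -0.5]],
    [[1.2,  0.4]],
    [[0.9, -0.1]],
    [[1.1,  0.3]],
    [[0.8, -0.2]],
])
mean, robust = ensemble_mean_and_robustness(changes)
# First cell: all members agree on warming -> robust; second cell: mixed signs.
```

    In an animated visualization, such a mask would drive the stippling, hatching, or transparency overlay frame by frame.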

  9. Computer-based visual communication in aphasia.

    PubMed

    Steele, R D; Weinrich, M; Wertz, R T; Kleczewska, M K; Carlson, G S

    1989-01-01

    The authors describe their recently developed Computer-aided VIsual Communication (C-VIC) system, and report results of single-subject experimental designs probing its use with five chronic, severely impaired aphasic individuals. Studies replicate earlier results obtained with a non-computerized system, demonstrate patient competence with the computer implementation, extend the system's utility, and identify promising areas of application. Results of the single-subject experimental designs clarify patients' learning, generalization, and retention patterns, and highlight areas of performance difficulties. Future directions for the project are indicated.

  10. Towards sustainable infrastructure management: knowledge-based service-oriented computing framework for visual analytics

    NASA Astrophysics Data System (ADS)

    Vatcha, Rashna; Lee, Seok-Won; Murty, Ajeet; Tolone, William; Wang, Xiaoyu; Dou, Wenwen; Chang, Remco; Ribarsky, William; Liu, Wanqiu; Chen, Shen-en; Hauser, Edd

    2009-05-01

    Infrastructure management and its associated processes are complex to understand and perform, which makes efficient, effective, and informed decisions hard to reach. Management is a multi-faceted operation that requires robust data fusion, visualization, and decision making. To protect and build sustainable critical assets, we present our ongoing multi-disciplinary large-scale project that establishes the Integrated Remote Sensing and Visualization (IRSV) system, with a focus on supporting bridge structure inspection and management. The project draws on expertise from civil engineers, computer scientists, geographers, and real-world practitioners from industry and from local and federal government agencies. IRSV is being designed to meet essential needs in the following areas: 1) better understanding and enforcement of the complex inspection process, bridging the gap between evidence gathering and decision making through an ontological knowledge-engineering system; 2) aggregation, representation, and fusion of complex multi-layered heterogeneous data (e.g., infrared imaging, aerial photos, and ground-mounted LIDAR) with domain application knowledge to support a machine-understandable recommendation system; 3) robust, large-scale analytical and interactive visualizations that support users' decision making; and 4) integration of these needs through a flexible Service-Oriented Architecture (SOA) framework to compose and provide services on demand. IRSV is expected to serve as a management and data-visualization tool for construction deliverable assurance and for infrastructure monitoring, both periodically (annually, monthly, or even daily if needed) and after extreme events.

  11. VERS: a virtual environment for reconstructive surgery planning

    NASA Astrophysics Data System (ADS)

    Montgomery, Kevin N.

    1997-05-01

    The virtual environment for reconstructive surgery (VERS) project at the NASA Ames Biocomputation Center is applying virtual reality technology to aid surgeons in planning surgeries. We are working with a craniofacial surgeon at Stanford to assemble and visualize the bone structure of patients requiring reconstructive surgery due to developmental abnormalities or trauma. This project is an extension of our previous work in 3D reconstruction, mesh generation, and immersive visualization. The current VR system, consisting of an SGI Onyx RE2, FakeSpace BOOM and ImmersiveWorkbench, Virtual Technologies CyberGlove, and Ascension Technologies tracker, is in development and has already been used to visualize defects preoperatively. In the near future it will be used to plan the surgery more fully and to compute the projected result on soft-tissue structure. This paper presents the work in progress and details the production of a high-performance, collaborative, and networked virtual environment.

  12. Visualization and simulated surgery of the left ventricle in the virtual pathological heart of the Virtual Physiological Human

    PubMed Central

    McFarlane, N. J. B.; Lin, X.; Zhao, Y.; Clapworthy, G. J.; Dong, F.; Redaelli, A.; Parodi, O.; Testi, D.

    2011-01-01

    Ischaemic heart failure remains a significant health and economic problem worldwide. This paper presents a user-friendly software system that will form a part of the virtual pathological heart of the Virtual Physiological Human (VPH2) project, currently being developed under the European Commission Virtual Physiological Human (VPH) programme. VPH2 is an integrated medicine project, which will create a suite of modelling, simulation and visualization tools for patient-specific prediction and planning in cases of post-ischaemic left ventricular dysfunction. The work presented here describes a three-dimensional interactive visualization for simulating left ventricle restoration surgery, comprising the operations of cutting, stitching and patching, and for simulating the elastic deformation of the ventricle to its post-operative shape. This will supply the quantitative measurements required for the post-operative prediction tools being developed in parallel in the same project. PMID:22670207

  13. The research on multi-projection correction based on color coding grid array

    NASA Astrophysics Data System (ADS)

    Yang, Fan; Han, Cheng; Bai, Baoxing; Zhang, Chao; Zhao, Yunxiu

    2017-10-01

    Multi-channel projection systems suffer from drawbacks such as poor timeliness and substantial manual intervention. To address these problems, this paper proposes a multi-projector correction technique based on a color-coded grid array. First, a color structured-light stripe pattern is generated using De Bruijn sequences, and the feature information of the stripe image is meshed into a grid. Taking each colored grid intersection as a center, white solid circles are constructed as the feature sample set of the projected images, giving the set both perceptual localization and good noise immunity. Second, sub-pixel geometric mappings between the projection screen and the individual projectors are established through structured-light encoding and decoding based on the color array, and these mappings are used to solve the homography matrix of each projector. Finally, because brightness inconsistency in the overlap region of multi-channel projection seriously degrades the corrected image and fails to meet the observer's visual needs, a luminance fusion correction algorithm is applied to obtain a visually consistent projected image. Experimental results show that the method not only effectively corrects distortion of the multi-projection screen and luminance interference in the overlapping regions, but also improves the calibration efficiency of the multi-channel projection system and reduces the maintenance cost of intelligent multi-projection systems.
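
    The color stripe coding described above relies on De Bruijn sequences, in which every window of n consecutive symbols is unique, so a decoded window localizes its position in the pattern. A standard B(k, n) construction (not the paper's exact color encoding) can be sketched as:

```python
# Standard de Bruijn sequence B(k, n) via Lyndon-word concatenation:
# a cyclic sequence of length k**n in which every length-n window is unique.
def de_bruijn(k, n):
    a = [0] * k * n
    sequence = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return sequence

# e.g. 3 stripe "colors" with a window of 2: 9 symbols, every adjacent
# pair unique (cyclically), so any decoded pair identifies its position.
seq = de_bruijn(3, 2)
```

    Mapping the symbols 0..k-1 to distinct stripe colors then gives a pattern whose local windows can be decoded without ambiguity, which is what makes the grid intersections identifiable.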

  14. Named Entity Recognition in a Hungarian NL Based QA System

    NASA Astrophysics Data System (ADS)

    Tikkl, Domonkos; Szidarovszky, P. Ferenc; Kardkovacs, Zsolt T.; Magyar, Gábor

    In the WoW project our purpose is to create a complex search interface with the following features: search in the deep-web content of contracted partners' databases, processing of Hungarian natural language (NL) questions and their transformation into SQL queries for database access, and image search supported by a visual thesaurus that describes the visual content of images in structured form (also in Hungarian). This paper focuses primarily on a particular problem of the question-processing task: entity recognition. Before going into details, we give a short overview of the project's aims.

  15. MoZis: mobile zoo information system: a case study for the city of Osnabrueck

    NASA Astrophysics Data System (ADS)

    Michel, Ulrich

    2007-10-01

    This paper describes a new project of the Institute for Geoinformatics and Remote Sensing, funded by the German Federal Foundation for the Environment (DBU, Deutsche Bundesstiftung Umwelt, www.dbu.de). The goal of this project is to develop a mobile zoo information system for Pocket PCs and smartphones. Visitors will be able to use their own mobile devices, or borrow Pocket PCs from the zoo, to navigate around the zoo's facilities. The system will also provide additional multimedia-based information such as audio material, animal video clips, and maps of the animals' natural habitats. Visitors can access the project at the zoo via wireless local area network or by downloading the necessary files over a home internet connection. Our software environment combines proprietary and non-proprietary software solutions to remain as flexible as possible. Our first prototype was developed with Visual Studio 2003 and Visual Basic.Net.

  16. Penn State's Visual Image User Study

    ERIC Educational Resources Information Center

    Pisciotta, Henry A.; Dooris, Michael J.; Frost, James; Halm, Michael

    2005-01-01

    The Visual Image User Study (VIUS), an extensive needs assessment project at Penn State University, describes academic users of pictures and their perceptions. These findings outline the potential market for digital images and list the likely determinates of whether or not a system will be used. They also explain some key user requirements for…

  17. A web-based spatial decision support system for spatial planning and governance in the Guangdong Province

    NASA Astrophysics Data System (ADS)

    Wu, Qitao; Zhang, Hong-ou; Chen, Fengui; Dou, Jie

    2008-10-01

    After three decades of rapid economic development, Guangdong Province faces thorny problems related to pollution, resource shortages, and environmental deterioration. Worse, future accelerated development, urbanization, and industrialization also come at the cost of regional imbalance, with economic gaps growing and the quality of life in different regions degrading. The Development and Reform Commission of Guangdong Province (GDDRC) started a spatial planning project under the national framework in 2007. The project is expected to enhance equality between regions and to balance economic development with environmental protection and improved sustainability. This manuscript presents the results of research aimed at developing a Spatial Decision Support System (SDSS) for this spatial planning project. The system comprises four modules, the User Interface Module (UIM), Spatial Analysis Module (SAM), Database Management Module (DMM), and Help Module (HM), based on ArcInfo, JSP/Servlet, JavaScript, MapServer, Visual C++, and Visual Basic technologies. The web-based SDSS provides a user-friendly tool for local decision makers, regional planners, and other stakeholders in understanding and visualizing the different territorial dimensions of economic development against environmental sustainability and resource exhaustion, and in defining, comparing, and prioritizing specific territorially based actions to prevent unsustainable development and implement relevant policies.

  18. Dynamic optical projection of acquired luminescence for aiding oncologic surgery

    NASA Astrophysics Data System (ADS)

    Sarder, Pinaki; Gullicksrud, Kyle; Mondal, Suman; Sudlow, Gail P.; Achilefu, Samuel; Akers, Walter J.

    2013-12-01

    Optical imaging enables real-time visualization of intrinsic and exogenous contrast within biological tissues. Applications in human medicine have demonstrated the power of fluorescence imaging to enhance visualization in dermatology, endoscopic procedures, and open surgery. Although few optical contrast agents are available for human medicine at this time, fluorescence imaging is proving to be a powerful tool in guiding medical procedures. Recently, intraoperative detection of fluorescent molecular probes that target cell-surface receptors has been reported to improve oncologic surgery in humans. We have developed a novel system, optical projection of acquired luminescence (OPAL), to further enhance real-time guidance of open oncologic surgery. In this method, collected fluorescence intensity maps are projected directly onto the imaged surface rather than shown on a wall-mounted display monitor. To demonstrate proof of principle for OPAL applications in oncologic surgery, lymphatic transport of indocyanine green was visualized in live mice for intraoperative identification of sentinel lymph nodes. Subsequently, peritoneal tumors in a murine model of breast cancer metastasis were identified using OPAL after systemic administration of a tumor-selective fluorescent molecular probe. These initial results clearly show that OPAL can enhance the adoption and ease of use of fluorescence imaging in oncologic procedures relative to existing state-of-the-art intraoperative imaging systems.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Springmeyer, R R; Brugger, E; Cook, R

    The Data group provides data analysis and visualization support to its customers. This consists primarily of the development and support of VisIt, a data analysis and visualization tool. Support ranges from answering questions about the tool and providing classes on its use to performing data analysis and visualization for customers. The Information Management and Graphics Group supports and develops tools that enhance our ability to access, display, and understand large, complex data sets. Activities include applying visualization software for large-scale data exploration; running video production labs on two networks; supporting graphics libraries and tools for end users; maintaining PowerWalls and assorted other displays; and developing software for searching and managing scientific data. Researchers in the Center for Applied Scientific Computing (CASC) work on various projects, including the development of visualization techniques for large-scale data exploration funded by the ASC program, among others. The researchers also have LDRD projects and collaborations with other lab researchers, academia, and industry. The IMG group is located in the Terascale Simulation Facility, home to Dawn, Atlas, BGL, and others, which includes both classified and unclassified visualization theaters, a visualization computer floor and deployment workshop, and video production labs. We continued to provide the traditional graphics group consulting and video production support. We maintained five PowerWalls and many other displays. We deployed a 576-node Opteron/IB cluster with 72 TB of memory providing a visualization production server on our classified network. We continue to support a 128-node Opteron/IB cluster providing a visualization production server for our unclassified systems and an older 256-node Opteron/IB cluster for the classified systems, as well as several smaller clusters to drive the PowerWalls. The visualization production systems include NFS servers that provide dedicated storage for data analysis and visualization. The ASC projects have delivered new versions of visualization and scientific data management tools to end users and continue to refine them. VisIt had 4 releases during the past year, ending with VisIt 2.0. We released version 2.4 of Hopper, a Java application for managing and transferring files. This release included a graphical disk-usage view that works on all types of connections and an aggregated copy feature for transferring massive datasets quickly and efficiently to HPSS. We continue to use and develop Blockbuster and Telepath. Both the VisIt and IMG teams were engaged in a variety of movie production efforts during the past year in addition to the development tasks.

  20. Stream-related preferences of inputs to the superior colliculus from areas of dorsal and ventral streams of mouse visual cortex.

    PubMed

    Wang, Quanxin; Burkhalter, Andreas

    2013-01-23

    Previous studies of intracortical connections in mouse visual cortex have revealed two subnetworks that resemble the dorsal and ventral streams in primates. Although calcium imaging studies have shown that many areas of the ventral stream have high spatial acuity whereas areas of the dorsal stream are highly sensitive for transient visual stimuli, there are some functional inconsistencies that challenge a simple grouping into "what/perception" and "where/action" streams known in primates. The superior colliculus (SC) is a major center for processing of multimodal sensory information and the motor control of orienting the eyes, head, and body. Visual processing is performed in superficial layers, whereas premotor activity is generated in deep layers of the SC. Because the SC is known to receive input from visual cortex, we asked whether the projections from 10 visual areas of the dorsal and ventral streams terminate in differential depth profiles within the SC. We found that inputs from primary visual cortex are by far the strongest. Projections from the ventral stream were substantially weaker, whereas the sparsest input originated from areas of the dorsal stream. Importantly, we found that ventral stream inputs terminated in superficial layers, whereas dorsal stream inputs tended to be patchy and either projected equally to superficial and deep layers or strongly preferred deep layers. The results suggest that the anatomically defined ventral and dorsal streams contain areas that belong to distinct functional systems, specialized for the processing of visual information and visually guided action, respectively.

  1. The projective field of a retinal amacrine cell

    PubMed Central

    de Vries, Saskia E. J.; Baccus, Stephen A.; Meister, Markus

    2011-01-01

    In sensory systems, neurons are generally characterized by their receptive field, namely the sensitivity to activity patterns at the circuit's input. To assess the neuron's role in the system, one must also know its projective field, namely the spatio-temporal effects the neuron exerts on all the circuit's outputs. We studied both the receptive and projective fields of an amacrine interneuron in the salamander retina. This amacrine type has a sustained OFF response with a small receptive field, but its output projects over a much larger region. Unlike other amacrines, this type is remarkably promiscuous and affects nearly every ganglion cell within reach of its dendrites. Its activity modulates the sensitivity of visual responses in ganglion cells, while leaving their kinetics unchanged. The projective field displays a center-surround structure: Depolarizing a single amacrine suppresses the visual sensitivity of ganglion cells nearby, and enhances it at greater distances. This change in sign is seen even within the receptive field of one ganglion cell; thus the modulation occurs presynaptically on bipolar cell terminals, most likely via GABAB receptors. Such an antagonistic projective field could contribute to the retina's mechanisms for predictive coding. PMID:21653863

  2. Piloting Systems Reset Path Integration Systems during Position Estimation

    ERIC Educational Resources Information Center

    Zhang, Lei; Mou, Weimin

    2017-01-01

    During locomotion, individuals can determine their positions with either idiothetic cues from movement (path integration systems) or visual landmarks (piloting systems). This project investigated how these 2 systems interact in determining humans' positions. In 2 experiments, participants studied the locations of 5 target objects and 1 single…

  3. The eye as metronome of the body.

    PubMed

    Lubkin, Virginia; Beizai, Pouneh; Sadun, Alfredo A

    2002-01-01

    Vision is much more than just resolving small objects. In fact, the eye sends visual information to the brain that is not consciously perceived. One such pathway entails visual information to the hypothalamus. The retinohypothalamic tract (RHT) mediates light entrainment of circadian rhythms. Retinofugal fibers project to several nuclei of the hypothalamus. These and further projections to the pineal via the sympathetic system provide the anatomical substrate for the neuro-endocrine control of diurnal and longer rhythms. Without the influence of light and dark, many rhythms desynchronize and exhibit free-running periods of approximately 24.2-24.9 hours in humans. This review will demonstrate the mechanism by which the RHT synchronizes circadian rhythms and the importance of preserving light perception in those persons with impending visual loss.

  4. Multi-Mission Simulation and Visualization for Real-Time Telemetry Display, Playback and EDL Event Reconstruction

    NASA Technical Reports Server (NTRS)

    Pomerantz, M. I.; Lim, C.; Myint, S.; Woodward, G.; Balaram, J.; Kuo, C.

    2012-01-01

    The Jet Propulsion Laboratory's Entry, Descent and Landing (EDL) Reconstruction Task has developed a software system that provides mission operations personnel and analysts with a real-time telemetry-based live display, playback, and post-EDL reconstruction capability that leverages the existing high-fidelity, physics-based simulation framework and modern game-engine-derived 3D visualization system developed in the JPL Dynamics and Real Time Simulation (DARTS) Lab. Developed as a multi-mission solution, the EDL Telemetry Visualization (ETV) system has been used for a variety of projects including NASA's Mars Science Laboratory (MSL), NASA's Low Density Supersonic Decelerator (LDSD), and JPL's MoonRise lunar sample return proposal.

  5. Image-Based Visual Servoing for Robotic Systems: A Nonlinear Lyapunov-Based Control Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, Warren

    2004-06-01

    There is significant motivation to provide robotic systems with improved autonomy as a means to significantly accelerate deactivation and decommissioning (D&D) operations while also reducing the associated costs, removing human operators from hazardous environments, and reducing the required burden and skill of human operators. To achieve improved autonomy, this project focused on the basic science challenges leading to the development of visual servo controllers. The challenge in developing these controllers is that a camera provides 2-dimensional image information about the 3-dimensional Euclidean space through a perspective (range-dependent) projection that can be corrupted by uncertainty in the camera calibration matrix and by disturbances such as nonlinear radial distortion. Disturbances in this relationship (i.e., corruption in the sensor information) propagate erroneous information to the feedback controller of the robot, leading to potentially unpredictable task execution. This research project focused on the development of a visual servo control methodology that compensates for disturbances in the camera model (i.e., camera calibration and the recovery of range information) as a means to achieve predictable responses by robotic systems operating in unstructured environments. The fundamental idea is to use nonlinear Lyapunov-based techniques along with photogrammetry methods to overcome the complex control issues and alleviate many of the restrictive assumptions that impact current robotic applications. The outcome of this control methodology is a plug-and-play visual servoing control module that can be utilized in conjunction with current technology such as feature recognition and extraction to enable robotic systems with increased accuracy, autonomy, and robustness, and with a larger field of view (and hence a larger workspace). The developed methodology has been reported in numerous peer-reviewed publications, and the performance and enabling capabilities of the resulting visual servo control modules have been demonstrated on mobile robot and robot manipulator platforms.
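
    The camera model the project must compensate for, a pinhole projection through a calibration matrix corrupted by radial distortion, can be sketched as follows; the calibration values and the single-coefficient distortion model are illustrative assumptions, not the project's actual parameters.

```python
import numpy as np

# Sketch of a pinhole camera with calibration matrix K and one-parameter
# radial distortion k1 (a common simplification of the full distortion model).
def project_point(K, k1, X):
    """Project a 3-D point X (camera frame, Z > 0) to pixel coordinates."""
    x, y = X[0] / X[2], X[1] / X[2]   # perspective (range-dependent) division
    r2 = x * x + y * y
    d = 1.0 + k1 * r2                 # radial distortion factor
    u = K[0, 0] * d * x + K[0, 2]     # focal scaling plus principal point
    v = K[1, 1] * d * y + K[1, 2]
    return np.array([u, v])

# Illustrative calibration: 800 px focal length, principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
X = np.array([0.1, -0.05, 2.0])
p_ideal = project_point(K, k1=0.0, X=X)     # undistorted projection
p_barrel = project_point(K, k1=-0.1, X=X)   # barrel distortion pulls inward
```

    Uncertainty in K or k1 shifts these pixel coordinates, which is precisely the sensor corruption the Lyapunov-based controllers above are designed to tolerate.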

  6. Use of robotics for nondestructive inspection of steel highway bridges and structures : final report.

    DOT National Transportation Integrated Search

    2005-01-01

    This report presents the results of a project to finalize and apply a crawling robotic system for the remote visual inspection of high-mast light poles. The first part of the project focused on finalizing the prototype crawler robot hardware and cont...

  7. Enhanced Monocular Visual Odometry Integrated with Laser Distance Meter for Astronaut Navigation

    PubMed Central

    Wu, Kai; Di, Kaichang; Sun, Xun; Wan, Wenhui; Liu, Zhaoqin

    2014-01-01

    Visual odometry provides astronauts with accurate knowledge of their position and orientation. Wearable astronaut navigation systems should be simple and compact. Therefore, monocular vision methods are preferred over stereo vision systems, commonly used in mobile robots. However, the projective nature of monocular visual odometry causes a scale ambiguity problem. In this paper, we focus on the integration of a monocular camera with a laser distance meter to solve this problem. The most remarkable advantage of the system is its ability to recover a global trajectory for monocular image sequences by incorporating direct distance measurements. First, we propose a robust and easy-to-use extrinsic calibration method between camera and laser distance meter. Second, we present a navigation scheme that fuses distance measurements with monocular sequences to correct the scale drift. In particular, we explain in detail how to match the projection of the invisible laser pointer on other frames. Our proposed integration architecture is examined using a live dataset collected in a simulated lunar surface environment. The experimental results demonstrate the feasibility and effectiveness of the proposed method. PMID:24618780
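
    The scale-ambiguity fix described above can be reduced to a minimal sketch (the details here are assumptions, not the paper's full fusion scheme): a monocular trajectory is recovered only up to an unknown scale, so a single absolute laser range measurement to a known frame fixes the global scale factor.

```python
import numpy as np

# Hypothetical sketch: monocular VO yields an up-to-scale trajectory; one
# laser-ranged distance from the start point to a given frame recovers the
# global metric scale for the whole sequence.
def rescale_trajectory(positions, frame_idx, measured_range):
    """positions: (N, 3) up-to-scale camera positions, with positions[0]
    at the origin; measured_range: true distance to positions[frame_idx]."""
    estimated = np.linalg.norm(positions[frame_idx])
    scale = measured_range / estimated
    return positions * scale

# Up-to-scale trajectory along the z-axis; laser says frame 2 is 5 m away.
traj = np.array([[0.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0],
                 [0.0, 0.0, 2.0]])
fixed = rescale_trajectory(traj, frame_idx=2, measured_range=5.0)
```

    In practice the paper fuses repeated measurements to correct scale drift over time rather than applying one global factor, but the single-measurement case shows why one absolute distance is enough to resolve the ambiguity.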

  8. Conceptual design study for an advanced cab and visual system, volume 1

    NASA Technical Reports Server (NTRS)

    Rue, R. J.; Cyrus, M. L.; Garnett, T. A.; Nachbor, J. W.; Seery, J. A.; Starr, R. L.

    1980-01-01

    A conceptual design study was conducted to define requirements for an advanced cab and visual system. The rotorcraft system integration simulator is intended for engineering studies in the area of mission-associated vehicle handling qualities. Principally, a technology survey and assessment of existing and proposed simulator visual display systems, image generation systems, modular cab designs, and simulator control station designs were performed and are discussed. State-of-the-art survey data were used to synthesize a set of preliminary visual display system concepts, of which five candidate display configurations were selected for further evaluation. Basic display concepts incorporated in these configurations included: real image projection, using either periscopes, fiber optic bundles, or scanned laser optics; and virtual imaging with helmet-mounted displays. These display concepts were integrated in the study with a simulator cab concept employing a modular base for aircraft controls, crew seating, and instrumentation (or other) displays. A simple concept to induce vibration in the various modules was developed and is described. Results of evaluations and trade-offs related to the candidate system concepts are given, along with a suggested weighting scheme for numerically comparing visual system performance characteristics.

  9. Data mining and visualization from planetary missions: the VESPA-Europlanet2020 activity

    NASA Astrophysics Data System (ADS)

    Longobardo, Andrea; Capria, Maria Teresa; Zinzi, Angelo; Ivanovski, Stavro; Giardino, Marco; di Persio, Giuseppe; Fonte, Sergio; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo; Europlanet VESPA 2020 Team

    2017-06-01

    This paper presents the VESPA (Virtual European Solar and Planetary Access) activity, developed in the context of the Europlanet 2020 Horizon project, aimed at providing tools for the analysis and visualization of planetary data returned by space missions. In particular, the activity is focused on minor bodies of the Solar System. The structure of the computation node, the algorithms developed for analysis of planetary surfaces and cometary comae, and the tools for data visualization are presented.

  10. Road safety enhancement: an investigation on the visibility of on-road image projections using DMD-based pixel light systems

    NASA Astrophysics Data System (ADS)

    Rizvi, Sadiq; Ley, Peer-Phillip; Knöchelmann, Marvin; Lachmayer, Roland

    2018-02-01

    Research reveals that visual information forms the major portion of the data received while driving. At night, owing to the scarcity and sometimes the inhomogeneity of light, human physiology and psychology undergo a dramatic alteration. Although the likelihood of an accident is higher during the day due to heavier traffic, the most fatal accidents still occur at night. How can road safety be improved in limited lighting conditions using DMD-based high-resolution headlamps? DMD-based pixel light systems, utilizing HID and LED light sources, are able to address hundreds of thousands of pixels individually. Using camera information, this capability allows 'glare-free' light distributions that adapt to the needs of all road users. What really enables these systems to stand out, however, is their on-road image projection capability. This projection functionality may be used in cooperation with other driver assistance systems as an assist feature for the projection of navigation data, warning signs, car status information, etc. Since contrast sensitivity constitutes a decisive measure of the human visual function, a core question arises: what distributions of luminance in the projection space produce highly visible on-road image projections? This work seeks to address this question. Responses to sets of differently illuminated projections are collected from a group of participants and later interpreted using statistical data obtained with a luminance camera. Some aspects regarding the correlation between contrast ratio, symbol form and attention capture are also discussed.
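    The luminance-contrast measures that underlie such visibility studies are standard and easy to compute. A minimal sketch follows; the luminance values are illustrative, not measurements from this study:

```python
def weber_contrast(l_symbol, l_road):
    """Weber contrast of a projected symbol against the road surface."""
    return (l_symbol - l_road) / l_road

def michelson_contrast(l_max, l_min):
    """Michelson contrast, suited to two-level or periodic luminance patterns."""
    return (l_max - l_min) / (l_max + l_min)

# Illustrative luminance values in cd/m^2: symbol at 12, road surface at 2.
c = weber_contrast(12.0, 2.0)
```

    Which of the two definitions is used matters when comparing participants' responses against luminance-camera data, since they scale differently for bright symbols on dark roads.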

  11. A unified dynamic neural field model of goal directed eye movements

    NASA Astrophysics Data System (ADS)

    Quinton, J. C.; Goffart, L.

    2018-01-01

    Primates heavily rely on their visual system, which exploits signals of graded precision based on the eccentricity of the target in the visual field. Interactions with the environment involve actively selecting and focusing on visual targets or regions of interest, instead of contemplating an omnidirectional visual flow. Eye movements specifically allow foveating targets and tracking their motion. Once a target is brought within the central visual field, eye movements are usually classified into catch-up saccades (jumping from one orientation or fixation to another) and smooth pursuit (continuously tracking a target with low velocity). Building on existing dynamic neural field equations, we introduce a novel model that incorporates internal projections to better estimate the current target location (associated with a peak of activity). This estimate is then used to trigger an eye movement, leading to qualitatively different behaviours depending on the dynamics of the whole oculomotor system: (1) fixational eye movements due to small variations in the weights of projections when the target is stationary, (2) interceptive and catch-up saccades when peaks build and relax on the neural field, (3) smooth pursuit when the peak stabilises near the centre of the field, the system reaching a fixed-point attractor. Learning is nevertheless required for tracking a rapidly moving target, and the proposed model thus replicates recent results in the monkey, in which repeated exercise permits maintenance of the target within the central visual field at its current (here-and-now) location, despite the delays involved in transmitting retinal signals to the oculomotor neurons.
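    The peak-building dynamics described above can be illustrated with a generic Amari-type field update. This is a toy 1-D discretisation with hypothetical parameters, not the authors' equations: local excitation and surround inhibition let a peak form and persist at a stimulated location.

```python
import math

def dnf_step(u, stimulus, dt=0.1, tau=1.0, h=-0.5):
    """One Euler step of an Amari-type dynamic neural field (1-D, discretised).

    u        -- current field activity, one value per position
    stimulus -- external input (e.g. retinal evidence for the target)
    h        -- resting level keeping the field silent without input
    """
    n = len(u)
    f = [1.0 if v > 0 else 0.0 for v in u]      # Heaviside output nonlinearity
    def w(d):                                    # difference-of-Gaussians kernel:
        return 1.5 * math.exp(-d * d / 8.0) - 0.5 * math.exp(-d * d / 72.0)
    out = []
    for i in range(n):
        lateral = sum(w(i - j) * f[j] for j in range(n))
        out.append(u[i] + dt / tau * (-u[i] + h + stimulus[i] + lateral))
    return out

# A stationary target at position 10 lets a self-sustaining peak build up:
u = [0.0] * 21
stim = [2.0 if i == 10 else 0.0 for i in range(21)]
for _ in range(50):
    u = dnf_step(u, stim)
peak = max(range(21), key=lambda i: u[i])
```

    In the paper's setting, the peak location is what the internal projections refine, and its stability or relaxation is what separates pursuit from saccades.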

  12. Off-the-shelf Control of Data Analysis Software

    NASA Astrophysics Data System (ADS)

    Wampler, S.

    The Gemini Project must provide convenient access to data analysis facilities to a wide user community. The international nature of this community makes the selection of data analysis software particularly interesting, with staunch advocates of systems such as ADAM and IRAF among the users. Additionally, the continuing trends towards increased use of networked systems and distributed processing impose additional complexity. To meet these needs, the Gemini Project is proposing the novel approach of using low-cost, off-the-shelf software to abstract out both the control and distribution of data analysis from the functionality of the data analysis software. For example, the orthogonal nature of control versus function means that users might select analysis routines from both ADAM and IRAF as appropriate, distributing these routines across a network of machines. It is the belief of the Gemini Project that this approach results in a system that is highly flexible, maintainable, and inexpensive to develop. The Khoros visualization system is presented as an example of control software that is currently available for providing the control and distribution within a data analysis system. The visual programming environment provided with Khoros is also discussed as a means to providing convenient access to this control.

  13. A web platform for integrated surface water - groundwater modeling and data management

    NASA Astrophysics Data System (ADS)

    Fatkhutdinov, Aybulat; Stefan, Catalin; Junghanns, Ralf

    2016-04-01

    Model-based decision support systems are considered to be reliable and time-efficient tools for resources management in various hydrology-related fields. However, searching for and acquiring the required data, preparing the data sets for simulations, and post-processing, visualizing and publishing the simulation results often require significantly more work and time than performing the modeling itself. The purpose of the developed software is to combine data storage facilities, data processing instruments and modeling tools in a single platform, which can potentially reduce the time required for performing simulations and hence for decision making. The system is developed within the INOWAS (Innovative Web Based Decision Support System for Water Sustainability under a Changing Climate) project. The platform integrates spatially distributed catchment-scale rainfall-runoff, infiltration and groundwater flow models with data storage, processing and visualization tools. The concept is implemented as a web-GIS application built on free and open-source components, including the PostgreSQL database management system, the Python programming language for modeling purposes, Mapserver for visualizing and publishing the data, Openlayers for building the user interface, and others. The configuration of the system allows data input, storage, pre- and post-processing and visualization to be performed in a single uninterrupted workflow. In addition, realization of the decision support system as a web service makes it easy to retrieve and share data sets as well as simulation results over the internet, which gives significant advantages for collaborative work on projects and can significantly increase the usability of the decision support system.

  14. Electromagnetic tracking of motion in the proximity of computer generated graphical stimuli: a tutorial.

    PubMed

    Schnabel, Ulf H; Hegenloh, Michael; Müller, Hermann J; Zehetleitner, Michael

    2013-09-01

    Electromagnetic motion-tracking systems have the advantage of capturing the tempo-spatial kinematics of movements independently of the visibility of the sensors. However, they are limited in that they cannot be used in the proximity of electromagnetic field sources, such as computer monitors. This prevents exploiting the tracking potential of the sensor system together with that of computer-generated visual stimulation. Here we present a solution for presenting computer-generated visual stimulation that does not distort the electromagnetic field required for precise motion tracking, by means of a back projection medium. In one experiment, we verify that cathode ray tube monitors, as well as thin-film-transistor monitors, distort electro-magnetic sensor signals even at a distance of 18 cm. Our back projection medium, by contrast, leads to no distortion of the motion-tracking signals even when the sensor is touching the medium. This novel solution permits combining the advantages of electromagnetic motion tracking with computer-generated visual stimulation.

  15. Parallel Rendering of Large Time-Varying Volume Data

    NASA Technical Reports Server (NTRS)

    Garbutt, Alexander E.

    2005-01-01

    Interactive visualization of large time-varying 3D volume datasets has been and still is a great challenge to the modern computational world. It stretches the limits of the memory capacity, the disk space, the network bandwidth and the CPU speed of a conventional computer. In this SURF project, we propose to develop a parallel volume rendering program on SGI's Prism, a cluster computer equipped with state-of-the-art graphics hardware. The proposed program combines both parallel computing and hardware rendering in order to achieve an interactive rendering rate. We use 3D texture mapping and a hardware shader to implement 3D volume rendering on each workstation. We use SGI's VisServer to enable remote rendering using Prism's graphics hardware. Last, we will integrate this new program with ParVox, a parallel distributed visualization system developed at JPL. At the end of the project, we will demonstrate remote interactive visualization using this new hardware volume renderer on JPL's Prism system using a time-varying dataset from selected JPL applications.
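    The per-ray operation that texture-based volume renderers evaluate is front-to-back alpha compositing. The sketch below is a generic illustration of that operation (scalar color, hypothetical sample values), not the project's GPU code:

```python
def composite_front_to_back(samples):
    """Front-to-back alpha compositing along one viewing ray.

    samples -- list of (color, alpha) pairs ordered front to back.
    Traversal stops early once the ray is effectively opaque ("early ray
    termination"), a standard volume-rendering optimisation.
    """
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # remaining transparency weights each sample
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:
            break
    return color, alpha

# Two half-transparent samples: the front sample dominates the result.
c, a = composite_front_to_back([(1.0, 0.5), (0.0, 0.5)])
```

    Parallelism enters by partitioning rays or data bricks across workstations and compositing the partial results, which is the kind of division of labour a system like ParVox coordinates.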

  16. A case study of collaborative facilities use in engineering design

    NASA Astrophysics Data System (ADS)

    Monroe, Laura; Pugmire, David

    2010-01-01

    In this paper we describe the use of visualization tools and facilities in the collaborative design of a replacement weapons system, the Reliable Replacement Warhead (RRW). We used not only standard collaboration methods but also a range of visualization software and facilities to bring together domain specialists from laboratories across the country to collaborate on the design and integrate this disparate input early in the design. This was the first time in U.S. weapons history that a weapon had been designed in this collaborative manner. Benefits included projected cost savings, design improvements and increased understanding across the project.

  17. Survey of computer vision technology for UAV navigation

    NASA Astrophysics Data System (ADS)

    Xie, Bo; Fan, Xiang; Li, Sijian

    2017-11-01

    Navigation based on computer vision technology, which is highly independent, highly precise and not susceptible to electrical interference, has attracted increasing attention in the field of UAV navigation research. Early navigation projects based on computer vision were mainly applied to autonomous ground robots. In recent years, visual navigation systems have been widely applied to unmanned aerial vehicles, deep-space probes and underwater robots, further stimulating research on integrated navigation algorithms based on computer vision. In China, with the development of many types of UAV and the start of the third phase of the lunar exploration project, there has been significant progress in the study of visual navigation. The paper reviews the development of vision-based navigation in the field of UAV research and concludes that visual navigation is mainly applied to three aspects. (1) Acquisition of UAV navigation parameters. The parameters, including UAV attitude, position and velocity information, can be obtained from the relationship between the sensor images and the carrier's attitude, the relationship between instant matching images and reference images, and the relationship between the carrier's velocity and the characteristics of sequential images. (2) Autonomous obstacle avoidance. There are many ways to achieve obstacle avoidance in UAV navigation; the methods based on computer vision, including feature matching, template matching and image-frame analysis, are mainly introduced. (3) Target tracking and positioning. Using the obtained images, UAV position is calculated by optical flow methods, the MeanShift and CamShift algorithms, Kalman filtering and particle filter algorithms. The paper then describes three kinds of mainstream visual systems. (1) High-speed visual systems, which use a parallel structure so that image detection and processing are carried out at high speed; such systems are applied in rapid-response applications. (2) Distributed-network visual systems, in which several discrete image-acquisition sensors in different locations transmit image data to a node processor to increase the sampling rate. (3) Visual systems combined with observers, which pair image sensors with external observers to compensate for shortcomings of the visual equipment. To some degree, these systems overcome the limitations of early visual systems, including low frequency, low processing efficiency and strong noise. Finally, the difficulties of vision-based navigation in practical application are briefly discussed: (1) owing to the huge workload of image operations, the real-time performance of such systems is poor; (2) owing to strong environmental influences, their anti-interference ability is poor; (3) because each system works only in a particular environment, their adaptability is poor.
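    Template matching, one of the obstacle-avoidance and tracking methods the survey names, reduces to a sliding-window error minimisation. A 1-D toy sketch with hypothetical pixel values (real implementations work on 2-D patches, often with normalised correlation instead of SSD):

```python
def ssd(patch_a, patch_b):
    """Sum of squared differences between two equally sized patches."""
    return sum((a - b) ** 2 for a, b in zip(patch_a, patch_b))

def match_template(row, template):
    """Locate a 1-D template in an image row by exhaustive SSD search."""
    best_pos, best_score = 0, float("inf")
    for pos in range(len(row) - len(template) + 1):
        score = ssd(row[pos:pos + len(template)], template)
        if score < best_score:
            best_pos, best_score = pos, score
    return best_pos

# Toy example: find the bright target pattern inside a scan line.
pos = match_template([0, 0, 1, 9, 8, 1, 0, 0], [9, 8, 1])
```

    The survey's complaint about poor real-time performance is visible even here: exhaustive search cost grows with image size times template size, which is why practical trackers restrict the search window around the last known position.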

  18. A Model-Driven Visualization Tool for Use with Model-Based Systems Engineering Projects

    NASA Technical Reports Server (NTRS)

    Trase, Kathryn; Fink, Eric

    2014-01-01

    Model-Based Systems Engineering (MBSE) promotes increased consistency between a system's design and its design documentation through the use of an object-oriented system model. The creation of this system model facilitates data presentation by providing a mechanism from which information can be extracted by automated manipulation of model content. Existing MBSE tools enable model creation, but are often too complex for the unfamiliar model viewer to easily use. These tools do not yet provide many opportunities for easing into the development and use of a system model when system design documentation already exists. This study creates a Systems Modeling Language (SysML) Document Traceability Framework (SDTF) for integrating design documentation with a system model, and develops an Interactive Visualization Engine for SysML Tools (InVEST) that exports consistent, clear, and concise views of SysML model data. These exported views are each meaningful to a variety of project stakeholders with differing subjects of concern and depth of technical involvement. InVEST allows a model user to generate multiple views and reports from a MBSE model, including wiki pages and interactive visualizations of data. System data can also be filtered to present only the information relevant to the particular stakeholder, resulting in a view that is both consistent with the larger system model and other model views. Viewing the relationships between system artifacts and documentation, and filtering through data to see specialized views, improves the value of the system as a whole, as data becomes information.
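    The stakeholder filtering described above can be illustrated with a deliberately simplified element schema. The `concerns` field and element names below are hypothetical; InVEST's actual data model is not described in the abstract:

```python
def stakeholder_view(model_elements, concern):
    """Filter system-model elements down to one stakeholder's concern.

    model_elements -- list of dicts with a 'name' and a set of 'concerns'
    (a hypothetical schema used purely for illustration).
    """
    return [e["name"] for e in model_elements if concern in e["concerns"]]

model = [
    {"name": "Thermal subsystem", "concerns": {"thermal", "mass"}},
    {"name": "Flight software",   "concerns": {"software"}},
    {"name": "Battery",           "concerns": {"power", "mass", "thermal"}},
]
view = stakeholder_view(model, "thermal")
```

    The key property, echoed in the abstract, is that each filtered view is derived from the single shared model rather than maintained separately, so views cannot drift out of sync with the design.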

  19. 2014 Earth System Grid Federation and Ultrascale Visualization Climate Data Analysis Tools Conference Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Dean N.

    2015-01-27

    The climate and weather data science community met December 9–11, 2014, in Livermore, California, for the fourth annual Earth System Grid Federation (ESGF) and Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT) Face-to-Face (F2F) Conference, hosted by the Department of Energy, National Aeronautics and Space Administration, National Oceanic and Atmospheric Administration, the European Infrastructure for the European Network of Earth System Modelling, and the Australian Department of Education. Both ESGF and UV-CDAT remain global collaborations committed to developing a new generation of open-source software infrastructure that provides distributed access and analysis to simulated and observed data from the climate and weather communities. The tools and infrastructure created under these international multi-agency collaborations are critical to understanding extreme weather conditions and long-term climate change. In addition, the F2F conference fosters a stronger climate and weather data science community and facilitates a stronger federated software infrastructure. The 2014 F2F conference detailed the progress of ESGF, UV-CDAT, and other community efforts over the year and set new priorities and requirements for existing and impending national and international community projects, such as the Coupled Model Intercomparison Project Phase Six. Specifically discussed at the conference were project capabilities and enhancement needs for data distribution, analysis, visualization, hardware and network infrastructure, standards, and resources.

  20. A novel apparatus for testing binocular function using the 'CyberDome' three-dimensional hemispherical visual display system.

    PubMed

    Handa, T; Ishikawa, H; Shimizu, K; Kawamura, R; Nakayama, H; Sawada, K

    2009-11-01

    Virtual reality has recently been highlighted as a promising medium for visual presentation and entertainment. A novel apparatus for testing binocular visual function using a hemispherical visual display system, 'CyberDome', has been developed and tested. Subjects comprised 40 volunteers (mean age, 21.63 years) with corrected visual acuity of -0.08 (LogMAR) or better, and stereoacuity better than 100 s of arc on the Titmus stereo test. Subjects experienced the sensation of being surrounded by visual images, a feature of the 'CyberDome' hemispherical visual display system. Visual images for the right and left eyes were projected and superimposed on the dome screen, allowing test images to be seen independently by each eye using polarizing glasses. The hemispherical visual display was 1.4 m in diameter. Three test parameters were evaluated: simultaneous perception (subjective angle of strabismus), motor fusion amplitude (convergence and divergence), and stereopsis (binocular disparity at 1260, 840, and 420 s of arc). Testing was performed in volunteer subjects with normal binocular vision, and results were compared with those using a major amblyoscope. Subjective angle of strabismus and motor fusion amplitude showed a significant correlation between our test and the major amblyoscope. All subjects could perceive the stereoscopic target with a binocular disparity of 480 s of arc. Our novel apparatus using the CyberDome, a hemispherical visual display system, was able to quantitatively evaluate binocular function. This apparatus offers clinical promise in the evaluation of binocular function.
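    The arc-second disparity values used in such stereopsis tests follow directly from viewing geometry. A minimal sketch of that conversion, assuming a 63 mm interpupillary distance and viewing from roughly the dome's centre (0.7 m, half the 1.4 m diameter); these are illustrative assumptions, not parameters stated by the authors:

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0

def disparity_arcsec(ipd_m, d1_m, d2_m):
    """Binocular disparity (seconds of arc) between two viewing distances.

    ipd_m -- interpupillary distance; d1_m, d2_m -- distances to two targets.
    The disparity is the difference between the two convergence angles.
    """
    conv = lambda d: 2.0 * math.atan(ipd_m / (2.0 * d))
    return abs(conv(d1_m) - conv(d2_m)) * ARCSEC_PER_RAD

# A 5 mm depth step at about 0.7 m yields a disparity on the order of
# 130 arcsec, i.e. well within the test's 420-1260 arcsec range.
eta = disparity_arcsec(0.063, 0.700, 0.705)
```

    On a projected display the same disparities are produced by horizontally offsetting the left- and right-eye images, which is what the polarizing glasses separate.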

  1. Development of driver’s assistant system of additional visual information of blind areas for Gazelle Next

    NASA Astrophysics Data System (ADS)

    Makarov, V.; Korelin, O.; Koblyakov, D.; Kostin, S.; Komandirov, A.

    2018-02-01

    The article is devoted to the development of an Advanced Driver Assistance System (ADAS) for the GAZelle NEXT car. The project is aimed at developing a visual information system for the driver integrated into the windshield pillars. The developed system implements the following functions: assistance in maneuvering and parking; recognition of road signs; warning the driver about the possibility of a frontal collision; monitoring of blind zones; "transparent" vision through the windshield pillars, widening the field of view behind them; visual and audible information about the traffic situation; lane-departure monitoring; monitoring of the driver's condition; a navigation system; and an all-round view. The layout of the sensors of the developed driver visual information system is provided, and the operation of the systems on a prototype vehicle is considered. Possible changes to the interior and dashboard of the car are given. The implementation is aimed at improved informing of the driver about the environment and the development of an ergonomic interior for this system within the new functional cabin of the GAZelle NEXT vehicle equipped with a visual information system for the driver.

  2. Seeing the Light: A Classroom-Sized Pinhole Camera Demonstration for Teaching Vision

    ERIC Educational Resources Information Center

    Prull, Matthew W.; Banks, William P.

    2005-01-01

    We describe a classroom-sized pinhole camera demonstration (camera obscura) designed to enhance students' learning of the visual system. The demonstration consists of a suspended rear-projection screen onto which the outside environment projects images through a small hole in a classroom window. Students can observe these images in a darkened…

  3. SATURATION MEASUREMENT OF IMMISCIBLE FLUIDS IN 2-D STATIC SYSTEMS: VALIDATION BY LIGHT TRANSMISSION VISUALIZATION (SAN FRANCISCO, CA)

    EPA Science Inventory

    This study is a part of an ongoing research project that aims at assessing the environmental benefits of DNAPL removal. The laboratory part of the research project is to examine the functional relationship between DNAPL architecture, mass removal and contaminant mass flux in 2-D ...

  4. Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

    NASA Technical Reports Server (NTRS)

    Brugel, Edward W.; Domik, Gitta O.; Ayres, Thomas R.

    1993-01-01

    The goal of this project was to support the scientific analysis of multi-spectral astrophysical data by means of scientific visualization. Scientific visualization offers its greatest value if it is not used as a method separate or alternative to other data analysis methods but rather in addition to these methods. Together with quantitative analysis of data, such as offered by statistical analysis, image or signal processing, visualization attempts to explore all information inherent in astrophysical data in the most effective way. Data visualization is one aspect of data analysis. Our taxonomy as developed in Section 2 includes identification and access to existing information, preprocessing and quantitative analysis of data, visual representation and the user interface as major components to the software environment of astrophysical data analysis. In pursuing our goal to provide methods and tools for scientific visualization of multi-spectral astrophysical data, we therefore looked at scientific data analysis as one whole process, adding visualization tools to an already existing environment and integrating the various components that define a scientific data analysis environment. As long as the software development process of each component is separate from all other components, users of data analysis software are constantly interrupted in their scientific work in order to convert from one data format to another, or to move from one storage medium to another, or to switch from one user interface to another. We also took an in-depth look at scientific visualization and its underlying concepts, current visualization systems, their contributions, and their shortcomings. The role of data visualization is to stimulate mental processes different from quantitative data analysis, such as the perception of spatial relationships or the discovery of patterns or anomalies while browsing through large data sets. 
Visualization often leads to an intuitive understanding of the meaning of data values and their relationships by sacrificing accuracy in interpreting the data values. In order to be accurate in the interpretation, data values need to be measured, computed on, and compared to theoretical or empirical models (quantitative analysis). If visualization software hampers quantitative analysis (which happens with some commercial visualization products), its use is greatly diminished for astrophysical data analysis. The software system STAR (Scientific Toolkit for Astrophysical Research) was developed as a prototype during the course of the project to better understand the pragmatic concerns raised in the project. STAR led to a better understanding of the importance of collaboration between astrophysicists and computer scientists.

  5. Spatial Paradigm for Information Retrieval and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The SPIRE system consists of software for visual analysis of primarily text based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial proximity display (Galaxies or Themescape) where items (documents and/or themes) visually close to each other are known to have content which is close to each other. Innovative interaction techniques then allow for dynamic visual analysis of large text based information spaces.
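    SPIRE's specific proximity algorithms are not given in this record, but the core notion, that documents with similar word content should land near each other on the display, can be sketched with a plain bag-of-words cosine similarity (illustrative toy documents; real systems weight terms and reduce dimensionality before layout):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity of two documents from their word-frequency vectors."""
    va, vb = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)          # Counter returns 0 for missing words
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Thematically related documents score higher than unrelated ones, which is
# what lets a spatial layout place them close together on screen.
s_related = cosine_similarity("solar wind plasma", "solar plasma waves")
s_unrelated = cosine_similarity("solar wind plasma", "tax reform bill")
```

    A Galaxies- or Themescape-style view then embeds these pairwise similarities into 2-D so that on-screen distance approximates content distance.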

  6. SPIRE1.03. Spatial Paradigm for Information Retrieval and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, K.J.; Bohn, S.; Crow, V.

    The SPIRE system consists of software for visual analysis of primarily text based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial proximity display (Galaxies or Themescape) where items (documents and/or themes) visually close to each other are known to have content which is close to each other. Innovative interaction techniques then allow for dynamic visual analysis of large text based information spaces.

  7. Human-System Integration Scorecard Update to VB.Net

    NASA Technical Reports Server (NTRS)

    Sanders, Blaze D.

    2009-01-01

    The purpose of this project was to create Human-System Integration (HSI) scorecard software, which could be utilized to validate that human factors have been considered early in hardware/system specifications and design. The HSI scorecard is partially based upon the revised Human Rating Requirements (HRR) intended for NASA's Constellation program. This software scorecard will allow for quick appraisal of HSI factors by using visual aids to highlight low and rapidly changing scores. This project consisted of creating a user-friendly Visual Basic program that could be easily distributed and updated, to and by fellow colleagues. Updating the Microsoft Word version of the HSI scorecard to a computer application allows for the addition of useful features, improved ease of use, and decreased completion time for the user. One significant addition is the ability to create Microsoft Excel graphs automatically from scorecard data, to allow for clear presentation of problematic areas. The purpose of this paper is to describe the rationale and benefits of creating the HSI scorecard software, the problems and goals of the project, and future work that could be done.

  8. High performance geospatial and climate data visualization using GeoJS

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Beezley, J. D.

    2015-12-01

    GeoJS (https://github.com/OpenGeoscience/geojs) is an open-source library developed to support interactive scientific and geospatial visualization of climate and earth science datasets in a web environment. GeoJS has a convenient application programming interface (API) that enables users to harness the fast performance of the WebGL and Canvas 2D APIs together with sophisticated Scalable Vector Graphics (SVG) features in a consistent and convenient manner. We started the project in response to the need for an open-source JavaScript library that can combine traditional geographic information systems (GIS) and scientific visualization on the web. Many libraries, some of which are open source, support mapping or other GIS capabilities, but lack the features required to visualize scientific and other geospatial datasets. For instance, such libraries are not capable of rendering climate plots from NetCDF files, and some libraries are limited with regard to geoinformatics (infovis in a geospatial environment). While libraries such as d3.js are extremely powerful for these kinds of plots, in order to integrate them into other GIS libraries, the construction of geoinformatics visualizations must be completed manually and separately, or the code must somehow be mixed in an unintuitive way. We developed GeoJS with the following motivations:
    • To create an open-source geovisualization and GIS library that combines scientific visualization with GIS and informatics
    • To develop an extensible library that can combine data from multiple sources and render them using multiple backends
    • To build a library that works well with existing scientific visualization tools such as VTK
    We have successfully deployed GeoJS-based applications for multiple domains across various projects. The ClimatePipes project funded by the Department of Energy, for example, used GeoJS to visualize NetCDF datasets from climate data archives.
Other projects built visualizations using GeoJS for interactively exploring data and analysis regarding 1) the human trafficking domain, 2) New York City taxi drop-offs and pick-ups, and 3) the Ebola outbreak. GeoJS supports advanced visualization features such as picking and selecting, as well as clustering. It also supports 2D contour plots, vector plots, heat maps, and geospatial graphs.

  9. A pseudo-haptic knot diagram interface

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Weng, Jianguang; Hanson, Andrew J.

    2011-01-01

To make progress in understanding knot theory, we need to interact with projected representations of mathematical knots, which are of course continuous in 3D but significantly interrupted in the projective images. One way to achieve this goal is to design an interactive system that allows us to sketch 2D knot diagrams by taking advantage of a collision-sensing controller and to explore their underlying smooth structures through continuous motion. Recent advances in interaction techniques allow progress in this direction. Pseudo-haptics, which simulates haptic effects using purely visual feedback, can be used to develop such an interactive system. This paper outlines one such pseudo-haptic knot diagram interface. Our interface derives from the familiar pencil-and-paper process of drawing 2D knot diagrams and provides haptic-like sensations to facilitate the creation and exploration of knot diagrams. A centerpiece of the interaction model simulates a "physically" reactive mouse cursor, which is exploited to resolve the apparent conflict between the continuous structure of the actual smooth knot and the visual discontinuities in the knot diagram representation. Another value of exploiting pseudo-haptics is that an acceleration (or deceleration) of the mouse cursor (or surface locator) can be used to indicate the slope of the curve (or surface) whose projective image is being explored. By exploiting these additional visual cues, we proceed to a full-featured extension to a pseudo-haptic 4D visualization system that simulates continuous navigation on 4D objects and allows us to sense the bumps and holes in the fourth dimension. Preliminary tests of the software show that the main features of the interface overcome some expected perceptual limitations in our interaction with 2D knot diagrams of 3D knots and 3D projective images of 4D mathematical objects.
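The core pseudo-haptic idea in this record, modulating the cursor's control/display gain from the slope of the simulated curve or surface, can be sketched minimally as follows. This is an illustration only: the function names and the linear gain law are assumptions, not the paper's actual model.

```python
def cursor_gain(slope, k=0.5):
    """Pseudo-haptic control/display gain (assumed linear law).

    A positive ("uphill") slope shrinks the gain so the displayed cursor
    decelerates; a negative ("downhill") slope grows it so the cursor
    accelerates, creating a visual illusion of bumps and holes.
    """
    if slope >= 0:
        return 1.0 / (1.0 + k * slope)
    return 1.0 + k * (-slope)

def displayed_step(mouse_step, slope, k=0.5):
    """Scale a physical mouse displacement by the slope-dependent gain."""
    return mouse_step * cursor_gain(slope, k)
```

With a flat surface the cursor moves one-to-one; on an uphill slope the same physical motion yields a smaller displayed step, which users perceive as resistance.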

  10. USL NASA/RECON project presentations at the 1985 ACM Computer Science Conference: Abstracts and visuals

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Chum, Frank Y.; Gallagher, Suzy; Granier, Martin; Hall, Philip P.; Moreau, Dennis R.; Triantafyllopoulos, Spiros

    1985-01-01

    This Working Paper Series entry represents the abstracts and visuals associated with presentations delivered by six USL NASA/RECON research team members at the above named conference. The presentations highlight various aspects of NASA contract activities pursued by the participants as they relate to individual research projects. The titles of the six presentations are as follows: (1) The Specification and Design of a Distributed Workstation; (2) An Innovative, Multidisciplinary Educational Program in Interactive Information Storage and Retrieval; (3) Critical Comparative Analysis of the Major Commercial IS and R Systems; (4) Design Criteria for a PC-Based Common User Interface to Remote Information Systems; (5) The Design of an Object-Oriented Graphics Interface; and (6) Knowledge-Based Information Retrieval: Techniques and Applications.

  11. Virtual reality in the operating room of the future.

    PubMed

    Müller, W; Grosskopf, S; Hildebrand, A; Malkewitz, R; Ziegler, R

    1997-01-01

In cooperation with the Max-Delbrück-Centrum/Robert-Rössle-Klinik (MDC/RRK) in Berlin, the Fraunhofer Institute for Computer Graphics is currently designing and developing a scenario for the operating room of the future. The goal of this project is to integrate new analysis, visualization and interaction tools in order to optimize and refine tumor diagnostics and therapy in combination with laser technology and remote stereoscopic video transfer. Hence, a human 3-D reference model is reconstructed using CT, MR, and anatomical cryosection images from the National Library of Medicine's Visible Human Project. Applying segmentation algorithms and surface-polygonization methods, a 3-D representation is obtained. In addition, a "fly-through" of the virtual patient is realized using 3-D input devices (data glove, tracking system, 6-DOF mouse). In this way, the surgeon can experience entirely new perspectives of the human anatomy. Moreover, using a virtual cutting plane, any cut of the CT volume can be interactively placed and visualized in real time. In conclusion, this project delivers visions for the application of effective visualization and VR systems. Commonly known as Virtual Prototyping and long applied by the automotive industry, this approach shows that VR techniques can also be used to prototype an operating room. After evaluating the design and functionality of the virtual operating room, MDC plans to build real ORs in the near future. The use of VR techniques provides a more natural interface for the surgeon in the OR (e.g., controlling interactions by voice input). Besides preoperative planning, future work will focus on supporting the surgeon in performing surgical interventions. An optimal synthesis of real and synthetic data, and the inclusion of visual, aural, and tactile senses in virtual environments, can meet these requirements. This Augmented Reality could represent the environment for the surgeons of tomorrow.

  12. Physically Based Rendering in the Nightshade NG Visualization Platform

    NASA Astrophysics Data System (ADS)

    Berglund, Karrie; Larey-Williams, Trystan; Spearman, Rob; Bogard, Arthur

    2015-01-01

This poster describes our work on creating a physically based rendering model in the Nightshade NG planetarium simulation and visualization software (project website: NightshadeSoftware.org). We discuss techniques used for rendering realistic scenes in the universe and for dealing with astronomical distances in real time on consumer hardware. We also discuss some of the challenges of rewriting the software from scratch, a project which began in 2011.
Nightshade NG can be a powerful tool for sharing data and visualizations. The desktop version of the software is free for anyone to download, use, and modify; it runs on Windows and Linux (and eventually Mac). If you are looking to disseminate your data or models, please stop by to discuss how we can work together.
Nightshade software is used in hundreds of digital planetarium systems worldwide, and countless teachers and astronomy education groups run the software on flat screens. This wide use makes Nightshade an effective tool for dissemination to educators and the public.
Nightshade NG is an especially powerful visualization tool when projected on a dome. We invite everyone to enter our inflatable dome in the exhibit hall to see this software in a 3D environment.

  13. Advanced engineering environment collaboration project.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamph, Jane Ann; Pomplun, Alan R.; Kiba, Grant W.

    2008-12-01

The Advanced Engineering Environment (AEE) is a model for an engineering design and communications system that will enhance project collaboration throughout the nuclear weapons complex (NWC). Sandia National Laboratories and Parametric Technology Corporation (PTC) worked together on a prototype project to evaluate the suitability of a portion of PTC's Windchill 9.0 suite of data management, design and collaboration tools as the basis for an AEE. The AEE project team implemented Windchill 9.0 development servers in both classified and unclassified domains and used them to test and evaluate the Windchill tool suite relative to the needs of the NWC, using weapons project use cases. A primary deliverable was the development of a new real-time collaborative desktop design and engineering process using PDMLink (data management tool), Pro/Engineer (mechanical computer aided design tool) and ProductView Lite (visualization tool). Additional project activities included evaluations of PTC's electrical computer aided design, visualization, and engineering calculations applications. This report documents the AEE project work to share information and lessons learned with other NWC sites. It also provides PTC with recommendations for improving their products for NWC applications.

  14. Project visual analysis for the Allegheny National Forest

    Treesearch

    Gary W. Kell

    1979-01-01

The Project Visual Analysis is a landscape assessment procedure involving forest vegetative manipulation. A logical step-by-step analysis leads the user to a specific set of landscape management guidelines to be used as an aid in designing a project or in evaluating whether the proposed project's impacts will meet visual objectives. Key elements within the procedure are...

  15. The emergence of polychronization and feature binding in a spiking neural network model of the primate ventral visual system.

    PubMed

    Eguchi, Akihiro; Isbister, James B; Ahmad, Nasir; Stringer, Simon

    2018-07-01

    We present a hierarchical neural network model, in which subpopulations of neurons develop fixed and regularly repeating temporal chains of spikes (polychronization), which respond specifically to randomized Poisson spike trains representing the input training images. The performance is improved by including top-down and lateral synaptic connections, as well as introducing multiple synaptic contacts between each pair of pre- and postsynaptic neurons, with different synaptic contacts having different axonal delays. Spike-timing-dependent plasticity thus allows the model to select the most effective axonal transmission delay between neurons. Furthermore, neurons representing the binding relationship between low-level and high-level visual features emerge through visually guided learning. This begins to provide a way forward to solving the classic feature binding problem in visual neuroscience and leads to a new hypothesis concerning how information about visual features at every spatial scale may be projected upward through successive neuronal layers. We name this hypothetical upward projection of information the "holographic principle." (PsycINFO Database Record (c) 2018 APA, all rights reserved).
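The mechanism described above, multiple synaptic contacts per neuron pair with different axonal delays, pruned by spike-timing-dependent plasticity so that the most effective delay wins, can be sketched with the classic exponential STDP window. The window parameters, delays, and spike times below are illustrative assumptions, not the paper's fitted values.

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Exponential STDP window.

    dt = t_post - t_arrival (ms). An input arriving just before the
    postsynaptic spike (dt > 0) is potentiated; one arriving after
    (dt < 0) is depressed.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

# Several contacts between one pre/post pair, each with its own axonal delay (ms).
delays = [1.0, 5.0, 9.0]
weights = {d: 0.5 for d in delays}

t_pre, t_post = 0.0, 9.5  # presynaptic and postsynaptic spike times (ms)
for d in delays:
    weights[d] += stdp_dw(t_post - (t_pre + d))

# The contact whose delay makes its spike arrive closest before t_post
# receives the largest potentiation, so STDP effectively selects that delay.
best = max(weights, key=weights.get)
```

Here the 9 ms contact delivers its spike 0.5 ms before the postsynaptic spike and is strengthened most, illustrating how plasticity can pick out the most effective transmission delay.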

  16. Immersive Visual Data Analysis For Geoscience Using Commodity VR Hardware

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

Immersive visualization using virtual reality (VR) display technology offers tremendous benefits for the visual analysis of complex three-dimensional data like those commonly obtained from geophysical and geological observations and models. Unlike "traditional" visualization, which has to project 3D data onto a 2D screen for display, VR can side-step this projection and display 3D data directly, in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection. As a result, researchers can apply their spatial reasoning skills to virtual data in the same way they can to real objects or environments. The UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://keckcaves.org) has been developing VR methods for data analysis since 2005, but the high cost of VR displays has prevented large-scale deployment and adoption of KeckCAVES technology. The recent emergence of high-quality commodity VR, spearheaded by the Oculus Rift and HTC Vive, has fundamentally changed the field. With KeckCAVES' foundational VR operating system, Vrui, now running natively on the HTC Vive, all KeckCAVES visualization software, including 3D Visualizer, LiDAR Viewer, Crusta, Nanotech Construction Kit, and ProtoShop, is now available to small labs, single researchers, and even home users. LiDAR Viewer and Crusta have been used for rapid response to geologic events including earthquakes and landslides, to visualize the impacts of sea-level rise, to investigate reconstructed paleoceanographic masses, and for exploration of the surface of Mars. The Nanotech Construction Kit is being used to explore the phases of carbon in Earth's deep interior, while ProtoShop can be used to construct and investigate protein structures.

  17. A Visual Editor in Java for View

    NASA Technical Reports Server (NTRS)

    Stansifer, Ryan

    2000-01-01

In this project we continued the development of a visual editor in the Java programming language to create screens on which to display real-time data. The data comes from the numerous systems monitoring the operation of the space shuttle while on the ground and in space, and from the many tests of subsystems. The data can be displayed on any computer platform running a Java-enabled World Wide Web (WWW) browser and connected to the Internet. Previously a special-purpose program had been written to display data on emulations of character-based display screens used for many years at NASA. The goal now is to display bit-mapped screens created by a visual editor. We report here on the visual editor that creates the display screens. This project continues the work we had done previously. Previously we had followed the design of the 'beanbox,' a prototype visual editor created by Sun Microsystems. We abandoned this approach and implemented a prototype using a more direct approach. In addition, our prototype is based on newly released Java 2 graphical user interface (GUI) libraries. The result has been a visually more appealing appearance and a more robust application.

  18. Evolving Our Evaluation of Luminous Environments

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2016-01-01

Advances in solid-state light-emitting technologies and optics for lighting and visual communication necessitate a re-evaluation of how NASA envisions spacecraft lighting architectures and how NASA uses industry standards for the design and evaluation of lighting systems. Current NASA lighting standards and requirements for existing architectures focus separately on the ability of a lighting system to throw light against a surface and the ability of a display system to provide the appropriate visual contrast. This project investigated large luminous surface lamps as an alternative or supplement to overhead lighting. The technology was evaluated for uniformity and power consumption.

  19. Inferring cortical function in the mouse visual system through large-scale systems neuroscience.

    PubMed

    Hawrylycz, Michael; Anastassiou, Costas; Arkhipov, Anton; Berg, Jim; Buice, Michael; Cain, Nicholas; Gouwens, Nathan W; Gratiy, Sergey; Iyer, Ramakrishnan; Lee, Jung Hoon; Mihalas, Stefan; Mitelut, Catalin; Olsen, Shawn; Reid, R Clay; Teeter, Corinne; de Vries, Saskia; Waters, Jack; Zeng, Hongkui; Koch, Christof

    2016-07-05

The scientific mission of Project MindScope is to understand the neocortex, the part of the mammalian brain that gives rise to perception, memory, intelligence, and consciousness. We seek to quantitatively evaluate the hypothesis that neocortex is a relatively homogeneous tissue, with smaller functional modules that perform a common computational function replicated across regions. We here focus on the mouse as a mammalian model organism whose genetics, physiology, and behavior can be readily studied and manipulated in the laboratory. We seek to describe the operation of cortical circuitry at the computational level by comprehensively cataloging and characterizing its cellular building blocks along with their dynamics and their cell-type-specific connectivities. The project is also building large-scale experimental platforms (i.e., brain observatories) to record the activity of large populations of cortical neurons in behaving mice subject to visual stimuli. A primary goal is to understand the series of operations from visual input in the retina to behavior by observing and modeling the physical transformations of signals in the corticothalamic system. We here focus on the contribution that computer modeling and theory make to this long-term effort.

  20. A distributed analysis and visualization system for model and observational data

    NASA Technical Reports Server (NTRS)

    Wilhelmson, Robert B.

    1994-01-01

Software was developed with NASA support to aid in the analysis and display of the massive amounts of data generated from satellites, observational field programs, and from model simulations. This software was developed in the context of the PATHFINDER (Probing ATmospHeric Flows in an Interactive and Distributed EnviRonment) Project. The overall aim of this project is to create a flexible, modular, and distributed environment for data handling, modeling simulations, data analysis, and visualization of atmospheric and fluid flows. Software completed with NASA support includes GEMPAK analysis, data handling, and display modules, for which collaborators at NASA had primary responsibility, and prototype software modules for three-dimensional interactive and distributed control and display as well as data handling, for which NCSA was responsible. Overall process control was handled through a scientific and visualization application builder from Silicon Graphics known as the Iris Explorer. In addition, the GEMPAK-related work (GEMVIS) was also ported to the Advanced Visualization System (AVS) application builder. Many modules were developed to enhance those already available in Iris Explorer, including HDF file support, improved visualization and display, simple lattice math, and the handling of metadata through development of a new grid datatype. Complete source and runtime binaries along with on-line documentation are available via the World Wide Web at: http://redrock.ncsa.uiuc.edu/PATHFINDER/pathre12/top/top.html.

  1. Cross-sectoral optimization and visualization of transformation processes in urban water infrastructures in rural areas.

    PubMed

    Baron, S; Kaufmann Alves, I; Schmitt, T G; Schöffel, S; Schwank, J

    2015-01-01

    Predicted demographic, climatic and socio-economic changes will require adaptations of existing water supply and wastewater disposal systems. Especially in rural areas, these new challenges will affect the functionality of the present systems. This paper presents a joint interdisciplinary research project with the objective of developing an innovative software-based optimization and decision support system for the implementation of long-term transformations of existing infrastructures of water supply, wastewater and energy. The concept of the decision support and optimization tool is described and visualization methods for the presentation of results are illustrated. The model is tested in a rural case study region in the Southwest of Germany. A transformation strategy for a decentralized wastewater treatment concept and its visualization are presented for a model village.

  2. Manufacturing Methods & Technology Project Execution Report. First Half CY 80

    DTIC Science & Technology

    1980-08-01

1 80 7371 INTEGRATED BLADE INSPECTION SYSTEM (IBIS): INSPECTION OF TURBINE ENGINE BLADES AND VANES NECESSITATES HIGH ACCURACY. THE EFFORT IS TIME... OPTICAL INSPECTION OF PRINTED CIRCUIT BOARDS: OPERATOR FATIGUE ALLOWS MANY BAD PCBS TO PASS VISUAL INSPECTION. 29 PROJECTS ADDED IN 1ST HALF, CY80... 2631 TITLED "CRITICAL ELECTROMAGNETIC INSPECTION PROBLEMS WITHIN THE ARMY." FUTURE STATUS WILL BE INCLUDED IN THE PROJECT STATUS FOR M 80 6350

  3. NASA Planning for Orion Multi-Purpose Crew Vehicle Ground Operations

    NASA Technical Reports Server (NTRS)

    Letchworth, Gary; Schlierf, Roland

    2011-01-01

The NASA Orion Ground Processing Team was originally formed by the Kennedy Space Center (KSC) Constellation (Cx) Project Office's Orion Division to define, refine and mature pre-launch and post-landing ground operations for the Orion human spacecraft. The multidisciplinary KSC Orion team consisted of KSC civil servant, SAIC, Productivity Apex, Inc. and Boeing-CAPPS engineers, project managers and safety engineers, as well as engineers from Constellation's Orion Project and the Lockheed Martin Orion prime contractor. The team evaluated the Orion design configurations as the spacecraft concept matured between the Systems Design Review (SDR), Systems Requirement Review (SRR) and Preliminary Design Review (PDR). The team functionally decomposed pre-launch and post-landing steps at three levels of detail, or tiers, beginning with functional flow block diagrams (FFBDs). The third-tier FFBDs were used to build logic networks and nominal timelines. Orion ground support equipment (GSE) was identified and mapped to each step. This information was subsequently used in developing lower-level operations steps in a Ground Operations Planning Document PDR product. Subject matter experts for each spacecraft and GSE subsystem were used to define 5th-95th percentile processing times for each FFBD step, using the Delphi Method. Discrete event simulations used this information and the logic network to provide processing timeline confidence intervals for launch rate assessments. The team also used the capabilities of the KSC Visualization Lab, the FFBDs and knowledge of the spacecraft, GSE and facilities to build visualizations of Orion pre-launch and post-landing processing at KSC. Visualizations were a powerful tool for communicating planned operations within the KSC community (i.e., the Ground Systems design team), and externally to the Orion Project, Lockheed Martin spacecraft designers and other Constellation Program stakeholders during the SRR to PDR timeframe.
Other operations planning tools included Kaizen/Lean events, mockups and human factors analysis. The majority of products developed by this team are applicable as KSC prepares 21st Century Ground Systems for the Orion Multi-Purpose Crew Vehicle and Space Launch System.
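The simulation approach described above, expert-elicited 5th-95th percentile step durations fed into a discrete-event model to produce timeline confidence intervals, can be sketched with a simple Monte Carlo over a serial chain of steps. The step names, durations, and the choice of a triangular distribution bounded by the elicited percentiles are illustrative assumptions, not Orion data or the team's actual model.

```python
import random

# Hypothetical serial FFBD steps with expert-elicited (5th, 95th)
# percentile durations in hours (illustrative numbers only).
steps = {"offload": (2.0, 6.0), "inspect": (4.0, 10.0), "service": (8.0, 20.0)}

def sample_timeline(rng):
    # Assumption: approximate each step with a triangular distribution
    # whose low/high endpoints are the elicited percentile values.
    return sum(rng.triangular(lo, hi) for lo, hi in steps.values())

rng = random.Random(42)
totals = sorted(sample_timeline(rng) for _ in range(10_000))

# Empirical 90% confidence interval for total processing time.
ci90 = (totals[500], totals[9499])
```

A real discrete-event simulation would also model the logic network (parallel branches, resource constraints); summing a serial chain is the degenerate case that shows how per-step uncertainty propagates into a timeline interval.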

  4. The Earth System Documentation (ES-DOC) project

    NASA Astrophysics Data System (ADS)

    Murphy, S.; Greenslade, M. A.; Treshansky, A.; DeLuca, C.; Guilyardi, E.; Denvil, S.

    2013-12-01

Earth System Documentation (ES-DOC) is an international project supplying high quality tools and services in support of Earth system documentation creation, analysis and dissemination. It is nurturing a sustainable standards-based documentation ecosystem that aims to become an integral part of the next generation of exa-scale dataset archives. ES-DOC leverages open source software, and applies a software development methodology that places end-user narratives at the heart of all it does. ES-DOC has initially focused upon nurturing the Earth System Model (ESM) documentation eco-system. Within this context ES-DOC leverages the emerging Common Information Model (CIM) metadata standard, which has supported the following projects:
** Coupled Model Inter-comparison Project Phase 5 (CMIP5)
** Dynamical Core Model Inter-comparison Project (DCMIP-2012)
** National Climate Predictions and Projections Platforms (NCPP) Quantitative Evaluation of Downscaling Workshop (QED-2013)
This presentation will introduce the project to a wider audience and will demonstrate the current production-level capabilities of the eco-system:
** An ESM documentation Viewer embeddable into any website
** An ESM Questionnaire configurable on a project-by-project basis
** An ESM comparison tool reusable across projects
** An ESM visualization tool reusable across projects
** A search engine for speedily accessing published documentation
** Libraries for streamlining document creation, validation and publishing pipelines

  5. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata.

    PubMed

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-16

Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge on the Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included.

  6. Virtual Environments for Visualizing Structural Health Monitoring Sensor Networks, Data, and Metadata

    PubMed Central

    Napolitano, Rebecca; Blyth, Anna; Glisic, Branko

    2018-01-01

Visualization of sensor networks, data, and metadata is becoming one of the most pivotal aspects of the structural health monitoring (SHM) process. Without the ability to communicate efficiently and effectively between disparate groups working on a project, an SHM system can be underused, misunderstood, or even abandoned. For this reason, this work seeks to evaluate visualization techniques in the field, identify flaws in current practices, and devise a new method for visualizing and accessing SHM data and metadata in 3D. More precisely, the work presented here reflects a method and digital workflow for integrating SHM sensor networks, data, and metadata into a virtual reality environment by combining spherical imaging and informational modeling. Both intuitive and interactive, this method fosters communication on a project, enabling diverse practitioners of SHM to efficiently consult and use the sensor networks, data, and metadata. The method is presented through its implementation on a case study, Streicker Bridge on the Princeton University campus. To illustrate the efficiency of the new method, the time and data file size were compared to other potential methods used for visualizing and accessing SHM sensor networks, data, and metadata in 3D. Additionally, feedback from civil engineering students familiar with SHM is used for validation. Recommendations on how different groups working together on an SHM project can create an SHM virtual environment and convey data to the proper audiences are also included. PMID:29337877

  7. The Rise of Data in Education Systems: Collection, Visualization and Use

    ERIC Educational Resources Information Center

    Lawn, Martin, Ed.

    2013-01-01

    The growth of education systems and the construction of the state have always been connected. The processes of governing education systems always utilized data through a range of administrative records, pupil testing, efficiency surveys and international projects. By the late twentieth century, quantitative data had gained enormous influence in…

  8. Target dependence of orientation and direction selectivity of corticocortical projection neurons in the mouse V1

    PubMed Central

    Matsui, Teppei; Ohki, Kenichi

    2013-01-01

Higher order visual areas that receive input from the primary visual cortex (V1) are specialized for the processing of distinct features of visual information. However, it is still incompletely understood how this functional specialization is acquired. Here we used in vivo two-photon calcium imaging in the mouse visual cortex to investigate whether this functional distinction exists as early as the level of projections from V1 to two higher order visual areas, AL and LM. Specifically, we examined whether the sharpness of orientation and direction selectivity and the optimal spatial and temporal frequency of projection neurons from V1 to higher order visual areas match those of the target areas. We found that the V1 inputs to higher order visual areas were indeed functionally distinct: AL preferentially received inputs from V1 that were more orientation and direction selective and tuned for lower spatial frequency compared to the projection from V1 to LM, consistent with functional differences between AL and LM. The present findings suggest that selective projections from V1 to higher order visual areas initiate parallel processing of sensory information in the visual cortical network. PMID:24068987

  9. Milford Visual Communications Project.

    ERIC Educational Resources Information Center

    Milford Exempted Village Schools, OH.

    This study discusses a visual communications project designed to develop activities to promote visual literacy at the elementary and secondary school levels. The project has four phases: (1) perception of basic forms in the environment, what these forms represent, and how they inter-relate; (2) discovery and communication of more complex…

  10. Neural nets for aligning optical components in harsh environments: Beam smoothing spatial filter as an example

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Krasowski, Michael J.

    1991-01-01

    The goal is to develop an approach to automating the alignment and adjustment of optical measurement, visualization, inspection, and control systems. Classical controls, expert systems, and neural networks are three approaches to automating the alignment of an optical system. Neural networks were chosen for this project and the judgements that led to this decision are presented. Neural networks were used to automate the alignment of the ubiquitous laser-beam-smoothing spatial filter. The results and future plans of the project are presented.

  11. Building Opportunities for Environmental Education Through Student Development of Cyberinfrastructure

    NASA Astrophysics Data System (ADS)

    Moysey, S. M.; Boyer, D. M.; Mobley, C.; Byrd, V. L.

    2014-12-01

It is increasingly common to utilize simulations and games in the classroom, but learning opportunities can also be created by having students construct these cyberinfrastructure resources themselves. We outline two examples of such projects completed during the summer of 2014 within the NSF ACI sponsored REU Site: Research Experiences for Undergraduates in Collaborative Data Visualization Applications at Clemson University (Award 1359223). The first project focused on the development of immersive virtual reality field trips of geologic sites using the Oculus Rift headset. This project developed a platform that allows users to navigate virtual terrains derived from real-world data obtained from the US Geological Survey and Google Earth. The system provides users with the ability to partake in an interactive first-person exploration of a region, such as the Grand Canyon, and thus makes an important educational contribution for students without access to these environmental assets in the real world. The second project focused on providing players with visual feedback about the sustainability of their practices within the web-based, multiplayer watershed management game Naranpur Online. Identifying sustainability indicators that communicate meaningful information to players and finding an effective way to visualize these data were the primary challenges faced by the student researcher working on this project. To solve this problem the student translated findings from the literature to the context of the game to develop a hierarchical set of relative sustainability criteria to be accessed by players within a sustainability dashboard. Though the REU focused on visualization, both projects forced the students to transform their thinking to address higher-level questions regarding the utilization and communication of environmental data or concepts, thus enhancing the educational experience for themselves and future students.

  12. Magnetic tape

    NASA Technical Reports Server (NTRS)

    Robinson, Harriss

    1992-01-01

    The move to visualization and image processing in data systems is increasing the demand for larger and faster mass storage systems. The technology of choice is magnetic tape. This paper briefly reviews the technology's past, present, and projected future. A case is made for standards and their value to users.

  13. Refractive errors and cataract as causes of visual impairment in Brazil.

    PubMed

    Eduardo Leite Arieta, Carlos; Nicolini Delgado, Alzira Maria; José, Newton Kara; Temporini, Edméia Rita; Alves, Milton Ruiz; de Carvalho Moreira Filho, Djalma

    2003-02-01

    To identify the main causes of visual impairment (VA…

  14. Spherical Coordinate Systems for Streamlining Suited Mobility Analysis

    NASA Technical Reports Server (NTRS)

    Benson, Elizabeth; Cowley, Matthew S.; Harvill, Lauren; Rajulu, Sudhakar

    2014-01-01

    When describing human motion, biomechanists generally report joint angles in terms of Euler angle rotation sequences. However, there are known limitations in using this method to describe complex motions such as the shoulder joint during a baseball pitch. Euler angle notation uses a series of three rotations about an axis where each rotation is dependent upon the preceding rotation. As such, the Euler angles need to be regarded as a set to get accurate angle information. Unfortunately, it is often difficult to visualize and understand these complex motion representations. One of our key functions is to help design engineers understand how a human will perform with new designs, and all too often the traditional use of Euler rotations becomes as much of a hindrance as a help. It is believed that using a spherical coordinate system will allow Anthropometry and Biomechanics Facility (ABF) personnel to more quickly and easily transmit important mobility data to engineers, in a format that is readily understandable and directly translatable to their design efforts. Objectives: The goal of this project is to establish new analysis and visualization techniques to aid in the examination and comprehension of complex motions. Methods: This project consisted of a series of small sub-projects, meant to validate and verify the method before it was implemented in the ABF's data analysis practices. The first stage was a proof of concept, where a mechanical test rig was built and instrumented with an inclinometer, so that its angle from horizontal was known. The test rig was tracked in 3D using an optical motion capture system, and its position and orientation were reported in both Euler and spherical reference systems. The rig was meant to simulate flexion/extension, transverse rotation and abduction/adduction of the human shoulder, but without the variability inherent in human motion.
In the second phase of the project, the ABF estimated the error inherent in a spherical coordinate system, and evaluated how this error would vary within the reference frame. This stage also involved expanding a kinematic model of the shoulder, to include the torso, knees, ankle, elbows, wrists and neck. Part of this update included adding a representation of 'roll' about an axis, for upper arm and lower leg rotations. The third stage of the project involved creating visualization methods to assist in interpreting motion in a spherical frame. This visualization method will be incorporated in a tool to evaluate a database of suited mobility data, which is currently in development.
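    As a sketch of why a spherical representation can be easier to interpret than an Euler sequence, a limb's pointing direction can be reported directly as azimuth and elevation. The conventions below (azimuth from +x in the x-y plane, elevation from that plane toward +z) are one common choice, not necessarily the ABF's:

```python
import math

def to_spherical(x, y, z):
    """Convert a Cartesian pointing vector (assumed nonzero) to spherical
    coordinates. Returns (r, azimuth, elevation) in radians: azimuth is
    measured in the x-y plane from +x, elevation from that plane toward +z.
    """
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)
    elevation = math.asin(z / r)
    return r, azimuth, elevation

# An arm pointing straight up maps directly to 90 degrees of elevation,
# with no dependence on the order of prior rotations.
r, az, el = to_spherical(0.0, 0.0, 1.0)
print(round(math.degrees(el), 1))  # 90.0
```

    Unlike an Euler triple, each of these two angles is meaningful on its own, which is the property that makes the format easy to communicate to design engineers.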

  15. Computational and fMRI Studies of Visualization

    DTIC Science & Technology

    2009-03-31

    spatial thinking in high level cognition, such as in problem-solving and reasoning. In conjunction with the experimental work, the project developed a...computational modeling system (4CAPS) as well as the development of 4CAPS models for particular tasks. The cognitive level of 4CAPS accounts for...neuroarchitecture to interpret and predict the brain activation in a network of cortical areas that underpin the performance of a visual thinking task. The

  16. Scientific Visualization & Modeling for Earth Systems Science Education

    NASA Technical Reports Server (NTRS)

    Chaudhury, S. Raj; Rodriguez, Waldo J.

    2003-01-01

    Providing research experiences for undergraduate students in Earth Systems Science (ESS) poses several challenges at smaller academic institutions that might lack dedicated resources for this area of study. This paper describes the development of an innovative model that involves students with majors in diverse scientific disciplines in authentic ESS research. In studying global climate change, experts typically use scientific visualization techniques applied to remote sensing data collected by satellites. In particular, many problems related to environmental phenomena can be quantitatively addressed by investigations based on datasets related to scientific endeavours such as the Earth Radiation Budget Experiment (ERBE). Working with data products stored at NASA's Distributed Active Archive Centers, visualization software specifically designed for students and an advanced, immersive Virtual Reality (VR) environment, students engage in guided research projects during a structured 6-week summer program. Over the 5-year span, this program has afforded the opportunity for students majoring in biology, chemistry, mathematics, computer science, physics, engineering and science education to work collaboratively in teams on research projects that emphasize the use of scientific visualization in studying the environment. Recently, a hands-on component has been added through science student partnerships with schoolteachers in data collection and reporting for the GLOBE Program (Global Learning and Observations to Benefit the Environment).

  17. Neural Systems Involved in Fear and Anxiety Measured with Fear-Potentiated Startle

    ERIC Educational Resources Information Center

    Davis, Michael

    2006-01-01

    A good deal is now known about the neural circuitry involved in how conditioned fear can augment a simple reflex (fear-potentiated startle). This involves visual or auditory as well as shock pathways that project via the thalamus and perirhinal or insular cortex to the basolateral amygdala (BLA). The BLA projects to the central (CeA) and medial…

  18. CasCADe: A Novel 4D Visualization System for Virtual Construction Planning.

    PubMed

    Ivson, Paulo; Nascimento, Daniel; Celes, Waldemar; Barbosa, Simone Dj

    2018-01-01

    Building Information Modeling (BIM) provides an integrated 3D environment to manage large-scale engineering projects. The Architecture, Engineering and Construction (AEC) industry explores 4D visualizations over these datasets for virtual construction planning. However, existing solutions lack adequate visual mechanisms to inspect the underlying schedule and make inconsistencies readily apparent. The goal of this paper is to apply best practices of information visualization to improve 4D analysis of construction plans. We first present a review of previous work that identifies common use cases and limitations. We then consulted with AEC professionals to specify the main design requirements for such applications. These guided the development of CasCADe, a novel 4D visualization system where task sequencing and spatio-temporal simultaneity are immediately apparent. This unique framework enables the combination of diverse analytical features to create an information-rich analysis environment. We also describe how engineering collaborators used CasCADe to review the real-world construction plans of an Oil & Gas process plant. The system made evident schedule uncertainties, identified work-space conflicts and helped analyze other constructability issues. The results and contributions of this paper suggest new avenues for future research in information visualization for the AEC industry.

  19. A Lyapunov Function Based Remedial Action Screening Tool Using Real-Time Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitra, Joydeep; Ben-Idris, Mohammed; Faruque, Omar

    This report summarizes the outcome of a research project that comprised the development of a Lyapunov function based remedial action screening tool using real-time data (L-RAS). The L-RAS is an advanced computational tool that is intended to assist system operators in making real-time redispatch decisions to preserve power grid stability. The tool relies on screening contingencies using a homotopy method based on Lyapunov functions to avoid, to the extent possible, the use of time domain simulations. This enables transient stability evaluation at real-time speed without the use of massively parallel computational resources. The project combined the following components. 1. Development of a methodology for contingency screening using a homotopy method based on Lyapunov functions and real-time data. 2. Development of a methodology for recommending remedial actions based on the screening results. 3. Development of a visualization and operator interaction interface. 4. Testing of screening tool, validation of control actions, and demonstration of project outcomes on a representative real system simulated on a Real-Time Digital Simulator (RTDS) cluster. The project was led by Michigan State University (MSU), where the theoretical models including homotopy-based screening, trajectory correction using real-time data, and remedial action were developed and implemented in the form of research-grade software. Los Alamos National Laboratory (LANL) contributed to the development of energy margin sensitivity dynamics, which constituted a part of the remedial action portfolio. Florida State University (FSU) and Southern California Edison (SCE) developed a model of the SCE system that was implemented on FSU's RTDS cluster to simulate real-time data that was streamed over the internet to MSU, where the L-RAS tool was executed and remedial actions were communicated back to FSU to execute stabilizing controls on the simulated system.
    LCG Consulting developed the visualization and operator interaction interface, based on specifications provided by MSU. The project was performed from October 2012 to December 2016, at the end of which the L-RAS tool, as described above, was completed and demonstrated. The project resulted in the following innovations and contributions: (a) the L-RAS software prototype, tested on a simulated system, vetted by utility personnel, and potentially ready for wider testing and commercialization; (b) an RTDS-based test bed that can be used for future research in the field; (c) a suite of breakthrough theoretical contributions to the field of power system stability and control; and (d) a new tool for visualization of power system stability margins. While detailed descriptions of the development and implementation of the various project components have been provided in the quarterly reports, this final report provides an overview of the complete project, and is demonstrated using public domain test systems commonly used in the literature. The SCE system, and demonstrations thereon, are not included in this report due to Critical Energy Infrastructure Information (CEII) restrictions.
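    The L-RAS homotopy screening itself is far more elaborate, but the underlying energy-function idea can be illustrated on the textbook single-machine infinite-bus (SMIB) model: a contingency is screened as stable when the system's total transient energy at fault clearing stays below the critical (potential) energy at the unstable equilibrium. All parameter values below are illustrative, not from the project:

```python
import math

def energy_margin(delta_c, omega_c, p_m=0.8, p_max=2.0, m=0.1):
    """Transient energy margin for a single-machine infinite-bus system.

    delta_c, omega_c: rotor angle (rad) and speed deviation at fault clearing.
    A positive margin screens the contingency as stable without time-domain
    simulation; a negative margin flags it for detailed analysis.
    """
    delta_s = math.asin(p_m / p_max)   # stable equilibrium angle
    delta_u = math.pi - delta_s        # unstable equilibrium angle

    def v_pe(d):
        # Potential energy relative to the stable equilibrium, from the
        # swing equation M*dd(delta) = Pm - Pmax*sin(delta).
        return -p_m * (d - delta_s) - p_max * (math.cos(d) - math.cos(delta_s))

    v_crit = v_pe(delta_u)                            # critical energy
    v_total = 0.5 * m * omega_c ** 2 + v_pe(delta_c)  # kinetic + potential
    return v_crit - v_total
```

    Screening with an analytic margin like this is what lets such tools rank many contingencies at real-time speed, reserving full simulation for the few cases near or below zero margin.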

  20. Computer interfaces for the visually impaired

    NASA Technical Reports Server (NTRS)

    Higgins, Gerry

    1991-01-01

    Information access via computer terminals extends to blind and low vision persons employed in many technical and nontechnical disciplines. Two aspects of providing computer technology for persons with a vision-related handicap are detailed. First, research was conducted into the most effective means of integrating existing adaptive technologies into information systems, with the goal of combining off-the-shelf products with adaptive equipment into cohesive integrated information processing systems. Details are included that describe the type of functionality required in software to facilitate its incorporation into a speech and/or braille system. The second aspect is research into providing audible and tactile interfaces to graphics based interfaces. Parameters are included for the design and development of the Mercator Project. The project will develop a prototype system for audible access to graphics based interfaces. The system is being built within the public domain architecture of X windows to show that it is possible to provide access to text based applications within a graphical environment. This information will be valuable to suppliers of ADP equipment, since new legislation requires manufacturers to provide electronic access to the visually impaired.

  1. Quantitative simulation of extraterrestrial engineering devices

    NASA Technical Reports Server (NTRS)

    Arabyan, A.; Nikravesh, P. E.; Vincent, T. L.

    1991-01-01

    This is a multicomponent, multidisciplinary project whose overall objective is to build an integrated database, simulation, visualization, and optimization system for the proposed oxygen manufacturing plant on Mars. Specifically, the system allows users to enter physical description, engineering, and connectivity data through a uniform, user-friendly interface and stores the data in formats compatible with other software also developed as part of this project. These latter components include: (1) programs to simulate the behavior of various parts of the plant in Martian conditions; (2) an animation program which, in different modes, provides visual feedback to designers and researchers about the location of and temperature distribution among components as well as heat, mass, and data flow through the plant as it operates in different scenarios; (3) a control program to investigate the stability and response of the system under different disturbance conditions; and (4) an optimization program to maximize or minimize various criteria as the system evolves into its final design. All components of the system are interconnected so that changes entered through one component are reflected in the others.

  2. Development of Communication Technology in Japan: The Hi-OVIS Project.

    ERIC Educational Resources Information Center

    Murata, Toshihiko

    1981-01-01

    Describes the two-way Highly Interactive Optical Visual Information System (Hi-OVIS), involving the transmission and reception of educational, advertising, and public service programming, which has been in experimental use in Japan since 1978. Utilizing fiber optics, the system equips each house with a keyboard, television, television camera, and…

  3. The Newsroom to the Year 2001.

    ERIC Educational Resources Information Center

    Keirstead, Phillip O.

    Projections of a possible scenario for a television broadcast newsroom in 2001 include a nearly completely computerized system, one which will write scripts, select and create graphics, organize newscasts and visuals, keep records, do research, and manage the newsroom from terminals. This computer system will generate many more newscasts…

  4. Visual detection of driving while intoxicated. Project interim report : identification of visual cues and development of detection methods

    DOT National Transportation Integrated Search

    1979-01-01

    The report describes the initial phase of a two-phase project on the visual, on-the-road detection of driving while intoxicated (DWI). The purpose of the overall project is to develop and test procedures for enhancing on-the-road detection of DWI. Th...

  5. 3D Geovisualization & Stylization to Manage Comprehensive and Participative Local Urban Plans

    NASA Astrophysics Data System (ADS)

    Brasebin, M.; Christophe, S.; Jacquinod, F.; Vinesse, A.; Mahon, H.

    2016-10-01

    3D geo-visualization is increasingly used and appreciated to support public participation, and is generally used to present predesigned planned projects. Nevertheless, other participatory processes may benefit from such technology, such as the elaboration of urban planning documents. In this article, we present one of the objectives of the PLU++ project: the design of a 3D geo-visualization system that eases participation concerning local urban plans. Through a pluridisciplinary approach, it aims at covering the different aspects of such a system: the simulation of built configurations to represent regulation information, the efficient stylization of these objects to make people understand their meanings, and the interaction between 3D simulation and stylization. The system aims at being adaptive according to the participation context and to the dynamics of the participation. It will offer the possibility to modify simulation results and the rendering styles of the 3D representations to support participation. The proposed 3D rendering styles will be used in a set of practical experiments in order to test and validate hypotheses from past research of the project members about 3D simulation, 3D semiotics and knowledge about uses.

  6. The NAS Computational Aerosciences Archive

    NASA Technical Reports Server (NTRS)

    Miceli, Kristina D.; Globus, Al; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    In order to further the state-of-the-art in computational aerosciences (CAS) technology, researchers must be able to gather and understand existing work in the field. One aspect of this information gathering is studying published work available in scientific journals and conference proceedings. However, current scientific publications are very limited in the type and amount of information that they can disseminate. Information is typically restricted to text, a few images, and a bibliography list. Additional information that might be useful to the researcher, such as additional visual results, referenced papers, and datasets, are not available. New forms of electronic publication, such as the World Wide Web (WWW), limit publication size only by available disk space and data transmission bandwidth, both of which are improving rapidly. The Numerical Aerodynamic Simulation (NAS) Systems Division at NASA Ames Research Center is in the process of creating an archive of CAS information on the WWW. This archive will be based on the large amount of information produced by researchers associated with the NAS facility. The archive will contain technical summaries and reports of research performed on NAS supercomputers, visual results (images, animations, visualization system scripts), datasets, and any other supporting meta-information. This information will be available via the WWW through the NAS homepage, located at http://www.nas.nasa.gov/, fully indexed for searching. The main components of the archive are technical summaries and reports, visual results, and datasets. Technical summaries are gathered every year by researchers who have been allotted resources on NAS supercomputers. These summaries, together with supporting visual results and references, are browsable by interested researchers. Referenced papers made available by researchers can be accessed through hypertext links. 
    Technical reports are in-depth accounts of tools and applications research projects performed by NAS staff members and collaborators. Visual results, which may be available in the form of images, animations, and/or visualization scripts, are generated by researchers with respect to a certain research project, depicting dataset features that were determined important by the investigating researcher. For example, script files for visualization systems (e.g. FAST, PLOT3D, AVS) are provided to create visualizations on the user's local workstation to elucidate the key points of the numerical study. Users can then interact with the data starting where the investigator left off. Datasets are intended to give researchers an opportunity to understand previous work, 'mine' solutions for new information (for example, have you ever read a paper thinking "I wonder what the helicity density looks like?"), compare new techniques with older results, collaborate with remote colleagues, and perform validation. Supporting meta-information associated with the research projects is also important to provide additional context. This may include information such as the software used in the simulation (e.g. grid generators, flow solvers, visualization). In addition to serving the CAS research community, the information archive will also be helpful to students, visualization system developers and researchers, and management. Students (of any age) can use the data to study fluid dynamics, compare results from different flow solvers, learn about meshing techniques, etc., leading to better informed individuals. For these users it is particularly important that visualization be integrated into dataset archives. Visualization researchers can use dataset archives to test algorithms and techniques, leading to better visualization systems. Management can use the data to figure out what is really going on behind the viewgraphs.
All users will benefit from fast, easy, and convenient access to CFD datasets. The CAS information archive hopes to serve as a useful resource to those interested in computational sciences. At present, only information that may be distributed internationally is made available via the archive. Studies are underway to determine security requirements and solutions to make additional information available. By providing access to the archive via the WWW, the process of information gathering can be more productive and fruitful due to ease of access and ability to manage many different types of information. As the archive grows, additional resources from outside NAS will be added, providing a dynamic source of research results.

  7. The personal receiving document management and the realization of email function in OAS

    NASA Astrophysics Data System (ADS)

    Li, Biqing; Li, Zhao

    2017-05-01

    This software is an independent system developed using the current popular B/S (browser/server) structure and ASP.NET technology, with the Windows 7 operating system, Microsoft SQL Server 2005 as the database, and Visual Studio 2008 as the development platform. It is suitable for small and medium enterprises, contains personal office, scientific research project management, and system management functions, runs independently in the relevant environment, and addresses practical needs.

  8. Feed-forward segmentation of figure-ground and assignment of border-ownership.

    PubMed

    Supèr, Hans; Romeo, August; Keil, Matthias

    2010-05-19

    Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections between surrounding neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. In contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment.
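    The authors' spiking model is not reproduced here, but the core mechanism, feedforward surround inhibition that silences large uniform regions while letting a small figure survive, can be sketched with a rate-based toy model. The grid size, surround radius, and inhibition weight below are arbitrary choices, not the paper's parameters:

```python
def surround_suppression(img, radius=6, weight=1.0):
    """Each active unit's response is its input minus the mean activity in a
    square surround: a crude, non-spiking stand-in for surround inhibition."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if img[i][j] == 0:
                continue
            total, n = 0.0, 0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    if di == 0 and dj == 0:
                        continue
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        total += img[ii][jj]
                        n += 1
            out[i][j] = max(0.0, img[i][j] - weight * total / n)
    return out

# A small 5x5 figure on a 21x21 ground keeps most of its activity, whereas
# a full-field stimulus suppresses itself entirely: a rough analogue of
# feedforward figure-ground pop-out.
figure = [[0] * 21 for _ in range(21)]
for i in range(8, 13):
    for j in range(8, 13):
        figure[i][j] = 1
print(round(surround_suppression(figure)[10][10], 2))  # 0.86
```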

  9. Feed-Forward Segmentation of Figure-Ground and Assignment of Border-Ownership

    PubMed Central

    Supèr, Hans; Romeo, August; Keil, Matthias

    2010-01-01

    Figure-ground is the segmentation of visual information into objects and their surrounding backgrounds. Two main processes herein are boundary assignment and surface segregation, which rely on the integration of global scene information. Recurrent processing, either by intrinsic horizontal connections between surrounding neurons or by feedback projections from higher visual areas, provides such information and is considered to be the neural substrate for figure-ground segmentation. In contrast, the role of feedforward projections in figure-ground segmentation is unknown. To better understand the role of feedforward connections in figure-ground organization, we constructed a feedforward spiking model using a biologically plausible neuron model. By means of surround inhibition, our simple 3-layered model performs figure-ground segmentation and one-sided border-ownership coding. We propose that the visual system uses feedforward suppression for figure-ground segmentation and border-ownership assignment. PMID:20502718

  10. Image Information Mining Utilizing Hierarchical Segmentation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai

    2002-01-01

    The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system through incorporating hierarchical segmentations from HSEG into the VisiMine system.
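    HSEG itself also merges spatially non-adjacent regions and works on full multispectral images, but the core hierarchical region-merging idea can be sketched on a 1-D signal. The merge criterion here, absolute difference of region means, is a placeholder, not HSEG's actual dissimilarity measure:

```python
def hierarchical_segmentation(values):
    """Greedily fuse the two adjacent regions with the most similar means,
    recording the partition after each merge to form a segmentation hierarchy."""
    regions = [[v] for v in values]
    hierarchy = []
    while len(regions) > 1:
        best, best_d = 0, float("inf")
        for k in range(len(regions) - 1):
            a = sum(regions[k]) / len(regions[k])
            b = sum(regions[k + 1]) / len(regions[k + 1])
            if abs(a - b) < best_d:
                best, best_d = k, abs(a - b)
        regions[best] = regions[best] + regions.pop(best + 1)
        hierarchy.append([list(r) for r in regions])
    return hierarchy

# Coarser levels of the hierarchy group similar pixels together.
print(hierarchical_segmentation([1, 1, 9, 9])[1])  # [[1, 1], [9, 9]]
```

    An information-mining system can then pick whichever level of the hierarchy matches the granularity a query needs, which is the role such segmentations play in VisiMine.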

  11. Creating Visual Materials for Multi-Handicapped Deaf Learners.

    ERIC Educational Resources Information Center

    Hack, Carole; Brosmith, Susan

    1980-01-01

    The article describes two groups of visual materials developed for multiply handicapped deaf teenagers. The daily living skills project includes vocabulary lists, visuals, games and a model related to household cleaning, personal grooming, or consumer skills. The occupational information project includes visuals of tools, materials, and clothing…

  12. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data.

    PubMed

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-10-15

    Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.

  13. Neuroanatomical affiliation visualization-interface system.

    PubMed

    Palombi, Olivier; Shin, Jae-Won; Watson, Charles; Paxinos, George

    2006-01-01

    A number of knowledge management systems have been developed to allow users to have access to large quantities of neuroanatomical data. The advent of three-dimensional (3D) visualization techniques allows users to interact with complex 3D objects. In order to better understand the structural and functional organization of the brain, we present the Neuroanatomical Affiliations Visualization-Interface System (NAVIS) as original software for viewing brain structures and neuroanatomical affiliations in 3D. This version of NAVIS has made use of the fifth edition of "The Rat Brain in Stereotaxic Coordinates" (Paxinos and Watson, 2005). The NAVIS development environment was based on the scripting language Python, using the Visualization Toolkit (VTK) as the 3D library and wxPython for the graphical user interface. The following manuscript focuses on the nucleus of the solitary tract (Sol) and the set of affiliated structures in the brain to illustrate the functionality of NAVIS. The Sol is the primary relay center of visceral and taste information, and consists of 14 distinct subnuclei that differ in cytoarchitecture, chemoarchitecture, connections, and function. In the present study, neuroanatomical projection data of the rat Sol were collected from selected literature in PubMed since 1975. Forty-nine identified projection data of Sol were inserted in NAVIS. The standard XML format used as an input for affiliation data allows NAVIS to update data online and/or allows users to manually change or update affiliation data. NAVIS can be extended to nuclei other than Sol.
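    The appeal of an XML input format is that affiliation records can be edited or exchanged without touching the software. The actual NAVIS schema is not published in the abstract, so the element and attribute names below are invented for illustration only, parsed with Python's standard xml.etree module:

```python
import xml.etree.ElementTree as ET

# Hypothetical affiliation record for the Sol; the tag and attribute
# names are illustrative, not the real NAVIS schema.
xml_data = """
<affiliations structure="Sol">
  <projection target="PBN" type="efferent"/>
  <projection target="Amb" type="efferent"/>
</affiliations>
"""

root = ET.fromstring(xml_data)
targets = [p.get("target") for p in root.findall("projection")]
print(root.get("structure"), targets)  # Sol ['PBN', 'Amb']
```

    Because the format is declarative, updating the affiliation database online or by hand amounts to editing such records, which matches the extensibility the authors describe.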

  14. Research review for information management

    NASA Technical Reports Server (NTRS)

    Bishop, Peter C.

    1988-01-01

    The goal of RICIS research in information management is to apply currently available technology to existing problems in information management. Research projects include the following: the Space Business Research Center (SBRC), the Management Information and Decision Support Environment (MIDSE), and the investigation of visual interface technology. Several additional projects issued reports. New projects include the following: (1) the AdaNET project to develop a technology transfer network for software engineering and the Ada programming language; and (2) work on designing a communication system for the Space Station Project Office at JSC. The central aim of all projects is to use information technology to help people work more productively.

  15. Overview of ICE Project: Integration of Computational Fluid Dynamics and Experiments

    NASA Technical Reports Server (NTRS)

    Stegeman, James D.; Blech, Richard A.; Babrauckas, Theresa L.; Jones, William H.

    2001-01-01

    Researchers at the NASA Glenn Research Center have developed a prototype integrated environment for interactively exploring, analyzing, and validating information from computational fluid dynamics (CFD) computations and experiments. The Integrated CFD and Experiments (ICE) project is a first attempt at providing a researcher with a common user interface for control, manipulation, analysis, and data storage for both experiments and simulation. ICE can be used as a live, on-line system that displays and archives data as they are gathered; as a postprocessing system for dataset manipulation and analysis; and as a control interface or "steering mechanism" for simulation codes while visualizing the results. Although the full capabilities of ICE have not been completely demonstrated, this report documents the current system. Various applications of ICE are discussed: a low-speed compressor, a supersonic inlet, real-time data visualization, and a parallel-processing simulation code interface. A detailed data model for the compressor application is included in the appendix.

  16. Segregation of feedforward and feedback projections in mouse visual cortex

    PubMed Central

    Berezovskii, Vladimir K.; Nassi, Jonathan J.; Born, Richard T.

    2011-01-01

    Hierarchical organization is a common feature of mammalian neocortex. Neurons that send their axons from lower to higher areas of the hierarchy are referred to as “feedforward” (FF) neurons, whereas those projecting in the opposite direction are called “feedback” (FB) neurons. Anatomical, functional and theoretical studies suggest that these different classes of projections play fundamentally different roles in perception. In primates, laminar differences in projection patterns often distinguish the two projection streams. In rodents, however, these differences are less clear, despite an established hierarchy of visual areas. Thus the rodent provides a strong test of the hypothesis that FF and FB neurons form distinct populations. We tested this hypothesis by injecting retrograde tracers into two different hierarchical levels of mouse visual cortex (areas 17 and AL) and then determining the relative proportions of double-labeled FB and FF neurons in an area intermediate to them (LM). Despite finding singly labeled neurons densely intermingled with no laminar segregation, we found few double-labeled neurons (~5% of each singly labeled population). We also examined the development of FF and FB connections. FF connections were present at the earliest time-point we examined (postnatal day two, P2), while FB connections were not detectable until P11. Our findings indicate that, even in cortices without laminar segregation of FF and FB neurons, the two projection systems are largely distinct at the neuronal level and also differ with respect to the timing of their outgrowth. PMID:21618232

  17. Data Mining and Analysis

    NASA Technical Reports Server (NTRS)

    Samms, Kevin O.

    2015-01-01

    The Data Mining project seeks to bring the capability of data visualization to NASA anomaly and problem reporting systems for the purpose of improving data trending, evaluations, and analyses. Currently NASA systems are tailored to meet the specific needs of its organizations. This tailoring has led to a variety of nomenclatures and levels of annotation for procedures, parts, and anomalies, making it difficult to recognize common causes of anomalies. Without a common way to view large data sets, making significant observations and connecting these causes is difficult, if not impossible. In the first phase of the Data Mining project a portal was created to present a common visualization of normalized sensitive data to customers with the appropriate security access. The visualization tool itself was also developed and fine-tuned. In the second phase of the project we took on the difficult task of searching and analyzing the target data set for common causes between anomalies. In the final part of the second phase we learned more about how much of the analysis work will be the job of the Data Mining team, how to perform that work, and how that work may be used by different customers in different ways. In this paper I detail how our perspective has changed after gaining more insight into how the customers wish to interact with the output and how that has changed the product.

  18. Visualization in aerospace research with a large wall display system

    NASA Astrophysics Data System (ADS)

    Matsuo, Yuichi

    2002-05-01

    The National Aerospace Laboratory of Japan has built a large-scale visualization system with a large wall-type display. The system has been operational since April 2001 and comprises a 4.6x1.5-meter (15x5-foot) rear projection screen with 3 BARCO 812 high-resolution CRT projectors. We adopted the 3-gun CRT projectors for their support for stereoscopic viewing, ease of color/luminosity matching, and accuracy of edge-blending. The system is driven by a new SGI Onyx 3400 server of distributed shared-memory architecture with 32 CPUs, 64 GBytes of memory, a 1.5-TByte FC RAID disk, and 6 IR3 graphics pipelines. Software is another important issue in making full use of the system. We have introduced applications that support a multi-projector environment, such as AVS/MPE, EnSight Gold, and COVISE, and have been developing software tools that create volumetric images using SGI graphics libraries. The system is mainly used for visualization of computational fluid dynamics (CFD) simulations in aerospace research. Visualized CFD results help us design improved configurations of aerospace vehicles and analyze their aerodynamic performance. These days we also use it for various collaborations among researchers.

  19. Abnormal white matter tractography of visual pathways detected by high-angular-resolution diffusion imaging (HARDI) corresponds to visual dysfunction in cortical/cerebral visual impairment

    PubMed Central

    Bauer, Corinna M.; Heidary, Gena; Koo, Bang-Bon; Killiany, Ronald J.; Bex, Peter; Merabet, Lotfi B.

    2014-01-01

    Cortical (cerebral) visual impairment (CVI) is characterized by visual dysfunction associated with damage to the optic radiations and/or visual cortex. Typically it results from pre- or perinatal hypoxic damage to postchiasmal visual structures and pathways. The neuroanatomical basis of this condition remains poorly understood, particularly with regard to how the resulting maldevelopment of visual processing pathways relates to observations in the clinical setting. We report our investigation of 2 young adults diagnosed with CVI and visual dysfunction characterized by difficulties related to visually guided attention and visuospatial processing. Using high-angular-resolution diffusion imaging (HARDI), we characterized and compared their individual white matter projections of the extrageniculo-striate visual system with a normal-sighted control. Compared to a sighted control, both CVI cases revealed a striking reduction in association fibers, including the inferior frontal-occipital fasciculus as well as superior and inferior longitudinal fasciculi. This reduction in fibers associated with the major pathways implicated in visual processing may provide a neuroanatomical basis for the visual dysfunctions observed in these patients. PMID:25087644

  20. Traditional Project Management and the Visual Workplace Environment to Improve Project Success

    ERIC Educational Resources Information Center

    Fichera, Christopher E.

    2016-01-01

    A majority of large IT projects fail to meet scheduled deadlines, are over budget and do not satisfy the end user. Many projects fail in spite of utilizing traditional project management techniques. Research of project management has not identified the use of a visual workspace as a feature affecting or influencing the success of a project during…

  1. Slycat™ User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crossno, Patricia J.; Gittinger, Jaxon; Hunt, Warren L.

    Slycat™ is a web-based system for performing data analysis and visualization of potentially large quantities of remote, high-dimensional data. Slycat™ specializes in working with ensemble data. An ensemble is a group of related data sets, which typically consists of a set of simulation runs exploring the same problem space. An ensemble can be thought of as a set of samples within a multi-variate domain, where each sample is a vector whose value defines a point in high-dimensional space. To understand and describe the underlying problem being modeled in the simulations, ensemble analysis looks for shared behaviors and common features across the group of runs. Additionally, ensemble analysis tries to quantify differences found in any members that deviate from the rest of the group. The Slycat™ system integrates data management, scalable analysis, and visualization. Results are viewed remotely on a user’s desktop via commodity web clients using a multi-tiered hierarchy of computation and data storage, as shown in Figure 1. Our goal is to operate on data as close to the source as possible, thereby reducing time and storage costs associated with data movement. Consequently, we are working to develop parallel analysis capabilities that operate on High Performance Computing (HPC) platforms, to explore approaches for reducing data size, and to implement strategies for staging computation across the Slycat™ hierarchy. Within Slycat™, data and visual analysis are organized around projects, which are shared by a project team. Project members are explicitly added, each with a designated set of permissions. Although users sign in to access Slycat™, individual accounts are not maintained. Instead, authentication is used to determine project access. Within projects, Slycat™ models capture analysis results and enable data exploration through various visual representations.
Although for scientists each simulation run is a model of real-world phenomena given certain conditions, we use the term model to refer to our modeling of the ensemble data, not the physics. Different model types often provide complementary perspectives on data features when analyzing the same data set. Each model visualizes data at several levels of abstraction, allowing the user to range from viewing the ensemble holistically to accessing numeric parameter values for a single run. Bookmarks provide a mechanism for sharing results, enabling interesting model states to be labeled and saved.
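    The ensemble view described above (each run a vector of parameter values in a shared multi-variate domain, with analysis quantifying members that deviate from the group) can be illustrated with a minimal sketch. This is not Slycat™ code; the function name and z-score threshold are assumptions for illustration only:

```python
import numpy as np

def find_outlier_runs(ensemble, z_thresh=2.5):
    """Flag ensemble members (rows) that deviate from the rest of the group.

    ensemble: (n_runs, n_params) array, one row per simulation run,
    one column per input or output parameter.
    """
    mu = ensemble.mean(axis=0)
    sigma = ensemble.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant parameters
    z = np.abs((ensemble - mu) / sigma)  # per-parameter z-scores
    # A run deviates if any one of its parameters deviates strongly.
    return np.where(z.max(axis=1) > z_thresh)[0]
```

    In a real ensemble-analysis pipeline a more robust statistic (e.g. median absolute deviation) would likely replace the mean and standard deviation, but the structure (rows as runs, columns as parameters) is the same.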

  2. Multiparameter vision testing apparatus

    NASA Technical Reports Server (NTRS)

    Hunt, S. R., Jr.; Homkes, R. J.; Poteate, W. B.; Sturgis, A. C. (Inventor)

    1975-01-01

    Compact vision testing apparatus is described for testing a large number of physiological characteristics of the eyes and visual system of a human subject. The head of the subject is inserted into a viewing port at one end of a light-tight housing containing various optical assemblies. Visual acuity and other refractive characteristics and ocular muscle balance characteristics of the eyes of the subject are tested by means of a retractable phoroptor assembly carried near the viewing port and a film cassette unit carried in the rearward portion of the housing (the latter selectively providing a variety of different visual targets which are viewed through the optical system of the phoroptor assembly). The visual dark adaptation characteristics and absolute brightness threshold of the subject are tested by means of a projector assembly which selectively projects one or both of a variable intensity fixation target and a variable intensity adaptation test field onto a viewing screen located near the top of the housing.

  3. Quantifying, Visualizing, and Monitoring Lead Optimization.

    PubMed

    Maynard, Andrew T; Roberts, Christopher D

    2016-05-12

    Although lead optimization (LO) is by definition a process, process-centric analysis and visualization of this important phase of pharmaceutical R&D has been lacking. Here we describe a simple statistical framework to quantify and visualize the progression of LO projects so that the vital signs of LO convergence can be monitored. We refer to the resulting visualizations generated by our methodology as the "LO telemetry" of a project. These visualizations can be automated to provide objective, holistic, and instantaneous analysis and communication of LO progression. This enhances the ability of project teams to more effectively drive the LO process, while enabling management to better coordinate and prioritize LO projects. We present the telemetry of five LO projects comprising different biological targets and different project outcomes, including clinical compound selection, termination due to preclinical safety/tox, and termination due to lack of tractability. We demonstrate that LO progression is accurately captured by the telemetry. We also present metrics to quantify LO efficiency and tractability.

  4. A new metaphor for projection-based visual analysis and data exploration

    NASA Astrophysics Data System (ADS)

    Schreck, Tobias; Panse, Christian

    2007-01-01

    In many important application domains such as Business and Finance, Process Monitoring, and Security, huge and quickly increasing volumes of complex data are collected. Strong efforts are underway developing automatic and interactive analysis tools for mining useful information from these data repositories. Many data analysis algorithms require an appropriate definition of similarity (or distance) between data instances to allow meaningful clustering, classification, and retrieval, among other analysis tasks. Projection-based data visualization is highly interesting (a) for visual discrimination analysis of a data set within a given similarity definition, and (b) for comparative analysis of similarity characteristics of a given data set represented by different similarity definitions. We introduce an intuitive and effective novel approach for projection-based similarity visualization for interactive discrimination analysis, data exploration, and visual evaluation of metric space effectiveness. The approach is based on the convex hull metaphor for visually aggregating sets of points in projected space, and it can be used with a variety of different projection techniques. The effectiveness of the approach is demonstrated by application on two well-known data sets. Statistical evidence supporting the validity of the hull metaphor is presented. We advocate the hull-based approach over the standard symbol-based approach to projection visualization, as it allows a more effective perception of similarity relationships and class distribution characteristics.
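    The hull metaphor itself is straightforward to sketch. The following is an illustrative reconstruction, not the authors' implementation: it aggregates each class's points in the projected 2-D space into a convex hull using SciPy, which can then be drawn as a filled polygon instead of individual symbols:

```python
import numpy as np
from scipy.spatial import ConvexHull

def class_hulls(points_2d, labels):
    """Compute the convex hull of each class's points in projected 2-D space.

    points_2d: (n, 2) array of projected coordinates.
    labels: (n,) array of class labels.
    Returns a dict mapping class label -> scipy.spatial.ConvexHull.
    """
    hulls = {}
    for c in np.unique(labels):
        pts = points_2d[labels == c]
        if len(pts) >= 3:  # a hull needs at least 3 non-collinear points
            hulls[c] = ConvexHull(pts)
    return hulls
```

    The hull's `vertices` give the boundary polygon for rendering, and its `volume` attribute (the enclosed area in 2-D) offers one rough measure of how compactly a similarity definition clusters a class.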

  5. Game engines and immersive displays

    NASA Astrophysics Data System (ADS)

    Chang, Benjamin; Destefano, Marc

    2014-02-01

    While virtual reality and digital games share many core technologies, the programming environments, toolkits, and workflows for developing games and VR environments are often distinct. VR toolkits designed for applications in visualization and simulation often have a different feature set or design philosophy than game engines, while popular game engines often lack support for VR hardware. Extending a game engine to support systems such as the CAVE gives developers a unified development environment and the ability to easily port projects, but involves challenges beyond just adding stereo 3D visuals. In this paper we outline the issues involved in adapting a game engine for use with an immersive display system including stereoscopy, tracking, and clustering, and present example implementation details using Unity3D. We discuss application development and workflow approaches including camera management, rendering synchronization, GUI design, and issues specific to Unity3D, and present examples of projects created for a multi-wall, clustered, stereoscopic display.
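    As a rough sketch of one of the issues mentioned above: off-axis stereoscopy for a flat wall requires an asymmetric view frustum per eye, since each eye is displaced from the screen's center axis. The helper below is a hypothetical illustration (not Unity3D API code) of computing near-plane frustum bounds for a viewer centered in front of a flat screen:

```python
def stereo_frustums(screen_width, screen_height, screen_dist, eye_sep, near=0.1):
    """Asymmetric (off-axis) projection frustum bounds for each eye.

    All distances in the same physical unit (e.g. meters). Returns, per eye,
    the (left, right, bottom, top) bounds at the near clipping plane.
    """
    frustums = {}
    scale = near / screen_dist  # project screen edges onto the near plane
    for eye, offset in (("left", -eye_sep / 2), ("right", eye_sep / 2)):
        # Screen edges expressed relative to this eye's position.
        left = (-screen_width / 2 - offset) * scale
        right = (screen_width / 2 - offset) * scale
        bottom = (-screen_height / 2) * scale
        top = (screen_height / 2) * scale
        frustums[eye] = (left, right, bottom, top)
    return frustums
```

    In an engine, each eye's bounds would be fed into an off-axis projection matrix, with the offset also applied to the eye camera's position; tracking data would replace the fixed centered-viewer assumption.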

  6. New Worlds Airship

    NASA Astrophysics Data System (ADS)

    Harness, Anthony; Cash, Webster; Shipley, Ann; Glassman, Tiffany; Warwick, Steve

    2013-09-01

    We review the progress on the New Worlds Airship project, which has the eventual goal of suborbitally mapping the Alpha Centauri planetary system into the Habitable Zone. This project consists of a telescope viewing a star that is occulted by a starshade suspended from an airship. The starshade suppresses the starlight such that fainter planetary objects near the star are revealed. A visual sensor is used to determine the position of the starshade and keep the telescope within the starshade's shadow. In the first attempt to demonstrate starshades through astronomical observations, we built a precision line-of-sight position indicator and flew it on a Zeppelin in October 2012. Since the airship provider went out of business, we have been redesigning the project to use Vertical Takeoff Vertical Landing rockets instead. These Suborbital Reusable Launch Vehicles will serve as a starshade platform and test bed for further development of the visual sensor. We have completed ground tests of starshades on dry lakebeds and have shown excellent contrast. We are now attempting to use starshades on hilltops to occult stars and perform high contrast imaging of outer planetary systems such as the debris disk around Fomalhaut.

  7. Contextual signals in visual cortex.

    PubMed

    Khan, Adil G; Hofer, Sonja B

    2018-06-05

    Vision is an active process. What we perceive strongly depends on our actions, intentions and expectations. During visual processing, these internal signals therefore need to be integrated with the visual information from the retina. The mechanisms of how this is achieved by the visual system are still poorly understood. Advances in recording and manipulating neuronal activity in specific cell types and axonal projections together with tools for circuit tracing are beginning to shed light on the neuronal circuit mechanisms of how internal, contextual signals shape sensory representations. Here we review recent work, primarily in mice, that has advanced our understanding of these processes, focusing on contextual signals related to locomotion, behavioural relevance and predictions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Symbolic participation. The role of projective drawings in a case of child abuse.

    PubMed

    Karp, M R

    1997-01-01

    Projective drawings have long been an invaluable tool in understanding the nonverbal, preconscious world of the patient, and it is well established that the drawings of children who have suffered physical and sexual abuse display predictable variations in the representation of visual symbols. However, careful attention to the individual symbol system of young children can reveal the unexpected, particular ways that trauma may be recorded in nonverbal and kinesthetic dimensions, especially when the trauma occurs before the acquisition of language. This essay explores the history of the psychoanalytic approach to the visual arts and the central ideas about the process of image-making in response to loss. A case is then presented in which one child's idiosyncratic visual symbol became a powerful communication between patient and therapist, through its ability to evoke a mutual, nonverbal, kinesthetic experience. Through the mechanism of projection, visual symbols can serve as a meeting ground in which the therapist may be able to participate in some of the nonverbal aspects of the patient's experience. In this instance, the emergence of a stable visual symbol in the intersubjective field facilitated first an unconscious identification with the patient and then a conscious understanding that had a mutative impact on the child's ability to organize nonverbal affect states within the therapy. This chapter illustrates the utility of a psychoanalytic understanding and attention to symbolic communication in a child whose manifest anxiety was phenotypically indistinguishable from Attention Deficit Disorder.

  9. Altered figure-ground perception in monkeys with an extra-striate lesion.

    PubMed

    Supèr, Hans; Lamme, Victor A F

    2007-11-05

    The visual system binds and segments the elements of an image into coherent objects and their surroundings. Recent findings demonstrate that primary visual cortex is involved in this process of figure-ground organization. In the primary visual cortex the late part of a neural response to a stimulus correlates with figure-ground segregation and perception. Such a late onset indicates an involvement of feedback projections from higher visual areas. To investigate the possible role of feedback in figure-ground perception we removed dorsal extra-striate areas of the monkey visual cortex. The findings show that figure-ground perception is reduced when the figure is presented in the lesioned hemifield and perception is normal when the figure appears in the intact hemifield. In conclusion, our observations show the importance of recurrent processing in visual perception.

  10. Extraction of skin-friction fields from surface flow visualizations as an inverse problem

    NASA Astrophysics Data System (ADS)

    Liu, Tianshu

    2013-12-01

    Extraction of high-resolution skin-friction fields from surface flow visualization images as an inverse problem is discussed from a unified perspective. The surface flow visualizations used in this study are luminescent oil-film visualization and heat-transfer and mass-transfer visualizations with temperature- and pressure-sensitive paints (TSPs and PSPs). The theoretical foundations of these global methods are the thin-oil-film equation and the limiting forms of the energy- and mass-transport equations at a wall, which are projected onto the image plane to provide the relationships between a skin-friction field and the relevant quantities measured by using an imaging system. Since these equations can be re-cast in the same mathematical form as the optical flow equation, they can be solved by using the variational method in the image plane to extract relative or normalized skin-friction fields from images. Furthermore, in terms of instrumentation, essentially the same imaging system for measurements of luminescence can be used in these surface flow visualizations. Examples are given to demonstrate the applications of these methods in global skin-friction diagnostics of complex flows.
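    Since the governing equations are re-cast in the same mathematical form as the optical flow equation, the extraction step resembles classical variational optical-flow estimation. As a stand-in for the author's solver (parameters and method choice are assumptions for illustration), a Horn-Schunck-style iteration that extracts a vector field from an image pair looks like:

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Estimate a flow field (u, v) between two images via the classic
    Horn-Schunck variational iteration: data term + smoothness term alpha."""
    Ix = np.gradient(I1, axis=1)  # spatial derivatives
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                  # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    # Kernel for the local average of the flow field (center excluded).
    k = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], float) / 12.0
    for _ in range(n_iter):
        u_bar = convolve(u, k)
        v_bar = convolve(v, k)
        t = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * t
        v = v_bar - Iy * t
    return u, v
```

    In the skin-friction context the images would be successive luminescent oil-film (or TSP/PSP) frames, and the recovered field would be interpreted as a relative or normalized skin-friction field rather than image motion.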

  11. Innovative Patient Room Lighting System with Integrated Spectrally Adaptive Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maniccia, Dorene A.; Rizzo, Patricia; Kim, James

    In December of 2013, the U.S. Department of Energy’s SSL R&D Program released a Funding Opportunity Announcement (FOA) that, for the first time, contained opportunities for comprehensive application-specific system development. The FOA included opportunities for two applications, one of which was a Patient Room. Philips Lighting Research North America submitted a proposal for the Patient Room application and was selected for the complete project award. The award amount was $497,127, with a Philips Research co-funding commitment of $165,709, for a total project value of $662,836. This project sought to redefine lighting for the patient room application. The goal was to deliver an innovative LED patient suite (patient room and bathroom) lighting system solution that was 40% more energy-efficient than traditional fluorescent incumbent technologies, would meet all the visual and non-visual needs of patients, caregivers and visitors, and would improve the patient experience. State-of-the-art multichannel LED platforms and control technologies that would provide spectral tuning and become part of an intelligent, connected lighting system drove the solution. The project was scoped into four main task areas: a) System Concept Creation, b) Identification of the Luminaire Portfolio, c) Development of the Connected Lighting Infrastructure, and d) System Performance Validation. Each of the four main tasks was completed and validated extensively over the course of the 2½-year project. The system concept was created by first developing a lighting design that demonstrated best practices for patient room lighting – illuminance and uniformity for task performance, reduced glare, and convenient controls, in addition to giving patients control over the lighting in their environment. A framework was defined to deliver circadian support via software behaviors.
Through that process luminaires were identified from the Philips portfolio that were adaptable – by their form, dimensions, and optical materials – to mix multicolor LED platforms uniformly and deliver target design lumen levels. The Blue Sky luminaire was selected for the patient bed area to give the illusion of skylight while providing white light on the patient bed. Luminaires used existing 2-channel tunable white LED boards, and newly developed 4-channel LED boards. Red-Orange, Blue, Green, and Blue-shifted Yellow LED chips were selected based on spectral characteristics and their ability to produce high quality white light. 4-channel Power over Ethernet (PoE) drivers were developed and firmware written so they would communicate with both 2- and 4-channel boards. These components formed the backbone of the connected lighting infrastructure. Software, flexible and nuanced in its complexity, was written to set behaviors for myriad lighting scenes in the room throughout the 24 hour day – and all could be overridden by manual controls. This included a dynamic tunable white program, three color changing automatic programs that simulated degrees of sunrise to sunset palettes, and an amber night lighting system that offered visual cues for postural stability to minimize the risk of falls. All programs were carefully designed to provide visual comfort for all occupants, support critical task performance for staff, and to support the patient’s 24hr rhythms. A full scale mockup room was constructed in the Philips Cambridge Lab. The lighting system was installed, tested and functionality demonstrated to ensure smooth operation of system components – luminaires, drivers, PoE switches, wall controls, patient remote, and daylight and occupancy sensors. How did the system perform? It met visual criteria, confirmed by calculations, simulations and measurements in the field. 
It met non-visual criteria, confirmed by setting circadian stimulus (CS) targets and performing calculations using the calculator developed by the Lighting Research Center. Finally, human factors validation studies were conducted to gain insight from real end users in the healthcare profession; surveys were administered, data analyzed, and audio comments captured. The general consensus was positive, with requests to pilot the system in their hospitals. The importance of the research completed under this grant is that it allowed the exploration and development of a unique lighting system, one that would deliver a blend of visual and non-visual criteria in patient room design for today’s healthcare environment. The research investigated the area of multichannel LED technology, multichannel Power over Ethernet (PoE) drivers and their integration with automatic and manual controls as a system – uncovering and meeting challenges along the way. It married visual needs of patients and staff with support for 24 hour rhythms, placing value on the wellbeing of the patient – while successfully saving energy over incumbent technologies. Indications are that the market is ready and willing to invest – multiple healthcare facilities are in line to pilot this system, recognizing its value beyond energy to patient and staff well-being. Its value to the public can best be expressed by a patient support coordinator who, after spending several hours in the room being immersed in the lighting, analyzing all its features, commented: “This re-writes lighting for healthcare”.

  12. Immersive cyberspace system

    NASA Technical Reports Server (NTRS)

    Park, Brian V. (Inventor)

    1997-01-01

    An immersive cyberspace system is presented which provides visual, audible, and vibrational inputs to a subject remaining in neutral immersion, and also provides for subject control input. The immersive cyberspace system includes a relaxation chair and a neutral immersion display hood. The relaxation chair supports a subject positioned thereupon, and places the subject in position which merges a neutral body position, the position a body naturally assumes in zero gravity, with a savasana yoga position. The display hood, which covers the subject's head, is configured to produce light images and sounds. An image projection subsystem provides either external or internal image projection. The display hood includes a projection screen moveably attached to an opaque shroud. A motion base supports the relaxation chair and produces vibrational inputs over a range of about 0-30 Hz. The motion base also produces limited translation and rotational movements of the relaxation chair. These limited translational and rotational movements, when properly coordinated with visual stimuli, constitute motion cues which create sensations of pitch, yaw, and roll movements. Vibration transducers produce vibrational inputs from about 20 Hz to about 150 Hz. An external computer, coupled to various components of the immersive cyberspace system, executes a software program and creates the cyberspace environment. One or more neutral hand posture controllers may be coupled to the external computer system and used to control various aspects of the cyberspace environment, or to enter data during the cyberspace experience.

  13. Project DyAdd: Visual Attention in Adult Dyslexia and ADHD

    ERIC Educational Resources Information Center

    Laasonen, Marja; Salomaa, Jonna; Cousineau, Denis; Leppamaki, Sami; Tani, Pekka; Hokkanen, Laura; Dye, Matthew

    2012-01-01

    In this study of the project DyAdd, three aspects of visual attention were investigated in adults (18-55 years) with dyslexia (n = 35) or attention deficit/hyperactivity disorder (ADHD, n = 22), and in healthy controls (n = 35). Temporal characteristics of visual attention were assessed with Attentional Blink (AB), capacity of visual attention…

  14. Developing an Approach for Analyzing and Verifying System Communication

    NASA Technical Reports Server (NTRS)

    Stratton, William C.; Lindvall, Mikael; Ackermann, Chris; Sibol, Deane E.; Godfrey, Sally

    2009-01-01

    This slide presentation reviews a project for developing an approach for analyzing and verifying inter-system communications. The motivation for the study was that software systems in the aerospace domain are inherently complex and operate under tight resource constraints, so systems of systems must communicate with each other to fulfill their tasks. These systems of systems require reliable communications. The technical approach was to develop a system, DynSAVE, that detects communication problems among the systems. The project enhanced the proven Software Architecture Visualization and Evaluation (SAVE) tool to create Dynamic SAVE (DynSAVE). The approach monitors and records low-level network traffic, converts the low-level traffic into meaningful messages, and displays the messages in a way that allows issues to be detected.
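    The conversion of low-level traffic into meaningful messages can be sketched as a decoder over a framed byte stream. The wire format below (2-byte type, 4-byte sequence number, 4-byte payload length, big-endian) is entirely hypothetical and is not the protocol DynSAVE monitors; it only illustrates the raw-bytes-to-message-records step:

```python
import struct

# Hypothetical fixed header: message type, sequence number, payload length.
HEADER = struct.Struct(">HII")

def decode_messages(raw: bytes):
    """Convert a captured low-level byte stream into message records."""
    msgs, offset = [], 0
    while offset + HEADER.size <= len(raw):
        mtype, seq, length = HEADER.unpack_from(raw, offset)
        offset += HEADER.size
        payload = raw[offset:offset + length]
        offset += length
        msgs.append({"type": mtype, "seq": seq, "payload": payload})
    return msgs
```

    A monitoring tool would then check properties over the decoded records, for example that sequence numbers are contiguous, before rendering the message exchange visually.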

  15. Visual speech perception in foveal and extrafoveal vision: further implications for divisions in hemispheric projections.

    PubMed

    Jordan, Timothy R; Sheen, Mercedes; Abedipour, Lily; Paterson, Kevin B

    2014-01-01

    When observing a talking face, it has often been argued that visual speech to the left and right of fixation may produce differences in performance due to divided projections to the two cerebral hemispheres. However, while it seems likely that such a division in hemispheric projections exists for areas away from fixation, the nature and existence of a functional division in visual speech perception at the foveal midline remains to be determined. We investigated this issue by presenting visual speech in matched hemiface displays to the left and right of a central fixation point, either exactly abutting the foveal midline or else located away from the midline in extrafoveal vision. The location of displays relative to the foveal midline was controlled precisely using an automated, gaze-contingent eye-tracking procedure. Visual speech perception showed a clear right hemifield advantage when presented in extrafoveal locations but no hemifield advantage (left or right) when presented abutting the foveal midline. Thus, while visual speech observed in extrafoveal vision appears to benefit from unilateral projections to left-hemisphere processes, no evidence was obtained to indicate that a functional division exists when visual speech is observed around the point of fixation. Implications of these findings for understanding visual speech perception and the nature of functional divisions in hemispheric projection are discussed.

  16. The SCEC Community Modeling Environment(SCEC/CME): A Collaboratory for Seismic Hazard Analysis

    NASA Astrophysics Data System (ADS)

    Maechling, P. J.; Jordan, T. H.; Minster, J. B.; Moore, R.; Kesselman, C.

    2005-12-01

    The SCEC Community Modeling Environment (SCEC/CME) Project is an NSF-supported Geosciences/IT partnership that is actively developing an advanced information infrastructure for system-level earthquake science in Southern California. This partnership includes SCEC, USC's Information Sciences Institute (ISI), the San Diego Supercomputer Center (SDSC), the Incorporated Research Institutions for Seismology (IRIS), and the U.S. Geological Survey. The goal of the SCEC/CME is to develop seismological applications and information technology (IT) infrastructure to support the development of Seismic Hazard Analysis (SHA) programs and other geophysical simulations. The SHA application programs developed on the Project include a Probabilistic Seismic Hazard Analysis system called OpenSHA. OpenSHA computational elements that are currently available include a collection of attenuation relationships and several Earthquake Rupture Forecasts (ERFs). Geophysicists in the collaboration have also developed Anelastic Wave Models (AWMs) using both finite-difference and finite-element approaches. Earthquake simulations using these codes have been run for a variety of earthquake sources. Rupture Dynamic Model (RDM) codes have also been developed that simulate friction-based fault slip. The SCEC/CME collaboration has also developed IT software and hardware infrastructure to support the development, execution, and analysis of these SHA programs. To support computationally expensive simulations, we have constructed a grid-based scientific workflow system. Using the SCEC grid, project collaborators can submit computations from the SCEC/CME servers to High Performance Computers at USC and TeraGrid High Performance Computing Centers. Data generated and archived by the SCEC/CME is stored in a digital library system, the Storage Resource Broker (SRB). This system provides a robust and secure system for maintaining the association between the data sets and their metadata.
To provide an easy-to-use system for constructing SHA computations, a browser-based workflow assembly web portal has been developed. Users can compose complex SHA calculations, specifying SCEC/CME data sets as inputs to calculations, and calling SCEC/CME computational programs to process the data and produce output. Knowledge-based software tools have been implemented that utilize ontological descriptions of SHA software and data to validate workflows created with this pathway assembly tool. Data visualization software developed by the collaboration supports analysis and validation of data sets. Several programs have been developed to visualize SCEC/CME data, including GMT-based map making software for PSHA codes, 4D wavefield propagation visualization software based on OpenGL, and 3D Geowall-based visualization of earthquakes, faults, and seismic wave propagation. The SCEC/CME Project also helps to sponsor the SCEC UseIT Intern program. The UseIT Intern Program provides research opportunities in both Geosciences and Information Technology to undergraduate students in a variety of fields. The UseIT group has developed a 3D data visualization tool, called SCEC-VDO, as a part of this undergraduate research program.

  17. Visualization of x-ray computer tomography using computer-generated holography

    NASA Astrophysics Data System (ADS)

    Daibo, Masahiro; Tayama, Norio

    1998-09-01

    A theory that converts x-ray projection data directly into a hologram, by combining computed tomography (CT) with the computer-generated hologram (CGH), is proposed. The purpose of this study is to provide the theory for realizing an all-electronic, high-speed, see-through 3D visualization system for application to medical diagnosis and non-destructive testing. First, the CT is expressed using the pseudo-inverse matrix, which is obtained by singular value decomposition. The CGH is expressed in matrix form. Next, the `projection to hologram conversion' (PTHC) matrix is calculated by multiplying the phase matrix of the CGH with the pseudo-inverse matrix of the CT. Finally, the projection vector is converted directly to the hologram vector by multiplying the PTHC matrix with the projection vector. By incorporating holographic analog computation into CT reconstruction, the amount of calculation is drastically reduced. We demonstrate a CT cross section reconstructed by a He-Ne laser in 3D space from real x-ray projection data acquired by x-ray television equipment, using our direct conversion technique.
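The direct conversion described above reduces to two matrix products: precompute the PTHC matrix as (CGH phase matrix) x (CT pseudo-inverse), then map any projection vector straight to a hologram vector. A minimal numpy sketch, in which all matrix sizes, variable names, and the random system matrices are illustrative assumptions rather than values from the paper:

```python
import numpy as np

# Sketch of the projection-to-hologram conversion (PTHC) idea.
# A stands in for the CT system matrix, Phi for the CGH phase matrix.
rng = np.random.default_rng(0)

n_pix, n_proj, n_holo = 16, 24, 32        # image pixels, projection samples, hologram samples

A = rng.normal(size=(n_proj, n_pix))      # CT forward model: image -> projections
A_pinv = np.linalg.pinv(A)                # pseudo-inverse via SVD: projections -> image
Phi = rng.normal(size=(n_holo, n_pix))    # CGH phase matrix: image -> hologram

# Precomputed PTHC matrix: hologram = Phi @ (A_pinv @ projections) in one step
pthc = Phi @ A_pinv

x = rng.normal(size=n_pix)                # a test image
p = A @ x                                 # its projection data
h_direct = pthc @ p                       # direct projection-to-hologram conversion
h_two_step = Phi @ (A_pinv @ p)           # reference: reconstruct image, then form CGH

assert np.allclose(h_direct, h_two_step)  # the two routes agree
```

Precomputing `pthc` is where the claimed reduction in calculation comes from: the per-frame work drops to a single matrix-vector product.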

  18. Creating a New Definition of Library Cooperation: Past, Present, and Future Models.

    ERIC Educational Resources Information Center

    Lenzini, Rebecca T.; Shaw, Ward

    1991-01-01

    Describes the creation and purpose of the Colorado Alliance of Research Libraries (CARL), the subsequent development of CARL Systems, and its current research projects. Topics discussed include online catalogs; UnCover, a journal article database; full text data; document delivery; visual images in computer systems; networks; and implications for…

  19. NED-2: A decision support system for integrated forest ecosystem management

    Treesearch

    Mark J. Twery; Peter D. Knopp; Scott A. Thomasma; H. Michael Rauscher; Donald E. Nute; Walter D. Potter; Frederick Maier; Jin Wang; Mayukh Dass; Hajime Uchiyama; Astrid Glende; Robin E. Hoffman

    2005-01-01

    NED-2 is a Windows-based system designed to improve project-level planning and decision making by providing useful and scientifically sound information to natural resource managers. Resources currently addressed include visual quality, ecology, forest health, timber, water, and wildlife. NED-2 expands on previous versions of NED applications by integrating treatment...

  1. Education about Hallucinations Using an Internet Virtual Reality System: A Qualitative Survey

    ERIC Educational Resources Information Center

    Yellowlees, Peter M.; Cook, James N.

    2006-01-01

    Objective: The authors evaluate an Internet virtual reality technology as an education tool about the hallucinations of psychosis. Method: This is a pilot project using Second Life, an Internet-based virtual reality system, in which a virtual reality environment was constructed to simulate the auditory and visual hallucinations of two patients…

  2. Use and Evaluation of 3D GeoWall Visualizations in Undergraduate Space Science Classes

    NASA Astrophysics Data System (ADS)

    Turner, N. E.; Hamed, K. M.; Lopez, R. E.; Mitchell, E. J.; Gray, C. L.; Corralez, D. S.; Robinson, C. A.; Soderlund, K. M.

    2005-12-01

    One persistent difficulty many astronomy students face is the lack of a 3-dimensional mental model of the systems being studied, in particular the Sun-Earth-Moon system. Students without such a mental model can have a very hard time conceptualizing the geometric relationships that cause, for example, the cycle of lunar phases or the pattern of seasons. The GeoWall is a recently developed and affordable projection mechanism for three-dimensional stereo visualization that is becoming a popular tool in classrooms and research labs for geology classes, but as yet very little work has been done with the GeoWall in astronomy classes. We present results from a large study involving over 1000 students of varied backgrounds: some students were tested at the University of Texas at El Paso, a large public university on the US-Mexico border, and other students were from the Florida Institute of Technology, a small, private, technical school in Melbourne, Florida. We wrote a lecture-tutorial-style lab to go along with a GeoWall 3D visual of the Earth-Moon system and tested the students before and after with several diagnostics. Students were given pre- and post-tests using the Lunar Phase Concept Inventory (LPCI) as well as a separate evaluation written specifically for this project. We found the lab useful for both populations of students, but not equally effective for all. We discuss reactions from the students and their improvement, as well as whether the students are able to correctly assess the usefulness of the project for their own learning.

  3. How (and why) the visual control of action differs from visual perception

    PubMed Central

    Goodale, Melvyn A.

    2014-01-01

    Vision not only provides us with detailed knowledge of the world beyond our bodies, but it also guides our actions with respect to objects and events in that world. The computations required for vision-for-perception are quite different from those required for vision-for-action. The former uses relational metrics and scene-based frames of reference while the latter uses absolute metrics and effector-based frames of reference. These competing demands on vision have shaped the organization of the visual pathways in the primate brain, particularly within the visual areas of the cerebral cortex. The ventral ‘perceptual’ stream, projecting from early visual areas to inferior temporal cortex, helps to construct the rich and detailed visual representations of the world that allow us to identify objects and events, attach meaning and significance to them and establish their causal relations. By contrast, the dorsal ‘action’ stream, projecting from early visual areas to the posterior parietal cortex, plays a critical role in the real-time control of action, transforming information about the location and disposition of goal objects into the coordinate frames of the effectors being used to perform the action. The idea of two visual systems in a single brain might seem initially counterintuitive. Our visual experience of the world is so compelling that it is hard to believe that some other quite independent visual signal—one that we are unaware of—is guiding our movements. But evidence from a broad range of studies from neuropsychology to neuroimaging has shown that the visual signals that give us our experience of objects and events in the world are not the same ones that control our actions. PMID:24789899

  4. Enhanced Night Visibility Series, Volume XVI : Phase III, Characterization of Experimental Objects

    DOT National Transportation Integrated Search

    2005-12-01

    The Enhanced Night Visibility (ENV) project is a series of experiments undertaken to investigate different visual enhancement systems (VES) for the nighttime driving task. The purpose of this characterization activity is to establish the photometric ...

  5. Change Blindness Phenomena for Virtual Reality Display Systems.

    PubMed

    Steinicke, Frank; Bruder, Gerd; Hinrichs, Klaus; Willemsen, Pete

    2011-09-01

    In visual perception, change blindness describes the phenomenon that persons viewing a visual scene may apparently fail to detect significant changes in that scene. These phenomena have been observed in both computer-generated imagery and real-world scenes. Several studies have demonstrated that change blindness effects occur primarily during visual disruptions such as blinks or saccadic eye movements. However, until now the influence of stereoscopic vision on change blindness has not been studied thoroughly in the context of visual perception research. In this paper, we introduce change blindness techniques for stereoscopic virtual reality (VR) systems, providing the ability to substantially modify a virtual scene in a manner that is difficult for observers to perceive. We evaluate techniques for semi-immersive VR systems, i.e., passive and active stereoscopic projection systems, as well as an immersive VR system, i.e., a head-mounted display, and compare the results to those of monoscopic viewing conditions. For stereoscopic viewing conditions, we found that change blindness phenomena occur with the same magnitude as in monoscopic viewing conditions. Furthermore, we have evaluated the potential of the presented techniques for allowing abrupt, and yet significant, changes of a stereoscopically displayed virtual reality environment.

  6. Exploring Middle School Students' Representational Competence in Science: Development and Verification of a Framework for Learning with Visual Representations

    NASA Astrophysics Data System (ADS)

    Tippett, Christine Diane

    Scientific knowledge is constructed and communicated through a range of forms in addition to verbal language. Maps, graphs, charts, diagrams, formulae, models, and drawings are just some of the ways in which science concepts can be represented. Representational competence---an aspect of visual literacy that focuses on the ability to interpret, transform, and produce visual representations---is a key component of science literacy and an essential part of science reading and writing. To date, however, most research has examined learning from representations rather than learning with representations. This dissertation consisted of three distinct projects that were related by a common focus on learning with visual representations as an important aspect of scientific literacy. The first project was the development of an exploratory framework that is proposed for use in investigations of students constructing and interpreting multimedia texts. The exploratory framework, which integrates cognition, metacognition, semiotics, and systemic functional linguistics, could eventually result in a model that might be used to guide classroom practice, leading to improved visual literacy, better comprehension of science concepts, and enhanced science literacy because it emphasizes distinct aspects of learning with representations that can be addressed through explicit instruction. The second project was a metasynthesis of the research that was previously conducted as part of the Explicit Literacy Instruction Embedded in Middle School Science project (Pacific CRYSTAL, http://www.educ.uvic.ca/pacificcrystal). Five overarching themes emerged from this case-to-case synthesis: the engaging and effective nature of multimedia genres, opportunities for differentiated instruction using multimodal strategies, opportunities for assessment, an emphasis on visual representations, and the robustness of some multimodal literacy strategies across content areas.
The third project was a mixed-methods verification study that was conducted to refine and validate the theoretical framework. This study examined middle school students' representational competence and focused on students' creation of visual representations such as labelled diagrams, a form of representation commonly found in science information texts and textbooks. An analysis of the 31 Grade 6 participants' representations and semistructured interviews revealed five themes, each of which supports one or more dimensions of the exploratory framework: participants' use of color, participants' choice of representation (form and function), participants' method of planning for representing, participants' knowledge of conventions, and participants' selection of information to represent. Together, the results of these three projects highlight the need for further research on learning with rather than learning from representations.

  7. The use of Web-based GIS data technologies in the construction of geoscience instructional materials: examples from the MARGINS Data in the Classroom project

    NASA Astrophysics Data System (ADS)

    Ryan, J. G.; McIlrath, J. A.

    2008-12-01

    Web-accessible geospatial information system (GIS) technologies have advanced in concert with an expansion of data resources that can be accessed and used by researchers, educators and students. These resources facilitate the development of data-rich instructional resources and activities that can be used to transition seamlessly into undergraduate research projects. MARGINS Data in the Classroom (http://serc.carleton.edu/margins/index.html) seeks to engage MARGINS researchers and educators in using the images, datasets, and visualizations produced by NSF-MARGINS Program-funded research and related efforts to create Web-deliverable instructional materials for use in undergraduate-level geoscience courses (MARGINS Mini-Lessons). MARGINS science data is managed by the Marine Geosciences Data System (MGDS), and these and all other MGDS-hosted data can be accessed, manipulated and visualized using GeoMapApp (www.geomapapp.org; Carbotte et al, 2004), a freely available geographic information system focused on the marine environment. Both "packaged" MGDS datasets (i.e., global earthquake foci, volcanoes, bathymetry) and "raw" data (seismic surveys, magnetics, gravity) are accessible via GeoMapApp, with WFS linkages to other resources (geodesy from UNAVCO; seismic profiles from IRIS; geochemical and drillsite data from EarthChem, IODP, and others), permitting the comprehensive characterization of many regions of the ocean basins. Geospatially controlled datasets can be imported into GeoMapApp visualizations, and these visualizations can be exported into Google Earth as .kmz image files. Many of the MARGINS Mini-Lessons thus far produced use (or have students use) the varied capabilities of GeoMapApp (i.e., constructing topographic profiles, overlaying varied geophysical and bathymetric datasets, characterizing geochemical data). These materials are available for use and testing from the project webpage (http://serc.carleton.edu/margins/). Classroom testing and assessment of the Mini-Lessons begins this Fall.

  8. Videoexoscopic real-time intraoperative navigation for spinal neurosurgery: a novel co-adaptation of two existing technology platforms, technical note.

    PubMed

    Huang, Meng; Barber, Sean Michael; Steele, William James; Boghani, Zain; Desai, Viren Rajendrakumar; Britz, Gavin Wayne; West, George Alexander; Trask, Todd Wilson; Holman, Paul Joseph

    2018-06-01

    Image-guided approaches to spinal instrumentation and interbody fusion have been widely popularized in the last decade [1-5]. Navigated pedicle screws are significantly less likely to breach [2, 3, 5, 6]. Navigation otherwise remains a point reference tool because the projection is off-axis to the surgeon's inline loupe or microscope view. The Synaptive robotic BrightMatter drive videoexoscope monitor system represents a new paradigm for off-axis high-definition (HD) surgical visualization. It has many advantages over the traditional microscope and loupes, which have already been demonstrated in a cadaveric study [7]. An auxiliary, but powerful, capability of this system is projection of a second, modifiable image in a split-screen configuration. We hypothesized that integration of the Medtronic and Synaptive platforms could permit the visualization of reconstructed navigation and surgical field images simultaneously. By utilizing navigated instruments, this configuration has the ability to support live image-guided surgery or real-time navigation (RTN). Medtronic O-arm/Stealth S7 navigation, MetRx, NavLock, and SureTrak spinal systems were implemented on a prone cadaveric specimen with a stream output to the Synaptive display. Surgical visualization was provided using a Storz Image S1 platform and camera mounted to the Synaptive robotic BrightMatter drive. We were able to successfully co-adapt the two platforms. A minimally invasive transforaminal lumbar interbody fusion (MIS TLIF) and an open pedicle subtraction osteotomy (PSO) were performed using a navigated high-speed drill under RTN. Disc shaver and trial instruments were used under RTN for the MIS TLIF. The synergy of the Synaptive HD videoexoscope robotic drive and Medtronic Stealth platforms allows for live image-guided surgery or real-time navigation (RTN). Off-axis projection also allows upright, neutral cervical spine operative ergonomics for the surgeons and improved surgical team visualization and education compared to traditional means. This technique has the potential to augment existing minimally invasive and open approaches, but will require long-term outcome measurements for efficacy.

  9. Solar heating and cooling technical data and systems analysis

    NASA Technical Reports Server (NTRS)

    Christensen, D. L.

    1976-01-01

    The accomplishments of a project to study solar heating and air conditioning are outlined. Presentation materials (data packages, slides, charts, and visual aids) were developed. Bibliographies and source materials on materials and coatings, solar water heaters, systems analysis computer models, solar collectors and solar projects were developed. Detailed MIRADS computer formats for primary data parameters were developed and updated. The following data were included: climatic, architectural, topography, heating and cooling equipment, thermal loads, and economics. Data sources in each of these areas were identified as well as solar radiation data stations and instruments.

  10. Louisiana: a model for advancing regional e-Research through cyberinfrastructure.

    PubMed

    Katz, Daniel S; Allen, Gabrielle; Cortez, Ricardo; Cruz-Neira, Carolina; Gottumukkala, Raju; Greenwood, Zeno D; Guice, Les; Jha, Shantenu; Kolluru, Ramesh; Kosar, Tevfik; Leger, Lonnie; Liu, Honggao; McMahon, Charlie; Nabrzyski, Jarek; Rodriguez-Milla, Bety; Seidel, Ed; Speyrer, Greg; Stubblefield, Michael; Voss, Brian; Whittenburg, Scott

    2009-06-28

    Louisiana researchers and universities are leading a concentrated, collaborative effort to advance statewide e-Research through a new cyberinfrastructure: computing systems, data storage systems, advanced instruments and data repositories, visualization environments and people, all linked together by software programs and high-performance networks. This effort has led to a set of interlinked projects that have started making a significant difference in the state, and has created an environment that encourages increased collaboration, leading to new e-Research. This paper describes the overall effort, the new projects and environment and the results to date.

  11. The Earth System CoG Collaboration Environment

    NASA Astrophysics Data System (ADS)

    DeLuca, C.; Murphy, S.; Cinquini, L.; Treshansky, A.; Wallis, J. C.; Rood, R. B.; Overeem, I.

    2013-12-01

    The Earth System CoG supports collaborative Earth science research and product development in virtual organizations that span multiple projects and communities. It provides access to data, metadata, and visualization services along with tools that support open project governance, and it can be used to host individual projects or to profile projects hosted elsewhere. All projects on CoG are described using a project ontology - an organized common vocabulary - that exposes information needed for collaboration and decision-making. Projects can be linked into a network, and the underlying ontology enables consolidated views of information across the network. This access to information promotes the creation of active and knowledgeable project governance, at both individual and aggregate project levels. CoG is being used to support software development projects, model intercomparison projects, training classes, and scientific programs. Its services and ontology are customizable by project. This presentation will provide an overview of CoG, review examples of current use, and discuss how CoG can be used as knowledge and coordination hub for networks of projects in the Earth Sciences.

  12. The Digital Space Shuttle, 3D Graphics, and Knowledge Management

    NASA Technical Reports Server (NTRS)

    Gomez, Julian E.; Keller, Paul J.

    2003-01-01

    The Digital Shuttle is a knowledge management project that seeks to define symbiotic relationships between 3D graphics and formal knowledge representations (ontologies). 3D graphics provides geometric and visual content, in 2D and 3D CAD forms, and the capability to display systems knowledge. Because the data is so heterogeneous, and the interrelated data structures are complex, 3D graphics combined with ontologies provides mechanisms for navigating the data and visualizing relationships.

  13. Automatic cell identification and visualization using digital holographic microscopy with head mounted augmented reality devices.

    PubMed

    O'Connor, Timothy; Rawat, Siddharth; Markman, Adam; Javidi, Bahram

    2018-03-01

    We propose a compact imaging system that integrates an augmented reality head mounted device with digital holographic microscopy for automated cell identification and visualization. A shearing interferometer is used to produce holograms of biological cells, which are recorded using customized smart glasses containing an external camera. After image acquisition, segmentation is performed to isolate regions of interest containing biological cells in the field-of-view, followed by digital reconstruction of the cells, which is used to generate a three-dimensional (3D) pseudocolor optical path length profile. Morphological features are extracted from the cell's optical path length map, including mean optical path length, coefficient of variation, optical volume, projected area, projected area to optical volume ratio, cell skewness, and cell kurtosis. Classification is performed using the random forest classifier, support vector machines, and K-nearest neighbor, and the results are compared. Finally, the augmented reality device displays the cell's pseudocolor 3D rendering of its optical path length profile, extracted features, and the identified cell's type or class. The proposed system could allow a healthcare worker to quickly visualize cells using augmented reality smart glasses and extract the relevant information for rapid diagnosis. To the best of our knowledge, this is the first report on the integration of digital holographic microscopy with augmented reality devices for automated cell identification and visualization.
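As a rough illustration of the classification step above, a k-nearest-neighbor vote over morphological feature vectors can be sketched in a few lines of numpy. The features and class clusters here are synthetic stand-ins, not the paper's data (the authors also evaluate random forest and support vector machine classifiers):

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify one feature vector by majority vote of its k nearest neighbors."""
    d = np.linalg.norm(train_X - query, axis=1)   # Euclidean distance to each sample
    nearest = np.argsort(d)[:k]                   # indices of the k closest samples
    votes = np.bincount(train_y[nearest])         # count class labels among them
    return int(np.argmax(votes))

# Synthetic 3-dimensional "morphological features" (e.g., mean optical path
# length, optical volume, projected area) for two well-separated cell classes.
rng = np.random.default_rng(1)
class_a = rng.normal(loc=0.0, scale=0.3, size=(20, 3))   # cell type 0
class_b = rng.normal(loc=2.0, scale=0.3, size=(20, 3))   # cell type 1
train_X = np.vstack([class_a, class_b])
train_y = np.array([0] * 20 + [1] * 20)

print(knn_predict(train_X, train_y, np.array([0.1, 0.0, -0.1])))  # → 0
```

In the system described above, the predicted label would then be overlaid on the cell's pseudocolor 3D rendering in the augmented reality display.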

  14. Altered transfer of visual motion information to parietal association cortex in untreated first-episode psychosis: Implications for pursuit eye tracking

    PubMed Central

    Lencer, Rebekka; Keedy, Sarah K.; Reilly, James L.; McDonough, Bruce E.; Harris, Margret S. H.; Sprenger, Andreas; Sweeney, John A.

    2011-01-01

    Visual motion processing and its use for pursuit eye movement control represent a valuable model for studying the use of sensory input for action planning. In psychotic disorders, alterations of visual motion perception have been suggested to cause pursuit eye tracking deficits. We evaluated this system in functional neuroimaging studies of untreated first-episode schizophrenia (N=24), psychotic bipolar disorder patients (N=13) and healthy controls (N=20). During a passive visual motion processing task, both patient groups showed reduced activation in the posterior parietal projection fields of motion-sensitive extrastriate area V5, but not in V5 itself. This suggests reduced bottom-up transfer of visual motion information from extrastriate cortex to perceptual systems in parietal association cortex. During active pursuit, activation was enhanced in anterior intraparietal sulcus and insula in both patient groups, and in dorsolateral prefrontal cortex and dorsomedial thalamus in schizophrenia patients. This may result from increased demands on sensorimotor systems for pursuit control due to the limited availability of perceptual motion information about target speed and tracking error. Visual motion information transfer deficits to higher-level association cortex may contribute to well-established pursuit tracking abnormalities, and perhaps to a wider array of alterations in perception and action planning in psychotic disorders. PMID:21873035

  15. Audio-visual imposture

    NASA Astrophysics Data System (ADS)

    Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard

    2006-05-01

    A GMM-based audio-visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing is accomplished on DCT-based features extracted from the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM-based classifier. Fusion of both audio and video modalities for audio-visual speaker verification is compared with face verification and speaker verification systems. To improve the robustness of the multimodal biometric identity verification system, an audio-visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM-based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with the prospect of experimenting on the newly developed PDAtabase created within the scope of the SecurePhone project.
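The core of a GMM-based verification decision is a log-likelihood-ratio test: accept the identity claim if the test features score sufficiently higher under the claimed speaker's model than under a background model. A toy numpy sketch, simplified to single diagonal Gaussians in place of full mixtures; all models, features, and the threshold are illustrative assumptions, not the BECARS implementation:

```python
import numpy as np

def diag_gauss_loglik(X, mean, var):
    """Total log-likelihood of the rows of X under a diagonal Gaussian."""
    return np.sum(-0.5 * (np.log(2 * np.pi * var) + (X - mean) ** 2 / var))

def verify(X, spk, bg, threshold=0.0):
    """Accept if the log-likelihood ratio (speaker vs. background) clears the threshold."""
    llr = diag_gauss_loglik(X, *spk) - diag_gauss_loglik(X, *bg)
    return llr > threshold

# Illustrative 2-dimensional feature models: (mean, variance) pairs.
speaker_model = (np.array([1.0, -0.5]), np.array([0.2, 0.2]))
background_model = (np.zeros(2), np.ones(2))

# Synthetic "genuine" features drawn from the speaker model,
# "impostor" features drawn from the background distribution.
rng = np.random.default_rng(2)
genuine = rng.normal(speaker_model[0], np.sqrt(speaker_model[1]), size=(50, 2))
impostor = rng.normal(background_model[0], 1.0, size=(50, 2))

print(verify(genuine, speaker_model, background_model))   # expect True
print(verify(impostor, speaker_model, background_model))  # expect False
```

A real system would replace the single Gaussians with trained mixtures over MFCC or DCT features and fuse the audio and face scores before thresholding.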

  16. Virtual Earth System Laboratory (VESL): A Virtual Research Environment for The Visualization of Earth System Data and Process Simulations

    NASA Astrophysics Data System (ADS)

    Cheng, D. L. C.; Quinn, J. D.; Larour, E. Y.; Halkides, D. J.

    2017-12-01

    The Virtual Earth System Laboratory (VESL) is a Web application, under continued development at the Jet Propulsion Laboratory and UC Irvine, for the visualization of Earth System data and process simulations. As with any project of its size, we have encountered both successes and challenges during the course of development. Our principal point of success is the fact that VESL users can interact seamlessly with our earth science simulations within their own Web browser. Some of the challenges we have faced include retrofitting the VESL Web application to respond to touch gestures, reducing page load time (especially as the application has grown), and accounting for the differences between the various Web browsers and computing platforms.

  17. Pixels, people, perception, pet peeves, and possibilities: a look at displays

    NASA Astrophysics Data System (ADS)

    Task, H. Lee

    2007-04-01

    This year marks the 35th anniversary of the Visually Coupled Systems symposium held at Brooks Air Force Base, San Antonio, Texas, in November of 1972. This paper uses the proceedings of the 1972 VCS symposium as a guide to address several topics associated primarily with helmet-mounted displays, systems integration and the human-machine interface. Specific topics addressed include monocular and binocular helmet-mounted displays (HMDs), visor projection HMDs, color HMDs, system integration with aircraft windscreens, visual interface issues and others. In addition, this paper also addresses a few mysteries and irritations (pet peeves) collected over the past 35+ years of experience in the display and display-related areas.

  18. Multimission Telemetry Visualization (MTV) system: A mission applications project from JPL's Multimedia Communications Laboratory

    NASA Technical Reports Server (NTRS)

    Koeberlein, Ernest, III; Pender, Shaw Exum

    1994-01-01

    This paper describes the Multimission Telemetry Visualization (MTV) data acquisition/distribution system. MTV was developed by JPL's Multimedia Communications Laboratory (MCL) and designed to process and display digital, real-time, science and engineering data from JPL's Mission Control Center. The MTV system can be accessed using UNIX workstations and PCs over common datacom and telecom networks from worldwide locations. It is designed to lower data distribution costs while increasing data analysis functionality by integrating low-cost, off-the-shelf desktop hardware and software. MTV is expected to significantly lower the cost of real-time data display, processing, and distribution, and to allow for greater spacecraft safety and mission data access.

  19. Virtual workstations and telepresence interfaces: Design accommodations and prototypes for Space Station Freedom evolution

    NASA Technical Reports Server (NTRS)

    Mcgreevy, Michael W.

    1990-01-01

    An advanced human-system interface is being developed for evolutionary Space Station Freedom as part of the NASA Office of Space Station (OSS) Advanced Development Program. The human-system interface is based on body-pointed display and control devices. The project will identify and document the design accommodations ('hooks and scars') required to support virtual workstations and telepresence interfaces, and prototype interface systems will be built, evaluated, and refined. The project is a joint enterprise of Marquette University, Astronautics Corporation of America (ACA), and NASA's ARC. The project team is working with NASA's JSC and McDonnell Douglas Astronautics Company (the Work Package contractor) to ensure that the project is consistent with space station user requirements and program constraints. Documentation describing design accommodations and tradeoffs will be provided to OSS, JSC, and McDonnell Douglas, and prototype interface devices will be delivered to ARC and JSC. ACA intends to commercialize derivatives of the interface for use with computer systems developed for scientific visualization and system simulation.

  20. High definition TV projection via single crystal faceplate technology

    NASA Astrophysics Data System (ADS)

    Kindl, H. J.; St. John, Thomas

    1993-03-01

    Single crystal phosphor faceplates are epitaxial phosphors grown on crystalline substrates with the advantages of high light output, resolution, and extended operational life. Single crystal phosphor faceplate industrial technology in the United States is capable of providing a faceplate appropriate to the projection industry of up to four (4) inches in diameter. Projection systems incorporating cathode ray tubes utilizing single crystal phosphor faceplates will produce 1500 lumens of white light with 1000 lines of resolution, non-interlaced. This 1500 lumen projection system will meet all of the currently specified luminance and resolution requirements of visual display systems for flight simulators. Significant logistic advantages accrue from the introduction of single crystal phosphor faceplate CRTs. Specifically, the full performance life of a CRT is expected to increase by a factor of five (5), i.e., from 2000 to 10,000 hours of operation. There will be attendant reductions in maintenance time, spare CRT requirements, system down time, etc. The increased brightness of the projection system will allow use of lower gain, lower cost simulator screen material. Further, picture performance characteristics will be more balanced across the full simulator.

  1. Overview of research in progress at the Center of Excellence

    NASA Technical Reports Server (NTRS)

    Wandell, Brian A.

    1993-01-01

    The Center of Excellence (COE) was created nine years ago to facilitate active collaboration between the scientists at Ames Research Center and the Stanford Psychology Department. Significant interchange of ideas and personnel continues between Stanford and participating groups at NASA-Ames; the COE serves its function well. This progress report is organized into sections divided by project. Each section contains a list of investigators, a background statement, progress report, and a proposal for work during the coming year. The projects are: Algorithms for development and calibration of visual systems, Visually optimized image compression, Evaluation of advanced piloting displays, Spectral representations of color, Perception of motion in man and machine, Automation and decision making, and Motion information used for navigation and control.

  2. Deletion of Ten-m3 Induces the Formation of Eye Dominance Domains in Mouse Visual Cortex

    PubMed Central

    Merlin, Sam; Horng, Sam; Marotte, Lauren R.; Sur, Mriganka; Sawatari, Atomu

    2013-01-01

    The visual system is characterized by precise retinotopic mapping of each eye, together with exquisitely matched binocular projections. In many species, the inputs that represent the eyes are segregated into ocular dominance columns in primary visual cortex (V1), whereas in rodents, this does not occur. Ten-m3, a member of the Ten-m/Odz/Teneurin family, regulates axonal guidance in the retinogeniculate pathway. Significantly, ipsilateral projections are expanded in the dorsal lateral geniculate nucleus and are not aligned with contralateral projections in Ten-m3 knockout (KO) mice. Here, we demonstrate the impact of altered retinogeniculate mapping on the organization and function of V1. Transneuronal tracing and c-fos immunohistochemistry demonstrate that the subcortical expansion of ipsilateral input is conveyed to V1 in Ten-m3 KOs: Ipsilateral inputs are widely distributed across V1 and are interdigitated with contralateral inputs into eye dominance domains. Segregation is confirmed by optical imaging of intrinsic signals. Single-unit recordings show that ipsilateral and contralateral inputs are mismatched at the level of single V1 neurons and that binocular stimulation leads to functional suppression of these cells. These findings indicate that the medial expansion of the binocular zone together with an interocular mismatch is sufficient to induce novel structural features, such as eye dominance domains in rodent visual cortex. PMID:22499796

  3. Perception of linear horizontal self-motion induced by peripheral vision /linearvection/ - Basic characteristics and visual-vestibular interactions

    NASA Technical Reports Server (NTRS)

    Berthoz, A.; Pavard, B.; Young, L. R.

    1975-01-01

    The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self motion) by the visual system. Latencies of onset are around 1 sec and short-term adaptation has been shown. The dynamic range of the visual analyzer as judged by frequency analysis is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision which supports the idea of an essential although not independent role of vision in self motion perception.

  4. Snow rendering for interactive snowplow simulation : supporting safety in snowplow design.

    DOT National Transportation Integrated Search

    2011-02-01

    During a snowfall, following a snowplow can be extremely dangerous. This danger comes from the human visual system's inability to accurately perceive the speed and motion of the snowplow, often resulting in rear-end collisions. For this project...

  5. Media Services and Captioned Films Reports

    ERIC Educational Resources Information Center

    Delgado, Gilbert; and others

    1969-01-01

    Seven papers describing the development of captioned films, Project LIFE, the visual response system, and three regional media centers. Papers presented at National Conference on Research and Utilization of Educational Media for Teaching the Deaf (Lincoln, Nebraska, March 17-19, 1969). (JJ)

  6. Real-time "x-ray vision" for healthcare simulation: an interactive projective overlay system to enhance intubation training and other procedural training.

    PubMed

    Samosky, Joseph T; Baillargeon, Emma; Bregman, Russell; Brown, Andrew; Chaya, Amy; Enders, Leah; Nelson, Douglas A; Robinson, Evan; Sukits, Alison L; Weaver, Robert A

    2011-01-01

    We have developed a prototype of a real-time, interactive projective overlay (IPO) system that creates augmented reality display of a medical procedure directly on the surface of a full-body mannequin human simulator. These images approximate the appearance of both anatomic structures and instrument activity occurring within the body. The key innovation of the current work is sensing the position and motion of an actual device (such as an endotracheal tube) inserted into the mannequin and using the sensed position to control projected video images portraying the internal appearance of the same devices and relevant anatomic structures. The images are projected in correct registration onto the surface of the simulated body. As an initial practical prototype to test this technique we have developed a system permitting real-time visualization of the intra-airway position of an endotracheal tube during simulated intubation training.

  7. A Second-Generation Interactive Classroom Television System for the Partially Sighted.

    ERIC Educational Resources Information Center

    Genensky, S. M.; And Others

    The interactive classroom television system (ICTS) that is described permits partially sighted students and their teachers to be in continuous, two-way visual communication. It was implemented in Rowland Heights, California, as part of the second phase of a project aimed at evaluating how the ICTS helps in teaching basic skills to partially…

  8. The Vision Outreach Project: A Pilot Project to Train Teachers of Visually Impaired Students in Alabama.

    ERIC Educational Resources Information Center

    Sanspree, M. J.; And Others

    1991-01-01

    This article describes the Vision Outreach Project--a pilot project of the University of Alabama at Birmingham for training teachers of visually impaired students. The project produced video modules to provide distance education in rural and urban areas. The modules can be used to complete degree requirements or in-service training and continuing…

  9. Alpha-beta and gamma rhythms subserve feedback and feedforward influences among human visual cortical areas

    PubMed Central

    Michalareas, Georgios; Vezoli, Julien; van Pelt, Stan; Schoffelen, Jan-Mathijs; Kennedy, Henry; Fries, Pascal

    2016-01-01

    Primate visual cortex is hierarchically organized. Bottom-up and top-down influences are exerted through distinct frequency channels, as was recently revealed in macaques by correlating inter-areal influences with laminar anatomical projection patterns. Because this anatomical data cannot be obtained in human subjects, we selected seven homologous macaque and human visual areas, and correlated the macaque laminar projection patterns to human inter-areal directed influences as measured with magnetoencephalography. We show that influences along feedforward projections predominate in the gamma band, whereas influences along feedback projections predominate in the alpha-beta band. Rhythmic inter-areal influences constrain a functional hierarchy of the seven homologous human visual areas that is in close agreement with the respective macaque anatomical hierarchy. Rhythmic influences allow an extension of the hierarchy to 26 human visual areas including uniquely human brain areas. Hierarchical levels of ventral and dorsal stream visual areas are differentially affected by inter-areal influences in the alpha-beta band. PMID:26777277

  10. Virgil Gus Grissom's Visit to LaRC

    NASA Image and Video Library

    1963-02-22

    Astronaut Virgil "Gus" Grissom at the controls of the Visual Docking Simulator. From A.W. Vogeley, "Piloted Space-Flight Simulation at Langley Research Center," Paper presented at the American Society of Mechanical Engineers 1966 Winter Meeting, New York, NY, November 27-December 1, 1966. "This facility was [later known as the Visual-Optical Simulator.] It presents to the pilot an out-the-window view of his target in correct 6 degrees of freedom motion. The scene is obtained by a television camera pick-up viewing a small-scale gimbaled model of the target." "For docking studies, the docking target picture was projected onto the surface of a 20-foot-diameter sphere and the pilot could, effectively, maneuver into contact. This facility was used in a comparison study with the Rendezvous Docking Simulator - one of the few comparison experiments in which conditions were carefully controlled and a reasonable sample of pilots used. All pilots preferred the more realistic RDS visual scene. The pilots generally liked the RDS angular motion cues although some objected to the false gravity cues that these motions introduced. Training time was shorter on the RDS, but final performance on both simulators was essentially equal." "For station-keeping studies, since close approach is not required, the target was presented to the pilot through a virtual-image system which projects his view to infinity, providing a more realistic effect. In addition to the target, the system also projects a star and horizon background."

  11. The DaVinci Project: Multimedia in Art and Chemistry.

    ERIC Educational Resources Information Center

    Simonson, Michael; Schlosser, Charles

    1998-01-01

    Provides an overview of the DaVinci Project, a collaboration of students, teachers, and researchers in chemistry and art to develop multimedia materials for grades 3-12 visualizing basic concepts in chemistry and visual art. Topics addressed include standards in art and science; the conceptual framework for the project; and project goals,…

  12. Lingual and fusiform gyri in visual processing: a clinico-pathologic study of superior altitudinal hemianopia.

    PubMed Central

    Bogousslavsky, J; Miklossy, J; Deruaz, J P; Assal, G; Regli, F

    1987-01-01

    A macular-sparing superior altitudinal hemianopia with no visuo-psychic disturbance, except impaired visual learning, was associated with bilateral ischaemic necrosis of the lingual gyrus and only partial involvement of the fusiform gyrus on the left side. It is suggested that bilateral destruction of the lingual gyrus alone is not sufficient to affect complex visual processing. The fusiform gyrus probably has a critical role in colour integration, visuo-spatial processing, facial recognition and corresponding visual imagery. Involvement of the occipitotemporal projection system deep to the lingual gyri probably explained visual memory dysfunction, by a visuo-limbic disconnection. Impaired verbal memory may have been due to posterior involvement of the parahippocampal gyrus and underlying white matter, which may have disconnected the intact speech areas from the left medial temporal structures. PMID:3585386

  13. Visual Functions of the Thalamus

    PubMed Central

    Usrey, W. Martin; Alitto, Henry J.

    2017-01-01

    The thalamus is the heavily interconnected partner of the neocortex. All areas of the neocortex receive afferent input from and send efferent projections to specific thalamic nuclei. Through these connections, the thalamus serves to provide the cortex with sensory input, and to facilitate interareal cortical communication and motor and cognitive functions. In the visual system, the lateral geniculate nucleus (LGN) of the dorsal thalamus is the gateway through which visual information reaches the cerebral cortex. Visual processing in the LGN includes spatial and temporal influences on visual signals that serve to adjust response gain, transform the temporal structure of retinal activity patterns, and increase the signal-to-noise ratio of the retinal signal while preserving its basic content. This review examines recent advances in our understanding of LGN function and circuit organization and places these findings in a historical context. PMID:28217740

  14. Using Globe Browsing Systems in Planetariums to Take Audiences to Other Worlds.

    NASA Astrophysics Data System (ADS)

    Emmart, C. B.

    2014-12-01

    For the last decade planetariums have been adding the capability of "full dome video" systems for both movie playback and interactive display. True scientific data visualization has now come to planetarium audiences as a means to display the actual three-dimensional layout of the universe; the time-based array of planets, minor bodies, and spacecraft across the solar system; and now globe browsing systems to examine planetary bodies to the limits of the resolutions acquired. Additionally, such planetarium facilities can be networked for simultaneous display across the world, widening audience reach and providing access to authoritative description and commentary by scientists. Data repositories such as NASA's Lunar Mapping and Modeling Project (LMMP), NASA GSFC's LANCE-MODIS, and others conforming to the Open Geospatial Consortium (OGC) standard Web Map Server (WMS) protocols make geospatial data available to a growing number of dome-supporting globe visualization systems. The immersive surround graphics of full dome video replicate our visual system, creating authentic virtual scenes that effectively place audiences on location, in some cases on other worlds mapped only robotically.

  15. Updates of Land Surface and Air Quality Products in NASA MAIRS and NEESPI Data Portals

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Leptoukh, Gregory G.; Gerasimov, Irina

    2010-01-01

    Following successful support of the Northern Eurasia Earth Sciences Partner Initiative (NEESPI) project with NASA satellite remote sensing data, since Spring 2009 the NASA GES DISC (Goddard Earth Sciences Data and Information Services Center) has been working on collecting more satellite and model data to support the Monsoon Asia Integrated Regional Study (MAIRS) project. The established data management and service infrastructure developed for NEESPI has been used and improved for MAIRS support. Data search, subsetting, and download functions are available through a single system. A customized Giovanni system has been created for MAIRS. The Web-based online data analysis and visualization system, Giovanni (Goddard Interactive Online Visualization ANd aNalysis Infrastructure), allows scientists to explore, quickly analyze, and download data easily without learning the original data structure and format. Giovanni MAIRS includes satellite observations from multiple sensors and model output from the NASA Global Land Data Assimilation System (GLDAS) and from the NASA atmospheric reanalysis project, MERRA. Currently, we are working on processing and integrating higher resolution land data into Giovanni, such as vegetation index, land surface temperature, and active fire at 5 km or 1 km from the standard MODIS products. For data that are not archived at the GES DISC, a product metadata portal is under development to serve as a gateway providing product-level information and data access links, covering satellite products, model products, and ground-based measurement information collected from MAIRS scientists. Due to the large overlap in geographic coverage and the many similar scientific interests of NEESPI and MAIRS, these data and tools will serve both projects.

  16. Climate Data Service in the FP7 EarthServer Project

    NASA Astrophysics Data System (ADS)

    Mantovani, Simone; Natali, Stefano; Barboni, Damiano; Grazia Veratelli, Maria

    2013-04-01

    EarthServer is a European Framework Programme project that aims at developing and demonstrating the usability of open standards (OGC and W3C) in the management of multi-source, any-size, multi-dimensional spatio-temporal data - in short: "Big Earth Data Analytics". In order to demonstrate the feasibility of the approach, six thematic Lighthouse Applications (Cryospheric Science, Airborne Science, Atmospheric/Climate Science, Geology, Oceanography, and Planetary Science), each with 100+ TB, are implemented. The scope of the Atmospheric/Climate lighthouse application (Climate Data Service) is to implement a system containing global to regional 2D/3D/4D datasets retrieved from satellite observations, numerical modelling, and in-situ observations. Data contained in the Climate Data Service comprise atmospheric profiles of temperature/humidity, aerosol content, AOT, and cloud properties provided by entities such as the European Centre for Medium-Range Weather Forecasts (ECMWF), the Austrian meteorological service (Zentralanstalt für Meteorologie und Geodynamik - ZAMG), the Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), and the Swedish Meteorological and Hydrological Institute (Sveriges Meteorologiska och Hydrologiska Institut - SMHI). Through an easy-to-use web application, the system permits users to browse the loaded data, visualize their temporal evolution at a specific point with 2D graphs of a single field, compare different fields at the same point (e.g. temperatures from different models and satellite observations), and visualize maps of specific fields superimposed on high-resolution background maps. All data access and display operations are performed by means of the OGC standard operations WMS, WCS and WCPS.
    The EarthServer project has just started its second year of a three-year development plan: at present the system contains subsets of the final database, with the scope of demonstrating I/O modules and visualization tools. At the end of the project all datasets will be available to the users.
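
    As a concrete illustration of the OGC access pattern used by services like this, the sketch below assembles a WMS 1.3.0 GetMap request. The endpoint URL and layer name are placeholders, not actual EarthServer services; no network call is made.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, time, size=(800, 400)):
    """Build an OGC WMS 1.3.0 GetMap request for a temporal layer.

    bbox is (min_lat, min_lon, max_lat, max_lon) per the EPSG:4326
    axis order mandated by WMS 1.3.0.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),
        "TIME": time,                 # temporal dimension of the layer
        "WIDTH": size[0],
        "HEIGHT": size[1],
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical global temperature layer for one date.
url = wms_getmap_url("https://example.org/wms", "temperature_2m",
                     (-90, -180, 90, 180), "2012-07-01")
```

    A WCS GetCoverage or WCPS query would follow the same key-value pattern, returning the underlying data array rather than a rendered map.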

  17. Predictive control of a chaotic permanent magnet synchronous generator in a wind turbine system

    NASA Astrophysics Data System (ADS)

    Manal, Messadi; Adel, Mellit; Karim, Kemih; Malek, Ghanes

    2015-01-01

    This paper investigates how to address the chaos problem in a permanent magnet synchronous generator (PMSG) in a wind turbine system. A predictive control approach is proposed to suppress the chaotic behavior and stabilize operation; the advantage of this method is that it need be applied to only one state of the wind turbine system. The use of genetic algorithms to estimate the optimal parameter values of the wind turbine leads to maximization of the power generation. Moreover, simulation results are included to illustrate the effectiveness and robustness of the proposed method. Project supported by the CMEP-TASSILI Project (Grant No. 14MDU920).

  18. Project SCS (Special Communication Services).

    ERIC Educational Resources Information Center

    Curtis, John A.

    This extensive report describes and provides documentation on Special Communications Services for the Sensory Impaired (SCS), a Virginia-based telecommunications delivery system developed by the Center for Excellence, Inc. (CenTex), to provide information and entertainment broadcasting services to the visually handicapped, the hearing impaired,…

  19. Integrated Modeling Environment

    NASA Technical Reports Server (NTRS)

    Mosier, Gary; Stone, Paul; Holtery, Christopher

    2006-01-01

    The Integrated Modeling Environment (IME) is a software system that establishes a centralized Web-based interface for integrating people (who may be geographically dispersed), processes, and data involved in a common engineering project. The IME includes software tools for life-cycle management, configuration management, visualization, and collaboration.

  20. Visualization for Hyper-Heuristics. Front-End Graphical User Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroenung, Lauren

    Modern society is faced with ever more complex problems, many of which can be formulated as generate-and-test optimization problems. General-purpose optimization algorithms are not well suited for real-world scenarios where many instances of the same problem class need to be repeatedly and efficiently solved, because they are not targeted to a particular scenario. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario. While such automated design has great advantages, it can often be difficult to understand exactly how a design was derived and why it should be trusted. This project aims to address these issues of usability by creating an easy-to-use graphical user interface (GUI) for hyper-heuristics to support practitioners, as well as scientific visualization of the produced automated designs. My contributions to this project are exhibited in the user-facing portion of the developed system and the detailed scientific visualizations created from back-end data.

  1. Cognitive measure on different profiles.

    PubMed

    Spindola, Marilda; Carra, Giovani; Balbinot, Alexandre; Zaro, Milton A

    2010-01-01

    Based on neurology and cognitive science, many studies have been developed to understand the human mental model, that is, how human cognition works, especially in learning processes that involve complex content and spatial-logical reasoning. The event-related potential (ERP) is a basic, non-invasive method of electrophysiological investigation. It can be used to assess aspects of human cognitive processing: changes in the rhythm of the brain's frequency bands indicate some type of processing or neuronal behavior. This paper focuses on the ERP technique to help understand the cognitive pathways of subjects from different areas of knowledge when they are exposed to an external visual stimulus. In the experiment we used 2D and 3D visual stimuli in the same picture. The signals were captured using a 10-channel electroencephalogram (EEG) system developed for this project and interfaced through an analog-to-digital converter (ADC) board with the LabVIEW system (National Instruments). The research was performed using the design of experiments (DOE) technique. Signal processing (mathematical and statistical techniques) was then carried out, showing the relationship between cognitive pathways within and between groups.
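
    The core of the ERP technique described above - averaging stimulus-locked EEG epochs so that ongoing background rhythms cancel while the evoked response remains - can be sketched as follows. The sampling rate, event timing, and simulated evoked waveform are illustrative assumptions, not parameters from this study.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250                         # sampling rate in Hz (assumed)
n_samples = fs * 60              # one minute of single-channel EEG
eeg = rng.normal(0, 10, n_samples)              # background activity (uV)
events = np.arange(fs, n_samples - fs, 2 * fs)  # stimulus onsets

# Inject a small simulated evoked response (a bump peaking ~300 ms
# after each stimulus) so there is an ERP to recover.
t = np.arange(int(0.6 * fs)) / fs
evoked = 5 * np.exp(-((t - 0.3) ** 2) / 0.005)
for onset in events:
    eeg[onset:onset + evoked.size] += evoked

def erp_average(signal, onsets, pre=0.1, post=0.6, fs=250):
    """Average stimulus-locked epochs; noise cancels, the ERP remains."""
    pre_s, post_s = int(pre * fs), int(post * fs)
    epochs = np.stack([signal[o - pre_s:o + post_s] for o in onsets])
    # Baseline-correct each epoch on its pre-stimulus window.
    epochs -= epochs[:, :pre_s].mean(axis=1, keepdims=True)
    return epochs.mean(axis=0)

erp = erp_average(eeg, events)   # evoked bump emerges from the noise
```

    With 30 or so trials the averaged noise shrinks by roughly the square root of the trial count, which is why the small evoked response becomes visible in the mean even though it is buried in any single epoch.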

  2. Ergonomic approaches to designing educational materials for immersive multi-projection system

    NASA Astrophysics Data System (ADS)

    Shibata, Takashi; Lee, JaeLin; Inoue, Tetsuri

    2014-02-01

    Rapid advances in computer and display technologies have made it possible to present high-quality virtual reality (VR) environments. To use such virtual environments effectively, research should be performed into how users perceive and react to virtual environments in view of particular human factors. We created a VR simulation of sea fish for science education, and we conducted an experiment to examine how observers perceive the size and depth of an object within their reach and evaluated their visual fatigue. We chose a multi-projection system for presenting the educational VR simulation because this system can provide actual-size objects and produce stereo images located close to the observer. The results of the experiment show that estimation of size and depth was relatively accurate when subjects used physical actions to assess them. Presenting images within the observer's reach is thus suggested to be useful for education in VR environments. The evaluation of visual fatigue shows that the level of symptoms from viewing stereo images with a large disparity in the VR environment was low for short viewing times.

  3. Activity-dependent disruption of intersublaminar spaces and ABAKAN expression does not impact functional on and off organization in the ferret retinogeniculate system

    PubMed Central

    2011-01-01

    In the adult visual system, functionally distinct retinal ganglion cells (RGCs) within each eye project to discrete targets in the brain. In the ferret, RGCs encoding light increments or decrements project to independent On and Off sublaminae within each eye-specific layer of the dorsal lateral geniculate nucleus (dLGN). Here we report a manipulation of retinal circuitry that alters RGC action potential firing patterns during development and eliminates the anatomical markers of segregated On and Off sublaminae in the LGN, including the intersublaminar spaces and the expression of a glial-associated inhibitory molecule, ABAKAN, normally separating On and Off leaflets. Despite the absence of anatomically defined On and Off sublaminae, electrophysiological recordings in the dLGN reveal that On and Off dLGN cells are segregated normally. These data demonstrate a dissociation between normal anatomical sublamination and segregation of function in the dLGN and call into question a purported role for ABAKAN boundaries in the developing visual system. PMID:21401945

  4. PLANETarium Pilot: visualizing PLANET Earth inside-out on the planetarium's full-dome

    NASA Astrophysics Data System (ADS)

    Ballmer, Maxim; Wiethoff, Tobias

    2016-04-01

    In the past decade, projection systems in most planetariums, traditional sites of outreach and education, have advanced from interfaces that can display the motion of stars as moving beam spots to systems that are able to visualize multicolor, high-resolution, immersive full-dome videos or images. These extraordinary capabilities are ideally suited for visualization of global processes occurring on the surface and within the interior of the Earth, a spherical body just like the full dome. So far, however, our community has largely ignored this wonderful interface for outreach and education, and any previous geo-shows have mostly been limited to cartoon-style animations. Thus, we here propose a framework to convey recent scientific results on the origin and evolution of our PLANET to the >100 million per-year worldwide audience of planetariums, making the traditionally astronomy-focussed interface a true PLANETarium. In order to do this most efficiently, we intend to show "inside-out" visualizations of scientific datasets and models, as if the audience was positioned in the Earth's core. Such visualizations are expected to be renderable to the dome with little or no effort. For example, showing global geophysical datasets (e.g., gravity, air temperature), or horizontal slices of seismic-tomography images and spherical computer models requires no rendering at all. Rendering of 3D Cartesian datasets or models may further be achieved using standard techniques. Here, we show several example pilot animations. These animations rendered for the full dome are projected back to 2D for visualization on the flatscreen. Present-day science visualizations are typically as intuitive as cartoon-style animations, yet more appealing visually, and clearly with a higher level of detail. In addition to e.g. climate change and natural hazards, themes for any future geo-shows may include the coupled evolution of the Earth's interior and life, from the accretion of our planet to the evolution of mantle convection as well as the sustainment of a magnetic field and habitable conditions. We believe that high-quality tax-funded science visualizations should not exclusively be used for communication among scientists, but also recycled to raise the public's awareness and appreciation of the Geosciences.

  5. PLANETarium Pilot: visualizing PLANET Earth inside-out on the planetarium's full-dome

    NASA Astrophysics Data System (ADS)

    Ballmer, M. D.; Wiethoff, T.

    2014-12-01

    In the past decade, projection systems in most planetariums, traditional sites of outreach and education, have advanced from interfaces that display the motion of stars as moving beam spots to systems that can render multicolor, high-resolution, immersive full-dome videos and images. These extraordinary capabilities are ideally suited to visualizing global processes occurring on the surface and within the interior of the Earth, a spherical body much like the dome itself. So far, however, our community has largely ignored this wonderful interface for outreach and education, and previous geo-shows have mostly been limited to cartoon-style animations. We therefore propose a framework to convey recent scientific results on the origin and evolution of our PLANET to the >100 million per-year worldwide audience of planetariums, making the traditionally astronomy-focused interface a true PLANETarium. To do this most efficiently, we intend to show "inside-out" visualizations of scientific datasets and models, as if the audience were positioned in the Earth's inner core. Such visualizations can be rendered to the dome with little or no effort: showing global geophysical datasets (e.g., gravity, air temperature), horizontal slices of seismic-tomography images, or spherical computer models requires no rendering at all, and 3D Cartesian datasets or models can be rendered using standard techniques. Here, we show several example pilot animations. These animations, rendered for the full dome, are projected back to 2D for visualization on a flatscreen. Present-day science visualizations are typically as intuitive as cartoon-style animations, yet more visually appealing and clearly more detailed. In addition to e.g. 
climate change and natural hazards, themes for future geo-shows may include the coupled evolution of the Earth's interior and life, from the accretion of our planet to the evolution of mantle convection as well as the sustainment of a magnetic field and habitable conditions. We believe that high-quality tax-funded science visualizations should not exclusively be used for communication among scientists, but should also be recycled to raise the public's awareness and appreciation of the geosciences.
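    The abstract's claim that spherical datasets need "little or no rendering" for a dome can be made concrete. Below is a minimal sketch (not the authors' code) of the common equidistant-fisheye ("domemaster") mapping: each dome pixel in the unit disk is assigned a latitude/longitude at which a global dataset could be sampled. The function name and the linear fisheye model are illustrative assumptions.

```python
import numpy as np

def dome_to_latlon(u, v, fov_deg=180.0):
    """Map dome ('domemaster') coordinates in the unit disk to lat/lon.

    u, v: arrays in [-1, 1]; the dome zenith is at (0, 0).
    Returns (lat, lon) in degrees; points outside the disk become NaN.
    """
    r = np.hypot(u, v)                       # radial distance from dome centre
    zenith = r * np.radians(fov_deg) / 2.0   # linear ('equidistant') fisheye
    azimuth = np.arctan2(v, u)
    lat = 90.0 - np.degrees(zenith)          # zenith maps to the north pole
    lon = np.degrees(azimuth)
    lat = np.where(r <= 1.0, lat, np.nan)
    lon = np.where(r <= 1.0, lon, np.nan)
    return lat, lon
```

    Sampling a gridded global field (e.g. gravity or air temperature) at the returned coordinates yields the dome frame directly, which is why no 3D rendering is required for such datasets.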

  6. On the Uses of Full-Scale Schlieren Flow Visualization

    NASA Astrophysics Data System (ADS)

    Settles, G. S.; Miller, J. D.; Dodson-Dreibelbis, L. J.

    2000-11-01

    A lens-and-grid-type schlieren system using a very large grid as a light source was described at earlier APS/DFD meetings. With a field of view of 2.3 × 2.9 m (7.5 × 9.5 feet), it is the largest indoor schlieren system in the world. Still and video examples of several full-scale airflows and heat-transfer problems visualized thus far will be shown. These include: heating and ventilation airflows, flows due to appliances and equipment, the thermal plumes of people, the aerodynamics of an explosive trace detection portal, gas leak detection, shock wave motion associated with aviation security problems, and heat transfer from live crops. Planned future projects include visualizing fume-hood and grocery display freezer airflows and studying the dispersion of insect repellent plumes at full scale.

  7. Alerts Analysis and Visualization in Network-based Intrusion Detection Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Dr. Li

    2010-08-01

    The alerts produced by network-based intrusion detection systems, e.g. Snort, can be difficult for network administrators to efficiently review and respond to due to the enormous number of alerts generated in a short time frame. This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds, in which birds tend to follow other birds with similar behaviors. This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker's behaviors.
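    The flocking idea behind the second component can be sketched as a boids-style cohesion rule: each alert drifts toward nearby alerts with similar features, so clusters form visibly over repeated iterations. This is a minimal illustrative sketch, not the project's implementation; the feature encoding, thresholds, and step size are all assumptions.

```python
import numpy as np

def flock_step(pos, feats, radius=1.0, sim_thresh=0.5, step=0.1):
    """One flocking iteration: each alert moves toward the centroid of
    nearby alerts whose feature vectors are similar (cosine similarity),
    so that similar alerts gradually form visual clusters.

    pos:   (n, 2) alert positions on the display
    feats: (n, d) alert feature vectors (e.g. source IP, port, signature)
    """
    unit = feats / np.maximum(np.linalg.norm(feats, axis=1, keepdims=True), 1e-12)
    sim = unit @ unit.T                                     # pairwise cosine similarity
    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    mask = (dist < radius) & (sim > sim_thresh)             # 'birds' worth following
    np.fill_diagonal(mask, False)
    new_pos = pos.copy()
    for i in range(len(pos)):
        if mask[i].any():
            centroid = pos[mask[i]].mean(axis=0)            # cohesion toward the flock
            new_pos[i] += step * (centroid - pos[i])
    return new_pos
```

    Iterating this step animates the clustering process the abstract describes, letting an administrator watch similar alerts coalesce while dissimilar ones stay apart.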

  8. An integrated ball projection technology for the study of dynamic interceptive actions.

    PubMed

    Stone, J A; Panchuk, D; Davids, K; North, J S; Fairweather, I; Maynard, I W

    2014-12-01

    Dynamic interceptive actions, such as catching or hitting a ball, are important task vehicles for investigating the complex relationship between cognition, perception, and action in performance environments. Representative experimental designs have become more important recently, highlighting the need for research methods to ensure that the coupling of information and movement is faithfully maintained. However, retaining representative design while ensuring systematic control of experimental variables is challenging, due to the traditional tendency to employ methods that typically involve reductionist motor responses such as button-pressing or micromovements. Here, we outline the methodology behind a custom-built, integrated ball projection technology that allows images of advanced visual information to be synchronized with ball projection. This integrated technology supports the controlled presentation of visual information to participants while they perform dynamic interceptive actions. We discuss theoretical ideas behind the integration of hardware and software, along with practical issues resolved in technological design, and emphasize how the system can be integrated with emerging developments such as mixed reality environments. We conclude by considering future developments and applications of the integrated projection technology for research in human movement behaviors.

  9. Concept, design and analysis of a large format autostereoscopic display system

    NASA Astrophysics Data System (ADS)

    Knocke, F.; de Jongh, R.; Frömel, M.

    2005-09-01

    Autostereoscopic display devices with large visual field are of importance in a number of applications such as computer aided design projects, technical education, and military command systems. Typical requirements for such systems are, aside from the large visual field, a large viewing zone, a high level of image brightness, and an extended depth of field. Additional appliances such as specialized eyeglasses or head-trackers are disadvantageous for the aforementioned applications. We report on the design and prototyping of an autostereoscopic display system on the basis of projection-type one-step unidirectional holography. The prototype consists of a hologram holder, an illumination unit, and a special direction-selective screen. Reconstruction light is provided by a 2W frequency-doubled Nd:YVO4 laser. The production of stereoscopic hologram stripes on photopolymer is carried out on a special origination setup. The prototype has a screen size of 180cm × 90cm and provides a visual field of 29° when viewed from 3.6 meters. Due to the coherent reconstruction, a depth of field of several meters is achievable. Up to 18 hologram stripes can be arranged on the holder to permit a rapid switch between a series of motifs or views. Both computer generated image sequences and digital camera photos may serve as input frames. However, a comprehensive pre-distortion must be performed in order to account for optical distortion and several other geometrical factors. The corresponding computations are briefly summarized below. The performance of the system is analyzed, aspects of beam-shaping and mechanical design are discussed and photographs of early reconstructions are presented.

  10. Geoinformation web-system for processing and visualization of large archives of geo-referenced data

    NASA Astrophysics Data System (ADS)

    Gordov, E. P.; Okladnikov, I. G.; Titov, A. G.; Shulgina, T. M.

    2010-12-01

    A working model of an information-computational system aimed at scientific research in the area of climate change is presented. The system allows processing and analysis of large archives of geophysical data obtained both from observations and from modeling. Accumulated experience in developing information-computational web-systems providing computational processing and visualization of large archives of geo-referenced data was used during the implementation (Gordov et al, 2007; Okladnikov et al, 2008; Titov et al, 2009). The functional capabilities of the system comprise a set of procedures for mathematical and statistical analysis, processing and visualization of data. At present five archives of data are available for processing: the 1st and 2nd editions of the NCEP/NCAR Reanalysis, the ECMWF ERA-40 Reanalysis, the JMA/CRIEPI JRA-25 Reanalysis, and the NOAA-CIRES XX Century Global Reanalysis Version I. To provide data-processing functionality, a computational modular kernel and a class library providing data access for computational modules were developed. Currently a set of computational modules for climate change indices approved by WMO is available. A special module providing visualization of results and export to Encapsulated PostScript, GeoTIFF and ESRI shape files was also developed. As a technological basis for representation of cartographical information on the Internet, the GeoServer software conforming to OpenGIS standards is used. GIS functionality is integrated with web-portal software to provide a basis for the web-portal's development as part of the geoinformation web-system. Such a geoinformation web-system is the next step in the development of applied information-telecommunication systems, offering specialists from various scientific fields unique opportunities to perform reliable analysis of heterogeneous geophysical data using approved computational algorithms. 
It will allow a wide range of researchers to work with geophysical data without specific programming knowledge and to concentrate on solving their specific tasks. The system would be of special importance for education in climate change domain. This work is partially supported by RFBR grant #10-07-00547, SB RAS Basic Program Projects 4.31.1.5 and 4.31.2.7, SB RAS Integration Projects 4 and 9.
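    The "WMO-approved climate change indices" the system computes are typically the ETCCDI set, each a simple per-cell statistic over a daily time series. As an illustration (a sketch, not the system's code), here is one such index, FD (frost days: the count of days with daily minimum temperature below 0 °C), computed over a gridded reanalysis-style array; the missing-data threshold is an assumption.

```python
import numpy as np

def frost_days(tmin, missing_ok=0.1):
    """ETCCDI/WMO 'FD' climate index: per grid cell, the number of days
    with daily minimum temperature below 0 degC.

    tmin: (days, lat, lon) array in degC; NaN marks missing observations.
    Cells missing more than `missing_ok` of their days are set to NaN.
    """
    cold = np.nan_to_num(tmin, nan=1.0) < 0.0   # a NaN day never counts as frost
    fd = cold.sum(axis=0).astype(float)
    missing = np.isnan(tmin).mean(axis=0)
    fd[missing > missing_ok] = np.nan           # too much missing data: no index
    return fd
```

    Other indices in the set (e.g. summer days, tropical nights) follow the same pattern with a different threshold and variable, which is why a modular kernel with uniform data access suits this workload.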

  11. Natural 3D content on glasses-free light-field 3D cinema

    NASA Astrophysics Data System (ADS)

    Balogh, Tibor; Nagy, Zsolt; Kovács, Péter Tamás; Adhikarla, Vamsi K.

    2013-03-01

    This paper presents a complete framework for capturing, processing and displaying free viewpoint video on a large-scale immersive light-field display. We present a combined hardware-software solution to visualize free viewpoint 3D video on a cinema-sized screen. The new glasses-free 3D projection technology can support a larger audience than existing autostereoscopic displays. We introduce and describe our new display system, including optical and mechanical design considerations, the capturing system and render cluster for producing the 3D content, and the various software modules driving the system. The indigenous display is the first of its kind, equipped with front-projection light-field HoloVizio technology controlling up to 63 MP. It has all the advantages of previous light-field displays and, in addition, allows a more flexible arrangement with a larger screen size, matching cinema or meeting-room geometries, yet is simpler to set up. The software system makes it possible to show 3D applications in real time, besides natural content captured from dense camera arrangements as well as from sparse cameras covering a wider baseline. Our software system, running on a GPU-accelerated render cluster, can also visualize pre-recorded Multi-view Video plus Depth (MVD4) videos on this glasses-free light-field cinema system, interpolating and extrapolating missing views.

  12. Interactive projection for aerial dance using depth sensing camera

    NASA Astrophysics Data System (ADS)

    Dubnov, Tammuz; Seldess, Zachary; Dubnov, Shlomo

    2014-02-01

    This paper describes an interactive performance system for Floor and Aerial Dance that controls visual and sonic aspects of the presentation via a depth-sensing camera (MS Kinect). In order to detect, measure and track free movement in space, 3-degree-of-freedom (3-DOF) tracking in space (on the ground and in the air) is performed using IR markers. Gesture tracking and recognition is performed using a simplified HMM model that allows robust mapping of the actor's actions to graphics and sound. Additional visual effects are achieved by segmentation of the actor's body based on depth information, allowing projection of separate imagery on the performer and the backdrop. Artistic use of augmented-reality performance relative to more traditional concepts of stage design and dramaturgy is discussed.
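    Recognizing gestures with an HMM, as the abstract describes, usually means decoding the most likely hidden-state (gesture-phase) sequence from quantized marker observations. A minimal log-space Viterbi decoder is sketched below; it is a generic textbook implementation, not the authors' simplified model, and the discrete-observation setup is an assumption.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely hidden-state path for a discrete-observation HMM.

    obs:    sequence of observation indices (e.g. quantized marker motions)
    log_pi: (S,) log initial-state probabilities
    log_A:  (S, S) log transition matrix
    log_B:  (S, O) log emission matrix
    """
    S, T = len(log_pi), len(obs)
    delta = np.full((T, S), -np.inf)          # best log-prob ending in each state
    back = np.zeros((T, S), dtype=int)        # best predecessor pointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A        # (prev, next) transition scores
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):             # trace the pointers backwards
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

    Running this per candidate gesture model and comparing terminal log-probabilities gives a robust mapping from marker motion to a recognized gesture, which can then trigger graphics and sound.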

  13. Louisiana: a model for advancing regional e-Research through cyberinfrastructure

    PubMed Central

    Katz, Daniel S.; Allen, Gabrielle; Cortez, Ricardo; Cruz-Neira, Carolina; Gottumukkala, Raju; Greenwood, Zeno D.; Guice, Les; Jha, Shantenu; Kolluru, Ramesh; Kosar, Tevfik; Leger, Lonnie; Liu, Honggao; McMahon, Charlie; Nabrzyski, Jarek; Rodriguez-Milla, Bety; Seidel, Ed; Speyrer, Greg; Stubblefield, Michael; Voss, Brian; Whittenburg, Scott

    2009-01-01

    Louisiana researchers and universities are leading a concentrated, collaborative effort to advance statewide e-Research through a new cyberinfrastructure: computing systems, data storage systems, advanced instruments and data repositories, visualization environments and people, all linked together by software programs and high-performance networks. This effort has led to a set of interlinked projects that have started making a significant difference in the state, and has created an environment that encourages increased collaboration, leading to new e-Research. This paper describes the overall effort, the new projects and environment and the results to date. PMID:19451102

  14. An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training

    NASA Astrophysics Data System (ADS)

    Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel

    2017-03-01

    Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. 
In this study we describe the design, implementation, and evaluation of an interactive and stereoscopic visualization platform for exploring and understanding human anatomy. This system can present medical imaging data in three dimensions and allows for direct physical interaction and manipulation by the viewer. This should provide numerous benefits over traditional, 2D display and interaction modalities, and in our analysis, we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.

  15. The Grassmannian Atlas: A General Framework for Exploring Linear Projections of High-Dimensional Data

    DOE PAGES

    Liu, S.; Bremer, P. -T; Jayaraman, J. J.; ...

    2016-06-04

    Linear projections are one of the most common approaches to visualize high-dimensional data. Since the space of possible projections is large, existing systems usually select a small set of interesting projections by ranking a large set of candidate projections based on a chosen quality measure. However, while highly ranked projections can be informative, some lower-ranked ones could offer important complementary information. Therefore, selection based on ranking may miss projections that are important to provide a global picture of the data. Here, the proposed work fills this gap by presenting the Grassmannian Atlas, a framework that captures the global structures of quality measures in the space of all projections, which enables a systematic exploration of many complementary projections and provides new insights into the properties of existing quality measures.
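    The ranking-based selection that the Grassmannian Atlas improves upon can be sketched directly: draw candidate 2D orthonormal frames (points on the Grassmannian), score each with a quality measure, and sort. The retained-variance measure below is an illustrative stand-in; real systems use measures such as scagnostics or class separation, and all names here are assumptions.

```python
import numpy as np

def rank_projections(X, n_candidates=50, seed=0):
    """Score random orthonormal 2D linear projections of the data X with a
    simple quality measure (here: fraction of variance retained) and return
    the candidates sorted from best to worst.

    X: (n_samples, n_dims) data matrix.
    Returns a list of (score, projection_matrix) pairs.
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    total_var = Xc.var(axis=0).sum()
    scored = []
    for _ in range(n_candidates):
        # random orthonormal 2D frame: a point on the Grassmannian Gr(2, n_dims)
        Q, _ = np.linalg.qr(rng.standard_normal((X.shape[1], 2)))
        Y = Xc @ Q
        scored.append((Y.var(axis=0).sum() / total_var, Q))
    scored.sort(key=lambda p: p[0], reverse=True)
    return scored
```

    Keeping only the top of this list is exactly the selection strategy whose blind spots the Atlas addresses: two projections with similar scores can show very different structure.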

  16. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    Nielson, Gregory M.

    1997-01-01

    This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report first on our work on the development of numerical methods for tangent curve computation.
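    Computing a tangent curve of a flow field amounts to numerically integrating the field from a seed point. The sketch below uses classical fourth-order Runge-Kutta as a generic illustration; the report's own explicit methods differ in detail, and the function names here are assumptions.

```python
import numpy as np

def tangent_curve(v, seed, h=0.01, n_steps=1000):
    """Integrate a tangent curve (streamline) of the vector field v from
    `seed` using classical fourth-order Runge-Kutta.

    v: callable mapping a position array to a velocity array.
    Returns the (n_steps + 1, dim) array of points along the curve.
    """
    pts = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = pts[-1]
        k1 = v(p)                       # four slope samples per step
        k2 = v(p + 0.5 * h * k1)
        k3 = v(p + 0.5 * h * k2)
        k4 = v(p + h * k3)
        pts.append(p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)
```

    On a rotational field such as v(x, y) = (-y, x) the curve should stay on a circle, which makes radius drift a convenient accuracy check for any tangent-curve integrator.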

  17. System to provide 3D information on geological anomaly zone in deep subsea

    NASA Astrophysics Data System (ADS)

    Kim, W.; Kwon, O.; Kim, D.

    2017-12-01

    A study on building an ultra-long, deep subsea tunnel, at least 50 km in length and 200 m in depth, is underway in Korea. To analyze the geotechnical information required for designing and building a subsea tunnel, topographic/geological information from 2D seabed geophysical prospecting, together with topographic, geologic, exploration and boring data, was analyzed comprehensively; as a result, an automation method using geostatistical analysis was developed to identify the geological structure zones beneath the seabed that must be considered when designing a deep, long seabed tunnel. Existing software for providing 3D visualized ground information includes Gocad, MVS, Vulcan and DIMINE. This study is intended to analyze the geological anomaly zone of the ultra-deep seabed and visualize the geological investigation results, so as to develop a dedicated system for processing ground investigation information that is convenient for its users. In particular, the system is compatible with files of geophysical prospecting results and can present them in layer form as well as in a 3D view. The data to be processed by the 3D seabed information system include (1) deep-seabed topographic information, (2) the geological anomaly zone, (3) geophysical prospecting, (4) boring investigation results and (5) 3D visualization of sections along the seabed tunnel route. Each dataset has its own characteristics, and an interface is provided to allow it to interlock with the other data. In each detail function, input data are displayed in a single space, and each element is selectable to show further information within a project. The program creates the project when first run, and all output from the detail information is stored by project unit. Each element representing detail information is stored as an image file, with storage as a text file supported as well. The system also provides functions to move, expand/reduce and rotate the model. 
To represent all elements in the 3D visualized platform, coordinate and time information are added to the data or data group to establish the conceptual model as a whole. This research was supported by the Korea Agency for Infrastructure Technology Advancement under the Ministry of Land, Infrastructure and Transport of the Korean government (Project Number: 13 Construction Research T01).

  18. Certification for Teachers of the Visually Impaired: A Rural Teacher Training Project.

    ERIC Educational Resources Information Center

    Tweto-Johnson, Linda

    The goal of a 2-year vision teacher training project is to provide the coursework instruction and student teaching opportunities necessary for Oregon certification as teacher of the visually impaired. The program was designed in response to several conditions affecting services for visually impaired students living in seven eastern Oregon…

  19. Origins of thalamic and cortical projections to the posterior auditory field in congenitally deaf cats.

    PubMed

    Butler, Blake E; Chabot, Nicole; Kral, Andrej; Lomber, Stephen G

    2017-01-01

    Crossmodal plasticity takes place following sensory loss, such that areas that normally process the missing modality are reorganized to provide compensatory function in the remaining sensory systems. For example, congenitally deaf cats outperform normal hearing animals on localization of visual stimuli presented in the periphery, and this advantage has been shown to be mediated by the posterior auditory field (PAF). In order to determine the nature of the anatomical differences that underlie this phenomenon, we injected a retrograde tracer into PAF of congenitally deaf animals and quantified the thalamic and cortical projections to this field. The pattern of projections from areas throughout the brain was determined to be qualitatively similar to that previously demonstrated in normal hearing animals, but with twice as many projections arising from non-auditory cortical areas. In addition, small ectopic projections were observed from a number of fields in visual cortex, including areas 19, 20a, 20b, and 21b, and area 7 of parietal cortex. These areas did not show projections to PAF in cats deafened ototoxically near the onset of hearing, and provide a possible mechanism for crossmodal reorganization of PAF. These, along with the possible contributions of other mechanisms, are considered. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Trident: scalable compute archives: workflows, visualization, and analysis

    NASA Astrophysics Data System (ADS)

    Gopu, Arvind; Hayashi, Soichi; Young, Michael D.; Kotulla, Ralf; Henschel, Robert; Harbeck, Daniel

    2016-08-01

    The astronomy scientific community has embraced Big Data processing challenges, e.g. those associated with time-domain astronomy, and has come up with a variety of novel and efficient data processing solutions. However, data processing is only a small part of the Big Data challenge. Efficient knowledge discovery and scientific advancement in the Big Data era require new and equally efficient tools: modern user interfaces for searching, identifying and viewing data online without direct access to the data; tracking of data provenance; searching, plotting and analyzing metadata; interactive visual analysis, especially of (time-dependent) image data; and the ability to execute pipelines on supercomputing and cloud resources with minimal user overhead or expertise, even for novice computing users. The Trident project at Indiana University offers a comprehensive web and cloud-based microservice software suite that enables the straightforward deployment of highly customized Scalable Compute Archive (SCA) systems, including extensive visualization and analysis capabilities, with a minimal amount of additional coding. Trident seamlessly scales up or down in terms of data volumes and computational needs, and allows feature sets within a web user interface to be quickly adapted to meet individual project requirements. Domain experts only have to provide code or business logic about handling/visualizing their domain's data products and about executing their pipelines and application workflows. Trident's microservices architecture is made up of lightweight services connected by a REST API and/or a message bus; web interface elements are built using the NodeJS, AngularJS, and HighCharts JavaScript libraries among others, while backend services are written in NodeJS, PHP/Zend, and Python. 
The software suite currently consists of (1) a simple workflow execution framework to integrate, deploy, and execute pipelines and applications, (2) a progress service to monitor workflows and sub-workflows, (3) ImageX, an interactive image visualization service, (4) an authentication and authorization service, (5) a data service that handles archival, staging and serving of data products, and (6) a notification service that serves the statistical collation and reporting needs of various projects. Several additional components are under development. Trident is an umbrella project that evolved from the One Degree Imager, Portal, Pipeline, and Archive (ODI-PPA) project, which we had initially refactored toward (1) a powerful analysis/visualization portal for Globular Cluster System (GCS) survey data collected by IU researchers, (2) a data search and download portal for the IU Electron Microscopy Center's data (EMC-SCA), and (3) a prototype archive for the Ludwig Maximilian University's Wide Field Imager. The new Trident software has been used to deploy (1) a metadata quality control and analytics portal (RADY-SCA) for DICOM-formatted medical imaging data produced by the IU Radiology Center, (2) several prototype workflows for different domains, (3) a snapshot tool within IU's Karst Desktop environment, and (4) a limited component set to serve GIS data within the IU GIS web portal. Trident SCA systems leverage supercomputing and storage resources at Indiana University but can be configured to make use of any cloud/grid resource, from local workstations/servers to (inter)national supercomputing facilities such as XSEDE.

  1. Synesthetic art through 3-D projection: The requirements of a computer-based supermedium

    NASA Technical Reports Server (NTRS)

    Mallary, Robert

    1989-01-01

    A computer-based form of multimedia art is proposed that uses the computer to fuse aspects of painting, sculpture, dance, music, film, and other media into a one-to-one synesthesia of image and sound for spatially synchronous 3-D projection. Called synesthetic art, this conversion of many varied media into an aesthetically unitary experience determines the character and requirements of the system and its software. During the start-up phase, computer stereographic systems are unsuitable for software development. Eventually, a new type of illusory-projective supermedium will be required to achieve the needed combination of large-format projection and convincing real-life presence, and to handle the vast amount of 3-D visual and acoustic information required. The influence of the concept on the author's research and creative work is illustrated through two examples.

  2. Integrating advanced visualization technology into the planetary Geoscience workflow

    NASA Astrophysics Data System (ADS)

    Huffman, John; Forsberg, Andrew; Loomis, Andrew; Head, James; Dickson, James; Fassett, Caleb

    2011-09-01

    Recent advances in computer visualization have allowed us to develop new tools for analyzing the data gathered during planetary missions, which is important, since these data sets have grown exponentially in recent years to tens of terabytes in size. As part of the Advanced Visualization in Solar System Exploration and Research (ADVISER) project, we utilize several advanced visualization techniques created specifically with planetary image data in mind. The Geoviewer application allows real-time active stereo display of images, which in aggregate have billions of pixels. The ADVISER desktop application platform allows fast three-dimensional visualization of planetary images overlain on digital terrain models. Both applications include tools for easy data ingest and real-time analysis in a programmatic manner. Incorporation of these tools into our everyday scientific workflow has proved important for scientific analysis, discussion, and publication, and enabled effective and exciting educational activities for students from high school through graduate school.

  3. Correction of respiratory motion for IMRT using aperture adaptive technique and visual guidance: A feasibility study

    NASA Astrophysics Data System (ADS)

    Chen, Ho-Hsing; Wu, Jay; Chuang, Keh-Shih; Kuo, Hsiang-Chi

    2007-07-01

    Intensity-modulated radiation therapy (IMRT) utilizes nonuniform beam profiles to deliver precise radiation doses to a tumor while minimizing radiation exposure to surrounding normal tissues. However, intrafraction organ motion distorts the dose distribution and leads to significant dosimetric errors. In this research, we applied an aperture adaptive technique with a visual guiding system to tackle the problem of respiratory motion. A homemade computer program showing a cyclic moving pattern was projected onto the ceiling to visually help patients adjust their respiratory patterns. Once the respiratory motion becomes regular, the leaf sequence can be synchronized with the target motion. An oscillator was employed to simulate the patient's breathing pattern. Two simple fields and one IMRT field were measured to verify the accuracy. Preliminary results showed that after appropriate training, the amplitude and duration of a volunteer's breathing can be well controlled by the visual guiding system. The sharp dose gradient at the edge of the radiation fields was successfully restored. The maximum dosimetric error in the IMRT field was significantly decreased from 63% to 3%. We conclude that the aperture adaptive technique with the visual guiding system can be an inexpensive and feasible alternative without compromising delivery efficiency in clinical practice.

  4. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Wang, A. S.; Otake, Y.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gallia, G. L.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2014-09-01

    An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image + guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image + guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 μGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).
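    The optimization in this abstract maximizes a gradient-based image similarity between the fluoroscopic projection and a projection of the CT. Gradient correlation is one standard such metric in intensity-based 3D-2D registration, sketched below as an illustration; it is not necessarily the exact gradient information metric used in the paper.

```python
import numpy as np

def gradient_correlation(fixed, moving):
    """Gradient correlation between two 2D images: the mean of the
    normalized cross-correlations of the horizontal and vertical image
    gradients. Values lie in [-1, 1], with 1 for identical gradients.

    fixed, moving: 2D arrays of equal shape (e.g. fluoro frame and a
    digitally reconstructed radiograph at the candidate pose).
    """
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    gy_f, gx_f = np.gradient(fixed)    # gradients along rows and columns
    gy_m, gx_m = np.gradient(moving)
    return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))
```

    In the registration loop, an optimizer such as CMA-ES perturbs the 3D patient pose, re-projects the CT, and keeps the pose whose projection maximizes this similarity against the acquired (possibly very low-dose) x-ray frame.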

  5. Project M: Scale Model of Lunar Landing Site of Apollo 17: Focus on Lighting Conditions and Analysis

    NASA Technical Reports Server (NTRS)

    Vanik, Christopher S.; Crain, Timothy P.

    2010-01-01

This document captures the research and development of a scale-model representation of the Apollo 17 landing site on the Moon as part of the NASA INSPIRE program. Key elements in this model were surface slope characteristics, crater sizes and locations, prominent rocks, and lighting conditions. The model supports development of Autonomous Landing and Hazard Avoidance Technology (ALHAT) and Project M for the GN&C Autonomous Flight Systems Branch. It helps project engineers visualize the landing site, and is housed in the Building 16 Navigation Systems Technology Lab. The lead mentor was Dr. Timothy P. Crain. The purpose of this project was to develop an accurate scale representation of the Apollo 17 landing site on the Moon. This was done on an 8'2.5" x 10'1.375" reduced-friction granite table, which can be restored to its previous condition if needed. The first step in this project was to research the best way to model and recreate the Apollo 17 landing site for the mockup. The project required a thorough plan, budget, and schedule, which were presented to the EG6 Branch for build approval. The final phase was to build the model. The project also required thorough research on the Apollo 17 landing site and the topography of the Moon, conducted on the internet and in person with Dean Eppler, a space scientist from JSC KX. These data were used to calculate the scale of the mockup and the size ratios of the craters, ridges, etc. The final goal was to effectively communicate project status and demonstrate the multiple advantages of using the model. The conclusion of this project was that the mockup was completed as accurately as possible, and it successfully enables Project M specialists to visualize and plan their goals on an accurate three-dimensional surface representation.

  6. Comparing artistic and geometrical perspective depictions of space in the visual field

    PubMed Central

    Baldwin, Joseph; Burleigh, Alistair; Pepperell, Robert

    2014-01-01

    Which is the most accurate way to depict space in our visual field? Linear perspective, a form of geometrical perspective, has traditionally been regarded as the correct method of depicting visual space. But artists have often found it is limited in the angle of view it can depict; wide-angle scenes require uncomfortably close picture viewing distances or impractical degrees of enlargement to be seen properly. Other forms of geometrical perspective, such as fisheye projections, can represent wider views but typically produce pictures in which objects appear distorted. In this study we created an artistic rendering of a hemispherical visual space that encompassed the full visual field. We compared it to a number of geometrical perspective projections of the same space by asking participants to rate which best matched their visual experience. We found the artistic rendering performed significantly better than the geometrically generated projections. PMID:26034563

  7. Comparing artistic and geometrical perspective depictions of space in the visual field.

    PubMed

    Baldwin, Joseph; Burleigh, Alistair; Pepperell, Robert

    2014-01-01

    Which is the most accurate way to depict space in our visual field? Linear perspective, a form of geometrical perspective, has traditionally been regarded as the correct method of depicting visual space. But artists have often found it is limited in the angle of view it can depict; wide-angle scenes require uncomfortably close picture viewing distances or impractical degrees of enlargement to be seen properly. Other forms of geometrical perspective, such as fisheye projections, can represent wider views but typically produce pictures in which objects appear distorted. In this study we created an artistic rendering of a hemispherical visual space that encompassed the full visual field. We compared it to a number of geometrical perspective projections of the same space by asking participants to rate which best matched their visual experience. We found the artistic rendering performed significantly better than the geometrically generated projections.

  8. Definition and novel connections of the entopallium in the pigeon (Columba livia).

    PubMed

    Krützfeldt, Nils O E; Wild, J Martin

    2005-09-12

    The avian entopallium (E) is the major thalamorecipient zone, within the telencephalon, of the tectofugal visual system. Because of discrepancies concerning the structure of this nuclear mass in pigeons, and in light of recent evidence concerning entopallial projections in other avian species, we here redefine and chart some novel entopallial projections in the pigeon by using a combination of cytochrome oxidase (CO) activity, calcium binding protein immunohistochemistry (CBPi), normal histology, and tract tracing. We show that 1) E is defined by the accurate overlap of CO activity and the dense terminations of thalamic (rotundal) efferents; 2) the perientopallium (Ep), E's overlying belt region, receives a relatively sparse rotundal input and is a major source of projections to wider regions of the hemisphere; and 3) E can be subdivided into internal (Ei) and external (Ex) portions on the basis of normal histology, CBPi, and differential projections. Thus, Ei, but not Ex, makes a reciprocal connection with a distinct nucleus in the ventrolateral mesopallium and is a major source of projections to the lateral striatum. These findings suggest the necessity for a revision of the original proposal of a strictly serial flow of visual information through the entopallial complex and further regions of the hemisphere and also require a modification of the long-standing view that E is comparable to only one specific lamina (IV) of extrastriate visual cortex of mammals. Rather, E appears to be composed of a variety of neuronal types possibly equivalent to those in several neocortical laminae. Copyright (c) 2005 Wiley-Liss, Inc.

  9. A new method for text detection and recognition in indoor scene for assisting blind people

    NASA Astrophysics Data System (ADS)

    Jabnoun, Hanen; Benzarti, Faouzi; Amiri, Hamid

    2017-03-01

Developing assistive systems for handicapped persons has become a challenging task in research projects. Recently, a variety of tools have been designed to help visually impaired or blind people as visual substitution systems. The majority of these tools are based on the conversion of input information into auditory or tactile sensory information. Furthermore, object recognition and text retrieval are exploited in visual substitution systems. Text detection and recognition provide a description of the surrounding environment, so that a blind person can readily recognize the scene. In this work, we introduce a method for detecting and recognizing text in indoor scenes. The process consists of detecting the regions of interest that should contain text using connected-component analysis; text recognition is then performed by image correlation. This component of an assistive system for blind people should be simple, so that users can obtain the most informative feedback within the shortest time.

  10. A fast and flexible panoramic virtual reality system for behavioural and electrophysiological experiments.

    PubMed

    Takalo, Jouni; Piironen, Arto; Honkanen, Anna; Lempeä, Mikko; Aikio, Mika; Tuukkanen, Tuomas; Vähäsöyrinki, Mikko

    2012-01-01

    Ideally, neuronal functions would be studied by performing experiments with unconstrained animals whilst they behave in their natural environment. Although this is not feasible currently for most animal models, one can mimic the natural environment in the laboratory by using a virtual reality (VR) environment. Here we present a novel VR system based upon a spherical projection of computer generated images using a modified commercial data projector with an add-on fish-eye lens. This system provides equidistant visual stimulation with extensive coverage of the visual field, high spatio-temporal resolution and flexible stimulus generation using a standard computer. It also includes a track-ball system for closed-loop behavioural experiments with walking animals. We present a detailed description of the system and characterize it thoroughly. Finally, we demonstrate the VR system's performance whilst operating in closed-loop conditions by showing the movement trajectories of the cockroaches during exploratory behaviour in a VR forest.
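The add-on fish-eye optics above provide equidistant stimulation, which is conventionally modeled as an equidistant ("f-theta") projection: radial distance on the image is proportional to the angle from the optical axis. A minimal sketch of that mapping in Python (an illustrative textbook model, not the authors' actual optical design):

```python
import math

def pixel_to_direction(x, y, cx, cy, focal_px):
    # Equidistant fisheye model: r = f * theta, so the off-axis angle
    # grows linearly with radial pixel distance from the image center.
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    theta = r / focal_px          # off-axis angle in radians
    phi = math.atan2(dy, dx)      # azimuth in the image plane
    # Unit viewing direction, z along the optical axis.
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))
```

Under this model, a pixel one focal length away from the center is displayed one radian off-axis, which is what makes the stimulation angularly uniform across the visual field.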

  11. Meet our Neighbours - a tactile experience

    NASA Astrophysics Data System (ADS)

    Canas, L.; Lobo Correia, A.

    2013-09-01

Planetary science is a key field in astronomy that draws much attention and engages large numbers of enthusiasts. In essence, it is a visual science, and the current resources and activities for the inclusion of visually impaired children, although increasing, are still costly and somewhat scarce. There is therefore a paramount need to develop more low-cost resources in order to provide experiences that can reach everyone, even the most socially deprived communities. "Meet our neighbours! - a tactile experience" plans to promote and provide inclusion activities for visually impaired children and their non-visually-impaired peers through hands-on, low-cost astronomy activities. It is aimed at children from 6 to 12 years old and produced a set of 13 tactile images of the main objects of the Solar System that can be used in schools, science centres, and outreach associations. Addressing several common problems through tactile resources, we present ways to provide low-cost solutions (avoiding expensive tactile printing), to promote inclusion and interactive hands-on activities for visually impaired children and their non-visually-impaired peers, and to create dynamic interactions between them based on oral knowledge transmission. Here we describe the process of implementing such an initiative with target communities: establishing a bridge between scientists, children, and teachers. We also discuss the struggles and challenges perceived during the project, and the enriching experience of engaging these specific groups with astronomy, broadening horizons in an overall experience accessible to all.

  12. Seeing the Invisible: Educating the Public on Planetary Magnetic Fields and How they Affect Atmospheres

    NASA Astrophysics Data System (ADS)

    Fillingim, M. O.; Brain, D. A.; Peticolas, L. M.; Schultz, G.; Yan, D.; Guevara, S.; Randol, S.

    2010-08-01

    Magnetic fields and charged particles are difficult for school children, the general public, and scientists alike to visualize. But studies of planetary magnetospheres and ionospheres have broad implications for planetary evolution, from the deep interior to the ancient climate, that are important to communicate to each of these audiences. This presentation will highlight the visualization materials that we are developing to educate audiences on the magnetic fields of planets and how they affect the atmosphere. The visualization materials that we are developing consist of simplified data sets that can be displayed on spherical projection systems and portable 3-D rigid models of planetary magnetic fields.

  13. Long-term effects of neonatal hypoxia-ischemia on structural and physiological integrity of the eye and visual pathway by multimodal MRI.

    PubMed

    Chan, Kevin C; Kancherla, Swarupa; Fan, Shu-Juan; Wu, Ed X

    2014-12-09

    Neonatal hypoxia-ischemia is a major cause of brain damage in infants and may frequently present visual impairments. Although advancements in perinatal care have increased survival, the pathogenesis of hypoxic-ischemic injury and the long-term consequences to the visual system remain unclear. We hypothesized that neonatal hypoxia-ischemia can lead to chronic, MRI-detectable structural and physiological alterations in both the eye and the brain's visual pathways. Eight Sprague-Dawley rats underwent ligation of the left common carotid artery followed by hypoxia for 2 hours at postnatal day 7. One year later, T2-weighted MRI, gadolinium-enhanced MRI, chromium-enhanced MRI, manganese-enhanced MRI, and diffusion tensor MRI (DTI) of the visual system were evaluated and compared between opposite hemispheres using a 7-Tesla scanner. Within the eyeball, systemic gadolinium administration revealed aqueous-vitreous or blood-ocular barrier leakage only in the ipsilesional left eye despite comparable aqueous humor dynamics in the anterior chamber of both eyes. Binocular intravitreal chromium injection showed compromised retinal integrity in the ipsilesional eye. Despite total loss of the ipsilesional visual cortex, both retinocollicular and retinogeniculate pathways projected from the contralesional eye toward ipsilesional visual cortex possessed stronger anterograde manganese transport and less disrupted structural integrity in DTI compared with the opposite hemispheres. High-field, multimodal MRI demonstrated in vivo the long-term structural and physiological deficits in the eye and brain's visual pathways after unilateral neonatal hypoxic-ischemic injury. The remaining retinocollicular and retinogeniculate pathways appeared to be more vulnerable to anterograde degeneration from eye injury than retrograde, transsynaptic degeneration from visual cortex injury. Copyright 2015 The Association for Research in Vision and Ophthalmology, Inc.

  14. Long-Term Effects of Neonatal Hypoxia-Ischemia on Structural and Physiological Integrity of the Eye and Visual Pathway by Multimodal MRI

    PubMed Central

    Chan, Kevin C.; Kancherla, Swarupa; Fan, Shu-Juan; Wu, Ed X.

    2015-01-01

    Purpose. Neonatal hypoxia-ischemia is a major cause of brain damage in infants and may frequently present visual impairments. Although advancements in perinatal care have increased survival, the pathogenesis of hypoxic-ischemic injury and the long-term consequences to the visual system remain unclear. We hypothesized that neonatal hypoxia-ischemia can lead to chronic, MRI-detectable structural and physiological alterations in both the eye and the brain's visual pathways. Methods. Eight Sprague-Dawley rats underwent ligation of the left common carotid artery followed by hypoxia for 2 hours at postnatal day 7. One year later, T2-weighted MRI, gadolinium-enhanced MRI, chromium-enhanced MRI, manganese-enhanced MRI, and diffusion tensor MRI (DTI) of the visual system were evaluated and compared between opposite hemispheres using a 7-Tesla scanner. Results. Within the eyeball, systemic gadolinium administration revealed aqueous-vitreous or blood-ocular barrier leakage only in the ipsilesional left eye despite comparable aqueous humor dynamics in the anterior chamber of both eyes. Binocular intravitreal chromium injection showed compromised retinal integrity in the ipsilesional eye. Despite total loss of the ipsilesional visual cortex, both retinocollicular and retinogeniculate pathways projected from the contralesional eye toward ipsilesional visual cortex possessed stronger anterograde manganese transport and less disrupted structural integrity in DTI compared with the opposite hemispheres. Conclusions. High-field, multimodal MRI demonstrated in vivo the long-term structural and physiological deficits in the eye and brain's visual pathways after unilateral neonatal hypoxic-ischemic injury. The remaining retinocollicular and retinogeniculate pathways appeared to be more vulnerable to anterograde degeneration from eye injury than retrograde, transsynaptic degeneration from visual cortex injury. PMID:25491295

  15. Virtual reality for intelligent and interactive operating, training, and visualization systems

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen; Schluse, Michael

    2000-10-01

Virtual Reality methods allow a new and intuitive way of communication between man and machine. The basic idea of Virtual Reality (VR) is the generation of artificial, computer-simulated worlds, which the user not only can look at but can also interact with actively using a data glove and data helmet. The main emphasis for the use of such techniques at the IRF is the development of a new generation of operator interfaces for the control of robots and other automation components, and of intelligent training systems for complex tasks. The basic idea of the methods developed at the IRF for the realization of Projective Virtual Reality is to let the user work in the virtual world as he would act in reality. The user's actions are recognized by the VR system and, by means of new and intelligent control software, projected onto automation components such as robots, which then perform the actions necessary to execute the user's task in reality. In this operation mode the user no longer has to be a robot expert to generate tasks for robots or to program them, because the intelligent control software recognizes the user's intention and automatically generates the commands for nearly every automation component. Virtual Reality methods are thus ideally suited for universal man-machine interfaces for the control and supervision of a broad class of automation components, and for interactive training and visualization systems. The VR system of the IRF, COSIMIR/VR, forms the basis for different projects, starting with the control of space automation systems in the projects CIROS, VITAL, and GETEX, the realization of a comprehensive development tool for the International Space Station, and, last but not least, the realistic simulation of fire extinguishing, forest machines, and excavators, which will be presented in the final paper in addition to the key ideas of this VR system.

  16. Promoting Art through Technology, Education and Research of Natural Sciences (PATTERNS) across Wyoming, A Wyoming NSF EPSCoR Funded Project

    NASA Astrophysics Data System (ADS)

    Gellis, B. S.; McElroy, B. J.

    2016-12-01

PATTERNS across Wyoming is a science and art project that promotes new and innovative approaches to STEM education and outreach, helping to re-contextualize how educators think about creative knowledge and how to reach diverse audiences through informal education. The convergence of art, science, and STEM outreach efforts is vital to increasing the presence of art in the geosciences, developing multidisciplinary student research opportunities, expanding creative STEM thinking, and generating creative approaches to visualizing scientific data. A major goal of this project is to train art students to think critically about the value of scientific and artistic inquiry. PATTERNS across Wyoming makes science tangible to Wyoming citizens through K-14 art classrooms, and promotes novel maker-based art explorations centered on Wyoming's geosciences. The first PATTERNS across Wyoming scientific learning module (SIM) is a fish-tank-sized flume that recreates natural patterns in sand resulting from fluid flow and sediment transport. It will help promote the understanding of river systems found across Wyoming (e.g., the Green, Yellowstone, and Snake). This SIM, and the student artwork inspired by it, will help to visualize environmental water changes in the central Rocky Mountains and will provide the essential inspiration and tools for Wyoming art students to design biologically driven creative explorations. Each art class will receive different fluvial system conditions, allowing for greater understanding of river system interactions. Artwork will return to the University of Wyoming for a STE{A}M Exhibition inspired by Wyoming's varying fluvial systems. It is our hope that new generations of science and art critical thinkers will not only explore questions of why and how scientific phenomena occur, but also how to better predict, conserve, and study invaluable artifacts, and how to visualize conditions that allow for better control of scientific outcomes and public understanding.

  17. Sensing Super-position: Visual Instrument Sensor Replacement

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Schipper, John F.

    2006-01-01

The coming decade of fast, cheap, and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of his dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation alongside the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
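The frequency-time mapping described above can be illustrated with a simple column-scan sonification: image columns are played left to right over time, each row drives a sine oscillator whose pitch rises toward the top of the image, and pixel brightness controls amplitude. A minimal NumPy sketch of such a scheme (illustrative only, not the VISOR implementation):

```python
import numpy as np

def image_to_sound(image, duration_s=1.0, sample_rate=8000,
                   f_min=200.0, f_max=2000.0):
    # Scan columns left-to-right; row index maps to oscillator frequency
    # (top row = highest pitch), brightness maps to amplitude.
    rows, cols = image.shape
    freqs = np.linspace(f_max, f_min, rows)
    samples_per_col = int(duration_s * sample_rate / cols)
    t = np.arange(samples_per_col) / sample_rate
    audio = []
    for c in range(cols):
        col = image[:, c].astype(float)
        tones = np.sin(2 * np.pi * freqs[:, None] * t[None, :])
        audio.append((col[:, None] * tones).mean(axis=0))
    signal = np.concatenate(audio)
    peak = np.abs(signal).max()
    return signal / peak if peak > 0 else signal  # normalize to [-1, 1]
```

A diagonal line in the image, for instance, becomes a descending pitch sweep, which is the kind of time-frequency structure the hearing system learns to interpret.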

  18. Evaluation of an Innovative Digital Assessment Tool in Dental Anatomy.

    PubMed

    Lam, Matt T; Kwon, So Ran; Qian, Fang; Denehy, Gerald E

    2015-05-01

The E4D Compare software is an innovative tool that provides immediate feedback on students' projects and competencies. It should provide consistent scores even when different scanners, which may have inherent subtle differences in calibration, are used. This study aimed to evaluate potential discrepancies in evaluation with the E4D Compare software across four different NEVO scanners in dental anatomy projects. Additionally, the correlation between digital and visual scores was evaluated. Thirty-five projects of maxillary left central incisors were evaluated; thirty were wax-ups performed by four operators and five consisted of standard dentoform teeth. Five scores were obtained for each project: one from an instructor who visually graded the project, and one from each of the four NEVO scanners. A faculty member involved in teaching the dental anatomy course blindly scored the 35 projects. One operator scanned all projects with the four NEVO scanners (D4D Technologies, Richardson, TX, USA). The images were aligned to the gold standard, and the tolerance was set at 0.3 mm to generate a score reflecting the percentage match between the project and the gold standard. One-way ANOVA with repeated measures was used to determine whether there was a significant difference in scores among the four NEVO scanners. A paired-sample t-test was used to detect any difference between visual scores and the average scores of the four NEVO scanners. Pearson's correlation test was used to assess the relationship between visual scores and the average scores of the NEVO scanners. There was no significant difference in mean scores among the four NEVO scanners [F(3, 102) = 2.27, p = 0.0852; one-way ANOVA with repeated measures]. However, the data provided strong evidence of a significant difference between visual and digital scores (p = 0.0217; paired-sample t-test): mean visual scores were significantly lower than digital scores (72.4 vs. 75.1). Pearson's correlation coefficient of 0.85 indicated a strong correlation between visual and digital scores (p < 0.0001). The E4D Compare software provides consistent scores even when different scanners are used and correlates well with visual scores, making such digital assessment tools promising for dental education.
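The digital scoring step described above reduces to counting how much of the scanned surface lies within the tolerance band of the gold standard. A minimal sketch, assuming the two surfaces are already aligned and sampled at corresponding points (`percent_match` is a hypothetical helper, not the E4D Compare internals):

```python
import numpy as np

def percent_match(project_pts, gold_pts, tolerance_mm=0.3):
    # Fraction of corresponding surface points whose deviation from the
    # gold standard falls within the tolerance band (0.3 mm in the study),
    # expressed as a percentage.
    dev = np.linalg.norm(project_pts - gold_pts, axis=1)
    return 100.0 * np.mean(dev <= tolerance_mm)
```

With point correspondences fixed by the alignment, the score is insensitive to which scanner produced the mesh, which is consistent with the consistency the study reports.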

  19. The logic of selecting an appropriate map projection in a Decision Support System (DSS)

    USGS Publications Warehouse

    Finn, Michael P.; Usery, E. Lynn; Woodard, Laura N.; Yamamoto, Kristina H.

    2017-01-01

There are undeniable practical consequences to consider when choosing an appropriate map projection for a specific region. The areas of the globe covered by global, continental, and regional maps are so different that each map type is distinctly affected by the distortion incurred during a projection transformation, owing to an assortment of effects on distance, direction, scale, and area. A Decision Support System (DSS) for Map Projections of Small Scale Data was previously developed to help select an appropriate projection. This paper reports on a tutorial to accompany that DSS. The DSS poses questions interactively, allowing the user to decide on the parameters, which in turn determine the logic path to a solution. The objective of the tutorial is achieved by visually representing the logic path taken to a recommended map projection derived from the parameters the user selects. The tutorial informs the DSS user about the pedigree of the projection and provides a basic explanation of the specific projection's design. This information is provided by informational pop-ups and other aids.
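The interactive question path can be pictured as a small rule table keyed on the map's extent and the distortion property the user wants preserved; as a general cartographic fact, conformal projections preserve local shape while equal-area projections preserve area. The rules below are purely illustrative, not the DSS's actual logic:

```python
def recommend_projection(extent, property_to_preserve):
    # Toy decision logic in the spirit of the DSS: extent plus the
    # property to preserve drive the recommendation. (Illustrative rules
    # only; the real DSS asks many more questions.)
    rules = {
        ("world", "area"): "Mollweide",
        ("world", "shape"): "Mercator",
        ("continental", "area"): "Albers Equal Area Conic",
        ("continental", "shape"): "Lambert Conformal Conic",
        ("regional", "area"): "Albers Equal Area Conic",
        ("regional", "shape"): "Transverse Mercator",
    }
    return rules.get((extent, property_to_preserve), "Ask more questions")
```

Each answered question narrows the rule set, which is exactly the logic path the tutorial visualizes.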

  20. Visualizing complex hydrodynamic features

    NASA Astrophysics Data System (ADS)

    Kempf, Jill L.; Marshall, Robert E.; Yen, Chieh-Cheng

    1990-08-01

The Lake Erie Forecasting System is a cooperative project by university, private, and governmental institutions to provide continuous forecasting of three-dimensional structure within the lake. The forecasts will include water velocity and temperature distributions throughout the body of water, as well as water level and wind-wave distributions at the lake's surface. Many hydrodynamic features can be extracted from this data, including coastal jets, large-scale thermocline motion, and zones of upwelling and downwelling. A visualization system is being developed that will aid in understanding these features and their interactions. Because of the wide variety of features, they cannot all be adequately represented by a single rendering technique; particle tracing, surface rendering, and volumetric techniques are all necessary. This visualization effort is aimed at creating a system that provides meaningful forecasts for those using the lake for recreational and commercial purposes. For example, the fishing industry needs to know about large-scale thermocline motion in order to find the best fishing areas, and power plants need to know water intake temperatures. The visualization system must convey this information in a manner that is easily understood by these users. Scientists must also be able to use this system to verify their hydrodynamic simulations. The focus of the system, therefore, is to provide information that serves these diverse interests without overwhelming any single user with unnecessary data.

  1. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.
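Aim (3) above, multiscale image description, is commonly realized as a Gaussian scale-space: the same signal smoothed at increasing scales, so that coarse scales retain only large structures. A minimal 1-D sketch with NumPy (an illustrative standard construction, not the project's actual code):

```python
import numpy as np

def gaussian_kernel(sigma):
    # Normalized discrete Gaussian kernel truncated at +/- 3 sigma.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def multiscale_descriptions(row, sigmas=(1.0, 2.0, 4.0)):
    # One simple multiscale description: the same signal smoothed at
    # several scales; larger sigma suppresses finer detail.
    return [np.convolve(row, gaussian_kernel(s), mode="same")
            for s in sigmas]
```

Interactive analysis can then operate on whichever scale isolates the structure of interest, e.g. selecting features at a coarse scale and refining their boundaries at a fine one.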

  2. Realistic realtime illumination of complex environment for immersive systems. A case study: the Parthenon

    NASA Astrophysics Data System (ADS)

    Callieri, M.; Debevec, P.; Pair, J.; Scopigno, R.

    2005-06-01

Offline rendering techniques have nowadays reached an astonishing level of realism, but at the cost of long computation times. The new generation of programmable graphics hardware, on the other hand, makes it possible to implement in realtime some of the visual effects previously available only for cinematographic production. In a collaboration between the Visual Computing Lab (ISTI-CNR) and the Institute for Creative Technologies of the University of Southern California, a realtime demo has been developed that replicates a sequence from the short movie "The Parthenon" presented at Siggraph 2004. The application is designed to run on an immersive reality system, making it possible for a user to perceive the virtual environment with cinematographic visual quality. In this paper we present the principal ideas of the project, discussing design issues and the technical solutions used for the realtime demo.

  3. Argonne Leadership Computing Facility 2011 annual report : Shaping future supercomputing.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papka, M.; Messina, P.; Coffey, R.

The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance computing architectures. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, thanks to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains.
The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to implement those algorithms. The Data Analytics and Visualization Team lends expertise in tools and methods for high-performance, post-processing of large datasets, interactive data exploration, batch visualization, and production visualization. The Operations Team ensures that system hardware and software work reliably and optimally; system tools are matched to the unique system architectures and scale of ALCF resources; the entire system software stack works smoothly together; and I/O performance issues, bug fixes, and requests for system software are addressed. The User Services and Outreach Team offers frontline services and support to existing and potential ALCF users. The team also provides marketing and outreach to users, DOE, and the broader community.

  4. Neutron and positron techniques for fluid transfer system analysis and remote temperature and stress measurement

    NASA Astrophysics Data System (ADS)

    Stewart, P. A. E.

    1987-05-01

    Present and projected applications of penetrating radiation techniques to gas turbine research and development are considered. Approaches discussed include the visualization and measurement of metal component movement using high energy X-rays, the measurement of metal temperatures using epithermal neutrons, the measurement of metal stresses using thermal neutron diffraction, and the visualization and measurement of oil and fuel systems using either cold neutron radiography or emitting isotope tomography. By selecting the radiation appropriate to the problem, the desired data can be probed for and obtained through imaging or signal acquisition, and the necessary information can then be extracted with digital image processing or knowledge based image manipulation and pattern recognition.

  5. Tailoring the visual communication of climate projections for local adaptation practitioners in Germany and the UK

    PubMed Central

    Lorenz, Susanne; Dessai, Suraje; Forster, Piers M.; Paavola, Jouni

    2015-01-01

    Visualizations are widely used in the communication of climate projections. However, their effectiveness has rarely been assessed among their target audience. Given recent calls to increase the usability of climate information through the tailoring of climate projections, it is imperative to assess the effectiveness of different visualizations. This paper explores the complexities of tailoring through an online survey conducted with 162 local adaptation practitioners in Germany and the UK. The survey examined respondents’ assessed and perceived comprehension (PC) of visual representations of climate projections as well as preferences for using different visualizations in communicating and planning for a changing climate. Comprehension and use are tested using four different graph formats, which are split into two pairs. Within each pair the information content is the same but is visualized differently. We show that even within a fairly homogeneous user group, such as local adaptation practitioners, there are clear differences in respondents’ comprehension of and preference for visualizations. We do not find a consistent association between assessed comprehension and PC or use within the two pairs of visualizations that we analysed. There is, however, a clear link between PC and use of graph format. This suggests that respondents use what they think they understand the best, rather than what they actually understand the best. These findings highlight that audience-specific targeted communication may be more complex and challenging than previously recognized. PMID:26460109

  6. Vision for perception and vision for action in the primate brain.

    PubMed

    Goodale, M A

    1998-01-01

    Visual systems first evolved not to enable animals to see, but to provide distal sensory control of their movements. Vision as 'sight' is a relative newcomer to the evolutionary landscape, but its emergence has enabled animals to carry out complex cognitive operations on perceptual representations of the world. The two streams of visual processing that have been identified in the primate cerebral cortex are a reflection of these two functions of vision. The dorsal 'action' stream projecting from primary visual cortex to the posterior parietal cortex provides flexible control of more ancient subcortical visuomotor modules for the production of motor acts. The ventral 'perceptual' stream projecting from the primary visual cortex to the temporal lobe provides the rich and detailed representation of the world required for cognitive operations. Both streams process information about the structure of objects and about their spatial locations--and both are subject to the modulatory influences of attention. Each stream, however, uses visual information in different ways. Transformations carried out in the ventral stream permit the formation of perceptual representations that embody the enduring characteristics of objects and their relations; those carried out in the dorsal stream, which utilize moment-to-moment information about objects within egocentric frames of reference, mediate the control of skilled actions. Both streams work together in the production of goal-directed behaviour.

  7. Methods for structuring scientific knowledge from many areas related to aging research.

    PubMed

    Zhavoronkov, Alex; Cantor, Charles R

    2011-01-01

    Aging and age-related disease represents a substantial quantity of current natural, social and behavioral science research efforts. Presently, no centralized system exists for tracking aging research projects across numerous research disciplines. The multidisciplinary nature of this research complicates the understanding of underlying project categories, the establishment of project relations, and the development of a unified project classification scheme. We have developed a highly visual database, the International Aging Research Portfolio (IARP), available at AgingPortfolio.org to address this issue. The database integrates information on research grants, peer-reviewed publications, and issued patent applications from multiple sources. Additionally, the database uses flexible project classification mechanisms and tools for analyzing project associations and trends. This system enables scientists to search the centralized project database, to classify and categorize aging projects, and to analyze the funding aspects across multiple research disciplines. The IARP is designed to provide improved allocation and prioritization of scarce research funding, to reduce project overlap, and to improve scientific collaboration, thereby accelerating scientific and medical progress in a rapidly growing area of research. Grant applications often precede publications, and some grants never result in publications; the system thus provides an earlier and broader view of research activity in many research disciplines. This project is a first attempt to provide a centralized database system for research grants and to categorize aging research projects into multiple subcategories utilizing both advanced machine algorithms and a hierarchical environment for scientific collaboration.

  8. A novel augmented reality system of image projection for image-guided neurosurgery.

    PubMed

    Mahvash, Mehran; Besharati Tabrizi, Leila

    2013-05-01

    Augmented reality systems combine virtual images with a real environment. To design and develop an augmented reality system for image-guided surgery of brain tumors using image projection. A virtual image was created in two ways: (1) an MRI-based 3D model of the head matched with the segmented lesion of a patient using MRIcro software (version 1.4, freeware, Chris Rorden) and (2) a digital photograph-based model in which the tumor region was drawn using image-editing software. The real environment was simulated with a head phantom. For direct projection of the virtual image onto the head phantom, a commercially available video projector (PicoPix 1020, Philips) was used. The position and size of the virtual image were adjusted manually for registration, which was performed using anatomical landmarks and fiducial marker positions. An augmented reality system for image-guided neurosurgery using direct image projection has been designed successfully and implemented in a first evaluation with promising results. The virtual image could be projected onto the head phantom and was registered manually. Accurate registration (mean projection error: 0.3 mm) was performed using anatomical landmarks and fiducial marker positions. The direct projection of a virtual image onto the patient's head, skull, or brain surface in real time is an augmented reality system that can be used for image-guided neurosurgery. In this paper, the first evaluation of the system is presented. The encouraging first visualization results indicate that the presented augmented reality system might be an important enhancement of image-guided neurosurgery.

  9. STDP in lateral connections creates category-based perceptual cycles for invariance learning with multiple stimuli.

    PubMed

    Evans, Benjamin D; Stringer, Simon M

    2015-04-01

    Learning to recognise objects and faces is an important and challenging problem tackled by the primate ventral visual system. One major difficulty lies in recognising an object despite profound differences in the retinal images it projects, due to changes in view, scale, position and other identity-preserving transformations. Several models of the ventral visual system have been successful in coping with these issues, but have typically been privileged by exposure to only one object at a time. In natural scenes, however, the challenges of object recognition are typically further compounded by the presence of several objects which should be perceived as distinct entities. In the present work, we explore one possible mechanism by which the visual system may overcome these two difficulties simultaneously, through segmenting unseen (artificial) stimuli using information about their category encoded in plastic lateral connections. We demonstrate that these experience-guided lateral interactions robustly organise input representations into perceptual cycles, allowing feed-forward connections trained with spike-timing-dependent plasticity to form independent, translation-invariant output representations. We present these simulations as a functional explanation for the role of plasticity in the lateral connectivity of visual cortex.
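    The feed-forward learning rule named in the abstract, spike-timing-dependent plasticity, has a common pair-based exponential form. The sketch below shows that textbook form, not the paper's specific model; all parameter values are illustrative.

```python
import math

# Pair-based exponential STDP: a presynaptic spike shortly before a
# postsynaptic spike strengthens the synapse (LTP); the reverse order
# weakens it (LTD). Parameters here are illustrative placeholders.
A_PLUS, A_MINUS = 0.01, 0.012      # LTP / LTD learning rates
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # decay time constants (ms)

def stdp_dw(dt: float) -> float:
    """Weight change for one spike pair; dt = t_post - t_pre in ms."""
    if dt >= 0:                                   # pre before post: potentiate
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)    # post before pre: depress

# A pre spike 10 ms before a post spike gives a small positive change;
# the opposite ordering gives a slightly larger negative one.
dw_ltp = stdp_dw(10.0)
dw_ltd = stdp_dw(-10.0)
```

The asymmetry (A_MINUS slightly larger than A_PLUS) is a common stabilizing choice so that uncorrelated activity produces net depression.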

  10. JSC Shuttle Mission Simulator (SMS) visual system payload bay video image

    NASA Technical Reports Server (NTRS)

    1981-01-01

    This space shuttle orbiter payload bay (PLB) video image is used in JSC's Fixed Based (FB) Shuttle Mission Simulator (SMS). The image is projected inside the FB-SMS crew compartment during mission simulation training. The FB-SMS is located in the Mission Simulation and Training Facility Bldg 5.

  11. Summary Report for Personal Chemical Exposure Informatics: Visualization and Exploratory Research in Simulations and Systems (PerCEIVERS)

    EPA Science Inventory

    EPA Research Pathfinder Innovation Projects (PIPs), an internal competition for Agency scientists, was launched in 2010 to solicit innovative research proposals that would help the Agency to advance science for sustainability. In 2011, of the 117 proposals received from almost 30...

  12. Spotlight on Arts Education. Volume 3, Spring, 1988.

    ERIC Educational Resources Information Center

    North Carolina State Dept. of Public Instruction, Raleigh. Div. of Arts Education.

    This volume focuses on four North Carolina school systems that have developed strategies for improving teaching and learning environments in arts education. Article 1 discusses the challenge of providing adequate levels of visual arts instruction for exceptional children in Dare County and describes a specific art project for handicapped students…

  13. Embedding of Cortical Representations by the Superficial Patch System

    PubMed Central

    Da Costa, Nuno M. A.; Girardin, Cyrille C.; Naaman, Shmuel; Omer, David B.; Ruesch, Elisha; Grinvald, Amiram; Douglas, Rodney J.

    2011-01-01

    Pyramidal cells in layers 2 and 3 of the neocortex of many species collectively form a clustered system of lateral axonal projections (the superficial patch system—Lund JS, Angelucci A, Bressloff PC. 2003. Anatomical substrates for functional columns in macaque monkey primary visual cortex. Cereb Cortex. 13:15–24. or daisy architecture—Douglas RJ, Martin KAC. 2004. Neuronal circuits of the neocortex. Annu Rev Neurosci. 27:419–451.), but the function performed by this general feature of the cortical architecture remains obscure. By comparing the spatial configuration of labeled patches with the configuration of responses to drifting grating stimuli, we found the spatial organizations both of the patch system and of the cortical response to be highly conserved between cat and monkey primary visual cortex. More importantly, the configuration of the superficial patch system is directly reflected in the arrangement of function across monkey primary visual cortex. Our results indicate a close relationship between the structure of the superficial patch system and cortical responses encoding a single value across the surface of visual cortex (self-consistent states). This relationship is consistent with the spontaneous emergence of orientation response–like activity patterns during ongoing cortical activity (Kenet T, Bibitchkov D, Tsodyks M, Grinvald A, Arieli A. 2003. Spontaneously emerging cortical representations of visual attributes. Nature. 425:954–956.). We conclude that the superficial patch system is the physical encoding of self-consistent cortical states, and that a set of concurrently labeled patches participate in a network of mutually consistent representations of cortical input. PMID:21383233

  14. Distributed visualization framework architecture

    NASA Astrophysics Data System (ADS)

    Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger

    2010-01-01

    An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy-to-use and extensible framework for research in scientific visualization. The system provides both single-user and collaborative distributed environments. The system architecture employs a client-server model. Visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns. All components are based on the functionality of a small set of interfaces. This allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These light-weight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back-end is supported by concrete implementations wherever needed (for instance for rendering). A middle-tier manages any communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization. These include a layer compositor editor, a programmable shader editor, a material editor and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager. The asset manager keeps track of all of the registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically.
Hence if a new component is added that supports the IMaterial interface, any instances of this can be used in the various GUI components that work with this interface. One of the main features is an interactive shader designer. This allows rapid prototyping of new visualization renderings that are shader-based and greatly accelerates the development and debug cycle.
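    The registry pattern described above, where GUI components query an AssetManager for everything supporting a given interface, can be sketched minimally. The name IMaterial comes from the abstract; every other name and the Python rendering of the pattern are illustrative assumptions, not the framework's actual API.

```python
class IMaterial:
    """Interface implemented by material components (name from the abstract)."""
    def shade(self) -> str:
        raise NotImplementedError

class AssetManager:
    """Central registry: proxies register once, GUIs query by interface."""
    def __init__(self):
        self._proxies = []

    def register(self, proxy) -> None:
        self._proxies.append(proxy)

    def query(self, interface):
        # A GUI component asks for all registered proxies supporting an
        # interface, so new implementations show up in its lists
        # automatically, without changes to the GUI code.
        return [p for p in self._proxies if isinstance(p, interface)]

class PhongMaterial(IMaterial):
    """A hypothetical concrete material used to populate the registry."""
    def shade(self) -> str:
        return "phong"

manager = AssetManager()
manager.register(PhongMaterial())
materials = manager.query(IMaterial)   # picked up without GUI changes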

  15. Collaborative interactive visualization: exploratory concept

    NASA Astrophysics Data System (ADS)

    Mokhtari, Marielle; Lavigne, Valérie; Drolet, Frédéric

    2015-05-01

    Dealing with an ever increasing amount of data is a challenge that military intelligence analysts or teams of analysts face day to day. Increased individual and collective comprehension comes through collaboration between people: the better the collaboration, the better the comprehension. Nowadays, various technologies support and enhance collaboration by allowing people to connect and collaborate in settings as varied as across mobile devices, over networked computers, display walls, and tabletop surfaces, to name just a few. A powerful collaboration system includes traditional and multimodal visualization features to achieve effective human communication. Interactive visualization strengthens collaboration because this approach is conducive to incrementally building a mental assessment of the data meaning. The purpose of this paper is to present an overview of the envisioned collaboration architecture and the interactive visualization concepts underlying the Sensemaking Support System prototype developed to support analysts in the context of the Joint Intelligence Collection and Analysis Capability project at DRDC Valcartier. It presents the current version of the architecture, discusses future capabilities to help analysts in the accomplishment of their tasks, and finally recommends collaboration and visualization technologies that allow going a step further, both as individuals and as a team.

  16. A low-cost, portable, micro-controlled device for multi-channel LED visual stimulation.

    PubMed

    Pinto, Marcos Antonio da Silva; de Souza, John Kennedy Schettino; Baron, Jerome; Tierra-Criollo, Carlos Julio

    2011-04-15

    Light emitting diodes (LEDs) are extensively used as light sources to investigate visual and visually related function and dysfunction. Here, we describe the design of a compact, low-cost, stand-alone LED-based system that enables the configuration, storage and presentation of elaborate visual stimulation paradigms. The core functionality of this system is provided by a microcontroller whose ultra-low power consumption makes it well suited for long-lasting battery applications. The effective use of hardware resources is managed by multi-layered architecture software that provides an intuitive and user-friendly interface. In the configuration mode, different stimulation sequences can be created and memorized for ten channels, independently. LED-driving current output can be set either as continuous or pulse modulated, up to 500 Hz, by duty cycle adjustments. In run mode, multiple-channel stimulus sequences are automatically applied according to the pre-programmed protocol. Steady-state visual evoked potentials were successfully recorded in five subjects with no visible electromagnetic interference from the stimulator, demonstrating the efficacy of combining our prototyped equipment with electrophysiological techniques. Finally, we discuss a number of possible improvements for future development of our project. Copyright © 2011 Elsevier B.V. All rights reserved.
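    The channel timing implied by pulse modulation with duty-cycle adjustment is simple to make concrete. The 500 Hz ceiling comes from the abstract; the function name, parameters, and range checks below are hypothetical, not the device's firmware interface.

```python
def pulse_timings(freq_hz: float, duty_cycle: float):
    """On/off durations (seconds) per period of a pulse-modulated LED channel.

    Mirrors the ranges described for the stimulator: pulse rates up to
    500 Hz, duty cycle as a fraction in (0, 1]. Purely illustrative.
    """
    if not (0.0 < freq_hz <= 500.0):
        raise ValueError("pulse rate outside the stimulator's described range")
    if not (0.0 < duty_cycle <= 1.0):
        raise ValueError("duty cycle must be a fraction in (0, 1]")
    period = 1.0 / freq_hz
    t_on = period * duty_cycle
    return t_on, period - t_on

# A 100 Hz pulse train at 25% duty cycle: 2.5 ms on, 7.5 ms off per period.
on, off = pulse_timings(100.0, 0.25)
```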

  17. Scoring nuclear pleomorphism using a visual BoF modulated by a graph structure

    NASA Astrophysics Data System (ADS)

    Moncayo-Martínez, Ricardo; Romo-Bucheli, David; Arias, Viviana; Romero, Eduardo

    2017-11-01

    Nuclear pleomorphism has been recognized as a key histological criterion in breast cancer grading systems (such as the Bloom-Richardson and Nottingham grading systems). However, nuclear pleomorphism assessment is subjective and presents high inter-reader variability. Automatic algorithms might facilitate quantitative estimation of nuclear variations in shape and size. Nevertheless, automatic segmentation of the nuclei is difficult and still an open research problem. This paper presents a method using a bag of multi-scale visual features, modulated by a graph structure, to grade nuclei in breast cancer microscopical fields. This strategy constructs hematoxylin-eosin image patches, each containing a nucleus that is represented by a set of visual words in the BoF. The contribution of each visual word is computed by examining the visual words in an associated graph built when projecting the multi-dimensional BoF to a bi-dimensional plane where local relationships are conserved. The methodology was evaluated using 14 breast cancer cases of the Cancer Genome Atlas database. From these cases, a set of 134 microscopical fields was extracted, and under a leave-one-out validation scheme, an average F-score of 0.68 was obtained.
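    The core bag-of-features step, clustering descriptors into a visual vocabulary and representing each patch as a word histogram, can be sketched generically. The synthetic descriptors, 16-dimensional feature size, and 32-word vocabulary below are placeholders, and the paper's graph-based modulation of word contributions is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-ins for multi-scale descriptors extracted from
# nucleus-centered patches (the paper's actual features differ).
train_descriptors = rng.normal(size=(500, 16))  # pooled training descriptors
patch_descriptors = rng.normal(size=(40, 16))   # descriptors from one patch

# 1. Learn a visual vocabulary by clustering the training descriptors;
#    each cluster center is one "visual word".
codebook = KMeans(n_clusters=32, n_init=10, random_state=0)
codebook.fit(train_descriptors)

# 2. Represent the patch as a normalized histogram of visual-word
#    occurrences; this histogram is the patch's BoF signature.
words = codebook.predict(patch_descriptors)
hist = np.bincount(words, minlength=32).astype(float)
hist /= hist.sum()
```

A classifier trained on such histograms would then assign the pleomorphism grade; in the paper, word contributions are additionally reweighted via the graph embedding.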

  18. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on the high-speed DLP (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" in a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  19. Are visual peripheries forever young?

    PubMed

    Burnat, Kalina

    2015-01-01

    The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.

  20. NASA Mars rover: a testbed for evaluating applications of covariance intersection

    NASA Astrophysics Data System (ADS)

    Uhlmann, Jeffrey K.; Julier, Simon J.; Kamgar-Parsi, Behzad; Lanzagorta, Marco O.; Shyu, Haw-Jye S.

    1999-07-01

    The Naval Research Laboratory (NRL) has spearheaded the development and application of Covariance Intersection (CI) for a variety of decentralized data fusion problems. Such problems include distributed control, onboard sensor fusion, and dynamic map building and localization. In this paper we describe NRL's development of a CI-based navigation system for the NASA Mars rover that stresses almost all aspects of decentralized data fusion. We also describe how this project relates to NRL's augmented reality, advanced visualization, and REBOT projects.
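    The covariance intersection update itself has a standard closed form: the fused information matrix is a convex combination of the two input information matrices, which stays consistent even when the cross-correlation between the estimates is unknown. The NumPy sketch below shows that standard formulation, not NRL's rover implementation; the example values are arbitrary.

```python
import numpy as np

def covariance_intersection(xa, Pa, xb, Pb, omega=0.5):
    """Fuse two estimates (xa, Pa) and (xb, Pb) with unknown cross-correlation.

    omega in [0, 1] weights the two information matrices; in practice it
    is often chosen to minimize, e.g., the trace of the fused covariance.
    """
    Pa_inv = np.linalg.inv(Pa)
    Pb_inv = np.linalg.inv(Pb)
    P_inv = omega * Pa_inv + (1.0 - omega) * Pb_inv
    P = np.linalg.inv(P_inv)
    x = P @ (omega * Pa_inv @ xa + (1.0 - omega) * Pb_inv @ xb)
    return x, P

# Two 2D position estimates that are confident along different axes.
xa, Pa = np.array([1.0, 2.0]), np.diag([1.0, 4.0])
xb, Pb = np.array([1.2, 1.8]), np.diag([4.0, 1.0])
x, P = covariance_intersection(xa, Pa, xb, Pb, omega=0.5)
```

Unlike the Kalman update, the fused covariance never claims more certainty than the convex combination justifies, which is what makes CI safe for decentralized fusion where the same information may arrive via multiple paths.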

  1. OSIRIX: open source multimodality image navigation software

    NASA Astrophysics Data System (ADS)

    Rosset, Antoine; Pysher, Lance; Spadola, Luca; Ratib, Osman

    2005-04-01

    The goal of our project is to develop a completely new software platform that will allow users to efficiently and conveniently navigate through large sets of multidimensional data without the need for expensive high-end hardware or software. We also elected to develop our system on new open source software libraries, allowing other institutions and developers to contribute to this project. OsiriX is a free and open-source imaging software designed to manipulate and visualize large sets of medical images: http://homepage.mac.com/rossetantoine/osirix/

  2. The National Sports Education Camps Project: Introducing Sports Skills to Students with Visual Impairments through Short-term Specialized Instruction

    ERIC Educational Resources Information Center

    Ponchillia, Paul E.; Armbruster, Jennifer; Wiebold, Jennipher

    2005-01-01

    The National Sports Education Camps Project (NSEC), a joint partnership between Western Michigan University and the United States Association of Blind Athletes, provides short-term interventions to teach sports to children with visual impairments. A study comparing 321 students with visual impairments, ranging in age from 8 to 19 years, before and…

  3. qPortal: A platform for data-driven biomedical research.

    PubMed

    Mohr, Christopher; Friedrich, Andreas; Wojnar, David; Kenar, Erhan; Polatkan, Aydin Can; Codrea, Marius Cosmin; Czemmel, Stefan; Kohlbacher, Oliver; Nahnsen, Sven

    2018-01-01

    Modern biomedical research aims at drawing biological conclusions from large, highly complex biological datasets. It has become common practice to make extensive use of high-throughput technologies that produce large amounts of heterogeneous data. In addition to the ever-improving accuracy, methods are getting faster and cheaper, resulting in a steadily increasing need for scalable data management and easily accessible means of analysis. We present qPortal, a platform providing users with an intuitive way to manage and analyze quantitative biological data. The backend leverages a variety of concepts and technologies, such as relational databases, data stores, data models and means of data transfer, as well as front-end solutions to give users access to data management and easy-to-use analysis options. Users are empowered to conduct their experiments from the experimental design to the visualization of their results through the platform. Here, we illustrate the feature-rich portal by simulating a biomedical study based on publicly available data. We demonstrate the software's strength in supporting the entire project life cycle. The software supports the project design and registration, empowers users to do all-digital project management and finally provides means to perform analysis. We compare our approach to Galaxy, one of the most widely used scientific workflow and analysis platforms in computational biology. Application of both systems to a small case study shows the differences between a data-driven approach (qPortal) and a workflow-driven approach (Galaxy). qPortal, a one-stop-shop solution for biomedical projects, offers up-to-date analysis pipelines, quality control workflows, and visualization tools. Through intensive user interactions, appropriate data models have been developed.
These models build the foundation of our biological data management system and provide possibilities to annotate data, query metadata for statistics, and enable future re-analysis on high-performance computing systems via coupling of workflow management systems. The integration of project and data management as well as workflow resources in one place presents clear advantages over existing solutions.

  4. Design and implementation of visualization methods for the CHANGES Spatial Decision Support System

    NASA Astrophysics Data System (ADS)

    Cristal, Irina; van Westen, Cees; Bakker, Wim; Greiving, Stefan

    2014-05-01

    The CHANGES Spatial Decision Support System (SDSS) is a web-based system aimed at risk assessment and the evaluation of optimal risk reduction alternatives at the local level, serving as a decision support tool in long-term natural risk management. The SDSS uses multidimensional information, integrating thematic, spatial, temporal and documentary data. The role of visualization in this context becomes of vital importance for efficiently representing each dimension. This multidimensional nature of the risk information required by the system, combined with the diversity of the end-users, imposes the use of sophisticated visualization methods and tools. The key goal of the present work is to exploit efficiently the large amount of data in relation to the needs of the end-user, utilizing proper visualization techniques. Three main tasks have been accomplished for this purpose: the categorization of the end-users, the definition of the system's modules, and the data definition. The graphical representation of the data and the visualization tools were designed to be relevant to the data type and the purpose of the analysis. Depending on the end-user category, each user should have access to different modules of the system and thus, to the proper visualization environment. The technologies used for the development of the visualization component combine the latest and most innovative open source JavaScript frameworks, such as OpenLayers 2.13.1, ExtJS 4 and GeoExt 2. Moreover, the model-view-controller (MVC) pattern is used in order to ensure flexibility of the system at the implementation level. Using the above technologies, the visualization techniques implemented so far offer interactive map navigation, querying and comparison tools.
The map comparison tools are of great importance within the SDSS and include the following: swiping tool for comparison of different data of the same location; raster subtraction for comparison of the same phenomena varying in time; linked views for comparison of data from different locations and a time slider tool for monitoring changes in spatio-temporal data. All these techniques are part of the interactive interface of the system and make use of spatial and spatio-temporal data. Further significant aspects of the visualization component include conventional cartographic techniques and visualization of non-spatial data. The main expectation from the present work is to offer efficient visualization of risk-related data in order to facilitate the decision making process, which is the final purpose of the CHANGES SDSS. This work is part of the "CHANGES" project, funded by the European Community's 7th Framework Programme.

  5. NASA visual thesaurus maintenance documentation

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The following document is presented in six sections: (1) introduction; (2) a diagram showing how the various routines are grouped together into functional modules; (3) a printout of all the layouts in the system along with their associated layout procedures; (4) listings of all the global procedures in the system; (5) a cross-reference of all identifiers used in the system; and (6) descriptions of the external procedures used in the system. The document was prepared at the Project ICON Image Scaling Laboratory.

  6. Effects of Spaceflight on Venous and Arterial Compliance

    NASA Technical Reports Server (NTRS)

    Ribeiro, L. C.; Laurie, S. S.; Lee, S. M. C.; Macias, B. R.; Martin, D. S.; Ploutz-Snyder, R.; Stenger, M. B.; Platts, S. H.

    2017-01-01

    The visual impairment and intracranial pressure (VIIP) syndrome is a spaceflight-associated set of symptoms affecting more than 50% of American astronauts who have flown International Space Station (ISS) missions. VIIP is defined primarily by visual acuity deficits and anatomical changes to eye structures (e.g. optic disc edema, choroidal folds, and globe flattening) and is hypothesized to be related to elevated intracranial pressure secondary to a cephalad fluid shift. However, ocular symptoms have not been replicated in subjects completing prolonged bed rest, a well-accepted spaceflight analog. Altered vascular compliance along with spaceflight factors such as diet, radiation exposure, or environmental factors may cause alterations in the cardiovascular system that contribute to the manifestation of ocular changes. Loss of visual acuity could be a significant threat to crew health and performance during and after an exploration mission and may have implications for years post-flight. The overall objective of this project is to determine if spaceflight alters vascular compliance and whether such an adaptation is related to the incidence of VIIP. This objective will be met by completing three separate but related projects.

  7. Electrophysiological signal analysis and visualization using Cloudwave for epilepsy clinical research.

    PubMed

    Jayapandian, Catherine P; Chen, Chien-Hung; Bozorgi, Alireza; Lhatoo, Samden D; Zhang, Guo-Qiang; Sahoo, Satya S

    2013-01-01

    Epilepsy is the most common serious neurological disorder, affecting 50-60 million persons worldwide. Electrophysiological data recordings, such as the electroencephalogram (EEG), are the gold standard for diagnosis and pre-surgical evaluation in epilepsy patients. The increasing trend towards multi-center clinical studies requires signal visualization and analysis tools that support real-time interaction with signal data in a collaborative environment, which cannot be provided by traditional desktop-based standalone applications. As part of the Prevention and Risk Identification of SUDEP Mortality (PRISM) project, we have developed a Web-based electrophysiology data visualization and analysis platform called Cloudwave using highly scalable open source cloud computing infrastructure. Cloudwave is integrated with the PRISM patient cohort identification tool called MEDCIS (Multi-modality Epilepsy Data Capture and Integration System). The Epilepsy and Seizure Ontology (EpSO) underpins both Cloudwave and MEDCIS to support query composition and result retrieval. Cloudwave is being used by clinicians and research staff at the University Hospital - Case Medical Center (UH-CMC) Epilepsy Monitoring Unit (EMU) and will be progressively deployed at four EMUs in the United States and the United Kingdom as part of the PRISM project.

  8. A highly scalable information system as extendable framework solution for medical R&D projects.

    PubMed

    Holzmüller-Laue, Silke; Göde, Bernd; Stoll, Regina; Thurow, Kerstin

    2009-01-01

    Research projects in preventive medicine need flexible information management that offers free planning and documentation of project-specific examinations. The system should allow simple, preferably automated data acquisition from several distributed sources (e.g., mobile sensors, stationary diagnostic systems, questionnaires, manual inputs) as well as effective data management, data use and analysis. An information system fulfilling these requirements has been developed at the Center for Life Science Automation (celisca). This system combines data from multiple investigations and multiple devices and displays them on a single screen. The integration of mobile sensor systems for comfortable, location-independent capture of time-based physiological parameters, together with the possibility of observing these measurements directly within the system, enables new scenarios. The web-based information system presented in this paper is configurable via user interfaces. It covers medical process descriptions, visualization of operative process data, user-friendly processing of process data, modern online interfaces (databases, web services, XML), as well as comfortable support for extended data analysis with third-party applications.

  9. Map of the Pluto System - Children's Edition

    NASA Astrophysics Data System (ADS)

    Hargitai, H. I.

    2016-12-01

    Cartography is a powerful tool in the scientific visualization and communication of spatial data. Cartographic visualization for children requires special methods. Although almost all known solid-surface bodies in the Solar System have been mapped in detail over more than five decades, books and publications that target children, tweens and teens never include any of the cartographic results of these missions. We have developed a series of large-format planetary maps in collaboration with planetary scientists, cartographers and graphic artists. The maps are based on photomosaics and DTMs that were redrawn as artwork. This process necessarily involved generalization, interpretation and transformation into a visual language that can be understood by children. In the first project we selected six planetary bodies (Venus, the Moon, Mars, Io, Europa and Titan) and invited six illustrators of children's books. Although the overall structure of the maps looks similar, the visual approaches differed significantly. An important addition was that the maps contained a narrative: different characters - astronauts or "alien-like lifeforms" - interacted with the surface. The map contents were translated into 11 languages and published online at https://childrensmaps.wordpress.com. We report here on the new map of the series. Following the New Horizons Pluto flyby, we started working on a map that, unlike the others, depicts a planetary system rather than a single body. Since only one hemisphere was imaged in high resolution, this map shows the encounter hemispheres of Pluto and Charon. Projected high-resolution image mosaics with informal nomenclature were provided by the New Horizons Team. The graphic artist is Adrienn Gyöngyösi. Our future plan is to produce a book-format Children's Atlas of Solar System bodies that makes planetary cartographic and astrogeologic results more accessible to children, and to the next generation of planetary scientists among them.

  10. Visualization of planetary subsurface radar sounder data in three dimensions using stereoscopy

    NASA Astrophysics Data System (ADS)

    Frigeri, A.; Federico, C.; Pauselli, C.; Ercoli, M.; Coradini, A.; Orosei, R.

    2010-12-01

    Planetary subsurface sounding radar data extend our knowledge of planetary surfaces to a third dimension: depth. The interpretation of radar-echo delays converted into depth often requires comparative analysis with other data, mainly topography, and radar data from different orbits can be used to investigate the spatial continuity of signals from subsurface geologic features. This scenario requires taking spatially referenced information into account in three dimensions. Three-dimensional objects are generally easier to understand when represented in a three-dimensional space, and this representation can be improved by stereoscopic vision. Since its invention in the first half of the 19th century, stereoscopy has been used in a broad range of applications, including scientific visualization. The rapid improvement of computer graphics and the spread of graphic rendering hardware make it possible to apply the basic principles of stereoscopy in the digital domain, allowing the stereoscopic projection of complex models. Specialized systems for stereoscopic viewing of scientific data have been available in industry, but such proprietary solutions were affordable only to large research institutions. In the last decade, thanks to the GeoWall Consortium, the basics of stereoscopy have been applied to set up stereoscopic viewers based on off-the-shelf hardware. GeoWalls have spread and are now used by several geoscience research institutes and universities. We are exploring techniques for visualizing planetary subsurface sounding radar data in three dimensions, and we are developing a hardware system for rendering them in a stereoscopic vision system. Several Free and Open Source Software tools and libraries are being used, as their level of interoperability is typically high and their licensing model offers the opportunity to quickly implement new functionalities to meet specific needs as the project progresses.
Visualization of planetary radar data in three dimensions is a challenging task, and the exploration of different strategies will lead to the selection of the most appropriate ones for meaningful extraction of information from the products of these innovative instruments.
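The core of the stereoscopy described above is rendering each point twice, from two horizontally offset eye positions; the on-screen parallax between the two projections falls off with depth. A minimal pinhole-camera sketch, with illustrative numbers rather than the GeoWall's actual geometry:

```python
# Stereoscopic projection sketch: project a 3D point once per eye,
# with the eyes separated by the interocular distance e.
# All values are illustrative, not parameters of the described system.

def project(point, eye_x, focal):
    """Pinhole projection of a 3D point (x, y, z) for an eye at
    (eye_x, 0, 0) looking down +z; returns screen coordinates (u, v)."""
    x, y, z = point
    return (focal * (x - eye_x) / z, focal * y / z)

e = 0.065       # interocular distance in metres (typical ~6.5 cm)
focal = 1.0     # viewer-to-screen distance in metres
point = (0.0, 0.0, 2.0)   # e.g. a subsurface reflector rendered 2 m "deep"

u_left, _ = project(point, -e / 2, focal)
u_right, _ = project(point, +e / 2, focal)
parallax = u_left - u_right   # equals focal * e / z

# Farther points yield smaller parallax; fusing the two offset images
# is what produces the perception of depth.
```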

  11. Utilizing a scale model solar system project to visualize important planetary science concepts and develop technology and spatial reasoning skills

    NASA Astrophysics Data System (ADS)

    Kortenkamp, Stephen J.; Brock, Laci

    2016-10-01

    Scale model solar systems have been used for centuries to help educate young students and the public about the vastness of space and the relative sizes of objects. We have adapted the classic scale model solar system activity into a student-driven project for an undergraduate general-education astronomy course at the University of Arizona. Students are challenged to construct and use their three-dimensional models to demonstrate an understanding of numerous concepts in planetary science, including: 1) planetary obliquities, eccentricities, and inclinations; 2) phases and eclipses; 3) planetary transits; 4) asteroid sizes, numbers, and distributions; 5) giant planet satellite and ring systems; 6) the Pluto system and Kuiper belt; 7) the extent of space travel by humans and robotic spacecraft; and 8) the diversity of extrasolar planetary systems. Secondary objectives of the project allow students to develop better spatial reasoning skills and gain familiarity with technology such as Excel formulas, smart-phone photography, and audio/video editing. During our presentation we will distribute a formal description of the project and discuss our expectations of the students, as well as present selected highlights from preliminary submissions.
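The arithmetic behind any such scale model is a single scale factor applied to both sizes and distances. A short sketch using standard astronomical figures; the choice of a 1-metre Sun is an arbitrary example, not the Arizona project's specification:

```python
# Scale model arithmetic: pick one scale factor and apply it to every
# size and distance. Real values are standard figures; the 1-metre Sun
# is an arbitrary illustrative choice.

SUN_DIAMETER_KM = 1_391_000
EARTH_DIAMETER_KM = 12_742
EARTH_ORBIT_KM = 149_600_000   # 1 astronomical unit

scale = 1.0 / SUN_DIAMETER_KM  # metres of model per real kilometre

earth_model_m = EARTH_DIAMETER_KM * scale   # ~9 mm: a small bead
earth_dist_m = EARTH_ORBIT_KM * scale       # ~108 m: a football field away
```

The striking result, and the usual pedagogical payoff, is the ratio: a 9 mm Earth sits over 100 m from a 1 m Sun, which is why walkable models need so much ground.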

  12. Projector-Based Augmented Reality for Quality Inspection of Scanned Objects

    NASA Astrophysics Data System (ADS)

    Kern, J.; Weinmann, M.; Wursthorn, S.

    2017-09-01

    After scanning or reconstructing the geometry of objects, we need to inspect the result of our work. Are there any parts missing? Is every detail covered in the desired quality? We typically do this by looking at the resulting point clouds or meshes of our objects on-screen. What if we could see the information visualized directly on the object itself? Augmented reality is the generic term for bringing virtual information into our real environment. In our paper, we show how we can project any 3D information, such as thematic visualizations or specific monitoring information referenced to our object, onto the object's surface itself, thus augmenting it with additional information. For small objects that could, for instance, be scanned in a laboratory, we propose a low-cost method involving a projector-camera system to solve this task. The user only needs a calibration board with coded fiducial markers to calibrate the system and to estimate the projector's pose later on for projecting textures with information onto the object's surface. Changes within the projected 3D information or of the projector's pose are applied in real time. Our results clearly show that such a simple setup delivers good quality of the augmented information.
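Once the projector's pose and intrinsics are known, drawing onto the object reduces to projecting each 3D surface point through a pinhole model to find the projector pixel that illuminates it. A sketch with made-up intrinsics and pose, not values from the paper's calibration:

```python
# Projection step of a projector-camera AR system: map a 3D point on
# the object's surface to a projector pixel via the pinhole model.
# The intrinsics (fx, fy, cx, cy) and pose (R, t) are illustrative.

def project_to_pixel(X, R, t, fx, fy, cx, cy):
    """X: 3D point in object coordinates; R (3x3), t (3,): projector pose."""
    # transform the point into the projector's coordinate frame
    pc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # perspective divide, then apply the intrinsics
    return (fx * pc[0] / pc[2] + cx, fy * pc[1] / pc[2] + cy)

R_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 1.0]   # projector 1 m in front of the object
u, v = project_to_pixel([0.1, 0.0, 0.0], R_identity, t,
                        fx=1000, fy=1000, cx=640, cy=360)
# (u, v) is the pixel the projector must light to hit that surface
# point; re-evaluating this per frame keeps the overlay registered
# when the projector's pose changes.
```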

  13. Learning about the scale of the solar system using digital planetarium visualizations

    NASA Astrophysics Data System (ADS)

    Yu, Ka Chun; Sahami, Kamran; Dove, James

    2017-07-01

    We studied the use of a digital planetarium for teaching relative distances and sizes in introductory undergraduate astronomy classes. Inspired in part by the classic short film The Powers of Ten and by large physical scale models of the Solar System that can be explored on foot, we created lectures using virtual versions of these two pedagogical approaches for classes that saw either an immersive treatment in the planetarium or a non-immersive version in the regular classroom (with N = 973 students participating in total). Students who visited the planetarium not only had the greatest learning gains, but their performance also increased with time, whereas students who saw the same visuals projected onto a flat display in their classroom showed less retention over time. The gains seen in the students who visited the planetarium show that this medium is a powerful tool for visualizing scale over multiple orders of magnitude. However, the modest gains for the students in the regular classroom also show the utility of these visualization approaches for the broader category of classroom physics simulations.

  14. Total On-line Access Data System (TOADS): Phase II Final Report for the Period August 2002 - August 2004

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuracko, K. L.; Parang, M.; Landguth, D. C.

    2004-09-13

    TOADS (Total On-line Access Data System) is a new generation of real-time monitoring and information management system developed to support unattended environmental monitoring and long-term stewardship of U.S. Department of Energy facilities and sites. TOADS enables project managers, regulators, and stakeholders to view environmental monitoring information in real time over the Internet. Deployment of TOADS at government facilities and sites will reduce the cost of monitoring while increasing confidence and trust in cleanup and long-term stewardship activities. TOADS: reliably interfaces with and acquires data from a wide variety of external databases, remote systems, and sensors such as contaminant monitors, area monitors, atmospheric condition monitors, visual surveillance systems, intrusion devices, motion detectors, fire/heat detection devices, and gas/vapor detectors; provides notification and triggers alarms as appropriate; performs QA/QC on data inputs and logs the status of instruments/devices; provides a fully functional data management system capable of storing, analyzing, and reporting on data; provides an easy-to-use Internet-based user interface with visualization of the site, data, and events; and enables the community to monitor local environmental conditions in real time. During this Phase II STTR project, TOADS was developed and successfully deployed for unattended facility, environmental, and radiological monitoring at a Department of Energy facility.

  15. Robot tracking system improvements and visual calibration of orbiter position for radiator inspection

    NASA Technical Reports Server (NTRS)

    Tonkay, Gregory

    1990-01-01

    The following separate topics are addressed: (1) improving a robotic tracking system; and (2) providing insights into orbiter position calibration for radiator inspection. The objective of the tracking system project was to provide the capability to track moving targets more accurately by adjusting parameters in the control system and implementing a predictive algorithm. A computer model was developed to emulate the tracking system. Using this model as a test bed, a self-tuning algorithm was developed to tune the system gains. The model yielded important findings concerning factors that affect the gains. The self-tuning algorithms will provide the concepts to write a program to automatically tune the gains in the real system. The section concerning orbiter position calibration provides a comparison to previous work that had been performed for plant growth. It provided the conceptualized routines required to visually determine the orbiter position and orientation. Furthermore, it identified the types of information which are required to flow between the robot controller and the vision system.

  16. Analyzing and Detecting Problems in Systems of Systems

    NASA Technical Reports Server (NTRS)

    Lindvall, Mikael; Ackermann, Christopher; Stratton, William C.; Sibol, Deane E.; Godfrey, Sally

    2008-01-01

    Many software systems are evolving complex system of systems (SoS) for which inter-system communication is mission-critical. Evidence indicates that transmission failures and performance issues are not uncommon occurrences. In a NASA-supported Software Assurance Research Program (SARP) project, we are researching a new approach addressing such problems. In this paper, we are presenting an approach for analyzing inter-system communications with the goal to uncover both transmission errors and performance problems. Our approach consists of a visualization and an evaluation component. While the visualization of the observed communication aims to facilitate understanding, the evaluation component automatically checks the conformance of an observed communication (actual) to a desired one (planned). The actual and the planned are represented as sequence diagrams. The evaluation algorithm checks the conformance of the actual to the planned diagram. We have applied our approach to the communication of aerospace systems and were successful in detecting and resolving even subtle and long existing transmission problems.
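The conformance check described above, comparing an actual communication to a planned one, can be sketched as verifying that the planned message order occurs within the observed trace. This is a simplified stand-in for the paper's sequence-diagram evaluation, and the message names are hypothetical:

```python
# Simplified conformance check between a planned and an observed message
# sequence: every planned message must appear in the observed trace in
# order (extra observed messages, e.g. heartbeats, are tolerated).
# Message names are hypothetical, not from the SARP project's systems.

def conforms(observed, planned):
    """True if `planned` is an in-order subsequence of `observed`."""
    it = iter(observed)
    # `msg in it` advances the iterator, enforcing the relative order
    return all(msg in it for msg in planned)

planned = ["connect", "auth", "telemetry", "ack"]
ok_trace = ["connect", "heartbeat", "auth", "telemetry", "ack"]
bad_trace = ["connect", "telemetry", "auth", "ack"]   # out of order
```

A real implementation would also compare message payloads and timing to catch the performance problems the abstract mentions, not just ordering.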

  17. Model-based system engineering approach for the Euclid mission to manage scientific and technical complexity

    NASA Astrophysics Data System (ADS)

    Lorenzo Alvarez, Jose; Metselaar, Harold; Amiaux, Jerome; Saavedra Criado, Gonzalo; Gaspar Venancio, Luis M.; Salvignol, Jean-Christophe; Laureijs, René J.; Vavrek, Roland

    2016-08-01

    In recent years, the systems engineering field has been coming to terms with a paradigm change in the approach to complexity management. Different strategies have been proposed to cope with highly interrelated systems, systems of systems, and collaborative system engineering, and a significant effort is being invested in standardization and ontology definition. In particular, Model-Based System Engineering (MBSE) introduces methodologies for systematic system definition, development, validation, deployment, operation and decommissioning, based on logical and visual relationship mapping rather than traditional 'document-based' information management. Practical implementation in real large-scale projects is not uniform across fields. In space science missions, usage has been limited to subsystems or sample projects, with modeling performed 'a posteriori' in many instances. The main hurdle for the introduction of MBSE practices in new projects is still the difficulty of demonstrating their added value to a project, and whether their benefit is commensurate with the level of effort required to put them in place. In this paper we present the implemented Euclid system modeling activities, along with an analysis of the benefits and limitations identified, supporting in particular requirement break-down and allocation, and verification planning at mission level.

  18. XVIS: Visualization for the Extreme-Scale Scientific-Computation Ecosystem Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geveci, Berk; Maynard, Robert

    The XVis project brings together the key elements of research needed to enable scientific discovery at extreme scale. Scientific computing will no longer be purely about how fast computations can be performed. Energy constraints, processor changes, and I/O limitations necessitate significant changes both in the software applications used in scientific computation and in the ways scientists use them. Components for modeling, simulation, analysis, and visualization must work together in a computational ecosystem, rather than working independently as they have in the past. The XVis project brought together collaborators from the predominant DOE projects for visualization on accelerators, combining their respective features into a new visualization toolkit called VTK-m.

  19. Establishing the fundamentals for an elephant early warning and monitoring system.

    PubMed

    Zeppelzauer, Matthias; Stoeger, Angela S

    2015-09-04

    The decline of habitat for elephants due to expanding human activity is a serious conservation problem. This has continuously escalated the human-elephant conflict in Africa and Asia. Elephants make extensive use of powerful infrasonic calls (rumbles) that travel distances of up to several kilometers. This makes elephants well-suited for acoustic monitoring because it enables detecting elephants even if they are out of sight. In sight, their distinct visual appearance makes them a good candidate for visual monitoring. We provide an integrated overview of our interdisciplinary project that established the scientific fundamentals for a future early warning and monitoring system for humans who regularly experience serious conflict with elephants. We first draw the big picture of an early warning and monitoring system, then review the developed solutions for automatic acoustic and visual detection, discuss specific challenges and present open future work necessary to build a robust and reliable early warning and monitoring system that is able to operate in situ. We present a method for the automated detection of elephant rumbles that is robust to the diverse noise sources present in situ. We evaluated the method on an extensive set of audio data recorded under natural field conditions. Results show that the proposed method outperforms existing approaches and accurately detects elephant rumbles. Our visual detection method shows that tracking elephants in wildlife videos (of different sizes and postures) is feasible and particularly robust at near distances. From our project results we draw a number of conclusions that are discussed and summarized. We clearly identified the most critical challenges and necessary improvements of the proposed detection methods and conclude that our findings have the potential to form the basis for a future automated early warning system for elephants. 
We discuss challenges that need to be solved and summarize open topics in the context of a future early warning and monitoring system. We conclude that a long-term evaluation of the presented methods in situ using real-time prototypes is the most important next step to transfer the developed methods into practical implementation.
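Because rumbles are infrasonic, the simplest possible detector flags windows whose spectral energy is concentrated at very low frequencies. The following is a toy illustration of that idea only, not the robust method developed in the project:

```python
# Toy infrasound detector: flag a signal as a rumble candidate when
# most of its spectral energy lies below ~40 Hz. A deliberate
# simplification for illustration; the paper's method is more robust.
import math

FS = 1000   # sampling rate in Hz
N = 500     # window length -> 2 Hz bin resolution

def band_energy_ratio(samples, cutoff_hz=40):
    """Fraction of DFT energy in bins below cutoff_hz (DC excluded)."""
    energies = []
    for k in range(1, N // 2):   # bin k corresponds to k * FS / N Hz
        re = sum(s * math.cos(2 * math.pi * k * n / N) for n, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * n / N) for n, s in enumerate(samples))
        energies.append(re * re + im * im)
    low = sum(energies[:int(cutoff_hz * N / FS) - 1])
    return low / sum(energies)

def tone(freq_hz):
    return [math.sin(2 * math.pi * freq_hz * n / FS) for n in range(N)]

is_rumble = band_energy_ratio(tone(20)) > 0.5     # 20 Hz: infrasonic
is_speechy = band_energy_ratio(tone(200)) > 0.5   # 200 Hz: not a rumble
```

A field-ready detector must cope with wind, engine and other broadband noise that also carries low-frequency energy, which is exactly the robustness problem the project addresses.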

  20. Holographic data visualization: using synthetic full-parallax holography to share information

    NASA Astrophysics Data System (ADS)

    Dalenius, Tove N.; Rees, Simon; Richardson, Martin

    2017-03-01

    This investigation explores representing information through data visualization using the medium of holography. It is an exploration from the perspective of a creative practitioner deploying a transdisciplinary approach. The task of visualizing and making use of data and "big data" has been the focus of a large number of research projects during the opening of this century. As the amount of data that can be gathered has increased in a short time, our ability to comprehend and extract meaning from the numbers has come to attention. This project looks at the possibility of employing three-dimensional imaging using holography to visualize data and additional information. To explore the viability of the concept, the project set out to transform visualizations of calculated energy and fluid-flow data into a holographic medium. A Computational Fluid Dynamics (CFD) model of flow around a vehicle and a model of solar irradiation on a building were chosen to investigate the process. As no pre-existing software is available to directly transform the data into a compatible format, the team worked collaboratively and transdisciplinarily to achieve an accurate conversion from the format of the calculation and visualization tools to a configuration suitable for synthetic holography production. The project also investigates ideas for layout and design suitable for holographic visualization of energy data. Two completed holograms will be presented. Future possibilities for developing the concept of Holographic Data Visualization are briefly deliberated upon.

  1. Tailoring the visual communication of climate projections for local adaptation practitioners in Germany and the UK.

    PubMed

    Lorenz, Susanne; Dessai, Suraje; Forster, Piers M; Paavola, Jouni

    2015-11-28

    Visualizations are widely used in the communication of climate projections. However, their effectiveness has rarely been assessed among their target audience. Given recent calls to increase the usability of climate information through the tailoring of climate projections, it is imperative to assess the effectiveness of different visualizations. This paper explores the complexities of tailoring through an online survey conducted with 162 local adaptation practitioners in Germany and the UK. The survey examined respondents' assessed and perceived comprehension (PC) of visual representations of climate projections as well as preferences for using different visualizations in communicating and planning for a changing climate. Comprehension and use are tested using four different graph formats, which are split into two pairs. Within each pair the information content is the same but is visualized differently. We show that even within a fairly homogeneous user group, such as local adaptation practitioners, there are clear differences in respondents' comprehension of and preference for visualizations. We do not find a consistent association between assessed comprehension and PC or use within the two pairs of visualizations that we analysed. There is, however, a clear link between PC and use of graph format. This suggests that respondents use what they think they understand the best, rather than what they actually understand the best. These findings highlight that audience-specific targeted communication may be more complex and challenging than previously recognized. © 2015 The Authors.

  2. Research and Support for MTLS Data Management and Visualization

    DOT National Transportation Integrated Search

    2016-09-30

    This report documents the research project "Research and Support for MTLS Data Management and Visualization." The primary goal of the project was to support Caltrans District 4 in their upgrade and enhancement efforts for Mobile Terrestrial Laser Sca...

  3. Immersive visualization of rail simulation data.

    DOT National Transportation Integrated Search

    2016-01-01

    The prime objective of this project was to create scientific, immersive visualizations of a Rail-simulation. This project is a part of a larger initiative that consists of three distinct parts. The first step consists of performing a finite element a...

  4. Guidelines for the use of visualization

    DOT National Transportation Integrated Search

    1998-12-01

    This document is the product of a research project into visualization in the design and public review of transportation facilities. The project's goal was to provide NCDOT engineers and managers with a basic primer on this relatively new technology i...

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drouhard, Margaret MEG G; Steed, Chad A; Hahn, Steven E

    In this paper, we propose strategies and objectives for immersive data visualization with applications in materials science using the Oculus Rift virtual reality headset. We provide background on currently available analysis tools for neutron scattering data and other large-scale materials science projects. In the context of the current challenges facing scientists, we discuss immersive virtual reality visualization as a potentially powerful solution. We introduce a prototype immersive visualization system, developed in conjunction with materials scientists at the Spallation Neutron Source, which we have used to explore large crystal structures and neutron scattering data. Finally, we offer our perspective on the greatest challenges that must be addressed to build effective and intuitive virtual reality analysis tools that will be useful for scientists in a wide range of fields.

  6. Visual Odometry for Autonomous Deep-Space Navigation

    NASA Technical Reports Server (NTRS)

    Robinson, Shane; Pedrotty, Sam

    2016-01-01

    Visual Odometry fills two critical needs shared by all future exploration architectures considered by NASA: Autonomous Rendezvous and Docking (AR&D), and autonomous navigation during loss of comm. To do this, a camera is combined with cutting-edge algorithms (called Visual Odometry) into a unit that provides an accurate relative pose between the camera and the object in the imagery. Recent simulation analyses have demonstrated the ability of this new technology to reliably, accurately, and quickly compute a relative pose. This project advances the technology by both preparing the system to process flight imagery and creating an activity to capture said imagery. This technology can provide a pioneering optical navigation platform capable of supporting a wide variety of future mission scenarios: deep-space rendezvous, asteroid exploration, loss-of-comm.

  7. High-fidelity video and still-image communication based on spectral information: natural vision system and its applications

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Masahiro; Haneishi, Hideaki; Fukuda, Hiroyuki; Kishimoto, Junko; Kanazawa, Hiroshi; Tsuchida, Masaru; Iwama, Ryo; Ohyama, Nagaaki

    2006-01-01

    In addition to the great advancement of high-resolution and large-screen imaging technology, the issue of color is now receiving considerable attention as an aspect distinct from image resolution. It is difficult to reproduce the original color of a subject in conventional imaging systems, and this obstructs applications of visual communication systems in telemedicine, electronic commerce, and digital museums. To break through the limitations of conventional RGB three-primary systems, the "Natural Vision" project aims at an innovative video and still-image communication technology with high-fidelity color reproduction capability based on spectral information. This paper summarizes the results of the NV project, including the development of multispectral and multiprimary imaging technologies and experimental investigations of applications to medicine, digital archives, electronic commerce, and computer graphics.

  8. Haptics-based immersive telerobotic system for improvised explosive device disposal: Are two hands better than one?

    NASA Astrophysics Data System (ADS)

    Erickson, David; Lacheray, Hervé; Lambert, Jason Michel; Mantegh, Iraj; Crymble, Derry; Daly, John; Zhao, Yan

    2012-06-01

    State-of-the-art explosive ordnance disposal (EOD) robots have not, in general, adopted recent advances in control technology and man-machine interfaces, and lag many years behind academia. This paper describes the Haptics-based Immersive Telerobotic System project, which investigates an immersive telepresence environment incorporating advanced vehicle control systems, augmented immersive sensory feedback, dynamic 3D visual information, and haptic feedback for EOD operators. The project aims to provide operators with a more sophisticated interface and expanded sensory input so they can successfully perform the complex tasks required to defeat improvised explosive devices. The introduction of haptics and immersive telepresence has the potential to shift the way telepresence systems work for EOD tasks, and more widely for first-responder scenarios involving remote unmanned ground vehicles.

  9. Different cortical projections from three subdivisions of the rat lateral posterior thalamic nucleus: a single-neuron tracing study with viral vectors.

    PubMed

    Nakamura, Hisashi; Hioki, Hiroyuki; Furuta, Takahiro; Kaneko, Takeshi

    2015-05-01

    The lateral posterior thalamic nucleus (LP) is one of the components of the extrageniculate pathway in the rat visual system, and is cytoarchitecturally divided into three subdivisions--lateral (LPl), rostromedial (LPrm), and caudomedial (LPcm) portions. To clarify the differences in the dendritic fields and axonal arborisations among the three subdivisions, we applied a single-neuron labeling technique with viral vectors to LP neurons. The proximal dendrites of LPl neurons were more numerous than those of LPrm and LPcm neurons, and LPrm neurons tended to have wider dendritic fields than LPl neurons. We then analysed the axonal arborisations of LP neurons by reconstructing the axon fibers in the cortex. The LPl, LPrm and LPcm were different from one another in terms of the projection targets--the main target cortical regions of LPl and LPrm neurons were the secondary and primary visual areas, whereas those of LPcm neurons were the postrhinal and temporal association areas. Furthermore, the principal target cortical layers of LPl neurons in the visual areas were middle layers, but that of LPrm neurons was layer 1. This indicates that LPl and LPrm neurons can be categorised into the core and matrix types of thalamic neurons, respectively, in the visual areas. In addition, LPl neurons formed multiple axonal clusters within the visual areas, whereas the fibers of LPrm neurons were widely and diffusely distributed. It is therefore presumed that these two types of neurons play different roles in visual information processing by dual thalamocortical innervation of the visual areas. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  10. Earth Adventure: Virtual Globe-based Suborbital Atmospheric Greenhouse Gases Exploration

    NASA Astrophysics Data System (ADS)

    Wei, Y.; Landolt, K.; Boyer, A.; Santhana Vannan, S. K.; Wei, Z.; Wang, E.

    2016-12-01

    The Earth Venture Suborbital (EVS) mission is an important component of NASA's Earth System Science Pathfinder program that aims at making substantial advances in Earth system science through measurements from suborbital platforms and modeling research. For example, the Carbon in Arctic Reservoirs Vulnerability Experiment (CARVE) project of EVS-1 collected measurements of greenhouse gases (GHG) on local to regional scales in the Alaskan Arctic. The Atmospheric Carbon and Transport - America (ACT-America) project of EVS-2 will provide advanced, high-resolution measurements of atmospheric profiles and horizontal gradients of CO2 and CH4. As the long-term archival center for CARVE and the future ACT-America data, the Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC) has been developing a versatile data management system for CARVE data to maximize their usability. One of these efforts is the virtual globe-based Suborbital Atmospheric GHG Exploration application. It leverages Google Earth to simulate the 185 flights flown by the C-23 Sherpa aircraft in 2012-2015 for the CARVE project. Using Google Earth's 3D modeling capability and the precise coordinates, altitude, pitch, roll, and heading information recorded every second during each flight, the application provides users with an accurate and vivid simulation of the flight experience, with an active 3D visualization of a C-23 Sherpa aircraft in view. The application provides dynamic visualization of the GHG captured during the flights, including CO2, CO, H2O, and CH4, at the same pace as the flight simulation in Google Earth. Photos taken during those flights are also displayed along the flight paths. In the future, this application will be extended to incorporate more complex GHG measurements (e.g., vertical profiles) from the ACT-America project. 
This application leverages virtual globe technology to provide users an integrated framework for interactively exploring GHG measurements and for linking scientific measurements to the rich virtual planet environment provided by Google Earth. Feedback from users has been positive. The application is a good example of extending basic data visualization into a knowledge discovery experience and maximizing the usability of Earth science observations.
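    Per-second flight records of the kind described above can be replayed in Google Earth as a KML `gx:Track`, which pairs timestamps with coordinates so the globe animates the aircraft along its path. The sketch below is illustrative only: the function name and the two sample records are hypothetical, not CARVE data or the project's actual code.

```python
# Minimal sketch: turn per-second aircraft records into a KML gx:Track
# that Google Earth can animate. Records here are hypothetical samples.

def records_to_kml_track(records):
    """records: list of (iso_time, lon, lat, alt_m) tuples, one per second."""
    whens = "\n".join(f"      <when>{t}</when>" for t, _, _, _ in records)
    coords = "\n".join(
        f"      <gx:coord>{lon} {lat} {alt}</gx:coord>"
        for _, lon, lat, alt in records
    )
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:gx="http://www.google.com/kml/ext/2.2">
  <Placemark>
    <name>Flight replay</name>
    <gx:Track>
      <altitudeMode>absolute</altitudeMode>
{whens}
{coords}
    </gx:Track>
  </Placemark>
</kml>"""

flight = [
    ("2013-06-01T18:00:00Z", -147.86, 64.80, 500.0),
    ("2013-06-01T18:00:01Z", -147.87, 64.81, 510.0),
]
kml = records_to_kml_track(flight)
print("<gx:Track>" in kml)  # True
```

In KML's `gx:Track`, the n-th `<when>` is matched with the n-th `<gx:coord>`, which is what lets the flight replay run at the same pace as the recorded data; attitude (pitch, roll, heading) could be added analogously with `gx:angles` elements.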

  11. Remote video assessment for missile launch facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wagner, G.G.; Stewart, W.A.

    1995-07-01

    The widely dispersed, unmanned launch facilities (LFs) for land-based ICBMs (intercontinental ballistic missiles) currently do not have visual assessment capability for existing intrusion alarms; the security response force must assess each alarm on-site. Remote assessment will enhance manpower, safety, and security efforts. Sandia National Laboratories was tasked by the USAF Electronic Systems Center to research, recommend, and demonstrate a cost-effective remote video assessment capability at missile LFs. The project's charter was to provide: system concepts; market survey analysis; technology search recommendations; and operational hardware demonstrations for remote video assessment from a missile LF to a remote security center via a cost-effective transmission medium and without using visible, on-site lighting. The technical challenges of this project were to: analyze various video transmission media, emphasizing use of the existing missile system copper line, which can be as long as 30 miles; emphasize an extremely low-cost system because of the many sites requiring installation; integrate the video assessment system with the current LF alarm system; and provide video assessment at the remote sites with non-visible lighting.

  12. Mash-up of techniques between data crawling/transfer, data preservation/stewardship and data processing/visualization technologies on a science cloud system designed for Earth and space science: a report of successful operation and science projects of the NICT Science Cloud

    NASA Astrophysics Data System (ADS)

    Murata, K. T.

    2014-12-01

    Data-intensive or data-centric science is the 4th paradigm, after observational and/or experimental science (1st paradigm), theoretical science (2nd paradigm) and numerical science (3rd paradigm). A science cloud is an infrastructure for this 4th methodology. The NICT Science Cloud is designed for big-data sciences of Earth, space and other fields, based on modern informatics and information technologies [1]. Data flow on the cloud passes through three techniques: (1) data crawling and transfer, (2) data preservation and stewardship, and (3) data processing and visualization. Original tools and applications for these techniques have been designed and implemented, and we mash them up on the NICT Science Cloud to build customized systems for each project. In this paper, we discuss science data processing through these three steps. For big-data science, data file deployment on a distributed storage system should be well designed in order to save storage cost and transfer time. We developed a high-bandwidth virtual remote storage system (HbVRS) together with a data crawling tool (NICTY/DLA) and a Wide-area Observation Network Monitoring (WONM) system. Data files are saved on the cloud storage system according to both the data preservation policy and the data processing plan. The storage system is built on distributed file system middleware (Gfarm: GRID datafarm); this is effective because disaster recovery (DR) and parallel data processing are carried out simultaneously without moving the big data from storage to storage. Data files are managed through our Web application, WSDBank (World Science Data Bank). Big data on the cloud are processed via Pwrake, a workflow tool with high-bandwidth I/O. There are several visualization tools on the cloud: VirtualAurora for the magnetosphere and ionosphere, VDVGE for Google Earth, STICKER for urban environment data and STARStouch for multi-disciplinary data. 
There are 30 projects running on the NICT Science Cloud for Earth and space science; in 2013, 56 refereed papers were published. Finally, we introduce a couple of successful Earth and space science results obtained using these three techniques on the NICT Science Cloud. [1] http://sc-web.nict.go.jp

  13. Social Water Science Data: Dimensions, Data Management, and Visualization

    NASA Astrophysics Data System (ADS)

    Jones, A. S.; Horsburgh, J. S.; Flint, C.; Jackson-Smith, D.

    2016-12-01

    Water systems are increasingly conceptualized as coupled human-natural systems, with growing emphasis on representing the human element in hydrology. However, social science data and associated considerations may be unfamiliar and intimidating to many hydrologic researchers. Monitoring social aspects of water systems involves expanding the range of data types typically used in hydrology and appreciating nuances in datasets that are well known to social scientists, but less understood by hydrologists. We define social water science data as any information representing the human aspects of a water system. We present a scheme for classifying these data, highlight an array of data types, and illustrate data management considerations and challenges unique to social science data. This classification scheme was applied to datasets generated as part of iUTAH (innovative Urban Transitions and Arid region Hydro-sustainability), an interdisciplinary water research project based in Utah, USA that seeks to integrate and share social and biophysical water science data. As the project deployed cyberinfrastructure for baseline biophysical data, cyberinfrastructure for analogous social science data was necessary. As a particular case of social water science data, we focus in this presentation on social science survey data. These data are often interpreted through the lens of the original researcher and are typically presented to interested parties in static figures or reports. To provide more exploratory and dynamic communication of these data beyond the individual or team who collected them, we developed a web-based, interactive viewer to visualize social science survey responses. This interface is applicable for examining survey results that reveal human motivations and actions related to environmental systems, and it serves as a useful tool for participatory decision-making. 
It also serves as an example of how new data sharing and visualization tools can be developed once the classification and characteristics of social water science data are well understood. We demonstrate the survey data viewer implemented to explore water-related survey data collected as part of the iUTAH project. The Viewer uses a standardized template for encoding survey data and metadata, making it generalizable and reusable for similar surveys.

  14. Designing clinically useful systems: examples from medicine and dentistry.

    PubMed

    Koch, S

    2003-12-01

    Despite promising results in medical informatics research and the development of a large number of different systems, few systems get beyond the prototype stage and are really used in practice. Among other factors, the lack of an explicit user focus is one major reason. The research projects presented in this paper follow a user-centered system development approach based on extensive work analyses in interdisciplinary working groups, taking into account human cognitive performance. Different medical and health-care specialists, together with researchers in human-computer interaction and medical informatics, specify future clinical work scenarios. Special focus is put on analysis and design of the information and communication flow and on exploration of intuitive visualization and interaction techniques for clinical information. The technical access device is chosen according to the user's work situation. The purpose of this paper is to apply this method in two different research projects and thereby show its potential for designing clinically useful systems that support, rather than hamper, clinical work. These research projects cover IT support for chairside work in dentistry (http://www.dis.uu.se/mdi/research/projects/orquest) and ICT support for home health care of elderly citizens (http://www.medsci.uu.se/mie/project/closecare).

  15. The Process and Product: Crafting Community Portraits with Young People in Flexible Learning Settings

    ERIC Educational Resources Information Center

    Baker, Alison M.

    2016-01-01

    Community-based alternative education is situated on the margins in relation to mainstream education. Young people attending these learning sites are often characterised as "disengaged learners", who have fallen through the cracks of the traditional schooling system. The aim of this project was to use participatory visual methods with…

  16. Educational Technology in Distance Learning (for the Deaf).

    ERIC Educational Resources Information Center

    Hales, Gerald

    This discussion of the use of distance education for deaf students argues that distance education methodologies appear to be relatively attractive to the hearing impaired student because they rely to a substantial extent upon the written word and visual transmission of information. Several projects that use computer or interactive systems to teach…

  17. The Importance of Earth Observations and Data Collaboration within Environmental Intelligence Supporting Arctic Research

    NASA Technical Reports Server (NTRS)

    Casas, Joseph

    2017-01-01

    Within the IARPC Collaboration Team activities of 2016, Arctic in-situ and remote Earth observations advanced topics such as: 1) exploring the role of new and innovative autonomous observing technologies in the Arctic; 2) advancing catalytic national and international community-based observing efforts in support of the National Strategy for the Arctic Region; and 3) enhancing the use of discovery tools for observing system collaboration, such as the U.S. National Oceanic and Atmospheric Administration (NOAA) Arctic Environmental Response Management Application (ERMA) and the U.S. National Aeronautics and Space Administration (NASA) Arctic Collaborative Environment (ACE) project's georeferenced visualization, decision support, and exploitation internet-based tools. Critical to the success of these Earth observations, for both in-situ and remote systems, is the emergence of new and innovative data collection technologies and comprehensive modeling, as well as enhanced communications and cyberinfrastructure capabilities that effectively assimilate and disseminate many environmental intelligence products in a timely manner. The Arctic Collaborative Environment (ACE) project is well positioned to greatly enhance user capabilities for accessing, organizing, visualizing, sharing and producing collaborative knowledge for the Arctic.

  18. Basigin/EMMPRIN/CD147 mediates neuron-glia interactions in the optic lamina of Drosophila.

    PubMed

    Curtin, Kathryn D; Wyman, Robert J; Meinertzhagen, Ian A

    2007-11-15

    Basigin, an IgG-family glycoprotein found on the surface of human metastatic tumors, stimulates fibroblasts to secrete matrix metalloproteases (MMPs) that remodel the extracellular matrix, and is thus also known as Extracellular Matrix MetalloPRotease Inducer (EMMPRIN). Using Drosophila, we previously identified novel roles for basigin. Specifically, photoreceptors of flies with basigin mutant eyes show misplaced nuclei, rough ER and mitochondria, and swollen axon terminals, suggesting cytoskeletal disruptions. Here we demonstrate that basigin is required for normal neuron-glia interactions in the Drosophila visual system. Flies with basigin mutant photoreceptors have misplaced epithelial glial cells within the first optic neuropile, or lamina. In addition, epithelial glia insert finger-like projections--capitate projections (CPs)--into photoreceptor terminals; these are sites of vesicle endocytosis and possibly neurotransmitter recycling. When basigin is missing from photoreceptor terminals, CP formation between glia and photoreceptor terminals is disrupted. Visual system function is also altered in flies with basigin mutant eyes: while photoreceptors depolarize normally to light, synaptic transmission is greatly diminished, consistent with a defect in neurotransmitter release. Basigin expression in photoreceptor neurons is thus required for normal structure and placement of glial cells.

  19. High Performance Real-Time Visualization of Voluminous Scientific Data Through the NOAA Earth Information System (NEIS).

    NASA Astrophysics Data System (ADS)

    Stewart, J.; Hackathorn, E. J.; Joyce, J.; Smith, J. S.

    2014-12-01

    Within our community, data volume is rapidly expanding. These data have limited value if one cannot interact with or visualize them in a timely manner. The scientific community needs the ability to dynamically visualize, analyze, and interact with these data, along with other environmental data, in real time regardless of physical location or data format. Within the National Oceanic and Atmospheric Administration (NOAA), the Earth System Research Laboratory (ESRL) is actively developing the NOAA Earth Information System (NEIS). Previously, the NEIS team investigated methods of data discovery and interoperability. The recent focus has shifted to high-performance real-time visualization, allowing NEIS to bring massive amounts of 4-D data, including output from weather forecast models as well as data from different observations (surface obs, upper air, etc.), into one place. Our server-side architecture provides a real-time stream processing system that utilizes server-based NVIDIA graphics processing units (GPUs) for data processing, wavelet-based compression, and other preparation techniques for visualization, allowing NEIS to minimize the bandwidth and latency of data delivery to end users. On the client side, users interact with NEIS services through TerraViz, the visualization application developed at ESRL. TerraViz is built on the Unity game engine and takes advantage of GPUs, allowing a user to interact with large data sets in real time in ways that might not have been possible before. Through these technologies, the NEIS team has improved accessibility to 'Big Data' and provided tools allowing novel visualization and seamless integration of data across time and space regardless of data size, physical location, or data format. These capabilities provide the ability to see global interactions and their importance for weather prediction. 
Additionally, they allow greater access than currently exists, helping to foster scientific collaboration and new ideas. This presentation will provide an update on recent enhancements to the NEIS architecture and visualization capabilities, challenges faced, and ongoing research activities related to this project.

  20. Solar System Symphony: Combining astronomy with live classical music

    NASA Astrophysics Data System (ADS)

    Kremer, Kyle; WorldWide Telescope

    2017-01-01

    Solar System Symphony is an educational outreach show which combines astronomy visualizations and live classical music. As musicians perform excerpts from Holst’s “The Planets” and other orchestral works, visualizations developed using WorldWide Telescope and NASA images and animations are projected on-stage. Between each movement of music, a narrator guides the audience through scientific highlights of the solar system. The content of Solar System Symphony is geared toward a general audience, particularly targeting K-12 students. The hour-long show not only presents a new medium for exposing a broad audience to astronomy, but also provides universities an effective tool for facilitating interdisciplinary collaboration between two divergent fields. The show was premiered at Northwestern University in May 2016 in partnership with Northwestern’s Bienen School of Music and was recently performed at the Colburn Conservatory of Music in November 2016.

  1. GoIFISH: a system for the quantification of single cell heterogeneity from IFISH images.

    PubMed

    Trinh, Anne; Rye, Inga H; Almendro, Vanessa; Helland, Aslaug; Russnes, Hege G; Markowetz, Florian

    2014-08-26

    Molecular analysis has revealed extensive intra-tumor heterogeneity in human cancer samples, but cannot identify cell-to-cell variations within the tissue microenvironment. In contrast, in situ analysis can identify genetic aberrations in phenotypically defined cell subpopulations while preserving tissue-context specificity. GoIFISH is a widely applicable, user-friendly system tailored for the objective and semi-automated visualization, detection and quantification of genomic alterations and protein expression obtained from fluorescence in situ analysis. In a sample set of HER2-positive breast cancers, GoIFISH is highly robust in visual analysis and its accuracy compares favorably to other leading image analysis methods. GoIFISH is freely available at www.sourceforge.net/projects/goifish/.

  2. Advancements to Visualization Control System (VCS, part of UV-CDAT), a Visualization Package Designed for Climate Scientists

    NASA Astrophysics Data System (ADS)

    Lipsa, D.; Chaudhary, A.; Williams, D. N.; Doutriaux, C.; Jhaveri, S.

    2017-12-01

    Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT, https://uvcdat.llnl.gov) is a data analysis and visualization software package developed at Lawrence Livermore National Laboratory and designed for climate scientists. Core components of UV-CDAT include: 1) the Community Data Management System (CDMS), which provides I/O support and a data model for climate data; 2) CDAT Utilities (GenUtil), which processes data using spatial and temporal averaging and statistical functions; and 3) the Visualization Control System (VCS) for interactive visualization of the data. VCS is a Python visualization package built primarily for climate scientists; however, because of its generality and breadth of functionality, it can be a useful tool in other scientific applications. VCS provides 1D, 2D and 3D visualization functions: scatter plots and line graphs for 1D data; boxfill, meshfill, isofill and isoline for 2D scalar data; vector glyphs and streamlines for 2D vector data; and 3d_scalar and 3d_vector for 3D data. Specifically for climate data, the plotting routines include map projections, Skew-T plots and Taylor diagrams. While VCS provided a user-friendly API, its previous implementation relied on a slow vector-graphics (Cairo) backend suitable only for smaller datasets and non-interactive graphics. The LLNL and Kitware team has added a new backend to VCS that uses the Visualization Toolkit (VTK). VTK is one of the most popular open-source, multi-platform scientific visualization libraries, written in C++. Its use of OpenGL and a pipeline-processing architecture results in a high-performance VCS library, and its multitude of supported data formats and visualization algorithms makes it easy to adopt new visualization methods and new data formats in VCS. 
In this presentation, we describe recent contributions to VCS, including new visualization plots, continuous integration testing using Conda and CircleCI, and tutorials and examples using Jupyter notebooks, as well as planned upgrades that will improve its ease of use and reliability and extend its capabilities.

  3. Interactive Correlation Analysis and Visualization of Climate Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu

    The relationship between our ability to analyze and extract insights from visualizations of climate model output and the capability of the available resources to make those visualizations has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old visualization workflow. The traditional methods for visualizing climate output also have not kept pace with changes in the types of grids used, the number of variables involved, the number of different simulations performed with a climate model, or the feature-richness of high-resolution simulations. This project has developed new and faster methods for visualization in order to get the most knowledge out of the new generation of high-resolution climate models. While traditional climate images will continue to be useful, new approaches to visualization and analysis of climate data are needed if we are to gain all the insights available in ultra-large data sets produced by high-resolution model output and ensemble integrations of climate models, such as those produced for the Coupled Model Intercomparison Project. Toward that end, we have developed new visualization techniques for performing correlation analysis. We have also introduced highly scalable, parallel rendering methods for visualizing large-scale 3D data. This project was done jointly with climate scientists and visualization researchers at Argonne National Laboratory and NCAR.
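    The correlation analysis described above, relating behavior at one location to every other grid cell of a model field, can be sketched in a few lines of NumPy. This is a minimal illustration on a synthetic field, not the project's code; the grid size, the reference cell, and the random data are all assumptions made for the example.

```python
import numpy as np

# Toy climate field: 120 monthly time steps on a 10 x 20 lat-lon grid.
rng = np.random.default_rng(0)
field = rng.standard_normal((120, 10, 20))

# Reference time series: the grid cell at (5, 5).
ref = field[:, 5, 5]

# Pearson correlation of the reference series with every grid cell,
# computed from anomalies (deviations from each cell's time mean).
anom = field - field.mean(axis=0)
ref_anom = ref - ref.mean()
cov = (anom * ref_anom[:, None, None]).mean(axis=0)
corr_map = cov / (anom.std(axis=0) * ref_anom.std())

print(corr_map.shape)             # (10, 20)
print(round(corr_map[5, 5], 3))   # 1.0 (the cell correlates perfectly with itself)
```

A map like `corr_map` is the kind of derived field the project's interactive tools would render; at scale, the same per-cell computation parallelizes trivially across the spatial grid.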

  4. Neurofilament protein is differentially distributed in subpopulations of corticocortical projection neurons in the macaque monkey visual pathways

    NASA Technical Reports Server (NTRS)

    Hof, P. R.; Ungerleider, L. G.; Webster, M. J.; Gattass, R.; Adams, M. M.; Sailstad, C. A.; Morrison, J. H.; Bloom, F. E. (Principal Investigator)

    1996-01-01

    Previous studies of the primate cerebral cortex have shown that neurofilament protein is present in pyramidal neuron subpopulations displaying specific regional and laminar distribution patterns. In order to characterize further the neurochemical phenotype of the neurons furnishing feedforward and feedback pathways in the visual cortex of the macaque monkey, we performed an analysis of the distribution of neurofilament protein in corticocortical projection neurons in areas V1, V2, V3, V3A, V4, and MT. Injections of the retrogradely transported dyes Fast Blue and Diamidino Yellow were placed within areas V4 and MT, or in areas V1 and V2, in 14 adult rhesus monkeys, and the brains of these animals were processed for immunohistochemistry with an antibody to nonphosphorylated epitopes of the medium and heavy molecular weight subunits of the neurofilament protein. Overall, there was a higher proportion of neurons projecting from areas V1, V2, V3, and V3A to area MT that were neurofilament protein-immunoreactive (57-100%), than to area V4 (25-36%). In contrast, feedback projections from areas MT, V4, and V3 exhibited a more consistent proportion of neurofilament protein-containing neurons (70-80%), regardless of their target areas (V1 or V2). In addition, the vast majority of feedback neurons projecting to areas V1 and V2 were located in layers V and VI in areas V4 and MT, while they were observed in both supragranular and infragranular layers in area V3. The laminar distribution of feedforward projecting neurons was heterogeneous. In area V1, Meynert and layer IVB cells were found to project to area MT, while neurons projecting to area V4 were particularly dense in layer III within the foveal representation. In area V2, almost all neurons projecting to areas MT or V4 were located in layer III, whereas they were found in both layers II-III and V-VI in areas V3 and V3A. 
These results suggest that neurofilament protein identifies particular subpopulations of corticocortically projecting neurons with distinct regional and laminar distribution in the monkey visual system. It is possible that the preferential distribution of neurofilament protein within feedforward connections to area MT and all feedback projections is related to other distinctive properties of these corticocortical projection neurons.

  5. Multispectral image analysis for object recognition and classification

    NASA Astrophysics Data System (ADS)

    Viau, C. R.; Payeur, P.; Cretu, A.-M.

    2016-05-01

    Computer and machine vision applications are used in numerous fields to analyze static and dynamic imagery in order to assist or automate decision-making processes. Advancements in sensor technologies now make it possible to capture and visualize imagery at various wavelengths (or bands) of the electromagnetic spectrum. Multispectral imaging has countless applications in various fields including (but not limited to) security, defense, space, medical, manufacturing and archeology. The development of advanced algorithms to process and extract salient information from the imagery is a critical component of the overall system performance. The fundamental objective of this research project was to investigate the benefits of combining imagery from the visual and thermal bands of the electromagnetic spectrum to improve the recognition rates and accuracy of commonly found objects in an office setting. A multispectral dataset (visual and thermal) was captured and features from the visual and thermal images were extracted and used to train support vector machine (SVM) classifiers. The SVM's class prediction ability was evaluated separately on the visual, thermal and multispectral testing datasets.
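    The fusion step described above, concatenating per-object features from the visual and thermal bands into one vector before classification, can be sketched as follows. The features and labels are synthetic stand-ins (not the paper's dataset), and a nearest-centroid rule stands in for the paper's SVM classifier so the sketch needs only NumPy.

```python
import numpy as np

# Illustrative sketch with synthetic data: fuse per-object feature vectors
# from the visual and thermal bands, then classify the fused vectors.
rng = np.random.default_rng(1)

n = 40
visual = rng.standard_normal((n, 8))    # e.g. shape/texture features
thermal = rng.standard_normal((n, 4))   # e.g. heat-signature statistics
labels = np.arange(n) % 2               # two object classes

# Make the classes separable in the thermal band, mimicking the extra
# discriminative information that band can contribute.
thermal[labels == 1] += 3.0

fused = np.hstack([visual, thermal])    # multispectral feature vector

# Nearest-centroid classification on the fused features.
centroids = np.stack([fused[labels == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(fused[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)

print((pred == labels).mean())
```

With real imagery the feature extraction (and the SVM training the paper uses) is the hard part; the point here is only that band fusion reduces to concatenating feature vectors so a single classifier sees both modalities.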

  6. Applying the metro map to software development management

    NASA Astrophysics Data System (ADS)

    Aguirregoitia, Amaia; Dolado, J. Javier; Presedo, Concepción

    2010-01-01

    This paper presents MetroMap, a new graphical representation model for controlling and managing the software development process. MetroMap uses metaphors and visual representation techniques to explore several key indicators in order to support problem detection and resolution. The resulting visualization addresses diverse management tasks, such as tracking deviations from the plan, analyzing patterns of failure detection and correction, assessing change management policies overall, and estimating product quality. The proposed visualization uses a metro map metaphor along with various interactive techniques to represent information concerning the software development process and to deal efficiently with multivariate visual queries. Finally, the paper describes the implementation of the tool in JavaFX with data from a real project, and the results of testing the tool with those data and with users attempting several information retrieval tasks. The conclusion presents the results of analyzing user response time and efficiency with the MetroMap visualization system; the utility of the tool was positively evaluated.

  7. Virtual finger boosts three-dimensional imaging and microsurgery as well as terabyte volume image visualization and analysis.

    PubMed

    Peng, Hanchuan; Tang, Jianyong; Xiao, Hang; Bria, Alessandro; Zhou, Jianlong; Butler, Victoria; Zhou, Zhi; Gonzalez-Bellido, Paloma T; Oh, Seung W; Chen, Jichao; Mitra, Ananya; Tsien, Richard W; Zeng, Hongkui; Ascoli, Giorgio A; Iannello, Giulio; Hawrylycz, Michael; Myers, Eugene; Long, Fuhui

    2014-07-11

    Three-dimensional (3D) bioimaging, visualization and data analysis are in strong need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer-mouse stroke, click or zoom, from the 2D projection plane of an image as visualized on a computer. VF provides efficient methods for the acquisition, visualization and analysis of 3D images of roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also improves by orders of magnitude the efficiency of automated 3D reconstruction of neurons and similar biostructures over our previous systems. We use VF to generate, from images of 1,107 Drosophila GAL4 lines, a projectome of a Drosophila brain.

  8. Large-scale visualization projects for teaching software engineering.

    PubMed

    Müller, Christoph; Reina, Guido; Burch, Michael; Weiskopf, Daniel

    2012-01-01

    The University of Stuttgart's software engineering major complements the traditional computer science major with more practice-oriented education. Two-semester software projects in various application areas offered by the university's different computer science institutes are a successful building block in the curriculum. With this realistic, complex project setting, students experience the practice of software engineering, including software development processes, technologies, and soft skills. In particular, visualization-based projects are popular with students. Such projects offer them the opportunity to gain profound knowledge that would hardly be possible with only regular lectures and homework assignments.

  9. Information-computational system for storage, search and analytical processing of environmental datasets based on the Semantic Web technologies

    NASA Astrophysics Data System (ADS)

    Titov, A.; Gordov, E.; Okladnikov, I.

    2009-04-01

    This report presents a working model of a software system for the storage, semantically-enabled search and retrieval, processing and visualization of environmental datasets containing results of meteorological and air-pollution observations and of mathematical climate modeling. A specially designed metadata standard for machine-readable description of datasets in the meteorology, climate and atmospheric-pollution-transport domains is introduced as one of the key system components. To provide semantic interoperability, the Resource Description Framework (RDF, http://www.w3.org/RDF/) was chosen to realize the metadata description model in the form of an RDF Schema. The final version of the RDF Schema is built on widely used standards such as the Dublin Core Metadata Element Set (http://dublincore.org/), the Directory Interchange Format (DIF, http://gcmd.gsfc.nasa.gov/User/difguide/difman.html), ISO 19139, etc. At present the system is available as a web server (http://climate.risks.scert.ru/metadatabase/) based on the web-portal ATMOS engine [1] and implements dataset management functionality, including SeRQL-based semantic search as well as statistical analysis and visualization of selected data archives [2,3]. The core of the system is an Apache web server in conjunction with the Tomcat Java servlet container (http://jakarta.apache.org/tomcat/) and a Sesame server (http://www.openrdf.org/) used as a database for RDF and RDF Schema. At present, statistical analysis of meteorological and climatic data with subsequent visualization of results is implemented for such datasets as the NCEP/NCAR Reanalysis, NCEP/DOE AMIP-II Reanalysis, JMA/CRIEPI JRA-25, ECMWF ERA-40 and local measurements obtained from meteorological stations on the territory of Russia. This functionality is aimed primarily at identifying the main characteristics of regional climate dynamics. 
The proposed system represents a step in the process of development of a distributed collaborative information-computational environment to support multidisciplinary investigations of the Earth's regional environment [4]. Partial support of this work by SB RAS Integration Project 34, SB RAS Basic Program Project 4.5.2.2, APN Project CBA2007-08NSY and the FP6 Enviro-RISKS project (INCO-CT-2004-013427) is acknowledged. References 1. E.P. Gordov, V.N. Lykosov, and A.Z. Fazliev. Web portal on environmental sciences "ATMOS" // Advances in Geosciences. 2006. Vol. 8. pp. 33-38. 2. Gordov E.P., Okladnikov I.G., Titov A.G. Development of elements of web based information-computational system supporting regional environment processes investigations // Journal of Computational Technologies, Vol. 12, Special Issue #3, 2007, pp. 20-28. 3. Okladnikov I.G., Titov A.G., Melnikova V.N., Shulgina T.M. Web-system for processing and visualization of meteorological and climatic data // Journal of Computational Technologies, Vol. 13, Special Issue #3, 2008, pp. 64-69. 4. Gordov E.P., Lykosov V.N. Development of information-computational infrastructure for integrated study of Siberia environment // Journal of Computational Technologies, Vol. 12, Special Issue #2, 2007, pp. 19-30.
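
The semantic search that SeRQL provides over an RDF store can be illustrated, very loosely, with a toy triple store queried by wildcard patterns. The triples, dataset identifiers and Dublin Core-style property names below are invented for illustration; a real deployment would use an RDF store such as Sesame.

```python
# Toy (subject, predicate, object) store with wildcard pattern matching,
# a drastically simplified sketch of SeRQL-style semantic search.
# All identifiers and property names are illustrative assumptions.

triples = [
    ("ncep_ncar_reanalysis", "dc:subject", "meteorology"),
    ("ncep_ncar_reanalysis", "dc:format",  "netCDF"),
    ("era40",                "dc:subject", "meteorology"),
    ("station_obs_ru",       "dc:subject", "air pollution"),
]

def match(pattern, store):
    """Return all triples matching an (s, p, o) pattern; None is a wildcard."""
    return [t for t in store
            if all(p is None or p == v for p, v in zip(pattern, t))]

# All datasets whose subject is meteorology:
hits = match((None, "dc:subject", "meteorology"), triples)
print([s for s, _, _ in hits])  # -> ['ncep_ncar_reanalysis', 'era40']
```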

  10. Hierarchical organization of macaque and cat cortical sensory systems explored with a novel network processor.

    PubMed

    Hilgetag, C C; O'Neill, M A; Young, M P

    2000-01-29

    Neuroanatomists have described a large number of connections between the various structures of monkey and cat cortical sensory systems. Because of the complexity of the connection data, analysis is required to unravel what principles of organization they imply. To date, analysis of laminar origin and termination connection data to reveal hierarchical relationships between the cortical areas has been the most widely acknowledged approach. We programmed a network processor that searches for optimal hierarchical orderings of cortical areas given known hierarchical constraints and rules for their interpretation. For all cortical systems and all cost functions, the processor found a multitude of equally low-cost hierarchies. Laminar hierarchical constraints that are presently available in the anatomical literature were therefore insufficient to constrain a unique ordering for any of the sensory systems we analysed. Hierarchical orderings of the monkey visual system that have been widely reported, but which were derived by hand, were not among the optimal orderings. All the cortical systems we studied displayed a significant degree of hierarchical organization, and the anatomical constraints from the monkey visual and somato-motor systems were satisfied with very few constraint violations in the optimal hierarchies. The visual and somato-motor systems in that animal were therefore surprisingly strictly hierarchical. Most inconsistencies between the constraints and the hierarchical relationships in the optimal structures for the visual system were related to connections of area FST (fundus of superior temporal sulcus). We found that the hierarchical solutions could be further improved by assuming that FST consists of two areas, which differ in the nature of their projections. 
Indeed, we found that perfect hierarchical arrangements of the primate visual system, without any violation of anatomical constraints, could be obtained under two reasonable conditions, namely the subdivision of FST into two distinct areas, whose connectivity we predict, and the abolition of at least one of the less reliable rule constraints. Our analyses showed that the future collection of the same type of laminar constraints, or the inclusion of new hierarchical constraints from thalamocortical connections, will not resolve the problem of multiple optimal hierarchical representations for the primate visual system. Further data, however, may help to specify the relative ordering of some more areas. This indeterminacy of the visual hierarchy is in part due to the reported absence of some connections between cortical areas. These absences are consistent with limited cross-talk between differentiated processing streams in the system. Hence, hierarchical representation of the visual system is affected by, and must take into account, other organizational features, such as processing streams.
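
The core of the network-processor search can be sketched as an exhaustive scoring of orderings: each candidate ordering is charged one unit of cost per violated "A lies below B" constraint, and all minimum-cost orderings are kept. The areas and constraints below are invented for illustration, not the anatomical data; the tiny example reproduces the paper's central observation that several orderings can tie at the minimum cost.

```python
# Exhaustive search for all minimum-cost hierarchical orderings under
# pairwise "below" constraints. Areas and constraints are invented.
from itertools import permutations

areas = ["V1", "V2", "MT", "FST"]
# Each constraint says the first area should sit below the second.
constraints = [("V1", "V2"), ("V1", "MT"), ("V2", "FST"), ("MT", "FST")]

def cost(order):
    """Number of constraints violated by this bottom-to-top ordering."""
    pos = {a: i for i, a in enumerate(order)}
    return sum(1 for lo, hi in constraints if pos[lo] >= pos[hi])

best = min(cost(p) for p in permutations(areas))
optima = [p for p in permutations(areas) if cost(p) == best]
print(best, len(optima))  # -> 0 2  (two equally good hierarchies)
```

Because V2 and MT are unordered relative to each other, two orderings tie at zero cost; the real data admit a "multitude" of such ties, which is exactly why the laminar constraints alone cannot fix a unique hierarchy.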

  11. The onset of visual experience gates auditory cortex critical periods

    PubMed Central

    Mowery, Todd M.; Kotak, Vibhakar C.; Sanes, Dan H.

    2016-01-01

    Sensory systems influence one another during development and deprivation can lead to cross-modal plasticity. As auditory function begins before vision, we investigate the effect of manipulating visual experience during auditory cortex critical periods (CPs) by assessing the influence of early, normal and delayed eyelid opening on hearing loss-induced changes to membrane and inhibitory synaptic properties. Early eyelid opening closes the auditory cortex CPs precociously and dark rearing prevents this effect. In contrast, delayed eyelid opening extends the auditory cortex CPs by several additional days. The CP for recovery from hearing loss is also closed prematurely by early eyelid opening and extended by delayed eyelid opening. Furthermore, when coupled with transient hearing loss that animals normally fully recover from, very early visual experience leads to inhibitory deficits that persist into adulthood. Finally, we demonstrate a functional projection from the visual to auditory cortex that could mediate these effects. PMID:26786281

  12. An Unmanned Aerial Vehicle Cluster Network Cruise System for Monitor

    NASA Astrophysics Data System (ADS)

    Jiang, Jirong; Tao, Jinpeng; Xin, Guipeng

    2018-06-01

    The existing maritime cruising system mainly uses manned motorboats to monitor coastal water quality and to patrol and maintain navigation-aiding facilities, an approach that suffers from high energy consumption, a small monitoring cruise range, insufficient information control and poor visualization. In recent years, the application of UAS in the maritime field has alleviated these problems to some extent. The cluster-based unmanned network monitoring cruise system designed in this project uses a floating, self-powered launching platform for small UAVs as a carrier, applies the idea of clustering, and combines the strong controllability of multi-rotor UAVs with their capability to carry customized modules, constituting an unmanned, visualized and normalized monitoring cruise network that realizes maritime cruising, maintenance of navigation aids and monitoring of coastal water quality.

  13. Comparison of human driver dynamics in simulators with complex and simple visual displays and in an automobile on the road

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Klein, R. H.

    1975-01-01

    As part of a comprehensive program exploring driver/vehicle system response in lateral steering tasks, driver/vehicle system describing functions and other dynamic data have been gathered in several settings. These include a simple fixed-base simulator with a display showing only elementary roadway delineation; a fixed-base, statically operating automobile with a terrain-model-based, wide-angle projection display; and a full-scale moving-base automobile operating on the road. Dynamic data from the two fixed-base simulators compared favorably, implying that the impoverished visual scene, lack of engine noise, and simplified steering-wheel feel characteristics of the simple simulator did not induce significant variations in driver dynamic behavior. The fixed-base vs. moving-base comparisons showed substantially greater crossover frequencies and phase margins on the road course.

  14. Preliminary development of augmented reality systems for spinal surgery

    NASA Astrophysics Data System (ADS)

    Nguyen, Nhu Q.; Ramjist, Joel M.; Jivraj, Jamil; Jakubovic, Raphael; Deorajh, Ryan; Yang, Victor X. D.

    2017-02-01

    Surgical navigation has been deployed more actively in open spinal surgeries due to the need for improved precision during procedures. Navigation is increasingly difficult in minimally invasive surgeries due to the lack of visual cues caused by smaller exposure sites, which increases a surgeon's dependence on knowledge of anatomical landmarks as well as on the CT or MRI images. The use of augmented reality (AR) systems and registration technologies in spinal surgeries could allow for improvements to these techniques by overlaying a 3D reconstruction of patient anatomy on the surgeon's field of view, creating a mixed-reality visualization. The AR system will be capable of projecting the 3D reconstruction onto a field and of performing preliminary object tracking on a phantom. Dimensional accuracy of the mixed media will also be quantified to account for distortions in tracking.

  15. Effects of heavy ions on visual function and electrophysiology of rodents: the ALTEA-MICE project

    NASA Technical Reports Server (NTRS)

    Sannita, W. G.; Acquaviva, M.; Ball, S. L.; Belli, F.; Bisti, S.; Bidoli, V.; Carozzo, S.; Casolino, M.; Cucinotta, F.; De Pascale, M. P.; hide

    2004-01-01

    ALTEA-MICE will supplement the ALTEA project on astronauts and provide information on the functional visual impairment possibly induced by heavy ions during prolonged operations in microgravity. The goals of ALTEA-MICE are: (1) to investigate the effects of heavy ions on the visual system of normal and mutant mice with retinal defects; (2) to define reliable experimental conditions for space research; and (3) to develop animal models to study the physiological consequences of space travel on humans. A remotely controlled mouse setup, applied electrophysiological recording methods, remote particle monitoring, and experimental procedures were developed and tested. The project has proved feasible under laboratory-controlled conditions comparable in important respects to those of astronauts' exposure to particles in space. Experiments are performed at Brookhaven National Laboratory [BNL] (Upton, NY, USA) and the Gesellschaft für Schwerionenforschung mbH [GSI]/Biophysik (Darmstadt, FRG) to identify possible electrophysiological changes and/or activation of protective mechanisms in response to pulsed radiation. Offline data analyses are in progress and observations are still anecdotal. Electrophysiological changes after pulsed radiation are within the limits of spontaneous variability under anesthesia, with only indirect evidence of possible retinal/cortical responses. Immunostaining showed changes (e.g., increased expression of FGF2 protein in the outer nuclear layer) suggesting a retinal stress reaction to high-energy particles of potential relevance in space. © 2004 COSPAR. Published by Elsevier Ltd. All rights reserved.

  16. Network of anatomical texts (NAnaTex), an open-source project for visualizing the interaction between anatomical terms.

    PubMed

    Momota, Ryusuke; Ohtsuka, Aiji

    2018-01-01

    Anatomy is the science and art of understanding the structure of the body and its components in relation to the functions of the whole-body system. Medicine is based on a deep understanding of anatomy, but quite a few introductory-level learners are overwhelmed by the sheer amount of anatomical terminology that must be understood, so they regard anatomy as a dull and dense subject. To help them learn anatomical terms in a more contextual way, we started a new open-source project, the Network of Anatomical Texts (NAnaTex), which visualizes relationships of body components by integrating text-based anatomical information using Cytoscape, a network visualization software platform. Here, we present a network of bones and muscles produced from literature descriptions. As this network is primarily text-based and does not require any programming knowledge, it is easy to implement new functions or provide extra information by making changes to the original text files. To facilitate collaborations, we deposited the source code files for the network into the GitHub repository (https://github.com/ryusukemomota/nanatex) so that anybody can participate in the evolution of the network and use it for their own non-profit purposes. This project should help not only introductory-level learners but also professional medical practitioners, who could use it as a quick reference.
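
The text-to-network step can be sketched with Cytoscape's simple SIF line convention (source, interaction, target), one relation per line. The anatomical relations below are illustrative examples, not lines taken from the NAnaTex data files.

```python
# Parse SIF-style "source interaction target" lines into an undirected
# adjacency dict. The relations are invented for illustration.

sif_text = """\
biceps_brachii attaches_to scapula
biceps_brachii attaches_to radius
brachialis attaches_to humerus
brachialis attaches_to ulna
"""

def build_graph(text):
    """Parse SIF-style lines into an undirected adjacency dict."""
    adj = {}
    for line in text.strip().splitlines():
        src, _interaction, dst = line.split()
        adj.setdefault(src, set()).add(dst)
        adj.setdefault(dst, set()).add(src)
    return adj

graph = build_graph(sif_text)
print(sorted(graph["biceps_brachii"]))  # -> ['radius', 'scapula']
```

Because the source of truth is plain text, adding a relation is a one-line edit to the data file, which is the collaboration model the project describes.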

  17. Contributions of the SDR Task Network tool to Calibration and Validation of the NPOESS Preparatory Project instruments

    NASA Astrophysics Data System (ADS)

    Feeley, J.; Zajic, J.; Metcalf, A.; Baucom, T.

    2009-12-01

    The National Polar-orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project (NPP) Calibration and Validation (Cal/Val) team is planning post-launch activities to calibrate the NPP sensors and validate Sensor Data Records (SDRs). The IPO has developed a web-based data collection and visualization tool in order to effectively collect, coordinate, and manage the calibration and validation tasks for the OMPS, ATMS, CrIS, and VIIRS instruments. This tool is accessible to the multi-institutional Cal/Val teams consisting of the Prime Contractor and Government Cal/Val leads along with the NASA NPP Mission team, and is used for mission planning and identification/resolution of conflicts between sensor activities. Visualization techniques aid in displaying task dependencies, including prerequisites and exit criteria, allowing for the identification of a critical path. This presentation will highlight how the information is collected, displayed, and used to coordinate the diverse instrument calibration/validation teams.
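
Critical-path identification over a task-dependency graph of this kind reduces to a longest-path computation on a DAG: a task's earliest finish is its duration plus the latest earliest finish among its prerequisites. The task names, durations and prerequisites below are hypothetical, not the actual NPP Cal/Val plan.

```python
# Longest-duration chain (critical path) through a hypothetical
# calibration/validation task DAG. Names and durations (days) are invented.

tasks = {           # task: (duration, prerequisites)
    "launch":        (0, []),
    "atms_activate": (2, ["launch"]),
    "cris_activate": (3, ["launch"]),
    "atms_cal":      (10, ["atms_activate"]),
    "cris_cal":      (14, ["cris_activate"]),
    "sdr_validate":  (5, ["atms_cal", "cris_cal"]),
}

def critical_path(tasks):
    """Return the critical path and its total duration (earliest-finish recursion)."""
    finish = {}
    def ef(name):
        if name not in finish:
            dur, prereqs = tasks[name]
            finish[name] = dur + max((ef(p) for p in prereqs), default=0)
        return finish[name]
    end = max(tasks, key=ef)            # task with the latest earliest finish
    # Walk back along the predecessor that determines each earliest finish.
    path = [end]
    while tasks[path[-1]][1]:
        path.append(max(tasks[path[-1]][1], key=lambda p: finish[p]))
    return list(reversed(path)), finish[end]

path, total = critical_path(tasks)
print(path, total)  # -> ['launch', 'cris_activate', 'cris_cal', 'sdr_validate'] 22
```

Tasks off the critical path (here, the ATMS chain) have slack; displaying that distinction is what lets a planning tool flag the conflicts that actually threaten the schedule.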

  18. Capturing change: the duality of time-lapse imagery to acquire data and depict ecological dynamics

    USGS Publications Warehouse

    Brinley Buckley, Emma M.; Allen, Craig R.; Forsberg, Michael; Farrell, Michael; Caven, Andrew J.

    2017-01-01

    We investigate the scientific and communicative value of time-lapse imagery by exploring applications for data collection and visualization. Time-lapse imagery has a myriad of possible applications to study and depict ecosystems and can operate at unique temporal and spatial scales to bridge the gap between large-scale satellite imagery projects and observational field research. Time-lapse data sequences, linking time-lapse imagery with data visualization, have the ability to make data come alive for a wider audience by connecting abstract numbers to images that root data in time and place. Utilizing imagery from the Platte Basin Timelapse Project, water inundation and vegetation phenology metrics are quantified via image analysis and then paired with passive monitoring data, including streamflow and water chemistry. Dynamic and interactive time-lapse data sequences elucidate the visible and invisible ecological dynamics of a significantly altered yet internationally important river system in central Nebraska.

  19. The community-driven BiG CZ software system for integration and analysis of bio- and geoscience data in the critical zone

    NASA Astrophysics Data System (ADS)

    Aufdenkampe, A. K.; Mayorga, E.; Horsburgh, J. S.; Lehnert, K. A.; Zaslavsky, I.; Valentine, D. W., Jr.; Richard, S. M.; Cheetham, R.; Meyer, F.; Henry, C.; Berg-Cross, G.; Packman, A. I.; Aronson, E. L.

    2014-12-01

    Here we present prototypes of a new scientific software system designed around the new Observations Data Model version 2.0 (ODM2, https://github.com/UCHIC/ODM2) to substantially enhance integration of biological and Geological (BiG) data for Critical Zone (CZ) science. The CZ science community takes as its charge the effort to integrate theory, models and data from the multitude of disciplines collectively studying processes on the Earth's surface. The central scientific challenge of the CZ science community is to develop a "grand unifying theory" of the critical zone through a theory-model-data fusion approach. The key missing need is a cyberinfrastructure for seamless 4D visual exploration of the integrated knowledge (data, model outputs and interpolations) from all the bio- and geoscience disciplines relevant to critical zone structure and function, similar to today's ability to easily explore historical satellite imagery and photographs of the Earth's surface using Google Earth. This project takes the first "BiG" steps toward answering that need. The overall goal of this project is to co-develop with the CZ science and broader community, including natural resource managers and stakeholders, a web-based integration and visualization environment for joint analysis of cross-scale bio and geoscience processes in the critical zone (BiG CZ), spanning experimental and observational designs. 
We will: (1) Engage the CZ and broader community to co-develop and deploy the BiG CZ software stack; (2) Develop the BiG CZ Portal web application for intuitive, high-performance map-based discovery, visualization, access and publication of data by scientists, resource managers, educators and the general public; (3) Develop the BiG CZ Toolbox to enable cyber-savvy CZ scientists to access BiG CZ Application Programming Interfaces (APIs); and (4) Develop the BiG CZ Central software stack to bridge data systems developed for multiple critical zone domains into a single metadata catalog. The entire BiG CZ Software system is being developed on public repositories as a modular suite of open source software projects. It will be built around a new Observations Data Model Version 2.0 (ODM2) that has been developed by members of the BiG CZ project team, with community input, under separate funding.

  20. Communicating Ocean Acidification and Climate Change to Public Audiences Using Scientific Data, Interactive Exploration Tools, and Visual Narratives

    NASA Astrophysics Data System (ADS)

    Miller, M. K.; Rossiter, A.; Spitzer, W.

    2016-12-01

    The Exploratorium, a hands-on science museum, explores local environmental conditions of San Francisco Bay to connect audiences to the larger global implications of ocean acidification and climate change. The work is centered in the Fisher Bay Observatory at Pier 15, a glass-walled gallery sited for explorations of urban San Francisco and the Bay. Interactive exhibits, high-resolution data visualizations, and mediated activities and conversations communicate to public audiences the impacts of excess carbon dioxide in the atmosphere and ocean. Through a 10-year education partnership with NOAA and two environmental literacy grants funded by its Office of Education, the Exploratorium has been part of two distinct but complementary strategies to increase climate literacy beyond traditional classroom settings. We will discuss two projects that address the ways complex scientific information can be transformed into learning opportunities for the public, providing information citizens can use for decision-making in their personal lives and their communities. The Visualizing Change project developed "visual narratives" that combine scientific visualizations and other images with storytelling about the science and potential solutions of climate impacts on the ocean. The narratives were designed to engage curiosity and provide the public with hopeful and useful information to stimulate solutions-oriented behavior rather than to communicate despair about climate change. Training workshops for aquarium and museum docents prepare informal educators to use the narratives and help them frame productive conversations with the public. The Carbon Networks project, led by the Exploratorium, uses local and Pacific Rim data to explore the current state of climate change and ocean acidification. 
The Exploratorium collects and displays local ocean and atmosphere data as a member of the Central and Northern California Ocean Observing System and as an observing station for NOAA's Pacific Marine Environment Lab's carbon buoy network. Other Carbon Network partners, the Pacific Science Center and Waikiki Aquarium, also have access to local carbon data from NOAA. The project collectively explores the development of hands-on activities, teaching resources, and workshops for museum educators and classroom teachers.

  1. Interconnections of the visual cortex with the frontal cortex in the rat.

    PubMed

    Sukekawa, K

    1988-01-01

    Horseradish peroxidase conjugated to wheat germ agglutinin (WGA-HRP) and autoradiography of tritiated leucine were used to trace the cortical origins and terminations of the connections between the visual and frontal cortices in the rat. Ipsilateral reciprocal connections between each subdivision of the visual cortex (areas 17, 18a and 18b) and the posterior half of the medial part of the frontal agranular cortex (PAGm), and their laminar organizations, were confirmed. These connections did not appear to have a significant topographic organization. Although in areas 17 and 18b the terminals or cells of origin of this fiber system were confined to the anterior half of these cortices, in area 18a they were observed spanning the anteroposterior extent of this cortex, with, in part, a column-like organization. No evidence could be found for the participation of the posterior parts of areas 17 and 18b, or of the anterior half of this frontal agranular cortex, in these connections. Fibers from each subdivision of the visual cortex to the PAGm terminated predominantly in the lower part of layer I and in layer II. In area 17, this occipito-frontal projection was found to arise from scattered pyramidal cells in layer V and, more prominently, from pyramidal cells in layer V of the area 17/18a border. In area 18a, the fibers projecting to the PAGm originated mainly from pyramidal cells in layer V and to a lesser extent in layers II, III and VI. In area 18b, by contrast, this projection was found to arise mainly from pyramidal cells in layers II and III, to a lesser extent in layers V and VI, and least frequently in layer IV. On the other hand, the reciprocal projection to the visual cortex was found to originate largely from pyramidal cells in layers III and V of the PAGm. In areas 17 and 18a, these fibers terminated in layers I and VI, and in layers I, V and VI, respectively. In area 18b, they were distributed throughout all layers except layer II.

  2. Project Themis: Water Visualization Study

    DTIC Science & Technology

    2011-09-15

    Project Themis: Water Visualization Study. Allen Bishop, AFRL/RZSE, 15 Sept 2011. Approved for public release; distribution unlimited. Parameters and design space are presented; apparatus is discussed, including the water flow loop and test-section parts, as well as flow measurements (LDV, PLIF).

  3. Evaluation of Blalock-Taussig shunts in newborns: value of oblique MRI planes.

    PubMed

    Kastler, B; Livolsi, A; Germain, P; Zöllner, G; Dietemann, J L

    1991-01-01

    Eight infants with systemic-pulmonary Blalock-Taussig shunts were evaluated by spin-echo, ECG-gated MRI. In contrast to echocardiography, MRI using coronal oblique projections successfully visualized each palliative shunt entirely in a single plane (including one performed on a right aberrant subclavian artery). MRI allowed assessment of the size, course and patency of the shunt, including its pulmonary and subclavian insertions. The proximal portions of the pulmonary and subclavian arteries were also visualized. We conclude that MRI with axial scans complemented by coronal oblique planes is a promising, noninvasive method for imaging the anatomical features of Blalock-Taussig shunts.

  4. The Visual Geophysical Exploration Environment: A Multi-dimensional Scientific Visualization

    NASA Astrophysics Data System (ADS)

    Pandya, R. E.; Domenico, B.; Murray, D.; Marlino, M. R.

    2003-12-01

    The Visual Geophysical Exploration Environment (VGEE) is an online learning environment designed to help undergraduate students understand fundamental Earth system science concepts. The guiding principle of the VGEE is the importance of hands-on interaction with scientific visualization and data. The VGEE consists of four elements: 1) an online, inquiry-based curriculum for guiding student exploration; 2) a suite of El Niño-related data sets adapted for student use; 3) a learner-centered interface to a scientific visualization tool; and 4) a set of concept models (interactive tools that help students understand fundamental scientific concepts). There are two key innovations featured in this interactive poster session. One is the integration of concept models and the visualization tool. Concept models are simple, interactive, Java-based illustrations of fundamental physical principles. We developed eight concept models and integrated them into the visualization tool to enable students to probe data. The ability to probe data using a concept model addresses the common problem of transfer: the difficulty students have in applying theoretical knowledge to everyday phenomena. The other innovation is a visualization environment and data that are discoverable in digital libraries, and installed, configured, and used for investigations over the web. By collaborating with the Integrated Data Viewer developers, we were able to embed a web-launchable visualization tool and access to distributed data sets into the online curricula. The Thematic Real-time Environmental Data Distributed Services (THREDDS) project is working to provide catalogs of datasets that can be used in new VGEE curricula under development. By cataloging these curricula in the Digital Library for Earth System Education (DLESE), learners and educators can discover the data and visualization tool within a framework that guides their use.

  5. Electric scooter pilot project

    NASA Astrophysics Data System (ADS)

    Slanina, Zdenek; Dedek, Jan; Golembiovsky, Matej

    2016-09-01

    This article describes the development of an electric scooter for educational and demonstration purposes at the Technical University of Ostrava. The electric scooter is equipped with a brushless motor with permanent magnets and an engine controller; a measuring and monitoring system for speed regulation, energy-flow control and both online and offline data analysis; a visualization system for real-time diagnostics; and battery management with a system of balancing modules. The implemented device opens a wide area for further scientific research. This article also includes some initial test results and experiences with electric vehicles.

  6. Testing and Evaluation of the Bear Medical Systems, Inc. Bear 33 Volume Ventilator System

    DTIC Science & Technology

    1990-12-01

    Approved for publication. RICHARD J. KNECHT, Lt Col, USAF, NC, Project Scientist; ROGER L. STORK, Col, USAF, BSC, Chief, Crew Systems Branch. ... no problems. After the vibration tests, a visual inspection of the humidifier revealed that a screw and metal clip from a terminal on the incoming ... hexagonal J-bolt nuts, which secure the sled to the litter, with larger wing nuts. This modification will allow the sled to be adequately secured by ...

  7. Design of an off-axis visual display based on a free-form projection screen to realize stereo vision

    NASA Astrophysics Data System (ADS)

    Zhao, Yuanming; Cui, Qingfeng; Piao, Mingxu; Zhao, Lidong

    2017-10-01

    A free-form projection screen is designed for an off-axis visual display, which shows great potential in applications such as flight training by providing both accommodation and convergence cues for pilots. A method based on a point cloud is proposed for the design of the free-form surface, and the design of the point cloud is controlled by a program written in a macro language. In the visual display based on the free-form projection screen, when the error of the screen along the Z-axis is 1 mm, the error of visual distance at each field is less than 1%. The resolution of the design over the full field is better than 1′, which meets the resolution requirement of the human eye.

  8. A zero-footprint 3D visualization system utilizing mobile display technology for timely evaluation of stroke patients

    NASA Astrophysics Data System (ADS)

    Park, Young Woo; Guo, Bing; Mogensen, Monique; Wang, Kevin; Law, Meng; Liu, Brent

    2010-03-01

    When a patient is admitted to the emergency room with suspected stroke, time is of the utmost importance. The infarcted brain area suffers irreparable damage as soon as three hours after the onset of stroke symptoms. A CT scan is one of the standard first-line imaging investigations and is crucial to identify and properly triage stroke cases. The limited availability of an expert radiologist in the emergency environment to diagnose the stroke patient in a timely manner only adds to the challenges within the clinical workflow. Therefore, a truly zero-footprint web-based system with powerful advanced visualization tools for volumetric imaging, including 2D, MIP/MPR, and 3D displays, can greatly facilitate this dynamic clinical workflow for stroke patients. Together with mobile technology, the proper visualization tools can be delivered at the point of decision anywhere and anytime. We present a small pilot project that evaluates the use of mobile devices such as iPhones in evaluating stroke patients. The results of the evaluation, as well as challenges in setting up the system, are also discussed.

  9. SU-D-BRF-04: Digital Tomosynthesis for Improved Daily Setup in Treatment of Liver Lesions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, H; Jones, B; Miften, M

    Purpose: Daily localization of liver lesions with cone-beam CT (CBCT) is difficult due to poor image quality caused by scatter, respiratory motion, and the lack of radiographic contrast between the liver parenchyma and the lesion(s). Digital tomosynthesis (DTS) is investigated as a modality to improve liver visualization and lesion/parenchyma contrast for daily setup. Methods: An in-house tool was developed to generate DTS images using a point-by-point filtered back-projection method from on-board CBCT projection data. DTS image planes are generated in a user-defined orientation to visualize the anatomy at various depths. Reference DTS images are obtained from forward projection of the planning CT dataset at each projection angle. The CBCT DTS image set can then be registered to the reference DTS image set as a means of localization. Contour data from the planning CT's associated RT Structure file are forward projected in the same way as the planning CT data. DTS images are created for each contoured structure, which can then be overlaid onto the DTS images for organ volume visualization. Results: High-resolution DTS images generated from CBCT projections show fine anatomical detail, including small blood vessels, within the patient. However, the reference DTS images generated from forward projection of the planning CT lack this level of detail due to the low resolution of the CT voxels compared with the pixel size in the projection images; typically 1 mm x 1 mm x 3 mm (lat, vrt, lng) for the planning CT vs. 0.4 mm x 0.4 mm for CBCT projections. Overlaying the contours onto the DTS image allows for visualization of structures of interest. Conclusion: The ability to generate DTS images over a limited range of projection angles allows for a reduction in the amount of respiratory motion within each acquisition. DTS may provide improved visualization of structures and lesions as compared to CBCT for highly mobile tumors.
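    The plane-by-plane reconstruction that DTS performs can be illustrated with the classic shift-and-add scheme, a simpler relative of the filtered back-projection method used by the in-house tool. This is a sketch under assumed parallel-beam geometry; the function and parameter names are invented for illustration:

```python
import numpy as np

def tomosynthesis_plane(projections, angles_deg, depth, pixel_size=1.0):
    """Reconstruct one plane by shift-and-add tomosynthesis.

    projections : (n_views, H, W) limited-angle projection images
    angles_deg  : acquisition angle of each view (degrees)
    depth       : distance of the desired plane from the rotation axis

    Structures lying in the chosen plane align across the shifted
    views and reinforce; structures at other depths blur out.
    """
    n = projections.shape[0]
    plane = np.zeros(projections.shape[1:])
    for img, ang in zip(projections, np.radians(angles_deg)):
        # lateral shift (in pixels) of the chosen plane at this view angle
        shift = int(round(depth * np.tan(ang) / pixel_size))
        plane += np.roll(img, shift, axis=1)
    return plane / n
```

Sweeping `depth` over a range of values yields the stack of image planes "at various depths" that the abstract describes.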

  10. Land drainage system detection using IR and visual imagery taken from autonomous mapping airship and evaluation of physical and spatial parameters of suggested method

    NASA Astrophysics Data System (ADS)

    Koska, Bronislav; Křemen, Tomáš; Štroner, Martin; Pospíšil, Jiří; Jirka, Vladimír.

    2014-10-01

    An experimental approach to land drainage system detection and the evaluation of its physical and spatial parameters, in the form of a pilot project, is presented in this paper. The novelty of the approach is partly based on the use of a unique unmanned aerial vehicle, an airship, with specific properties; its most important parameters are a carrying capacity of 15 kg and a long flight time of 3 hours. Special instrumentation was also installed in the locality for testing physical characteristics. The most important item is a 30-meter-high mast with a 3-meter bracket at the top carrying sensors that record absolute and comparative temperature, humidity, and wind speed and direction at several heights on the mast. Several measuring units recording local conditions in the area were also installed. The recorded data were compared with IR images taken from the airship platform. The locality is situated around the village of Domanín in the Czech Republic and covers an area of about 1.8 x 1.5 km. A land drainage system was built there during the 1970s from burnt ceramic blocks placed about 70 cm below the surface. The project documentation of the land drainage system exists, but a survey of its real (as-built) state has never been carried out. The aim of the project was to survey the land drainage system using high-resolution infrared, visual, and combined orthophotos (10 cm for VIS and 30 cm for IR) and to evaluate the spatial and physical parameters of the presented procedure. The orthophotos in the VIS and IR spectra, and their combination, seem to be suitable for the task.

  11. Situational Awareness Applied to Geology Field Mapping using Integration of Semantic Data and Visualization Techniques

    NASA Astrophysics Data System (ADS)

    Houser, P. I. Q.

    2017-12-01

    21st century earth science is data-intensive, characterized by heterogeneous, sometimes voluminous collections representing phenomena at different scales collected for different purposes and managed in disparate ways. However, much of the earth's surface still requires boots-on-the-ground, in-person fieldwork in order to detect the subtle variations from which humans can infer complex structures and patterns. Nevertheless, field experiences can and should be enabled and enhanced by a variety of emerging technologies. The goal of the proposed research project is to pilot test emerging data integration, semantic and visualization technologies for evaluation of their potential usefulness in the field sciences, particularly in the context of field geology. The proposed project will investigate new techniques for data management and integration enabled by semantic web technologies, along with new techniques for augmented reality that can operate on such integrated data to enable in situ visualization in the field. The research objectives include: Develop new technical infrastructure that applies target technologies to field geology; Test, evaluate, and assess the technical infrastructure in a pilot field site; Evaluate the capabilities of the systems for supporting and augmenting field science; and Assess the generality of the system for implementation in new and different types of field sites. Our hypothesis is that these technologies will enable what we call "field science situational awareness" - a cognitive state formerly attained only through long experience in the field - that is highly desirable but difficult to achieve in time- and resource-limited settings. 
Expected outcomes include elucidation of how, and in what ways, these technologies are beneficial in the field; enumeration of the steps and requirements to implement these systems; and cost/benefit analyses that evaluate under what conditions the investments of time and resources to construct such systems are advisable.

  12. Towards a gestural 3D interaction for tangible and three-dimensional GIS visualizations

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Agadakos, Ioannis; Pattakos, Nikolas; Maragakis, Michail

    2014-05-01

    The last decade has been characterized by a significant increase in spatially dependent applications that require storage, visualization, analysis and exploration of geographic information. GIS analysis of spatiotemporal geographic data is operated by highly trained personnel with an abundance of software and tools, lacking interoperability and friendly user interaction. Towards this end, new forms of querying and interaction are emerging, including gestural interfaces. Three-dimensional GIS representations refer to either tangible surfaces or projected representations. Making a 3D tangible geographic representation touch-sensitive may be a convenient solution, but such an approach raises the cost significantly and complicates the hardware and processing required to combine touch-sensitive material (for pinpointing points) with deformable material (for displaying elevations). In this study, a novel interaction scheme upon a three-dimensional visualization of GIS data is proposed. While gestural user interfaces are not yet fully accepted due to inconsistencies and complexity, a non-tangible GIS system where 3D visualizations are projected calls for interactions that are based on three-dimensional, non-contact and gestural procedures. Towards these objectives, we use the Microsoft Kinect II system, which includes a time-of-flight camera, allowing for robust, real-time depth map generation, along with the capturing and translation of a variety of predefined gestures from different simultaneous users. By incorporating these features into our system architecture, we attempt to create a natural way for users to operate on GIS data. Apart from the conventional pan and zoom features, the key function addressed for the 3D user interface is the ability to pinpoint particular points, lines and areas of interest, such as destinations, waypoints, landmarks, closed areas, etc.
The first results shown concern a projected GIS representation where the user selects points and regions of interest while the GIS component responds accordingly by changing the scenario in a natural disaster application. Creating a 3D model representation of geospatial data provides a natural way for users to perceive and interact with space. To the best of our knowledge, this is the first attempt to use Kinect II for GIS applications and, more generally, for virtual environments using novel human-computer interaction methods. Under a robust decision support system, users are able to interact with, combine and computationally analyze information in three dimensions using gestures. This study promotes geographic awareness and education and will prove beneficial for a wide range of geoscience applications, including natural disaster and emergency management. Acknowledgements: This work is partially supported under the framework of the "Cooperation 2011" project ATLANTAS (11_SYN_6_1937), funded by the Operational Program "Competitiveness and Entrepreneurship" (co-funded by the European Regional Development Fund (ERDF)) and managed by the Greek General Secretariat for Research and Technology.
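    The gesture-to-GIS mapping described above can be sketched as a small dispatch layer. This is a hypothetical illustration, not the ATLANTAS implementation; class, gesture, and parameter names are assumptions, and the actual depth-camera SDK is omitted:

```python
from dataclasses import dataclass, field

@dataclass
class GISView:
    """Minimal 3D GIS view state driven by non-contact gestures."""
    center: tuple = (0.0, 0.0)
    zoom: float = 1.0
    selection: list = field(default_factory=list)

    def pan(self, dx, dy):
        cx, cy = self.center
        self.center = (cx + dx / self.zoom, cy + dy / self.zoom)

    def zoom_by(self, factor):
        self.zoom = max(0.1, min(50.0, self.zoom * factor))

    def pinpoint(self, world_xy):
        # a 3D "point" gesture resolved to a ground coordinate
        self.selection.append(world_xy)

def dispatch(view, gesture, **params):
    """Route a recognized gesture (e.g. from a depth-camera SDK) to the view."""
    handlers = {
        "swipe":  lambda: view.pan(params["dx"], params["dy"]),
        "spread": lambda: view.zoom_by(params["factor"]),
        "point":  lambda: view.pinpoint(params["xy"]),
    }
    handlers[gesture]()
```

The dispatch table makes it straightforward to add further predefined gestures (selecting lines, closed areas, etc.) without touching the view state itself.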

  13. Web-based Collaboration and Visualization in the ANDRILL Program

    NASA Astrophysics Data System (ADS)

    Reed, J.; Rack, F. R.; Huffman, L. T.; Cattadori, M.

    2009-12-01

    ANDRILL has embraced the web as a platform for facilitating collaboration and communicating science with educators, students and researchers alike. Two recent ANDRILL education and outreach projects, Project Circle 2008 and the Climate Change Student Summit, brought together classrooms from around the world to participate in cutting-edge science. A large component of each project was the online collaboration achieved through project websites, blogs, and the GroupHub--a secure online environment where students could meet to send messages, exchange presentations and pictures, and even chat live. These technologies enabled students from different countries and time zones to connect and participate in a shared 'conversation' about climate change research. ANDRILL has also developed several interactive, web-based visualizations to make scientific drilling data more engaging and accessible to the science community and the public. Each visualization is designed around three core concepts that enable the Web 2.0 platform, namely, that they are: (1) customizable - a user can customize the visualization to display the exact data she is interested in; (2) linkable - each view in the visualization has a distinct URL that the user can share with her friends via sites like Facebook and Twitter; and (3) mashable - the user can take the visualization, mash it up with data from other sites or her own research, and embed it in her blog or website. The web offers an ideal environment for visualization and collaboration because it requires no special software and works across all computer platforms, which allows organizations and research projects to engage much larger audiences. In this presentation we will describe past challenges and successes, as well as future plans.
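    The "linkable" property described here, every visualization view having a distinct shareable URL, amounts to encoding the view state in the query string. A minimal sketch with hypothetical parameter names (not ANDRILL's actual URL scheme):

```python
from urllib.parse import urlencode, parse_qs, urlparse

def view_to_url(base, core, top_m, bottom_m, log="density"):
    """Encode a core-visualization view as a shareable, bookmarkable link.

    Parameter names are illustrative; the idea is that every view
    state maps to a distinct URL, so a user can post the exact view
    to a blog or social-media site and a reader lands on the same
    display.
    """
    query = urlencode({"core": core, "top": top_m,
                       "bottom": bottom_m, "log": log})
    return f"{base}?{query}"

def url_to_view(url):
    """Recover the view state from a shared link."""
    q = parse_qs(urlparse(url).query)
    return {"core": q["core"][0], "top": float(q["top"][0]),
            "bottom": float(q["bottom"][0]), "log": q["log"][0]}
```

Because the state round-trips through the URL, the same mechanism supports the "mashable" concept: a third-party page can embed the view simply by constructing the link.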

  14. OCULUS fire: a command and control system for fire management with crowd sourcing and social media interconnectivity

    NASA Astrophysics Data System (ADS)

    Thomopoulos, Stelios C. A.; Kyriazanos, Dimitris M.; Astyakopoulos, Alkiviadis; Dimitros, Kostantinos; Margonis, Christos; Thanos, Giorgos Konstantinos; Skroumpelou, Katerina

    2016-05-01

    AF3 (Advanced Forest Fire Fighting) is a European FP7 research project that intends to improve the efficiency of current fire-fighting operations and the protection of human lives, the environment and property by developing innovative technologies and ensuring the integration between existing and new systems. To reach this objective, the AF3 project focuses on innovative active and passive countermeasures, early detection and monitoring, integrated crisis management and advanced public information channels. OCULUS Fire is the innovative command and control system developed within AF3 as a monitoring, GIS and knowledge extraction system and visualization tool. OCULUS Fire includes (a) an interface for real-time updating and reconstruction of maps to enable rerouting based on estimated hazards and risks, (b) processing for dynamic GIS re-construction and mission re-routing based on the fusion of airborne, satellite, ground and ancillary geolocation data, (c) visualization components for the C2 monitoring system, displaying and managing information arriving from a variety of sources, and (d) a mission and situational awareness module for the OCULUS Fire ground monitoring system, which is part of an Integrated Crisis Management Information System for ground and ancillary sensors. OCULUS Fire will also process and visualise information from public information channels, social media and mobile applications used by helpful citizens and volunteers. Social networking, community building and crowdsourcing features will enable higher reliability and lower false alarm rates when using such data in the context of safety and security applications.

  15. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display.

    PubMed

    Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen

    2017-07-01

    Three-dimensional (3D) visualization of preoperative and intraoperative medical information is becoming more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display that lets surgeons observe the surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, the 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n = 5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc., to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
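    The registration step that the authors warm-start is point-to-point ICP. A minimal single-iteration sketch using the closed-form Kabsch/SVD rigid alignment, shown as an illustration of the general technique rather than the paper's point-cloud-selection variant; a "warm start" simply means seeding the iteration loop with a good initial transform instead of the identity:

```python
import numpy as np

def icp_step(src, dst):
    """One iteration of point-to-point ICP.

    src, dst : (N, 3) and (M, 3) point clouds. Returns the rigid
    transform (R, t) that moves src toward dst.
    """
    # nearest-neighbour correspondences (brute force for clarity)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # closed-form rigid transform between the matched sets (Kabsch)
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

In a full ICP loop, `src` would be transformed by `(R, t)` and the step repeated until the alignment error stops decreasing.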

  16. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    PubMed

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot by itself provide a satisfactory quality of experience for radiologists. This paper developed a medical system that can retrieve medical images from the picture archiving and communication system (PACS) on a mobile device over the wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network in the proposed application were also discussed. The results demonstrated that the proposed medical application can provide a smooth interactive experience over WLAN and 3G networks.
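    The adaptive behaviour described, changing remote render parameters with the network status, can be illustrated with a simple policy function. The thresholds and parameter names below are invented for the sketch; the paper does not publish its exact algorithm:

```python
def choose_render_params(bandwidth_kbps, rtt_ms):
    """Pick remote-render parameters from the measured network status.

    A hypothetical policy in the spirit of the paper's adaptive
    algorithm: degrade frame resolution and compression quality as
    the link worsens so that the interactive frame rate stays usable.
    """
    if bandwidth_kbps >= 4000 and rtt_ms < 50:   # good WLAN link
        return {"resolution": (1024, 1024), "jpeg_quality": 90}
    if bandwidth_kbps >= 1000:                   # typical 3G link
        return {"resolution": (512, 512), "jpeg_quality": 75}
    return {"resolution": (256, 256), "jpeg_quality": 50}
```

The proxy server would re-evaluate such a policy periodically and re-render at the chosen settings, trading image fidelity for interactivity on slow links.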

  17. The Development and Evaluation of a Computer-Based System for Managing the Design and Pilot-Testing of Interactive Videodisc Programs. Training and Development Research Center, Project Number Forty-Three.

    ERIC Educational Resources Information Center

    Sayre, Scott Alan

    The purpose of this study was to develop and validate a computer-based system that would allow interactive video developers to integrate and manage the design components prior to production. These components of an interactive video (IVD) program include visual information in a variety of formats, audio information, and instructional techniques,…

  18. Visual abilities of students with severe developmental delay in special needs education - a vision screening project in Northern Jutland, Denmark.

    PubMed

    Welinder, Lotte G; Baggesen, Kirsten L

    2012-12-01

    To investigate the visual abilities of students with severe developmental delay (DD), aged 6-8, starting in special needs education. Between 1 January 2000 and 31 December 2008, we screened the vision of all students with severe DD starting in special needs schools in Northern Jutland, Denmark. All students with visual acuities ≤6/12 were refractioned and examined by an ophthalmologist. Of 502 students, 56 (11%) had visual impairment (VI) [visual acuity (VA) ≤ 6/18], of whom 21 had been previously undiagnosed. Legal blindness was found in 15 students (3%), of whom three had previously been undiagnosed. Students tested with preferential looking systems (N = 78) had significantly lower visual acuities [VA (decimal) = 0.55] than students tested with optotypes [VA (decimal) = 0.91] and had problems participating in the colour and form tests, possibly due to cerebral VI. The number of students with decreased vision identified by screening decreased significantly during the study period (r = 0.724, p = 0.028). The number of students who needed to be screened to find one student with VI was 24; to identify one case of legal blindness, 181 needed to be screened. Visual impairment is a common condition in students with severe DD. Despite increased awareness of VI in the school and health care system, we continued to find a considerable number of students with hitherto undiagnosed decreased vision. © 2011 The Authors. Acta Ophthalmologica © 2011 Acta Ophthalmologica Scandinavica Foundation.

  19. Online geometrical calibration of a mobile C-arm using external sensors

    NASA Astrophysics Data System (ADS)

    Mitschke, Matthias M.; Navab, Nassir; Schuetz, Oliver

    2000-04-01

    3D tomographic reconstruction of high-contrast objects, such as contrast-agent-enhanced blood vessels or bones, from x-ray images acquired by isocentric C-arm systems has recently gained interest. For tomographic reconstruction, a sequence of images is captured during the C-arm rotation around the patient, and the precise projection geometry has to be determined for each image. This is a difficult task, as C-arms usually do not provide accurate information about their projection geometry. Standard methods propose the use of an x-ray calibration phantom and an offline calibration, where the motion of the C-arm is assumed to be reproducible between the calibration and the patient run. However, mobile C-arms usually do not have this desirable property. Therefore, an online recovery of the projection geometry is necessary. Here, we study the use of external tracking systems, such as Polaris or Optotrak from Northern Digital, Inc., for online calibration. In order to use the external tracking system for recovery of the x-ray projection geometry, two unknown transformations have to be estimated: the relation between the x-ray imaging system and the marker plate of the tracking system, and the relation between the world and sensor coordinate systems. We describe our attempt to solve this calibration problem. Experimental results on anatomical data are presented and visually compared with the results of estimating the projection geometry with an x-ray calibration phantom.
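    Once the two fixed transforms have been estimated, the per-image x-ray pose follows by composing homogeneous matrices with the tracked pose of each frame. A schematic sketch; the matrix names are illustrative, and the paper's actual estimation algorithm is not reproduced here:

```python
import numpy as np

def projection_pose(T_tracked, X_xray_marker, W_world_tracker):
    """Compose the x-ray pose for one image from the tracked pose.

    T_tracked      : marker-plate pose reported by the external
                     tracker for this frame (4x4 homogeneous matrix)
    X_xray_marker  : fixed transform x-ray system <- marker plate
    W_world_tracker: fixed transform tracker <- world frame

    X and W are the two unknown transformations: estimated once,
    then reused online for every frame of the patient run.
    """
    return X_xray_marker @ T_tracked @ W_world_tracker
```

With this chain, the online calibration reduces to reading `T_tracked` from the tracker for each acquired image, with no phantom in the patient run.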

  20. Visual habit formation in monkeys with neurotoxic lesions of the ventrocaudal neostriatum

    PubMed Central

    Fernandez-Ruiz, Juan; Wang, Jin; Aigner, Thomas G.; Mishkin, Mortimer

    2001-01-01

    Visual habit formation in monkeys, assessed by concurrent visual discrimination learning with 24-h intertrial intervals (ITI), was found earlier to be impaired by removal of the inferior temporal visual area (TE) but not by removal of either the medial temporal lobe or inferior prefrontal convexity, two of TE's major projection targets. To assess the role in this form of learning of another pair of structures to which TE projects, namely the rostral portion of the tail of the caudate nucleus and the overlying ventrocaudal putamen, we injected a neurotoxin into this neostriatal region of several monkeys and tested them on the 24-h ITI task as well as on a test of visual recognition memory. Compared with unoperated monkeys, the experimental animals were unaffected on the recognition test but showed an impairment on the 24-h ITI task that was highly correlated with the extent of their neostriatal damage. The findings suggest that TE and its projection areas in the ventrocaudal neostriatum form part of a circuit that selectively mediates visual habit formation. PMID:11274442

  1. A Visual Database System for Image Analysis on Parallel Computers and its Application to the EOS Amazon Project

    NASA Technical Reports Server (NTRS)

    Shapiro, Linda G.; Tanimoto, Steven L.; Ahrens, James P.

    1996-01-01

    The goal of this task was to create a design and prototype implementation of a database environment particularly suited to handling the image, vision and scientific data associated with NASA's EOS Amazon project. The focus was on a data model and query facilities designed to execute efficiently on parallel computers. A key feature of the environment is an interface that allows a scientist to specify high-level directives about how query execution should occur.

  2. Ultrasonic Waves in Water Visualized With Schlieren Imaging

    NASA Technical Reports Server (NTRS)

    Juergens, Jeffrey R.

    2000-01-01

    The Acoustic Liquid Manipulation project at the NASA Glenn Research Center at Lewis Field is working with high-intensity ultrasound waves to produce acoustic radiation pressure and acoustic streaming. These effects can be used to propel liquid flows to manipulate floating objects and liquid surfaces. Interest in acoustic liquid manipulation has been shown in acoustically enhanced circuit board electroplating, microelectromechanical systems (MEMS), and microgravity space experiments. The current areas of work on this project include phased-array ultrasonic beam steering, acoustic intensity measurements, and schlieren imaging of the ultrasonic waves.

  3. Data compression for full motion video transmission

    NASA Technical Reports Server (NTRS)

    Whyte, Wayne A., Jr.; Sayood, Khalid

    1991-01-01

    Clearly, transmission of visual information will be a major, if not dominant, factor in determining the requirements for, and assessing the performance of, the Space Exploration Initiative (SEI) communications systems. Projected image/video requirements currently anticipated for SEI mission scenarios are presented. Based on this information and projected link performance figures, the image/video data compression requirements that would allow link closure are identified. Finally, several approaches that could satisfy some of the compression requirements are presented, and possible future approaches that show promise for more substantial compression performance improvements are discussed.

  4. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which is a set of local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
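    Local frequency coding of the kind the model posits is commonly illustrated with Gabor filters, Gaussian-windowed sinusoids that respond selectively to a local spatial frequency and orientation. A minimal sketch with illustrative parameters, not the project's own implementation:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """2D Gabor filter: a Gaussian-windowed sinusoid, the standard
    model of a local-frequency-selective unit in early vision.

    size       : kernel width/height in pixels (odd)
    wavelength : period of the sinusoidal carrier, in pixels
    theta      : carrier orientation, in radians
    sigma      : standard deviation of the Gaussian envelope
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # rotate coordinates so the carrier runs along direction theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier
```

A bank of such kernels at several wavelengths and orientations, convolved with an image, yields the local frequency representation on which tasks like shape-from-texture can operate.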

  5. Visual force feedback in laparoscopic training.

    PubMed

    Horeman, Tim; Rodrigues, Sharon P; van den Dobbelsteen, John J; Jansen, Frank-Willem; Dankelman, Jenny

    2012-01-01

    To improve endoscopic surgical skills, an increasing number of surgical residents practice on box or virtual reality (VR) trainers. Current training is focused mainly on hand-eye coordination. Training methods that focus on applying the right amount of force are not yet available. The aim of this project is to develop a low-cost training system that measures the interaction force between tissue and instruments and displays a visual representation of the applied forces inside the camera image. This visual representation continuously informs the subject about the magnitude and the direction of applied forces. To show the potential of the developed training system, a pilot study was conducted in which six novices performed a needle-driving task in a box trainer with visual feedback of the force, and six novices performed the same task without visual feedback of the force. All subjects performed the training task five times and were subsequently tested in a post-test without visual feedback. The subjects who received visual feedback during training exerted on average 1.3 N (STD 0.6 N) to drive the needle through the tissue during the post-test. This value was considerably higher for the group that received no feedback (2.6 N, STD 0.9 N). The maximum interaction force during the post-test was noticeably lower for the feedback group (4.1 N, STD 1.1 N) compared with that of the control group (8.0 N, STD 3.3 N). The force-sensing training system provides us with the unique possibility to objectively assess tissue-handling skills in a laboratory setting. The real-time visualization of applied forces during training may facilitate acquisition of tissue-handling skills in complex laparoscopic tasks and could stimulate proficiency gain curves of trainees. However, larger randomized trials that also include other tasks are necessary to determine whether training with visual feedback about forces reduces the interaction force during laparoscopic surgery.
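    The kind of on-image cue described, a continuous display of the magnitude and direction of the applied force, can be sketched as a small mapping from the sensed force vector to overlay properties. The thresholds and pixel scaling below are invented for illustration and are not the values used in the study:

```python
import math

def force_overlay(fx, fy, fz, warn_at=2.0, max_at=4.0):
    """Map a measured tool-tissue force (in newtons) to a visual cue.

    A hypothetical rendition of in-image force feedback: arrow
    length tracks the force magnitude, and the colour switches from
    green to amber to red as the force approaches a safety threshold.
    """
    magnitude = math.sqrt(fx * fx + fy * fy + fz * fz)
    if magnitude < warn_at:
        colour = "green"
    elif magnitude < max_at:
        colour = "amber"
    else:
        colour = "red"
    # arrow length in pixels, clipped so the cue stays inside the image
    length_px = min(100.0, 25.0 * magnitude)
    return {"magnitude": round(magnitude, 2), "colour": colour,
            "length_px": length_px}
```

Updating such an overlay every camera frame gives the trainee the continuous information about applied force that the training system provides.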

  6. PLANETarium - Visualizing Earth Sciences in the Planetarium

    NASA Astrophysics Data System (ADS)

    Ballmer, M. D.; Wiethoff, T.; Kraupe, T. W.

    2013-12-01

    In the past decade, projection systems in most planetariums, traditional sites of outreach and public education, have advanced from instruments that visualize the motion of stars as beam spots moving over spherical projection surfaces to systems able to display multicolor, high-resolution, immersive full-dome videos and images. These extraordinary capabilities are ideally suited for visualizing global processes occurring on the surface and within the interior of the Earth, a spherical body just like the full dome. So far, however, our community has largely ignored this wonderful interface for outreach and education. A few documentaries on, e.g., climate change or volcanic eruptions have been brought to planetariums, but they take little advantage of the true potential of the medium, as they are mostly based on standard two-dimensional videos and cartoon-style animations. Along these lines, we here propose a framework to convey recent scientific results on the origin and evolution of our PLANET to the >100,000,000-per-year worldwide audience of planetariums, making the traditionally astronomy-focussed interface a true PLANETarium. In order to do this most efficiently, we intend to directly show visualizations of scientific datasets or models originally designed for basic research. Such visualizations in the solid-Earth, atmospheric and ocean sciences are expected to be renderable to the dome with little or no effort. For example, showing global geophysical datasets (e.g., surface temperature, gravity, magnetic field), or horizontal slices of seismic-tomography images and of spherical computer simulations (e.g., climate evolution, mantle flow or ocean currents), requires almost no rendering at all. Three-dimensional Cartesian datasets or models can be rendered using standard methods.
With the appropriate audio support, present-day science visualizations are typically as intuitive as cartoon-style animations, yet more appealing visually and clearly more informative, as they reveal the complexity and beauty of our planet. In addition to, e.g., climate change and natural hazards, themes of interest may include the coupled evolution of the Earth's interior and life, from the accretion of our planet to the generation and sustainment of the magnetic field as well as of habitable conditions in the atmosphere and oceans. We believe that high-quality, tax-funded science visualizations should not be used exclusively to facilitate communication among scientists, but should also be directly recycled to raise the public's awareness and appreciation of the geosciences.
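    Rendering flat geoscience imagery to a dome largely reduces to a coordinate mapping. Below is a sketch of one common dome-master convention (azimuthal equidistant, with the zenith at the image centre and the horizon on the unit circle), assumed here for illustration; specific planetarium systems may use other projections:

```python
import math

def dome_to_latlon(u, v):
    """Map dome-master image coordinates (u, v in [-1, 1]) to
    latitude/longitude on the hemisphere seen by the audience.

    Azimuthal-equidistant convention: the zenith sits at the image
    centre (r = 0) and the horizon lies on the unit circle (r = 1).
    Sampling a global dataset at the returned lat/lon for every dome
    pixel produces the full-dome frame.
    """
    r = math.hypot(u, v)
    if r > 1.0:
        return None            # outside the dome circle
    lat = 90.0 * (1.0 - r)     # zenith at 90 deg, horizon at 0 deg
    lon = math.degrees(math.atan2(v, u))
    return lat, lon
```

Running this per pixel over, e.g., an equirectangular temperature or tomography map is the "little or no effort" rendering step the abstract refers to.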

  7. Electrophysiology Meets Ecology: Investigating How Vision is Tuned to the Life Style of an Animal using Electroretinography.

    PubMed

    Stowasser, Annette; Mohr, Sarah; Buschbeck, Elke; Vilinsky, Ilya

    2015-01-01

    Students learn best when projects are multidisciplinary, hands-on, and provide ample opportunity for self-driven investigation. We present a teaching unit that leads students to explore relationships between sensory function and ecology. Field studies, which are rare in neurobiology education, are combined with laboratory experiments that assess visual properties of insect eyes, using electroretinography (ERG). Comprising nearly one million species, insects are a diverse group of animals, living in nearly all habitats and ecological niches. Each of these lifestyles puts different demands on their visual systems, and accordingly, insects display a wide array of eye organizations and specializations. Physiologically relevant differences can be measured using relatively simple extracellular electrophysiological methods that can be carried out with standard equipment, much of which is already in place in most physiology laboratories. The teaching unit takes advantage of the large pool of locally available species, some of which likely show specialized visual properties that can be measured by students. In the course of the experiments, students collect local insects or other arthropods of their choice, are guided to formulate hypotheses about how the visual system of "their" insects might be tuned to the lifestyle of the species, and use ERGs to investigate the insects' visual response dynamics, and both chromatic and temporal properties of the visual system. Students are then guided to interpret their results in both a comparative physiological and ecological context. This set of experiments closely mirrors authentic research and has proven to be a popular, informative and highly engaging teaching tool.

  8. Electrophysiology Meets Ecology: Investigating How Vision is Tuned to the Life Style of an Animal using Electroretinography

    PubMed Central

    Stowasser, Annette; Mohr, Sarah; Buschbeck, Elke; Vilinsky, Ilya

    2015-01-01

    Students learn best when projects are multidisciplinary, hands-on, and provide ample opportunity for self-driven investigation. We present a teaching unit that leads students to explore relationships between sensory function and ecology. Field studies, which are rare in neurobiology education, are combined with laboratory experiments that assess visual properties of insect eyes, using electroretinography (ERG). Comprising nearly one million species, insects are a diverse group of animals, living in nearly all habitats and ecological niches. Each of these lifestyles puts different demands on their visual systems, and accordingly, insects display a wide array of eye organizations and specializations. Physiologically relevant differences can be measured using relatively simple extracellular electrophysiological methods that can be carried out with standard equipment, much of which is already in place in most physiology laboratories. The teaching unit takes advantage of the large pool of locally available species, some of which likely show specialized visual properties that can be measured by students. In the course of the experiments, students collect local insects or other arthropods of their choice, are guided to formulate hypotheses about how the visual system of “their” insects might be tuned to the lifestyle of the species, and use ERGs to investigate the insects’ visual response dynamics, and both chromatic and temporal properties of the visual system. Students are then guided to interpret their results in both a comparative physiological and ecological context. This set of experiments closely mirrors authentic research and has proven to be a popular, informative and highly engaging teaching tool. PMID:26240534

  9. A computer system for the storage and retrieval of gravity data, Kingdom of Saudi Arabia

    USGS Publications Warehouse

    Godson, Richard H.; Andreasen, Gordon H.

    1974-01-01

    A computer system has been developed for the systematic storage and retrieval of gravity data. All pertinent facts relating to gravity station measurements and computed Bouguer values may be retrieved either by project name or by geographical coordinates. Features of the system include visual display in the form of printer listings of gravity data and printer plots of station locations. The retrieved data format interfaces with the format of GEOPAC, a system of computer programs designed for the analysis of geophysical data.
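The two retrieval modes described, by project name or by geographical coordinates, amount to simple predicate filters over the station records; a minimal sketch (the station values, project names, and field names here are invented for illustration and are not from the actual Saudi Arabian file or GEOPAC):

```python
# Hypothetical gravity-station records; each has a project name,
# geographic coordinates, and a computed Bouguer anomaly value.
stations = [
    {"project": "Asir",   "lat": 18.2, "lon": 42.5, "bouguer_mgal": -61.3},
    {"project": "Asir",   "lat": 19.0, "lon": 41.9, "bouguer_mgal": -55.0},
    {"project": "Tihama", "lat": 20.1, "lon": 40.3, "bouguer_mgal": -12.7},
]

def by_project(name):
    """Retrieve all stations belonging to one survey project."""
    return [s for s in stations if s["project"] == name]

def by_window(lat_min, lat_max, lon_min, lon_max):
    """Retrieve all stations inside a geographic bounding box."""
    return [s for s in stations
            if lat_min <= s["lat"] <= lat_max
            and lon_min <= s["lon"] <= lon_max]
```

Either result set can then feed a listing or a station-location plot, as the record describes.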

  10. Using RSVP for analyzing state and previous activities for the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Cooper, Brian K.; Hartman, Frank; Maxwell, Scott; Wright, John; Yen, Jeng

    2004-01-01

    Current developments in immersive environments for mission planning include several tools which make up a system for performing and rehearsing missions. This system, known as the Rover Sequencing and Visualization Program (RSVP), includes tools for planning long-range sorties for highly autonomous rovers, tools for planning operations with robotic arms, and advanced tools for visualizing telemetry from remote spacecraft and landers. One of the keys to successful planning of rover activities is knowing what the rover has accomplished to date and understanding the current rover state. RSVP builds on the lessons learned and the heritage of the Mars Pathfinder mission. This paper will discuss the tools and methodologies present in the RSVP suite for examining rover state, reviewing previous activities, visually comparing telemetered results to rehearsed results, and reviewing science and engineering imagery. In addition, we will present how this tool suite was used on the Mars Exploration Rovers (MER) project to explore the surface of Mars.

  11. Event Display for the Visualization of CMS Events

    NASA Astrophysics Data System (ADS)

    Bauerdick, L. A. T.; Eulisse, G.; Jones, C. D.; Kovalskyi, D.; McCauley, T.; Mrak Tadel, A.; Muelmenstaedt, J.; Osborne, I.; Tadel, M.; Tu, Y.; Yagil, A.

    2011-12-01

    During the last year the CMS experiment engaged in consolidation of its existing event display programs. The core of the new system is based on the Fireworks event display program, which was by design directly integrated with the CMS Event Data Model (EDM) and the light version of the software framework (FWLite). The Event Visualization Environment (EVE) of the ROOT framework is used to manage a consistent set of 3D and 2D views, selection, user feedback and user interaction with the graphics windows; several EVE components were developed by CMS in collaboration with the ROOT project. In event-display operation, simple plugins are registered into the system to perform conversion from EDM collections into their visual representations, which are then managed by the application. Full event navigation and filtering, as well as collection-level filtering, are supported. The same data-extraction principle can also be applied when Fireworks eventually operates as a service within the full software framework.
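The plugin mechanism described, converters registered per collection type that turn physics objects into visual representations, can be sketched as a type-keyed registry. This is a toy illustration only; Fireworks itself is C++, and all names below are invented:

```python
# Toy registry: one converter is registered per collection type and
# turns a single data object into a drawable proxy for the scene.
registry = {}

def register(collection_type):
    def decorator(fn):
        registry[collection_type] = fn
        return fn
    return decorator

@register("Track")
def track_to_polyline(track):
    return {"kind": "polyline", "points": track["hits"]}

@register("CaloTower")
def tower_to_box(tower):
    return {"kind": "box", "energy": tower["et"]}

def build_scene(event):
    # The application walks the event's collections and dispatches each
    # object to the converter registered for its collection type.
    return [registry[ctype](obj)
            for ctype, objects in event.items() for obj in objects]
```

Because the converters only read from the data model, the same extraction scheme works whether the objects come from FWLite or, eventually, from the full framework.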

  12. Imaging of the human choroid with a 1.7 MHz A-scan rate FDML swept source OCT system

    NASA Astrophysics Data System (ADS)

    Gorczynska, I.; Migacz, J. V.; Jonnal, R.; Zawadzki, R. J.; Poddar, R.; Werner, J. S.

    2017-02-01

    We demonstrate OCT angiography (OCTA) and Doppler OCT imaging of the choroid in the eyes of two healthy volunteers and in a geographic atrophy case. We show that visualization of specific choroidal layers requires selection of appropriate OCTA methods. We investigate how imaging speed, B-scan averaging and scanning density influence visualization of various choroidal vessels. We introduce spatial power spectrum analysis of OCT en face angiographic projections as a method of quantitative analysis of choriocapillaris morphology. We explore the possibility of Doppler OCT imaging to provide information about directionality of blood flow in choroidal vessels. To achieve these goals, we have developed OCT systems utilizing an FDML laser operating at a 1.7 MHz sweep rate, at 1060 nm center wavelength, and with 7.5 μm axial imaging resolution. A correlation-mapping OCTA method was implemented for visualization of the vessels. The joint Spectral and Time domain OCT (STdOCT) technique was used for Doppler OCT imaging.
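A spatial power-spectrum analysis of an en face projection can be sketched generically as a 2D FFT followed by azimuthal (radial) averaging; this is our reading of the method, not the authors' code, and the binning details are assumptions:

```python
import numpy as np

def radial_power_spectrum(img, nbins=64):
    """Radially averaged spatial power spectrum of a 2D en face image:
    remove the mean, take the 2D FFT, and average |F|^2 over annuli of
    constant radial spatial frequency."""
    img = img - img.mean()                       # suppress the DC peak
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - h / 2, x - w / 2)           # radial frequency index
    bins = np.linspace(0, r.max(), nbins + 1)
    which = (np.digitize(r.ravel(), bins) - 1).clip(0, nbins - 1)
    totals = np.bincount(which, weights=power.ravel(), minlength=nbins)
    counts = np.bincount(which, minlength=nbins)
    return totals / np.maximum(counts, 1)
```

For a quasi-regular vascular mosaic such as the choriocapillaris, the location of the spectral peak gives a characteristic spacing of the pattern.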

  13. MEMS technologies for epiretinal stimulation of the retina

    NASA Astrophysics Data System (ADS)

    Mokwa, W.

    2004-09-01

    It has been shown that electrical stimulation of retinal ganglion cells yields visual sensations. Therefore, a retina implant for blind humans suffering from retinitis pigmentosa based on this concept seems to be feasible. In Germany, there are two government-funded projects working on different approaches, namely the subretinal and the epiretinal approach. This paper describes the epiretinal approach for such a system. The extraocular part of this system records visual images. The images are transformed by a neural net into corresponding signals for stimulation of the retinal ganglion cells. These signals are transmitted to a receiver unit of an intraocular implant, the retina stimulator. Integrated circuitry of this unit decodes the signals and transfers the data to a stimulation circuitry that selects stimulation electrodes placed onto the retina and generates current pulses to the electrodes. By this, action potentials in retinal ganglion cells are evoked, causing a visual sensation. This paper concentrates on the MEMS part of this implant.

  14. Distributed 3D Information Visualization - Towards Integration of the Dynamic 3D Graphics and Web Services

    NASA Astrophysics Data System (ADS)

    Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris

    This paper focuses on visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radioactive or other pollutant spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind. Existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer-graphics hardware. The selected technologies are integrated, and we demonstrate the flow of data originating from heterogeneous data sources, interoperability across different operating systems, and 3D visual representations that enhance end-user interaction.

  15. Theories of Visual Rhetoric: Looking at the Human Genome.

    ERIC Educational Resources Information Center

    Rosner, Mary

    2001-01-01

    Considers how visuals are constructions that are products of a writer's interpretation with its own "power-laden agenda." Reviews the current approach taken by composition scholars, surveys richer interdisciplinary work on visuals, and (by using visuals connected with the Human Genome Project) models an analysis of visuals as rhetoric.…

  16. Enhancing Knowledge Sharing Management Using BIM Technology in Construction

    PubMed Central

    Ho, Shih-Ping; Tserng, Hui-Ping

    2013-01-01

    Construction knowledge can be communicated and reused among project managers and jobsite engineers to alleviate problems on a construction jobsite and reduce the time and cost of solving problems related to constructability. This paper proposes a new methodology for the sharing of construction knowledge by using Building Information Modeling (BIM) technology. The main characteristics of BIM include illustrating 3D CAD-based presentations and keeping information in a digital format and facilitation of easy updating and transfer of information in the BIM environment. Using the BIM technology, project managers and engineers can gain knowledge related to BIM and obtain feedback provided by jobsite engineers for future reference. This study addresses the application of knowledge sharing management using BIM technology and proposes a BIM-based Knowledge Sharing Management (BIMKSM) system for project managers and engineers. The BIMKSM system is then applied in a selected case study of a construction project in Taiwan to demonstrate the effectiveness of sharing knowledge in the BIM environment. The results demonstrate that the BIMKSM system can be used as a visual BIM-based knowledge sharing management platform by utilizing the BIM technology. PMID:24723790

  17. Enhancing knowledge sharing management using BIM technology in construction.

    PubMed

    Ho, Shih-Ping; Tserng, Hui-Ping; Jan, Shu-Hui

    2013-01-01

    Construction knowledge can be communicated and reused among project managers and jobsite engineers to alleviate problems on a construction jobsite and reduce the time and cost of solving problems related to constructability. This paper proposes a new methodology for the sharing of construction knowledge by using Building Information Modeling (BIM) technology. The main characteristics of BIM include illustrating 3D CAD-based presentations and keeping information in a digital format and facilitation of easy updating and transfer of information in the BIM environment. Using the BIM technology, project managers and engineers can gain knowledge related to BIM and obtain feedback provided by jobsite engineers for future reference. This study addresses the application of knowledge sharing management using BIM technology and proposes a BIM-based Knowledge Sharing Management (BIMKSM) system for project managers and engineers. The BIMKSM system is then applied in a selected case study of a construction project in Taiwan to demonstrate the effectiveness of sharing knowledge in the BIM environment. The results demonstrate that the BIMKSM system can be used as a visual BIM-based knowledge sharing management platform by utilizing the BIM technology.

  18. Preparing Teachers to Support the Development of Climate Literate Students

    NASA Astrophysics Data System (ADS)

    Haddad, N.; Ledley, T. S.; Ellins, K. K.; Bardar, E. W.; Youngman, E.; Dunlap, C.; Lockwood, J.; Mote, A. S.; McNeal, K.; Libarkin, J. C.; Lynds, S. E.; Gold, A. U.

    2014-12-01

    The EarthLabs climate project includes curriculum development, teacher professional development, teacher leadership development, and research on student learning, all directed at increasing high school teachers' and students' understanding of the factors that shape our planet's climate. The project has developed four new modules which focus on climate literacy and which are part of the larger Web based EarthLabs collection of Earth science modules. Climate related themes highlighted in the new modules include the Earth system with its positive and negative feedback loops; the range of temporal and spatial scales at which climate, weather, and other Earth system processes occur; and the recurring question, "How do we know what we know about Earth's past and present climate?" which addresses proxy data and scientific instrumentation. EarthLabs climate modules use two central strategies to help students navigate the multiple challenges inherent in understanding climate science. The first is to actively engage students with the content by using a variety of learning modes, and by allowing students to pace themselves through interactive visualizations that address particularly challenging content. The second strategy, which is the focus of this presentation, is to support teachers in a subject area where few have substantive content knowledge or technical skills. Teachers who grasp the processes and interactions that give Earth its climate and the technical skills to engage with relevant data and visualizations are more likely to be successful in supporting students' understanding of climate's complexities. This presentation will briefly introduce the EarthLabs project and will describe the steps the project takes to prepare climate literate teachers, including Web based resources, teacher workshops, and the development of a cadre of teacher leaders who are prepared to continue leading the workshops after project funding ends.

  19. Compensation for Transport Delays Produced by Computer Image Generation Systems. Cooperative Training Series.

    ERIC Educational Resources Information Center

    Ricard, G. L.; And Others

    The cooperative Navy/Air Force project described is aimed at the problem of image-flutter encountered when visual displays that present computer generated images are used for the simulation of certain flying situations. Two experiments are described which extend laboratory work on delay compensation schemes to the simulation of formation flight in…

  20. Development of the framework for a water quality monitoring system : controlling MoDOT's contribution to 303(d) listed streams in the state of Missouri, final report, February 2010.

    DOT National Transportation Integrated Search

    2010-02-01

    By utilizing ArcGIS to quickly visualize the location of any impaired waterbody in relation to its projects/activities, MoDOT will be able to allocate resources optimally. Additionally, the Water Quality Impact Database (WQID) will allow easy trans...

  1. VGLUT1 mRNA and protein expression in the visual system of prosimian galagos (Otolemur garnetti)

    PubMed Central

    Balaram, Pooja; Hackett, Troy A; Kaas, Jon H

    2011-01-01

    The presynaptic storage and release of glutamate, an excitatory neurotransmitter, is modulated by a family of transport proteins known as vesicular glutamate transporters. Vesicular glutamate transporter 1 (VGLUT1) is widely distributed in the central nervous system of most mammalian and nonmammalian species, and regulates the uptake of glutamate into synaptic vesicles as well as the transport of filled glutamatergic vesicles to the terminal membrane during excitatory transmission. In rodents, VGLUT1 mRNA is primarily found in the neocortex, cerebellum, and hippocampus, and the VGLUT1 transport protein is involved in intercortical and corticothalamic projections that remain distinct from projections involving other VGLUT isoforms. With the exception of a few thalamic sensory nuclei, VGLUT1 mRNA is absent from subcortical areas and does not colocalize with other VGLUT mRNAs. VGLUT1 is similarly restricted to a few thalamic association nuclei and does not colocalize with other VGLUT proteins. However, recent work in primates has shown that VGLUT1 mRNA is also found in several subcortical nuclei as well as cortical areas, and that VGLUT1 may overlap with other VGLUT isoforms in glutamatergic projections. In order to expand current knowledge of VGLUT1 distributions in primates and gain insight on glutamatergic transmission in the visual system of primate species, we examined VGLUT1 mRNA and protein distributions in the lateral geniculate nucleus, pulvinar complex, superior colliculus, V1, V2, and the middle temporal area (MT) of prosimian galagos. We found that, similar to other studies in primates, VGLUT1 mRNA and protein are widely distributed in both subcortical and cortical areas. However, glutamatergic projections involving VGLUT1 are largely limited to intrinsic connections within subcortical and cortical areas, as well as the expected intercortical and corticothalamic projections. 
Additionally, VGLUT1 expression in galagos allowed us to identify laminar subdivisions of the superior colliculus, V1, V2, and MT. PMID:22912561

  2. Focal and Ambient Processing of Built Environments: Intellectual and Atmospheric Experiences of Architecture

    PubMed Central

    Rooney, Kevin K.; Condia, Robert J.; Loschky, Lester C.

    2017-01-01

    Neuroscience has well established that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one’s fist at arm’s length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams, respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision, and ambient processing of peripheral vision. Thus, our visual processing of and attention to objects and scenes depends on how and where these stimuli fall on the retina. The built environment is no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create different types of experiences we have with built environments. We argue that these foundational mechanisms of the eye and the visual stream are limiting parameters of architectural experience. We hypothesize that people experience architecture in two basic ways based on these visual limitations; by intellectually assessing architecture consciously through focal object processing and assessing architecture in terms of atmosphere through pre-conscious ambient spatial processing. Furthermore, these separate ways of processing architectural stimuli operate in parallel throughout the visual perceptual system. Thus, a more comprehensive understanding of architecture must take into account that built environments are stimuli that are treated differently by focal and ambient vision, which enable intellectual analysis of architectural experience versus the experience of architectural atmosphere, respectively. 
We offer this theoretical model to help advance a more precise understanding of the experience of architecture, which can be tested through future experimentation. PMID:28360867

  3. Focal and Ambient Processing of Built Environments: Intellectual and Atmospheric Experiences of Architecture.

    PubMed

    Rooney, Kevin K; Condia, Robert J; Loschky, Lester C

    2017-01-01

    Neuroscience has well established that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one's fist at arm's length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams, respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision, and ambient processing of peripheral vision. Thus, our visual processing of and attention to objects and scenes depends on how and where these stimuli fall on the retina. The built environment is no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create different types of experiences we have with built environments. We argue that these foundational mechanisms of the eye and the visual stream are limiting parameters of architectural experience. We hypothesize that people experience architecture in two basic ways based on these visual limitations; by intellectually assessing architecture consciously through focal object processing and assessing architecture in terms of atmosphere through pre-conscious ambient spatial processing. Furthermore, these separate ways of processing architectural stimuli operate in parallel throughout the visual perceptual system. Thus, a more comprehensive understanding of architecture must take into account that built environments are stimuli that are treated differently by focal and ambient vision, which enable intellectual analysis of architectural experience versus the experience of architectural atmosphere, respectively. 
We offer this theoretical model to help advance a more precise understanding of the experience of architecture, which can be tested through future experimentation.

  4. Absolute Depth Sensitivity in Cat Primary Visual Cortex under Natural Viewing Conditions.

    PubMed

    Pigarev, Ivan N; Levichkina, Ekaterina V

    2016-01-01

    Mechanisms of 3D perception, investigated in many laboratories, have defined depth either relative to the fixation plane or to other objects in the visual scene. It is obvious that for efficient perception of the 3D world, additional mechanisms of depth constancy could operate in the visual system to provide information about absolute distance. Neurons with properties reflecting some features of depth constancy have been described in the parietal and extrastriate occipital cortical areas. It has also been shown that, for some neurons in the visual area V1, responses to stimuli of constant angular size differ at close and remote distances. The present study was designed to investigate whether, in natural free gaze viewing conditions, neurons tuned to absolute depths can be found in the primary visual cortex (area V1). Single-unit extracellular activity was recorded from the visual cortex of waking cats sitting on a trolley in front of a large screen. The trolley was slowly approaching the visual scene, which consisted of stationary sinusoidal gratings of optimal orientation rear-projected over the whole surface of the screen. Each neuron was tested with two gratings, with spatial frequency of one grating being twice as high as that of the other. Assuming that a cell is tuned to a spatial frequency, its maximum response to the grating with a spatial frequency twice as high should be shifted to a distance half way closer to the screen in order to attain the same size of retinal projection. For hypothetical neurons selective to absolute depth, location of the maximum response should remain at the same distance irrespective of the type of stimulus. It was found that about 20% of neurons in our experimental paradigm demonstrated sensitivity to particular distances independently of the spatial frequencies of the gratings. We interpret these findings as an indication of the use of absolute depth information in the primary visual cortex.
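The control logic of this design, that a grating of twice the spatial frequency produces the same retinal projection at half the viewing distance, follows directly from visual-angle geometry and can be checked numerically (a sketch with our own function name and units, not code from the study):

```python
import math

def retinal_spatial_frequency(grating_cpm, distance_m):
    """Cycles per degree of visual angle for a grating painted on the
    screen at grating_cpm cycles per metre, viewed from distance_m.
    One cycle of width w subtends 2*atan(w / (2*d)) degrees."""
    cycle_width_m = 1.0 / grating_cpm
    cycle_deg = 2 * math.degrees(math.atan(cycle_width_m / (2 * distance_m)))
    return 1.0 / cycle_deg

# A grating of twice the spatial frequency viewed at half the distance
# yields the same retinal spatial frequency:
f_far = retinal_spatial_frequency(10.0, 2.0)   # 10 c/m at 2 m
f_near = retinal_spatial_frequency(20.0, 1.0)  # 20 c/m at 1 m
```

A cell tuned purely to retinal spatial frequency should therefore peak at half the distance for the doubled-frequency grating, whereas a cell tuned to absolute depth should peak at the same distance for both.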

  5. Building effective learning experiences around visualizations: NASA Eyes on the Solar System and Infiniscope

    NASA Astrophysics Data System (ADS)

    Tamer, A. J. J.; Anbar, A. D.; Elkins-Tanton, L. T.; Klug Boonstra, S.; Mead, C.; Swann, J. L.; Hunsley, D.

    2017-12-01

    Advances in scientific visualization and public access to data have transformed science outreach and communication, but have yet to realize their potential impacts in the realm of education. Computer-based learning is a clear bridge between visualization and education, but creating high-quality learning experiences that leverage existing visualizations requires close partnerships among scientists, technologists, and educators. The Infiniscope project is working to foster such partnerships in order to produce exploration-driven learning experiences around NASA SMD data and images, leveraging the principles of ETX (Education Through eXploration). The visualizations inspire curiosity, while the learning design promotes improved reasoning skills and increases understanding of space science concepts. Infiniscope includes both a web portal to host these digital learning experiences, as well as a teaching network of educators using and modifying these experiences. Our initial efforts to enable student discovery through active exploration of the concepts associated with Small Worlds, Kepler's Laws, and Exoplanets led us to develop our own visualizations at Arizona State University. Other projects focused on Astrobiology and Mars geology led us to incorporate an immersive Virtual Field Trip platform into the Infiniscope portal in support of virtual exploration of scientifically significant locations. Looking to apply ETX design practices with other visualizations, our team at Arizona State partnered with the Jet Propulsion Lab to integrate the web-based version of NASA Eyes on the Eclipse within Smart Sparrow's digital learning platform in a proof-of-concept focused on the 2017 Eclipse. This goes a step beyond the standard features of "Eyes" by wrapping guided exploration, focused on a specific learning goal, into a standards-aligned lesson built around the visualization, as well as by distributing it through Infiniscope and its digital teaching network. 
Experience from this development effort has laid the groundwork to explore future integrations with JPL and other NASA partners.

  6. Bedmap2; Mapping, visualizing and communicating the Antarctic sub-glacial environment.

    NASA Astrophysics Data System (ADS)

    Fretwell, Peter; Pritchard, Hamish

    2013-04-01

    The Bedmap2 project has been a large cooperative effort to compile, model, map and visualize the ice-rock interface beneath the Antarctic ice sheet. Here we present the final output of that project: the Bedmap2 printed map. The map is an A1, double-sided print showing 2D and 3D visualizations of the dataset. It includes scientific interpretations, cross sections and comparisons with other areas. Paper copies of the colour double-sided map will be freely distributed at this session.

  7. Collaborations in art/science: Renaissance teams.

    PubMed

    Cox, D J

    1991-01-01

    A Renaissance Team is a group of specialists who collaborate and provide synergism in the quest for knowledge and information. Artists can participate in Renaissance Teams with scientists and computer specialists for scientific visualization projects. Some projects are described in which the author functioned as programmer and color expert, as interface designer, as visual paradigm maker, as animator, and as producer. Examples are provided for each of these five roles.

  8. Benchtop and Animal Validation of a Projective Imaging System for Potential Use in Intraoperative Surgical Guidance

    PubMed Central

    Gan, Qi; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Hu, Chuanzhen; Shao, Pengfei; Xu, Ronald X.

    2016-01-01

    We propose a projective navigation system for fluorescence imaging and image display in a natural mode of visual perception. The system consists of an excitation light source, a monochromatic charge-coupled device (CCD) camera, a host computer, a projector, a proximity sensor and a complementary metal-oxide-semiconductor (CMOS) camera. With perspective transformation and calibration, our surgical navigation system is able to achieve an overall imaging speed higher than 60 frames per second, with a latency of 330 ms, a spatial sensitivity better than 0.5 mm in both vertical and horizontal directions, and a projection bias less than 1 mm. The technical feasibility of image-guided surgery is demonstrated in both agar-agar gel phantoms and an ex vivo chicken breast model embedding Indocyanine Green (ICG). The biological utility of the system is demonstrated in vivo in a classic model of ICG hepatic metabolism. Our benchtop, ex vivo and in vivo experiments demonstrate the clinical potential for intraoperative delineation of disease margin and image-guided resection surgery. PMID:27391764
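The perspective transformation and calibration step, registering the camera image to the projector frame so that the fluorescence image lands back on the tissue, is commonly a plane homography estimated from point correspondences. A generic direct-linear-transform (DLT) sketch of that estimation (not the authors' implementation; four non-degenerate correspondences suffice):

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 perspective transform mapping src -> dst from
    four or more point correspondences via the standard DLT linear
    system, solved by SVD (smallest singular vector = solution)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                 # normalize so H[2,2] == 1

def apply_homography(H, pt):
    """Apply H to a 2D point using homogeneous coordinates."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

In practice the correspondences would come from projecting known fiducial patterns and detecting them in the camera image during calibration.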

  9. Visual projection neurons in the Drosophila lobula link feature detection to distinct behavioral programs

    PubMed Central

    Wu, Ming; Nern, Aljoscha; Williamson, W Ryan; Morimoto, Mai M; Reiser, Michael B; Card, Gwyneth M; Rubin, Gerald M

    2016-01-01

    Visual projection neurons (VPNs) provide an anatomical connection between early visual processing and higher brain regions. Here we characterize lobula columnar (LC) cells, a class of Drosophila VPNs that project to distinct central brain structures called optic glomeruli. We anatomically describe 22 different LC types and show that, for several types, optogenetic activation in freely moving flies evokes specific behaviors. The activation phenotypes of two LC types closely resemble natural avoidance behaviors triggered by a visual loom. In vivo two-photon calcium imaging reveals that these LC types respond to looming stimuli, while another type does not, but instead responds to the motion of a small object. Activation of LC neurons on only one side of the brain can result in attractive or aversive turning behaviors depending on the cell type. Our results indicate that LC neurons convey information on the presence and location of visual features relevant for specific behaviors. DOI: http://dx.doi.org/10.7554/eLife.21022.001 PMID:28029094

  10. Neurodevelopmental effects of chronic exposure to elevated levels of pro-inflammatory cytokines in a developing visual system

    PubMed Central

    2010-01-01

    Background: Imbalances in the regulation of pro-inflammatory cytokines have been increasingly correlated with a number of severe and prevalent neurodevelopmental disorders, including autism spectrum disorder, schizophrenia and Down syndrome. Although several studies have shown that cytokines have potent effects on neural function, their role in neural development is still poorly understood. In this study, we investigated the link between abnormal cytokine levels and neural development using the Xenopus laevis tadpole visual system, a model frequently used to examine the anatomical and functional development of neural circuits. Results: Using a test for a visually guided behavior that requires normal visual system development, we examined the long-term effects of prolonged developmental exposure to three pro-inflammatory cytokines with known neural functions: interleukin (IL)-1β, IL-6 and tumor necrosis factor (TNF)-α. We found that all cytokines affected the development of normal visually guided behavior. Neuroanatomical imaging of the visual projection showed that none of the cytokines caused any gross abnormalities in the anatomical organization of this projection, suggesting that they may be acting at the level of neuronal microcircuits. We further tested the effects of TNF-α on the electrophysiological properties of the retinotectal circuit and found that long-term developmental exposure to TNF-α resulted in enhanced spontaneous excitatory synaptic transmission in tectal neurons, increased AMPA/NMDA ratios of retinotectal synapses, and a decrease in the number of immature synapses containing only NMDA receptors, consistent with premature maturation and stabilization of these synapses. Local interconnectivity within the tectum also appeared to remain widespread, as shown by increased recurrent polysynaptic activity, and was similar to what is seen in more immature, less refined tectal circuits.
TNF-α treatment also enhanced the overall growth of tectal cell dendrites. Finally, we found that TNF-α-reared tadpoles had increased susceptibility to pentylenetetrazol-induced seizures. Conclusions: Taken together, our data are consistent with a model in which TNF-α causes premature stabilization of developing synapses within the tectum, thereby preventing the normal refinement and synapse elimination that occur during development and leading to increased local connectivity and epilepsy. This experimental model also provides an integrative approach to understanding the effects of cytokines on the development of neural circuits and may provide novel insights into the etiology underlying some neurodevelopmental disorders. PMID:20067608

  11. Neurodevelopmental effects of chronic exposure to elevated levels of pro-inflammatory cytokines in a developing visual system.

    PubMed

    Lee, Ryan H; Mills, Elizabeth A; Schwartz, Neil; Bell, Mark R; Deeg, Katherine E; Ruthazer, Edward S; Marsh-Armstrong, Nicholas; Aizenman, Carlos D

    2010-01-12

    Imbalances in the regulation of pro-inflammatory cytokines have been increasingly correlated with a number of severe and prevalent neurodevelopmental disorders, including autism spectrum disorder, schizophrenia and Down syndrome. Although several studies have shown that cytokines have potent effects on neural function, their role in neural development is still poorly understood. In this study, we investigated the link between abnormal cytokine levels and neural development using the Xenopus laevis tadpole visual system, a model frequently used to examine the anatomical and functional development of neural circuits. Using a test for a visually guided behavior that requires normal visual system development, we examined the long-term effects of prolonged developmental exposure to three pro-inflammatory cytokines with known neural functions: interleukin (IL)-1beta, IL-6 and tumor necrosis factor (TNF)-alpha. We found that all cytokines affected the development of normal visually guided behavior. Neuroanatomical imaging of the visual projection showed that none of the cytokines caused any gross abnormalities in the anatomical organization of this projection, suggesting that they may be acting at the level of neuronal microcircuits. We further tested the effects of TNF-alpha on the electrophysiological properties of the retinotectal circuit and found that long-term developmental exposure to TNF-alpha resulted in enhanced spontaneous excitatory synaptic transmission in tectal neurons, increased AMPA/NMDA ratios of retinotectal synapses, and a decrease in the number of immature synapses containing only NMDA receptors, consistent with premature maturation and stabilization of these synapses. Local interconnectivity within the tectum also appeared to remain widespread, as shown by increased recurrent polysynaptic activity, and was similar to what is seen in more immature, less refined tectal circuits. 
TNF-alpha treatment also enhanced the overall growth of tectal cell dendrites. Finally, we found that TNF-alpha-reared tadpoles had increased susceptibility to pentylenetetrazol-induced seizures. Taken together, our data are consistent with a model in which TNF-alpha causes premature stabilization of developing synapses within the tectum, thereby preventing the normal refinement and synapse elimination that occur during development and leading to increased local connectivity and epilepsy. This experimental model also provides an integrative approach to understanding the effects of cytokines on the development of neural circuits and may provide novel insights into the etiology underlying some neurodevelopmental disorders.

  12. Simulation environment and graphical visualization environment: a COPD use-case

    PubMed Central

    2014-01-01

    Background: Today, many different tools exist to execute and visualize models of human physiology. Most of these tools run models written in very specific programming languages, which in turn simplifies communication among models. Nevertheless, not all of these tools can run models written in different programming languages, and interoperability between such models remains an unresolved issue. Results: In this paper we present a simulation environment that allows, first, the execution of models developed in different programming languages and, second, the communication of parameters to interconnect these models. This simulation environment, developed within the Synergy-COPD project, aims to help bio-researchers and medical students understand the internal mechanisms of the human body through the use of physiological models. The tool is composed of a graphical visualization environment, a web interface through which the user can interact with the models, and a simulation workflow management system comprising a control module and a data warehouse manager. The control module monitors the correct functioning of the whole system; the data warehouse manager manages the stored information and supports its flow among the different modules. The simulation environment has been validated by integrating three models: two deterministic, i.e., based on linear and differential equations, and one probabilistic, i.e., based on probability theory. These models were selected based on the disease under study in this project, chronic obstructive pulmonary disease. Conclusion: The simulation environment presented here allows users to study the internal mechanisms of human physiology through models accessed via a graphical visualization environment.
A new tool for bio-researchers is ready for deployment in various use-case scenarios. PMID:25471327

  13. A mixed reality approach for stereo-tomographic quantification of lung nodules.

    PubMed

    Chen, Mianyi; Kalra, Mannudeep K; Yun, Wenbing; Cong, Wenxiang; Yang, Qingsong; Nguyen, Terry; Wei, Biao; Wang, Ge

    2016-05-25

    To reduce the radiation dose and the equipment cost associated with lung CT screening, in this paper we propose a mixed-reality-based nodule measurement method with an active shutter stereo imaging system. Without involving hundreds of projection views and subsequent image reconstruction, we generated two projections of an iteratively placed ellipsoidal volume in the field of view and merged these synthetic projections with two original CT projections. We then demonstrated the feasibility of measuring the position and size of a nodule by observing whether the projections of the ellipsoidal volume and the nodule overlap, as judged visually by a human observer through active shutter 3D vision glasses. The average errors of the measured nodule parameters were less than 1 mm in a simulated experiment with 8 viewers, and the method also measured real nodules accurately in experiments with physically acquired projections.
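    The measurement principle, reading a nodule's position and size off a synthetic ellipsoid whose projections match the nodule's, can be sketched as follows (a simplified parallel-projection illustration with invented numbers, not the paper's projection geometry):

```python
# Illustrative sketch: under a parallel projection, an axis-aligned ellipsoid
# centered at (cx, cy, cz) with semi-axes (a, b, c) projects to an ellipse in
# each of two orthogonal views; matching both projected ellipses to the nodule
# outline pins down its 3D position and size. All values here are hypothetical.

def project_ellipsoid(center, axes):
    cx, cy, cz = center
    a, b, c = axes
    front = ((cx, cy), (a, b))   # projection along z
    side  = ((cx, cz), (a, c))   # projection along y
    return front, side

def ellipses_match(e1, e2, tol=1.0):
    """Crude overlap test: centers and semi-axes agree within tol (mm)."""
    (c1, ax1), (c2, ax2) = e1, e2
    return all(abs(p - q) <= tol for p, q in zip(c1 + ax1, c2 + ax2))

front, side = project_ellipsoid((10.0, 20.0, 30.0), (4.0, 5.0, 6.0))
```

    In the paper the "match" judgment is made by a human observer wearing shutter glasses rather than by a tolerance test, but the recovered parameters are the same.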

  14. Motion parallax in immersive cylindrical display systems

    NASA Astrophysics Data System (ADS)

    Filliard, N.; Reymond, G.; Kemeny, A.; Berthoz, A.

    2012-03-01

    Motion parallax is a crucial visual cue, produced by translations of the observer, for the perception of depth and self-motion. Tracking the observer's viewpoint has therefore become essential in immersive virtual reality (VR) systems (cylindrical screens, CAVEs, head-mounted displays) used, e.g., in the automotive industry (style reviews, architecture design, ergonomics studies) or in scientific studies of visual perception. The perception of a stable and rigid world requires that this visual cue be coherent with other extra-retinal (e.g., vestibular, kinesthetic) cues signaling ego-motion. Although world stability is never questioned in the real world, rendering a head-coupled viewpoint in VR can lead to an illusory perception of unstable environments, unless a non-unity scale factor is applied to recorded head movements. Besides, cylindrical screens are usually used with static observers because of the image distortions that arise when rendering images for viewpoints away from a sweet spot. We developed a technique to compensate for these non-linear visual distortions in real time, in an industrial VR setup based on a cylindrical screen projection system. Additionally, to evaluate how much discrepancy between visual and extra-retinal cues is tolerated without perceptual distortions, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was introduced in this system. The influence of this artificial gain was measured on the gait stability of free-standing participants. Results indicate that gains below unity significantly alter postural control. Conversely, the influence of higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification is therefore proposed as a possible solution to provide a wider exploration of space to users of immersive virtual reality systems.
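    The "motion parallax gain" manipulation amounts to scaling the tracked head displacement before moving the virtual camera. A minimal sketch (our illustration; the rest pose and tracked positions below are hypothetical):

```python
# Illustrative sketch of a motion parallax gain: the virtual camera
# displacement is the tracked head displacement scaled by a gain.
# gain = 1.0 reproduces natural parallax; gain > 1.0 amplifies it.

def camera_position(head_pos, rest_pos, gain):
    """Scale the head displacement from a rest position by the parallax gain."""
    return tuple(r + gain * (h - r) for h, r in zip(head_pos, rest_pos))

rest = (0.0, 1.7, 0.0)            # nominal eye position (m), hypothetical
head = (0.1, 1.7, -0.05)          # tracked head position, hypothetical
natural = camera_position(head, rest, 1.0)    # unity gain: camera follows head
amplified = camera_position(head, rest, 2.0)  # amplified parallax
```

    The study's finding is that gains below 1.0 disturb posture while gains above 1.0 are better tolerated, which is why only amplification is proposed for widening the explorable space.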

  15. Exploring New Methods of Displaying Bit-Level Quality and Other Flags for MODIS Data

    NASA Technical Reports Server (NTRS)

    Khalsa, Siri Jodha Singh; Weaver, Ron

    2003-01-01

    The NASA Distributed Active Archive Center (DAAC) at the National Snow and Ice Data Center (NSIDC) archives and distributes snow and sea ice products derived from the MODerate resolution Imaging Spectroradiometer (MODIS) on board NASA's Terra and Aqua satellites. All MODIS standard products are in the Earth Observing System version of the Hierarchical Data Format (HDF-EOS). The MODIS science team has packed a wealth of information into each HDF-EOS file. In addition to the science data arrays containing the geophysical product, there are often pixel-level Quality Assurance arrays which are important for understanding and interpreting the science data. Currently, researchers are limited in their ability to access and decode information stored as individual bits in many of the MODIS science products. Commercial and public domain utilities give users access, in varying degrees, to the elements inside MODIS HDF-EOS files. However, when attempting to visualize the data, users are confronted with the fact that many of the elements actually represent eight different 1-bit arrays packed into a single byte array. This project addressed the need for researchers to access bit-level information inside MODIS data files. In a previous NASA-funded project (ESDIS Prototype ID 50.0) we developed a visualization tool tailored to polar gridded HDF-EOS data sets. This tool, called PHDIS, allows researchers to access, geolocate, visualize, and subset data that originate from different sources and have different spatial resolutions but which are placed on a common polar grid. The bit-level visualization function developed under this project was added to PHDIS, resulting in a versatile tool that serves a variety of needs. We call this the EOS Imaging Tool.

  16. VGLUT2 mRNA and protein expression in the visual thalamus and midbrain of prosimian galagos (Otolemur garnetti).

    PubMed

    Balaram, Pooja; Takahata, Toru; Kaas, Jon H

    2011-03-01

    Vesicular glutamate transporters (VGLUTs) control the storage and presynaptic release of glutamate in the central nervous system, and are involved in the majority of glutamatergic transmission in the brain. Two VGLUT isoforms, VGLUT1 and VGLUT2, are known to characterize complementary distributions of glutamatergic neurons in the rodent brain, which suggests that they are each responsible for unique circuits of excitatory transmission. In rodents, VGLUT2 is primarily utilized in thalamocortical circuits, and is strongly expressed in the primary sensory nuclei, including all areas of the visual thalamus. The distribution of VGLUT2 in the visual thalamus and midbrain has yet to be characterized in primate species. Thus, the present study describes the expression of VGLUT2 mRNA and protein across the visual thalamus and superior colliculus of prosimian galagos to provide a better understanding of glutamatergic transmission in the primate brain. VGLUT2 is strongly expressed in all six layers of the dorsal lateral geniculate nucleus, and much less so in the intralaminar zones, which correspond to retinal and superior collicular inputs, respectively. The parvocellular and magnocellular layers expressed VGLUT2 mRNA more densely than the koniocellular layers. A patchy distribution of VGLUT2 positive terminals in the pulvinar complex possibly reflects inputs from the superior colliculus. The upper superficial granular layers of the superior colliculus, with inputs from the retina, most densely expressed VGLUT2 protein, while the lower superficial granular layers, with projections to the pulvinar, most densely expressed VGLUT2 mRNA. The results are consistent with the conclusion that retinal and superior colliculus projections to the thalamus depend highly on the VGLUT2 transporter, as do cortical projections from the magnocellular and parvocellular layers of the lateral geniculate nucleus and neurons of the pulvinar complex.

  17. A Data-Driven Approach to Interactive Visualization of Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Jun

    Driven by emerging industry standards, electric utilities and grid coordination organizations are eager to seek advanced tools that assist grid operators in performing mission-critical tasks and enable them to make quick and accurate decisions. The emerging field of visual analytics holds tremendous promise for improving the business practices in today's electric power industry. The investigation conducted here, however, revealed that existing commercial power grid visualization tools rely heavily on human designers, hindering users' ability to discover. Additionally, for a large grid, it is very labor-intensive and costly to build and maintain the pre-designed visual displays. This project proposes a data-driven approach to overcome these common challenges. The proposed approach relies on developing powerful data manipulation algorithms to create visualizations based on the characteristics of empirically or mathematically derived data. The resulting visual presentations emphasize what the data is rather than how the data should be presented, thus fostering comprehension and discovery. Furthermore, the data-driven approach formulates visualizations on the fly: it does not require a visualization design stage, completely eliminating or significantly reducing the cost of building and maintaining visual displays. The research and development (R&D) conducted in this project was divided into two phases. The first phase (Phase I & II) focused on developing data-driven techniques for visualization of the power grid and its operation. Various data-driven visualization techniques were investigated, including pattern recognition for auto-generation of one-line diagrams and fuzzy-model-based rich data visualization for situational awareness. The R&D conducted during the second phase (Phase IIB) focused on enhancing the prototyped data-driven visualization tool based on the gathered requirements and use cases.
The goal is to evolve the prototype developed during the first phase into a commercial-grade product. We use one of the identified application areas as an example to demonstrate how research results achieved in this project were successfully utilized to address an emerging industry need. In summary, the data-driven visualization approach developed in this project has proven promising for building next-generation power grid visualization tools. Application of this approach has resulted in a state-of-the-art commercial tool currently leveraged by more than 60 utility organizations in North America and Europe.

  18. National Laboratory for Advanced Scientific Visualization at UNAM - Mexico

    NASA Astrophysics Data System (ADS)

    Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo

    2016-04-01

    In 2015, the National Autonomous University of Mexico (UNAM) joined the family of universities and research centers where advanced visualization and computing play a key role in promoting and advancing missions in research, education, community outreach, and business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services spanning many areas related to scientific visualization, among which are: neuroanatomy, embryonic development, genome-related studies, geosciences, geography, and physics- and mathematics-related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the fully immersive 3D display system (the Cave), the high-resolution parallel visualization system (the Powerwall), and the high-resolution spherical display (the Earth Simulator). The entire visualization infrastructure is interconnected to a high-performance computing cluster (HPCC) called ADA, in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra-large room, 3.6 m wide, with images projected on the front, left, and right walls as well as the floor. Specialized crystal-eyes LCD-shutter glasses provide strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6x4) high-resolution ultra-thin-bezel (2 mm) monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for global-scale visualization of geophysical, meteorological, climate and ecology data.
The HPCC-ADA is a 1000+ computing-core system that offers parallel computing resources to applications requiring large amounts of memory as well as large, fast parallel storage. The entire system's temperature is controlled by an energy- and space-efficient cooling solution based on large rear-door liquid-cooled heat exchangers. This state-of-the-art infrastructure will boost research activities in the region, offer a powerful scientific tool for teaching at undergraduate and graduate levels, and enhance association and cooperation with business-oriented organizations.

  19. Nonlinear projection methods for visualizing Barcode data and application on two data sets.

    PubMed

    Olteanu, Madalina; Nicolas, Violaine; Schaeffer, Brigitte; Denys, Christiane; Missoup, Alain-Didier; Kennis, Jan; Larédo, Catherine

    2013-11-01

    Developing tools for visualizing DNA sequences is an important issue in the Barcoding context. Visualizing Barcode data can be framed as a purely statistical problem: unsupervised learning. Clustering methods combined with projection methods have two closely linked objectives: visualizing and finding structure in the data. Multidimensional scaling (MDS) and self-organizing maps (SOM) are unsupervised statistical tools for data visualization. Both algorithms map data onto a lower-dimensional manifold: MDS looks for a projection that best preserves pairwise distances, while SOM preserves the topology of the data. Both algorithms were initially developed for Euclidean data, and the conditions necessary for their good implementation were not satisfied for Barcode data. We developed a workflow consisting of four steps: collapse data into distinct sequences; compute a dissimilarity matrix; run a modified version of SOM for dissimilarity matrices to structure the data and reduce dimensionality; project the results using MDS. This methodology was applied to Astraptes fulgerator and Hylomyscus, an African rodent with debated taxonomy. We obtained very good results for both data sets, and the results were robust against unbalanced species. All the species in Astraptes were displayed in very distinct groups in the various visualizations, except for LOHAMP and FABOV, which were mixed up. For Hylomyscus, our findings were consistent with known species, confirmed the existence of four unnamed taxa and suggested the existence of potentially new species. © 2013 John Wiley & Sons Ltd.
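    The first two workflow steps can be sketched in a few lines (a toy illustration with invented sequences; Hamming distance stands in for whatever genetic distance the authors actually used, and the SOM and MDS steps are omitted):

```python
# Illustrative sketch of steps 1-2 of the workflow: collapse the data into
# distinct sequences, then compute a pairwise dissimilarity matrix.

def collapse(sequences):
    """Step 1: keep one copy of each distinct sequence (sorted for stability)."""
    return sorted(set(sequences))

def hamming(s, t):
    """Number of mismatched positions between two aligned sequences."""
    return sum(a != b for a, b in zip(s, t))

def dissimilarity_matrix(seqs):
    """Step 2: symmetric matrix of pairwise distances."""
    return [[hamming(s, t) for t in seqs] for s in seqs]

barcodes = ["ACGT", "ACGT", "ACGA", "TCGA"]   # made-up toy "barcodes"
haplotypes = collapse(barcodes)                # 3 distinct sequences remain
D = dissimilarity_matrix(haplotypes)
```

    The matrix D is exactly the kind of input the modified SOM-for-dissimilarities and the subsequent MDS projection consume in steps 3-4.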

  20. Rapid fusion of 2D X-ray fluoroscopy with 3D multislice CT for image-guided electrophysiology procedures

    NASA Astrophysics Data System (ADS)

    Zagorchev, Lyubomir; Manzke, Robert; Cury, Ricardo; Reddy, Vivek Y.; Chan, Raymond C.

    2007-03-01

    Interventional cardiac electrophysiology (EP) procedures are typically performed under X-ray fluoroscopy for visualizing catheters and EP devices relative to other highly-attenuating structures such as the thoracic spine and ribs. These projections do not however contain information about soft-tissue anatomy and there is a recognized need for fusion of conventional fluoroscopy with pre-operatively acquired cardiac multislice computed tomography (MSCT) volumes. Rapid 2D-3D integration in this application would allow for real-time visualization of all catheters present within the thorax in relation to the cardiovascular anatomy visible in MSCT. We present a method for rapid fusion of 2D X-ray fluoroscopy with 3D MSCT that can facilitate EP mapping and interventional procedures by reducing the need for intra-operative contrast injections to visualize heart chambers and specialized systems to track catheters within the cardiovascular anatomy. We use hardware-accelerated ray-casting to compute digitally reconstructed radiographs (DRRs) from the MSCT volume and iteratively optimize the rigid-body pose of the volumetric data to maximize the similarity between the MSCT-derived DRR and the intra-operative X-ray projection data.
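    The core DRR idea can be illustrated with a toy parallel-beam sketch (our simplification: the paper uses GPU-accelerated ray-casting through the MSCT volume and an iterative rigid-body pose search, neither of which is shown here):

```python
# Illustrative sketch: a parallel-beam DRR of a voxel volume is the attenuation
# summed along each ray; registration then optimizes the volume's pose to
# maximize the similarity between this synthetic projection and the measured
# X-ray. Volume values are made up.

def drr(volume):
    """Sum a volume[z][y][x] along z: a parallel-beam projection."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    return [[sum(volume[z][y][x] for z in range(nz)) for x in range(nx)]
            for y in range(ny)]

def ssd(a, b):
    """Sum of squared differences: a simple (dis)similarity measure."""
    return sum((p - q) ** 2 for ra, rb in zip(a, b) for p, q in zip(ra, rb))

# 2x2x2 toy volume with one bright column of voxels.
vol = [[[0, 0], [0, 1]],
       [[0, 0], [0, 1]]]
projection = drr(vol)
```

    In the actual method the pose parameters (rotation and translation of the MSCT volume) are adjusted iteratively to drive a similarity measure like this toward its optimum against the intra-operative projection.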

  1. Science-Driven Computing: NERSC's Plan for 2006-2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Horst D.; Kramer, William T.C.; Bailey, David H.

    NERSC has developed a five-year strategic plan focusing on three components: Science-Driven Systems, Science-Driven Services, and Science-Driven Analytics. (1) Science-Driven Systems: Balanced introduction of the best new technologies for complete computational systems--computing, storage, networking, visualization and analysis--coupled with the activities necessary to engage vendors in addressing the DOE computational science requirements in their future roadmaps. (2) Science-Driven Services: The entire range of support activities, from high-quality operations and user services to direct scientific support, that enable a broad range of scientists to effectively use NERSC systems in their research. NERSC will concentrate on resources needed to realize the promise of the new highly scalable architectures for scientific discovery in multidisciplinary computational science projects. (3) Science-Driven Analytics: The architectural and systems enhancements and services required to integrate NERSC's powerful computational and storage resources to provide scientists with new tools to effectively manipulate, visualize, and analyze the huge data sets derived from simulations and experiments.

  2. Tachistoscopic illumination and masking of real scenes.

    PubMed

    Chichka, David; Philbeck, John W; Gajewski, Daniel A

    2015-03-01

    Tachistoscopic presentation of scenes has been valuable for studying the emerging properties of visual scene representations. The spatial aspects of this work have generally focused on the conceptual locations (e.g., next to the refrigerator) and directional locations of objects in 2-D arrays and/or images. Less is known about how the perceived egocentric distance of objects develops. Here we describe a novel system for presenting brief glimpses of a real-world environment, followed by a mask. The system includes projectors with mechanical shutters for projecting the fixation and masking images, a set of LED floodlights for illuminating the environment, and computer-controlled electronics to set the timing and initiate the process. Because a real environment is used, most visual distance and depth cues can be manipulated using traditional methods. The system is inexpensive and robust, and its components are readily available in the marketplace. This article describes the system and the timing characteristics of each component. We verified the system's ability to control exposure durations down to a few milliseconds.

  3. Review of ultraresolution (10-100 megapixel) visualization systems built by tiling commercial display components

    NASA Astrophysics Data System (ADS)

    Hopper, Darrel G.; Haralson, David G.; Simpson, Matthew A.; Longo, Sam J.

    2002-08-01

    Ultra-resolution visualization systems are achieved by tiling many direct-view or projection displays. During the past few years, several such systems have been built from commercial electronics components (displays, computers, image generators, networks, communication links, and software). Civil applications driving this development have independently determined that they require images at 10-100 megapixel (Mpx) resolution to enable state-of-the-art research, engineering, design, stock exchanges, flight simulators, business information and enterprise control centers, education, art and entertainment. Military applications also press the art of the possible to improve the productivity of warfighters and lower the cost of providing for the national defense. The environment in some 80% of defense applications can be addressed by ruggedization of commercial components. This paper reviews the status of ultra-resolution systems based on commercial components and describes a vision for their integration into advanced yet affordable military command centers, simulator/trainers, and, eventually, crew stations in air, land, sea and space systems.

  4. Geothopica and the interactive analysis and visualization of the updated Italian National Geothermal Database

    NASA Astrophysics Data System (ADS)

    Trumpy, Eugenio; Manzella, Adele

    2017-02-01

    The Italian National Geothermal Database (BDNG) is the largest collection of Italian geothermal data and was set up in the 1980s. It has since been updated both in terms of content and management tools: information on deep wells and thermal springs (with temperatures > 30 °C) is currently organized and stored in a PostgreSQL relational database management system, which guarantees high performance, data security and easy access through different client applications. The BDNG is the core of the Geothopica web site, whose webGIS tool allows different types of user to access geothermal data, to visualize multiple types of datasets, and to perform integrated analyses. The webGIS tool has recently been improved with two specially designed, programmed and implemented visualization tools that display data on well lithology and underground temperatures. This paper describes the contents of the database and its software and data updates, as well as the webGIS tool, including the new tools for lithology and temperature visualization. The geoinformation organized in the database and accessible through Geothopica is of use not only for geothermal purposes, but also for any kind of georesource and CO2 storage project requiring the organization of, and access to, deep underground data. Geothopica also supports project developers, researchers, and decision makers in the assessment, management and sustainable deployment of georesources.

  5. Visualizations and Mental Models - The Educational Implications of GEOWALL

    NASA Astrophysics Data System (ADS)

    Rapp, D.; Kendeou, P.

    2003-12-01

    Work in the earth sciences has outlined many of the faulty beliefs that students possess concerning particular geological systems and processes. Evidence from educational and cognitive psychology has demonstrated that students often have difficulty overcoming their naïve beliefs about science. Prior knowledge is often remarkably resistant to change, particularly when students' existing mental models for geological principles may be faulty or inaccurate. Figuring out how to help students revise their mental models to include appropriate information is a major challenge. Up until this point, research has tended to focus on whether 2-dimensional computer visualizations are useful tools for helping students develop scientifically correct models. Research suggests that when students are given the opportunity to use dynamic computer-based visualizations, they are more likely to recall the learned information, and are more likely to transfer that knowledge to novel settings. Unfortunately, 2-dimensional visualization systems are often inadequate representations of the material that educators would like students to learn. For example, a 2-dimensional image of the Earth's surface does not adequately convey particular features that are critical for visualizing the geological environment. This may limit the models that students can construct following these visualizations. GEOWALL is a stereo projection system that has attempted to address this issue. It can display multidimensional static geologic images and dynamic geologic animations in a 3-dimensional format. Our current research examines whether multidimensional visualization systems such as GEOWALL may facilitate learning by helping students to develop more complex mental models. This talk will address some of the cognitive issues that influence the construction of mental models, and the difficulty of updating existing mental models.
We will also discuss our current work that seeks to examine whether GEOWALL is an effective tool for helping students to learn geological information (and potentially restructure their na‹ve conceptions of geologic principles).

  6. A Computer Supported Teamwork Project for People with a Visual Impairment.

    ERIC Educational Resources Information Center

    Hale, Greg

    2000-01-01

    Discussion of the use of computer supported teamwork (CSTW) in team-based organizations focuses on problems that visually impaired people have reading graphical user interface software via screen reader software. Describes a project that successfully used email for CSTW, and suggests issues needing further research. (LRW)

  7. Creative Arts and Crafts for Children with Visual Handicaps.

    ERIC Educational Resources Information Center

    Sykes, Kim C.; And Others

    This teaching guide gives instructions for 23 creative art or craft projects thought to be appropriate for use with visually handicapped children. Usually included for each project are the educational objective, materials and equipment needed, procedure, possible variations, and photographs. The following types of activity are recommended: tempera…

  8. Visualization and interaction tools for aerial photograph mosaics

    NASA Astrophysics Data System (ADS)

    Fernandes, João Pedro; Fonseca, Alexandra; Pereira, Luís; Faria, Adriano; Figueira, Helder; Henriques, Inês; Garção, Rita; Câmara, António

    1997-05-01

    This paper describes the development of a digital spatial library based on mosaics of digital orthophotos, called Interactive Portugal, that will enable users both to retrieve geospatial information existing in the Portuguese National System for Geographic Information World Wide Web server, and to develop local databases connected to the main system. A set of navigation, interaction, and visualization tools are proposed and discussed. They include sketching, dynamic sketching, and navigation capabilities over the digital orthophotos mosaics. Main applications of this digital spatial library are pointed out and discussed, namely for education, professional, and tourism markets. Future developments are considered. These developments are related to user reactions, technological advancements, and projects that also aim at delivering and exploring digital imagery on the World Wide Web. Future capabilities for site selection and change detection are also considered.

  9. Distributed Visualization Project

    NASA Technical Reports Server (NTRS)

    Craig, Douglas; Conroy, Michael; Kickbusch, Tracey; Mazone, Rebecca

    2016-01-01

    Distributed Visualization allows anyone, anywhere to see any simulation at any time. Development focuses on algorithms, software, data formats, data systems and processes to enable sharing simulation-based information across temporal and spatial boundaries without requiring stakeholders to possess highly-specialized and very expensive display systems. It also introduces abstraction between the native and shared data, which allows teams to share results without giving away proprietary or sensitive data. The initial implementation of this capability is the Distributed Observer Network (DON) version 3.1. DON 3.1 is available for public release in the NASA Software Store (https://software.nasa.gov/software/KSC-13775) and works with version 3.0 of the Model Process Control specification (an XML Simulation Data Representation and Communication Language) to display complex graphical information and associated Meta-Data.

  10. Application of IT-technologies in visualization of innovation project life-cycle stages during the study of the course "Management of innovation projects"

    NASA Astrophysics Data System (ADS)

    Kolychev, V. D.; Prokhorov, I. V.

    2017-01-01

    The article presents a methodology for applying IT-technologies in teaching the discipline "Management of innovation projects," which helps students to be more competitive and to gain useful skills for their future specialization in high-tech areas. IT-technologies are widely used nowadays in education and training, especially in knowledge-intensive disciplines such as systems analysis, game theory, operations research, risk theory, and innovation management. Studying such courses requires combining mathematical models with information-technology approaches for a clear understanding of the investigated object. That is why this article comprises both the research framework and the IT-tools used for investigation in the educational process. Given the importance of IT-system implementation, especially for the university, we propose methods of research in the area of innovation projects with the help of IT-support.

  11. Resolving ability and image discretization in the visual system.

    PubMed

    Shelepin, Yu E; Bondarko, V M

    2004-02-01

    Psychophysiological studies were performed to measure the spatial threshold for resolving two "points" and the thresholds for discriminating their orientations as a function of the distance between them. The data were compared with the scattering of a "point" by the eye's optics, the packing density of cones in the fovea, and the characteristics of the receptive fields of ganglion cells in the foveal area of the retina and of neurons in the corresponding projection zones of the primary visual cortex. The point-scattering function was shown to need to cover several receptors: preliminary blurring of the image by the eye's optics decreases the discretization noise subsequently created by the receptor matrix. The concordance of these parameters supports the optimal operation of the spatial elements of the neural network that determine the resolving ability of the visual system at different levels of visual information processing. It is suggested that the special geometry of the receptive fields of neurons in the striate cortex, which is matched to the statistics of natural scenes, results in a further increase in the signal-to-noise ratio.

  12. Juno Mission Simulation

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Weidner, Richard J.

    2008-01-01

    The Juno spacecraft is planned to launch in August of 2012 and would arrive at Jupiter four years later. The spacecraft would spend more than one year orbiting the planet, investigating the existence of an ice-rock core; determining the amount of global water and ammonia present in the atmosphere; studying convection and deep-wind profiles in the atmosphere; investigating the origin of the Jovian magnetic field; and exploring the polar magnetosphere. Juno mission management is responsible for mission and navigation design, mission operation planning, and ground-data-system development. In order to ensure successful mission management from initial checkout to final de-orbit, it is critical to share a common vision of the entire mission operation phases with the rest of the project teams. Two major challenges are 1) how to develop a shared vision that can be appreciated by all of the project teams of diverse disciplines and expertise, and 2) how to continuously evolve a shared vision as the project lifecycle progresses from the formulation phase to the operation phase. The Juno mission simulation team addresses these challenges by developing agile and progressive mission models, operation simulations, and real-time visualization products. This paper presents the mission simulation visualization network (MSVN) technology that has enabled a comprehensive mission simulation suite (MSVN-Juno) for the Juno project.

  13. Synthetic Vision System Commercial Aircraft Flight Deck Display Technologies for Unusual Attitude Recovery

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Ellis, Kyle E.; Arthur, Jarvis J.; Nicholas, Stephanie N.; Kiggins, Daniel

    2017-01-01

    A Commercial Aviation Safety Team (CAST) study of 18 worldwide loss-of-control accidents and incidents determined that the lack of external visual references was associated with a flight crew's loss of attitude awareness or energy state awareness in 17 of these events. Therefore, CAST recommended development and implementation of virtual day-Visual Meteorological Condition (VMC) display systems, such as synthetic vision systems, which can promote flight crew attitude awareness similar to a day-VMC environment. This paper describes the results of a high-fidelity, large transport aircraft simulation experiment that evaluated virtual day-VMC displays and a "background attitude indicator" concept as an aid to pilots in recovery from unusual attitudes. Twelve commercial airline pilots performed multiple unusual attitude recoveries and both quantitative and qualitative dependent measures were collected. Experimental results and future research directions under this CAST initiative and the NASA "Technologies for Airplane State Awareness" research project are described.

  14. Visual Complexity and Pictorial Memory: A Fifteen Year Research Perspective.

    ERIC Educational Resources Information Center

    Berry, Louis H.

    For 15 years an ongoing research project at the University of Pittsburgh has focused on the effects of variations in visual complexity and color on the storage and retrieval of visual information by learners. Research has shown that visual materials facilitate instruction, but has not fully delineated the interactions of visual complexity and…

  15. Integrating human and machine intelligence in galaxy morphology classification tasks

    NASA Astrophysics Data System (ADS)

    Beck, Melanie R.; Scarlata, Claudia; Fortson, Lucy F.; Lintott, Chris J.; Simmons, B. D.; Galloway, Melanie A.; Willett, Kyle W.; Dickinson, Hugh; Masters, Karen L.; Marshall, Philip J.; Wright, Darryl

    2018-06-01

    Quantifying galaxy morphology is a challenging yet scientifically rewarding task. As the scale of data continues to increase with upcoming surveys, traditional classification methods will struggle to handle the load. We present a solution through an integration of visual and automated classifications, preserving the best features of both human and machine. We demonstrate the effectiveness of such a system through a re-analysis of visual galaxy morphology classifications collected during the Galaxy Zoo 2 (GZ2) project. We reprocess the top-level question of the GZ2 decision tree with a Bayesian classification aggregation algorithm dubbed SWAP, originally developed for the Space Warps gravitational lens project. Through a simple binary classification scheme, we increase the classification rate nearly 5-fold, classifying 226 124 galaxies in 92 d of GZ2 project time while reproducing labels derived from GZ2 classification data with 95.7 per cent accuracy. We next combine this with a Random Forest machine learning algorithm that learns on a suite of non-parametric morphology indicators widely used for automated morphologies. We develop a decision engine that delegates tasks between human and machine and demonstrate that the combined system provides at least a factor of 8 increase in the classification rate, classifying 210 803 galaxies in just 32 d of GZ2 project time with 93.1 per cent accuracy. As the Random Forest algorithm requires a minimal amount of computational cost, this result has important implications for galaxy morphology identification tasks in the era of Euclid and other large-scale surveys.
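    The SWAP aggregation described above amounts to a sequential Bayesian update of each subject's posterior, retiring the subject once the posterior crosses a threshold. A minimal sketch follows; the function names, per-user skill parameters, and retirement thresholds are illustrative assumptions, not the actual GZ2/Space Warps implementation:

    ```python
    def swap_update(prior, vote, skill_yes, skill_no):
        """One Bayesian update of P(subject is class 1) after a single
        volunteer vote. skill_yes = P(user votes 1 | truly 1),
        skill_no = P(user votes 0 | truly 0) -- assumed user confusion rates."""
        if vote == 1:
            like_pos, like_neg = skill_yes, 1.0 - skill_no
        else:
            like_pos, like_neg = 1.0 - skill_yes, skill_no
        num = prior * like_pos
        return num / (num + (1.0 - prior) * like_neg)

    def classify(votes, prior=0.5, accept=0.99, reject=0.01):
        """Fold in (vote, skill_yes, skill_no) triples until the posterior
        crosses a retirement threshold; returns (label_or_None, posterior)."""
        p = prior
        for vote, s_yes, s_no in votes:
            p = swap_update(p, vote, s_yes, s_no)
            if p >= accept:
                return 1, p
            if p <= reject:
                return 0, p
        return None, p
    ```

    Three agreeing votes from a user with 90 per cent skill already push the posterior past a 0.99 retirement threshold, which is why such a scheme needs far fewer classifications per galaxy than fixed-redundancy voting.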

  16. Results of prototype software development for automation of shuttle proximity operations

    NASA Technical Reports Server (NTRS)

    Hiers, Harry K.; Olszewski, Oscar W.

    1991-01-01

    A Rendezvous Expert System (REX) was implemented on a Symbolics 3650 processor and integrated with the 6-DOF, high-fidelity Systems Engineering Simulator (SES) at the NASA Johnson Space Center in Houston, Texas. The project goals were to automate the terminal phase of a shuttle rendezvous, normally flown manually by the crew, and to proceed automatically to docking with Space Station Freedom (SSF). These goals were successfully demonstrated to various flight crew members, managers, and engineers in the technical community at JSC. The project was funded by NASA's Office of Space Flight, Advanced Program Development Division. Because of the complexity of the task, the REX development was divided into two distinct efforts: one to handle the guidance and control function using perfect navigation data, and another to provide the visual displays for the system management functions that gave the crew visibility into the progress being made toward docking the shuttle with the LVLH-stabilized SSF.

  17. An Application of Project-Based Learning in an Urban Project Topic in the Visual Arts Course in 8th Classes of Primary Education

    ERIC Educational Resources Information Center

    Kalyoncu, Raif; Tepecik, Adnan

    2010-01-01

    The purpose of this study is to measure the effect of project-based learning that is used in visual arts course on students' academic success and permanence. The research was applied to students of Hasan Ali Yucel Primary School in the city of Trabzon during the fall semester of 2007-2008 academic year. Among the sample that had been selected…

  18. Scientific Visualization for Atmospheric Data Analysis in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Engelke, Wito; Flatken, Markus; Garcia, Arturo S.; Bar, Christian; Gerndt, Andreas

    2016-04-01

    1 INTRODUCTION. The three-year European research project CROSS DRIVE (Collaborative Rover Operations and Planetary Science Analysis System based on Distributed Remote and Interactive Virtual Environments) started in January 2014. The research and development within this project is motivated by three use-case studies: landing-site characterization, atmospheric science, and rover target selection [1]. The implementation for the second use case is currently in its final phase [2]. Here, the requirements were generated from domain experts' input and led to the development and integration of appropriate methods for visualization and analysis of atmospheric data. The methods range from volume rendering, interactive slicing, and iso-surface techniques to interactive probing. All visualization methods are integrated in DLR's Terrain Rendering application. With this, the high-resolution surface-data visualization can be enriched with additional methods appropriate for atmospheric data sets. The result is an integrated virtual environment in which scientists can interactively explore their data sets directly within the correct context. The data sets include volumetric data of the Martian atmosphere, precomputed two-dimensional maps, and vertical profiles. In most cases the surface data as well as the atmospheric data have global coverage and are time-dependent. Furthermore, all interaction is synchronized between different connected application instances, allowing collaborative sessions between distant experts. 2 VISUALIZATION TECHNIQUES. Although the application is currently used for visualization of data sets related to Mars, the techniques can be applied to other data sets as well. Currently the prototype is capable of handling 2D and 2.5D surface data as well as 4D atmospheric data. 
    Specifically, the surface data is presented using an LoD approach based on the HEALPix tessellation of a sphere [3, 4, 5] and can handle data sets on the order of terabytes. The combination of different data sources (e.g., MOLA, HRSC, HiRISE) and selection of the presented data (e.g., infrared, spectral, imagery) is also supported. Furthermore, the data is presented unchanged and at the highest possible resolution for the target setup (e.g., power-wall, workstation, laptop) and view distance. The visualization techniques for the volumetric data sets can handle VTK-based [6] data sets and support different grid types as well as a time component. In detail, the integrated volume rendering uses a GPU-based ray-casting algorithm that was adapted to work in spherical coordinate systems. This approach yields interactive frame rates without compromising visual fidelity. Besides direct visualization via volume rendering, the prototype supports interactive slicing, extraction of iso-surfaces, and probing. The latter can also be used for side-by-side comparison and on-the-fly diagram generation within the application. Similarly to the surface data, a combination of different data sources is supported as well. For example, the extracted iso-surface of a scalar pressure field can be used for the visualization of the temperature. The software development is supported by the ViSTA VR-toolkit [7], which supports different target systems as well as a wide range of VR devices. Furthermore, the prototype is scalable and runs on laptops, workstations, and cluster setups. REFERENCES [1] A. S. Garcia, D. J. Roberts, T. Fernando, C. Bar, R. Wolff, J. Dodiya, W. Engelke, and A. Gerndt, "A collaborative workspace architecture for strengthening collaboration among space scientists," in IEEE Aerospace Conference, (Big Sky, Montana, USA), 7-14 March 2015. [2] W. Engelke, "Mars Cartography VR System 2/3." German Aerospace Center (DLR), 2015. Project Deliverable D4.2. [3] E. Hivon, F. K. Hansen, and A. J. Banday, "The HEALPix primer," arXiv preprint astro-ph/9905275, 1999. [4] K. M. Gorski, E. Hivon, A. Banday, B. D. Wandelt, F. K. Hansen, M. Reinecke, and M. Bartelmann, "HEALPix: a framework for high-resolution discretization and fast analysis of data distributed on the sphere," The Astrophysical Journal, vol. 622, no. 2, p. 759, 2005. [5] R. Westerteiger, A. Gerndt, and B. Hamann, "Spherical terrain rendering using the hierarchical HEALPix grid," VLUDS, vol. 11, pp. 13-23, 2011. [6] W. Schroeder, K. Martin, and B. Lorensen, The Visualization Toolkit. Kitware, 4th ed., 2006. [7] T. van Reimersdahl, T. Kuhlen, A. Gerndt, J. Henrichs, and C. Bischof, "ViSTA: a multimodal, platform-independent VR-toolkit based on WTK, VTK, and MPI," in Proceedings of the 4th International Immersive Projection Technology Workshop (IPT), 2000.
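    The GPU ray casting mentioned in the abstract rests on standard front-to-back emission-absorption compositing along each viewing ray. A minimal CPU sketch of that accumulation step (a hypothetical helper, not the CROSS DRIVE code; the early-termination threshold is an assumption) might be:

    ```python
    def composite_ray(samples, step=1.0):
        """Front-to-back compositing of (color, opacity) samples ordered
        from the eye outward along one ray; returns accumulated (color, alpha)."""
        color, alpha = 0.0, 0.0
        for c, a in samples:
            a_eff = 1.0 - (1.0 - a) ** step      # opacity correction for step size
            color += (1.0 - alpha) * a_eff * c   # attenuate by what is already opaque
            alpha += (1.0 - alpha) * a_eff
            if alpha >= 0.99:                    # early ray termination
                break
        return color, alpha
    ```

    Adapting this to spherical coordinates, as the prototype does, changes how sample positions are generated along the ray, not the compositing itself.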

  19. Marshall Space Flight Center Telescience Resource Kit

    NASA Technical Reports Server (NTRS)

    Wade, Gina

    2016-01-01

    Telescience Resource Kit (TReK) is a suite of software applications that can be used to monitor and control assets in space or on the ground. The Telescience Resource Kit was originally developed for the International Space Station program. Since then it has been used to support a variety of NASA programs and projects including the WB-57 Ascent Vehicle Experiment (WAVE) project, the Fast Affordable Science and Technology Satellite (FASTSAT) project, and the Constellation Program. The Payloads Operations Center (POC), also known as the Payload Operations Integration Center (POIC), provides the capability for payload users to operate their payloads at their home sites. In this environment, TReK provides local ground support system services and an interface to utilize remote services provided by the POC. TReK provides ground system services for local and remote payload user sites including International Partner sites, Telescience Support Centers, and U.S. Investigator sites in over 40 locations worldwide. General capabilities:
    - Support for various data interfaces such as User Datagram Protocol, Transmission Control Protocol, and serial interfaces.
    - Data services: retrieve, process, record, play back, forward, and display data (ground-based data or telemetry data).
    - Command: create, modify, send, and track commands.
    - Command management: configure one TReK system to serve as a command server/filter for other TReK systems.
    - Database: databases are used to store telemetry and command definition information.
    - Application Programming Interface (API): ANSI C interface compatible with commercial products such as Visual C++, Visual Basic, LabVIEW, Borland C++, etc. The TReK API provides a bridge for users to develop software to access and extend TReK services.
    - Environments: development, test, simulations, training, and flight. Includes standalone training simulators.

  20. RIMS: An Integrated Mapping and Analysis System with Applications to Earth Sciences and Hydrology

    NASA Astrophysics Data System (ADS)

    Proussevitch, A. A.; Glidden, S.; Shiklomanov, A. I.; Lammers, R. B.

    2011-12-01

    A web-based information and computational system for analysis of spatially distributed Earth system, climate, and hydrologic data has been developed. The system supports visualization, data exploration, querying, manipulation, and arbitrary calculations with any loaded gridded or vector polygon dataset. The system's acronym, RIMS, stands for its core functionality as a Rapid Integrated Mapping System. The system can be deployed for global-scale projects as well as for regional hydrology and climatology studies. In particular, the Water Systems Analysis Group of the University of New Hampshire developed global and regional (Northern Eurasia, pan-Arctic) versions of the system with different map projections and specific data. The system has demonstrated its potential for applications in other fields of Earth sciences and in education. The key web server/client components of the framework include (a) a visualization engine built on open-source libraries (GDAL, PROJ.4, etc.) that are utilized in a MapServer; (b) multi-level data querying tools built on XML server-client communication protocols that allow downloading map data on-the-fly to a client web browser; and (c) data manipulation and grid-cell-level calculation tools that mimic desktop GIS software functionality via a web interface. Server-side data management is designed around a simple database of dataset metadata, facilitating mounting of new data to the system and maintaining existing data in an easy manner. RIMS contains built-in river network data that allows on-demand queries of upstream areas, which can be used for spatial data aggregation and analysis of sub-basin areas. RIMS is an ongoing effort and is currently used to serve a number of websites hosting a suite of hydrologic, environmental, and other GIS data.
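    The on-demand upstream-area query that RIMS's built-in river network supports can be illustrated with a toy graph walk; the cell-to-downstream-cell mapping below is a hypothetical representation, not RIMS's actual data model:

    ```python
    def upstream_cells(downstream, outlet):
        """Collect every cell that drains to `outlet`, given a mapping
        cell -> its downstream neighbour (outlet itself is included)."""
        up = {}                                   # invert: downstream -> upstream list
        for cell, dn in downstream.items():
            up.setdefault(dn, []).append(cell)
        basin, stack = set(), [outlet]
        while stack:                              # depth-first walk against the flow
            cell = stack.pop()
            if cell not in basin:
                basin.add(cell)
                stack.extend(up.get(cell, []))
        return basin
    ```

    Aggregating a gridded variable over the returned cell set then yields the kind of sub-basin statistics mentioned above.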

  1. [Evolutionary significance of reciprocal connections in the turtle tectofugal visual system].

    PubMed

    Kenigfest, N B; Belekhova, M G

    2009-01-01

    In two turtle species, Emys orbicularis and Testudo horsfieldi, anterograde and retrograde tracing at the light- and electron-microscopic levels demonstrated the existence of direct descending projections from the thalamic nucleus of the tectofugal visual system, n. rotundus (Rot), to the optic tectum. After injection of tracers into Rot alone, or into Rot with involvement of the tectothalamic tract (Trtth), occasional labeled fibers with varicosities and terminals were revealed predominantly in the deep sublayers of SGFS of the rostral optic tectum, and in smaller numbers in other tectal layers. After tracer injections into the optic tectum, a few retrogradely labeled neurons were found, mainly in the ventral parts of Rot and within Trtth. Their localization coincides with that of GABA-immunoreactive cells. Electron microscopy showed many retrogradely labeled dendrites throughout the whole of Rot; a few labeled cell bodies were also present, some of them also GABA-immunoreactive. These results demonstrate the existence of reciprocal connections between the optic tectum and Rot in turtles, connections that may affect the processing of visual information in the tectum. We suggest that reciprocity of tectothalamic connections may be an ancestral feature of the vertebrate brain; in the course of amniote evolution its functional significance may have decreased, or even been lost, in parallel with the rising role of direct corticotectal projections.

  2. A browse facility for Earth science remote sensing data: Center director's discretionary fund final report

    NASA Technical Reports Server (NTRS)

    Meyer, P. J.

    1993-01-01

    An image-data visual browse facility was developed for a UNIX platform using the X Window System (X11). It allows users to visually examine reduced-resolution image data to determine which data are applicable for further research. Links with a relational database manager then allow extraction of not only the full-resolution image data but also any other ancillary data related to the case study. Various techniques were examined for compressing the image data in order to reduce storage requirements and the time needed to transmit the data over the Internet. The data used were from the WetNet project.

  3. GlastCam: A Telemetry-Driven Spacecraft Visualization Tool

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric T.; Tsai, Dean

    2009-01-01

    Developed for the GLAST project, which is now the Fermi Gamma-ray Space Telescope, GlastCam software ingests telemetry from the Integrated Test and Operations System (ITOS) and generates four graphical displays of geometric properties in real time, allowing visual assessment of the attitude, configuration, position, and various cross-checks. Four windows are displayed: a "cam" window shows a 3D view of the satellite; a second window shows the standard position plot of the satellite on a Mercator map of the Earth; a third window displays star tracker fields of view, showing which stars are visible from the spacecraft in order to verify star tracking; and the fourth window depicts

  4. Immersive Visualization of the Solid Earth

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

    Immersive visualization using virtual reality (VR) display technology offers unique benefits for the visual analysis of complex three-dimensional data, such as tomographic images of the mantle, and of higher-dimensional data, such as computational geodynamics models of mantle convection or even planetary dynamos. Unlike "traditional" visualization, which has to project 3D scalar data or vectors onto a 2D screen for display, VR can display 3D data in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection and interfere with interpretation. As a result, researchers can apply their spatial reasoning skills to 3D data in the same way they can to real objects or environments, as well as to complex objects like vector fields. 3D Visualizer is an application to visualize 3D volumetric data, such as results from mantle convection simulations or seismic tomography reconstructions, using VR display technology and a strong focus on interactive exploration. Unlike other visualization software, 3D Visualizer does not present static visualizations, such as a set of cross-sections at pre-selected positions and orientations, but instead lets users ask questions of their data, for example by dragging a cross-section through the data's domain with their hands and seeing data mapped onto that cross-section in real time, or by touching a point inside the data domain and immediately seeing an isosurface connecting all points having the same data value as the touched point. Combined with tools allowing 3D measurements of positions, distances, and angles, and with annotation tools that allow free-hand sketching directly in 3D data space, the outcome of using 3D Visualizer is not primarily a set of pictures, but derived data to be used for subsequent analysis. 
3D Visualizer works best in virtual reality, either in high-end facility-scale environments such as CAVEs, or using commodity low-cost virtual reality headsets such as HTC's Vive. The recent emergence of high-quality commodity VR means that researchers can buy a complete VR system off the shelf, install it and the 3D Visualizer software themselves, and start using it for data analysis immediately.
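    Dragging a cross-section through a volume, as described above, reduces to resampling the scalar field at arbitrary fractional grid positions. A minimal trilinear-interpolation sketch (illustrative only; not 3D Visualizer's implementation) is:

    ```python
    import math

    def trilinear(grid, x, y, z):
        """Sample a 3-D scalar grid (nested lists, grid[i][j][k]) at a
        fractional position inside the grid by trilinear interpolation."""
        i, j, k = int(math.floor(x)), int(math.floor(y)), int(math.floor(z))
        fx, fy, fz = x - i, y - j, z - k
        g = lambda a, b, c: grid[a][b][c]
        # interpolate along x on the four cell edges...
        c00 = g(i, j, k) * (1 - fx) + g(i + 1, j, k) * fx
        c10 = g(i, j + 1, k) * (1 - fx) + g(i + 1, j + 1, k) * fx
        c01 = g(i, j, k + 1) * (1 - fx) + g(i + 1, j, k + 1) * fx
        c11 = g(i, j + 1, k + 1) * (1 - fx) + g(i + 1, j + 1, k + 1) * fx
        # ...then along y, then along z
        c0 = c00 * (1 - fy) + c10 * fy
        c1 = c01 * (1 - fy) + c11 * fy
        return c0 * (1 - fz) + c1 * fz
    ```

    An interactive slice is this sampler evaluated over a grid of points on the dragged plane; the isosurface-from-touched-point interaction likewise needs the interpolated value at the touched position as its iso-value.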

  5. Visualizing Sea Level Rise with Augmented Reality

    NASA Astrophysics Data System (ADS)

    Kintisch, E. S.

    2013-12-01

    Looking Glass is an application on the iPhone that visualizes in 3-D future scenarios of sea level rise, overlaid on live camera imagery in situ. Using a technology known as augmented reality, the app allows a layperson user to explore various scenarios of sea level rise using a visual interface. The user can then see, in an immersive, dynamic way, how those scenarios would affect a real place. The first part of the experience activates users' cognitive, quantitative thinking process, teaching them how global sea level rise, tides, and storm surge contribute to flooding; the second allows an emotional response to a striking visual depiction of possible future catastrophe. This project represents a partnership between a science journalist, MIT, and the Rhode Island School of Design, and the talk will touch on the lessons this project provides for structuring and executing such multidisciplinary efforts on future design projects.

  6. The Secret Club Project: Exploring Miscarriage through the Visual Arts.

    ERIC Educational Resources Information Center

    Seftel, Laura

    2001-01-01

    Examines art as a means to understand the physical and emotional loss of miscarriage. "The Secret Club Project," an innovative exhibit featuring 10 women artists' visual responses to miscarriage, is described. Rituals related to pregnancy loss are reviewed, as well as artists' and art therapists' use of the creative process to move…

  7. The Visual Identity Project

    ERIC Educational Resources Information Center

    Tennant-Gadd, Laurie; Sansone, Kristina Lamour

    2008-01-01

    Identity is the focus of the middle-school visual arts program at Cambridge Friends School (CFS) in Cambridge, Massachusetts. Sixth graders enter the middle school and design a personal logo as their first major project in the art studio. The logo becomes a way for students to introduce themselves to their teachers and to represent who they are…

  8. Ranked centroid projection: a data visualization approach with self-organizing maps.

    PubMed

    Yen, G G; Wu, Z

    2008-02-01

    The self-organizing map (SOM) is an efficient tool for visualizing high-dimensional data. In this paper, the clustering and visualization capabilities of the SOM, especially in the analysis of textual data, i.e., document collections, are reviewed and further developed. A novel clustering and visualization approach based on the SOM is proposed for the task of text mining. The proposed approach first transforms the document space into a multidimensional vector space by means of document encoding. Afterwards, a growing hierarchical SOM (GHSOM) is trained and used as a baseline structure to automatically produce maps with various levels of detail. Following the GHSOM training, the new projection method, namely the ranked centroid projection (RCP), is applied to project the input vectors to a hierarchy of 2-D output maps. The RCP is used as a data analysis tool as well as a direct interface to the data. In a set of simulations, the proposed approach is applied to an illustrative data set and two real-world scientific document collections to demonstrate its applicability.
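    The projection step can be illustrated as a rank-weighted centroid over the k best-matching map units; the 1/rank weighting and the function signature below are assumptions for illustration, not the paper's exact formulation:

    ```python
    def ranked_centroid_projection(x, codebook, coords, k=3):
        """Project input vector x onto the 2-D map as the rank-weighted
        centroid of the k best-matching units' grid coordinates.
        codebook: list of unit weight vectors; coords: matching (row, col)."""
        dist = [sum((xi - wi) ** 2 for xi, wi in zip(x, w)) for w in codebook]
        best = sorted(range(len(codebook)), key=dist.__getitem__)[:k]
        weights = [1.0 / (rank + 1) for rank in range(k)]  # 1, 1/2, 1/3, ...
        total = sum(weights)
        row = sum(w * coords[i][0] for w, i in zip(weights, best)) / total
        col = sum(w * coords[i][1] for w, i in zip(weights, best)) / total
        return row, col
    ```

    With k=1 this degenerates to the usual best-matching-unit projection; larger k spreads inputs between unit positions, giving a smoother, more continuous layout of the document collection.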

  9. A Multimedia Knowledge Representation for an "Intelligent" Computerized Tutor. Technical Report No. 142.

    ERIC Educational Resources Information Center

    Baggett, Patricia; Ehrenfeucht, Andrzej

    The intended end product of the research project described is an "intelligent" multimedia tutoring system for procedural tasks, in particular, the repair of physical objects. This paper presents the data structure that will be used, i.e., a graph with five types of nodes (mental, abstract, motoric or action, visual, and verbal) and two types of…

  10. Fast I/O for Massively Parallel Applications

    NASA Technical Reports Server (NTRS)

    O'Keefe, Matthew T.

    1996-01-01

    The two primary goals for this report were the design, construction, and modeling of parallel disk arrays for scientific visualization and animation, and a study of the I/O requirements of highly parallel applications. In addition, further work was performed on the parallel display systems required to project and animate the very high-resolution frames resulting from our supercomputing simulations in ocean circulation and compressible gas dynamics.

  11. The role of typography in differentiating look-alike/sound-alike drug names.

    PubMed

    Gabriele, Sandra

    2006-01-01

    Until recently, when errors occurred in the course of caring for patients, blame was assigned to the healthcare professionals closest to the incident rather than examining the larger system and the actions that led up to the event. Now, the medical profession is embracing expertise and methodologies used in other fields to improve its own systems in relation to patient safety issues. This exploratory study, part of a Master's of Design thesis project, was a response to the problem of errors that occur due to confusion between look-alike/sound-alike drug names (medication names that have orthographic and/or phonetic similarities). The study attempts to provide a visual means to help differentiate problematic names using formal typographic and graphic cues. The FDA's Name Differentiation Project recommendations and other typographic alternatives were considered to address issues of attention and cognition. Eleven acute care nurses participated in testing that consisted of word-recognition tasks and questions intended to elicit opinions regarding the visual treatment of look-alike/sound-alike names in the context of a label prototype. Though limited in sample size, testing provided insight into the kinds of typographic differentiation that might be effective in a high-risk situation.

  12. Flexible high-resolution display systems for the next generation of radiology reading rooms

    NASA Astrophysics Data System (ADS)

    Caban, Jesus J.; Wood, Bradford J.; Park, Adrian

    2007-03-01

    A flexible, scalable, high-resolution display system is presented to support the next generation of radiology reading rooms or interventional radiology suites. The project aims to create an environment for radiologists that will simultaneously facilitate image interpretation, analysis, and understanding while lowering visual and cognitive stress. Displays currently in use present radiologists with technical challenges in exploring complex datasets, challenges we seek to address. These include resolution and brightness, differences between display and ambient lighting, and degrees of complexity, in addition to side-by-side comparison of time-variant and 2D/3D images. We address these issues through a scalable projector-based system that uses our custom-designed geometric and photometric calibration process to create a seamless, bright, high-resolution display environment that can reduce the visual fatigue commonly experienced by radiologists. The system we have designed uses an array of casually aligned projectors to cooperatively increase overall resolution and brightness. Images from a set of projectors at their narrowest zoom are combined at a shared projection surface, thus increasing the global "pixels per inch" (PPI) of the display environment. Two primary challenges - geometric calibration and photometric calibration - remained to be resolved before our high-resolution display system could be used in a radiology reading room or procedure suite. In this paper we present a method that accomplishes those calibrations and creates a flexible high-resolution display environment that appears seamless, sharp, and uniform across different devices.
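    Geometric calibration of casually aligned projectors is commonly built on planar homographies relating each projector's pixel grid to the shared projection surface. As a hedged illustration of that standard building block, and not the authors' actual calibration pipeline, the sketch below estimates a homography from four point correspondences via the direct linear transform (DLT); all coordinates are made up.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src from >= 4 point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the homography is the null vector of this 2N x 9 system
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map 2-D points through H using homogeneous coordinates."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

# Projector pixel corners, and where they land on the shared screen (made up):
src = np.array([[0, 0], [1024, 0], [1024, 768], [0, 768]], dtype=float)
dst = np.array([[12, 7], [990, 20], [1000, 760], [5, 750]], dtype=float)
H = homography_dlt(src, dst)
```

    With one such homography per projector, each device's output can be pre-warped so the contributions blend into a single seamless image; photometric calibration then evens out brightness across the overlaps.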

  13. Ensembl 2004.

    PubMed

    Birney, E; Andrews, D; Bevan, P; Caccamo, M; Cameron, G; Chen, Y; Clarke, L; Coates, G; Cox, T; Cuff, J; Curwen, V; Cutts, T; Down, T; Durbin, R; Eyras, E; Fernandez-Suarez, X M; Gane, P; Gibbins, B; Gilbert, J; Hammond, M; Hotz, H; Iyer, V; Kahari, A; Jekosch, K; Kasprzyk, A; Keefe, D; Keenan, S; Lehvaslaiho, H; McVicker, G; Melsopp, C; Meidl, P; Mongin, E; Pettett, R; Potter, S; Proctor, G; Rae, M; Searle, S; Slater, G; Smedley, D; Smith, J; Spooner, W; Stabenau, A; Stalker, J; Storey, R; Ureta-Vidal, A; Woodwark, C; Clamp, M; Hubbard, T

    2004-01-01

    The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organize biology around the sequences of large genomes. It is a comprehensive and integrated source of annotation of large genome sequences, available via interactive website, web services or flat files. As well as being one of the leading sources of genome annotation, Ensembl is an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements. The facilities of the system range from sequence analysis to data storage and visualization and installations exist around the world both in companies and at academic sites. With a total of nine genome sequences available from Ensembl and more genomes to follow, recent developments have focused mainly on closer integration between genomes and external data.

  14. Modeling Urban Energy Savings Scenarios Using Earth System Microclimate and Urban Morphology

    NASA Astrophysics Data System (ADS)

    Allen, M. R.; Rose, A.; New, J. R.; Yuan, J.; Omitaomu, O.; Sylvester, L.; Branstetter, M. L.; Carvalhaes, T. M.; Seals, M.; Berres, A.

    2017-12-01

    We analyze and quantify the relationships among climatic conditions, urban morphology, population, land cover, and energy use so that these relationships can be used to inform energy-efficient urban development and planning. We integrate different approaches across three research areas: earth system modeling; impacts, adaptation and vulnerability; and urban planning in order to address three major gaps in the existing capability in these areas: i) neighborhood resolution modeling and simulation of urban micrometeorological processes and their effect on and from regional climate; ii) projections for future energy use under urbanization and climate change scenarios identifying best strategies for urban morphological development and energy savings; iii) analysis and visualization tools to help planners optimally use these projections.

  15. Computer-aided light sheet flow visualization using photogrammetry

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1994-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and a visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) results, was chosen to interactively display the reconstructed light sheet images with the numerical surface geometry for the model or aircraft under study. The photogrammetric reconstruction technique and the image processing and computer graphics techniques and equipment are described. Results of the computer-aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images with CFD solutions in the same graphics environment is also demonstrated.
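    The geometric core of the reconstruction described above, projecting a 2-D light sheet image into 3-D from known camera and light sheet geometry, amounts to casting a ray through each image pixel of a calibrated pinhole camera and intersecting it with the light sheet plane. The sketch below illustrates that ray-plane intersection under an idealized camera model; every number and name here is illustrative, not the actual NASA calibration.

```python
import numpy as np

def pixel_to_3d(pixel, f, cam_pos, R, plane_point, plane_normal):
    """Intersect the viewing ray through `pixel` with the light-sheet plane.

    pixel       -- (u, v) image coordinates relative to the principal point
    f           -- focal length in the same units as the pixel coordinates
    cam_pos, R  -- camera center (world frame) and camera-to-world rotation
    """
    u, v = pixel
    d = R @ np.array([u, v, f], dtype=float)   # ray direction in world frame
    d /= np.linalg.norm(d)
    # ray/plane intersection: cam_pos + t*d lies on the plane
    t = ((plane_point - cam_pos) @ plane_normal) / (d @ plane_normal)
    return cam_pos + t * d

cam_pos = np.array([0.0, 0.0, 0.0])
R = np.eye(3)                                  # camera looks along +z
plane_point = np.array([0.0, 0.0, 2.0])        # light sheet: the plane z = 2
plane_normal = np.array([0.0, 0.0, 1.0])
point = pixel_to_3d((10.0, -5.0), 100.0, cam_pos, R, plane_point, plane_normal)
```

    Applying this per pixel places the whole video frame in 3-D space, where it can be displayed alongside the numerical surface geometry.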

  16. Computer-Aided Light Sheet Flow Visualization

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1993-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.

  17. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive real-time 3D graphical display. In a program, 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array-processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL and runs on Windows, Linux, and Macintosh.
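    The division of labor described above, a purely computational loop updating positions while the Visual module renders them, can be illustrated with the physics half alone. The sketch below is plain Python with the graphics stripped out so it runs without a display; in an actual VPython program the same loop would assign the updated position to a sphere object while Visual redraws the scene in a separate thread. The function name and constants are ours, for illustration only.

```python
def bounce(y0=2.0, v0=0.0, g=-9.8, dt=0.01, steps=200):
    """Euler-integrate a ball bouncing elastically on the floor at y = 0."""
    y, v = y0, v0
    trajectory = []
    for _ in range(steps):
        v += g * dt          # gravity accelerates the ball downward
        y += v * dt
        if y < 0.0:          # reflect off the floor without energy loss
            y, v = -y, -v
        trajectory.append(y)
    return trajectory

traj = bounce()              # 2 seconds of simulated motion
```

    This is the sense in which VPython programs are "purely computational": the loop contains only the algorithm, and the rendering is implicit.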

  18. Applications of Optical Coherence Tomography in Pediatric Clinical Neuroscience

    PubMed Central

    Avery, Robert A.; Rajjoub, Raneem D.; Trimboli-Heidler, Carmelina; Waldman, Amy T.

    2015-01-01

    For nearly two centuries, the ophthalmoscope has permitted examination of the retina and optic nerve—the only axons directly visualized by the physician. The retinal ganglion cells project their axons, which travel along the innermost retina to form the optic nerve, marking the beginning of the anterior visual pathway. Both the structure and function of the visual pathway are essential components of the neurologic examination as it can be involved in numerous acquired, congenital and genetic central nervous system conditions. The development of optical coherence tomography now permits the pediatric neuroscientist to visualize and quantify the optic nerve and retinal layers with unprecedented resolution. As optical coherence tomography becomes more accessible and integrated into research and clinical care, the pediatric neuroscientist may have the opportunity to utilize and/or interpret results from this device. This review describes the basic technical features of optical coherence tomography and highlights its potential clinical and research applications in pediatric clinical neuroscience including optic nerve swelling, optic neuritis, tumors of the visual pathway, vigabatrin toxicity, nystagmus, and neurodegenerative conditions. PMID:25803824

  19. Applications of optical coherence tomography in pediatric clinical neuroscience.

    PubMed

    Avery, Robert A; Rajjoub, Raneem D; Trimboli-Heidler, Carmelina; Waldman, Amy T

    2015-04-01

    For nearly two centuries, the ophthalmoscope has permitted examination of the retina and optic nerve—the only axons directly visualized by the physician. The retinal ganglion cells project their axons, which travel along the innermost retina to form the optic nerve, marking the beginning of the anterior visual pathway. Both the structure and function of the visual pathway are essential components of the neurologic examination as it can be involved in numerous acquired, congenital and genetic central nervous system conditions. The development of optical coherence tomography now permits the pediatric neuroscientist to visualize and quantify the optic nerve and retinal layers with unprecedented resolution. As optical coherence tomography becomes more accessible and integrated into research and clinical care, the pediatric neuroscientist may have the opportunity to utilize and/or interpret results from this device. This review describes the basic technical features of optical coherence tomography and highlights its potential clinical and research applications in pediatric clinical neuroscience including optic nerve swelling, optic neuritis, tumors of the visual pathway, vigabatrin toxicity, nystagmus, and neurodegenerative conditions. Georg Thieme Verlag KG Stuttgart · New York.

  20. Computer-aided light sheet flow visualization

    NASA Technical Reports Server (NTRS)

    Stacy, Kathryn; Severance, Kurt; Childers, Brooks A.

    1993-01-01

    A computer-aided flow visualization process has been developed to analyze video images acquired from rotating and translating light sheet visualization systems. The computer process integrates a mathematical model for image reconstruction, advanced computer graphics concepts, and digital image processing to provide a quantitative and visual analysis capability. The image reconstruction model, based on photogrammetry, uses knowledge of the camera and light sheet locations and orientations to project two-dimensional light sheet video images into three-dimensional space. A sophisticated computer visualization package, commonly used to analyze computational fluid dynamics (CFD) data sets, was chosen to interactively display the reconstructed light sheet images, along with the numerical surface geometry for the model or aircraft under study. A description is provided of the photogrammetric reconstruction technique, and the image processing and computer graphics techniques and equipment. Results of the computer aided process applied to both a wind tunnel translating light sheet experiment and an in-flight rotating light sheet experiment are presented. The capability to compare reconstructed experimental light sheet images and CFD solutions in the same graphics environment is also demonstrated.

  1. Application of Advanced Wide Area Early Warning Systems with Adaptive Protection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumstein, Carl; Cibulka, Lloyd; Thorp, James

    2014-09-30

    Recent blackouts of power systems in North America and throughout the world have shown how critical a reliable power system is to modern societies, and the enormous economic and societal damage a blackout can cause. It has been noted that unanticipated operation of protection systems can contribute to cascading phenomena and, ultimately, blackouts. This project developed and field-tested two methods of Adaptive Protection systems utilizing synchrophasor data. One method detects conditions of system stress that can lead to unintended relay operation, and initiates a supervisory signal to modify relay response in real time to avoid false trips. The second method detects the possibility of false trips of impedance relays as stable system swings “encroach” on the relays’ impedance zones, and produces an early warning so that relay engineers can re-evaluate relay settings. In addition, real-time synchrophasor data produced by this project was used to develop advanced visualization techniques for display of synchrophasor data to utility operators and engineers.
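    The "encroachment" idea in the second method can be sketched in toy form: compute the apparent impedance Z = V/I from synchrophasor voltage and current phasors, and flag when that point falls inside a relay's impedance zone, idealized here as a circle in the R-X plane. Real mho characteristics and relay settings are far more involved; every value below is illustrative, not the project's.

```python
import cmath

def apparent_impedance(v_phasor, i_phasor):
    """Apparent impedance seen by the relay, as a complex number R + jX."""
    return v_phasor / i_phasor

def encroaches(z, center, radius):
    """True if the impedance point lies inside the (idealized) circular zone."""
    return abs(z - center) < radius

v = cmath.rect(66400.0, 0.0)    # 66.4 kV phase voltage at 0 rad (made up)
i = cmath.rect(800.0, -0.5)     # 800 A, lagging by 0.5 rad (made up)
z = apparent_impedance(v, i)    # |Z| = 83 ohms
alarm = encroaches(z, center=40 + 40j, radius=50.0)
```

    During a stable swing the apparent impedance traces a path across the R-X plane; an early-warning scheme watches that trajectory and raises a flag before it reaches the relay's tripping zone.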

  2. Preserving anonymity in e-voting system using voter non-repudiation oriented scheme

    NASA Astrophysics Data System (ADS)

    Hamid, Isredza Rahmi A.; Radzi, Siti Nafishah Md; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Abdullah, Nurul Azma

    2017-10-01

    The voting system has developed from the traditional paper ballot to electronic voting (e-voting). E-voting has high potential to be widely used in election events. However, e-voting systems still do not meet two of the most important security properties, voter authenticity and non-repudiation, because voters can simply vote again by entering another person's identification number. In this project, an electronic voting system using a voter non-repudiation oriented scheme was developed. The system contains ten modules: log in, vote session, voter, candidate, open session, voting results, user account, initial score, logs, and reset vote count. To ensure there is no non-repudiation issue, a voter non-repudiation oriented scheme is adapted and implemented in the system. The system was built using Microsoft Visual Studio 2013 and can only be accessed from personal computers at the voting center. This project will be beneficial for future use in overcoming the non-repudiation issue.

  3. WorldWide Telescope: A Newly Open Source Astronomy Visualization System

    NASA Astrophysics Data System (ADS)

    Fay, Jonathan; Roberts, Douglas A.

    2016-01-01

    After eight years of development by Microsoft Research, WorldWide Telescope (WWT) was made an open source project at the end of June 2015. WWT was motivated by the desire to put new surveys of objects, such as the Sloan Digital Sky Survey, in the context of the night sky. The development of WWT under Microsoft started with the creation of a Windows desktop client that is widely used in various education, outreach, and research projects. With it, users can explore the data built into WWT as well as data that is loaded in. Beyond exploration, WWT can be used to create tours that present various datasets in a narrative format. In the past two years, the team developed a collection of web controls, including an HTML5 web client, which contains much of the functionality of the Windows desktop client. The project under Microsoft has deep connections with several user communities, such as education through the WWT Ambassadors program (http://wwtambassadors.org/) and planetariums and museums such as the Adler Planetarium. WWT can also support research, including its use to visualize the Bones of the Milky Way, and there are rich connections between WWT and the Astrophysics Data System (ADS, http://labs.adsabs.harvard.edu/adsabs/). One important new research connection is the use of WWT to create dynamic and potentially interactive supplements to journal articles, the first of which were created in 2015. WWT is now an open source, community-led project. The source code is available on GitHub (https://github.com/WorldWideTelescope), there is significant developer documentation on the website (http://worldwidetelescope.org/Developers/), and an extensive developer workshop (http://wwtworkshops.org/?tribe_events=wwt-developer-workshop) took place in the fall of 2015. Now that WWT is open source, anyone with interest in the project can be a contributor. As important as helping with coding, the project needs people interested in documentation, testing, training, and other roles.

  4. Quick Fix for Managing Risks

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Under a Phase II SBIR contract, Kennedy and Lumina Decision Systems, Inc., jointly developed the Schedule and Cost Risk Analysis Modeling (SCRAM) system, based on a version of Lumina's flagship software product, Analytica(R). Acclaimed as "the best single decision-analysis program yet produced" by MacWorld magazine, Analytica is a "visual" tool used in decision-making environments worldwide to build, revise, and present business models, minus the time-consuming difficulty commonly associated with spreadsheets. With Analytica as their platform, Kennedy and Lumina created the SCRAM system in response to NASA's need to identify the importance of major delays in Shuttle ground processing, a critical function in project management and process improvement. As part of the SCRAM development project, Lumina designed a version of Analytica called the Analytica Design Engine (ADE) that can be easily incorporated into larger software systems. ADE was commercialized and utilized in many other developments, including web-based decision support.

  5. Multimodal Perception and Multicriterion Control of Nested Systems. 3; A Functional Visual Assessment Test for Human Health Maintenance and Countermeasure Evaluation

    NASA Technical Reports Server (NTRS)

    Riccio, Gary E.; McDonald, P. Vernon; Bloomberg, Jacob

    1999-01-01

    Our theoretical and empirical research on whole-body coordination during locomotion led to a Phase 1 SBIR grant from NASA JSC. The purpose of the SBIR grant was to design an innovative system for evaluating eye-head-trunk coordination during whole-body perturbations that are characteristic of locomotion. The approach we used to satisfy the Phase 1 objectives was based on a structured methodology for the development of human-systems technology. Accordingly, the project was broken down into a number of tasks and subtasks. In sequence, the major tasks were: (1) identify needs for functional assessment of visual acuity under conditions involving whole-body perturbation within the NASA Space Medical Monitoring and Countermeasures (SMMaC) program and in other related markets; (2) analyze the needs into the causes and symptoms of impaired visual acuity under conditions involving whole-body perturbation; (3) translate the analyzed needs into technology requirements for the Functional Visual Assessment Test (FVAT); (4) identify candidate technology solutions and implementations of FVAT; and (5) prioritize and select technology solutions. The work conducted in these tasks is described in this final volume of the series on Multimodal Perception and Multicriterion Control of Nested Systems. While prior volumes (1 and 2) in the series focus on theoretical foundations and novel data-analytic techniques, this volume addresses technology that is necessary for minimally intrusive data collection and near-real-time data analysis and display.

  6. Functional and structural comparison of visual lateralization in birds – similar but still different

    PubMed Central

    Ströckens, Felix

    2014-01-01

    Vertebrate brains display physiological and anatomical left-right differences, which are related to hemispheric dominances for specific functions. Functional lateralizations likely rely on structural left-right differences in intra- and interhemispheric connectivity patterns that develop in tight gene-environment interactions. The visual systems of chickens and pigeons show that asymmetrical light stimulation during ontogeny induces a dominance of the left hemisphere for visuomotor control that is paralleled by projection asymmetries within the ascending visual pathways. But structural asymmetries vary essentially between both species concerning the affected pathway (thalamo- vs. tectofugal system), constancy of effects (transient vs. permanent), and the hemisphere receiving stronger bilateral input (right vs. left). These discrepancies suggest that at least two aspects of visual processes are influenced by asymmetric light stimulation: (1) visuomotor dominance develops within the ontogenetically stronger stimulated hemisphere but not necessarily in the one receiving stronger bottom-up input. As a secondary consequence of asymmetrical light experience, lateralized top-down mechanisms play a critical role in the emergence of hemispheric dominance. (2) Ontogenetic light experiences may affect the dominant use of left- and right-hemispheric strategies. Evidence from social and spatial cognition tasks indicates that chickens rely more on a right-hemispheric global strategy whereas pigeons display a dominance of the left hemisphere. Thus, behavioral asymmetries are linked to a stronger bilateral input to the right hemisphere in chickens but to the left one in pigeons. The degree of bilateral visual input may determine the dominant visual processing strategy when redundant encoding is possible. This analysis supports the idea that environmental stimulation affects the balance between hemisphere-specific processing by lateralized interactions of bottom-up and top-down systems. PMID:24723898

  7. Astroaccesible: Bringing the study of the Universe to the visually impaired

    NASA Astrophysics Data System (ADS)

    Pérez-Montero, E.; García Gómez-Caro, E.; Sánchez Molina, Y.; Ortiz-Gil, A.; López de Lacalle, S.; Tamayo, A.

    2017-03-01

    Astroaccesible is an outreach project carried out in collaboration with the IAA-CSIC and ONCE to make astronomy more accessible to visually impaired people, so that the main source of information is not based on the use of images. The activities of the project started in 2014, and since then it has received financial support from the SEA in 2015 and from FECYT in 2016, making it possible to extend the activity to many ONCE centres in Spain. The activities include in-person classes using suitable descriptions, high-contrast images for those people with residual vision, and tactile materials representing basic concepts about the sizes, scales, and distances of astronomical bodies. To maximize the impact of the project, many of its contents, summaries of activities, and links to resources are available through the project web page. Although focused on astronomy, this project is also intended to make the scientific community more aware of the need for accessible explanations of their results.

  8. Design and application of BIM based digital sand table for construction management

    NASA Astrophysics Data System (ADS)

    Fuquan, JI; Jianqiang, LI; Weijia, LIU

    2018-05-01

    This paper explores the design and application of a BIM-based digital sand table for construction management. Given the demands and features of construction management planning for bridge and tunnel engineering, the key functional features of the digital sand table should include three-dimensional GIS, model navigation, virtual simulation, information layers, and data exchange. These involve the technologies of 3D visualization and 4D virtual simulation with BIM, breakdown structures for the BIM model and project data, multi-dimensional information layers, and multi-source data acquisition and interaction. Overall, the digital sand table is a visual and virtual engineering-information integrated terminal under a unified data standard system. Its applications include visual construction schemes, virtual construction schedules, and the monitoring of construction. Finally, the applicability of several basic software packages to the digital sand table is analyzed.

  9. Seeing the Invisible: Educating the Public on Planetary Magnetic Fields and How they Affect Atmospheres

    NASA Astrophysics Data System (ADS)

    Fillingim, M. O.; Brain, D. A.; Peticolas, L. M.; Schultz, G.; Yan, D.; Guevara, S.; Randol, S.

    2009-12-01

    Magnetic fields and charged particles are difficult for school children, the general public, and scientists alike to visualize. But studies of planetary magnetospheres and ionospheres have broad implications for planetary evolution, from the deep interior to the ancient climate, that are important to communicate to each of these audiences. This presentation will highlight the visualization materials that we are developing to educate audiences about the magnetic fields of planets and how they affect atmospheres. The visualization materials consist of simplified data sets that can be displayed on spherical projection systems, and portable 3-D rigid models of planetary magnetic fields. We are developing presentations for science museums and classrooms that relate fundamental information about the Martian magnetic field, how it differs from Earth’s, and why the differences are significant.

  10. NaviCell Web Service for network-based data visualization.

    PubMed

    Bonnet, Eric; Viara, Eric; Kuperstein, Inna; Calzone, Laurence; Cohen, David P A; Barillot, Emmanuel; Zinovyev, Andrei

    2015-07-01

    Data visualization is an essential element of biological research, required for obtaining insights and formulating new hypotheses on mechanisms of health and disease. NaviCell Web Service is a tool for network-based visualization of 'omics' data which implements several data visual representation methods and utilities for combining them together. NaviCell Web Service uses Google Maps and semantic zooming to browse large biological network maps, represented in various formats, together with different types of the molecular data mapped on top of them. For achieving this, the tool provides standard heatmaps, barplots and glyphs as well as the novel map staining technique for grasping large-scale trends in numerical values (such as whole transcriptome) projected onto a pathway map. The web service provides a server mode, which allows automating visualization tasks and retrieving data from maps via RESTful (standard HTTP) calls. Bindings to different programming languages are provided (Python and R). We illustrate the purpose of the tool with several case studies using pathway maps created by different research groups, in which data visualization provides new insights into molecular mechanisms involved in systemic diseases such as cancer and neurodegenerative diseases. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. BiNA: A Visual Analytics Tool for Biological Network Data

    PubMed Central

    Gerasch, Andreas; Faber, Daniel; Küntzer, Jan; Niermann, Peter; Kohlbacher, Oliver; Lenhof, Hans-Peter; Kaufmann, Michael

    2014-01-01

    Interactive visual analysis of biological high-throughput data in the context of the underlying networks is an essential task in modern biomedicine with applications ranging from metabolic engineering to personalized medicine. The complexity and heterogeneity of data sets require flexible software architectures for data analysis. Concise and easily readable graphical representation of data and interactive navigation of large data sets are essential in this context. We present BiNA - the Biological Network Analyzer - a flexible open-source software for analyzing and visualizing biological networks. Highly configurable visualization styles for regulatory and metabolic network data offer sophisticated drawings and intuitive navigation and exploration techniques using hierarchical graph concepts. The generic projection and analysis framework provides powerful functionalities for visual analyses of high-throughput omics data in the context of networks, in particular for the differential analysis and the analysis of time series data. A direct interface to an underlying data warehouse provides fast access to a wide range of semantically integrated biological network databases. A plugin system allows simple customization and integration of new analysis algorithms or visual representations. BiNA is available under the 3-clause BSD license at http://bina.unipax.info/. PMID:24551056

  12. NaviCell Web Service for network-based data visualization

    PubMed Central

    Bonnet, Eric; Viara, Eric; Kuperstein, Inna; Calzone, Laurence; Cohen, David P. A.; Barillot, Emmanuel; Zinovyev, Andrei

    2015-01-01

    Data visualization is an essential element of biological research, required for obtaining insights and formulating new hypotheses on mechanisms of health and disease. NaviCell Web Service is a tool for network-based visualization of ‘omics’ data which implements several data visual representation methods and utilities for combining them together. NaviCell Web Service uses Google Maps and semantic zooming to browse large biological network maps, represented in various formats, together with different types of the molecular data mapped on top of them. For achieving this, the tool provides standard heatmaps, barplots and glyphs as well as the novel map staining technique for grasping large-scale trends in numerical values (such as whole transcriptome) projected onto a pathway map. The web service provides a server mode, which allows automating visualization tasks and retrieving data from maps via RESTful (standard HTTP) calls. Bindings to different programming languages are provided (Python and R). We illustrate the purpose of the tool with several case studies using pathway maps created by different research groups, in which data visualization provides new insights into molecular mechanisms involved in systemic diseases such as cancer and neurodegenerative diseases. PMID:25958393
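    The server mode described above accepts visualization commands and data queries as ordinary HTTP (RESTful) calls. A minimal sketch of how a client might assemble such a call; the endpoint path and parameter names below are illustrative placeholders, not the actual NaviCell API:

```python
import json
from urllib.parse import urlencode

# Hypothetical sketch: encode one visualization command for a NaviCell-like
# server as an HTTP GET URL. Endpoint and parameter names are invented.
def build_command(server_url, module, action, args):
    """Serialize a command's arguments as JSON and URL-encode the query."""
    query = urlencode({
        "module": module,
        "action": action,
        "args": json.dumps(args),
    })
    return f"{server_url}/commands?{query}"

url = build_command(
    "https://example.org/navicell",
    module="heatmap",
    action="set_data",
    args={"gene": "TP53", "value": 2.4},
)
```

A language binding (such as the Python or R bindings mentioned in the abstract) would wrap calls like this behind ordinary functions so that visualization tasks can be scripted.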

  13. A system for environmental model coupling and code reuse: The Great Rivers Project

    NASA Astrophysics Data System (ADS)

    Eckman, B.; Rice, J.; Treinish, L.; Barford, C.

    2008-12-01

    As part of the Great Rivers Project, IBM is collaborating with The Nature Conservancy and the Center for Sustainability and the Global Environment (SAGE) at the University of Wisconsin-Madison to build a Modeling Framework and Decision Support System (DSS) designed to help policy makers and a variety of stakeholders (farmers, fish and wildlife managers, hydropower operators, et al.) assess, come to consensus on, and act on land use decisions representing effective compromises between human use and ecosystem preservation/restoration. Initially focused on Brazil's Paraguay-Parana, China's Yangtze, and the Mississippi Basin in the US, the DSS integrates data and models from a wide variety of environmental sectors, including water balance, water quality, carbon balance, crop production, hydropower, and biodiversity. In this presentation we focus on the modeling framework aspect of this project. In our approach to these and other environmental modeling projects, we see a flexible, extensible modeling framework infrastructure for defining and running multi-step analytic simulations as critical. In this framework, we divide monolithic models into atomic components with clearly defined semantics, encoded via rich metadata. Once models, their semantics, and composition rules have been registered with the system by their authors or other experts, non-expert users may construct simulations as workflows of these atomic model components. A model composition engine enforces rules/constraints for composing model components into simulations, to avoid the creation of "Frankenmodels": models that execute but produce scientifically invalid results. A common software environment and common representations of data and models are required, as well as an adapter strategy for code written in, e.g., Fortran or Python, that still enables efficient simulation runs, including parallelization.
Since each new simulation, as a new composition of model components, requires calibration of parameters (fudge factors) to produce scientifically valid results, we are also developing an autocalibration engine. Finally, visualization is a key element of this modeling framework strategy, both to convey complex scientific data effectively, and also to enable non-expert users to make full use of the relevant features of the framework. We are developing a visualization environment with a strong data model, to enable visualizations, model results, and data all to be handled similarly.
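    The composition rules described above can be illustrated with a toy validator: a workflow is accepted only if every component's required inputs are produced, with matching units, by earlier steps or the initial data. All component names, variables, and units below are invented for illustration; the real framework's metadata is far richer:

```python
from dataclasses import dataclass

@dataclass
class Component:
    """One atomic model component with metadata on its interface."""
    name: str
    inputs: dict   # variable name -> expected unit
    outputs: dict  # variable name -> produced unit

def validate_workflow(components, initial_data):
    """Accept a workflow only if each step's inputs are satisfied upstream."""
    available = dict(initial_data)  # variable -> unit currently available
    for comp in components:
        for var, unit in comp.inputs.items():
            if available.get(var) != unit:
                return False  # missing input or unit mismatch
        available.update(comp.outputs)
    return True

water = Component("water_balance", {"precip": "mm/day"}, {"runoff": "m3/s"})
quality = Component("water_quality", {"runoff": "m3/s"}, {"nitrate": "mg/L"})

ok = validate_workflow([water, quality], {"precip": "mm/day"})
bad = validate_workflow([quality, water], {"precip": "mm/day"})
```

A mismatch anywhere in the chain rejects the workflow, which is the essence of preventing Frankenmodels that run but produce invalid output.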

  14. Dark focus of accommodation as dependent and independent variables in visual display technology

    NASA Technical Reports Server (NTRS)

    Jones, Sherrie; Kennedy, Robert; Harm, Deborah

    1992-01-01

    When inadequate stimuli are available for accommodation, as in the dark or under low-contrast conditions, the lens seeks its resting position. Individual differences in resting position are reliable, under autonomic control, and can change with visual task demands. We hypothesized that motion sickness in a flight simulator might result in dark focus changes. Method: Subjects received training flights in three different Navy flight simulators. Two were helicopter simulators that entailed CRT presentation using infinity optics; one involved a dome presentation from a computer-graphic visual projection system. Results: In all three experiments there were significant differences in dark focus activity before and after simulator exposure when comparisons were made between sick and not-sick pilot subjects. In two of these experiments, the average shift in dark focus for the sick subjects was toward increased myopia when each subject was compared to his own baseline. In the third experiment, the group showed a small average outward shift, and the subjects who were sick showed significantly less outward movement than those who were symptom free. Conclusions: Although the relationship is not a simple one, dark focus changes in simulator sickness imply parasympathetic activity. Because changes can occur in relation to endogenous and exogenous events, such measurements may have useful applications as dependent measures in studies of visually coupled systems, virtual reality systems, and space adaptation syndrome.
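    The per-subject baseline comparison described in the Results can be sketched as follows (diopter values are invented; this is not the study's data or analysis code):

```python
from statistics import mean

# Each subject's post-exposure dark focus is compared with that subject's
# own pre-exposure baseline; group means are then contrasted.
def mean_shift(pre, post):
    """Average within-subject change (positive = shift toward myopia)."""
    return mean(b - a for a, b in zip(pre, post))

# Invented demo values in diopters.
sick_pre, sick_post = [1.2, 1.0, 1.4], [1.5, 1.4, 1.7]
well_pre, well_post = [1.1, 1.3, 0.9], [1.1, 1.2, 1.0]

sick_shift = mean_shift(sick_pre, sick_post)   # clear myopic shift
well_shift = mean_shift(well_pre, well_post)   # near zero on average
```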

  15. Use of Open Standards and Technologies at the Lunar Mapping and Modeling Project

    NASA Astrophysics Data System (ADS)

    Law, E.; Malhotra, S.; Bui, B.; Chang, G.; Goodale, C. E.; Ramirez, P.; Kim, R. M.; Sadaqathulla, S.; Rodriguez, L.

    2011-12-01

    The Lunar Mapping and Modeling Project (LMMP), led by the Marshall Space Flight Center (MSFC), is tasked by NASA with developing an information system to support lunar exploration activities. It provides lunar explorers a set of tools and lunar map and model products that are predominantly derived from present lunar missions (e.g., the Lunar Reconnaissance Orbiter (LRO)) and from historical missions (e.g., Apollo). At the Jet Propulsion Laboratory (JPL), we have built the LMMP interoperable geospatial information system's underlying infrastructure and a single point of entry, the LMMP Portal, by employing a number of open standards and technologies. The Portal exposes a set of services that allow users to search, visualize, subset, and download lunar data managed by the system. Users also have access to a set of tools that visualize, analyze, and annotate the data. The infrastructure and Portal are based on a web-service-oriented architecture. We designed the system to support solar system bodies in general, including asteroids, Earth, and the planets. We employed a combination of custom software, commercial and open-source components, off-the-shelf hardware, and pay-by-use cloud computing services. The use of open standards and web service interfaces facilitates platform- and application-independent access to the services and data, offering, for instance, iPad and Android mobile applications and large-screen multi-touch displays with 3-D terrain viewing functions, for a rich browsing and analysis experience from a variety of platforms. The web services made use of open standards including Representational State Transfer (REST) and the Open Geospatial Consortium (OGC)'s Web Map Service (WMS), Web Coverage Service (WCS), and Web Feature Service (WFS).
Its data management services have been built on top of a set of open technologies including: Object Oriented Data Technology (OODT), an open-source data catalog, archive, file management, and data grid framework; OpenSSO, an open-source access management and federation platform; Solr, an open-source enterprise search platform; Redmine, an open-source project collaboration and management framework; GDAL, an open-source geospatial data abstraction library; and others. Its data products are compliant with the Federal Geographic Data Committee (FGDC) metadata standard. This standardization allows users to access the data products via custom-written applications or off-the-shelf applications such as Google Earth. We will demonstrate this ready-to-use system for data discovery and visualization by walking through the data services provided through the portal, such as browse, search, and other tools. We will further demonstrate image viewing and layering of lunar map images from the Internet via mobile devices such as Apple's iPad.
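    Of the open standards listed above, OGC WMS is the one a browsing client exercises most directly. A sketch of assembling a standard WMS 1.3.0 GetMap request; the server URL and layer name are placeholders, while the query parameters themselves are defined by the WMS specification:

```python
from urllib.parse import urlencode

# Build an OGC WMS 1.3.0 GetMap URL such as a map portal client might issue.
def wms_getmap_url(base_url, layer, bbox, width, height):
    params = {
        "SERVICE": "WMS",          # mandatory service identifier
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": "EPSG:4326",        # coordinate reference system
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return f"{base_url}?{urlencode(params)}"

url = wms_getmap_url("https://example.org/wms", "lunar_basemap",
                     (-90, -180, 90, 180), 1024, 512)
```

Because the request is a plain HTTP GET, the same URL scheme works from desktop clients, mobile applications, and large-screen viewers alike.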

  16. Visual Discourse in Scientific Conference Papers: A Genre-based Study.

    ERIC Educational Resources Information Center

    Rowley-Jolivet, Elizabeth

    2002-01-01

    Investigates the role of visual communication in a spoken research genre: the scientific conference paper. Analyzes 2,048 visuals projected during 90 papers given at five international conferences in three fields (geology, medicine, physics), in order to bring out the recurrent features of the visual dimension. (Author/VWL)

  17. Walking simulator for evaluation of ophthalmic devices

    NASA Astrophysics Data System (ADS)

    Barabas, James; Woods, Russell L.; Peli, Eli

    2005-03-01

    Simulating mobility tasks in a virtual environment reduces risk for research subjects and allows for improved experimental control and measurement. We are currently using a simulated shopping mall environment (where subjects walk on a treadmill in front of a large projected video display) to evaluate a number of ophthalmic devices developed at the Schepens Eye Research Institute for people with vision impairment, particularly visual field defects. We have conducted experiments to study subjects' perception of "safe passing distance" when walking toward stationary obstacles. Subjects' binary responses about potential collisions are analyzed by fitting a psychometric function, which gives an estimate of the perceived safe passing distance and the variability of responses. The system also enables simulations of visual field defects using head and eye tracking, enabling better understanding of the impact of visual field loss. Technical infrastructure for our simulated walking environment includes a custom eye and head tracking system, a gait feedback system to adjust treadmill speed, and a handheld 3-D pointing device. Images are generated by a graphics workstation, which contains a model with photographs of storefronts from an actual shopping mall, where concurrent validation experiments are being conducted.
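    The analysis described above, fitting a psychometric function to binary collision judgments, can be sketched with a small maximum-likelihood fit (this is not the authors' code, and the data values are invented):

```python
import math

# Fit a logistic psychometric function P(report "would collide" | distance)
# by maximum likelihood over a coarse parameter grid, then read off the
# 50% point as the perceived safe passing distance.
def fit_psychometric(distances, responses):
    """Grid-search MLE for logistic threshold (mu) and slope scale (s)."""
    def loglik(mu, s):
        ll = 0.0
        for d, r in zip(distances, responses):
            p = 1.0 / (1.0 + math.exp((d - mu) / s))  # P(collide) falls with d
            p = min(max(p, 1e-9), 1 - 1e-9)           # guard against log(0)
            ll += math.log(p) if r else math.log(1 - p)
        return ll
    return max(
        ((mu / 10, s / 10) for mu in range(1, 30) for s in range(1, 20)),
        key=lambda ms: loglik(*ms),
    )

# Invented demo data: judgments flip from "collide" to "clear" near 1.35 m.
dists = [0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5]
resp = [1, 1, 1, 1, 0, 0, 0, 0]
mu, s = fit_psychometric(dists, resp)
```

The fitted threshold mu is the distance at which a collision is reported half the time, i.e., the estimate of perceived safe passing distance; the slope scale s reflects response variability.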

  18. Professor Eric Can't See: A Project-Based Learning Case for Neurobiology Students.

    PubMed

    Ogilvie, Judith Mosinger; Ribbens, Eric

    2016-01-01

    "Professor Eric Can't See" is a semi-biographical case study written for an upper level undergraduate Neurobiology of Disease course. The case is integrated into a unit using a project-based learning approach to investigate the retinal degenerative disorder Retinitis pigmentosa and the visual system. Some case study scenes provide specific questions for student discussion and problem-based learning, while others provide background for student inquiry and related active learning exercises. The case was adapted from "'Chemical Eric' Can't See," and could be adapted for courses in general neuroscience or sensory neuroscience.

  19. Project Orion: A Design Study of a System for Detecting Extrasolar Planets

    NASA Technical Reports Server (NTRS)

    Black, D. C. (Editor)

    1980-01-01

    A design concept for a ground-based astrometric telescope that could significantly increase the potential accuracy of astrometric observations is considered. The state of current techniques and instrumentation is examined in the context of detecting extrasolar planets. Emphasis is placed on the direct detection of extrasolar planets at either visual or infrared wavelengths. The design concept of the imaging stellar interferometer (ISI), developed under Project Orion, is described. The Orion ISI employs state-of-the-art technology and is theoretically capable of attaining 0.00010 arc sec/yr accuracy in relative astrometric observations.

  20. Grist : grid-based data mining for astronomy

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Katz, Daniel S.; Miller, Craig D.; Walia, Harshpreet; Williams, Roy; Djorgovski, S. George; Graham, Matthew J.; Mahabal, Ashish; Babu, Jogesh; Berk, Daniel E. Vanden

    2004-01-01

    The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
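    The workflow idea above, chaining independent services over shared data, can be sketched in miniature (the service names and catalog values below are invented for illustration):

```python
# Toy workflow runner in the spirit of a workflow system driving a chain of
# data services: each step consumes the previous step's output.
def run_pipeline(data, steps):
    for step in steps:
        data = step(data)
    return data

def subset(records):
    """Subsetting service: keep only bright sources (magnitude < 20)."""
    return [r for r in records if r["mag"] < 20.0]

def extract_redshift(records):
    """Extraction service: pull out the redshift column."""
    return [r["z"] for r in records]

catalog = [
    {"mag": 19.1, "z": 4.2},
    {"mag": 21.3, "z": 0.3},
    {"mag": 18.0, "z": 5.1},
]
high_z = run_pipeline(catalog, [subset, extract_redshift])
```

In a grid setting each step would be a remote service call rather than a local function, but the composition pattern is the same.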
