Sample records for enabling direct visualization

  1. Route Network Construction with Location-Direction-Enabled Photographs

    NASA Astrophysics Data System (ADS)

    Fujita, Hideyuki; Sagara, Shota; Ohmori, Tadashi; Shintani, Takahiko

    2018-05-01

    We propose a method for constructing a geometric graph for generating routes that summarize a geographical area and also have visual continuity, using a set of location-direction-enabled photographs. A location-direction-enabled photograph is a photograph that carries information about the location (position of the camera at the time of shooting) and the direction (direction of the camera at the time of shooting). Each node of the graph corresponds to a location-direction-enabled photograph. The location of each node is the location of the corresponding photograph, and a route on the graph corresponds both to a route in the geographic area and to a sequence of photographs. The proposed graph is constructed to represent characteristic spots and the paths linking them, and it can be regarded as a kind of spatial summarization of the area with the photographs. We therefore call the routes on the graph spatial summary routes. Each route on the proposed graph also has visual continuity, meaning that the spatial relationship among consecutive photographs on the route (moving forward, moving backward, turning right, etc.) remains understandable. In this study, a route was defined to have visual continuity when the changes in shooting position and shooting direction between consecutive photographs stayed within given thresholds. By presenting the photographs in order along a generated route, information can be presented sequentially while largely maintaining visual continuity.
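
    The edge rule described above, connecting two photograph nodes only when the change in shooting position and shooting direction stays within thresholds, can be sketched in a few lines. The following Python snippet is an illustrative sketch, not the authors' implementation; the coordinates, headings, and threshold values are invented for the example.

```python
# Minimal sketch (not the authors' algorithm): connect photo nodes when the
# change in shooting position and direction stays within given thresholds,
# so any path through the graph keeps visual continuity.
import math
from itertools import combinations

# hypothetical records: (id, x, y, heading_degrees)
photos = [
    ("p1", 0.0, 0.0, 90.0),
    ("p2", 5.0, 1.0, 95.0),
    ("p3", 9.0, 2.0, 110.0),
    ("p4", 50.0, 40.0, 270.0),
]

MAX_DIST = 10.0      # metres between shooting positions (assumed threshold)
MAX_TURN = 30.0      # degrees between shooting directions (assumed threshold)

def heading_diff(a, b):
    """Smallest absolute angular difference in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

edges = []
for (id1, x1, y1, h1), (id2, x2, y2, h2) in combinations(photos, 2):
    dist = math.hypot(x2 - x1, y2 - y1)
    if dist <= MAX_DIST and heading_diff(h1, h2) <= MAX_TURN:
        edges.append((id1, id2))

print(edges)  # [('p1', 'p2'), ('p1', 'p3'), ('p2', 'p3')]
```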

  2. Beyond Control Panels: Direct Manipulation for Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Bradel, Lauren; North, Chris

    2013-07-19

    Information Visualization strives to provide visual representations through which users can think about and gain insight into information. By leveraging the visual and cognitive systems of humans, complex relationships and phenomena occurring within datasets can be uncovered by exploring information visually. Interaction metaphors for such visualizations are designed to give users direct control over the filters, queries, and other parameters controlling how the data is visually represented. Through the evolution of information visualization, more complex mathematical and data analytic models are being used to visualize relationships and patterns in data – creating the field of Visual Analytics. However, the expectations for how users interact with these visualizations have remained largely unchanged – focused primarily on the direct manipulation of parameters of the underlying mathematical models. In this article we present an opportunity to evolve the methodology for user interaction from the direct manipulation of parameters through visual control panels to interactions designed specifically for visual analytic systems. Instead of focusing on traditional direct manipulation of mathematical parameters, the evolution of the field can be realized through direct manipulation within the visual representation – where users can not only gain insight, but also interact. This article describes future directions and research challenges that fundamentally change the meaning of direct manipulation with regard to visual analytics, advancing the Science of Interaction.

  3. How Ants Use Vision When Homing Backward.

    PubMed

    Schwarz, Sebastian; Mangan, Michael; Zeil, Jochen; Webb, Barbara; Wystrach, Antoine

    2017-02-06

    Ants can navigate over long distances between their nest and food sites using visual cues [1, 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3-5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2, 6]. Can ants use their visual memories of the terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator [5] but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward drop their food on occasion, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility and communication between different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories. VIDEO ABSTRACT. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  4. Brief Report: Just-in-Time Visual Supports to Children with Autism via the Apple Watch:® A Pilot Feasibility Study.

    PubMed

    O'Brien, Amanda; Schlosser, Ralf W; Shane, Howard C; Abramson, Jennifer; Allen, Anna A; Flynn, Suzanne; Yu, Christina; Dimery, Katherine

    2016-12-01

    Using augmented input might be an effective means for supplementing spoken language for children with autism who have difficulties following spoken directives. This study aimed to (a) explore whether just-in-time (JIT)-delivered scene cues (photos, video clips) via the Apple Watch® enable children with autism to carry out directives they were unable to implement with speech alone, and (b) test the feasibility of the Apple Watch® (with a focus on display size). Results indicated that the hierarchical JIT supports enabled five children with autism to carry out the majority of directives. Hence, the relatively small display size of the Apple Watch does not seem to hinder children with autism from gleaning critical information from visual supports.

  5. Constructive, Collaborative, Contextual, and Self-Directed Learning in Surface Anatomy Education

    ERIC Educational Resources Information Center

    Bergman, Esther M.; Sieben, Judith M.; Smailbegovic, Ida; de Bruin, Anique B. H.; Scherpbier, Albert J. J. A.; van der Vleuten, Cees P. M.

    2013-01-01

    Anatomy education often consists of a combination of lectures and laboratory sessions, the latter frequently including surface anatomy. Studying surface anatomy enables students to elaborate on their knowledge of the cadaver's static anatomy by enabling the visualization of structures, especially those of the musculoskeletal system, move and…

  6. The role of visual and direct force feedback in robotics-assisted mitral valve annuloplasty.

    PubMed

    Currie, Maria E; Talasaz, Ali; Rayman, Reiza; Chu, Michael W A; Kiaii, Bob; Peters, Terry; Trejos, Ana Luisa; Patel, Rajni

    2017-09-01

    The objective of this work was to determine the effect of both direct force feedback and visual force feedback on the amount of force applied to mitral valve tissue during ex vivo robotics-assisted mitral valve annuloplasty. A force feedback-enabled master-slave surgical system was developed to provide both visual and direct force feedback during robotics-assisted cardiac surgery. This system measured the amount of force applied by novice and expert surgeons to cardiac tissue during ex vivo mitral valve annuloplasty repair. The addition of visual (2.16 ± 1.67 N), direct (1.62 ± 0.86 N), or both visual and direct force feedback (2.15 ± 1.08 N) resulted in a lower mean maximum force applied to mitral valve tissue while suturing compared with no force feedback (3.34 ± 1.93 N; P < 0.05). To achieve better control of interaction forces on cardiac tissue during robotics-assisted mitral valve annuloplasty suturing, force feedback may be required. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Visualizing Transcranial Direct Current Stimulation (tDCS) in vivo using Magnetic Resonance Imaging

    NASA Astrophysics Data System (ADS)

    Jog, Mayank Anant

    Transcranial Direct Current Stimulation (tDCS) is a low-cost, non-invasive neuromodulation technique that has been shown to treat clinical symptoms as well as improve cognition. However, at the time of this research, no techniques existed to visualize tDCS currents in vivo. This dissertation presents the theoretical framework and experimental implementations of a novel MRI technique that enables non-invasive visualization of the tDCS electric current using magnetic field mapping. The first chapter establishes the feasibility of measuring magnetic fields induced by tDCS currents. The following chapter discusses the state-of-the-art implementation that can measure magnetic field changes in individual subjects undergoing concurrent tDCS/MRI. The final chapter discusses how the developed technique was integrated with BOLD fMRI, an established MRI technique for measuring brain function. By enabling a concurrent measurement of the tDCS current-induced magnetic field as well as the brain's hemodynamic response to tDCS, our technique opens a new avenue to investigate tDCS mechanisms and improve targeting.

  8. Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics.

    PubMed

    Stolper, Charles D; Perer, Adam; Gotz, David

    2014-12-01

    As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and to provide interactions that support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.
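
    The core mechanism sketched in this abstract, an analytic that emits meaningful partial results the analyst can inspect and steer before completion, can be illustrated with a small generator. The snippet below is a minimal sketch of that pattern in Python, not the Progressive Insights system; the event sequences and chunk size are invented.

```python
# Minimal sketch of the progressive-results pattern (not the Progressive
# Insights system): the analytic yields a meaningful partial result after
# each chunk, so a visualization layer could render it and let the analyst
# re-prioritise or stop before the full computation finishes.
from collections import Counter

def progressive_pattern_counts(event_sequences, chunk_size=2):
    """Yield the running top-3 event counts after every processed chunk."""
    counts = Counter()
    for start in range(0, len(event_sequences), chunk_size):
        for seq in event_sequences[start:start + chunk_size]:
            counts.update(seq)
        yield counts.most_common(3)   # partial result, refined each iteration

sequences = [["admit", "lab", "discharge"],
             ["admit", "imaging", "lab"],
             ["admit", "lab", "lab"],
             ["imaging", "discharge"]]

for partial in progressive_pattern_counts(sequences):
    print(partial)   # a UI would update with each refinement
```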

  9. Interaction Junk: User Interaction-Based Evaluation of Visual Analytic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; North, Chris

    2012-10-14

    With the growing need for visualization to aid users in understanding large, complex datasets, the ability for users to interact with and explore these datasets is critical. As visual analytic systems have advanced to leverage powerful computational models and data analytics capabilities, the modes by which users engage and interact with the information are limited. Often, users are taxed with directly manipulating parameters of these models through traditional GUIs (e.g., using sliders to directly manipulate the value of a parameter). However, the purpose of user interaction in visual analytic systems is to enable visual data exploration – where users can focus on their task, as opposed to the tool or system. As a result, users can engage freely in data exploration and decision-making, for the purpose of gaining insight. In this position paper, we discuss how evaluating visual analytic systems can be approached through user interaction analysis, where the goal is to minimize the cognitive translation between the visual metaphor and the mode of interaction (i.e., reducing the "interaction junk"). We motivate this concept through a discussion of traditional GUIs used in visual analytics for direct manipulation of model parameters, and the importance of designing interactions that support visual data exploration.

  10. ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.

    PubMed

    Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y

    2008-08-12

    New systems biology studies require researchers to understand how interplay among myriads of biomolecular entities is orchestrated in order to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform driven by database management systems to perform bi-directional data processing-to-visualizations with declarative querying capabilities is needed. We developed ProteoLens as a Java-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both the Data Definition Language (DDL) and the Data Manipulation Language (DML) may be specified. The robust query languages embedded directly within the visualization software help users to bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network-represented data in standard Graph Modeling Language (GML) formats, and this enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens enables the de-coupling of complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms, descriptions, etc.; 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges according to associated data values. We demonstrated the advantages of these new capabilities through three biological network visualization case studies: a human disease association network, a drug-target interaction network and a protein-peptide mapping network. The architectural design of ProteoLens makes it suitable for bioinformatics expert data analysts who are experienced with relational database management to perform large-scale integrated network visual explorations. ProteoLens is a promising visual analytic platform that will facilitate knowledge discovery in future network and systems biology studies.
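
    The two-phase decoupling described above (first define network data association rules, then apply them to annotate nodes before visualization) can be mocked up with plain dictionaries. The Python sketch below is illustrative only and does not use the ProteoLens API; the gene names, expression values, and network are invented.

```python
# Illustrative sketch of the two-phase idea described above (not the
# ProteoLens API): first define association rules mapping node IDs to data
# attributes, then apply them to annotate a network before visualization.

# Phase 1: association rules — node ID -> attribute values (toy data)
expression = {"TP53": 2.4, "MDM2": 0.7, "BRCA1": 1.1}
annotation = {"TP53": "tumor suppressor", "MDM2": "E3 ubiquitin ligase"}

# A simple network: node -> neighbours
network = {"TP53": ["MDM2", "BRCA1"], "MDM2": ["TP53"], "BRCA1": ["TP53"]}

# Phase 2: apply the rules to build annotated nodes for rendering
annotated_nodes = {
    node: {
        "expression": expression.get(node),
        "function": annotation.get(node, "unknown"),
        "degree": len(neighbours),
    }
    for node, neighbours in network.items()
}

for node, attrs in annotated_nodes.items():
    print(node, attrs)   # a renderer would map these values to colour/size
```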

  11. Experimenter's Laboratory for Visualized Interactive Science

    NASA Technical Reports Server (NTRS)

    Hansen, Elaine R.; Rodier, Daniel R.; Klemp, Marjorie K.

    1994-01-01

    ELVIS (Experimenter's Laboratory for Visualized Interactive Science) is an interactive visualization environment that enables scientists, students, and educators to visualize and analyze large, complex, and diverse sets of scientific data. It accomplishes this by presenting the data sets as 2-D, 3-D, color, stereo, and graphic images with movable and multiple light sources combined with displays of solid-surface, contours, wire-frame, and transparency. By simultaneously rendering diverse data sets acquired from multiple sources, formats, and resolutions and by interacting with the data through an intuitive, direct-manipulation interface, ELVIS provides an interactive and responsive environment for exploratory data analysis.

  12. Wristwatch dosimeter

    DOEpatents

    Wolf, Michael A.; Waechter, David A.; Umbarger, C. John

    1986-01-01

    The disclosure is directed to a wristwatch dosimeter utilizing a CdTe detector, a microprocessor and an audio and/or visual alarm. The dosimeter is entirely housable within a conventional digital watch case having an additional aperture enabling the detector to receive radiation.

  13. How visual cues for when to listen aid selective auditory attention.

    PubMed

    Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G

    2012-06-01

    Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.

  14. Quantifying and visualizing variations in sets of images using continuous linear optimal transport

    NASA Astrophysics Data System (ADS)

    Kolouri, Soheil; Rohde, Gustavo K.

    2014-03-01

    Modern advancements in imaging devices have enabled us to explore the subcellular structure of living organisms and extract vast amounts of information. However, interpreting the biological information mined in the captured images is not a trivial task. Utilizing predetermined numerical features is usually the only hope for quantifying this information. Nonetheless, direct visual or biological interpretation of results obtained from these selected features is non-intuitive and difficult. In this paper, we describe an automatic method for modeling visual variations in a set of images, which allows for direct visual interpretation of the most significant differences, without the need for predefined features. The method is based on a linearized version of the continuous optimal transport (OT) metric, which provides a natural linear embedding for the image data set, in which a linear combination of images leads to a visually meaningful image. This enables us to apply linear geometric data analysis techniques such as principal component analysis and linear discriminant analysis in the linearly embedded space and visualize the most prominent modes, as well as the most discriminant modes of variation, in the dataset. Using the continuous OT framework, we are able to analyze variations in shape and texture in a set of images utilizing each image at full resolution, which otherwise cannot be done with existing methods. The proposed method is applied to a set of nuclei images segmented from Feulgen-stained liver tissues in order to investigate the major visual differences in chromatin distribution of Fetal-Type Hepatoblastoma (FHB) cells compared to normal cells.
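
    The analysis step described above, applying linear techniques such as PCA once every image has a linear embedding, can be sketched as follows. The snippet assumes the embedding has already been computed (random vectors stand in for embedded images) and is not the authors' implementation of the continuous linear optimal transport map.

```python
# Sketch of the analysis step only: once each image has a linear OT embedding
# (a fixed-length vector), ordinary PCA exposes the dominant modes of
# variation. The embedding itself is not reproduced here; random vectors
# stand in for embedded images.
import numpy as np

rng = np.random.default_rng(0)
embedded = rng.normal(size=(40, 100))   # 40 images, 100-dim embeddings (toy)

centered = embedded - embedded.mean(axis=0)
# SVD-based PCA: rows of vt are principal directions in the embedded space
u, s, vt = np.linalg.svd(centered, full_matrices=False)

explained = (s ** 2) / np.sum(s ** 2)
print("variance explained by first 3 modes:", explained[:3])

# Because the embedding is linear, each principal direction can be mapped
# back to image space and rendered as a visually meaningful "mode" image.
mode_1 = vt[0]           # first mode of variation in embedding coordinates
print("first mode shape:", mode_1.shape)
```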

  15. A magnetic tether system to investigate visual and olfactory mediated flight control in Drosophila.

    PubMed

    Duistermars, Brian J; Frye, Mark

    2008-11-21

    It has been clear for many years that insects use visual cues to stabilize their heading in a wind stream. Many animals track odors carried in the wind. As such, visual stabilization of upwind tracking directly aids in odor tracking. But do olfactory signals directly influence visual tracking behavior independently from wind cues? Also, the recent deluge of research on the neurophysiology and neurobehavioral genetics of olfaction in Drosophila has motivated ever more technically sophisticated and quantitative behavioral assays. Here, we modified a magnetic tether system originally devised for vision experiments by equipping the arena with narrow laminar flow odor plumes. A fly is glued to a small steel pin and suspended in a magnetic field that enables it to yaw freely. Small diameter food odor plumes are directed downward over the fly's head, eliciting stable tracking by a hungry fly. Here we focus on the critical mechanics of tethering, aligning the magnets, devising the odor plume, and confirming stable odor tracking.

  16. Linking Automated Data Analysis and Visualization with Applications in Developmental Biology and High-Energy Physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruebel, Oliver

    2009-11-20

    Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data, and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable, for the first time, measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The MATLAB-based analysis framework and the visualization have been integrated, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics. To gain insight into the complex physical processes of particle acceleration, physicists model LWFAs computationally. The datasets produced by LWFA simulations are (i) extremely large, (ii) of varying spatial and temporal resolution, (iii) heterogeneous, and (iv) high-dimensional, making analysis and knowledge discovery from complex LWFA simulation data a challenging task. To address these challenges, this thesis describes the integration of the visualization system VisIt and the state-of-the-art index/query system FastBit, enabling interactive visual exploration of extremely large three-dimensional particle datasets. Researchers are especially interested in beams of high-energy particles formed during the course of a simulation. This thesis describes novel methods for automatic detection and analysis of particle beams, enabling a more accurate and efficient data analysis process. By integrating these automated analysis methods with visualization, this research enables more accurate, efficient, and effective analysis of LWFA simulation data than previously possible.
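
    The select-then-visualize workflow described above (query an indexed particle dataset for candidates of interest, then pass only that subset to visualization or beam analysis) can be approximated conceptually with boolean masks. The sketch below uses NumPy as a stand-in for the VisIt/FastBit pipeline; the particle data and the momentum threshold are invented.

```python
# Conceptual sketch of the select-then-visualize workflow (numpy boolean
# masks stand in for the FastBit index/query system): pick the high-momentum
# particles from a large simulated particle set, then hand only that subset
# to a plotting or beam-analysis step.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x, y, z = rng.uniform(-1, 1, (3, n))          # particle positions (toy data)
px = rng.normal(0.0, 1.0, n)                  # longitudinal momentum (toy)

# "Query": particles whose momentum exceeds a threshold of interest
beam_mask = px > 3.0
beam = np.column_stack((x[beam_mask], y[beam_mask], z[beam_mask], px[beam_mask]))

print(f"selected {beam.shape[0]} of {n} particles for visualization")
```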

  17. Wrist-watch dosimeter

    DOEpatents

    Wolf, M.A.; Waechter, D.A.; Umbarger, C.J.

    1982-04-16

    The disclosure is directed to a wristwatch dosimeter utilizing a CdTe detector, a microprocessor and an audio and/or visual alarm. The dosimeter is entirely housable within a conventional digital watch case having an additional aperture enabling the detector to receive radiation.

  18. Wristwatch dosimeter

    DOEpatents

    Wolf, M.A.; Waechter, D.A.; Umbarger, C.J.

    1986-08-26

    The disclosure is directed to a wristwatch dosimeter utilizing a CdTe detector, a microprocessor and an audio and/or visual alarm. The dosimeter is entirely housable within a conventional digital watch case having an additional aperture enabling the detector to receive radiation. 10 figs.

  19. Simple device for the direct visualization of oral-cavity tissue fluorescence

    NASA Astrophysics Data System (ADS)

    Lane, Pierre M.; Gilhuly, Terence; Whitehead, Peter D.; Zeng, Haishan; Poh, Catherine; Ng, Samson; Williams, Michelle; Zhang, Lewei; Rosin, Miriam; MacAulay, Calum E.

    2006-03-01

    Early identification of high-risk disease could greatly reduce both mortality and morbidity due to oral cancer. We describe a simple handheld device that facilitates the direct visualization of oral-cavity fluorescence for the detection of high-risk precancerous and early cancerous lesions. Blue excitation light (400 to 460 nm) is employed to excite green-red fluorescence from fluorophores in the oral tissues. Tissue fluorescence is viewed directly along an optical axis collinear with the axis of excitation to reduce inter- and intraoperator variability. This robust, field-of-view device enables the direct visualization of fluorescence in the context of surrounding normal tissue. Results from a pilot study of 44 patients are presented. Using histology as the gold standard, the device achieves a sensitivity of 98% and specificity of 100% when discriminating normal mucosa from severe dysplasia/carcinoma in situ (CIS) or invasive carcinoma. We envisage this device as a suitable adjunct for oral cancer screening, biopsy guidance, and margin delineation.

  20. Neural Circuit to Integrate Opposing Motions in the Visual Field.

    PubMed

    Mauss, Alex S; Pankova, Katarina; Arenz, Alexander; Nern, Aljoscha; Rubin, Gerald M; Borst, Alexander

    2015-07-16

    When navigating in their environment, animals use visual motion cues as feedback signals that are elicited by their own motion. Such signals are provided by wide-field neurons sampling motion directions at multiple image points as the animal maneuvers. Each one of these neurons responds selectively to a specific optic flow-field representing the spatial distribution of motion vectors on the retina. Here, we describe the discovery of a group of local, inhibitory interneurons in the fruit fly Drosophila that are key for filtering these cues. Using anatomy, molecular characterization, activity manipulation, and physiological recordings, we demonstrate that these interneurons convey direction-selective inhibition to wide-field neurons with the opposite preferred direction and provide evidence for how their connectivity enables the computation required for integrating opposing motions. Our results indicate that, rather than sharpening directional selectivity per se, these circuit elements reduce noise by eliminating non-specific responses to complex visual information. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. 3DView: Space physics data visualizer

    NASA Astrophysics Data System (ADS)

    Génot, V.; Beigbeder, L.; Popescu, D.; Dufourg, N.; Gangloff, M.; Bouchemit, M.; Caussarieu, S.; Toniutti, J.-P.; Durand, J.; Modolo, R.; André, N.; Cecconi, B.; Jacquey, C.; Pitout, F.; Rouillard, A.; Pinto, R.; Erard, S.; Jourdane, N.; Leclercq, L.; Hess, S.; Khodachenko, M.; Al-Ubaidi, T.; Scherf, M.; Budnik, E.

    2018-04-01

    3DView creates visualizations of space physics data in their original 3D context. Time series, vectors, dynamic spectra, celestial body maps, magnetic field or flow lines, and 2D cuts in simulation cubes are among the variety of data representations enabled by 3DView. It offers direct connections to several large databases and uses VO standards; it also allows the user to upload data. 3DView's versatility covers a wide range of space physics contexts.

  2. The answer is blowing in the wind: free-flying honeybees can integrate visual and mechano-sensory inputs for making complex foraging decisions.

    PubMed

    Ravi, Sridhar; Garcia, Jair E; Wang, Chun; Dyer, Adrian G

    2016-11-01

    Bees navigate in complex environments using visual, olfactory and mechano-sensorial cues. In the lowest region of the atmosphere, the wind environment can be highly unsteady and bees employ fine motor-skills to enhance flight control. Recent work reveals sophisticated multi-modal processing of visual and olfactory channels by the bee brain to enhance foraging efficiency, but it currently remains unclear whether wind-induced mechano-sensory inputs are also integrated with visual information to facilitate decision making. Individual honeybees were trained in a linear flight arena with appetitive-aversive differential conditioning to use a context-setting cue of 3 m s⁻¹ cross-wind direction to enable decisions about either a 'blue' or 'yellow' star stimulus being the correct alternative. Colour stimuli properties were mapped in bee-specific opponent-colour spaces to validate saliency, and to thus enable rapid reverse learning. Bees were able to integrate mechano-sensory and visual information to facilitate decisions that were significantly different to chance expectation after 35 learning trials. An independent group of bees were trained to find a single rewarding colour that was unrelated to the wind direction. In these trials, wind was not used as a context-setting cue and served only as a potential distracter in identifying the relevant rewarding visual stimuli. Comparison between respective groups shows that bees can learn to integrate visual and mechano-sensory information in a non-elemental fashion, revealing an unsuspected level of sensory processing in honeybees, and adding to the growing body of knowledge on the capacity of insect brains to use multi-modal sensory inputs in mediating foraging behaviour. © 2016. Published by The Company of Biologists Ltd.

  3. The primary visual cortex in the neural circuit for visual orienting

    NASA Astrophysics Data System (ADS)

    Zhaoping, Li

    The primary visual cortex (V1) is traditionally viewed as remote from influencing the brain's motor outputs. However, V1 provides the most abundant cortical inputs directly to the sensory layers of the superior colliculus (SC), a midbrain structure that commands visual orienting such as shifting gaze and turning the head. I will show physiological, anatomical, and behavioral data suggesting that V1 transforms visual input into a saliency map to guide a class of visual orienting that is reflexive or involuntary. In particular, V1 receives a retinotopic map of visual features, such as the orientation, color, and motion direction of local visual inputs; local interactions between V1 neurons perform a local-to-global computation to arrive at a saliency map that highlights conspicuous visual locations with higher V1 responses. The conspicuous locations are usually, but not always, where the visual input statistics change. The population of V1 outputs to the SC, which is also retinotopic, enables the SC to locate, by lateral inhibition between SC neurons, the most salient location as the saccadic target. Experimental tests of this hypothesis will be shown. Variations of the neural circuit for visual orienting across animal species, with more or less V1 involvement, will be discussed. Supported by the Gatsby Charitable Foundation.
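
    The computational claim in this abstract, that a saliency map formed from V1 feature responses plus a winner-take-all step in the SC suffices to select a saccade target, can be caricatured in a few lines. The snippet below is a toy illustration of that hypothesis, not a biophysical model; the grid size, feature channels, and responses are invented.

```python
# Cartoon of the hypothesis above, not a biophysical model: V1 responses to
# local features form a saliency map (here, the maximum response across
# feature channels at each location), and a winner-take-all step, standing in
# for lateral inhibition in SC, selects the saccade target.
import numpy as np

rng = np.random.default_rng(2)
# toy V1 responses: 3 feature channels (orientation, colour, motion) on a grid
responses = rng.random((3, 20, 20))
responses[1, 12, 5] += 2.0        # one conspicuous location in the colour map

saliency = responses.max(axis=0)  # V1 output: highest response per location
row, col = np.unravel_index(np.argmax(saliency), saliency.shape)

print("saccade target (row, col):", int(row), int(col))   # 12 5
```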

  4. Ovis: A Framework for Visual Analysis of Ocean Forecast Ensembles.

    PubMed

    Höllt, Thomas; Magdy, Ahmed; Zhan, Peng; Chen, Guoning; Gopalakrishnan, Ganesh; Hoteit, Ibrahim; Hansen, Charles D; Hadwiger, Markus

    2014-08-01

    We present a novel integrated visualization system that enables interactive visual analysis of ensemble simulations of the sea surface height that is used in ocean forecasting. The position of eddies can be derived directly from the sea surface height, and our visualization approach enables their interactive exploration and analysis. The behavior of eddies is important in different application settings, of which we present two in this paper. First, we show an application for interactive planning of placement as well as operation of off-shore structures using real-world ensemble simulation data of the Gulf of Mexico. Off-shore structures, such as those used for oil exploration, are vulnerable to hazards caused by eddies, and the oil and gas industry relies on ocean forecasts for efficient operations. We enable analysis of the spatial domain, as well as the temporal evolution, for planning the placement and operation of structures. Eddies are also important for marine life. They transport water over large distances and with it also heat and other physical properties as well as biological organisms. In the second application we present the usefulness of our tool, which could be used for planning the paths of autonomous underwater vehicles, so-called gliders, for marine scientists to study simulation data of the largely unexplored Red Sea.
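
    A simplified stand-in for the eddy-extraction idea mentioned above (eddy positions derived directly from the sea surface height) is to treat pronounced local extrema of the SSH field as candidate eddy centres. The snippet below illustrates only that step on an invented field; a real system would apply it per ensemble member and track centres over time.

```python
# Simplified stand-in for the eddy-detection idea above: treat pronounced
# local maxima of the sea surface height (SSH) field as candidate eddy
# centres. The SSH field and thresholds here are invented.
import numpy as np
from scipy.ndimage import maximum_filter

rng = np.random.default_rng(3)
ssh = rng.normal(0.0, 0.02, (64, 64))            # toy SSH field in metres
ssh[20, 30] += 0.3                               # an anticyclonic-looking bump

# local maxima within a 9x9 neighbourhood that also exceed a height threshold
local_max = (ssh == maximum_filter(ssh, size=9)) & (ssh > 0.15)
centres = np.argwhere(local_max)

print("candidate eddy centres:", centres.tolist())   # [[20, 30]]
```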

  5. Brief Report: Just-in-Time Visual Supports to Children with Autism via the Apple Watch®: A Pilot Feasibility Study

    ERIC Educational Resources Information Center

    O'Brien, Amanda; Schlosser, Ralf W.; Shane, Howard C.; Abramson, Jennifer; Allen, Anna A.; Flynn, Suzanne; Yu, Christina; Dimery, Katherine

    2016-01-01

    Using augmented input might be an effective means for supplementing spoken language for children with autism who have difficulties following spoken directives. This study aimed to (a) explore whether JIT-delivered scene cues (photos, video clips) via the Apple Watch® enable children with autism to carry out directives they were unable to implement…

  6. Application of advanced computing techniques to the analysis and display of space science measurements

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Lapolla, M. V.; Horblit, B.

    1995-01-01

    A prototype system has been developed to aid the experimental space scientist in the display and analysis of spaceborne data acquired from direct measurement sensors in orbit. We explored the implementation of a rule-based environment for semi-automatic generation of visualizations that assist the domain scientist in exploring their data. The goal has been to enable rapid generation of visualizations which enhance the scientist's ability to thoroughly mine their data. Transferring the task of visualization generation from the human programmer to the computer produced a rapid prototyping environment for visualizations. The visualization and analysis environment has been tested against a set of data obtained from the Hot Plasma Composition Experiment on the AMPTE/CCE satellite, creating new visualizations which provided new insight into the data.

  7. Phospholipid micelle-based magneto-plasmonic nanoformulation for magnetic field-directed, imaging-guided photo-induced cancer therapy.

    PubMed

    Ohulchanskyy, Tymish Y; Kopwitthaya, Atcha; Jeon, Mansik; Guo, Moran; Law, Wing-Cheung; Furlani, Edward P; Kim, Chulhong; Prasad, Paras N

    2013-11-01

    We present a magnetoplasmonic nanoplatform combining gold nanorods (GNR) and iron-oxide nanoparticles within phospholipid-based polymeric nanomicelles (PGRFe). The gold nanorods exhibit plasmon resonance absorbance at near infrared wavelengths to enable photoacoustic imaging and photothermal therapy, while the Fe3O4 nanoparticles enable magnetophoretic control of the nanoformulation. The fabricated nanoformulation can be directed and concentrated by an external magnetic field, which provides enhancement of a photoacoustic signal. Application of an external field also leads to enhanced uptake of the magnetoplasmonic formulation by cancer cells in vitro. Under laser irradiation at the wavelength of the GNR absorption peak, the PGRFe formulation efficiently generates plasmonic nanobubbles within cancer cells, as visualized by confocal microscopy, causing cell destruction. The combined magnetic and plasmonic functionalities of the nanoplatform enable magnetic field-directed, imaging-guided, enhanced photo-induced cancer therapy. In this study, a nano-formulation of gold nanorods and iron oxide nanoparticles is presented using a phospholipid micelle-based delivery system for magnetic field-directed and imaging-guided photo-induced cancer therapy. The gold nanorods enable photoacoustic imaging and photothermal therapy, while the Fe3O4 nanoparticles enable magnetophoretic control of the formulation. This and similar systems could enable more precise and efficient cancer therapy, hopefully in the near future, after additional testing. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. 3D visualization of optical ray aberration and its broadcasting to smartphones by ray aberration generator

    NASA Astrophysics Data System (ADS)

    Hellman, Brandon; Bosset, Erica; Ender, Luke; Jafari, Naveed; McCann, Phillip; Nguyen, Chris; Summitt, Chris; Wang, Sunglin; Takashima, Yuzuru

    2017-11-01

    The ray formalism is critical to understanding light propagation, yet current pedagogy relies on inadequate 2D representations. We present a system in which real light rays are visualized through an optical system by using a collimated laser bundle of light and a fog chamber. Implementation for remote and immersive access is enabled by leveraging a commercially available 3D viewer and gesture-based remote controlling of the tool via bi-directional communication over the Internet.

  9. Optical Histology: High-Resolution Visualization of Tissue Microvasculature

    NASA Astrophysics Data System (ADS)

    Moy, Austin Jing-Ming

    Mammalian tissue requires the delivery of nutrients, growth factors, and the exchange of oxygen and carbon dioxide gases to maintain normal function. These elements are delivered by the blood, which travels through the connected network of blood vessels, known as the vascular system. The vascular system consists of large feeder blood vessels (arteries and veins) that are connected to the small blood vessels (arterioles and venules), which in turn are connected to the capillaries that are directly connected to the tissue and facilitate gas exchange and nutrient delivery. These small blood vessels and capillaries make up an intricate but organized network of blood vessels that exist in all mammalian tissues known as the microvasculature and are very important in maintaining the health and proper function of mammalian tissue. Due to the importance of the microvasculature in tissue survival, disruption of the microvasculature typically leads to tissue dysfunction and tissue death. The most prevalent method to study the microvasculature is visualization. Immunohistochemistry (IHC) is the gold-standard method to visualize tissue microvasculature. IHC is very well-suited for highly detailed interrogation of the tissue microvasculature at the cellular level but is unwieldy and impractical for wide-field visualization of the tissue microvasculature. The objective of my dissertation research was to develop a method to enable wide-field visualization of the microvasculature, while still retaining the high resolution afforded by optical microscopy. My efforts led to the development of a technique dubbed "optical histology" that combines chemical and optical methods to enable high-resolution visualization of the microvasculature. The development of the technique first involved preliminary studies to quantify optical property changes in optically cleared tissues, followed by development and demonstration of the methodology. Using optical histology, I successfully obtained high-resolution, depth-sectioned images of the microvasculature in mouse brain and the coronary microvasculature in mouse heart. Future directions of optical histology include the potential to facilitate visualization of the entire microvascular structure of an organ as well as visualization of other tissue molecular markers of interest.

  10. Availability Issues in Wireless Visual Sensor Networks

    PubMed Central

    Costa, Daniel G.; Silva, Ivanovitch; Guedes, Luiz Affonso; Vasques, Francisco; Portugal, Paulo

    2014-01-01

    Wireless visual sensor networks have been considered for a large set of monitoring applications related to surveillance, tracking and multipurpose visual monitoring. When sensors are deployed over a monitored field, permanent faults may happen during the network lifetime, reducing the monitoring quality or rendering parts of the network or the entire network unavailable. Unlike scalar sensor networks, camera-enabled sensors collect information following a directional sensing model, which changes the notions of vicinity and redundancy. Moreover, visual source nodes may have different relevancies for the applications, according to the monitoring requirements and cameras' poses. In this paper we discuss the most relevant availability issues related to wireless visual sensor networks, addressing availability evaluation and enhancement. Such discussions are valuable when designing, deploying and managing wireless visual sensor networks, bringing significant contributions to these networks. PMID:24526301
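
    The directional sensing model referenced above differs from scalar sensing mainly in the coverage test: a camera node covers a target only if the target lies within range and inside the camera's angular field of view. The following sketch shows that test with invented camera parameters; it is not taken from the paper.

```python
# Illustrative coverage test for a directional (camera) sensor node: a target
# is covered only if it is within sensing range AND inside the angular field
# of view around the camera's heading. All parameters below are invented.
import math

def covers(cam_x, cam_y, heading_deg, fov_deg, range_m, tx, ty):
    """True if target (tx, ty) lies inside the camera's directional sector."""
    dx, dy = tx - cam_x, ty - cam_y
    if math.hypot(dx, dy) > range_m:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # smallest absolute angular difference between bearing and heading
    diff = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
    return diff <= fov_deg / 2.0

# camera at the origin facing east, 60 degree FoV, 30 m range (assumed values)
print(covers(0, 0, 0, 60, 30, 20, 5))   # True: target slightly off-axis
print(covers(0, 0, 0, 60, 30, 0, 25))   # False: target is due north
```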

  11. Visualization-by-Sketching: An Artist's Interface for Creating Multivariate Time-Varying Data Visualizations.

    PubMed

    Schroeder, David; Keefe, Daniel F

    2016-01-01

    We present Visualization-by-Sketching, a direct-manipulation user interface for designing new data visualizations. The goals are twofold: First, make the process of creating real, animated, data-driven visualizations of complex information more accessible to artists, graphic designers, and other visual experts with traditional, non-technical training. Second, support and enhance the role of human creativity in visualization design, enabling visual experimentation and workflows similar to what is possible with traditional artistic media. The approach is to conceive of visualization design as a combination of processes that are already closely linked with visual creativity: sketching, digital painting, image editing, and reacting to exemplars. Rather than studying and tweaking low-level algorithms and their parameters, designers create new visualizations by painting directly on top of a digital data canvas, sketching data glyphs, and arranging and blending together multiple layers of animated 2D graphics. This requires new algorithms and techniques to interpret painterly user input relative to data "under" the canvas, balance artistic freedom with the need to produce accurate data visualizations, and interactively explore large (e.g., terabyte-sized) multivariate datasets. Results demonstrate a variety of multivariate data visualization techniques can be rapidly recreated using the interface. More importantly, results and feedback from artists support the potential for interfaces in this style to attract new, creative users to the challenging task of designing more effective data visualizations and to help these users stay "in the creative zone" as they work.

  12. Direct ophthalmoscopy on YouTube: analysis of instructional YouTube videos' content and approach to visualization.

    PubMed

    Borgersen, Nanna Jo; Henriksen, Mikael Johannes Vuokko; Konge, Lars; Sørensen, Torben Lykke; Thomsen, Ann Sofia Skou; Subhi, Yousif

    2016-01-01

    Direct ophthalmoscopy is well-suited for video-based instruction, particularly if the videos enable the student to see what the examiner sees when performing direct ophthalmoscopy. We evaluated the pedagogical effectiveness of instructional YouTube videos on direct ophthalmoscopy by evaluating their content and approach to visualization. In order to synthesize main themes and points for direct ophthalmoscopy, we formed a broad panel consisting of a medical student, junior and senior physicians, and took into consideration book chapters targeting medical students and physicians in general. We then systematically searched YouTube. Two authors reviewed the retrieved videos to assess eligibility and to extract data on video statistics, content, and approach to visualization. Correlations between video statistics and contents were investigated using two-tailed Spearman's correlation. We screened 7,640 videos, of which 27 were found eligible for this study. Overall, a median of 12 out of 18 points (interquartile range: 8-14 key points) were covered; no videos covered all of the 18 points assessed. The greatest difficulties concerned the approach to visualization, specifically how to approach the patient and how to examine the fundus. Time spent on fundus examination correlated with the number of views per week (Spearman's ρ=0.53; P=0.029). Videos may help overcome the pedagogical issues in teaching direct ophthalmoscopy; however, the few available videos on YouTube fail to address this particular issue adequately. There is a need for high-quality videos that include relevant points, provide realistic visualization of the examiner's view, and place particular emphasis on fundus examination.
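
    The correlation reported above is an ordinary Spearman rank correlation between two per-video quantities. The snippet below shows how such a value could be computed with SciPy on invented toy numbers; it does not reproduce the study's data.

```python
# Worked example of the statistic reported above: Spearman's rank correlation
# between time spent on fundus examination and views per week, computed on
# invented toy numbers (not the study's data).
from scipy.stats import spearmanr

fundus_seconds = [10, 35, 42, 5, 60, 20, 90, 15]       # hypothetical values
views_per_week = [12, 80, 95, 7, 150, 85, 300, 25]     # hypothetical values

rho, p_value = spearmanr(fundus_seconds, views_per_week)
print(f"Spearman's rho = {rho:.2f}, P = {p_value:.3g}")
```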

  13. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information.

    PubMed

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  14. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information

    PubMed Central

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information. PMID:28634444

  15. Effects of cholinergic deafferentation of the rhinal cortex on visual recognition memory in monkeys.

    PubMed

    Turchi, Janita; Saunders, Richard C; Mishkin, Mortimer

    2005-02-08

    Excitotoxic lesion studies have confirmed that the rhinal cortex is essential for visual recognition ability in monkeys. To evaluate the mnemonic role of cholinergic inputs to this cortical region, we compared the visual recognition performance of monkeys given rhinal cortex infusions of a selective cholinergic immunotoxin, ME20.4-SAP, with the performance of monkeys given control infusions into this same tissue. The immunotoxin, which leads to selective cholinergic deafferentation of the infused cortex, yielded recognition deficits of the same magnitude as those produced by excitotoxic lesions of this region, providing the most direct demonstration to date that cholinergic activation of the rhinal cortex is essential for storing the representations of new visual stimuli and thereby enabling their later recognition.

  16. Design by Dragging: An Interface for Creative Forward and Inverse Design with Simulation Ensembles

    PubMed Central

    Coffey, Dane; Lin, Chi-Lun; Erdman, Arthur G.; Keefe, Daniel F.

    2014-01-01

    We present an interface for exploring large design spaces as encountered in simulation-based engineering, design of visual effects, and other tasks that require tuning parameters of computationally-intensive simulations and visually evaluating results. The goal is to enable a style of design with simulations that feels as-direct-as-possible so users can concentrate on creative design tasks. The approach integrates forward design via direct manipulation of simulation inputs (e.g., geometric properties, applied forces) in the same visual space with inverse design via “tugging” and reshaping simulation outputs (e.g., scalar fields from finite element analysis (FEA) or computational fluid dynamics (CFD)). The interface includes algorithms for interpreting the intent of users’ drag operations relative to parameterized models, morphing arbitrary scalar fields output from FEA and CFD simulations, and in-place interactive ensemble visualization. The inverse design strategy can be extended to use multi-touch input in combination with an as-rigid-as-possible shape manipulation to support rich visual queries. The potential of this new design approach is confirmed via two applications: medical device engineering of a vacuum-assisted biopsy device and visual effects design using a physically based flame simulation. PMID:24051845

  17. Micro/nano-computed tomography technology for quantitative dynamic, multi-scale imaging of morphogenesis.

    PubMed

    Gregg, Chelsea L; Recknagel, Andrew K; Butcher, Jonathan T

    2015-01-01

    Tissue morphogenesis and embryonic development are dynamic events challenging to quantify, especially considering the intricate events that happen simultaneously in different locations and time. Micro- and more recently nano-computed tomography (micro/nanoCT) has been used for the past 15 years to characterize large 3D fields of tortuous geometries at high spatial resolution. We and others have advanced micro/nanoCT imaging strategies for quantifying tissue- and organ-level fate changes throughout morphogenesis. Exogenous soft tissue contrast media enables visualization of vascular lumens and tissues via extravasation. Furthermore, the emergence of antigen-specific tissue contrast enables direct quantitative visualization of protein and mRNA expression. Micro-CT X-ray doses appear to be non-embryotoxic, enabling longitudinal imaging studies in live embryos. In this chapter we present established soft tissue contrast protocols for obtaining high-quality micro/nanoCT images and the image processing techniques useful for quantifying anatomical and physiological information from the data sets.

  18. PRIDE Inspector Toolsuite: Moving Toward a Universal Visualization Tool for Proteomics Data Standard Formats and Quality Assessment of ProteomeXchange Datasets.

    PubMed

    Perez-Riverol, Yasset; Xu, Qing-Wei; Wang, Rui; Uszkoreit, Julian; Griss, Johannes; Sanchez, Aniel; Reisinger, Florian; Csordas, Attila; Ternent, Tobias; Del-Toro, Noemi; Dianes, Jose A; Eisenacher, Martin; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2016-01-01

    The original PRIDE Inspector tool was developed as an open source standalone tool to enable the visualization and validation of mass-spectrometry (MS)-based proteomics data before data submission or already publicly available in the Proteomics Identifications (PRIDE) database. The initial implementation of the tool focused on visualizing PRIDE data by supporting the PRIDE XML format and a direct access to private (password protected) and public experiments in PRIDE. The ProteomeXchange (PX) Consortium has been set up to enable a better integration of existing public proteomics repositories, maximizing its benefit to the scientific community through the implementation of standard submission and dissemination pipelines. Within the Consortium, PRIDE is focused on supporting submissions of tandem MS data. The increasing use and popularity of the new Proteomics Standards Initiative (PSI) data standards such as mzIdentML and mzTab, and the diversity of workflows supported by the PX resources, prompted us to design and implement a new suite of algorithms and libraries that would build upon the success of the original PRIDE Inspector and would enable users to visualize and validate PX "complete" submissions. The PRIDE Inspector Toolsuite supports the handling and visualization of different experimental output files, ranging from spectra (mzML, mzXML, and the most popular peak lists formats) and peptide and protein identification results (mzIdentML, PRIDE XML, mzTab) to quantification data (mzTab, PRIDE XML), using a modular and extensible set of open-source, cross-platform libraries. We believe that the PRIDE Inspector Toolsuite represents a milestone in the visualization and quality assessment of proteomics data. It is freely available at http://github.com/PRIDE-Toolsuite/. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.

  19. PRIDE Inspector Toolsuite: Moving Toward a Universal Visualization Tool for Proteomics Data Standard Formats and Quality Assessment of ProteomeXchange Datasets*

    PubMed Central

    Perez-Riverol, Yasset; Xu, Qing-Wei; Wang, Rui; Uszkoreit, Julian; Griss, Johannes; Sanchez, Aniel; Reisinger, Florian; Csordas, Attila; Ternent, Tobias; del-Toro, Noemi; Dianes, Jose A.; Eisenacher, Martin; Hermjakob, Henning; Vizcaíno, Juan Antonio

    2016-01-01

    The original PRIDE Inspector tool was developed as an open source standalone tool to enable the visualization and validation of mass-spectrometry (MS)-based proteomics data, either before data submission or once already publicly available in the Proteomics Identifications (PRIDE) database. The initial implementation of the tool focused on visualizing PRIDE data by supporting the PRIDE XML format and direct access to private (password-protected) and public experiments in PRIDE. The ProteomeXchange (PX) Consortium has been set up to enable a better integration of existing public proteomics repositories, maximizing its benefit to the scientific community through the implementation of standard submission and dissemination pipelines. Within the Consortium, PRIDE is focused on supporting submissions of tandem MS data. The increasing use and popularity of the new Proteomics Standards Initiative (PSI) data standards such as mzIdentML and mzTab, and the diversity of workflows supported by the PX resources, prompted us to design and implement a new suite of algorithms and libraries that would build upon the success of the original PRIDE Inspector and would enable users to visualize and validate PX “complete” submissions. The PRIDE Inspector Toolsuite supports the handling and visualization of different experimental output files, ranging from spectra (mzML, mzXML, and the most popular peak list formats) and peptide and protein identification results (mzIdentML, PRIDE XML, mzTab) to quantification data (mzTab, PRIDE XML), using a modular and extensible set of open-source, cross-platform libraries. We believe that the PRIDE Inspector Toolsuite represents a milestone in the visualization and quality assessment of proteomics data. It is freely available at http://github.com/PRIDE-Toolsuite/. PMID:26545397

  20. Near Real Time Review of Instrument Performance using the Airborne Data Processing and Analysis Software Package

    NASA Astrophysics Data System (ADS)

    Delene, D. J.

    2014-12-01

    Research aircraft that conduct atmospheric measurements carry an increasing array of instrumentation. While on-board personnel constantly review instrument parameters and time series plots, there is an overwhelming number of items to monitor. Furthermore, directing the aircraft flight takes up much of the flight scientist's time. Typically, a flight engineer is given the responsibility of reviewing the status of on-board instruments. While major issues like not receiving data are quickly identified during a flight, subtle issues like low but believable concentration measurements may go unnoticed. Therefore, it is critical to review data after a flight in near real time. The Airborne Data Processing and Analysis (ADPAA) software package used by the University of North Dakota automates the post-processing of aircraft flight data. Utilizing scripts to process the measurements recorded by data acquisition systems enables the generation of data files within an hour of flight completion. The ADPAA Cplot visualization program enables plots to be generated quickly, allowing timely review of all recorded and processed parameters. Near real time review of aircraft flight data enables instrument problems to be identified, investigated and fixed before conducting another flight. On one flight, near real time data review resulted in the identification of unusually low measurements of cloud condensation nuclei, and rapid data visualization enabled the timely investigation of the cause. As a result, a leak was found and fixed before the next flight. Hence, with the high cost of aircraft flights, it is critical to find and fix instrument problems in a timely manner. The use of automated processing scripts and quick visualization software enables scientists to review aircraft flight data in near real time to identify potential problems.
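
    As a hedged illustration of the kind of post-flight check described above (not ADPAA code), a few lines of Python can flag "low but believable" concentration values in a processed flight file; the file name, column names, and threshold are hypothetical:

      # Hedged sketch of a post-flight sanity check, not part of ADPAA.
      # Assumes a hypothetical CSV of processed flight data with columns
      # "Time" (s) and "CCN_Conc" (cm^-3).
      import pandas as pd

      data = pd.read_csv("flight_20140801_processed.csv")

      # Flag periods where concentrations are suspiciously low but nonzero,
      # i.e. "low but believable" values that are easy to miss in flight.
      low_threshold = 50.0   # cm^-3, instrument/campaign-specific assumption
      suspect = data[(data["CCN_Conc"] > 0) & (data["CCN_Conc"] < low_threshold)]

      frac = len(suspect) / len(data)
      print(f"{frac:.1%} of samples below {low_threshold} cm^-3")
      if frac > 0.2:
          print("Review CCN instrument for leaks or flow problems before the next flight.")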

  1. Psychophysical and neuroimaging responses to moving stimuli in a patient with the Riddoch phenomenon due to bilateral visual cortex lesions.

    PubMed

    Arcaro, Michael J; Thaler, Lore; Quinlan, Derek J; Monaco, Simona; Khan, Sarah; Valyear, Kenneth F; Goebel, Rainer; Dutton, Gordon N; Goodale, Melvyn A; Kastner, Sabine; Culham, Jody C

    2018-05-09

    Patients with injury to early visual cortex or its inputs can display the Riddoch phenomenon: preserved awareness for moving but not stationary stimuli. We provide a detailed case report of a patient with the Riddoch phenomenon, MC. MC has extensive bilateral lesions to occipitotemporal cortex that include most of early visual cortex, and she shows complete blindness in visual field perimetry testing with static targets. Nevertheless, she shows a remarkably robust preserved ability to perceive motion, enabling her to navigate through cluttered environments and perform actions like catching moving balls. Comparison of MC's structural magnetic resonance imaging (MRI) data to a probabilistic atlas based on controls reveals that MC's lesions encompass the posterior, lateral, and ventral early visual cortex bilaterally (V1, V2, V3A/B, LO1/2, TO1/2, hV4 and VO1 in both hemispheres) as well as more extensive damage to right parietal (inferior parietal lobule) and left ventral occipitotemporal cortex (VO1, PHC1/2). She shows some sparing of anterior occipital cortex, which may account for her ability to see moving targets beyond ~15 degrees eccentricity during perimetry. Most strikingly, functional and structural MRI revealed robust and reliable spared functionality of the middle temporal motion complex (MT+) bilaterally. Moreover, consistent with her preserved ability to discriminate motion direction in psychophysical testing, MC also shows direction-selective adaptation in MT+. A variety of tests did not enable us to discern whether input to MT+ was driven by her spared anterior occipital cortex or subcortical inputs. Nevertheless, MC shows rich motion perception despite profoundly impaired static and form vision, combined with clear preservation of activation in MT+, thus supporting the role of MT+ in the Riddoch phenomenon. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Direct ophthalmoscopy on YouTube: analysis of instructional YouTube videos’ content and approach to visualization

    PubMed Central

    Borgersen, Nanna Jo; Henriksen, Mikael Johannes Vuokko; Konge, Lars; Sørensen, Torben Lykke; Thomsen, Ann Sofia Skou; Subhi, Yousif

    2016-01-01

    Background Direct ophthalmoscopy is well-suited for video-based instruction, particularly if the videos enable the student to see what the examiner sees when performing direct ophthalmoscopy. We evaluated the pedagogical effectiveness of instructional YouTube videos on direct ophthalmoscopy by evaluating their content and approach to visualization. Methods In order to synthesize the main themes and key points for direct ophthalmoscopy, we formed a broad panel consisting of a medical student and junior and senior physicians, and took into consideration book chapters targeting medical students and physicians in general. We then systematically searched YouTube. Two authors reviewed the retrieved videos to assess eligibility and extract data on video statistics, content, and approach to visualization. Correlations between video statistics and contents were investigated using two-tailed Spearman’s correlation. Results We screened 7,640 videos, of which 27 were found eligible for this study. Overall, a median of 12 out of 18 key points (interquartile range: 8–14 key points) were covered; no videos covered all of the 18 points assessed. The greatest difficulties in the approach to visualization concerned how to approach the patient and how to examine the fundus. Time spent on fundus examination correlated with the number of views per week (Spearman’s ρ=0.53; P=0.029). Conclusion Videos may help overcome the pedagogical issues in teaching direct ophthalmoscopy; however, the few available videos on YouTube fail to address this particular issue adequately. There is a need for high-quality videos that include relevant points, provide realistic visualization of the examiner’s view, and give particular emphasis on fundus examination. PMID:27574393
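
    For reference, the reported association between time spent on fundus examination and views per week corresponds to a two-tailed Spearman rank correlation, which can be reproduced on any pair of variables as in this brief sketch (the numbers below are illustrative only):

      # Two-tailed Spearman rank correlation, as used to relate video content
      # coverage to viewing statistics. The numbers here are made up.
      from scipy.stats import spearmanr

      fundus_time_s = [0, 15, 30, 42, 60, 75, 90, 120]   # time spent on fundus exam
      views_per_week = [3, 5, 9, 8, 14, 13, 20, 25]       # hypothetical view counts

      rho, p_value = spearmanr(fundus_time_s, views_per_week)
      print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")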

  3. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education, learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  4. Celeris: A GPU-accelerated open source software with a Boussinesq-type wave solver for real-time interactive simulation and visualization

    NASA Astrophysics Data System (ADS)

    Tavakkol, Sasan; Lynett, Patrick

    2017-08-01

    In this paper, we introduce an interactive coastal wave simulation and visualization software package called Celeris. Celeris is open source software that requires minimal preparation to run on a Windows machine. The software solves the extended Boussinesq equations using a hybrid finite volume-finite difference method and supports moving shoreline boundaries. The simulation and visualization are performed on the GPU using Direct3D libraries, which enables the software to run faster than real time. Celeris provides a first-of-its-kind interactive modeling platform for coastal wave applications, and it supports simultaneous visualization with both photorealistic and colormapped rendering capabilities. We validate our software through comparison with three standard benchmarks for non-breaking and breaking waves.

  5. The nature of the (visualization) game: Challenges and opportunities from computational geophysics

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.

    2016-12-01

    As the geosciences enter the era of big data, modeling and visualization become increasingly vital tools for discovery, understanding, education, and communication. Here, we focus on modeling and visualization of the structure and dynamics of the Earth's surface and interior. The past decade has seen accelerated data acquisition, including higher resolution imaging and modeling of Earth's deep interior, complex models of geodynamics, and high resolution topographic imaging of the changing surface, with an associated acceleration of computational modeling through better scientific software, increased computing capability, and the use of innovative methods of scientific visualization. The role of modeling is to describe a system, answer scientific questions, and test hypotheses; the term "model" encompasses mathematical models, computational models, physical models, conceptual models, statistical models, and visual models of a structure or process. These different uses of the term require thoughtful communication to avoid confusion. Scientific visualization is integral to every aspect of modeling. Not merely a means of communicating results, the best uses of visualization enable scientists to interact with their data, revealing the characteristics of the data and models to enable better interpretation and inform the direction of future investigation. Innovative immersive technologies like virtual reality, augmented reality, and remote collaboration techniques are being adapted more widely and are a magnet for students. Time-varying or transient phenomena are especially challenging to model and to visualize; researchers and students may need to investigate the role of initial conditions in driving phenomena, while nonlinearities in the governing equations of many Earth systems make the computations and resulting visualization especially challenging. Training students how to use, design, build, and interpret scientific modeling and visualization tools prepares them to better understand the nature of complex, multiscale geoscience data.

  6. Heat-resistant DNA tile arrays constructed by template-directed photoligation through 5-carboxyvinyl-2′-deoxyuridine

    PubMed Central

    Tagawa, Miho; Shohda, Koh-ichiroh; Fujimoto, Kenzo; Sugawara, Tadashi; Suyama, Akira

    2007-01-01

    Template-directed DNA photoligation has been applied in a method to construct heat-resistant two-dimensional (2D) DNA arrays that can work as scaffolds in the bottom-up assembly of functional biomolecules and nano-electronic components. DNA double-crossover AB-staggered (DXAB) tiles were covalently connected by enzyme-free template-directed photoligation, which enables a specific ligation reaction in an extremely tight space and under buffer conditions where no enzymes work efficiently. DNA nanostructures created by self-assembly of the DXAB tiles before and after photoligation have been visualized by high-resolution, tapping-mode atomic force microscopy in buffer. The improvement in the heat tolerance of the 2D DNA arrays was confirmed by heating and visualizing the DNA nanostructures. The heat-resistant DNA arrays may expand the potential of DNA as a functional material in biotechnology and nanotechnology. PMID:17982178

  7. Mass spectrometric imaging of red fluorescent protein in breast tumor xenografts.

    PubMed

    Chughtai, Kamila; Jiang, Lu; Post, Harm; Winnard, Paul T; Greenwood, Tiffany R; Raman, Venu; Bhujwalla, Zaver M; Heeren, Ron M A; Glunde, Kristine

    2013-05-01

    Mass spectrometric imaging (MSI) in combination with electrospray mass spectrometry (ESI-MS) is a powerful technique for visualization and identification of a variety of different biomolecules directly from thin tissue sections. As commonly used molecular reporters, fluorescent proteins have enabled the elucidation of a multitude of biological pathways and processes. To combine these two approaches, we have performed targeted MS analysis and MALDI-MSI visualization of a tandem dimer (td)Tomato red fluorescent protein, which was expressed exclusively in the hypoxic regions of a breast tumor xenograft model. For the first time, a fluorescent protein has been visualized by both optical microscopy and MALDI-MSI. Visualization of tdTomato by MALDI-MSI directly from breast tumor tissue sections will allow us to simultaneously detect and subsequently identify novel molecules present in hypoxic regions of the tumor. MS and MALDI-MSI of fluorescent proteins, as exemplified in our study, is useful for studies in which the advantages of MS and MSI will benefit from the combination with molecular approaches that use fluorescent proteins as reporters.

  8. Many-body coherent destruction of tunneling in photonic lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Longhi, Stefano

    2011-03-15

    An optical realization of the phenomenon of many-body coherent destruction of tunneling, recently predicted for interacting many-boson systems by Gong, Molina, and Haenggi [Phys. Rev. Lett. 103, 133002 (2009)], is proposed for light transport in engineered waveguide arrays. The optical system enables a direct visualization in Fock space of the many-body tunneling control process.

  9. Psyplot: Visualizing rectangular and triangular Climate Model Data with Python

    NASA Astrophysics Data System (ADS)

    Sommer, Philipp

    2016-04-01

    The development and use of climate models often requires the visualization of geo-referenced data. Creating visualizations should be fast, attractive, flexible, easily applicable and easily reproducible. There is a wide range of software tools available for visualizing raster data, but they are often inaccessible to many users (e.g. because they are difficult to use in a script or have low flexibility). In order to facilitate easy visualization of geo-referenced data, we developed a new framework called "psyplot," which can aid earth system scientists with their daily work. It is purely written in the programming language Python and primarily built upon the Python packages matplotlib, cartopy and xray. The package can visualize data stored on the hard disk (e.g. NetCDF, GeoTIFF, or any other file format supported by the xray package), or directly from memory or Climate Data Operators (CDOs). Furthermore, data can be visualized on a rectangular grid (following or not following the CF Conventions) and on a triangular grid (following the CF or UGRID Conventions). Psyplot visualizes 2D scalar and vector fields, enabling the user to easily manage and format multiple plots at the same time, and to export the plots into all common picture formats and movies covered by the matplotlib package. The package can currently be used in an interactive Python session or in Python scripts, and will soon be extended for use with a graphical user interface (GUI). Finally, the psyplot framework enables flexible configuration, allows easy integration into other scripts that use matplotlib, and provides a flexible foundation for further development.
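
    The abstract does not include code, but the kind of map plot psyplot automates can be sketched directly against the underlying stack it builds on (xarray, cartopy, matplotlib); the NetCDF file and variable name below are assumptions for illustration, not psyplot's API:

      # Sketch of the kind of geo-referenced plot psyplot automates, written
      # against the underlying stack (xarray, cartopy, matplotlib). The file
      # "model_output.nc" and variable "t2m" are placeholders.
      import xarray as xr
      import matplotlib.pyplot as plt
      import cartopy.crs as ccrs

      ds = xr.open_dataset("model_output.nc")          # CF-conforming rectangular grid
      field = ds["t2m"].isel(time=0)                   # first time step of a 2D scalar field

      ax = plt.axes(projection=ccrs.PlateCarree())
      field.plot(ax=ax, transform=ccrs.PlateCarree())  # pcolormesh plus colorbar
      ax.coastlines()
      plt.savefig("t2m_map.png", dpi=150)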

  10. Subsurface data visualization in Virtual Reality

    NASA Astrophysics Data System (ADS)

    Krijnen, Robbert; Smelik, Ruben; Appleton, Rick; van Maanen, Peter-Paul

    2017-04-01

    Due to their increasing complexity and size, visualization of geological data is becoming more and more important. It enables detailed examination and review of large volumes of geological data, and it is often used as a communication tool for reporting and education to demonstrate the importance of the geology to policy makers. In the Netherlands two types of nation-wide geological models are available: 1) layer-based models, in which the subsurface is represented by a series of tops and bases of geological or hydrogeological units, and 2) voxel models, in which the subsurface is subdivided into a regular grid of voxels that can contain different properties per voxel. The Geological Survey of the Netherlands (GSN) provides an interactive web portal that delivers maps and vertical cross-sections of such layer-based and voxel models. From this portal you can download a 3D subsurface viewer that can visualize the voxel model data of an area of 20 × 25 km with 100 × 100 × 5 meter voxel resolution on a desktop computer. Virtual Reality (VR) technology enables us to enhance the visualization of this volumetric data in a more natural way compared to a standard desktop, keyboard and mouse setup. The use of VR for data visualization is not new, but recent developments have made expensive hardware and complex setups unnecessary. The availability of consumer off-the-shelf VR hardware enabled us to create a new, intuitive and low-cost visualization tool. A VR viewer has been implemented using the HTC Vive headset and allows visualization and analysis of the GSN voxel model data with geological or hydrogeological units. The user can navigate freely around the voxel data (20 × 25 km), which is presented in a virtual room at a scale of 2 × 2 or 3 × 3 meters. To enable analysis, e.g. of hydraulic conductivity, the user can select filters to remove specific hydrogeological units. The user can also use slicing to cut off specific sections of the voxel data to get a closer look. This slicing can be done in any direction using a 'virtual knife'. Future plans are to further improve performance from 30 up to 90 Hz update rate to reduce possible motion sickness, and to add more advanced filtering capabilities as well as a multi-user setup, annotation capabilities and visualization of historical data.
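
    The filtering and 'virtual knife' slicing described above amount to simple mask operations on a unit-labelled voxel array; the following NumPy sketch illustrates the idea on a hypothetical, downscaled grid (it is not the viewer's code):

      # Hedged sketch of voxel filtering and plane slicing on a hypothetical
      # unit-labelled voxel model (downscaled grid for illustration).
      import numpy as np

      nx, ny, nz = 100, 125, 50
      rng = np.random.default_rng(0)
      units = rng.integers(1, 8, size=(nx, ny, nz))  # hydrogeological unit code per voxel

      # "Filter": hide specific hydrogeological units (e.g. codes 2 and 5).
      visible = ~np.isin(units, [2, 5])

      # "Virtual knife": keep only voxels on one side of an arbitrary plane
      # with normal n passing through point p0 (indices used as coordinates).
      n = np.array([1.0, 0.5, 0.0])
      p0 = np.array([nx / 2, ny / 2, nz / 2])
      i, j, k = np.indices(units.shape)
      side = (i - p0[0]) * n[0] + (j - p0[1]) * n[1] + (k - p0[2]) * n[2] >= 0

      mask = visible & side
      print(f"{mask.sum()} of {mask.size} voxels remain after filtering and slicing")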

  11. Emerging feed-forward inhibition allows the robust formation of direction selectivity in the developing ferret visual cortex

    PubMed Central

    Escobar, Gina M.; Maffei, Arianna; Miller, Paul

    2014-01-01

    The computation of direction selectivity requires that a cell respond to joint spatial and temporal characteristics of the stimulus that cannot be separated into independent components. Direction selectivity in ferret visual cortex is not present at the time of eye opening but instead develops in the days and weeks following eye opening in a process that requires visual experience with moving stimuli. Classic Hebbian or spike timing-dependent modification of excitatory feed-forward synaptic inputs is unable to produce direction-selective cells from unselective or weakly directionally biased initial conditions because inputs eventually grow so strong that they can independently drive cortical neurons, violating the joint spatial-temporal activation requirement. Furthermore, without some form of synaptic competition, cells cannot develop direction selectivity in response to training with bidirectional stimulation, as cells in ferret visual cortex do. We show that imposing a maximum lateral geniculate nucleus (LGN)-to-cortex synaptic weight allows neurons to develop direction-selective responses that maintain the requirement for joint spatial and temporal activation. We demonstrate that a novel form of inhibitory plasticity, postsynaptic activity-dependent long-term potentiation of inhibition (POSD-LTPi), which operates in the developing cortex at the time of eye opening, can provide synaptic competition and enables robust development of direction-selective receptive fields with unidirectional or bidirectional stimulation. We propose a general model of the development of spatiotemporal receptive fields that consists of two phases: an experience-independent establishment of initial biases, followed by an experience-dependent amplification or modification of these biases via correlation-based plasticity of excitatory inputs that compete against gradually increasing feed-forward inhibition. PMID:24598528
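
    A minimal sketch of the key constraint highlighted above, Hebbian growth of feed-forward weights under an imposed maximum synaptic weight, is given below; the learning rule, rates, and sizes are simplified assumptions and not the authors' full model:

      # Minimal sketch: Hebbian growth of feed-forward (LGN-to-cortex) weights
      # with an imposed maximum weight, the constraint named in the abstract.
      # Learning rule, rates, and sizes are simplified assumptions.
      import numpy as np

      rng = np.random.default_rng(1)
      n_inputs, n_steps = 20, 500
      w = rng.uniform(0.0, 0.05, n_inputs)   # LGN -> cortex weights
      w_max = 0.2                            # imposed maximum synaptic weight
      eta = 0.01                             # learning rate

      for _ in range(n_steps):
          x = rng.poisson(1.0, n_inputs).astype(float)   # presynaptic activity
          y = w @ x                                      # simple linear postsynaptic response
          w += eta * y * x                               # Hebbian update
          np.clip(w, 0.0, w_max, out=w)                  # weight ceiling prevents runaway growth

      print("final weights capped at", w.max())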

  12. A framework for breast cancer visualization using augmented reality x-ray vision technique in mobile technology

    NASA Astrophysics Data System (ADS)

    Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid

    2017-10-01

    The number of breast cancer patients who require breast biopsy has increased over the past years. Augmented Reality-guided core biopsy of the breast has become the method of choice for researchers. However, such cancer visualization has been limited to superimposing the 3D imaging data only. In this paper, we introduce an Augmented Reality visualization framework that enables breast cancer biopsy image guidance by using an X-ray vision technique on a mobile display. This framework consists of four phases: it first acquires images from CT/MRI and processes the medical images into 3D slices; second, it converts these 3D grayscale slices into a 3D breast tumor model using a 3D model reconstruction technique. In the visualization processing, this virtual 3D breast tumor model is then enhanced using the X-ray vision technique to see through the skin of the phantom, and the final composition is displayed on a handheld device to optimize the accuracy of the visualization in six degrees of freedom. The framework is perceived as an improved visualization experience because the Augmented Reality X-ray vision allows direct understanding of the breast tumor beyond the visible surface and direct guidance towards accurate biopsy targets.

  13. Direct visualization reveals kinetics of meiotic chromosome synapsis

    DOE PAGES

    Rog, Ofer; Dernburg, Abby  F.

    2015-03-17

    The synaptonemal complex (SC) is a conserved protein complex that stabilizes interactions along homologous chromosomes (homologs) during meiosis. The SC regulates genetic exchanges between homologs, thereby enabling reductional division and the production of haploid gametes. Here, we directly observe SC assembly (synapsis) by optimizing methods for long-term fluorescence recording in C. elegans. We report that synapsis initiates independently on each chromosome pair at or near pairing centers—specialized regions required for homolog associations. Once initiated, the SC extends rapidly and mostly irreversibly to chromosome ends. Quantitation of SC initiation frequencies and extension rates reveals that initiation is a rate-limiting step in homolog interactions. Eliminating the dynein-driven chromosome movements that accompany synapsis severely retards SC extension, revealing a new role for these conserved motions. This work provides the first opportunity to directly observe and quantify key aspects of meiotic chromosome interactions and will enable future in vivo analysis of germline processes.

  14. The use of computer imaging techniques to visualize cardiac muscle cells in three dimensions.

    PubMed

    Marino, T A; Cook, P N; Cook, L T; Dwyer, S J

    1980-11-01

    Atrial muscle cells and atrioventricular bundle cells were reconstructed using a computer-assisted three-dimensional reconstruction system. This reconstruction technique permitted these cells to be viewed from any direction. The cell surfaces were approximated using triangular tiles, and this optimization technique for cell reconstruction allowed for the computation of cell surface area and cell volume. A transparent mode is described which enables the investigator to examine internal cellular features such as the shape and location of the nucleus. In addition, more than one cell can be displayed simultaneously, and, therefore, spatial relationships are preserved and intercellular relationships viewed directly. The use of computer imaging techniques allows for a more complete collection of quantitative morphological data and also the visualization of the morphological information gathered.
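
    The surface area and volume computations mentioned above follow directly from a closed triangulated surface (area from triangle cross products, volume from the divergence theorem); a compact NumPy sketch on an illustrative tetrahedron is:

      # Surface area and enclosed volume of a closed, consistently oriented
      # triangle mesh, the quantities computed from the tiled reconstructions.
      # The unit tetrahedron below is only an illustration.
      import numpy as np

      vertices = np.array([[0.0, 0.0, 0.0],
                           [1.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0],
                           [0.0, 0.0, 1.0]])
      # Faces ordered so that normals point outward.
      faces = np.array([[0, 2, 1],
                        [0, 1, 3],
                        [0, 3, 2],
                        [1, 2, 3]])

      v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
      cross = np.cross(v1 - v0, v2 - v0)

      area = 0.5 * np.linalg.norm(cross, axis=1).sum()
      volume = np.abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0

      print(f"surface area = {area:.4f}, volume = {volume:.4f}")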

  15. Direct Administration of Nerve-Specific Contrast to Improve Nerve Sparing Radical Prostatectomy

    PubMed Central

    Barth, Connor W.; Gibbs, Summer L.

    2017-01-01

    Nerve damage remains a major morbidity following nerve sparing radical prostatectomy, significantly affecting quality of life post-surgery. Nerve-specific fluorescence guided surgery offers a potential solution by enhancing nerve visualization intraoperatively. However, the prostate is highly innervated and only the cavernous nerve structures require preservation to maintain continence and potency. Systemic administration of a nerve-specific fluorophore would lower nerve signal to background ratio (SBR) in vital nerve structures, making them difficult to distinguish from all nervous tissue in the pelvic region. A direct administration methodology to enable selective nerve highlighting for enhanced nerve SBR in a specific nerve structure has been developed herein. The direct administration methodology demonstrated equivalent nerve-specific contrast to systemic administration at optimal exposure times. However, the direct administration methodology provided a brighter fluorescent nerve signal, facilitating nerve-specific fluorescence imaging at video rate, which was not possible following systemic administration. Additionally, the direct administration methodology required a significantly lower fluorophore dose than systemic administration, that when scaled to a human dose falls within the microdosing range. Furthermore, a dual fluorophore tissue staining method was developed that alleviates fluorescence background signal from adipose tissue accumulation using a spectrally distinct adipose tissue specific fluorophore. These results validate the use of the direct administration methodology for specific nerve visualization with fluorescence image-guided surgery, which would improve vital nerve structure identification and visualization during nerve sparing radical prostatectomy. PMID:28255352

  16. Direct Administration of Nerve-Specific Contrast to Improve Nerve Sparing Radical Prostatectomy.

    PubMed

    Barth, Connor W; Gibbs, Summer L

    2017-01-01

    Nerve damage remains a major morbidity following nerve sparing radical prostatectomy, significantly affecting quality of life post-surgery. Nerve-specific fluorescence guided surgery offers a potential solution by enhancing nerve visualization intraoperatively. However, the prostate is highly innervated and only the cavernous nerve structures require preservation to maintain continence and potency. Systemic administration of a nerve-specific fluorophore would lower nerve signal to background ratio (SBR) in vital nerve structures, making them difficult to distinguish from all nervous tissue in the pelvic region. A direct administration methodology to enable selective nerve highlighting for enhanced nerve SBR in a specific nerve structure has been developed herein. The direct administration methodology demonstrated equivalent nerve-specific contrast to systemic administration at optimal exposure times. However, the direct administration methodology provided a brighter fluorescent nerve signal, facilitating nerve-specific fluorescence imaging at video rate, which was not possible following systemic administration. Additionally, the direct administration methodology required a significantly lower fluorophore dose than systemic administration, that when scaled to a human dose falls within the microdosing range. Furthermore, a dual fluorophore tissue staining method was developed that alleviates fluorescence background signal from adipose tissue accumulation using a spectrally distinct adipose tissue specific fluorophore. These results validate the use of the direct administration methodology for specific nerve visualization with fluorescence image-guided surgery, which would improve vital nerve structure identification and visualization during nerve sparing radical prostatectomy.

  17. The Role of Direct and Visual Force Feedback in Suturing Using a 7-DOF Dual-Arm Teleoperated System.

    PubMed

    Talasaz, Ali; Trejos, Ana Luisa; Patel, Rajni V

    2017-01-01

    The lack of haptic feedback in robotics-assisted surgery can result in tissue damage or accidental tool-tissue hits. This paper focuses on exploring the effect of haptic feedback via direct force reflection and visual presentation of force magnitudes on performance during suturing in robotics-assisted minimally invasive surgery (RAMIS). For this purpose, a haptics-enabled dual-arm master-slave teleoperation system capable of measuring tool-tissue interaction forces in all seven Degrees-of-Freedom (DOFs) was used. Two suturing tasks, tissue puncturing and knot-tightening, were chosen to assess user skills when suturing on phantom tissue. Sixteen subjects participated in the trials and their performance was evaluated from various points of view: force consistency, number of accidental hits with tissue, amount of tissue damage, quality of the suture knot, and the time required to accomplish the task. According to the results, visual force feedback was not very useful during the tissue puncturing task as different users needed different amounts of force depending on the penetration of the needle into the tissue. Direct force feedback, however, was more useful for this task to apply less force and to minimize the amount of damage to the tissue. Statistical results also reveal that both visual and direct force feedback were required for effective knot tightening: direct force feedback could reduce the number of accidental hits with the tissue and also the amount of tissue damage, while visual force feedback could help to securely tighten the suture knots and maintain force consistency among different trials/users. These results provide evidence of the importance of 7-DOF force reflection when performing complex tasks in a RAMIS setting.

  18. Control of a visual keyboard using an electrocorticographic brain-computer interface.

    PubMed

    Krusienski, Dean J; Shih, Jerry J

    2011-05-01

    Brain-computer interfaces (BCIs) are devices that enable severely disabled people to communicate and interact with their environments using their brain waves. Most studies investigating BCI in humans have used scalp EEG as the source of electrical signals and focused on motor control of prostheses or computer cursors on a screen. The authors hypothesize that the use of brain signals obtained directly from the cortical surface will more effectively control a communication/spelling task compared to scalp EEG. A total of 6 patients with medically intractable epilepsy were tested for the ability to control a visual keyboard using electrocorticographic (ECOG) signals. ECOG data collected during a P300 visual task paradigm were preprocessed and used to train a linear classifier to subsequently predict the intended target letters. The classifier was able to predict the intended target character at or near 100% accuracy using fewer than 15 stimulation sequences in 5 of the 6 people tested. ECOG data from electrodes outside the language cortex contributed to the classifier and enabled participants to write words on a visual keyboard. This is a novel finding because previous invasive BCI research in humans used signals exclusively from the motor cortex to control a computer cursor or prosthetic device. These results demonstrate that ECOG signals from electrodes both overlying and outside the language cortex can reliably control a visual keyboard to generate language output without voice or limb movements.
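
    The classification step described above, training a linear classifier on stimulus-locked epochs to predict target versus non-target flashes, can be sketched as follows with synthetic data standing in for ECOG recordings (this is not the study's pipeline):

      # Minimal sketch of linear classification of P300-style epochs; synthetic
      # data stands in for ECOG recordings.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(42)
      n_epochs, n_channels, n_samples = 300, 16, 50

      # Synthetic epochs: target flashes (label 1) carry a small added deflection.
      X = rng.normal(size=(n_epochs, n_channels, n_samples))
      y = rng.integers(0, 2, n_epochs)
      X[y == 1, :, 20:35] += 0.4           # crude stand-in for a P300 response

      features = X.reshape(n_epochs, -1)   # flatten channel x time into a feature vector
      clf = LinearDiscriminantAnalysis()
      scores = cross_val_score(clf, features, y, cv=5)
      print(f"cross-validated accuracy: {scores.mean():.2f}")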

  19. Vision for perception and vision for action in the primate brain.

    PubMed

    Goodale, M A

    1998-01-01

    Visual systems first evolved not to enable animals to see, but to provide distal sensory control of their movements. Vision as 'sight' is a relative newcomer to the evolutionary landscape, but its emergence has enabled animals to carry out complex cognitive operations on perceptual representations of the world. The two streams of visual processing that have been identified in the primate cerebral cortex are a reflection of these two functions of vision. The dorsal 'action' stream projecting from primary visual cortex to the posterior parietal cortex provides flexible control of more ancient subcortical visuomotor modules for the production of motor acts. The ventral 'perceptual' stream projecting from the primary visual cortex to the temporal lobe provides the rich and detailed representation of the world required for cognitive operations. Both streams process information about the structure of objects and about their spatial locations--and both are subject to the modulatory influences of attention. Each stream, however, uses visual information in different ways. Transformations carried out in the ventral stream permit the formation of perceptual representations that embody the enduring characteristics of objects and their relations; those carried out in the dorsal stream which utilize moment-to-moment information about objects within egocentric frames of reference, mediate the control of skilled actions. Both streams work together in the production of goal-directed behaviour.

  20. Ultra-Rapid serial visual presentation reveals dynamics of feedforward and feedback processes in the ventral visual pathway.

    PubMed

    Mohsenzadeh, Yalda; Qin, Sheng; Cichy, Radoslaw M; Pantazis, Dimitrios

    2018-06-21

    Human visual recognition activates a dense network of overlapping feedforward and recurrent neuronal processes, making it hard to disentangle processing in the feedforward from the feedback direction. Here, we used ultra-rapid serial visual presentation to suppress sustained activity that blurs the boundaries of processing steps, enabling us to resolve two distinct stages of processing with MEG multivariate pattern classification. The first processing stage was the rapid activation cascade of the bottom-up sweep, which terminated early as visual stimuli were presented at progressively faster rates. The second stage was the emergence of categorical information with peak latency that shifted later in time with progressively faster stimulus presentations, indexing time-consuming recurrent processing. Using MEG-fMRI fusion with representational similarity, we localized recurrent signals in early visual cortex. Together, our findings segregated an initial bottom-up sweep from subsequent feedback processing, and revealed the neural signature of increased recurrent processing demands for challenging viewing conditions. © 2018, Mohsenzadeh et al.
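
    Time-resolved multivariate pattern classification of the kind used here can be sketched as cross-validating a separate classifier at every time point; the snippet below uses synthetic sensor data as a stand-in for MEG:

      # Sketch of time-resolved multivariate pattern classification: a separate
      # classifier is cross-validated at each time point. Synthetic data only.
      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_trials, n_sensors, n_times = 200, 30, 60
      X = rng.normal(size=(n_trials, n_sensors, n_times))
      y = rng.integers(0, 2, n_trials)                 # two stimulus categories
      X[y == 1, :, 25:40] += 0.3                       # category signal in a late window

      accuracy = np.array([
          cross_val_score(LinearSVC(dual=False), X[:, :, t], y, cv=5).mean()
          for t in range(n_times)
      ])
      peak = accuracy.argmax()
      print(f"peak decoding accuracy {accuracy[peak]:.2f} at time index {peak}")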

  1. Wind Wake Watcher v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Shawn

    This software enables the user to produce Google Earth visualizations of turbine wake effects for wind farms. The visualizations are based on computations of statistical quantities that vary with wind direction and help quantify the effects of upwind turbines on the power production of turbines in their wakes. The outputs of the software are plot images and KML files that can be loaded into Google Earth. The statistics computed are described in greater detail in the paper: S. Martin, C. H. Westergaard, and J. White (2016), Visualizing Wind Farm Wakes Using SCADA Data, in Whither Turbulence and Big Data in the 21st Century? Eds. A. Pollard, L. Castillo, L. Danaila, and M. Glauser. Springer, pp. 231-254.
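
    A rough sketch of the kind of direction-binned wake statistic involved is shown below using pandas; the SCADA file and column names are hypothetical, and this is not the released software:

      # Rough sketch of a direction-binned wake statistic from SCADA-like data:
      # the ratio of a downwind turbine's power to its upwind neighbour's,
      # averaged per wind-direction bin. File and column names are hypothetical.
      import pandas as pd

      scada = pd.read_csv("wind_farm_scada.csv")   # columns: wind_dir_deg, power_T01, power_T02

      scada["dir_bin"] = (scada["wind_dir_deg"] // 10 * 10).astype(int)   # 10-degree bins
      scada["power_ratio"] = scada["power_T02"] / scada["power_T01"]

      wake_stats = scada.groupby("dir_bin")["power_ratio"].agg(["mean", "count"])
      print(wake_stats)   # dips in the mean ratio mark directions where T02 sits in T01's wake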

  2. Analytical Thinking, Analytical Action: Using Prelab Video Demonstrations and e-Quizzes to Improve Undergraduate Preparedness for Analytical Chemistry Practical Classes

    ERIC Educational Resources Information Center

    Jolley, Dianne F.; Wilson, Stephen R.; Kelso, Celine; O'Brien, Glennys; Mason, Claire E.

    2016-01-01

    This project utilizes visual and critical thinking approaches to develop a higher-education synergistic prelab training program for a large second-year undergraduate analytical chemistry class, directing more of the cognitive learning to the prelab phase. This enabled students to engage in more analytical thinking prior to engaging in the…

  3. Temporal Limitations in the Effective Binding of Attended Target Attributes in the Mutual Masking of Visual Objects

    ERIC Educational Resources Information Center

    Hommuk, Karita; Bachmann, Talis

    2009-01-01

    The problem of feature binding has been examined under conditions of distributed attention or with spatially dispersed stimuli. We studied binding by asking whether selective attention to a feature of a masked object enables perceptual access to the other features of that object using conditions in which spatial attention was directed at a single…

  4. Advancing Water Science through Data Visualization

    NASA Astrophysics Data System (ADS)

    Li, X.; Troy, T.

    2014-12-01

    As water scientists, we are handling increasingly large datasets with many variables, making it easy to lose ourselves in the details. Advanced data visualization will play an increasingly significant role in propelling the development of water science in research, economy, policy and education. It can enable analysis within research, further data scientists' understanding of behavior and processes, and potentially affect how the public, whom we often want to inform, understands our work. Unfortunately for water scientists, data visualization is approached in an ad hoc manner when a more formal methodology or understanding could significantly improve both research within the academy and outreach to the public. First, to broaden and deepen scientific understanding, data visualization allows more targets to be analyzed and processed simultaneously and represents variables effectively, revealing patterns, trends and relationships; it can thus even open new research directions or branches of water science. With visualization, we can more clearly detect and separate pivotal from trivial influencing factors when abstracting the original complex target system. By providing direct visual perception of the differences between observational data and model predictions, data visualization allows researchers to quickly examine the quality of models in water science. Second, data visualization can also improve public awareness and perhaps influence behavior. By offering decision makers clearer perspectives on the potential value of water, data visualization can amplify the economic value of water science and also increase relevant employment. By providing policymakers with compelling visuals of the role of water in social and natural systems, data visualization can advance water management and water conservation legislation. And by letting the public build their own data visualizations through apps and games about water science, people can absorb knowledge about water indirectly and raise their awareness of water problems.

  5. Direct and Indirect Visualization of Bacterial Effector Delivery into Diverse Plant Cell Types during Infection

    PubMed Central

    Henry, Elizabeth; Jauneau, Alain; Deslandes, Laurent

    2017-01-01

    To cause disease, diverse pathogens deliver effector proteins into host cells. Pathogen effectors can inhibit defense responses, alter host physiology, and represent important cellular probes to investigate plant biology. However, effector function and localization have primarily been investigated after overexpression in planta. Visualizing effector delivery during infection is challenging due to the plant cell wall, autofluorescence, and low effector abundance. Here, we used a GFP strand system to directly visualize bacterial effectors delivered into plant cells through the type III secretion system. GFP is a beta barrel that can be divided into 11 strands. We generated transgenic Arabidopsis thaliana plants expressing GFP1-10 (strands 1 to 10). Multiple bacterial effectors tagged with the complementary strand 11 epitope retained their biological function in Arabidopsis and tomato (Solanum lycopersicum). Infection of plants expressing GFP1-10 with bacteria delivering GFP11-tagged effectors enabled direct effector detection in planta. We investigated the temporal and spatial delivery of GFP11-tagged effectors during infection with the foliar pathogen Pseudomonas syringae and the vascular pathogen Ralstonia solanacearum. Thus, the GFP strand system can be broadly used to investigate effector biology in planta. PMID:28600390

  6. Data Flow Analysis and Visualization for Spatiotemporal Statistical Data without Trajectory Information.

    PubMed

    Kim, Seokyeon; Jeong, Seongmin; Woo, Insoo; Jang, Yun; Maciejewski, Ross; Ebert, David S

    2018-03-01

    Geographic visualization research has focused on a variety of techniques to represent and explore spatiotemporal data. The goal of those techniques is to enable users to explore events and interactions over space and time in order to facilitate the discovery of patterns, anomalies and relationships within the data. However, it is difficult to extract and visualize data flow patterns over time for non-directional statistical data without trajectory information. In this work, we develop a novel flow analysis technique to extract, represent, and analyze flow maps of non-directional spatiotemporal data unaccompanied by trajectory information. We estimate a continuous distribution of these events over space and time, and extract flow fields for spatial and temporal changes utilizing a gravity model. Then, we visualize the spatiotemporal patterns in the data by employing flow visualization techniques. The user is presented with temporal trends of geo-referenced discrete events on a map. As such, overall spatiotemporal data flow patterns help users analyze geo-referenced temporal events, such as disease outbreaks, crime patterns, etc. To validate our model, we discard the trajectory information in an origin-destination dataset, apply our technique to the data, and compare the derived trajectories with the originals. Finally, we present spatiotemporal trend analysis for statistical datasets including Twitter data, maritime search and rescue events, and syndromic surveillance.
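
    As a hedged illustration of the gravity-model idea (not the paper's exact formulation), the sketch below derives a flow vector per grid cell from two density snapshots, attracting each cell toward others in proportion to the product of densities over squared distance:

      # Hedged sketch of a gravity-model style flow field between two density
      # snapshots on a grid. This illustrates the general idea only; it is not
      # the paper's formulation, and the densities here are random placeholders.
      import numpy as np

      rng = np.random.default_rng(3)
      n = 20                                    # grid is n x n
      xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
      coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)

      density_t0 = rng.random(n * n)            # estimated event density at time t
      density_t1 = rng.random(n * n)            # estimated event density at time t+1

      diff = coords[None, :, :] - coords[:, None, :]        # displacement vectors j - i
      dist2 = (diff ** 2).sum(axis=2) + 1e-9                # squared distances (avoid /0)
      weight = density_t0[:, None] * density_t1[None, :] / dist2
      np.fill_diagonal(weight, 0.0)

      flow = (weight[:, :, None] * diff / np.sqrt(dist2)[:, :, None]).sum(axis=1)
      flow = flow.reshape(n, n, 2)              # 2D flow vector per grid cell
      print("mean flow magnitude:", np.linalg.norm(flow, axis=2).mean())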

  7. Anisoft - Advanced Treatment of Magnetic Anisotropy Data

    NASA Astrophysics Data System (ADS)

    Chadima, M.

    2017-12-01

    Since its first release, Anisoft (Anisotropy Data Browser) has gained wide popularity in the magnetic fabric community mainly due to its simple and user-friendly interface enabling very fast visualization of magnetic anisotropy tensors. Here, a major Anisoft update is presented, transforming a rather simple data viewer into a platform offering advanced treatment of magnetic anisotropy data. The updated software introduces a new, enlarged binary data format which stores both in-phase and out-of-phase (if measured) susceptibility tensors (AMS) or tensors of anisotropy of magnetic remanence (AMR), together with their respective confidence ellipses and values of F-tests for anisotropy. In addition to the tensor data, a whole array of specimen orientation angles and orientations of mesoscopic foliation(s) and lineation(s) is stored for each record, enabling later editing or corrections. The input data may be directly acquired by AGICO Kappabridges (AMS) or Spinner Magnetometers (AMR); imported from various data formats, including the long-time standard binary ran-format; or manually created. Multiple anisotropy files can be combined together or split into several files by manual data selection or data filtering according to their values. Anisotropy tensors are conventionally visualized as principal directions (eigenvectors) in equal-area projection (stereoplot) together with a wide array of quantitative anisotropy parameters presented in histograms or in color-coded scatter plots showing the mutual relationship of up to three quantitative parameters. When dealing with AMS in variable low fields, field-independent and field-dependent components of anisotropy can be determined (Hrouda 2009). For a group of specimens, individual principal directions can be contoured, or a mean tensor and respective confidence ellipses of its principal directions can be calculated using either the Hext-Jelinek (Jelinek 1978) statistics or the Bootstrap method (Constable & Tauxe 1990). Each graphical output can be exported into several vector or raster graphical formats or, via clipboard, pasted directly into a presentation or publication manuscript. Calculated principal directions or anisotropy parameters can be exported into various types of text files ready to be visualized or processed by any software of the user's choice.
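
    The basic tensor treatment behind such software reduces to an eigen-analysis of the symmetric susceptibility tensor plus standard anisotropy parameters; a short NumPy sketch using a made-up tensor and the Jelinek corrected degree of anisotropy P' and shape parameter T is:

      # Sketch of the basic tensor treatment: principal susceptibilities and
      # directions from a symmetric AMS tensor, plus the Jelinek (1981)
      # corrected degree of anisotropy P' and shape parameter T.
      # The example tensor values are made up.
      import numpy as np

      K = np.array([[1.05e-3, 0.02e-3, 0.00e-3],
                    [0.02e-3, 1.00e-3, 0.01e-3],
                    [0.00e-3, 0.01e-3, 0.93e-3]])

      eigvals, eigvecs = np.linalg.eigh(K)          # eigenvalues in ascending order
      k3, k2, k1 = eigvals                          # so that k1 >= k2 >= k3
      v1 = eigvecs[:, 2]                            # maximum principal direction

      eta1, eta2, eta3 = np.log([k1, k2, k3])
      eta = (eta1 + eta2 + eta3) / 3.0
      P_prime = np.exp(np.sqrt(2 * ((eta1 - eta) ** 2 + (eta2 - eta) ** 2 + (eta3 - eta) ** 2)))
      T = (2 * eta2 - eta1 - eta3) / (eta1 - eta3)  # -1 (prolate) ... +1 (oblate)

      print(f"k1 direction: {v1}, P' = {P_prime:.3f}, T = {T:+.2f}")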

  8. An Effective Histological Staining Process to Visualize Bone Interstitial Fluid Space Using Confocal Microscopy

    PubMed Central

    Ciani, Cesare; Doty, Stephen B.; Fritton, Susannah P.

    2009-01-01

    Bone is a composite porous material with two functional levels of porosity: the vascular porosity that surrounds blood vessels and the lacunar-canalicular porosity that surrounds the osteocytes. Both the vascular porosity and lacunar-canalicular porosity are directly involved in interstitial fluid flow, thought to play an important role in bone’s maintenance. Because of the small dimensions of the lacunar-canalicular porosity, interstitial fluid space has been difficult to visualize and quantify. We report a new staining protocol that is reliable and easily reproducible, using fluorescein isothiocyanate (FITC) as a probe visualized by confocal microscopy. Reconstructed FITC-stained cross sections enable effective visualization of bone microstructure and microporosities. This new staining process can be used to analyze interstitial fluid space, providing high-resolution quantification of the vascular pores and the lacunar-canalicular network of cortical and cancellous bone. PMID:19442607

  9. Mechanisms for Rapid Adaptive Control of Motion Processing in Macaque Visual Cortex.

    PubMed

    McLelland, Douglas; Baker, Pamela M; Ahmed, Bashir; Kohn, Adam; Bair, Wyeth

    2015-07-15

    A key feature of neural networks is their ability to rapidly adjust their function, including signal gain and temporal dynamics, in response to changes in sensory inputs. These adjustments are thought to be important for optimizing the sensitivity of the system, yet their mechanisms remain poorly understood. We studied adaptive changes in temporal integration in direction-selective cells in macaque primary visual cortex, where specific hypotheses have been proposed to account for rapid adaptation. By independently stimulating direction-specific channels, we found that the control of temporal integration of motion at one direction was independent of motion signals driven at the orthogonal direction. We also found that individual neurons can simultaneously support two different profiles of temporal integration for motion in orthogonal directions. These findings rule out a broad range of adaptive mechanisms as being key to the control of temporal integration, including untuned normalization and nonlinearities of spike generation and somatic adaptation in the recorded direction-selective cells. Such mechanisms are too broadly tuned, or occur too far downstream, to explain the channel-specific and multiplexed temporal integration that we observe in single neurons. Instead, we are compelled to conclude that parallel processing pathways are involved, and we demonstrate one such circuit using a computer model. This solution allows processing in different direction/orientation channels to be separately optimized and is sensible given that, under typical motion conditions (e.g., translation or looming), speed on the retina is a function of the orientation of image components. Many neurons in visual cortex are understood in terms of their spatial and temporal receptive fields. It is now known that the spatiotemporal integration underlying visual responses is not fixed but depends on the visual input. For example, neurons that respond selectively to motion direction integrate signals over a shorter time window when visual motion is fast and a longer window when motion is slow. We investigated the mechanisms underlying this useful adaptation by recording from neurons as they responded to stimuli moving in two different directions at different speeds. Computer simulations of our results enabled us to rule out several candidate theories in favor of a model that integrates across multiple parallel channels that operate at different time scales. Copyright © 2015 the authors 0270-6474/15/3510268-13$15.00/0.

  10. Before your very eyes: the value and limitations of eye tracking in medical education.

    PubMed

    Kok, Ellen M; Jarodzka, Halszka

    2017-01-01

    Medicine is a highly visual discipline. Physicians from many specialties constantly use visual information in diagnosis and treatment. However, they are often unable to explain how they use this information. Consequently, it is unclear how to train medical students in this visual processing. Eye tracking is a research technique that may offer answers to these open questions, as it enables researchers to investigate such visual processes directly by measuring eye movements. This may help researchers understand the processes that support or hinder a particular learning outcome. In this article, we clarify the value and limitations of eye tracking for medical education researchers. For example, eye tracking can clarify how experience with medical images mediates diagnostic performance and how students engage with learning materials. Furthermore, eye tracking can also be used directly for training purposes by displaying eye movements of experts in medical images. Eye movements reflect cognitive processes, but cognitive processes cannot be directly inferred from eye-tracking data. In order to interpret eye-tracking data properly, theoretical models must always be the basis for designing experiments as well as for analysing and interpreting eye-tracking data. The interpretation of eye-tracking data is further supported by sound experimental design and methodological triangulation. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  11. Musical Interfaces: Visualization and Reconstruction of Music with a Microfluidic Two-Phase Flow

    PubMed Central

    Mak, Sze Yi; Li, Zida; Frere, Arnaud; Chan, Tat Chuen; Shum, Ho Cheung

    2014-01-01

    Detection of sound waves in fluids can hardly be realized because of the lack of approaches to visualize the very minute sound-induced fluid motion. In this paper, we demonstrate the first direct visualization of music in the form of ripples at a microfluidic aqueous-aqueous interface with an ultra-low interfacial tension. The interfaces respond to sound of different frequencies and amplitudes robustly, with sufficiently precise time resolution for the recording of musical notes and even subsequent reconstruction with high fidelity. Our work shows the possibility of sensing and transmitting vibrations as tiny as those induced by sound. This robust control of the interfacial dynamics enables a platform for investigating the mechanical properties of microstructures and for studying frequency-dependent phenomena, for example, in biological systems. PMID:25327509

  12. Visualizing Breath using Digital Holography

    NASA Astrophysics Data System (ADS)

    Hobson, P. R.; Reid, I. D.; Wilton, J. B.

    2013-02-01

    Artist Jayne Wilton and physicists Peter Hobson and Ivan Reid are collaborating at Brunel University on a project which aims to use a range of techniques to make visible the normally invisible dynamics of the breath and the verbal and non-verbal communication it facilitates. The breath is a source of a wide range of chemical, auditory and physical exchanges with the direct environment. Digital holography is being investigated to enable a visually stimulating articulation of the physical trajectory of the breath as it leaves the mouth. Initial findings of this research are presented. Real-time digital hologram replay allows the audience to move through holograms of breath-borne particles.

  13. Online Analysis Enhances Use of NASA Earth Science Data

    NASA Technical Reports Server (NTRS)

    Acker, James G.; Leptoukh, Gregory

    2007-01-01

    Giovanni, the Goddard Earth Sciences Data and Information Services Center (GES DISC) Interactive Online Visualization and Analysis Infrastructure, has provided researchers with advanced capabilities to perform data exploration and analysis with observational data from NASA Earth observation satellites. In the past 5-10 years, examining geophysical events and processes with remote-sensing data required a multistep process of data discovery, data acquisition, data management, and ultimately data analysis. Giovanni accelerates this process by enabling basic visualization and analysis directly on the World Wide Web. In the last two years, Giovanni has added new data acquisition functions and expanded analysis options to increase its usefulness to the Earth science research community.

  14. A multilevel layout algorithm for visualizing physical and genetic interaction networks, with emphasis on their modular organization.

    PubMed

    Tuikkala, Johannes; Vähämaa, Heidi; Salmela, Pekka; Nevalainen, Olli S; Aittokallio, Tero

    2012-03-26

    Graph drawing is an integral part of many systems biology studies, enabling visual exploration and mining of large-scale biological networks. While a number of layout algorithms are available in popular network analysis platforms, such as Cytoscape, it remains poorly understood how well their solutions reflect the underlying biological processes that give rise to the network connectivity structure. Moreover, visualizations obtained using conventional layout algorithms, such as those based on the force-directed drawing approach, may become uninformative when applied to larger networks with dense or clustered connectivity structure. We implemented a modified layout plug-in, named Multilevel Layout, which applies the conventional layout algorithms within a multilevel optimization framework to better capture the hierarchical modularity of many biological networks. Using a wide variety of real life biological networks, we carried out a systematic evaluation of the method in comparison with other layout algorithms in Cytoscape. The multilevel approach provided both biologically relevant and visually pleasant layout solutions in most network types, hence complementing the layout options available in Cytoscape. In particular, it could improve drawing of large-scale networks of yeast genetic interactions and human physical interactions. In more general terms, the biological evaluation framework developed here enables one to assess the layout solutions from any existing or future graph drawing algorithm as well as to optimize their performance for a given network type or structure. By making use of the multilevel modular organization when visualizing biological networks, together with the biological evaluation of the layout solutions, one can generate convenient visualizations for many network biology applications.
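
    The multilevel idea itself (not the plug-in's code) can be illustrated in a few lines with NetworkX: coarsen the graph into modules, lay out the coarse graph, then seed a force-directed refinement of the full graph with those positions:

      # Minimal illustration of a multilevel layout (not the Cytoscape plug-in):
      # coarsen the graph into communities, lay out the coarse graph, then seed
      # a force-directed refinement of the full graph with those positions.
      import networkx as nx
      from networkx.algorithms import community

      G = nx.connected_caveman_graph(6, 8)                      # clustered test network

      # Level 1: coarsen by grouping nodes into communities.
      comms = list(community.greedy_modularity_communities(G))
      node2comm = {n: ci for ci, c in enumerate(comms) for n in c}
      coarse = nx.Graph()
      coarse.add_nodes_from(range(len(comms)))
      for u, v in G.edges():
          cu, cv = node2comm[u], node2comm[v]
          if cu != cv:
              coarse.add_edge(cu, cv)

      # Level 0: lay out the coarse graph, then refine the full graph from it.
      coarse_pos = nx.spring_layout(coarse, seed=1)
      init = {n: coarse_pos[node2comm[n]] for n in G.nodes()}
      pos = nx.spring_layout(G, pos=init, iterations=50, seed=1)
      print(f"{len(comms)} modules placed; {len(pos)} node positions computed")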

  15. Flight Deck Technologies to Enable NextGen Low Visibility Surface Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence (Lance) J., III; Arthur, Jarvis (Trey) J.; Kramer, Lynda J.; Norman, Robert M.; Bailey, Randall E.; Jones, Denise R.; Karwac, Jerry R., Jr.; Shelton, Kevin J.; Ellis, Kyle K. E.

    2013-01-01

    Many key capabilities are being identified to enable the Next Generation Air Transportation System (NextGen), including the concept of Equivalent Visual Operations (EVO): replicating the capacity and safety of today's visual flight rules (VFR) in all-weather conditions. NASA is striving to develop the technologies and knowledge to enable EVO and to extend EVO towards a Better-Than-Visual operational concept. This operational concept envisions an "equivalent visual" paradigm where an electronic means provides sufficient visual references of the external world and other required flight references on flight deck displays that enable VFR-like operational tempos and maintain and improve the safety of VFR while using VFR-like procedures in all-weather conditions. The Langley Research Center (LaRC) has recently completed preliminary research on flight deck technologies for low visibility surface operations. The work assessed the potential of enhanced vision and airport moving map displays to achieve levels of safety and performance equivalent to existing low visibility operational requirements. The work has the potential to better enable NextGen by perhaps providing an operational credit for conducting safe low visibility surface operations by use of the flight deck technologies.

  16. Better-Than-Visual Technologies for Next Generation Air Transportation System Terminal Maneuvering Area Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Bailey, Randall E.; Shelton, Kevin J.; Jones, Denise R.; Kramer, Lynda J.; Arthur, Jarvis J., III; Williams, Steve P.; Barmore, Bryan E.; Ellis, Kyle E.; Rehfeld, Sherri A.

    2011-01-01

    A consortium of industry, academia and government agencies are devising new concepts for future U.S. aviation operations under the Next Generation Air Transportation System (NextGen). Many key capabilities are being identified to enable NextGen, including the concept of Equivalent Visual Operations (EVO) replicating the capacity and safety of today's visual flight rules (VFR) in all-weather conditions. NASA is striving to develop the technologies and knowledge to enable EVO and to extend EVO towards a Better-Than-Visual (BTV) operational concept. The BTV operational concept uses an electronic means to provide sufficient visual references of the external world and other required flight references on flight deck displays that enable VFR-like operational tempos and maintain and improve the safety of VFR while using VFR-like procedures in all-weather conditions. NASA Langley Research Center (LaRC) research on technologies to enable the concept of BTV is described.

  17. Experience-dependent plasticity from eye opening enables lasting, visual cortex-dependent enhancement of motion vision.

    PubMed

    Prusky, Glen T; Silver, Byron D; Tschetter, Wayne W; Alam, Nazia M; Douglas, Robert M

    2008-09-24

    Developmentally regulated plasticity of vision has generally been associated with "sensitive" or "critical" periods in juvenile life, wherein visual deprivation leads to loss of visual function. Here we report an enabling form of visual plasticity that commences in infant rats from eye opening, in which daily threshold testing of optokinetic tracking, amid otherwise normal visual experience, stimulates enduring, visual cortex-dependent enhancement (>60%) of the spatial frequency threshold for tracking. The perceptual ability to use spatial frequency in discriminating between moving visual stimuli is also improved by the testing experience. The capacity for inducing enhancement is transitory and effectively limited to infancy; however, enhanced responses are not consolidated and maintained unless in-kind testing experience continues uninterrupted into juvenile life. The data show that selective visual experience from infancy can alone enable visual function. They also indicate that plasticity associated with visual deprivation may not be the only cause of developmental visual dysfunction, because we found that experientially inducing enhancement in late infancy, without subsequent reinforcement of the experience in early juvenile life, can lead to enduring loss of function.

  18. Real-time decoding of the direction of covert visuospatial attention

    NASA Astrophysics Data System (ADS)

    Andersson, Patrik; Ramsey, Nick F.; Raemaekers, Mathijs; Viergever, Max A.; Pluim, Josien P. W.

    2012-08-01

    Brain-computer interfaces (BCIs) make it possible to translate a person's intentions into actions without depending on the muscular system. Brain activity is measured and classified into commands, thereby creating a direct link between the mind and the environment, enabling, e.g., cursor control or navigation of a wheelchair or robot. Most BCI research is conducted with scalp EEG, but recent developments move toward intracranial electrodes for paralyzed people. The vast majority of BCI studies focus on the motor system as the appropriate target for recording and decoding movement intentions. However, properties of the visual system may make it an attractive and intuitive alternative. We report on a study investigating the feasibility of decoding covert visuospatial attention in real time, exploiting the full potential of a 7 T MRI scanner to obtain the necessary signal quality. The study capitalizes on earlier fMRI findings that covert visuospatial attention changes activity in the visual areas that respond to stimuli presented in the attended area of the visual field. Healthy volunteers were instructed to shift their attention from the center of the screen to one of four static targets in the periphery, without moving their eyes from the center. During the first part of the fMRI run, the relevant brain regions were located using incremental statistical analysis. During the second part, the activity in these regions was extracted and classified, and the subject was given visual feedback of the result. Performance was assessed as the number of trials in which the real-time classifier correctly identified the direction of attention. On average, 80% of trials were correctly classified (chance level <25%) based on a single image volume, indicating very high decoding performance. While we restricted the experiment to five attention target regions (four peripheral and one central), the number of directions can be higher provided the brain activity patterns can be distinguished. In summary, the visual system promises to be an effective target for BCI control.
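
    As a rough illustration of the classification step only, the following sketch trains and cross-validates a linear classifier on synthetic region-of-interest signals standing in for the extracted fMRI activity; the four-ROI layout, the signal model, and the use of scikit-learn's linear discriminant analysis are assumptions, not the study's actual real-time pipeline.

```python
# Minimal sketch of the decoding step on synthetic ROI signals
# (scikit-learn assumed; not the study's actual real-time fMRI pipeline).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_rois = 200, 4                     # one ROI per attended target (assumption)
labels = rng.integers(0, 4, n_trials)         # 0..3 = four peripheral directions

# Synthetic BOLD-like features: the attended target's ROI responds more strongly.
X = rng.normal(0.0, 1.0, (n_trials, n_rois))
X[np.arange(n_trials), labels] += 1.5

accuracy = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {accuracy:.2f} (chance = 0.25)")
```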

  19. Use of an augmented-vision device for visual search by patients with tunnel vision.

    PubMed

    Luo, Gang; Peli, Eli

    2006-09-01

    To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on visual search performance of patients with tunnel vision. Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VFs) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF, 8°-11° wide) carried out the search over a 90° x 74° area, and nine subjects (VF, 7°-16° wide) carried out the search over a 66° x 52° area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided in the larger and the smaller area searches. When using the device, a significant reduction in search time (28%-74%) was demonstrated by all three subjects in the larger area search and by subjects with VFs wider than 10° in the smaller area search (average, 22%). Directness and gaze speed accounted for 90% of the variability of search time. Although performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. Because improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks.

  20. Intuitive Visualization of Transient Flow: Towards a Full 3D Tool

    NASA Astrophysics Data System (ADS)

    Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph

    2015-04-01

    Visualization of geoscientific data is a challenging task, especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. With STRING, Fraunhofer ITWM (Kaiserslautern, Germany), in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany), developed commercial software for intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization, experts can more easily convey their findings to non-professional audiences. In STRING, pathlets moving with the flow provide an intuition of the velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow, which means that the pathlets move along the direction given by pathlines. In order to capture every detail of the flow, an advanced method for intelligent, time-dependent seeding of the pathlets is implemented, based on ideas of the Finite Pointset Method (FPM) originally conceived and continuously developed at Fraunhofer ITWM. The same method also removes pathlets during the visualization to avoid visual clutter. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly onto the pathlets or displayed in the background of the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focuses on the graphical presentation of flow data for non-professional audiences, its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for an extension to a full 3D tool. Currently, STRING can generate animations of single 2D cuts, either planar or curved surfaces, through 3D simulation domains. To provide a general tool that also enables experts to directly explore and analyze large 3D flow fields, the software needs to be extended to intuitive as well as interactive visualizations of entire 3D flow domains. The current research concerning this project, which is funded by the Federal Ministry for Economic Affairs and Energy (Germany), is presented.
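
    The Lagrangian idea behind the pathlets can be sketched in a few lines: each pathlet is advected through a time-dependent velocity field by integrating along its pathline. The example below uses a simple explicit Euler integrator and a made-up analytic velocity field; STRING's FPM-based seeding and removal logic is not reproduced.

```python
# Minimal sketch of Lagrangian pathline integration in a transient 2D flow
# (explicit Euler on a made-up velocity field; not STRING's FPM-based seeding).
import numpy as np

def velocity(p, t):
    """Hypothetical time-dependent velocity field: (x, y), t -> (u, v)."""
    x, y = p
    return np.array([-y + 0.3 * np.sin(t), x])

def pathline(seed, t0=0.0, t1=10.0, dt=0.01):
    """Advect one pathlet from `seed` and return its trajectory."""
    p, t = np.asarray(seed, dtype=float), t0
    trajectory = [p.copy()]
    while t < t1:
        p = p + dt * velocity(p, t)    # explicit Euler step along the pathline
        t += dt
        trajectory.append(p.copy())
    return np.array(trajectory)

seeds = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
trajectories = [pathline(s) for s in seeds]
print(trajectories[0][-1])             # final position of the first pathlet
```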

  1. Perceptual learning modifies untrained pursuit eye movements.

    PubMed

    Szpiro, Sarit F A; Spering, Miriam; Carrasco, Marisa

    2014-07-07

    Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. © 2014 ARVO.

  2. Perceptual learning modifies untrained pursuit eye movements

    PubMed Central

    Szpiro, Sarit F. A.; Spering, Miriam; Carrasco, Marisa

    2014-01-01

    Perceptual learning improves detection and discrimination of relevant visual information in mature humans, revealing sensory plasticity. Whether visual perceptual learning affects motor responses is unknown. Here we implemented a protocol that enabled us to address this question. We tested a perceptual response (motion direction estimation, in which observers overestimate motion direction away from a reference) and a motor response (voluntary smooth pursuit eye movements). Perceptual training led to greater overestimation and, remarkably, it modified untrained smooth pursuit. In contrast, pursuit training did not affect overestimation in either pursuit or perception, even though observers in both training groups were exposed to the same stimuli for the same time period. A second experiment revealed that estimation training also improved discrimination, indicating that overestimation may optimize perceptual sensitivity. Hence, active perceptual training is necessary to alter perceptual responses, and an acquired change in perception suffices to modify pursuit, a motor response. PMID:25002412

  3. Visualization of migration of human cortical neurons generated from induced pluripotent stem cells.

    PubMed

    Bamba, Yohei; Kanemura, Yonehiro; Okano, Hideyuki; Yamasaki, Mami

    2017-09-01

    Neuronal migration is considered a key process in human brain development. However, direct observation of migrating human cortical neurons in the fetal brain raises ethical concerns and is a major obstacle to investigating human cortical neuronal migration. We established a novel system that enables direct visualization of migrating cortical neurons generated from human induced pluripotent stem cells (hiPSCs). We observed the migration of cortical neurons generated from hiPSCs derived from a control and from a patient with lissencephaly. Our system needs no viable brain tissue, which is usually required for slice culture. The migratory behavior of human cortical neurons can be observed more easily and more vividly, by means of their fluorescence and glial scaffold, than with earlier methods. Our in vitro experimental system provides a new platform for investigating development of the human central nervous system and brain malformation. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Decoding facial blends of emotion: visual field, attentional and hemispheric biases.

    PubMed

    Ross, Elliott D; Shayya, Luay; Champlain, Amanda; Monnot, Marilee; Prodan, Calin I

    2013-12-01

    Most clinical research assumes that modulation of facial expressions is lateralized predominantly across the right-left hemiface. However, social psychological research suggests that facial expressions are organized predominantly across the upper-lower face. Because humans learn to cognitively control facial expression for social purposes, the lower face may display a false emotion, typically a smile, to enable approach behavior. In contrast, the upper face may leak a person's true feeling state by producing a brief facial blend of emotion, i.e. a different emotion on the upper versus lower face. Previous studies from our laboratory have shown that upper facial emotions are processed preferentially by the right hemisphere under conditions of directed attention if facial blends of emotion are presented tachistoscopically to the mid left and right visual fields. This paper explores how facial blends are processed within the four visual quadrants. The results, combined with our previous research, demonstrate that lower more so than upper facial emotions are perceived best when presented to the viewer's left and right visual fields just above the horizontal axis. Upper facial emotions are perceived best when presented to the viewer's left visual field just above the horizontal axis under conditions of directed attention. Thus, by gazing at a person's left ear, which also avoids the social stigma of eye-to-eye contact, one's ability to decode facial expressions should be enhanced. Published by Elsevier Inc.

  5. Decoding the direction of imagined visual motion using 7 T ultra-high field fMRI

    PubMed Central

    Emmerling, Thomas C.; Zimmermann, Jan; Sorger, Bettina; Frost, Martin A.; Goebel, Rainer

    2016-01-01

    There is a long-standing debate about the neurocognitive implementation of mental imagery. One form of mental imagery is the imagery of visual motion, which is of interest due to its naturalistic and dynamic character. However, so far only the mere occurrence rather than the specific content of motion imagery was shown to be detectable. In the current study, the application of multi-voxel pattern analysis to high-resolution functional data of 12 subjects acquired with ultra-high field 7 T functional magnetic resonance imaging allowed us to show that imagery of visual motion can indeed activate the earliest levels of the visual hierarchy, but the extent thereof varies highly between subjects. Our approach enabled classification not only of complex imagery, but also of its actual contents, in that the direction of imagined motion out of four options was successfully identified in two thirds of the subjects and with accuracies of up to 91.3% in individual subjects. A searchlight analysis confirmed the local origin of decodable information in striate and extra-striate cortex. These high-accuracy findings not only shed new light on a central question in vision science on the constituents of mental imagery, but also show for the first time that the specific sub-categorical content of visual motion imagery is reliably decodable from brain imaging data on a single-subject level. PMID:26481673

  6. Self-Adaptive Correction of Heading Direction in Stair Climbing for Tracked Mobile Robots Using Visual Servoing Approach

    NASA Astrophysics Data System (ADS)

    Ji, Peng; Song, Aiguo; Song, Zimo; Liu, Yuqing; Jiang, Guohua; Zhao, Guopu

    2017-02-01

    In this paper, we describe a heading direction correction algorithm for a tracked mobile robot. To save hardware resources as far as possible, the mobile robot's wrist camera, rotated to face the stairs, is used as the only sensor. An ensemble heading deviation detector is proposed to help the mobile robot correct its heading direction. To improve generalization ability, a multi-scale Gabor filter is used to pre-process the input image. The final deviation estimate is obtained by applying a majority-vote strategy to the results of all the classifiers. The experimental results show that our detector enables the mobile robot to correct its heading direction adaptively while climbing the stairs.
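
    A minimal sketch of such a decision step is given below: multi-scale Gabor responses serve as image features and several simple classifiers vote on the deviation class. The feature choices, the bootstrap-trained logistic-regression voters, and the three-class labels are illustrative assumptions (scikit-image and scikit-learn assumed); the paper's actual detector design is not reproduced.

```python
# Minimal sketch: multi-scale Gabor features plus a majority vote over classifiers
# (scikit-image and scikit-learn assumed; the paper's detector is not reproduced).
import numpy as np
from skimage.filters import gabor
from sklearn.linear_model import LogisticRegression

def gabor_features(image, frequencies=(0.1, 0.2, 0.4)):
    """Mean Gabor response magnitude at several scales and orientations."""
    feats = []
    for f in frequencies:
        for theta in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
            real, imag = gabor(image, frequency=f, theta=theta)
            feats.append(np.hypot(real, imag).mean())
    return np.array(feats)

def majority_vote(classifiers, features):
    """Each classifier votes -1 (deviated left), 0 (straight) or +1 (deviated right)."""
    votes = [int(clf.predict(features.reshape(1, -1))[0]) for clf in classifiers]
    return max(set(votes), key=votes.count)

# Toy training data: random "camera images" with synthetic deviation labels.
rng = np.random.default_rng(0)
images = rng.random((30, 32, 32))
labels = rng.integers(-1, 2, 30)
X = np.array([gabor_features(img) for img in images])

# Ensemble of voters trained on bootstrap samples.
classifiers = []
for _ in range(3):
    idx = rng.integers(0, len(X), len(X))
    classifiers.append(LogisticRegression(max_iter=500).fit(X[idx], labels[idx]))

print(majority_vote(classifiers, gabor_features(images[0])))
```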

  7. Translating novel findings of perceptual-motor codes into the neuro-rehabilitation of movement disorders.

    PubMed

    Pazzaglia, Mariella; Galli, Giulia

    2015-01-01

    The bidirectional flow of perceptual and motor information has recently proven useful as a rehabilitative tool for re-building motor memories. We analyzed how the visual-motor approach has been successfully applied in neurorehabilitation, leading to surprisingly rapid and effective improvements in action execution. We proposed that the contribution of multiple sensory channels during treatment enables individuals to predict and optimize motor behavior, having a greater effect than visual input alone. We explored how state-of-the-art neuroscience techniques provide direct evidence that employing the visual-motor approach leads to increased motor cortex excitability and synaptic and cortical map plasticity. This super-additive response to multimodal stimulation may maximize neural plasticity, potentiating the effect of conventional treatment, and will be a valuable approach for advancing innovative methodologies.

  8. Observing polymersome dynamics in controlled microscale flows

    NASA Astrophysics Data System (ADS)

    Kumar, Subhalakshmi; Shenoy, Anish; Schroeder, Charles

    2015-03-01

    Achieving an understanding of single particle rheology for large yet deformable particles with controlled membrane viscoelasticity is a major challenge in soft materials. In this work, we directly visualize the dynamics of single polymersomes (~ 10 μm in size) in an extensional flow using optical microscopy. We generate polymer vesicular structures composed of polybutadiene-block-polyethylene oxide (PB-b-PEO) copolymers. Single polymersomes are confined near the stagnation point of a planar extensional flow using an automated microfluidic trap, thereby enabling the direct observation of polymersome dynamics under fluid flows with controlled strains and strain rates. In a series of experiments, we investigate the effect of varying elasticity in vesicular membranes on polymersome deformation, along with the impact of decreasing membrane fluidity upon increasing diblock copolymer molecular weight. Overall, we believe that this approach will enable precise characterization of the role of membrane properties on single particle rheology for deformable polymersomes.

  9. A multilevel layout algorithm for visualizing physical and genetic interaction networks, with emphasis on their modular organization

    PubMed Central

    2012-01-01

    Background Graph drawing is an integral part of many systems biology studies, enabling visual exploration and mining of large-scale biological networks. While a number of layout algorithms are available in popular network analysis platforms, such as Cytoscape, it remains poorly understood how well their solutions reflect the underlying biological processes that give rise to the network connectivity structure. Moreover, visualizations obtained using conventional layout algorithms, such as those based on the force-directed drawing approach, may become uninformative when applied to larger networks with dense or clustered connectivity structure. Methods We implemented a modified layout plug-in, named Multilevel Layout, which applies the conventional layout algorithms within a multilevel optimization framework to better capture the hierarchical modularity of many biological networks. Using a wide variety of real life biological networks, we carried out a systematic evaluation of the method in comparison with other layout algorithms in Cytoscape. Results The multilevel approach provided both biologically relevant and visually pleasant layout solutions in most network types, hence complementing the layout options available in Cytoscape. In particular, it could improve drawing of large-scale networks of yeast genetic interactions and human physical interactions. In more general terms, the biological evaluation framework developed here enables one to assess the layout solutions from any existing or future graph drawing algorithm as well as to optimize their performance for a given network type or structure. Conclusions By making use of the multilevel modular organization when visualizing biological networks, together with the biological evaluation of the layout solutions, one can generate convenient visualizations for many network biology applications. PMID:22448851

  10. Predictive Feedback and Conscious Visual Experience

    PubMed Central

    Panichello, Matthew F.; Cheung, Olivia S.; Bar, Moshe

    2012-01-01

    The human brain continuously generates predictions about the environment based on learned regularities in the world. These predictions actively and efficiently facilitate the interpretation of incoming sensory information. We review evidence that, as a result of this facilitation, predictions directly influence conscious experience. Specifically, we propose that predictions enable rapid generation of conscious percepts and bias the contents of awareness in situations of uncertainty. The possible neural mechanisms underlying this facilitation are discussed. PMID:23346068

  11. Enhanced visualization of peripheral retinal vasculature with wavefront sensorless adaptive optics OCT angiography in diabetic patients

    PubMed Central

    Polans, James; Cunefare, David; Cole, Eli; Keller, Brenton; Mettu, Priyatham S.; Cousins, Scott W.; Allingham, Michael J.; Izatt, Joseph A.; Farsiu, Sina

    2017-01-01

    Optical coherence tomography angiography (OCTA) is a promising technique for non-invasive visualization of vessel networks in the human eye. We debut a system capable of acquiring wide field-of-view (>70°) OCT angiograms without mosaicking. Additionally, we report on enhancing the visualization of peripheral microvasculature using wavefront sensorless adaptive optics (WSAO). We employed a fast WSAO algorithm that enabled wavefront correction in <2 seconds by iterating the mirror shape at the speed of OCT B-scans rather than volumes. Also, we contrasted ~7° field-of-view OCTA angiograms acquired in the periphery with and without WSAO correction. On average, WSAO improved the sharpness of microvasculature by 65% in healthy and 38% in diseased eyes. Preliminary observations demonstrated that the location of 7° images could be identified directly from the wide field-of-view angiogram. A pilot study on a normal subject and patients with diabetic retinopathy showed the impact of utilizing WSAO for OCTA when visualizing peripheral vasculature pathologies. PMID:28059209

  12. Plugin free remote visualization in the browser

    NASA Astrophysics Data System (ADS)

    Tamm, Georg; Slusallek, Philipp

    2015-01-01

    Today, users access information and rich media from anywhere using the web browser on their desktop computers, tablets or smartphones. But the web evolves beyond media delivery. Interactive graphics applications like visualization or gaming become feasible as browsers advance in the functionality they provide. However, to deliver large-scale visualization to thin clients like mobile devices, a dedicated server component is necessary. Ideally, the client runs directly within the browser the user is accustomed to, requiring no installation of a plugin or native application. In this paper, we present the state-of-the-art of technologies which enable plugin free remote rendering in the browser. Further, we describe a remote visualization system unifying these technologies. The system transfers rendering results to the client as images or as a video stream. We utilize the upcoming World Wide Web Consortium (W3C) conform Web Real-Time Communication (WebRTC) standard, and the Native Client (NaCl) technology built into Chrome, to deliver video with low latency.

  13. Remote visual analysis of large turbulence databases at multiple scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin

    The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.
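
    The wavelet-compression idea can be sketched as decompose, threshold, reconstruct: most coefficients are discarded and the field is rebuilt from the few that remain. The snippet below is an illustrative example using PyWavelets on a random 2D slice; it is not the framework's actual codec, and the 5% retention rate is an arbitrary choice.

```python
# Minimal sketch of wavelet compression by coefficient thresholding
# (PyWavelets assumed; illustrative only, not the framework's actual codec).
import numpy as np
import pywt

field = np.random.default_rng(0).random((256, 256))    # stand-in for a 2D data slice

# Multi-level 2D wavelet decomposition, flattened into a single coefficient array.
coeffs = pywt.wavedec2(field, wavelet="db4", level=4)
arr, slices = pywt.coeffs_to_array(coeffs)

# Keep only the largest 5% of coefficients (hard threshold); the rest become zero.
thresh = np.quantile(np.abs(arr), 0.95)
arr_sparse = pywt.threshold(arr, thresh, mode="hard")

# Reconstruct an approximation of the field from the sparse coefficients.
sparse_coeffs = pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec2")
approx = pywt.waverec2(sparse_coeffs, wavelet="db4")[:field.shape[0], :field.shape[1]]

rel_err = np.linalg.norm(field - approx) / np.linalg.norm(field)
print(f"kept ~5% of coefficients, relative L2 error = {rel_err:.3f}")
```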

  14. Remote visual analysis of large turbulence databases at multiple scales

    DOE PAGES

    Pulido, Jesus; Livescu, Daniel; Kanov, Kalin; ...

    2018-06-15

    The remote analysis and visualization of raw large turbulence datasets is challenging. Current accurate direct numerical simulations (DNS) of turbulent flows generate datasets with billions of points per time-step and several thousand time-steps per simulation. Until recently, the analysis and visualization of such datasets was restricted to scientists with access to large supercomputers. The public Johns Hopkins Turbulence database simplifies access to multi-terabyte turbulence datasets and facilitates the computation of statistics and extraction of features through the use of commodity hardware. In this paper, we present a framework designed around wavelet-based compression for high-speed visualization of large datasets and methods supporting multi-resolution analysis of turbulence. By integrating common technologies, this framework enables remote access to tools available on supercomputers and over 230 terabytes of DNS data over the Web. Finally, the database toolset is expanded by providing access to exploratory data analysis tools, such as wavelet decomposition capabilities and coherent feature extraction.

  15. Toward Head-Up and Head-Worn Displays for Equivalent Visual Operations

    NASA Technical Reports Server (NTRS)

    Prinzel, Lawrence J., III; Arthur, Jarvis J.; Bailey, Randall E.; Shelton, Kevin J.; Kramer, Lynda J.; Jones, Denise R.; Williams, Steven P.; Harrison, Stephanie J.; Ellis, Kyle K.

    2015-01-01

    A key capability envisioned for the future air transportation system is the concept of equivalent visual operations (EVO). EVO is the capability to achieve the safety of current-day Visual Flight Rules (VFR) operations and maintain the operational tempos of VFR irrespective of the weather and visibility conditions. Enhanced Flight Vision Systems (EFVS) offer a path to achieve EVO. NASA has successfully tested EFVS for commercial flight operations that has helped establish the technical merits of EFVS, without reliance on natural vision, to runways without category II/III ground-based navigation and lighting requirements. The research has tested EFVS for operations with both Head-Up Displays (HUDs) and "HUD equivalent" Head-Worn Displays (HWDs). The paper describes the EVO concept and representative NASA EFVS research that demonstrate the potential of these technologies to safely conduct operations in visibilities as low as 1000 feet Runway Visual Range (RVR). Future directions are described including efforts to enable low-visibility approach, landing, and roll-outs using EFVS under conditions as low as 300 feet RVR.

  16. Use of computational modeling combined with advanced visualization to develop strategies for the design of crop ideotypes to address food security

    DOE PAGES

    Christensen, A. J.; Srinivasan, V.; Hart, J. C.; ...

    2018-03-17

    Sustainable crop production is a contributing factor to current and future food security. Innovative technologies are needed to design strategies that will achieve higher crop yields on less land and with fewer resources. Computational modeling coupled with advanced scientific visualization enables researchers to explore and interact with complex agriculture, nutrition, and climate data to predict how crops will respond to untested environments. These virtual observations and predictions can direct the development of crop ideotypes designed to meet future yield and nutritional demands. This review surveys modeling strategies for the development of crop ideotypes and scientific visualization technologies that have led to discoveries in “big data” analysis. Combined modeling and visualization approaches have been used to realistically simulate crops and to guide selection that immediately enhances crop quantity and quality under challenging environmental conditions. Lastly, this survey of current and developing technologies indicates that integrative modeling and advanced scientific visualization may help overcome challenges in agriculture and nutrition data as large-scale and multidimensional data become available in these fields.

  17. Use of computational modeling combined with advanced visualization to develop strategies for the design of crop ideotypes to address food security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, A. J.; Srinivasan, V.; Hart, J. C.

    Sustainable crop production is a contributing factor to current and future food security. Innovative technologies are needed to design strategies that will achieve higher crop yields on less land and with fewer resources. Computational modeling coupled with advanced scientific visualization enables researchers to explore and interact with complex agriculture, nutrition, and climate data to predict how crops will respond to untested environments. These virtual observations and predictions can direct the development of crop ideotypes designed to meet future yield and nutritional demands. This review surveys modeling strategies for the development of crop ideotypes and scientific visualization technologies that have led to discoveries in “big data” analysis. Combined modeling and visualization approaches have been used to realistically simulate crops and to guide selection that immediately enhances crop quantity and quality under challenging environmental conditions. Lastly, this survey of current and developing technologies indicates that integrative modeling and advanced scientific visualization may help overcome challenges in agriculture and nutrition data as large-scale and multidimensional data become available in these fields.

  18. Use of computational modeling combined with advanced visualization to develop strategies for the design of crop ideotypes to address food security.

    PubMed

    Christensen, A J; Srinivasan, Venkatraman; Hart, John C; Marshall-Colon, Amy

    2018-05-01

    Sustainable crop production is a contributing factor to current and future food security. Innovative technologies are needed to design strategies that will achieve higher crop yields on less land and with fewer resources. Computational modeling coupled with advanced scientific visualization enables researchers to explore and interact with complex agriculture, nutrition, and climate data to predict how crops will respond to untested environments. These virtual observations and predictions can direct the development of crop ideotypes designed to meet future yield and nutritional demands. This review surveys modeling strategies for the development of crop ideotypes and scientific visualization technologies that have led to discoveries in "big data" analysis. Combined modeling and visualization approaches have been used to realistically simulate crops and to guide selection that immediately enhances crop quantity and quality under challenging environmental conditions. This survey of current and developing technologies indicates that integrative modeling and advanced scientific visualization may help overcome challenges in agriculture and nutrition data as large-scale and multidimensional data become available in these fields.

  19. A neural measure of precision in visual working memory.

    PubMed

    Ester, Edward F; Anderson, David E; Serences, John T; Awh, Edward

    2013-05-01

    Recent studies suggest that the temporary storage of visual detail in working memory is mediated by sensory recruitment or sustained patterns of stimulus-specific activation within feature-selective regions of visual cortex. According to a strong version of this hypothesis, the relative "quality" of these patterns should determine the clarity of an individual's memory. Here, we provide a direct test of this claim. We used fMRI and a forward encoding model to characterize population-level orientation-selective responses in visual cortex while human participants held an oriented grating in memory. This analysis, which enables a precise quantitative description of multivoxel, population-level activity measured during working memory storage, revealed graded response profiles whose amplitudes were greatest for the remembered orientation and fell monotonically as the angular distance from this orientation increased. Moreover, interparticipant differences in the dispersion-but not the amplitude-of these response profiles were strongly correlated with performance on a concurrent memory recall task. These findings provide important new evidence linking the precision of sustained population-level responses in visual cortex and memory acuity.
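
    The two-step logic of a forward encoding model can be sketched on synthetic data: first estimate channel-to-voxel weights from training trials by least squares, then invert the estimated weights on a held-out trial to recover a channel response profile. The basis set, noise model, and dimensions below are assumptions for illustration, not the authors' exact analysis.

```python
# Minimal sketch of a forward encoding model for orientation on synthetic data
# (the standard estimate-then-invert procedure; not the authors' exact analysis).
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_voxels, n_trials = 8, 50, 120
centers = np.arange(0, 180, 180 / n_channels)            # channel preferred orientations

def channel_responses(orientations_deg):
    """Idealized half-rectified sinusoidal tuning curves (basis set)."""
    d = np.deg2rad(orientations_deg[:, None] - centers[None, :])
    return np.maximum(0.0, np.cos(2 * d)) ** 5            # trials x channels

# Synthetic training data: random channel-to-voxel weights plus noise.
true_W = rng.normal(0, 1, (n_channels, n_voxels))
train_oris = rng.uniform(0, 180, n_trials)
C_train = channel_responses(train_oris)
B_train = C_train @ true_W + rng.normal(0, 0.5, (n_trials, n_voxels))

# Step 1: estimate the weights by least squares from the training trials.
W_hat, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)

# Step 2: invert the model on a held-out trial to recover a channel response profile.
test_ori = np.array([45.0])
B_test = channel_responses(test_ori) @ true_W + rng.normal(0, 0.5, (1, n_voxels))
C_hat, *_ = np.linalg.lstsq(W_hat.T, B_test.T, rcond=None)
print("recovered profile peaks at channel centered on", centers[int(np.argmax(C_hat))], "deg")
```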

  20. Use of computational modeling combined with advanced visualization to develop strategies for the design of crop ideotypes to address food security

    PubMed Central

    Christensen, A J; Srinivasan, Venkatraman; Hart, John C; Marshall-Colon, Amy

    2018-01-01

    Abstract Sustainable crop production is a contributing factor to current and future food security. Innovative technologies are needed to design strategies that will achieve higher crop yields on less land and with fewer resources. Computational modeling coupled with advanced scientific visualization enables researchers to explore and interact with complex agriculture, nutrition, and climate data to predict how crops will respond to untested environments. These virtual observations and predictions can direct the development of crop ideotypes designed to meet future yield and nutritional demands. This review surveys modeling strategies for the development of crop ideotypes and scientific visualization technologies that have led to discoveries in “big data” analysis. Combined modeling and visualization approaches have been used to realistically simulate crops and to guide selection that immediately enhances crop quantity and quality under challenging environmental conditions. This survey of current and developing technologies indicates that integrative modeling and advanced scientific visualization may help overcome challenges in agriculture and nutrition data as large-scale and multidimensional data become available in these fields. PMID:29562368

  1. An open-source data storage and visualization back end for experimental data.

    PubMed

    Nielsen, Kenneth; Andersen, Thomas; Jensen, Robert; Nielsen, Jane H; Chorkendorff, Ib

    2014-04-01

    In this article, a flexible free and open-source software system for data logging and presentation will be described. The system is highly modular and adaptable and can be used in any laboratory in which continuous and/or ad hoc measurements require centralized storage. A presentation component for the data back end has furthermore been written that enables live visualization of data on any device capable of displaying Web pages. The system consists of three parts: data-logging clients, a data server, and a data presentation Web site. The logging of data from independent clients leads to high resilience to equipment failure, whereas the central storage of data dramatically eases backup and data exchange. The visualization front end allows direct monitoring of acquired data to see live progress of long-duration experiments. This enables the user to alter experimental conditions based on these data and to interfere with the experiment if needed. The data stored consist both of specific measurements and of continuously logged system parameters. The latter is crucial to a variety of automation and surveillance features, and three cases of such features are described: monitoring system health, getting status of long-duration experiments, and implementation of instant alarms in the event of failure.
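
    A data-logging client in such a system essentially reads an instrument and pushes time-stamped values to the central server. The sketch below shows that pattern with Python's standard library only; the server URL, the JSON payload schema, and the gauge-reading stub are hypothetical, since the abstract does not specify the system's wire protocol.

```python
# Minimal sketch of a data-logging client pushing measurements to a central server
# (the endpoint URL, JSON schema, and gauge stub below are hypothetical).
import json
import time
import urllib.error
import urllib.request

SERVER_URL = "http://dataserver.local/api/log"      # hypothetical endpoint

def read_pressure_gauge():
    """Stand-in for a real instrument driver."""
    return 1.3e-6

def log_measurement(codename, value, timestamp=None):
    """POST one time-stamped value; `codename` identifies the logged quantity."""
    payload = {"codename": codename, "value": value, "time": timestamp or time.time()}
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status == 200

if __name__ == "__main__":
    for _ in range(3):                              # continuous loop in a real client
        try:
            log_measurement("chamber_pressure", read_pressure_gauge())
        except urllib.error.URLError as err:
            print("server unreachable, measurement dropped:", err)
        time.sleep(1)                               # e.g. 60 s in a real deployment
```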

  2. Imaging of all three coronary arteries by transthoracic echocardiography. an illustrated guide

    PubMed Central

    Krzanowski, Marek; Bodzoń, Wojciech; Dimitrow, Paweł Petkow

    2003-01-01

    Background Improvements in ultrasound technology have enabled direct, transthoracic visualization of long portions of coronary arteries: the left anterior descending (LAD), circumflex (Cx) and right coronary artery (RCA). Transthoracic measurements of coronary flow velocity were proved to be highly reproducible and correlated with invasive measurements. While clinical applications of transthoracic echocardiography (TTE) of principal coronary arteries are still very limited, they will likely grow. Echocardiographers may therefore be interested to know the ultrasonic views and technique of examination, and to be aware of where to look for coronary arteries and how to optimize the images. Methods A step-by-step approach to direct, transthoracic visualization of the LAD, Cx and RCA is presented. The technique of examination is discussed, and correlations with basic coronary angiography views and heart anatomy are shown and extensively illustrated with photographs and movie-pictures. Hints concerning optimization of ultrasound images are presented and artifacts of imaging are discussed. Conclusions Direct, transthoracic examination of the LAD, Cx and RCA in adults is possible and may become a useful adjunct to other methods of coronary artery examination, but studies are needed to establish its role. PMID:14622441

  3. Use of an augmented-vision device for visual search by patients with tunnel vision

    PubMed Central

    Luo, Gang; Peli, Eli

    2006-01-01

    Purpose To study the effect of an augmented-vision device that superimposes minified contour images over natural vision on visual search performance of patients with tunnel vision. Methods Twelve subjects with tunnel vision searched for targets presented outside their visual fields (VF) on a blank background under three cue conditions (with contour cues provided by the device, with auditory cues, and without cues). Three subjects (VF: 8º to 11º wide) carried out the search over a 90º×74º area, and nine subjects (VF: 7º to 16º wide) over a 66º×52º area. Eye and head movements were recorded for performance analyses that included directness of search path, search time, and gaze speed. Results Directness of the search path was greatly and significantly improved when the contour or auditory cues were provided in both the larger and smaller area search. When using the device, a significant reduction in search time (28%~74%) was demonstrated by all 3 subjects in the larger area search and by subjects with VF wider than 10º in the smaller area search (average 22%). Directness and the gaze speed accounted for 90% of the variability of search time. Conclusions While performance improvement with the device for the larger search area was obvious, whether it was helpful for the smaller search area depended on VF and gaze speed. As improvement in directness was demonstrated, increased gaze speed, which could result from further training and adaptation to the device, might enable patients with small VFs to benefit from the device for visual search tasks. PMID:16936136

  4. Microfluidic Model Porous Media: Fabrication and Applications.

    PubMed

    Anbari, Alimohammad; Chien, Hung-Ta; Datta, Sujit S; Deng, Wen; Weitz, David A; Fan, Jing

    2018-05-01

    Complex fluid flow in porous media is ubiquitous in many natural and industrial processes. Direct visualization of the fluid structure and flow dynamics is critical for understanding and eventually manipulating these processes. However, the opacity of realistic porous media makes such visualization very challenging. Micromodels, microfluidic model porous media systems, have been developed to address this challenge. They provide a transparent interconnected porous network that enables the optical visualization of the complex fluid flow occurring inside at the pore scale. In this Review, the materials and fabrication methods to make micromodels, the main research activities that are conducted with micromodels and their applications in petroleum, geologic, and environmental engineering, as well as in the food and wood industries, are discussed. The potential applications of micromodels in other areas are also discussed and the key issues that should be addressed in the near future are proposed. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. An indoor navigation system for the visually impaired.

    PubMed

    Guerrero, Luis A; Vasquez, Francisco; Ochoa, Sergio F

    2012-01-01

    Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have proven useful in real scenarios, they involve a significant deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed with usability as the quality requirement to be maximized. The solution identifies a person's position and calculates the velocity and direction of his or her movements. Using this information, the system determines the user's trajectory, locates possible obstacles in that route, and offers navigation information to the user. The solution has been evaluated in two experimental scenarios. Although the results are not yet sufficient to draw strong conclusions, they indicate that the system is suitable for guiding visually impaired people through an unknown built environment.
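
    Estimating velocity and direction of movement from successive position fixes reduces to finite differences, as in the small sketch below (illustrative only; the paper's positioning method is not described in the abstract).

```python
# Minimal sketch: speed and heading from two successive position fixes
# (finite differences; illustrative only).
import math

def speed_and_heading(p_prev, p_curr, dt):
    """Return (speed in m/s, heading in degrees, 0 = +x axis, counter-clockwise)."""
    dx, dy = p_curr[0] - p_prev[0], p_curr[1] - p_prev[1]
    speed = math.hypot(dx, dy) / dt
    heading = math.degrees(math.atan2(dy, dx))
    return speed, heading

# Example: two position fixes one second apart.
print(speed_and_heading((0.0, 0.0), (0.7, 0.7), dt=1.0))   # ~0.99 m/s at 45 degrees
```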

  6. Dichoptic training enables the adult amblyopic brain to learn.

    PubMed

    Li, Jinrong; Thompson, Benjamin; Deng, Daming; Chan, Lily Y L; Yu, Minbin; Hess, Robert F

    2013-04-22

    Adults with amblyopia, a common visual cortex disorder caused primarily by binocular disruption during an early critical period, do not respond to conventional therapy involving occlusion of one eye. But it is now clear that the adult human visual cortex has a significant degree of plasticity, suggesting that something must be actively preventing the adult brain from learning to see through the amblyopic eye. One possibility is an inhibitory signal from the contralateral eye that suppresses cortical inputs from the amblyopic eye. Such a gating mechanism could explain the apparent lack of plasticity within the adult amblyopic visual cortex. Here we provide direct evidence that alleviating suppression of the amblyopic eye through dichoptic stimulus presentation induces greater levels of plasticity than forced use of the amblyopic eye alone. This indicates that suppression is a key gating mechanism that prevents the amblyopic brain from learning to see. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Direct visualization of in vitro drug mobilization from Lescol XL tablets using two-dimensional (19)F and (1)H magnetic resonance imaging.

    PubMed

    Chen, Chen; Gladden, Lynn F; Mantle, Michael D

    2014-02-03

    This article reports the application of in vitro multinuclear ((19)F and (1)H) two-dimensional magnetic resonance imaging (MRI) to study both dissolution media ingress and drug egress from a commercial Lescol XL extended release tablet in a United States Pharmacopeia Type IV (USP-IV) dissolution cell under pharmacopoeial conditions. Noninvasive spatial maps of tablet swelling and dissolution, as well as the mobilization and distribution of the drug are quantified and visualized. Two-dimensional active pharmaceutical ingredient (API) mobilization and distribution maps were obtained via (19)F MRI. (19)F API maps were coregistered with (1)H T2-relaxation time maps enabling the simultaneous visualization of drug distribution and gel layer dynamics within the swollen tablet. The behavior of the MRI data is also discussed in terms of its relationship to the UV drug release behavior.

  8. A non-invasive method for studying an index of pupil diameter and visual performance in the rhesus monkey.

    PubMed

    Fairhall, Sarah J; Dickson, Carol A; Scott, Leah; Pearce, Peter C

    2006-04-01

    A non-invasive model has been developed to estimate gaze direction and relative pupil diameter, in minimally restrained rhesus monkeys, to investigate the effects of low doses of ocularly administered cholinergic compounds on visual performance. Animals were trained to co-operate with a novel device, which enabled eye movements to be recorded using modified human eye-tracking equipment, and to perform a task which determined visual threshold contrast. Responses were made by gaze transfer under twilight conditions. Pilocarpine nitrate (4% w/v) was studied to demonstrate the suitability of the model. Pilocarpine induced marked miosis for >3 h, which was accompanied by a decrement in task performance. The method obviates the need for invasive surgery and, as the position of the point of gaze can be approximately defined, the approach may have utility in other areas of research involving non-human primates.

  9. Nested Tracking Graphs

    DOE PAGES

    Lukasczyk, Jonas; Weber, Gunther; Maciejewski, Ross; ...

    2017-06-01

    Tracking graphs are a well established tool in topological analysis to visualize the evolution of components and their properties over time, i.e., when components appear, disappear, merge, and split. However, tracking graphs are limited to a single level threshold and the graphs may vary substantially even under small changes to the threshold. To examine the evolution of features for varying levels, users have to compare multiple tracking graphs without a direct visual link between them. We propose a novel, interactive, nested graph visualization based on the fact that the tracked superlevel set components for different levels are related to each other through their nesting hierarchy. This approach allows us to set multiple tracking graphs in context to each other and enables users to effectively follow the evolution of components for different levels simultaneously. We show the effectiveness of our approach on datasets from finite pointset methods, computational fluid dynamics, and cosmology simulations.
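
    At a single threshold, the components of a tracking graph can be linked between consecutive time steps by spatial overlap, as in the sketch below; the overlap criterion and the synthetic fields are assumptions (scipy assumed), and the nested multi-threshold construction of the paper is not reproduced.

```python
# Minimal sketch: linking superlevel-set components between two time steps by
# spatial overlap at a single threshold (scipy assumed; not the nested method).
import numpy as np
from scipy.ndimage import label

def track_components(field_t0, field_t1, threshold):
    lab0, n0 = label(field_t0 > threshold)
    lab1, _ = label(field_t1 > threshold)
    edges = set()
    for c0 in range(1, n0 + 1):
        overlap = lab1[lab0 == c0]
        for c1 in np.unique(overlap[overlap > 0]):
            edges.add((c0, int(c1)))     # component c0 at t0 continues as c1 at t1
    return edges                         # merges/splits show up as shared endpoints

rng = np.random.default_rng(0)
f0 = rng.random((100, 100))
f1 = 0.7 * f0 + 0.3 * rng.random((100, 100))   # correlated next time step
print(sorted(track_components(f0, f1, threshold=0.8)))
```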

  10. Virtual and augmented medical imaging environments: enabling technology for minimally invasive cardiac interventional guidance.

    PubMed

    Linte, Cristian A; White, James; Eagleson, Roy; Guiraudon, Gérard M; Peters, Terry M

    2010-01-01

    Virtual and augmented reality environments have been adopted in medicine as a means to enhance the clinician's view of the anatomy and facilitate the performance of minimally invasive procedures. Their value is truly appreciated during interventions where the surgeon cannot directly visualize the targets to be treated, such as during cardiac procedures performed on the beating heart. These environments must accurately represent the real surgical field and require seamless integration of pre- and intra-operative imaging, surgical tracking, and visualization technology in a common framework centered around the patient. This review begins with an overview of minimally invasive cardiac interventions, describes the architecture of a typical surgical guidance platform including imaging, tracking, registration and visualization, highlights both clinical and engineering accuracy limitations in cardiac image guidance, and discusses the translation of the work from the laboratory into the operating room together with typically encountered challenges.

  11. Fluorescence imaging of chromosomal DNA using click chemistry

    NASA Astrophysics Data System (ADS)

    Ishizuka, Takumi; Liu, Hong Shan; Ito, Kenichiro; Xu, Yan

    2016-09-01

    Chromosome visualization is essential for chromosome analysis and genetic diagnostics. Here, we developed a click chemistry approach for multicolor imaging of chromosomal DNA instead of the traditional dye method. We first demonstrated that the commercially available reagents allow for the multicolor staining of chromosomes. We then prepared two pro-fluorophore moieties that served as light-up reporters to stain chromosomal DNA based on click reaction and visualized the clear chromosomes in multicolor. We applied this strategy in fluorescence in situ hybridization (FISH) and identified, with high sensitivity and specificity, telomere DNA at the end of the chromosome. We further extended this approach to observe several basic stages of cell division. We found that the click reaction enables direct visualization of the chromosome behavior in cell division. These results suggest that the technique can be broadly used for imaging chromosomes and may serve as a new approach for chromosome analysis and genetic diagnostics.

  12. A Visual Editor in Java for View

    NASA Technical Reports Server (NTRS)

    Stansifer, Ryan

    2000-01-01

    In this project we continued the development of a visual editor in the Java programming language to create screens on which to display real-time data. The data comes from the numerous systems monitoring the operation of the space shuttle while on the ground and in space, and from the many tests of subsystems. The data can be displayed on any computer platform running a Java-enabled World Wide Web (WWW) browser and connected to the Internet. Previously a special-purpose program had been written to display data on emulations of character-based display screens used for many years at NASA. The goal now is to display bit-mapped screens created by a visual editor. We report here on the visual editor that creates the display screens. This project continues the work we had done previously. Previously we had followed the design of the 'beanbox,' a prototype visual editor created by Sun Microsystems. We abandoned this approach and implemented a prototype using a more direct approach. In addition, our prototype is based on newly released Java 2 graphical user interface (GUI) libraries. The result has been a visually more appealing appearance and a more robust application.

  13. Direct multiplex imaging and optogenetics of Rho GTPases enabled by near-infrared FRET.

    PubMed

    Shcherbakova, Daria M; Cox Cammer, Natasha; Huisman, Tsipora M; Verkhusha, Vladislav V; Hodgson, Louis

    2018-06-01

    Direct visualization and light control of several cellular processes is a challenge, owing to the spectral overlap of available genetically encoded probes. Here we report the most red-shifted monomeric near-infrared (NIR) fluorescent protein, miRFP720, and the fully NIR Förster resonance energy transfer (FRET) pair miRFP670-miRFP720, which together enabled design of biosensors compatible with CFP-YFP imaging and blue-green optogenetic tools. We developed a NIR biosensor for Rac1 GTPase and demonstrated its use in multiplexed imaging and light control of Rho GTPase signaling pathways. Specifically, we combined the Rac1 biosensor with CFP-YFP FRET biosensors for RhoA and for Rac1-GDI binding, and concurrently used the LOV-TRAP tool for upstream Rac1 activation. We directly observed and quantified antagonism between RhoA and Rac1 dependent on the RhoA-downstream effector ROCK; showed that Rac1 activity and GDI binding closely depend on the spatiotemporal coordination between these two molecules; and simultaneously observed Rac1 activity during optogenetic manipulation of Rac1.

  14. Mars Exploration Rover Operations with the Science Activity Planner

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Powell, Mark W.; Vona, Marsette A.; Backes, Paul G.; Wick, Justin V.

    2005-01-01

    The Science Activity Planner (SAP) is the primary science operations tool for the Mars Exploration Rover mission and NASA's Software of the Year for 2004. SAP utilizes a variety of visualization and planning capabilities to enable the mission operations team to direct the activities of the Spirit and Opportunity rovers. This paper outlines some of the challenging requirements that drove the design of SAP and discusses lessons learned from the development and use of SAP in mission operations.

  15. AAS WorldWide Telescope: A Seamless, Cross-platform Data Visualization Engine for Astronomy Research, Education, and Democratizing Data

    NASA Astrophysics Data System (ADS)

    Rosenfield, Philip; Fay, Jonathan; Gilchrist, Ronald K.; Cui, Chenzhou; Weigel, A. David; Robitaille, Thomas; Otor, Oderah Justin; Goodman, Alyssa

    2018-05-01

    The American Astronomical Society’s WorldWide Telescope (WWT) project enables terabytes of astronomical images, data, and stories to be viewed and shared among researchers, exhibited in science museums, projected into full-dome immersive planetariums and virtual reality headsets, and taught in classrooms, from middle school to college. We review the WWT ecosystem, how WWT has been used in the astronomical community, and comment on future directions.

  16. Fast interactive exploration of 4D MRI flow data

    NASA Astrophysics Data System (ADS)

    Hennemuth, A.; Friman, O.; Schumann, C.; Bock, J.; Drexl, J.; Huellebrand, M.; Markl, M.; Peitgen, H.-O.

    2011-03-01

    1- or 2-directional MRI blood flow mapping sequences are an integral part of standard MR protocols for diagnosis and therapy control in heart diseases. Recent progress in rapid MRI has made it possible to acquire volumetric, 3-directional cine images in reasonable scan time. In addition to flow and velocity measurements relative to arbitrarily oriented image planes, the analysis of 3-dimensional trajectories enables the visualization of flow patterns, local features of flow trajectories or possible paths into specific regions. The anatomical and functional information allows for advanced hemodynamic analysis in different application areas like stroke risk assessment, congenital and acquired heart disease, aneurysms or abdominal collaterals and cranial blood flow. The complexity of the 4D MRI flow datasets and the flow related image analysis tasks makes the development of fast comprehensive data exploration software for advanced flow analysis a challenging task. Most existing tools address only individual aspects of the analysis pipeline such as pre-processing, quantification or visualization, or are difficult to use for clinicians. The goal of the presented work is to provide a software solution that supports the whole image analysis pipeline and enables data exploration with fast intuitive interaction and visualization methods. The implemented methods facilitate the segmentation and inspection of different vascular systems. Arbitrary 2- or 3-dimensional regions for quantitative analysis and particle tracing can be defined interactively. Synchronized views of animated 3D path lines, 2D velocity or flow overlays and flow curves offer a detailed insight into local hemodynamics. The application of the analysis pipeline is shown for 6 cases from clinical practice, illustrating the usefulness for different clinical questions. Initial user tests show that the software is intuitive to learn and even inexperienced users achieve good results within reasonable processing times.
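
    As a rough illustration of the particle-tracing step described above, the sketch below integrates a path line through a time-resolved 3D velocity field with a fixed-step RK4 scheme. It is a minimal toy version under assumed conventions (array layout, grid variables, step size), not the authors' software.

    ```python
    # Minimal path-line tracing sketch for a time-resolved 3D velocity field.
    # Assumes `vel` is a NumPy array of shape (nt, nz, ny, nx, 3) sampled on the
    # regular grids t, z, y, x (hypothetical inputs, not the paper's data format).
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def trace_pathline(vel, t, z, y, x, seed, t0, t1, dt):
        interp = RegularGridInterpolator((t, z, y, x), vel,
                                         bounds_error=False, fill_value=0.0)

        def v(time, pos):
            q = np.concatenate(([time], pos))[None]    # one (t, z, y, x) query point
            return interp(q)[0]                        # interpolated velocity vector

        pos = np.asarray(seed, dtype=float)
        time, path = t0, [pos]
        while time < t1:
            k1 = v(time, pos)                          # classic RK4 step
            k2 = v(time + dt / 2, pos + dt / 2 * k1)
            k3 = v(time + dt / 2, pos + dt / 2 * k2)
            k4 = v(time + dt, pos + dt * k3)
            pos = pos + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            time += dt
            path.append(pos)
        return np.array(path)                          # sequence of (z, y, x) positions
    ```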

  17. Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.

    PubMed

    André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2011-01-01

    Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground truth. When the available ground truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study we first present a tool to generate perceived-similarity ground truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval against the generated ground truth. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived-similarity ground truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.
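
    The margin-based metric learning idea can be illustrated with a short sketch: a nonnegative weight vector over visual-word dimensions is adjusted so that perceived-similar pairs fall below a margin and dissimilar pairs above it. This is a hedged toy version of the idea, not the authors' implementation; `sig`, `pairs`, the margin, and the learning rate are hypothetical.

    ```python
    # Toy margin-based weighting of bag-of-visual-words signatures.
    # sig: (n_videos, n_words) array of signatures; pairs: (i, j, is_similar) labels
    # derived from perceived-similarity ratings (all hypothetical inputs).
    import numpy as np

    def weighted_dist(w, a, b):
        return np.sqrt(np.sum(w * (a - b) ** 2))        # diagonally weighted distance

    def learn_weights(sig, pairs, margin=1.0, lr=0.01, epochs=50):
        w = np.ones(sig.shape[1])
        for _ in range(epochs):
            for i, j, is_similar in pairs:
                d2 = np.sum(w * (sig[i] - sig[j]) ** 2)
                grad = (sig[i] - sig[j]) ** 2           # gradient of d2 w.r.t. w
                # hinge-style update: pull similar pairs under the margin,
                # push dissimilar pairs beyond it
                if is_similar and d2 > margin:
                    w -= lr * grad
                elif not is_similar and d2 < margin:
                    w += lr * grad
                w = np.maximum(w, 0.0)                  # keep the metric valid (nonnegative)
        return w
    ```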

  18. A Brain-Computer Interface (BCI) system to use arbitrary Windows applications by directly controlling mouse and keyboard.

    PubMed

    Spuler, Martin

    2015-08-01

    A Brain-Computer Interface (BCI) allows a user to control a computer by brain activity alone, without the need for muscle control. In this paper, we present an EEG-based BCI system based on code-modulated visual evoked potentials (c-VEPs) that enables the user to work with arbitrary Windows applications. Other BCI systems, like the P300 speller or BCI-based browsers, allow control of one dedicated application designed for use with a BCI. In contrast, the system presented in this paper does not consist of one dedicated application, but enables the user to control mouse cursor and keyboard input at the level of the operating system, thereby making it possible to use arbitrary applications. As the c-VEP BCI method was shown to enable very fast communication speeds (writing more than 20 error-free characters per minute), the presented system is the next step in replacing the traditional mouse and keyboard and enabling complete brain-based control of a computer.

  19. High brightness x ray source for directed energy and holographic imaging applications, phase 2

    NASA Astrophysics Data System (ADS)

    McPherson, Armon; Rhodes, Charles K.

    1992-03-01

    Advances in x-ray imaging technology and x-ray sources are such that a new technology can be brought to commercialization enabling the three-dimensional (3-D) microvisualization of hydrated biological specimens. The Company is engaged in a program whose main goal is the development of a new technology for direct three dimensional (3-D) x-ray holographic imaging. It is believed that this technology will have a wide range of important applications in the defense, medical, and scientific sectors. For example, in the medical area, it is expected that biomedical science will constitute a very active and substantial market, because the application of physical technologies for the direct visualization of biological entities has had a long and extremely fruitful history.

  20. Recent advances in near-infrared fluorescence-guided imaging surgery using indocyanine green.

    PubMed

    Namikawa, Tsutomu; Sato, Takayuki; Hanazaki, Kazuhiro

    2015-12-01

    Near-infrared (NIR) fluorescence imaging has better tissue penetration than visible-light imaging, allowing for the effective rejection of excitation light and detection deep inside organs. Indocyanine green (ICG) generates NIR fluorescence after illumination by an NIR ray, enabling real-time intraoperative visualization of superficial lymphatic channels and vessels transcutaneously. The HyperEye Medical System (HEMS) can simultaneously detect NIR rays under room light to provide color imaging, which enables visualization under bright light. Thus, NIR fluorescence imaging using ICG can provide excellent diagnostic accuracy in detecting sentinel lymph nodes in cancer and microvascular circulation in various ischemic diseases, assisting intraoperative decision making. Including HEMS in this system could further improve sentinel lymph node mapping and intraoperative identification of the blood supply in reconstructive organs and ischemic diseases, making it more attractive than conventional imaging. Moreover, the development of new laparoscopic imaging systems equipped with NIR will allow fluorescence-guided surgery in a minimally invasive setting. Future directions, including the conjugation of NIR fluorophores to target specific cancer markers, may make this a realistic technology with diagnostic and therapeutic benefits.

  1. Three-dimensional Super Resolution Microscopy of F-actin Filaments by Interferometric PhotoActivated Localization Microscopy (iPALM).

    PubMed

    Wang, Yilin; Kanchanawong, Pakorn

    2016-12-01

    Fluorescence microscopy enables direct visualization of specific biomolecules within cells. However, for conventional fluorescence microscopy, the spatial resolution is restricted by diffraction to ~ 200 nm within the image plane and > 500 nm along the optical axis. As a result, fluorescence microscopy has long been severely limited in the observation of ultrastructural features within cells. The recent development of super resolution microscopy methods has overcome this limitation. In particular, the advent of photoswitchable fluorophores enables localization-based super resolution microscopy, which provides resolving power approaching the molecular-length scale. Here, we describe the application of a three-dimensional super resolution microscopy method based on single-molecule localization microscopy and multiphase interferometry, called interferometric PhotoActivated Localization Microscopy (iPALM). This method provides nearly isotropic resolution on the order of 20 nm in all three dimensions. Protocols for visualizing the filamentous actin cytoskeleton, including specimen preparation and operation of the iPALM instrument, are described here. These protocols are also readily adaptable and instructive for the study of other ultrastructural features in cells.

  2. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging.

    PubMed

    Tremsin, Anton S; Perrodin, Didier; Losko, Adrian S; Vogel, Sven C; Bourke, Mark A M; Bizarri, Gregory A; Bourret, Edith D

    2017-04-20

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  3. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    NASA Astrophysics Data System (ADS)

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; Vogel, Sven C.; Bourke, Mark A. M.; Bizarri, Gregory A.; Bourret, Edith D.

    2017-04-01

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of “blind” processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  4. Effectiveness of basic display augmentation in vehicular control by visual field cues

    NASA Technical Reports Server (NTRS)

    Grunwald, A. J.; Merhav, S. J.

    1978-01-01

    The paper investigates the effectiveness of different basic display augmentation concepts - fixed reticle, velocity vector, and predicted future vehicle path - for RPVs controlled by a vehicle-mounted TV camera. The task is lateral manual control of a low flying RPV along a straight reference line in the presence of random side gusts. The man-machine system and the visual interface are modeled as a linear time-invariant system. Minimization of a quadratic performance criterion is assumed to underlie the control strategy of a well-trained human operator. The solution for the optimal feedback matrix enables the explicit computation of the variances of lateral deviation and directional error of the vehicle and of the control force that are used as performance measures.
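
    A generic sketch of the optimal-control machinery the abstract refers to (the paper's specific state variables and weighting matrices are not reproduced here): for a linear time-invariant model $\dot{x} = Ax + Bu + w$ driven by white noise $w$ of intensity $W$, the well-trained operator is modeled as minimizing the quadratic criterion

    $$ J = \lim_{T \to \infty} \frac{1}{T}\, E\!\left[ \int_0^T \big( x^\top Q\, x + u^\top R\, u \big)\, dt \right], $$

    whose solution is the linear feedback law $u = -Kx$ with $K = R^{-1} B^\top P$, where $P$ solves the algebraic Riccati equation $A^\top P + P A - P B R^{-1} B^\top P + Q = 0$. The steady-state covariance $X$ of the closed-loop state, from which variances such as those of lateral deviation, directional error, and control force are read off as performance measures, satisfies the Lyapunov equation $(A - BK)\, X + X\, (A - BK)^\top + W = 0$.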

  5. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    PubMed Central

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; Vogel, Sven C.; Bourke, Mark A.M.; Bizarri, Gregory A.; Bourret, Edith D.

    2017-01-01

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of “blind” processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes. PMID:28425461

  6. Uncluttered Single-Image Visualization of Vascular Structures using GPU and Integer Programming

    PubMed Central

    Won, Joong-Ho; Jeon, Yongkweon; Rosenberg, Jarrett; Yoon, Sungroh; Rubin, Geoffrey D.; Napel, Sandy

    2013-01-01

    Direct projection of three-dimensional branching structures, such as networks of cables, blood vessels, or neurons, onto a 2D image creates the illusion of intersecting structural parts and complicates understanding and communication. We present a method for visualizing such structures, and demonstrate its utility in visualizing the abdominal aorta and its branches, whose tomographic images might be obtained by computed tomography or magnetic resonance angiography, in a single two-dimensional stylistic image, without overlaps among branches. The visualization method, termed uncluttered single-image visualization (USIV), involves optimization of geometry. This paper proposes a novel optimization technique that exploits a connection between the USIV optimization problem and the protein structure prediction problem. Adopting the integer linear programming-based formulation for the protein structure prediction problem, we tested the proposed technique using 30 visualizations produced from five patient scans with representative anatomical variants in the abdominal aortic vessel tree. The novel technique can exploit commodity-level parallelism, enabling use of general-purpose graphics processing unit (GPGPU) technology that yields a significant speedup. Comparison of the results with the other optimization technique previously reported elsewhere suggests that, in most aspects, the quality of the visualization is comparable to that of the previous one, with a significant gain in the computation time of the algorithm. PMID:22291148
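
    A toy analogue can convey the flavor of an integer-linear-programming formulation for overlap avoidance. This simplified lane-assignment model is for illustration only and is not the paper's protein-structure-prediction-based formulation; it assumes the PuLP package and hypothetical `preferred`/`conflicts` inputs.

    ```python
    # Toy ILP: place each branch in one of n_lanes drawing lanes so that branches
    # whose 2D projections conflict never share a lane, while staying close to a
    # preferred lane. Requires the PuLP package (pip install pulp).
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    def assign_lanes(preferred, conflicts, n_lanes):
        """preferred: {branch: preferred lane}; conflicts: pairs of overlapping branches."""
        prob = LpProblem("uncluttered_layout", LpMinimize)
        x = {(b, k): LpVariable(f"x_{b}_{k}", cat=LpBinary)
             for b in preferred for k in range(n_lanes)}
        # objective: total deviation from the preferred lanes
        prob += lpSum(abs(k - preferred[b]) * x[b, k]
                      for b in preferred for k in range(n_lanes))
        # each branch is drawn in exactly one lane
        for b in preferred:
            prob += lpSum(x[b, k] for k in range(n_lanes)) == 1
        # conflicting branches must not share a lane
        for b1, b2 in conflicts:
            for k in range(n_lanes):
                prob += x[b1, k] + x[b2, k] <= 1
        prob.solve()
        return {b: k for b in preferred for k in range(n_lanes) if x[b, k].value() > 0.5}
    ```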

  7. Scalable Visual Analytics of Massive Textual Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Manoj Kumar; Bohn, Shawn J.; Cowley, Wendy E.

    2007-04-01

    This paper describes the first scalable implementation of the text processing engine used in Visual Analytics tools. These tools aid information analysts in interacting with and understanding large textual information content through visual interfaces. By developing a parallel implementation of the text processing engine, we enabled visual analytics tools to exploit cluster architectures and handle massive datasets. The paper describes key elements of our parallelization approach and demonstrates virtually linear scaling when processing multi-gigabyte data sets such as PubMed. This approach enables interactive analysis of large datasets beyond the capabilities of existing state-of-the-art visual analytics tools.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohenberger, Erik; Freitag, Nathan; Rosenmann, Daniel

    Here, we present a facile method for fabricating nanostructured silver films containing a high density of nanoscopic gap features through a surface-directed phenomenon utilizing nanoporous scaffolds rather than through traditional lithographic patterning processes. This method enables tunability of the silver film growth by simply adjusting the formulation and processing conditions of the nanoporous film prior to metallization. We further demonstrate that this process can produce nanoscopic gaps in thick (100 nm) silver films supporting localized surface plasmon resonance with large field amplification within the gaps while enabling the launching of propagating surface plasmons within the silver grains. These enhanced fields provide metal-enhanced fluorescence with enhancement factors as high as 21 times compared to glass, and enable visualization of single-fluorophore emission. This work provides a low-cost, rapid approach for producing novel nanostructures capable of broadband fluorescence amplification, with potential applications in plasmonic and fluorescence-based optical sensing and imaging.

  9. Magnetic resonance imaging-a diagnostic tool for postoperative evaluation of dental implants: a case report.

    PubMed

    Wanner, Laura; Ludwig, Ute; Hövener, Jan-Bernd; Nelson, Katja; Flügge, Tabea

    2018-04-01

    Compared with cone beam computed tomography (CBCT), magnetic resonance imaging (MRI) might be superior for the diagnosis of nerve lesions associated with implant placement. A patient presented with unilateral pain associated with dysesthesia in the region of the right lower lip and chin after implant placement. Conventional orthopantomography could not identify an association between the position of the inferior alveolar nerve and the implant. For 3-dimensional display of the implant in relation to the surrounding anatomy, CBCT was compared with MRI. MRI enabled the precise depiction of the implant position and its spatial relation to the inferior alveolar nerve, whereas the nerve position and its exact course within the mandible could not be directly displayed in CBCT. MRI may be a valuable, radiation-free diagnostic tool for the visualization of intraoral hard and soft tissues, offering an objective assessment of nerve injuries by a direct visualization of the inferior alveolar neurovascular bundle. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Asymmetry hidden in birds’ tracks reveals wind, heading, and orientation ability over the ocean

    PubMed Central

    Goto, Yusuke; Yoda, Ken; Sato, Katsufumi

    2017-01-01

    Numerous flying and swimming animals constantly need to control their heading (that is, their direction of orientation) in a flow to reach their distant destination. However, animal orientation in a flow has yet to be satisfactorily explained because it is difficult to directly measure animal heading and flow. We constructed a new animal movement model based on the asymmetric distribution of the GPS (Global Positioning System) track vector along its mean vector, which might be caused by wind flow. This statistical model enabled us to simultaneously estimate animal heading (navigational decision-making) and ocean wind information over the range traversed by free-ranging birds. We applied this method to the tracking data of homing seabirds. The wind flow estimated by the model was consistent with the spatiotemporally coarse wind information provided by an atmospheric simulation model. The estimated heading information revealed that homing seabirds could head in a direction different from that leading to the colony to offset wind effects and to enable them to eventually move in the direction they intended to take, even though they are over the open sea where visual cues are unavailable. Our results highlight the utility of combining large data sets of animal movements with the “inverse problem approach,” enabling unobservable causal factors to be estimated from the observed output data. This approach potentially initiates a new era of analyzing animal decision-making in the field. PMID:28959724
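
    A much-simplified sketch of the underlying idea: estimate a constant wind vector and per-fix headings from GPS ground velocities under a constant-airspeed assumption. This is an illustrative toy, not the authors' statistical model based on the asymmetric track-vector distribution; the function and variable names are hypothetical.

    ```python
    # Toy wind/heading estimation from GPS track (ground) velocities, assuming the
    # bird holds a roughly constant airspeed over the analyzed segment.
    import numpy as np
    from scipy.optimize import minimize

    def wind_cost(w, track_vel):
        air = track_vel - w                      # air velocity = ground velocity - wind
        speeds = np.linalg.norm(air, axis=1)
        return np.var(speeds)                    # constant-airspeed assumption: minimize spread

    def estimate_wind_and_headings(track_vel):
        """track_vel: (N, 2) array of east/north ground-velocity components."""
        res = minimize(wind_cost, x0=np.zeros(2), args=(track_vel,))
        wind = res.x
        air = track_vel - wind
        headings = np.arctan2(air[:, 1], air[:, 0])   # heading = direction of air velocity
        return wind, headings
    ```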

  11. On three-dimensional misorientation spaces.

    PubMed

    Krakow, Robert; Bennett, Robbie J; Johnstone, Duncan N; Vukmanovic, Zoja; Solano-Alvarez, Wilberth; Lainé, Steven J; Einsle, Joshua F; Midgley, Paul A; Rae, Catherine M F; Hielscher, Ralf

    2017-10-01

    Determining the local orientation of crystals in engineering and geological materials has become routine with the advent of modern crystallographic mapping techniques. These techniques enable many thousands of orientation measurements to be made, directing attention towards how such orientation data are best studied. Here, we provide a guide to the visualization of misorientation data in three-dimensional vector spaces, reduced by crystal symmetry, to reveal crystallographic orientation relationships. Domains for all point group symmetries are presented and an analysis methodology is developed and applied to identify crystallographic relationships, indicated by clusters in the misorientation space, in examples from materials science and geology. This analysis aids the determination of active deformation mechanisms and evaluation of cluster centres and spread enables more accurate description of transformation processes supporting arguments regarding provenance.
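
    The basic symmetry-reduced misorientation computation underlying such analyses can be sketched as follows (illustrative only; the paper works with full three-dimensional misorientation spaces, and the cubic point group is just one example).

    ```python
    # Smallest misorientation (disorientation) angle between two cubic-crystal
    # orientations, using SciPy's rotation utilities; 'O' is the 24-element
    # chiral octahedral (cubic) rotation group.
    import numpy as np
    from scipy.spatial.transform import Rotation

    CUBIC = Rotation.create_group("O")

    def disorientation_angle_deg(g1, g2):
        """g1, g2: scipy Rotation objects, e.g. built from EBSD Euler angles
        via Rotation.from_euler('ZXZ', angles, degrees=True)."""
        m = g1.inv() * g2                  # raw misorientation between the two grains
        candidates = CUBIC * m             # one-sided symmetry suffices for the angle
        return np.degrees(candidates.magnitude().min())
    ```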

  12. On three-dimensional misorientation spaces

    NASA Astrophysics Data System (ADS)

    Krakow, Robert; Bennett, Robbie J.; Johnstone, Duncan N.; Vukmanovic, Zoja; Solano-Alvarez, Wilberth; Lainé, Steven J.; Einsle, Joshua F.; Midgley, Paul A.; Rae, Catherine M. F.; Hielscher, Ralf

    2017-10-01

    Determining the local orientation of crystals in engineering and geological materials has become routine with the advent of modern crystallographic mapping techniques. These techniques enable many thousands of orientation measurements to be made, directing attention towards how such orientation data are best studied. Here, we provide a guide to the visualization of misorientation data in three-dimensional vector spaces, reduced by crystal symmetry, to reveal crystallographic orientation relationships. Domains for all point group symmetries are presented and an analysis methodology is developed and applied to identify crystallographic relationships, indicated by clusters in the misorientation space, in examples from materials science and geology. This analysis aids the determination of active deformation mechanisms and evaluation of cluster centres and spread enables more accurate description of transformation processes supporting arguments regarding provenance.

  13. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    NASA Astrophysics Data System (ADS)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control over an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing and human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer to deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities helping the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization, quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, the source code must have the following characteristics to enable collaborative work: open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use in the case of 3-D astronomical data.

  14. Integration of today's digital state with tomorrow's visual environment

    NASA Astrophysics Data System (ADS)

    Fritsche, Dennis R.; Liu, Victor; Markandey, Vishal; Heimbuch, Scott

    1996-03-01

    New developments in visual communication technologies, and the increasingly digital nature of the industry infrastructure as a whole, are converging to enable new visual environments with an enhanced visual component in interaction, entertainment, and education. New applications and markets can be created, but this depends on the ability of the visual communications industry to provide market solutions that are cost effective and user friendly. Industry-wide cooperation in the development of integrated, open architecture applications enables the realization of such market solutions. This paper describes the work being done by Texas Instruments, in the development of its Digital Light Processing™ technology, to support the development of new visual communications technologies and applications.

  15. Electron Microscopy of Living Cells During in Situ Fluorescence Microscopy

    PubMed Central

    Liv, Nalan; van Oosten Slingeland, Daan S. B.; Baudoin, Jean-Pierre; Kruit, Pieter; Piston, David W.; Hoogenboom, Jacob P.

    2016-01-01

    We present an approach toward dynamic nanoimaging: live fluorescence imaging of cells encapsulated in a bionanoreactor is complemented with in situ scanning electron microscopy (SEM) on an integrated microscope. This allows us to take SEM snapshots on demand, that is, at a specific point in time and at a desired region of interest, guided by the dynamic fluorescence imaging. We show that this approach enables direct visualization, with EM resolution, of the distribution of bioconjugated quantum dots on cellular extensions during uptake and internalization. PMID:26580231

  16. High performance nanobio photocatalyst for targeted brain cancer therapy.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rozhkova, E.; Ulasov, I.; Dimitrijevic, N. M.

    We report pronounced and specific antiglioblastoma cell phototoxicity of 5 nm TiO2 particles covalently tethered to an antibody via a dihydroxybenzene bivalent linker. The linker application enables absorption of a visible part of the solar spectrum by the nanobio hybrid. The phototoxicity is mediated by reactive oxygen species (ROS) that initiate programmed death of the cancer cell. Synchrotron X-ray fluorescence microscopy (XFM) was applied for direct visualization of the nanobioconjugate distribution through a single brain cancer cell at the submicrometer scale.

  17. P-MartCancer–Interactive Online Software to Enable Analysis of Shotgun Cancer Proteomic Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webb-Robertson, Bobbie-Jo M.; Bramer, Lisa M.; Jensen, Jeffrey L.

    P-MartCancer is a new interactive web-based software environment that enables biomedical and biological scientists to perform in-depth analyses of global proteomics data without requiring direct interaction with the data or with statistical software. P-MartCancer offers a series of statistical modules associated with quality assessment, peptide and protein statistics, protein quantification and exploratory data analyses driven by the user via customized workflows and interactive visualization. Currently, P-MartCancer offers access to multiple cancer proteomic datasets generated through the Clinical Proteomics Tumor Analysis Consortium (CPTAC) at the peptide, gene and protein levels. P-MartCancer is deployed using Azure technologies (http://pmart.labworks.org/cptac.html); the web service is alternatively available via Docker Hub (https://hub.docker.com/r/pnnl/pmart-web/), and many statistical functions can be utilized directly from an R package available on GitHub (https://github.com/pmartR).

  18. Glyph-based analysis of multimodal directional distributions in vector field ensembles

    NASA Astrophysics Data System (ADS)

    Jarema, Mihaela; Demir, Ismail; Kehrer, Johannes; Westermann, Rüdiger

    2015-04-01

    Ensemble simulations are increasingly often performed in the geosciences in order to study the uncertainty and variability of model predictions. Describing ensemble data by mean and standard deviation can be misleading in the case of multimodal distributions. We present first results of a glyph-based visualization of multimodal directional distributions in 2D and 3D vector ensemble data. Directional information on the circle/sphere is modeled using mixtures of probability density functions (pdfs), which enables us to characterize the distributions with relatively few parameters. The resulting mixture models are represented by 2D and 3D lobular glyphs showing the direction, spread, and strength of each principal mode of the distributions. A 3D extension of our approach is realized by means of an efficient GPU rendering technique. We demonstrate our method in the context of ensemble weather simulations.
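
    A minimal sketch of the kind of directional mixture model behind such glyphs: fit a von Mises mixture to circular (2D direction) data with EM. The component count, initialization, and concentration approximation are simplifying assumptions, not the authors' exact procedure.

    ```python
    # EM for a K-component von Mises mixture on angles theta (radians).
    import numpy as np
    from scipy.stats import vonmises

    def fit_vonmises_mixture(theta, K=2, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        mu = rng.uniform(-np.pi, np.pi, K)       # component mean directions
        kappa = np.full(K, 2.0)                  # concentrations
        w = np.full(K, 1.0 / K)                  # mixture weights
        for _ in range(iters):
            # E-step: responsibilities of each component for each sample
            dens = np.stack([w[k] * vonmises.pdf(theta, kappa[k], loc=mu[k])
                             for k in range(K)])
            resp = dens / dens.sum(axis=0, keepdims=True)
            # M-step: weighted circular means, resultant lengths, concentrations
            for k in range(K):
                r = resp[k]
                C, S = np.sum(r * np.cos(theta)), np.sum(r * np.sin(theta))
                mu[k] = np.arctan2(S, C)
                R = np.sqrt(C**2 + S**2) / r.sum()             # mean resultant length
                kappa[k] = R * (2 - R**2) / max(1 - R**2, 1e-6)  # standard approximation
                w[k] = r.sum() / len(theta)
        return w, mu, kappa
    ```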

  19. Ultrahigh-Speed Optical Coherence Tomography for Three-Dimensional and En Face Imaging of the Retina and Optic Nerve Head

    PubMed Central

    Srinivasan, Vivek J.; Adler, Desmond C.; Chen, Yueli; Gorczynska, Iwona; Huber, Robert; Duker, Jay S.; Schuman, Joel S.; Fujimoto, James G.

    2009-01-01

    Purpose: To demonstrate ultrahigh-speed optical coherence tomography (OCT) imaging of the retina and optic nerve head at 249,000 axial scans per second and a wavelength of 1060 nm, and to investigate methods for visualization of the retina, choroid, and optic nerve using the high-density sampling enabled by improved imaging speed. Methods: A swept-source OCT retinal imaging system operating at a speed of 249,000 axial scans per second was developed. Imaging of the retina, choroid, and optic nerve was performed. Display methods such as speckle reduction, slicing along arbitrary planes, en face visualization of reflectance from specific retinal layers, and image compounding were investigated. Results: High-definition and three-dimensional (3D) imaging of the normal retina and optic nerve head were performed. Increased light penetration at 1060 nm enabled improved visualization of the choroid, lamina cribrosa, and sclera. OCT fundus images and 3D visualizations were generated with higher pixel density and fewer motion artifacts than standard spectral/Fourier domain OCT. En face images enabled visualization of the porous structure of the lamina cribrosa, nerve fiber layer, choroid, photoreceptors, RPE, and capillaries of the inner retina. Conclusions: Ultrahigh-speed OCT imaging of the retina and optic nerve head at 249,000 axial scans per second is possible. The improvement of ∼5 to 10× in imaging speed over commercial spectral/Fourier domain OCT technology enables higher-density raster scan protocols and improved performance of en face visualization methods. The combination of the longer wavelength and ultrahigh imaging speed enables excellent visualization of the choroid, sclera, and lamina cribrosa. PMID:18658089

  20. A novel anisotropic fast marching method and its application to blood flow computation in phase-contrast MRI.

    PubMed

    Schwenke, M; Hennemuth, A; Fischer, B; Friman, O

    2012-01-01

    Phase-contrast MRI (PC MRI) can be used to assess blood flow dynamics noninvasively inside the human body. The acquired images can be reconstructed into flow vector fields. Traditionally, streamlines are computed from the vector fields to visualize flow patterns and particle trajectories. These traditional methods may give a false impression of precision, as they do not consider the measurement uncertainty in the PC MRI images. In our prior work, we incorporated the uncertainty of the measurement into the computation of particle trajectories. As a major part of the contribution, a novel numerical scheme for solving the anisotropic Fast Marching problem is presented. A computing-time comparison to state-of-the-art methods is conducted on artificial tensor fields, and a visual comparison of healthy to pathological blood flow patterns is given. The comparison shows that the novel anisotropic Fast Marching solver outperforms previous schemes in terms of computing time, and the visual comparison directly reveals large deviations of pathological flow from healthy flow. The novel anisotropic Fast Marching solver efficiently resolves even strongly anisotropic path costs, and the visualization method enables the user to assess the uncertainty of particle trajectories derived from PC MRI images.
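
    For orientation, a minimal isotropic fast-marching solver on a 2D grid is sketched below; the paper's contribution is an anisotropic solver, which this simplified baseline deliberately does not implement. The speed map and grid spacing are hypothetical inputs.

    ```python
    # Isotropic fast marching: solve |grad T| = 1/speed with an upwind scheme and
    # a min-heap, starting from a set of seed cells (arrival time zero).
    import heapq
    import numpy as np

    def fast_marching(speed, seeds, h=1.0):
        ny, nx = speed.shape
        T = np.full((ny, nx), np.inf)            # arrival times
        frozen = np.zeros((ny, nx), bool)
        heap = []
        for s in seeds:                          # seeds: list of (i, j) tuples
            T[s] = 0.0
            heapq.heappush(heap, (0.0, s))
        while heap:
            t, (i, j) = heapq.heappop(heap)
            if frozen[i, j]:
                continue
            frozen[i, j] = True
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < ny and 0 <= b < nx and not frozen[a, b]:
                    tx = min(T[a, b - 1] if b > 0 else np.inf,
                             T[a, b + 1] if b < nx - 1 else np.inf)
                    ty = min(T[a - 1, b] if a > 0 else np.inf,
                             T[a + 1, b] if a < ny - 1 else np.inf)
                    f = h / speed[a, b]          # local cost per unit length
                    if np.isfinite(tx) and np.isfinite(ty) and abs(tx - ty) < f:
                        t_new = 0.5 * (tx + ty + np.sqrt(2 * f**2 - (tx - ty) ** 2))
                    else:
                        t_new = min(tx, ty) + f  # one-sided update
                    if t_new < T[a, b]:
                        T[a, b] = t_new
                        heapq.heappush(heap, (t_new, (a, b)))
        return T
    ```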

  1. Visual scan-path analysis with feature space transient fixation moments

    NASA Astrophysics Data System (ADS)

    Dempere-Marco, Laura; Hu, Xiao-Peng; Yang, Guang-Zhong

    2003-05-01

    The study of eye movements provides useful insight into the cognitive processes underlying visual search tasks. The analysis of the dynamics of eye movements has often been approached from a purely spatial perspective. In many cases, however, it may not be possible to define meaningful or consistent dynamics without considering the features underlying the scan paths. In this paper, the definition of the feature space is approached through the concept of visual similarity and non-linear low-dimensional embedding, which defines a mapping from the image space into a low-dimensional feature manifold that preserves the intrinsic similarity of image patterns. This has enabled the definition of perceptually meaningful features without the use of domain-specific knowledge. On this basis, the paper introduces a new concept called Feature Space Transient Fixation Moments (TFM). The approach presented tackles the problem of feature-space representation of visual search through the use of TFM. We demonstrate the practical value of this concept for characterizing the dynamics of eye movements in goal-directed visual search tasks, and illustrate how the model can be used to elucidate the fundamental steps involved in skilled search tasks through the evolution of transient fixation moments.

  2. A Data Model and Task Space for Data of Interest (DOI) Eye-Tracking Analyses.

    PubMed

    Jianu, Radu; Alam, Sayeed Safayet

    2018-03-01

    Eye-tracking data is traditionally analyzed by looking at where on a visual stimulus subjects fixate or, to facilitate more advanced analyses, by using areas of interest (AOIs) defined on the visual stimuli. Recently, there is increasing interest in methods that capture what users are looking at rather than where they are looking. By instrumenting the visualization code that transforms a data model into visual content, gaze coordinates reported by an eye-tracker can be mapped directly to the granular data shown on the screen, producing temporal sequences of data objects that subjects viewed in an experiment. Such data collection, which is called gaze-to-object mapping (GTOM) or data-of-interest (DOI) analysis, can be done reliably with limited overhead and can facilitate research workflows not previously possible. Our paper contributes to establishing a foundation for DOI analyses by defining a DOI data model and highlighting its differences from AOI data in structure and scale; by defining and exemplifying a space of DOI-enabled tasks; by describing three concrete examples of DOI experimentation in three different domains; and by discussing immediate research challenges in creating a framework of visual support for DOI experimentation and analysis.
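
    A hedged sketch of the gaze-to-object mapping step: each gaze sample is hit-tested against the rendered marks of a visualization and resolved to the underlying data object, and consecutive hits are collapsed into a viewed-object sequence. The mark geometry (axis-aligned boxes) and all names are illustrative assumptions.

    ```python
    # Map gaze samples (screen coordinates) to the data objects whose rendered
    # marks contain them, then build the temporal sequence of viewed objects.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Mark:
        data_id: str            # identifier of the underlying data object
        x: float; y: float      # top-left corner of the rendered mark (screen px)
        w: float; h: float      # width and height

    def gaze_to_object(gx: float, gy: float, marks: List[Mark]) -> Optional[str]:
        """Return the data object under a gaze sample; topmost (last-drawn) mark wins."""
        for m in reversed(marks):
            if m.x <= gx <= m.x + m.w and m.y <= gy <= m.y + m.h:
                return m.data_id
        return None

    def doi_sequence(gaze_samples, marks):
        """Collapse consecutive identical hits into a viewed-object sequence."""
        seq = []
        for gx, gy in gaze_samples:
            obj = gaze_to_object(gx, gy, marks)
            if obj is not None and (not seq or seq[-1] != obj):
                seq.append(obj)
        return seq
    ```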

  3. Extensible Computational Chemistry Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-08-09

    ECCE provides a sophisticated graphical user interface, scientific visualization tools, and the underlying data management framework, enabling scientists to efficiently set up calculations and to store, retrieve, and analyze the rapidly growing volumes of data produced by computational chemistry studies. ECCE was conceived as part of the Environmental Molecular Sciences Laboratory (EMSL) construction to enable researchers to effectively utilize complex computational chemistry codes and massively parallel high-performance compute resources. Bringing the power of these codes and resources to researchers' desktops, and thus enabling world-class research without users needing a detailed understanding of the inner workings of either the theoretical codes or the supercomputers needed to run them, was a grand-challenge problem in the original version of the EMSL. ECCE allows collaboration among researchers using a web-based data repository where the inputs and results for all calculations done within ECCE are organized. ECCE is a first-of-its-kind end-to-end problem-solving environment for all phases of computational chemistry research: setting up calculations with a sophisticated GUI and direct-manipulation visualization tools, submitting and monitoring calculations on remote high-performance supercomputers without having to be familiar with the details of using these compute resources, and performing results visualization and analysis, including creating publication-quality images. ECCE is a suite of tightly integrated applications that are employed as the user moves through the modeling process.

  4. Visually Imperceptible Liquid-Metal Circuits for Transparent, Stretchable Electronics with Direct Laser Writing.

    PubMed

    Pan, Chengfeng; Kumar, Kitty; Li, Jianzhao; Markvicka, Eric J; Herman, Peter R; Majidi, Carmel

    2018-03-01

    A material architecture and a laser-based microfabrication technique are introduced to produce electrically conductive films (sheet resistance = 2.95 Ω sq⁻¹; resistivity = 1.77 × 10⁻⁶ Ω m) that are soft, elastic (strain limit >100%), and optically transparent. The films are composed of a grid-like array of visually imperceptible liquid-metal (LM) lines on a clear elastomer. Unlike previous efforts in transparent LM circuitry, the current approach enables fully imperceptible electronics that not only have high optical transmittance (>85% at 550 nm) but are also invisible under typical lighting conditions and reading distances. This unique combination of properties is enabled by a laser writing technique that produces LM grid patterns with a line width and pitch as small as 4.5 and 100 µm, respectively, yielding grid-like wiring that has adequate conductivity for digital functionality but is also well below the threshold for visual perception. The electrical, mechanical, electromechanical, and optomechanical properties of the films are characterized, and it is found that high conductivity and transparency are preserved at tensile strains of ≈100%. To demonstrate their effectiveness for emerging applications in transparent displays and sensing electronics, the material architecture is incorporated into a couple of illustrative use cases related to chemical hazard warning. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. MEGALEX: A megastudy of visual and auditory word recognition.

    PubMed

    Ferrand, Ludovic; Méot, Alain; Spinelli, Elsa; New, Boris; Pallier, Christophe; Bonin, Patrick; Dufau, Stéphane; Mathôt, Sebastiaan; Grainger, Jonathan

    2018-06-01

    Using the megastudy approach, we report a new database (MEGALEX) of visual and auditory lexical decision times and accuracy rates for tens of thousands of words. We collected visual lexical decision data for 28,466 French words and the same number of pseudowords, and auditory lexical decision data for 17,876 French words and the same number of pseudowords (synthesized tokens were used for the auditory modality). This constitutes the first large-scale database for auditory lexical decision, and the first database to enable a direct comparison of word recognition in different modalities. Different regression analyses were conducted to illustrate potential ways to exploit this megastudy database. First, we compared the proportions of variance accounted for by five word frequency measures. Second, we conducted item-level regression analyses to examine the relative importance of the lexical variables influencing performance in the different modalities (visual and auditory). Finally, we compared the similarities and differences between the two modalities. All data are freely available on our website ( https://sedufau.shinyapps.io/megalex/ ) and are searchable at www.lexique.org , inside the Open Lexique search engine.

  6. Vroom: designing an augmented environment for remote collaboration in digital cinema production

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; Cornish, Tracy

    2013-03-01

    As media technologies become increasingly affordable, compact and inherently networked, new generations of telecollaborative platforms continue to arise which integrate these new affordances. Virtual reality has been primarily concerned with creating simulations of environments that can transport participants to real or imagined spaces that replace the "real world". Meanwhile Augmented Reality systems have evolved to interleave objects from Virtual Reality environments into the physical landscape. Perhaps now there is a new class of systems that reverse this precept to enhance dynamic media landscapes and immersive physical display environments to enable intuitive data exploration through collaboration. Vroom (Virtual Room) is a next-generation reconfigurable tiled display environment in development at the California Institute for Telecommunications and Information Technology (Calit2) at the University of California, San Diego. Vroom enables freely scalable digital collaboratories, connecting distributed, high-resolution visualization resources for collaborative work in the sciences, engineering and the arts. Vroom transforms a physical space into an immersive media environment with large format interactive display surfaces, video teleconferencing and spatialized audio built on a highspeed optical network backbone. Vroom enables group collaboration for local and remote participants to share knowledge and experiences. Possible applications include: remote learning, command and control, storyboarding, post-production editorial review, high resolution video playback, 3D visualization, screencasting and image, video and multimedia file sharing. To support these various scenarios, Vroom features support for multiple user interfaces (optical tracking, touch UI, gesture interface, etc.), support for directional and spatialized audio, giga-pixel image interactivity, 4K video streaming, 3D visualization and telematic production. This paper explains the design process that has been utilized to make Vroom an accessible and intuitive immersive environment for remote collaboration specifically for digital cinema production.

  7. Exploring the Engagement Effects of Visual Programming Language for Data Structure Courses

    ERIC Educational Resources Information Center

    Chang, Chih-Kai; Yang, Ya-Fei; Tsai, Yu-Tzu

    2017-01-01

    Previous research indicates that understanding the state of learning motivation enables researchers to deeply understand students' learning processes. Studies have shown that visual programming languages use graphical code, enabling learners to learn effectively, improve learning effectiveness, increase learning fun, and offering various other…

  8. Strand Displacement Amplification Reaction on Quantum Dot-Encoded Silica Bead for Visual Detection of Multiplex MicroRNAs.

    PubMed

    Qu, Xiaojun; Jin, Haojun; Liu, Yuqian; Sun, Qingjiang

    2018-03-06

    The combination of microbead arrays, isothermal amplification, and molecular signaling enables the continuous development of next-generation molecular diagnostic techniques. Herein we report the implementation of a nicking endonuclease-assisted strand displacement amplification reaction on a quantum dot-encoded silica microbead (Qbead) and demonstrate its feasibility for multiplexed miRNA assays in real samples. The Qbead features a well-defined core-shell superstructure with dual-colored quantum dots loaded in the silica core and shell, respectively, exhibiting remarkably high optical encoding stability. Specially designed stem-loop-structured probes were immobilized onto the Qbead for specific target recognition and amplification. In the presence of a low abundance of the miRNA target, the target triggered exponential amplification, producing a large quantity of stem-G-quadruplexes, which could be selectively signaled by a fluorescent G-quadruplex intercalator. In a one-step operation, the Qbead-based isothermal amplification and signaling generated an emissive "core-shell-satellite" superstructure, changing the Qbead emission color. The target abundance-dependent emission-color changes of the Qbead allowed direct, visual detection of the specific miRNA target. This visualization method achieved a limit of detection at the subfemtomolar level with a linear dynamic range of 4.5 logs and point-mutation discrimination capability for precise miRNA analyses. An array of three encoded Qbeads could simultaneously quantify three miRNA biomarkers in ∼500 human hepatoma carcinoma cells. With these advancements in ease of operation, multiplexing, and visualization capabilities, the isothermal amplification-on-Qbead assay could enable the development of point-of-care diagnostics.

  9. Exploratory Visual Analytics of a Dynamically Built Network of Nodes in a WebGL-Enabled Browser

    DTIC Science & Technology

    2014-01-01

    Index terms: dimensionality reduction, feature extraction, high-dimensional data, t-distributed stochastic neighbor embedding, neighbor retrieval visualizer, visual... WebGL-enabled rendering is supported natively by browsers such as the latest Mozilla Firefox, Google Chrome, and Microsoft Internet Explorer 11. The resultant 26-node network is displayed in a Mozilla Firefox browser in figure 2 (also see appendix B).

  10. Bring It to the Pitch: Combining Video and Movement Data to Enhance Team Sport Analysis.

    PubMed

    Stein, Manuel; Janetzko, Halldor; Lamprecht, Andreas; Breitkreutz, Thorsten; Zimmermann, Philipp; Goldlucke, Bastian; Schreck, Tobias; Andrienko, Gennady; Grossniklaus, Michael; Keim, Daniel A

    2018-01-01

    Analysts in professional team sport regularly perform analysis to gain strategic and tactical insights into player and team behavior. Goals of team sport analysis regularly include identification of weaknesses of opposing teams, or assessing performance and improvement potential of a coached team. Current analysis workflows are typically based on the analysis of team videos. Also, analysts can rely on techniques from Information Visualization, to depict e.g., player or ball trajectories. However, video analysis is typically a time-consuming process, where the analyst needs to memorize and annotate scenes. In contrast, visualization typically relies on an abstract data model, often using abstract visual mappings, and is not directly linked to the observed movement context anymore. We propose a visual analytics system that tightly integrates team sport video recordings with abstract visualization of underlying trajectory data. We apply appropriate computer vision techniques to extract trajectory data from video input. Furthermore, we apply advanced trajectory and movement analysis techniques to derive relevant team sport analytic measures for region, event and player analysis in the case of soccer analysis. Our system seamlessly integrates video and visualization modalities, enabling analysts to draw on the advantages of both analysis forms. Several expert studies conducted with team sport analysts indicate the effectiveness of our integrated approach.

  11. Missing depth cues in virtual reality limit performance and quality of three dimensional reaching movements

    PubMed Central

    Mayo, Johnathan; Baur, Kilian; Wittmann, Frieder; Riener, Robert; Wolf, Peter

    2018-01-01

    Background: Goal-directed reaching for real-world objects by humans is enabled through visual depth cues. In virtual environments, the number and quality of available visual depth cues are limited, which may affect reaching performance and the quality of reaching movements. Methods: We assessed three-dimensional reaching movements in five experimental groups, each with ten healthy volunteers. Three groups used a two-dimensional computer screen and two groups used a head-mounted display. The first screen group received the typically recreated visual depth cues, such as aerial and linear perspective, occlusion, shadows, and texture gradients. The second screen group received an abstract minimal rendering lacking those cues. The third screen group received the cues of the first screen group plus absolute depth cues enabled by the retinal image size of a known object, which were realized with visual renderings of the handheld device and a ghost handheld at the target location. The two head-mounted display groups received the same virtually recreated visual depth cues as the second and third screen groups, respectively. Additionally, they could rely on stereopsis and motion parallax due to head movements. Results and conclusion: All groups using the screen performed significantly worse than both groups using the head-mounted display in terms of completion time normalized by the straight-line distance to the target. Both groups using the head-mounted display achieved the optimal minimum in the number of speed peaks and in hand path ratio, indicating that our subjects performed natural movements when using a head-mounted display. Virtually recreated visual depth cues had a minor impact on reaching performance; only the screen group with rendered handhelds outperformed the other screen groups. Thus, if reaching performance in virtual environments is the main scope of a study, we suggest applying a head-mounted display. Otherwise, when two-dimensional screens are used, achievable performance is likely limited by the reduced depth perception and not just by subjects' motor skills. PMID:29293512
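
    The three movement-quality measures named above can be computed from a sampled hand trajectory roughly as follows (an illustrative sketch; the study's exact peak-detection and normalization conventions are not reproduced).

    ```python
    # Movement-quality measures from a sampled 3D hand trajectory.
    import numpy as np

    def reaching_metrics(pos, t, target):
        """pos: (N, 3) hand positions, t: (N,) timestamps, target: (3,) goal position."""
        straight = np.linalg.norm(target - pos[0])          # straight-line distance to target
        steps = np.linalg.norm(np.diff(pos, axis=0), axis=1)
        hand_path_ratio = steps.sum() / straight            # 1.0 = perfectly straight reach
        norm_completion_time = (t[-1] - t[0]) / straight    # completion time per unit distance
        speed = steps / np.diff(t)
        # count local maxima of the speed profile (more peaks = less smooth movement)
        peaks = np.sum((speed[1:-1] > speed[:-2]) & (speed[1:-1] > speed[2:]))
        return norm_completion_time, hand_path_ratio, int(peaks)
    ```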

  12. An Unroofing Method to Observe the Cytoskeleton Directly at Molecular Resolution Using Atomic Force Microscopy

    PubMed Central

    Usukura, Eiji; Narita, Akihiro; Yagi, Akira; Ito, Shuichi; Usukura, Jiro

    2016-01-01

    An improved unroofing method enabled the cantilever of an atomic force microscope (AFM) to reach directly into a cell to visualize the intracellular cytoskeletal actin filaments, microtubules, clathrin coats, and caveolae in phosphate-buffered saline (PBS) at a higher resolution than conventional electron microscopy. All of the actin filaments clearly exhibited a short periodicity of approximately 5–6 nm, which was derived from globular actins linked to each other to form filaments, as well as a long helical periodicity. The polarity of the actin filaments appeared to be determined by the shape of the periodic striations. Microtubules were identified based on their thickness. Clathrin coats and caveolae were observed on the cytoplasmic surface of cell membranes. The area containing clathrin molecules and their terminal domains was directly visualized. Characteristic ridge structures located at the surface of the caveolae were observed at high resolution, similar to those observed with electron microscopy (EM). Overall, unroofing allowed intracellular AFM imaging in a liquid environment with a level of quality equivalent or superior to that of EM. Thus, AFMs are anticipated to provide cutting-edge findings in cell biology and histology. PMID:27273367

  13. Thermal feature extraction of servers in a datacenter using thermal image registration

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan

    2017-09-01

    Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
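
    The registration step can be illustrated with a generic feature-based alignment of a thermal image to the corresponding visual image using OpenCV. This is a sketch of the general approach, not the paper's texture-characteristic pipeline; the choice of ORB features and the RANSAC threshold are assumptions.

    ```python
    # Align a thermal image to a visual image via feature matching and a homography.
    import cv2
    import numpy as np

    def register_thermal_to_visual(visual_gray, thermal_gray):
        orb = cv2.ORB_create(2000)
        kv, dv = orb.detectAndCompute(visual_gray, None)
        kt, dt = orb.detectAndCompute(thermal_gray, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(dt, dv), key=lambda m: m.distance)[:200]
        src = np.float32([kt[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kv[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust to mismatches
        h, w = visual_gray.shape[:2]
        return cv2.warpPerspective(thermal_gray, H, (w, h))    # thermal aligned to visual
    ```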

  14. Visualizing period fluctuations in strained-layer superlattices with scanning tunneling microscopy

    NASA Astrophysics Data System (ADS)

    Kanedy, K.; Lopez, F.; Wood, M. R.; Gmachl, C. F.; Weimer, M.; Klem, J. F.; Hawkins, S. D.; Shaner, E. A.; Kim, J. K.

    2018-01-01

    We show how cross-sectional scanning tunneling microscopy (STM) may be used to accurately map the period fluctuations throughout epitaxial, strained-layer superlattices based on the InAs/InAsSb and InGaAs/InAlAs material systems. The concept, analogous to Bragg's law in high-resolution x-ray diffraction, relies on an analysis of the [001]-convolved reciprocal-space satellite peaks obtained from discrete Fourier transforms of individual STM images. Properly implemented, the technique enables local period measurements that reliably discriminate vertical fluctuations localized to within ˜5 superlattice repeats along the [001] growth direction and orthogonal, lateral fluctuations localized to within ˜40 nm along <110> directions in the growth plane. While not as accurate as x-ray, the inherent, single-image measurement error associated with the method may be made as small as 0.1%, allowing the vertical or lateral period fluctuations contributing to inhomogeneous energy broadening and carrier localization in these structures to be pinpointed and quantified. The direct visualization of unexpectedly large, lateral period fluctuations on nanometer length scales in both strain-balanced systems supports a common understanding in terms of correlated interface roughness.
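
    The Fourier-based local period measurement can be sketched as follows: take the FFT of an image profile along the growth direction, locate the first-order superlattice (satellite) peak, and convert its spatial frequency to a period. The pixel calibration and peak-search cutoff are hypothetical inputs, not the authors' exact parameters.

    ```python
    # Local superlattice period from an STM image patch via a 1D FFT of the
    # profile averaged along the in-plane direction.
    import numpy as np

    def local_period_nm(image_patch, pixel_nm, axis=0, min_cycles=2):
        patch = image_patch - image_patch.mean()
        profile = patch.mean(axis=1 - axis)                      # profile along growth axis
        spec = np.abs(np.fft.rfft(profile))
        freqs = np.fft.rfftfreq(patch.shape[axis], d=pixel_nm)   # cycles per nm
        # skip the DC/low-frequency region before searching for the satellite peak
        lo = np.searchsorted(freqs, min_cycles / (patch.shape[axis] * pixel_nm))
        k = lo + np.argmax(spec[lo:])
        return 1.0 / freqs[k]                                    # superlattice period in nm
    ```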

  15. Development of High-speed Visualization System of Hypocenter Data Using CUDA-based GPU computing

    NASA Astrophysics Data System (ADS)

    Kumagai, T.; Okubo, K.; Uchida, N.; Matsuzawa, T.; Kawada, N.; Takeuchi, N.

    2014-12-01

    After the Great East Japan Earthquake on March 11, 2011, intelligent visualization of seismic information has become increasingly important for understanding earthquake phenomena. At the same time, the quantity of seismic data has grown enormously with the progress of high-accuracy observation networks, and many parameters (e.g., positional information, origin time, magnitude) must be handled to display seismic information efficiently. High-speed processing of data and image information is therefore necessary to handle such large amounts of seismic data. Recently, the GPU (Graphics Processing Unit) has been used as an acceleration tool for data processing and calculation in various fields of study, a movement called GPGPU (General-Purpose computing on GPUs). In the last few years the performance of GPUs has kept improving rapidly, and GPU computing now provides a high-performance computing environment at a lower cost than before. Moreover, the GPU is advantageous for visualizing the processed data, because it was originally designed as an architecture for graphics processing. In GPU computing, the processed data are always stored in video memory, so drawing information can be written directly to the VRAM on the video card by combining CUDA with a graphics API. In this study, we employ CUDA together with OpenGL and/or DirectX to realize a full-GPU implementation. This method makes it possible to write drawing information to the VRAM on the video card without PCIe bus data transfer, enabling high-speed processing of seismic data. The present study examines GPU-computing-based high-speed visualization and its feasibility for a high-speed visualization system for hypocenter data.

  16. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    PubMed

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
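
    The standard optimal-integration prediction reviewed here is that the combined heading estimate is a reliability-weighted average of the single-cue estimates, with a combined variance lower than either cue alone. The snippet below is a minimal numerical illustration of that textbook relation, not a model of the reviewed experiments.

    ```python
    # The standard optimal-cue-integration prediction (illustration only):
    # each cue's weight is proportional to its reliability (inverse variance).
    import numpy as np

    def integrate_heading(mu_vis, sigma_vis, mu_vest, sigma_vest):
        w_vis = sigma_vis**-2 / (sigma_vis**-2 + sigma_vest**-2)
        w_vest = 1.0 - w_vis
        mu_comb = w_vis * mu_vis + w_vest * mu_vest
        sigma_comb = np.sqrt(1.0 / (sigma_vis**-2 + sigma_vest**-2))
        return mu_comb, sigma_comb

    # e.g. a visual estimate of 10 deg (sd 2 deg) and a vestibular estimate of 4 deg (sd 4 deg):
    print(integrate_heading(10.0, 2.0, 4.0, 4.0))  # -> (8.8, ~1.79): pulled toward the reliable cue
    ```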

  17. Bounded-Degree Approximations of Stochastic Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, Christopher J.; Pinar, Ali; Kiyavash, Negar

    2017-06-01

    We propose algorithms to approximate directed information graphs. Directed information graphs are probabilistic graphical models that depict causal dependencies between stochastic processes in a network. The proposed algorithms identify optimal and near-optimal approximations in terms of Kullback-Leibler divergence. The user-chosen sparsity trades off the quality of the approximation against visual conciseness and computational tractability. One class of approximations contains graphs with specified in-degrees. Another class additionally requires that the graph is connected. For both classes, we propose algorithms to identify the optimal approximations and also near-optimal approximations, using a novel relaxation of submodularity. We also propose algorithms to identify the r-best approximations among these classes, enabling robust decision making.
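
    A schematic sketch of the general idea follows: for each process, select an in-degree-bounded parent set that maximizes estimated directed information, which corresponds to minimizing the KL divergence of the approximating graph. The exhaustive search shown here is exponential in the in-degree bound; the paper's algorithms and its submodularity relaxation avoid that cost. `directed_info` is a hypothetical, user-supplied estimator, not part of the paper.

    ```python
    # Schematic sketch (not the paper's algorithm): pick, for each process, a bounded-size
    # parent set by estimated directed information; more directed information retained
    # corresponds to a lower-KL-divergence approximation.
    from itertools import combinations

    def approximate_graph(n_processes, max_in_degree, directed_info):
        """directed_info(target, parent_tuple) -> float is a hypothetical estimator."""
        graph = {}
        for target in range(n_processes):
            candidates = [p for p in range(n_processes) if p != target]
            best_parents, best_score = (), 0.0
            for k in range(1, max_in_degree + 1):
                for parents in combinations(candidates, k):
                    score = directed_info(target, parents)
                    if score > best_score:
                        best_parents, best_score = parents, score
            graph[target] = best_parents
        return graph
    ```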

  18. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  19. High-Speed Atomic Force Microscopy

    NASA Astrophysics Data System (ADS)

    Ando, Toshio; Uchihashi, Takayuki; Kodera, Noriyuki

    2012-08-01

    The technology of high-speed atomic force microscopy (HS-AFM) has reached maturity. HS-AFM enables us to directly visualize the structure and dynamics of biological molecules in physiological solutions at subsecond to sub-100 ms temporal resolution. By this microscopy, dynamically acting molecules such as myosin V walking on an actin filament and bacteriorhodopsin in response to light are successfully visualized. High-resolution molecular movies reveal the dynamic behavior of molecules in action in great detail. Inferences no longer have to be made from static snapshots of molecular structures and from the dynamic behavior of optical markers attached to biomolecules. In this review, we first describe theoretical considerations for the highest possible imaging rate, then summarize techniques involved in HS-AFM and highlight recent imaging studies. Finally, we briefly discuss future challenges to explore.

  20. Sound-induced Interfacial Dynamics in a Microfluidic Two-phase Flow

    NASA Astrophysics Data System (ADS)

    Mak, Sze Yi; Shum, Ho Cheung

    2014-11-01

    Retrieving sound waves by fluidic means is challenging due to the difficulty of visualizing the very minute sound-induced fluid motion. This work studies the interfacial response of multiphase systems to fluctuations in the flow. We demonstrate a direct visualization of music in the form of ripples at a microfluidic aqueous-aqueous interface with an ultra-low interfacial tension. The interface shows a passive response to sound of different frequencies with sufficiently precise time resolution, enabling the recording of musical notes and even their subsequent reconstruction with high fidelity. This suggests that sensing and transmitting vibrations as tiny as those induced by sound could be realized in low-interfacial-tension systems. The robust control of the interfacial dynamics could be adopted for droplet and complex-fiber generation.

  1. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    DOE PAGES

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; ...

    2017-04-20

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  2. Multi-focus and multi-level techniques for visualization and analysis of networks with thematic data

    NASA Astrophysics Data System (ADS)

    Cossalter, Michele; Mengshoel, Ole J.; Selker, Ted

    2013-01-01

    Information-rich data sets bring several challenges in the areas of visualization and analysis, even when associated with node-link network visualizations. This paper presents an integration of multi-focus and multi-level techniques that enable interactive, multi-step comparisons in node-link networks. We describe NetEx, a visualization tool that enables users to simultaneously explore different parts of a network and its thematic data, such as time series or conditional probability tables. NetEx, implemented as a Cytoscape plug-in, has been applied to the analysis of electrical power networks, Bayesian networks, and the Enron e-mail repository. In this paper we briefly discuss visualization and analysis of the Enron social network, but focus on data from an electrical power network. Specifically, we demonstrate how NetEx supports the analytical task of electrical power system fault diagnosis. Results from a user study with 25 subjects suggest that NetEx enables more accurate isolation of complex faults compared to a specially designed software tool.

  3. High performance visual display for HENP detectors

    NASA Astrophysics Data System (ADS)

    McGuigan, Michael; Smith, Gordon; Spiletic, John; Fine, Valeri; Nevski, Pavel

    2001-08-01

    A high-end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long, even on a powerful workstation. To visualize HENP detectors with maximal performance, we have developed software with the following characteristics. We develop a visual display of HENP detectors on the BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI, etc., to avoid conflicting with the many graphics development groups associated with specific detectors like STAR and ATLAS. We develop advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detectors and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactive controls, including the ability to slice, search, and mark areas of the detector. We incorporate the ability to make a high-quality still image of a view of the detector, and the ability to generate animations and a fly-through of the detector and output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain a real-time visual display for events accumulated during simulations.

  4. Visual and tactile interfaces for bi-directional human robot communication

    NASA Astrophysics Data System (ADS)

    Barber, Daniel; Lackey, Stephanie; Reinerman-Jones, Lauren; Hudson, Irwin

    2013-05-01

    Seamless integration of unmanned systems and Soldiers in the operational environment requires robust communication capabilities. Multi-Modal Communication (MMC) facilitates achieving this goal through redundancy and levels of communication superior to single-mode interaction, using auditory, visual, and tactile modalities. Visual signaling using arm and hand gestures is a natural method of communication between people. Visual signals standardized within the U.S. Army Field Manual and in use by Soldiers provide a foundation for developing gestures for human-to-robot communication. Emerging technologies using Inertial Measurement Units (IMU) enable classification of arm and hand gestures for communication with a robot without the line-of-sight requirement of computer vision techniques. These devices improve the robustness of interpreting gestures in noisy environments and are capable of classifying signals relevant to operational tasks. Closing the communication loop between Soldiers and robots necessitates that robots be able to return equivalent messages. Existing visual signals from robots to humans typically require highly anthropomorphic features not present on military vehicles. Tactile displays tap into an unused modality for robot-to-human communication. Typically used for hands-free navigation and cueing, existing tactile display technologies are used to deliver equivalent visual signals from the U.S. Army Field Manual. This paper describes ongoing research to collaboratively develop tactile communication methods with Soldiers, to measure the classification accuracy of visual signal interfaces, and to provide an integration example including two robotic platforms.
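
    A minimal, hedged sketch of the IMU gesture-classification step follows. The window statistics and the random-forest classifier are illustrative assumptions; the study's actual features and classifier are not specified in the abstract.

    ```python
    # Illustrative sketch of IMU-based gesture classification (not the study's system):
    # summarize each gesture window with simple statistics and train an off-the-shelf classifier.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(imu_window):
        """imu_window: (n_samples, 6) array of 3-axis accelerometer + 3-axis gyroscope readings."""
        return np.concatenate([imu_window.mean(axis=0),
                               imu_window.std(axis=0),
                               imu_window.min(axis=0),
                               imu_window.max(axis=0)])

    def train_gesture_classifier(windows, labels):
        X = np.stack([window_features(w) for w in windows])
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X, labels)
        return clf   # clf.predict(window_features(new_window)[None, :]) yields a gesture label
    ```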

  5. Experience-enabled enhancement of adult visual cortex function.

    PubMed

    Tschetter, Wayne W; Alam, Nazia M; Yee, Christopher W; Gorz, Mario; Douglas, Robert M; Sagdullaev, Botir; Prusky, Glen T

    2013-03-20

    We previously reported in adult mice that visuomotor experience during monocular deprivation (MD) augmented enhancement of visual-cortex-dependent behavior through the non-deprived eye (NDE) during deprivation, and enabled enhanced function to persist after MD. We investigated the physiological substrates of this experience-enabled form of adult cortical plasticity by measuring visual behavior and visually evoked potentials (VEPs) in binocular visual cortex of the same mice before, during, and after MD. MD on its own potentiated VEPs contralateral to the NDE during MD and shifted ocular dominance (OD) in favor of the NDE in both hemispheres. Whereas we expected visuomotor experience during MD to augment these effects, instead enhanced responses contralateral to the NDE, and the OD shift ipsilateral to the NDE were attenuated. However, in the same animals, we measured NMDA receptor-dependent VEP potentiation ipsilateral to the NDE during MD, which persisted after MD. The results indicate that visuomotor experience during adult MD leads to enduring enhancement of behavioral function, not simply by amplifying MD-induced changes in cortical OD, but through an independent process of increasing NDE drive in ipsilateral visual cortex. Because the plasticity is resident in the mature visual cortex and selectively effects gain of visual behavior through experiential means, it may have the therapeutic potential to target and non-invasively treat eye- or visual-field-specific cortical impairment.

  6. Incidental biasing of attention from visual long-term memory.

    PubMed

    Fan, Judith E; Turk-Browne, Nicholas B

    2016-06-01

    Holding recently experienced information in mind can help us achieve our current goals. However, such immediate and direct forms of guidance from working memory are less helpful over extended delays or when other related information in long-term memory is useful for reaching these goals. Here we show that information that was encoded in the past but is no longer present or relevant to the task also guides attention. We examined this by associating multiple unique features with novel shapes in visual long-term memory (VLTM), and subsequently testing how memories for these objects biased the deployment of attention. In Experiment 1, VLTM for associated features guided visual search for the shapes, even when these features had never been task-relevant. In Experiment 2, associated features captured attention when presented in isolation during a secondary task that was completely unrelated to the shapes. These findings suggest that long-term memory enables a durable and automatic type of memory-based attentional control. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. An Indoor Navigation System for the Visually Impaired

    PubMed Central

    Guerrero, Luis A.; Vasquez, Francisco; Ochoa, Sergio F.

    2012-01-01

    Navigation in indoor environments is highly challenging for the severely visually impaired, particularly in spaces visited for the first time. Several solutions have been proposed to deal with this challenge. Although some of them have proven useful in real scenarios, they involve a substantial deployment effort or use artifacts that are not natural for blind users. This paper presents an indoor navigation system that was designed with usability as the quality requirement to be maximized. This solution identifies the position of a person and calculates the velocity and direction of their movements. Using this information, the system determines the user's trajectory, locates possible obstacles on that route, and offers navigation information to the user. The solution has been evaluated in two experimental scenarios. Although the results are not yet sufficient to support strong conclusions, they indicate that the system is suitable for guiding visually impaired people through an unknown built environment. PMID:22969398

  8. All-optical recording and stimulation of retinal neurons in vivo in retinal degeneration mice

    PubMed Central

    Strazzeri, Jennifer M.; Williams, David R.; Merigan, William H.

    2018-01-01

    Here we demonstrate the application of a method that could accelerate the development of novel therapies by allowing direct and repeatable visualization of cellular function in the living eye, to study loss of vision in animal models of retinal disease, as well as evaluate the time course of retinal function following therapeutic intervention. We use high-resolution adaptive optics scanning light ophthalmoscopy to image fluorescence from the calcium sensor GCaMP6s. In mice with photoreceptor degeneration (rd10), we measured restored visual responses in ganglion cell layer neurons expressing the red-shifted channelrhodopsin ChrimsonR over a six-week period following significant loss of visual responses. Combining a fluorescent calcium sensor, a channelrhodopsin, and adaptive optics enables all-optical stimulation and recording of retinal neurons in the living eye. Because the retina is an accessible portal to the central nervous system, our method also provides a novel non-invasive method of dissecting neuronal processing in the brain. PMID:29596518

  9. The Geophysical Fluid Flow Cell Experiment

    NASA Technical Reports Server (NTRS)

    Hart, J. E.; Ohlsen, D.; Kittleman, S.; Borhani, N.; Leslie, F.; Miller, T.

    1999-01-01

    The Geophysical Fluid Flow Cell (GFFC) experiment performed visualizations of thermal convection in a rotating differentially heated spherical shell of fluid. In these experiments dielectric polarization forces are used to generate a radially directed buoyancy force. This enables the laboratory simulation of a number of geophysically and astrophysically important situations in which sphericity and rotation both impose strong constraints on global scale fluid motions. During USML-2 a large set of experiments with spherically symmetric heating were carried out. These enabled the determination of critical points for the transition to various forms of nonaxisymmetric convection and, for highly turbulent flows, the transition latitudes separating the different modes of motion. This paper presents a first analysis of these experiments as well as data on the general performance of the instrument during the USML-2 flight.

  10. On three-dimensional misorientation spaces

    PubMed Central

    Bennett, Robbie J.; Vukmanovic, Zoja; Solano-Alvarez, Wilberth; Lainé, Steven J.; Einsle, Joshua F.; Midgley, Paul A.; Rae, Catherine M. F.; Hielscher, Ralf

    2017-01-01

    Determining the local orientation of crystals in engineering and geological materials has become routine with the advent of modern crystallographic mapping techniques. These techniques enable many thousands of orientation measurements to be made, directing attention towards how such orientation data are best studied. Here, we provide a guide to the visualization of misorientation data in three-dimensional vector spaces, reduced by crystal symmetry, to reveal crystallographic orientation relationships. Domains for all point group symmetries are presented and an analysis methodology is developed and applied to identify crystallographic relationships, indicated by clusters in the misorientation space, in examples from materials science and geology. This analysis aids the determination of active deformation mechanisms, and evaluation of cluster centres and spread enables a more accurate description of transformation processes, supporting arguments regarding provenance. PMID:29118660
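
    The basic quantity behind such analyses is the symmetry-reduced misorientation between two measured orientations. The sketch below computes the disorientation angle for the cubic case by minimizing over the 24 proper rotations of the cube; it illustrates the underlying calculation only and is not the paper's methodology or the MTEX-style tooling it builds on.

    ```python
    # Illustrative sketch of the quantity underlying misorientation analysis (cubic case):
    # the disorientation angle is the smallest rotation angle of g2 * g1^T over the 24
    # proper rotations of the cubic point group.
    import numpy as np
    from itertools import permutations, product

    def cubic_rotations():
        """The 24 proper rotations of the cube: signed permutation matrices with det +1."""
        mats = []
        for perm in permutations(range(3)):
            for signs in product([1, -1], repeat=3):
                M = np.zeros((3, 3))
                for row, (col, s) in enumerate(zip(perm, signs)):
                    M[row, col] = s
                if np.isclose(np.linalg.det(M), 1.0):
                    mats.append(M)
        return mats

    def disorientation_angle(g1, g2):
        """Minimum misorientation angle (radians) between orientation matrices g1 and g2."""
        delta = g2 @ g1.T
        angles = []
        for S in cubic_rotations():
            tr = np.trace(S @ delta)
            angles.append(np.arccos(np.clip((tr - 1.0) / 2.0, -1.0, 1.0)))
        return min(angles)
    ```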

  11. Image-Enabled Discourse: Investigating the Creation of Visual Information as Communicative Practice

    ERIC Educational Resources Information Center

    Snyder, Jaime

    2012-01-01

    Anyone who has clarified a thought or prompted a response during a conversation by drawing a picture has exploited the potential of image making as an interactive tool for conveying information. Images are increasingly ubiquitous in daily communication, in large part due to advances in visually enabled information and communication technologies…

  12. TU-G-BRA-02: Can We Extract Lung Function Directly From 4D-CT Without Deformable Image Registration?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kipritidis, J; Woodruff, H; Counter, W

    Purpose: Dynamic CT ventilation imaging (CT-VI) visualizes air volume changes in the lung by evaluating breathing-induced lung motion using deformable image registration (DIR). Dynamic CT-VI could enable functionally adaptive lung cancer radiation therapy, but its sensitivity to DIR parameters poses challenges for validation. We hypothesize that a direct metric using CT parameters derived from Hounsfield units (HU) alone can provide similar ventilation images without DIR. We compare the accuracy of Direct and Dynamic CT-VIs versus positron emission tomography (PET) images of inhaled 68Ga-labelled nanoparticles ('Galligas'). Methods: 25 patients with lung cancer underwent Galligas 4D-PET/CT scans prior to radiation therapy. For each patient we produced three CT-VIs. (i) Our novel method, Direct CT-VI, models blood-gas exchange as the product of air and tissue density at each lung voxel based on time-averaged 4D-CT HU values. Dynamic CT-VIs were produced by evaluating: (ii) regional HU changes, and (iii) regional volume changes between the exhale and inhale 4D-CT phase images using a validated B-spline DIR method. We assessed the accuracy of each CT-VI by computing the voxel-wise Spearman correlation with free-breathing Galligas PET, and also performed a visual analysis. Results: Surprisingly, Direct CT-VIs exhibited better global correlation with Galligas PET than either of the dynamic CT-VIs. The (mean ± SD) correlations were (0.55 ± 0.16), (0.41 ± 0.22) and (0.29 ± 0.27) for Direct, Dynamic HU-based and Dynamic volume-based CT-VIs respectively. Visual comparison of Direct CT-VI to PET demonstrated similarity for emphysema defects and ventral-to-dorsal gradients, but inability to identify decreased ventilation distal to tumor obstruction. Conclusion: Our data supports the hypothesis that Direct CT-VIs are as accurate as Dynamic CT-VIs in terms of global correlation with Galligas PET. Visual analysis, however, demonstrated that different CT-VI algorithms might have varying accuracy depending on the underlying cause of ventilation abnormality. This research was supported by a National Health and Medical Research Council (NHMRC) Australia Fellowship, a Cancer Institute New South Wales Early Career Fellowship 13-ECF-1/15 and NHMRC scholarship APP1038399. No commercial funding was received for this work.
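
    One plausible reading of "the product of air and tissue density at each lung voxel based on time-averaged HU values" uses the standard linear HU decomposition (air ≈ -1000 HU, soft tissue ≈ 0 HU). The sketch below follows that reading and adds the Spearman comparison step; it is an illustration under those assumptions, not the authors' implementation.

    ```python
    # One plausible reading of the Direct CT-VI metric (illustration only, not the authors' code):
    # decompose each lung voxel's time-averaged HU into air and tissue fractions
    # (air ~ -1000 HU, soft tissue ~ 0 HU) and take their product.
    import numpy as np
    from scipy.stats import spearmanr

    def direct_ct_vi(mean_hu, lung_mask):
        hu = np.clip(mean_hu, -1000.0, 0.0)
        f_air = -hu / 1000.0            # fractional air content per voxel
        f_tissue = 1.0 - f_air          # fractional tissue content per voxel
        vi = f_air * f_tissue
        return np.where(lung_mask, vi, 0.0)

    def voxelwise_spearman(ct_vi, galligas_pet, lung_mask):
        """Global accuracy metric used above: Spearman correlation inside the lung mask."""
        rho, _ = spearmanr(ct_vi[lung_mask], galligas_pet[lung_mask])
        return rho
    ```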

  13. Unaware Processing of Tools in the Neural System for Object-Directed Action Representation.

    PubMed

    Tettamanti, Marco; Conca, Francesca; Falini, Andrea; Perani, Daniela

    2017-11-01

    The hypothesis that the brain constitutively encodes observed manipulable objects for the actions they afford is still debated. Yet, crucial evidence demonstrating that, even in the absence of perceptual awareness, the mere visual appearance of a manipulable object triggers a visuomotor coding in the action representation system including the premotor cortex, has hitherto not been provided. In this fMRI study, we instantiated reliable unaware visual perception conditions by means of continuous flash suppression, and we tested in 24 healthy human participants (13 females) whether the visuomotor object-directed action representation system that includes left-hemispheric premotor, parietal, and posterior temporal cortices is activated even under subliminal perceptual conditions. We found consistent activation in the target visuomotor cortices, both with and without perceptual awareness, specifically for pictures of manipulable versus non-manipulable objects. By means of a multivariate searchlight analysis, we also found that the brain activation patterns in this visuomotor network enabled the decoding of manipulable versus non-manipulable object picture processing, both with and without awareness. These findings demonstrate the intimate neural coupling between visual perception and motor representation that underlies manipulable object processing: manipulable object stimuli specifically engage the visuomotor object-directed action representation system, in a constitutive manner that is independent from perceptual awareness. This perceptuo-motor coupling endows the brain with an efficient mechanism for monitoring and planning reactions to external stimuli in the absence of awareness. SIGNIFICANCE STATEMENT Our brain constantly encodes the visual information that hits the retina, leading to a stimulus-specific activation of sensory and semantic representations, even for objects that we do not consciously perceive. Do these unconscious representations encompass the motor programming of actions that could be accomplished congruently with the objects' functions? In this fMRI study, we instantiated unaware visual perception conditions, by dynamically suppressing the visibility of manipulable object pictures with mondrian masks. Despite escaping conscious perception, manipulable objects activated an object-directed action representation system that includes left-hemispheric premotor, parietal, and posterior temporal cortices. This demonstrates that visuomotor encoding occurs independently of conscious object perception. Copyright © 2017 the authors 0270-6474/17/3710712-13$15.00/0.

  14. A study of morphology, provenance, and movement of desert sand seas in Africa, Asia, and Australia

    NASA Technical Reports Server (NTRS)

    Mckee, E. D. (Principal Investigator); Breed, C. S.

    1973-01-01

    The author has identified the following significant results. Recent acquisition of generally high quality color prints for most of the test sites has enabled the project to make significant advances in preparing mosaics of sand desert areas under study. Computer enhancement of imagery of selected sites, where details of complex dune forms need to be determined, has been achieved with arrival of computer-compatible ERTS-1 tapes. Further, a comparator, recently received, gives precise visual measurements of width, length, and spacing of sand bodies and so improves comparison of patterns in various test sites. Considerable additional meteorological data recently received on sand-moving winds in China, Pakistan, Libya and other areas enabled much progress to be made in developing overlays for the dune mosaics. These data show direction, speed, and frequency of winds. Other new data for use in preparing overlays used with ERTS-1 image mosaics include ground truth on moisture control, geologic settings, and plant distribution. With the addition of visual observation data and prints from hand-held photography now being obtained by the Skylab mission, much progress in interpreting the patterns of sand seas for 17 desert sites is anticipated.

  15. A study of morphology, provenance, and movement of desert sand seas in Africa, Asia, and Australia

    NASA Technical Reports Server (NTRS)

    Mckee, E. D. (Principal Investigator); Breed, C. S.

    1974-01-01

    The author has identified the following significant results. Recent acquisition of generally high quality color prints for most of the test sites has enabled this project to make significant advances in preparing mosaics of sand desert areas under study. Computer enhancement of imagery, where details of complex dune forms need to be determined, has been achieved with arrival of computer-compatible ERTS-1 tapes. Further, a comparator, recently received, gives precise visual measurements of width, length, and spacing of sand bodies and so improves comparison of patterns in various test sites. Considerable additional meteorological data recently received on sand-moving winds in China, Pakistan, Libya, and other study areas enabled much progress to be made in developing overlays for the dune mosaics. These data show direction, speed, and frequency of winds. Other new data for use in preparing overlays used with ERTS-1 image mosaics include ground truth on moisture control, geologic settings, and plant distribution. With the addition of visual observation data and prints from hand-held photography now being obtained by the Skylab 4 mission, much progress in interpreting the patterns of sand seas for 17 desert sites is anticipated.

  16. Interventional magnetic resonance angiography with no strings attached: wireless active catheter visualization.

    PubMed

    Quick, Harald H; Zenge, Michael O; Kuehl, Hilmar; Kaiser, Gernot; Aker, Stephanie; Massing, Sandra; Bosk, Silke; Ladd, Mark E

    2005-02-01

    Active instrument visualization strategies for interventional MR angiography (MRA) require vascular instruments to be equipped with some type of radiofrequency (RF) coil or dipole RF antenna for MR signal detection. Such visualization strategies traditionally necessitate a connection to the scanner with either coaxial cable or laser fibers. In order to eliminate any wire connection, RF resonators that inductively couple their signal to MR surface coils were implemented into catheters to enable wireless active instrument visualization. Instrument background to contrast-to-noise ratio was systematically investigated as a function of the excitation flip angle. Signal coupling between the catheter RF coil and surface RF coils was evaluated qualitatively and quantitatively as a function of the catheter position and orientation with regard to the static magnetic field B0 and to the surface coils. In vivo evaluation of the instruments was performed in interventional MRA procedures on five pigs under MR guidance. Cartesian and projection reconstruction TrueFISP imaging enabled simultaneous visualization of the instruments and vascular morphology in real time. The implementation of RF resonators enabled robust visualization of the catheter curvature to the very tip. Additionally, the active visualization strategy does not require any wire connection to the scanner and thus does not hamper the interventionalist during the course of an intervention.

  17. Social Circles: A 3D User Interface for Facebook

    NASA Astrophysics Data System (ADS)

    Rodrigues, Diego; Oakley, Ian

    Online social network services are increasingly popular web applications which display large amounts of rich multimedia content: contacts, status updates, photos and event information. Arguing that this quantity of information overwhelms conventional user interfaces, this paper presents Social Circles, a rich interactive visualization designed to support real world users of social network services in everyday tasks such as keeping up with friends and organizing their network. It achieves this by using 3D UIs, fluid animations and a spatial metaphor to enable direct manipulation of a social network.

  18. A facile route towards large area self-assembled nanoscale silver film morphologies and their applications towards metal enhanced fluorescence

    DOE PAGES

    Hohenberger, Erik; Freitag, Nathan; Rosenmann, Daniel; ...

    2017-04-19

    Here, we present a facile method for fabricating nanostructured silver films containing a high density of nanoscopic gap features through a surface directed phenomenon utilizing nanoporous scaffolds rather than through traditional lithographic patterning processes. This method enables tunability of the silver film growth by simply adjusting the formulation and processing conditions of the nanoporous film prior to metallization. We further demonstrate that this process can produce nanoscopic gaps in thick (100 nm) silver films supporting localized surface plasmon resonance with large field amplification within the gaps while enabling launching of propagating surface plasmons within the silver grains. These enhanced fields provide metal enhanced fluorescence with enhancement factors as high as 21 times compared to glass, as well as enable visualization of single fluorophore emission. This work provides a low-cost rapid approach for producing novel nanostructures capable of broadband fluorescence amplification, with potential applications including plasmonic and fluorescence based optical sensing and imaging applications.

  19. Web-based interactive visualization in a Grid-enabled neuroimaging application using HTML5.

    PubMed

    Siewert, René; Specovius, Svenja; Wu, Jie; Krefting, Dagmar

    2012-01-01

    Interactive visualization and correction of intermediate results are required in many medical image analysis pipelines. To allow such interaction during the remote execution of compute- and data-intensive applications, new features of HTML5 are used. They allow for transparent integration of user interaction into Grid- or Cloud-enabled scientific workflows. Both 2D and 3D visualization and data manipulation can be performed through a scientific gateway without the need to install specific software or web browser plugins. The possibilities of web-based visualization are presented along the FreeSurfer pipeline, a popular compute- and data-intensive software tool for quantitative neuroimaging.

  20. Visual representation of scientific information.

    PubMed

    Wong, Bang

    2011-02-15

    Great technological advances have enabled researchers to generate an enormous amount of data. Data analysis is replacing data generation as the rate-limiting step in scientific research. With this wealth of information, we have an opportunity to understand the molecular causes of human diseases. However, the unprecedented scale, resolution, and variety of data pose new analytical challenges. Visual representation of data offers insights that can lead to new understanding, whether the purpose is analysis or communication. This presentation shows how art, design, and traditional illustration can enable scientific discovery. Examples will be drawn from the Broad Institute's Data Visualization Initiative, aimed at establishing processes for creating informative visualization models.

  1. Bimetallic Effect of Single Nanocatalysts Visualized by Super-Resolution Catalysis Imaging

    DOE PAGES

    Chen, Guanqun; Zou, Ningmu; Chen, Bo; ...

    2017-11-01

    Compared with their monometallic counterparts, bimetallic nanoparticles often show enhanced catalytic activity associated with the bimetallic interface. Direct quantitation of catalytic activity at the bimetallic interface is important for understanding the enhancement mechanism, but challenging experimentally. Here using single-molecule super-resolution catalysis imaging in correlation with electron microscopy, we report the first quantitative visualization of enhanced bimetallic activity within single bimetallic nanoparticles. We focus on heteronuclear bimetallic PdAu nanoparticles that present a well-defined Pd–Au bimetallic interface in catalyzing a photodriven fluorogenic disproportionation reaction. Our approach also enables a direct comparison between the bimetallic and monometallic regions within the same nanoparticle. Theoretical calculations further provide insights into the electronic nature of N–O bond activation of the reactant (resazurin) adsorbed on bimetallic sites. Subparticle activity correlation between bimetallic enhancement and monometallic activity suggests that the favorable locations to construct bimetallic sites are those monometallic sites with higher activity, leading to a strategy for making effective bimetallic nanocatalysts. Furthermore, the results highlight the power of super-resolution catalysis imaging in gaining insights that could help improve nanocatalysts.

  2. Visualizing the origins of selfish de novo mutations in individual seminiferous tubules of human testes

    PubMed Central

    Maher, Geoffrey J.; McGowan, Simon J.; Giannoulatou, Eleni; Verrill, Clare; Goriely, Anne; Wilkie, Andrew O. M.

    2016-01-01

    De novo point mutations arise predominantly in the male germline and increase in frequency with age, but it has not previously been possible to locate specific, identifiable mutations directly within the seminiferous tubules of human testes. Using microdissection of tubules exhibiting altered expression of the spermatogonial markers MAGEA4, FGFR3, and phospho-AKT, whole genome amplification, and DNA sequencing, we establish an in situ strategy for discovery and analysis of pathogenic de novo mutations. In 14 testes from men aged 39–90 y, we identified 11 distinct gain-of-function mutations in five genes (fibroblast growth factor receptors FGFR2 and FGFR3, tyrosine phosphatase PTPN11, and RAS oncogene homologs HRAS and KRAS) from 16 of 22 tubules analyzed; all mutations have known associations with severe diseases, ranging from congenital or perinatal lethal disorders to somatically acquired cancers. These results support proposed selfish selection of spermatogonial mutations affecting growth factor receptor-RAS signaling, highlight its prevalence in older men, and enable direct visualization of the microscopic anatomy of elongated mutant clones. PMID:26858415

  3. Visualizing the origins of selfish de novo mutations in individual seminiferous tubules of human testes.

    PubMed

    Maher, Geoffrey J; McGowan, Simon J; Giannoulatou, Eleni; Verrill, Clare; Goriely, Anne; Wilkie, Andrew O M

    2016-03-01

    De novo point mutations arise predominantly in the male germline and increase in frequency with age, but it has not previously been possible to locate specific, identifiable mutations directly within the seminiferous tubules of human testes. Using microdissection of tubules exhibiting altered expression of the spermatogonial markers MAGEA4, FGFR3, and phospho-AKT, whole genome amplification, and DNA sequencing, we establish an in situ strategy for discovery and analysis of pathogenic de novo mutations. In 14 testes from men aged 39-90 y, we identified 11 distinct gain-of-function mutations in five genes (fibroblast growth factor receptors FGFR2 and FGFR3, tyrosine phosphatase PTPN11, and RAS oncogene homologs HRAS and KRAS) from 16 of 22 tubules analyzed; all mutations have known associations with severe diseases, ranging from congenital or perinatal lethal disorders to somatically acquired cancers. These results support proposed selfish selection of spermatogonial mutations affecting growth factor receptor-RAS signaling, highlight its prevalence in older men, and enable direct visualization of the microscopic anatomy of elongated mutant clones.

  4. Word-Synchronous Optical Sampling of Periodically Repeated OTDM Data Words for True Waveform Visualization

    NASA Astrophysics Data System (ADS)

    Benkler, Erik; Telle, Harald R.

    2007-06-01

    An improved phase-locked loop (PLL) for versatile synchronization of a sampling pulse train to an optical data stream is presented. It enables optical sampling of the true waveform of repetitive high bit-rate optical time division multiplexed (OTDM) data words such as pseudorandom bit sequences. Visualization of the true waveform can reveal details that cause systematic bit errors. Such errors cannot be inferred from eye diagrams and require word-synchronous sampling. The programmable direct-digital-synthesis circuit used in our novel PLL approach allows flexible adaptation to virtually any problem-specific synchronization scenario, including those required for waveform sampling, for jitter measurements by slope detection, and for classical eye diagrams. Phase comparison of the PLL is performed at the 10-GHz OTDM base clock rate, leading to a residual synchronization jitter of less than 70 fs.

  5. Application of Deep Learning in Automated Analysis of Molecular Images in Cancer: A Survey

    PubMed Central

    Xue, Yong; Chen, Shihui; Liu, Yong

    2017-01-01

    Molecular imaging enables the visualization and quantitative analysis of alterations in biological processes at the molecular and/or cellular level, which is of great significance for the early detection of cancer. In recent years, deep learning has been widely used in medical imaging analysis, as it overcomes the limitations of visual assessment and traditional machine learning techniques by extracting hierarchical features with powerful representation capability. Research on cancer molecular images using deep learning techniques is also increasing rapidly. Hence, in this paper, we review the applications of deep learning in molecular imaging in terms of tumor lesion segmentation, tumor classification, and survival prediction. We also outline some future directions in which researchers may develop more powerful deep learning models for better performance in applications in cancer molecular imaging. PMID:29114182

  6. A New Era of Image Guidance with Magnetic Resonance-guided Radiation Therapy for Abdominal and Thoracic Malignancies

    PubMed Central

    Paliwal, Bhudatt; Hill, Patrick; Bayouth, John E; Geurts, Mark W; Baschnagel, Andrew M; Bradley, Kristin A; Harari, Paul M; Rosenberg, Stephen; Brower, Jeffrey V; Wojcieszynski, Andrzej P; Hullett, Craig; Bayliss, R A; Labby, Zacariah E; Bassetti, Michael F

    2018-01-01

    Magnetic resonance-guided radiation therapy (MRgRT) offers advantages for image guidance for radiotherapy treatments as compared to conventional computed tomography (CT)-based modalities. The superior soft tissue contrast of magnetic resonance (MR) enables an improved visualization of the gross tumor and adjacent normal tissues in the treatment of abdominal and thoracic malignancies. Online adaptive capabilities, coupled with advanced motion management of real-time tracking of the tumor, directly allow for high-precision inter-/intrafraction localization. The primary aim of this case series is to describe MR-based interventions for localizing targets not well-visualized with conventional image-guided technologies. The abdominal and thoracic sites of the lung, kidney, liver, and gastric targets are described to illustrate the technological advancement of MR-guidance in radiotherapy. PMID:29872602

  7. Enhancing Icing Training for Pilots Through Web-Based Multimedia

    NASA Technical Reports Server (NTRS)

    Fletcher, William; Nolan, Gary; Adanich, Emery; Bond, Thomas H.

    2006-01-01

    The Aircraft Icing Project of the NASA Aviation Safety Program has developed a number of in-flight icing education and training aids designed to increase pilot awareness about the hazards associated with various icing conditions. The challenges and advantages of transitioning these icing training materials to a Web-based delivery are discussed. Innovative Web-based delivery devices increased course availability to pilots and dispatchers while increasing course flexibility and utility. These courses are customizable for both self-directed and instructor-led learning. Part of our goal was to create training materials with enough flexibility to enable Web-based delivery and downloadable portability while maintaining a rich visual multimedia-based learning experience. Studies suggest that using visually based multimedia techniques increases the effectiveness of icing training materials. This paper describes these concepts, gives examples, and discusses the transitional challenges.

  8. Temporal dynamics of attention during encoding vs. maintenance of working memory: complementary views from event-related potentials and alpha-band oscillations

    PubMed Central

    Myers, Nicholas E.; Walther, Lena; Wallis, George; Stokes, Mark G.; Nobre, Anna C.

    2015-01-01

    Working memory (WM) is strongly influenced by attention. In visual working-memory tasks, recall performance can be improved by an attention-guiding cue presented before encoding (precue) or during maintenance (retrocue). Although precues and retrocues recruit a similar fronto-parietal control network, the two are likely to exhibit some processing differences, since precues invite anticipation of upcoming information, while retrocues may guide prioritisation, protection, and selection of information already in mind. Here we explored the behavioral and electrophysiological differences between precueing and retrocueing in a new visual working-memory task designed to permit a direct comparison between cueing conditions. We found marked differences in event-related potential (ERP) profiles between the precue and retrocue conditions. In line with precues primarily generating an anticipatory shift of attention toward the location of an upcoming item, we found a robust lateralization in late cue-evoked potentials associated with target anticipation. Retrocues elicited a different pattern of ERPs that was compatible with an early selection mechanism, but not with stimulus anticipation. In contrast to the distinct ERP patterns, alpha band (8-14 Hz) lateralization was indistinguishable between cue types (reflecting, in both conditions, the location of the cued item). We speculate that whereas alpha-band lateralization after a precue is likely to enable anticipatory attention, lateralization after a retrocue may instead enable the controlled spatiotopic access to recently encoded visual information. PMID:25244118
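
    The alpha-band (8-14 Hz) lateralization measure described here is commonly computed as a contralateral-versus-ipsilateral power contrast. The sketch below uses a standard bandpass-plus-Hilbert estimate of alpha power and a normalized lateralization index; this is a conventional recipe, not necessarily the authors' exact pipeline.

    ```python
    # Minimal sketch of an alpha-band (8-14 Hz) lateralization index (a common convention;
    # not necessarily the authors' exact pipeline).
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def alpha_power(signal, fs):
        b, a = butter(4, [8.0, 14.0], btype="bandpass", fs=fs)
        analytic = hilbert(filtfilt(b, a, signal))
        return np.abs(analytic) ** 2                      # instantaneous alpha power

    def lateralization_index(contra, ipsi, fs):
        """(contra - ipsi) / (contra + ipsi), averaged over time; negative values indicate
        relative alpha suppression contralateral to the cued item."""
        p_c, p_i = alpha_power(contra, fs).mean(), alpha_power(ipsi, fs).mean()
        return (p_c - p_i) / (p_c + p_i)
    ```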

  9. Three-dimensional visualization of nanostructured surfaces and bacterial attachment using Autodesk® Maya®.

    PubMed

    Boshkovikj, Veselin; Fluke, Christopher J; Crawford, Russell J; Ivanova, Elena P

    2014-02-28

    There has been a growing interest in understanding the ways in which bacteria interact with nano-structured surfaces. As a result, there is a need for innovative approaches to enable researchers to visualize the biological processes taking place, despite the fact that it is not possible to directly observe these processes. We present a novel approach for the three-dimensional visualization of bacterial interactions with nano-structured surfaces using the software package Autodesk Maya. Our approach comprises a semi-automated stage, where actual surface topographic parameters, obtained using an atomic force microscope, are imported into Maya via a custom Python script, followed by a 'creative stage', where the bacterial cells and their interactions with the surfaces are visualized using available experimental data. The 'Dynamics' and 'nDynamics' capabilities of the Maya software allowed the construction and visualization of plausible interaction scenarios. This capability provides a practical aid to knowledge discovery, assists in the dissemination of research results, and provides an opportunity for an improved public understanding. We validated our approach by graphically depicting the interactions between the two bacteria being used for modeling purposes, Staphylococcus aureus and Pseudomonas aeruginosa, with different titanium substrate surfaces that are routinely used in the production of biomedical devices.
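
    Since the workflow explicitly imports AFM topography into Maya via a custom Python script, a minimal maya.cmds sketch of that semi-automated stage is given below. It runs inside Maya's Python interpreter; the CSV format, grid size, and height scaling are illustrative assumptions rather than the authors' script.

    ```python
    # Minimal sketch of the semi-automated AFM import stage (runs inside Maya's Python
    # interpreter; file format and scaling are illustrative, not the authors' script).
    import maya.cmds as cmds
    import csv

    def build_surface_from_afm(csv_path, xy_size_um=5.0, z_scale=0.001):
        """Read an AFM height map (rows of nm values) and displace a polyPlane's vertices."""
        with open(csv_path) as f:
            heights = [[float(v) for v in row] for row in csv.reader(f)]
        ny, nx = len(heights), len(heights[0])
        plane, _ = cmds.polyPlane(width=xy_size_um, height=xy_size_um,
                                  subdivisionsX=nx - 1, subdivisionsY=ny - 1)
        for j, row in enumerate(heights):
            for i, h_nm in enumerate(row):
                vtx = "%s.vtx[%d]" % (plane, j * nx + i)
                cmds.move(0, h_nm * z_scale, 0, vtx, relative=True)  # raise vertex by its height
        return plane
    ```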

  10. Three-dimensional visualization of nanostructured surfaces and bacterial attachment using Autodesk® Maya®

    NASA Astrophysics Data System (ADS)

    Boshkovikj, Veselin; Fluke, Christopher J.; Crawford, Russell J.; Ivanova, Elena P.

    2014-02-01

    There has been a growing interest in understanding the ways in which bacteria interact with nano-structured surfaces. As a result, there is a need for innovative approaches to enable researchers to visualize the biological processes taking place, despite the fact that it is not possible to directly observe these processes. We present a novel approach for the three-dimensional visualization of bacterial interactions with nano-structured surfaces using the software package Autodesk Maya. Our approach comprises a semi-automated stage, where actual surface topographic parameters, obtained using an atomic force microscope, are imported into Maya via a custom Python script, followed by a `creative stage', where the bacterial cells and their interactions with the surfaces are visualized using available experimental data. The `Dynamics' and `nDynamics' capabilities of the Maya software allowed the construction and visualization of plausible interaction scenarios. This capability provides a practical aid to knowledge discovery, assists in the dissemination of research results, and provides an opportunity for an improved public understanding. We validated our approach by graphically depicting the interactions between the two bacteria being used for modeling purposes, Staphylococcus aureus and Pseudomonas aeruginosa, with different titanium substrate surfaces that are routinely used in the production of biomedical devices.

  11. Three-dimensional visualization of nanostructured surfaces and bacterial attachment using Autodesk® Maya®

    PubMed Central

    Boshkovikj, Veselin; Fluke, Christopher J.; Crawford, Russell J.; Ivanova, Elena P.

    2014-01-01

    There has been a growing interest in understanding the ways in which bacteria interact with nano-structured surfaces. As a result, there is a need for innovative approaches to enable researchers to visualize the biological processes taking place, despite the fact that it is not possible to directly observe these processes. We present a novel approach for the three-dimensional visualization of bacterial interactions with nano-structured surfaces using the software package Autodesk Maya. Our approach comprises a semi-automated stage, where actual surface topographic parameters, obtained using an atomic force microscope, are imported into Maya via a custom Python script, followed by a ‘creative stage', where the bacterial cells and their interactions with the surfaces are visualized using available experimental data. The ‘Dynamics' and ‘nDynamics' capabilities of the Maya software allowed the construction and visualization of plausible interaction scenarios. This capability provides a practical aid to knowledge discovery, assists in the dissemination of research results, and provides an opportunity for an improved public understanding. We validated our approach by graphically depicting the interactions between the two bacteria being used for modeling purposes, Staphylococcus aureus and Pseudomonas aeruginosa, with different titanium substrate surfaces that are routinely used in the production of biomedical devices. PMID:24577105

  12. STAR: an integrated solution to management and visualization of sequencing data.

    PubMed

    Wang, Tao; Liu, Jie; Shen, Li; Tonti-Filippini, Julian; Zhu, Yun; Jia, Haiyang; Lister, Ryan; Whitaker, John W; Ecker, Joseph R; Millar, A Harvey; Ren, Bing; Wang, Wei

    2013-12-15

    Easy visualization of complex data features is a necessary step in conducting studies on next-generation sequencing (NGS) data. We developed STAR, an integrated web application that enables online management, visualization and track-based analysis of NGS data. STAR is a multilayer web service system. On the client side, STAR leverages JavaScript, HTML5 Canvas and asynchronous communications to deliver a smoothly scrolling desktop-like graphical user interface with a suite of in-browser analysis tools that range from providing simple track configuration controls to sophisticated feature detection within datasets. On the server side, STAR supports private session state retention via an account management system and provides data management modules that enable collection, visualization and analysis of third-party sequencing data from the public domain, with thousands of tracks hosted to date. Overall, STAR represents a next-generation data exploration solution to match the requirements of NGS data, enabling both intuitive visualization and dynamic analysis of data. STAR browser system is freely available on the web at http://wanglab.ucsd.edu/star/browser and https://github.com/angell1117/STAR-genome-browser.

  13. High-throughput electrical measurement and microfluidic sorting of semiconductor nanowires.

    PubMed

    Akin, Cevat; Feldman, Leonard C; Durand, Corentin; Hus, Saban M; Li, An-Ping; Hui, Ho Yee; Filler, Michael A; Yi, Jingang; Shan, Jerry W

    2016-05-24

    Existing nanowire electrical characterization tools not only are expensive and require sophisticated facilities, but are far too slow to enable statistical characterization of highly variable samples. They are also generally not compatible with further sorting and processing of nanowires. Here, we demonstrate a high-throughput, solution-based electro-orientation-spectroscopy (EOS) method, which is capable of automated electrical characterization of individual nanowires by direct optical visualization of their alignment behavior under spatially uniform electric fields of different frequencies. We demonstrate that EOS can quantitatively characterize the electrical conductivities of nanowires over a 6-order-of-magnitude range (10(-5) to 10 S m(-1), corresponding to typical carrier densities of 10(10)-10(16) cm(-3)), with different fluids used to suspend the nanowires. By implementing EOS in a simple microfluidic device, continuous electrical characterization is achieved, and the sorting of nanowires is demonstrated as a proof-of-concept. With measurement speeds two orders of magnitude faster than direct-contact methods, the automated EOS instrument enables for the first time the statistical characterization of highly variable 1D nanomaterials.

  14. Displays enabling mobile multimedia

    NASA Astrophysics Data System (ADS)

    Kimmel, Jyrki

    2007-02-01

    With the rapid advances in telecommunications networks, mobile multimedia delivery to handsets is now a reality. While a truly immersive multimedia experience is still far ahead in the mobile world, significant advances have been made in the constituent audio-visual technologies to make this become possible. One of the critical components in multimedia delivery is the mobile handset display. While such alternatives as headset-style near-to-eye displays, autostereoscopic displays, mini-projectors, and roll-out flexible displays can deliver either a larger virtual screen size than the pocketable dimensions of the mobile device can offer, or an added degree of immersion by adding the illusion of the third dimension in the viewing experience, there are still challenges in the full deployment of such displays in real-life mobile communication terminals. Meanwhile, direct-view display technologies have developed steadily, and can provide a development platform for an even better viewing experience for multimedia in the near future. The paper presents an overview of the mobile display technology space with an emphasis on the advances and potential in developing direct-view displays further to meet the goal of enabling multimedia in the mobile domain.

  15. Protein-Coupled Fluorescent Probe To Visualize Potassium Ion Transition on Cellular Membranes.

    PubMed

    Hirata, Tomoya; Terai, Takuya; Yamamura, Hisao; Shimonishi, Manabu; Komatsu, Toru; Hanaoka, Kenjiro; Ueno, Tasuku; Imaizumi, Yuji; Nagano, Tetsuo; Urano, Yasuteru

    2016-03-01

    K(+) is the most abundant metal ion in cells, and changes of [K(+)] around cell membranes play important roles in physiological events. However, there is no practical method to selectively visualize [K(+)] at the surface of cells. To address this issue, we have developed a protein-coupled fluorescent probe for K(+), TLSHalo. TLSHalo is responsive to [K(+)] in the physiological range, with good selectivity over Na(+) and retains its K(+)-sensing properties after covalent conjugation with HaloTag protein. By using cells expressing HaloTag on the plasma membrane, we successfully directed TLSHalo specifically to the outer surface of target cells. This enabled us to visualize localized extracellular [K(+)] change with TLSHalo under a fluorescence microscope in real time. To confirm the experimental value of this system, we used TLSHalo to monitor extracellular [K(+)] change induced by K(+) ionophores or by activation of a native Ca(2+)-dependent K(+) channel (BK channel). Further, we show that K(+) efflux via BK channel induced by electrical stimulation at the bottom surface of the cells can be visualized with TLSHalo by means of total internal reflection fluorescence microscope (TIRFM) imaging. Our methodology should be useful to analyze physiological K(+) dynamics with high spatiotemporal resolution.

  16. A multi-criteria approach to camera motion design for volume data animation.

    PubMed

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

    We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems build animations from keyframes selected by the user, an approach that is limited in providing optimal in-between views of the data. In computer graphics and virtual reality, camera motion planning frequently focuses on collision-free movement through a virtual walkthrough; for semi-transparent, fuzzy, or blobby volume data, the collision-free objective alone is insufficient. Here, we provide a set of essential criteria focused on computing camera paths that establish effective animations of volume data. Our dynamic multi-criteria solver, coupled with a force-directed routing algorithm, enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach into an interactive volume visualization system reduces the effort of creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
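
    As an illustration of the multi-criteria idea described above, the Python sketch below scores candidate camera positions by a weighted sum of simple criteria and greedily picks the next position. The criteria, weights, target viewing distance, and the greedy selection step are illustrative assumptions, not the authors' actual solver or force-directed router.

        import numpy as np

        def score_candidates(candidates, prev_pos, feature_pos, weights=(1.0, 1.0, 1.0)):
            """Score candidate camera positions (higher is better) with weighted criteria."""
            candidates = np.asarray(candidates, dtype=float)
            w_feat, w_smooth, w_dist = weights
            # Criterion 1 (assumed): stay close to the feature of interest.
            feat_term = -np.linalg.norm(candidates - feature_pos, axis=1)
            # Criterion 2 (assumed): small displacement from the previous position (smooth motion).
            smooth_term = -np.linalg.norm(candidates - prev_pos, axis=1)
            # Criterion 3 (assumed): keep a comfortable viewing distance of about 2.0 units.
            dist_term = -np.abs(np.linalg.norm(candidates - feature_pos, axis=1) - 2.0)
            return w_feat * feat_term + w_smooth * smooth_term + w_dist * dist_term

        prev = np.array([0.0, 0.0, 3.0])              # current camera position
        feature = np.array([0.0, 0.0, 0.0])           # feature the animation should keep in view
        cands = np.random.default_rng(0).normal(size=(50, 3)) + prev
        best = cands[np.argmax(score_candidates(cands, prev, feature))]
        print("next camera position:", best)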

  17. Dynamic wake prediction and visualization with uncertainty analysis

    NASA Technical Reports Server (NTRS)

    Holforty, Wendy L. (Inventor); Powell, J. David (Inventor)

    2005-01-01

    A dynamic wake avoidance system utilizes aircraft and atmospheric parameters readily available in flight to model and predict airborne wake vortices in real time. A novel combination of algorithms allows a relatively simple yet robust wake model to be constructed based on information extracted from a broadcast. The system predicts the location and movement of the wake based on the nominal wake model and correspondingly performs an uncertainty analysis on the wake model to determine a wake hazard zone (no-fly zone), which comprises a plurality of wake planes, each moving independently of the others. The system selectively adjusts the dimensions of each wake plane to minimize spatial and temporal uncertainty, thereby ensuring that the actual wake is within the wake hazard zone. The predicted wake hazard zone is communicated in real time directly to a user via a realistic visual representation. In an example, the wake hazard zone is visualized on a 3-D flight deck display to enable a pilot to see a neighboring aircraft as well as its wake. The system substantially enhances the pilot's situational awareness and allows for a further safe decrease in spacing, which could alleviate airport and airspace congestion.

  18. Image Mapping and Visual Attention on the Sensory Ego-Sphere

    NASA Technical Reports Server (NTRS)

    Fleming, Katherine Achim; Peters, Richard Alan, II

    2012-01-01

    The Sensory Ego-Sphere (SES) is a short-term memory for a robot in the form of an egocentric, tessellated, spherical, sensory-motor map of the robot's locale. Visual attention enables fast alignment of overlapping images without warping or position optimization, since an attentional point (AP) on the composite typically corresponds to one on each of the collocated regions in the images. Such alignment speeds analysis of the multiple images of the area. Compositing and attention were performed in two ways and compared: (1) APs were computed directly on the composite and not on the full-resolution images until the time of retrieval; and (2) the attentional operator was applied to all incoming imagery. It was found that although the second method was slower, it produced consistent and, thereby, more useful APs. The SES is an integral part of a control system that will enable a robot to learn new behaviors based on its previous experiences, and that will enable it to recombine its known behaviors in such a way as to solve related, but novel, task problems with apparent creativity. The approach is to combine sensory-motor data association and dimensionality reduction to learn navigation and manipulation tasks as sequences of basic behaviors that can be implemented with a small set of closed-loop controllers. Over time, the aggregate of behaviors and their transition probabilities form a stochastic network. Then, given a task, the robot finds a path in the network that leads from its current state to the goal. The SES provides a short-term memory for the cognitive functions of the robot, association of sensory and motor data via spatio-temporal coincidence, direction of the attention of the robot, navigation through spatial localization with respect to known or discovered landmarks, and structured data sharing between the robot and human team members, the individuals in multi-robot teams, or with a C3 center.
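
    The toy Python sketch below only illustrates the kind of data structure the SES describes: an egocentric, roughly uniform tessellation of the unit sphere whose nodes store sensory events posted by direction. The Fibonacci-lattice tessellation, node count, and post_event interface are assumptions for illustration, not the SES implementation itself.

        import numpy as np

        def fibonacci_sphere(n=256):
            """Roughly uniform unit-sphere tessellation (node directions)."""
            i = np.arange(n)
            phi = np.pi * (3.0 - np.sqrt(5.0)) * i    # golden-angle increment
            z = 1.0 - 2.0 * (i + 0.5) / n
            r = np.sqrt(1.0 - z * z)
            return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

        nodes = fibonacci_sphere()
        memory = {}                                   # node index -> list of stored events

        def post_event(direction, payload):
            """Store a sensory event at the sphere node nearest to its direction."""
            d = np.asarray(direction, dtype=float)
            d /= np.linalg.norm(d)
            idx = int(np.argmax(nodes @ d))           # nearest node by cosine similarity
            memory.setdefault(idx, []).append(payload)
            return idx

        idx = post_event([0.2, 0.9, 0.1], {"type": "attentional_point", "image_id": 42})
        print("event stored at node", idx)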

  19. Three-dimensional bright-field scanning transmission electron microscopy elucidates novel nanostructure in microbial biofilms.

    PubMed

    Hickey, William J; Shetty, Ameesha R; Massey, Randall J; Toso, Daniel B; Austin, Jotham

    2017-01-01

    Bacterial biofilms play key roles in environmental and biomedical processes, and understanding their activities requires comprehension of their nanoarchitectural characteristics. Electron microscopy (EM) is an essential tool for nanostructural analysis, but conventional EM methods are limited in that they either provide topographical information alone, or are suitable for imaging only relatively thin (<300 nm) sample volumes. For biofilm investigations, these are significant restrictions. Understanding structural relations between cells requires imaging of a sample volume sufficiently large to encompass multiple cells and the capture of both external and internal details of cell structure. An emerging EM technique with such capabilities is bright-field scanning transmission electron microscopy (BF-STEM), and in the present report BF-STEM was coupled with tomography to elucidate nanostructure in biofilms formed by the polycyclic aromatic hydrocarbon-degrading soil bacterium, Delftia acidovorans Cs1-4. Dual-axis BF-STEM enabled high-resolution (6-10 nm) 3-D tomographic reconstruction and visualization of thick (1250 and 1500 nm) sections. The 3-D data revealed that novel extracellular structures, termed nanopods, were polymorphic and formed complex networks within cell clusters. BF-STEM tomography enabled visualization of conduits formed by nanopods that could enable intercellular movement of outer membrane vesicles, and thereby enable direct communication between cells. This report is the first to document application of dual-axis BF-STEM tomography to obtain high-resolution 3-D images of novel nanostructures in bacterial biofilms. Future work with dual-axis BF-STEM tomography combined with correlative light and electron microscopy may provide deeper insights into physiological functions associated with nanopods as well as other nanostructures. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  20. Yet More Visualized JAMSTEC Cruise and Dive Information

    NASA Astrophysics Data System (ADS)

    Tomiyama, T.; Hase, H.; Fukuda, K.; Saito, H.; Kayo, M.; Matsuda, S.; Azuma, S.

    2014-12-01

    Every year, JAMSTEC performs about a hundred research cruises and numerous dive surveys using its research vessels and submersibles. JAMSTEC provides data and samples obtained during these cruises and dives to international users through a series of data sites on the Internet. The "DARWIN" data site (http://www.godac.jamstec.go.jp/darwin/e) disseminates cruise and dive information. On DARWIN, users can search for cruises and dives of interest with a combined search form or an interactive tree menu, and find lists of observation data as well as links to surrounding databases. A document catalog, physical sample databases, and a visual archive of dive surveys (e.g., http://www.godac.jamstec.go.jp/jmedia/portal/e) are directly accessible from the lists. In 2014, DARWIN was updated, mainly to enable on-demand data visualization. Logged-in users can put listed data items into a virtual basket and then trim, plot, and download the data. The visualization tools help users quickly grasp the quality and characteristics of observation data. Meanwhile, JAMSTEC launched a new data site named "JDIVES" (http://www.godac.jamstec.go.jp/jdives/e) to visualize data and sample information obtained by dive surveys. JDIVES shows tracks of dive surveys on the Google Earth Plugin and diagrams of deep-sea environmental data such as temperature, salinity, and depth. Submersible camera images and links to associated databases are placed along the dive tracks. The JDIVES interface enables users to perform virtual dive surveys, which can help them understand the local geometry of dive spots and the geological settings of associated data and samples. It is not easy for individual researchers to organize the huge amount of information recovered from each cruise and dive. The improved visibility and accessibility of JAMSTEC databases are advantageous not only for secondary users, but also for on-board researchers themselves.

  1. 3D visualization of solar wind ion data from the Chang'E-1 exploration

    NASA Astrophysics Data System (ADS)

    Zhang, Tian; Sun, Yankui; Tang, Zesheng

    2011-10-01

    Chang'E-1 (abbreviation CE-1), China's first Moon-orbiting spacecraft launched in 2007, carried equipment called the Solar Wind Ion Detector (abbreviation SWID), which sent back tens of gigabytes of solar wind ion differential number flux data. These data are essential for furthering our understanding of the cislunar space environment. However, to fully comprehend and analyze these data presents considerable difficulties, not only because of their huge size (57 GB), but also because of their complexity. Therefore, a new 3D visualization method is developed to give a more intuitive representation than traditional 1D and 2D visualizations, and in particular to offer a better indication of the direction of the incident ion differential number flux and the relative spatial position of CE-1 with respect to the Sun, the Earth, and the Moon. First, a coordinate system named Selenocentric Solar Ecliptic (SSE) which is more suitable for our goal is chosen, and solar wind ion differential number flux vectors in SSE are calculated from Geocentric Solar Ecliptic System (GSE) and Moon Center Coordinate (MCC) coordinates of the spacecraft, and then the ion differential number flux distribution in SSE is visualized in 3D space. This visualization method is integrated into an interactive visualization analysis software tool named vtSWIDs, developed in MATLAB, which enables researchers to browse through numerous records and manipulate the visualization results in real time. The tool also provides some useful statistical analysis functions, and can be easily expanded.

  2. Vernier perceptual learning transfers to completely untrained retinal locations after double training: A “piggybacking” effect

    PubMed Central

    Wang, Rui; Zhang, Jun-Yun; Klein, Stanley A.; Levi, Dennis M.; Yu, Cong

    2014-01-01

    Perceptual learning, a process in which training improves visual discrimination, is often specific to the trained retinal location, and this location specificity is frequently regarded as an indication of neural plasticity in the retinotopic visual cortex. However, our previous studies have shown that “double training” enables location-specific perceptual learning, such as Vernier learning, to completely transfer to a new location where an irrelevant task is practiced. Here we show that Vernier learning can be actuated by less location-specific orientation or motion-direction learning to transfer to completely untrained retinal locations. This “piggybacking” effect occurs even if both tasks are trained at the same retinal location. However, piggybacking does not occur when the Vernier task is paired with a more location-specific contrast-discrimination task. This previously unknown complexity challenges the current understanding of perceptual learning and its specificity/transfer. Orientation and motion-direction learning, but not contrast and Vernier learning, appears to activate a global process that allows learning transfer to untrained locations. Moreover, when paired with orientation or motion-direction learning, Vernier learning may be “piggybacked” by the activated global process to transfer to other untrained retinal locations. How this task-specific global activation process is achieved is as yet unknown. PMID:25398974

  3. Declarative language design for interactive visualization.

    PubMed

    Heer, Jeffrey; Bostock, Michael

    2010-01-01

    We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.

  4. Impact of feature saliency on visual category learning.

    PubMed

    Hammer, Rubi

    2015-01-01

    People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the 'essence' of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects in order to learn the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between a few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, and also discuss the kinds of supervisory information that enable reflective categorization. Arguably, the principles debated here are often ignored in categorization studies.

  5. Impact of feature saliency on visual category learning

    PubMed Central

    Hammer, Rubi

    2015-01-01

    People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the ‘essence’ of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects in order to learn the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between a few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, and also discuss the kinds of supervisory information that enable reflective categorization. Arguably, the principles debated here are often ignored in categorization studies. PMID:25954220

  6. 4D ASL-based MR angiography for visualization of distal arteries and leptomeningeal collateral vessels in moyamoya disease: a comparison of techniques.

    PubMed

    Togao, Osamu; Hiwatashi, Akio; Obara, Makoto; Yamashita, Koji; Momosaka, Daichi; Nishimura, Ataru; Arimura, Koichi; Hata, Nobuhiro; Yoshimoto, Koji; Iihara, Koji; Van Cauteren, Marc; Honda, Hiroshi

    2018-05-08

    To evaluate the performance of four-dimensional pseudo-continuous arterial spin labeling (4D-pCASL)-based angiography using CENTRA-keyhole and view sharing (4D-PACK) in the visualization of flow dynamics in distal cerebral arteries and leptomeningeal anastomosis (LMA) collaterals in moyamoya disease in comparison with contrast inherent inflow-enhanced multiphase angiography (CINEMA), with reference to digital subtraction angiography (DSA). Thirty-two cerebral hemispheres from 19 patients with moyamoya disease (mean age, 29.7 ± 19.6 years; five males, 14 females) underwent both 4D-MR angiography and DSA. Qualitative evaluations included the visualization of anterograde middle cerebral artery (MCA) flow and retrograde flow via LMA collaterals with reference to DSA. Quantitative evaluations included assessments of the contrast-to-noise ratio (CNR) on these vessels. The linear mixed-effect model was used to compare the 4D-PACK and CINEMA methods. The vessel visualization scores were significantly higher with 4D-PACK than with CINEMA in the visualization of anterograde flow for both Observer 1 (CINEMA, 3.53 ± 1.39; 4D-PACK, 4.53 ± 0.80; p < 0.0001) and Observer 2 (CINEMA, 3.50 ± 1.39; 4D-PACK, 4.31 ± 0.86; p = 0.0009). The scores were higher with 4D-PACK than with CINEMA in the visualization of retrograde flow for both Observer 1 (CINEMA, 3.44 ± 1.05; 4D-PACK, 4.47 ± 0.88; p < 0.0001) and Observer 2 (CINEMA, 3.19 ± 1.20; 4D-PACK, 4.38 ± 0.91; p < 0.0001). The maximum CNR in the anterograde flow was higher in 4D-PACK (40.1 ± 16.1, p = 0.0001) than in CINEMA (27.0 ± 16.6). The maximum CNR in the retrograde flow was higher in 4D-PACK (36.1 ± 10.0, p < 0.0001) than in CINEMA (15.4 ± 8.0). The 4D-PACK provided better visualization and higher CNRs in distal cerebral arteries and LMA collaterals compared with CINEMA in patients with this disease. • The 4D-PACK enables good visualization of distal cerebral arteries in moyamoya disease. • The 4D-PACK enables direct visualization of leptomeningeal collateral vessels in moyamoya disease. • Vessel visualization by 4D-PACK can be useful in assessing cerebral hemodynamics.
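
    For readers unfamiliar with the CNR metric reported above, the short Python sketch below computes one common definition (difference of mean ROI signals divided by the background standard deviation) on synthetic data. The study's exact ROI placement and noise estimate are not specified here, so the masks and numbers are illustrative assumptions.

        import numpy as np

        def cnr(image, vessel_mask, background_mask):
            """Contrast-to-noise ratio: mean signal difference over background noise."""
            vessel = image[vessel_mask]
            background = image[background_mask]
            return (vessel.mean() - background.mean()) / background.std(ddof=1)

        rng = np.random.default_rng(1)
        img = rng.normal(100.0, 5.0, size=(64, 64))   # synthetic background signal
        img[30:34, :] += 40.0                         # synthetic "vessel" band
        vmask = np.zeros(img.shape, dtype=bool); vmask[30:34, :] = True
        bmask = np.zeros(img.shape, dtype=bool); bmask[:20, :] = True
        print("CNR:", round(cnr(img, vmask, bmask), 1))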

  7. Improved Visualization of Gastrointestinal Slow Wave Propagation Using a Novel Wavefront-Orientation Interpolation Technique.

    PubMed

    Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; OGrady, Gregory; Cheng, Leo K; Angeli, Timothy R

    2018-02-01

    High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust for atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation-time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy of 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and was accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error of manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.
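
    The Python sketch below is a toy rendering of the wavefront-orientation idea: estimate the local wavefront normal from the activation-time gradient and fill a missing electrode with the mean AT of its two neighbours along that direction. The grid layout, gradient-based normal estimate, and neighbour selection are simplifying assumptions rather than the published algorithm.

        import numpy as np

        at = np.array([[0.0, 0.1, 0.2, 0.3],
                       [0.1, 0.2, np.nan, 0.4],       # one electrode's AT is missing
                       [0.2, 0.3, 0.4, 0.5],
                       [0.3, 0.4, 0.5, 0.6]])         # activation times in seconds

        def interpolate_missing(at):
            filled = at.copy()
            gy, gx = np.gradient(np.nan_to_num(at, nan=np.nanmean(at)))
            for r, c in zip(*np.where(np.isnan(at))):
                # Wavefront normal is approximated by the AT gradient direction;
                # average the two neighbours one step along (and against) it.
                d = np.array([gy[r, c], gx[r, c]])
                d = d / (np.linalg.norm(d) + 1e-12)
                step = np.round(d).astype(int)
                n1 = (r + step[0], c + step[1])
                n2 = (r - step[0], c - step[1])
                filled[r, c] = np.nanmean([at[n1], at[n2]])
            return filled

        print(interpolate_missing(at)[1, 2])          # approximately 0.3 s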

  8. Telescopic multi-resolution augmented reality

    NASA Astrophysics Data System (ADS)

    Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold

    2014-05-01

    To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known physics, biology, and chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us not only to achieve multi-scale, telescopic visualization of real and virtual information, but also to conduct thought experiments through AR. As a result of this consistency, the framework allows us to explore a large-dimensionality parameter space of measured and unmeasured regions. Toward this direction, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as 'make believe' entertainment in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, to be pedagogically visualized in consistent, zoom-in-and-out, multi-scale approximations.

  9. Integration of bio-inspired, control-based visual and olfactory data for the detection of an elusive target

    NASA Astrophysics Data System (ADS)

    Duong, Tuan A.; Duong, Nghi; Le, Duong

    2017-01-01

    In this paper, we present an integration technique using a bio-inspired, control-based visual and olfactory receptor system to search for elusive targets in practical environments where the targets cannot be seen obviously in either sensory data stream. The bio-inspired visual system is based on a model of the extended visual pathway, which consists of saccadic eye movements and the visual pathway (vertebrate retina, lateral geniculate nucleus, and visual cortex), to enable powerful target detection from noisy, partial, and incomplete visual data. The olfactory receptor algorithm, namely spatial invariant independent component analysis, which was developed based on data from the olfactory receptor-electronic nose (enose) of Caltech, is adopted to enable odorant target detection in an unknown environment. The integration of the two systems is a vital approach and sets up a cornerstone for effective, low-cost miniaturized UAVs or fly robots for future DOD and NASA missions, as well as for security systems in Internet of Things environments.

  10. Properties of V1 Neurons Tuned to Conjunctions of Visual Features: Application of the V1 Saliency Hypothesis to Visual Search behavior

    PubMed Central

    Zhaoping, Li; Zhe, Li

    2012-01-01

    From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target. PMID:22719829

  11. Properties of V1 neurons tuned to conjunctions of visual features: application of the V1 saliency hypothesis to visual search behavior.

    PubMed

    Zhaoping, Li; Zhe, Li

    2012-01-01

    From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis that the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less known V1 neurons tuned simultaneously or conjunctively in two feature dimensions. The visual search is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant feature target (e.g., a CO target) from that predicted by a race between the RTs for the two corresponding single feature targets (e.g., C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single feature tuned cells in response to the redundant feature targets, and this occurs more frequently for the MO-tuned cells such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron to dictate saliency for an MO target.
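
    The race-model baseline mentioned in this abstract can be made concrete with a few lines of Python: the redundant-target (e.g., CO) reaction time is modelled as the minimum of the two single-feature reaction times, and observed shortening beyond that prediction points to co-activation by conjunctively tuned cells. The RT distributions below are arbitrary illustrative choices, not the study's data.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000
        rt_c = rng.lognormal(mean=np.log(0.55), sigma=0.2, size=n)   # colour-target RTs (s)
        rt_o = rng.lognormal(mean=np.log(0.60), sigma=0.2, size=n)   # orientation-target RTs (s)

        rt_race = np.minimum(rt_c, rt_o)      # race-model prediction for a CO target
        print(f"mean RT: C={rt_c.mean():.3f} s, O={rt_o.mean():.3f} s, "
              f"race prediction for CO={rt_race.mean():.3f} s")
        # Observed CO reaction times reliably shorter than this prediction would indicate
        # co-activation, consistent with a contribution from conjunctively tuned (CO) cells.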

  12. How Lovebirds Maneuver Rapidly Using Super-Fast Head Saccades and Image Feature Stabilization

    PubMed Central

    Kress, Daniel; van Bokhorst, Evelien; Lentink, David

    2015-01-01

    Diurnal flying animals such as birds depend primarily on vision to coordinate their flight path during goal-directed flight tasks. To extract the spatial structure of the surrounding environment, birds are thought to use retinal image motion (optical flow) that is primarily induced by motion of their head. It is unclear what gaze behaviors birds perform to support visuomotor control during rapid maneuvering flight in which they continuously switch between flight modes. To analyze this, we measured the gaze behavior of rapidly turning lovebirds in a goal-directed task: take-off and fly away from a perch, turn on a dime, and fly back and land on the same perch. High-speed flight recordings revealed that rapidly turning lovebirds perform a remarkable stereotypical gaze behavior with peak saccadic head turns up to 2700 degrees per second, as fast as insects, enabled by fast neck muscles. In between saccades, gaze orientation is held constant. By comparing saccade and wingbeat phase, we find that these super-fast saccades are coordinated with the downstroke when the lateral visual field is occluded by the wings. Lovebirds thus maximize visual perception by overlying behaviors that impair vision, which helps coordinate maneuvers. Before the turn, lovebirds keep a high contrast edge in their visual midline. Similarly, before landing, the lovebirds stabilize the center of the perch in their visual midline. The perch on which the birds land swings, like a branch in the wind, and we find that retinal size of the perch is the most parsimonious visual cue to initiate landing. Our observations show that rapidly maneuvering birds use precisely timed stereotypic gaze behaviors consisting of rapid head turns and frontal feature stabilization, which facilitates optical flow based flight control. Similar gaze behaviors have been reported for visually navigating humans. This finding can inspire more effective vision-based autopilots for drones. PMID:26107413

  13. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas

    Mass spectrometry imaging (MSI) enables researchers to directly probe endogenous molecules within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements; these optimizations are critical enablers of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely, to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.
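
    The storage idea described above can be illustrated with a minimal h5py sketch in Python: write an MSI cube to HDF5 with chunking and compression so that both single ion-image and single-spectrum reads touch only a few chunks. The file name, array shape, chunk sizes, and attribute are assumptions; this is not the OpenMSI format itself.

        import numpy as np
        import h5py

        nx, ny, nmz = 50, 50, 1000                    # pixels x pixels x m/z bins (toy size)
        rng = np.random.default_rng(0)
        data = rng.integers(0, 1000, size=(nx, ny, nmz), dtype=np.uint16)

        with h5py.File("msi_example.h5", "w") as f:
            dset = f.create_dataset(
                "msidata",
                data=data,
                chunks=(10, 10, 250),                 # balances ion-image vs. spectrum access
                compression="gzip",
                compression_opts=4,
            )
            dset.attrs["instrument"] = "example"      # metadata travels with the data

        with h5py.File("msi_example.h5", "r") as f:
            ion_image = f["msidata"][:, :, 500]       # one ion image (all pixels, one m/z bin)
            spectrum = f["msidata"][25, 10, :]        # one pixel's full spectrum
        print(ion_image.shape, spectrum.shape)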

  14. "Relative CIR": an image enhancement and visualization technique

    USGS Publications Warehouse

    Fleming, Michael D.

    1993-01-01

    Many techniques exist to spectrally and spatially enhance digital multispectral scanner data. One technique enhances an image while keeping the colors as they would appear in a color-infrared (CIR) image. This "relative CIR" technique generates an image that is both spectrally and spatially enhanced, while displaying a maximum range of colors. The technique enables an interpreter to visualize either spectral or land cover classes by their relative CIR characteristics. A relative CIR image is generated by developing spectral statistics for each class in the classification and then, using a nonparametric approach for spectral enhancement, ranking the means of the classes for each band. A 3 by 3 pixel smoothing filter is applied to the classification for spatial enhancement, and the classes are mapped to the representative rank for each band. Practical applications of the technique include displaying an image classification product as a CIR image that was not derived directly from a spectral image, visualizing how a land cover classification would look as a CIR image, and displaying a spectral classification or intermediate product that will be used to label spectral classes.
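
    A rough Python sketch of the relative CIR procedure is given below: rank each class's mean value per band, smooth the class map, and display each class by its per-band ranks. The 3x3 majority filter, the 0-255 rank stretch, and the band ordering are simplifying assumptions standing in for the author's exact smoothing and display choices.

        import numpy as np
        from scipy import ndimage, stats

        def relative_cir(classification, bands):
            """classification: (H, W) integer class labels; bands: (H, W, 3) NIR/R/G values."""
            classes = np.unique(classification)
            # Mean of each class in each band, then a nonparametric (rank) enhancement.
            means = np.array([[bands[classification == c, b].mean() for b in range(3)]
                              for c in classes])
            ranks = np.argsort(np.argsort(means, axis=0), axis=0)     # 0 .. n_classes-1 per band
            ranks = (255 * ranks / max(len(classes) - 1, 1)).astype(np.uint8)
            # 3x3 majority (mode) filter as a stand-in for the spatial smoothing step.
            smoothed = ndimage.generic_filter(
                classification, lambda w: stats.mode(w, keepdims=False).mode, size=3)
            out = np.zeros(classification.shape + (3,), dtype=np.uint8)
            for i, c in enumerate(classes):
                out[smoothed == c] = ranks[i]
            return out

        labels = np.random.default_rng(0).integers(0, 4, size=(60, 60))
        bands = np.random.default_rng(1).random((60, 60, 3))
        print(relative_cir(labels, bands).shape)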

  15. Imaging the distribution of individual platinum-based anticancer drug molecules attached to single-wall carbon nanotubes

    PubMed Central

    Bhirde, Ashwin A; Sousa, Alioscka A; Patel, Vyomesh; Azari, Afrouz A; Gutkind, J Silvio; Leapman, Richard D; Rusling, James F

    2009-01-01

    Aims: To image the distribution of drug molecules attached to single-wall carbon nanotubes (SWNTs). Materials & methods: Herein we report the use of scanning transmission electron microscopy (STEM) for atomic-scale visualization and quantitation of single platinum-based drug molecules attached to SWNTs designed for targeted drug delivery. Fourier transform infrared spectroscopy and energy-dispersive x-ray spectroscopy were used for characterization of the SWNT drug conjugates. Results: Z-contrast STEM imaging enabled visualization of the first-line anticancer drug cisplatin on the nanotubes at the single-molecule level. The identity and presence of cisplatin on the nanotubes were confirmed using energy-dispersive x-ray spectroscopy and Fourier transform infrared spectroscopy. STEM tomography was also used to provide additional insights concerning the nanotube conjugates. Finally, our observations provide a rationale for exploring the use of SWNT bioconjugates to selectively target and kill squamous cancer cells. Conclusion: Z-contrast STEM imaging provides a means for direct visualization of heavy-metal-containing molecules (i.e., cisplatin) attached to the surfaces of carbon SWNTs, along with their distribution and quantitation. PMID:19839812

  16. Fusion interfaces for tactical environments: An application of virtual reality technology

    NASA Technical Reports Server (NTRS)

    Haas, Michael W.

    1994-01-01

    The term Fusion Interface is defined as a class of interface which integrally incorporates both virtual and nonvirtual concepts and devices across the visual, auditory, and haptic sensory modalities. A fusion interface is a multisensory, virtually-augmented synthetic environment. A new facility has been developed within the Human Engineering Division of the Armstrong Laboratory dedicated to exploratory development of fusion interface concepts. This new facility, the Fusion Interfaces for Tactical Environments (FITE) Facility, is a specialized flight simulator enabling efficient concept development through rapid prototyping and direct experience of new fusion concepts. The FITE Facility also supports evaluation of fusion concepts by operational fighter pilots in an air combat environment. The facility is utilized by a multidisciplinary design team composed of human factors engineers, electronics engineers, computer scientists, experimental psychologists, and operational pilots. The FITE computational architecture is composed of twenty-five 80486-based microcomputers operating in real time. The microcomputers generate out-the-window visuals, in-cockpit and head-mounted visuals, localized auditory presentations, and haptic displays on the stick and rudder pedals, as well as executing weapons models, aerodynamic models, and threat models.

  17. High-pressure sapphire cell for phase equilibria measurements of CO2/organic/water systems.

    PubMed

    Pollet, Pamela; Ethier, Amy L; Senter, James C; Eckert, Charles A; Liotta, Charles L

    2014-01-24

    The high-pressure sapphire cell apparatus was constructed to visually determine the composition of multiphase systems without physical sampling. Specifically, the sapphire cell enables visual data collection from multiple loadings to solve a set of material balances and thereby precisely determine phase composition. Ternary phase diagrams can then be established to determine the proportion of each component in each phase at a given condition. In principle, any ternary system can be studied, although gas-liquid-liquid ternary systems are the specific examples discussed herein. For instance, the ternary THF-Water-CO2 system was studied at 25 and 40 °C and is described herein. Of key importance, this technique does not require sampling. Circumventing the possible disturbance of the system equilibrium upon sampling, inherent measurement errors, and the technical difficulties of physically sampling under pressure is a significant benefit of this technique. Perhaps as important, the sapphire cell also enables direct visual observation of the phase behavior. In fact, as the CO2 pressure is increased, the homogeneous THF-Water solution phase splits at about 2 MPa. With this technique, it was possible to easily and clearly observe the cloud point and determine the composition of the newly formed phases as a function of pressure. The data acquired with the sapphire cell technique can be used for many applications. In our case, we measured swelling and composition for tunable solvents such as gas-expanded liquids, gas-expanded ionic liquids, and Organic Aqueous Tunable Systems (OATS)(1-4). For the last of these, OATS, the high-pressure sapphire cell enabled the study of (1) phase behavior as a function of pressure and temperature, (2) the composition of each phase (gas-liquid-liquid) as a function of pressure and temperature, and (3) catalyst partitioning in the two liquid phases as a function of pressure and composition. Finally, the sapphire cell is an especially effective tool for gathering accurate and reproducible measurements in a timely fashion.
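
    The material-balance step mentioned above can be illustrated as a small least-squares problem: at fixed temperature and pressure, the two coexisting phases have fixed (unknown) concentrations, so each loading k with measured phase volumes V1_k and V2_k and a known total amount n_k of a component gives V1_k*c1 + V2_k*c2 = n_k. The Python sketch below solves such a system; the volumes and mole numbers are invented for illustration and are not data from the study.

        import numpy as np

        # Measured phase volumes (L) for three loadings, and total moles of one component charged.
        V = np.array([[0.040, 0.060],
                      [0.055, 0.045],
                      [0.070, 0.030]])
        n_total = np.array([0.52, 0.57, 0.61])        # mol charged in each loading (invented)

        c, residuals, rank, _ = np.linalg.lstsq(V, n_total, rcond=None)
        print(f"phase 1: {c[0]:.2f} mol/L, phase 2: {c[1]:.2f} mol/L")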

  18. STAR: an integrated solution to management and visualization of sequencing data

    PubMed Central

    Wang, Tao; Liu, Jie; Shen, Li; Tonti-Filippini, Julian; Zhu, Yun; Jia, Haiyang; Lister, Ryan; Whitaker, John W.; Ecker, Joseph R.; Millar, A. Harvey; Ren, Bing; Wang, Wei

    2013-01-01

    Motivation: Easy visualization of complex data features is a necessary step in conducting studies on next-generation sequencing (NGS) data. We developed STAR, an integrated web application that enables online management, visualization and track-based analysis of NGS data. Results: STAR is a multilayer web service system. On the client side, STAR leverages JavaScript, HTML5 Canvas and asynchronous communications to deliver a smoothly scrolling, desktop-like graphical user interface with a suite of in-browser analysis tools that range from simple track configuration controls to sophisticated feature detection within datasets. On the server side, STAR supports private session state retention via an account management system and provides data management modules that enable collection, visualization and analysis of third-party sequencing data from the public domain, with thousands of tracks hosted to date. Overall, STAR represents a next-generation data exploration solution matched to the requirements of NGS data, enabling both intuitive visualization and dynamic analysis of data. Availability and implementation: The STAR browser system is freely available on the web at http://wanglab.ucsd.edu/star/browser and https://github.com/angell1117/STAR-genome-browser. Contact: wei-wang@ucsd.edu PMID:24078702

  19. A visual identification key utilizing both gestalt and analytic approaches to identification of Carices present in North America (Plantae, Cyperaceae)

    PubMed Central

    2013-01-01

    Abstract Images are a critical part of the identification process because they enable direct, immediate and relatively unmediated comparisons between a specimen being identified and one or more reference specimens. The Carices Interactive Visual Identification Key (CIVIK) is a novel tool for identification of North American Carex species, the largest vascular plant genus in North America, and two less numerous closely-related genera, Cymophyllus and Kobresia. CIVIK incorporates 1288 high-resolution tiled image sets that allow users to zoom in to view minute structures that are crucial at times for identification in these genera. Morphological data are derived from the earlier Carex Interactive Identification Key (CIIK) which in turn used data from the Flora of North America treatments. In this new iteration, images can be viewed in a grid or histogram format, allowing multiple representations of data. In both formats the images are fully zoomable. PMID:24723777

  20. ePMV embeds molecular modeling into professional animation software environments.

    PubMed

    Johnson, Graham T; Autin, Ludovic; Goodsell, David S; Sanner, Michel F; Olson, Arthur J

    2011-03-09

    Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties, and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. ePMV Embeds Molecular Modeling into Professional Animation Software Environments

    PubMed Central

    Johnson, Graham T.; Autin, Ludovic; Goodsell, David S.; Sanner, Michel F.; Olson, Arthur J.

    2011-01-01

    SUMMARY Increasingly complex research has made it more difficult to prepare data for publication, education, and outreach. Many scientists must also wade through black-box code to interface computational algorithms from diverse sources to supplement their bench work. To reduce these barriers, we have developed an open-source plug-in, embedded Python Molecular Viewer (ePMV), that runs molecular modeling software directly inside of professional 3D animation applications (hosts) to provide simultaneous access to the capabilities of these newly connected systems. Uniting host and scientific algorithms into a single interface allows users from varied backgrounds to assemble professional quality visuals and to perform computational experiments with relative ease. By enabling easy exchange of algorithms, ePMV can facilitate interdisciplinary research, smooth communication between broadly diverse specialties and provide a common platform to frame and visualize the increasingly detailed intersection(s) of cellular and molecular biology. PMID:21397181

  2. Bimolecular fluorescence complementation (BiFC) analysis as a probe of protein interactions in living cells.

    PubMed

    Kerppola, Tom K

    2008-01-01

    Protein interactions are a fundamental mechanism for the generation of biological regulatory specificity. The study of protein interactions in living cells is of particular significance because the interactions that occur in a particular cell depend on the full complement of proteins present in the cell and the external stimuli that influence the cell. Bimolecular fluorescence complementation (BiFC) analysis enables direct visualization of protein interactions in living cells. The BiFC assay is based on the association between two nonfluorescent fragments of a fluorescent protein when they are brought in proximity to each other by an interaction between proteins fused to the fragments. Numerous protein interactions have been visualized using the BiFC assay in many different cell types and organisms. The BiFC assay is technically straightforward and can be performed using standard molecular biology and cell culture reagents and a regular fluorescence microscope or flow cytometer.

  3. Visual statistical learning is related to natural language ability in adults: An ERP study.

    PubMed

    Daltrozzo, Jerome; Emerson, Samantha N; Deocampo, Joanne; Singh, Sonia; Freggens, Marjorie; Branum-Martin, Lee; Conway, Christopher M

    2017-03-01

    Statistical learning (SL) is believed to enable language acquisition by allowing individuals to learn regularities within linguistic input. However, neural evidence supporting a direct relationship between SL and language ability is scarce. We investigated whether there are associations between event-related potential (ERP) correlates of SL and language abilities while controlling for the general level of selective attention. Seventeen adults completed tests of visual SL, receptive vocabulary, grammatical ability, and sentence completion. Response times and ERPs showed that SL is related to receptive vocabulary and grammatical ability. ERPs indicated that the relationship between SL and grammatical ability was independent of attention while the association between SL and receptive vocabulary depended on attention. The implications of these dissociative relationships in terms of underlying mechanisms of SL and language are discussed. These results further elucidate the cognitive nature of the links between SL mechanisms and language abilities. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Variability in visual working memory ability limits the efficiency of perceptual decision making.

    PubMed

    Ester, Edward F; Ho, Tiffany C; Brown, Scott D; Serences, John T

    2014-04-02

    The ability to make rapid and accurate decisions based on limited sensory information is a critical component of visual cognition. Available evidence suggests that simple perceptual discriminations are based on the accumulation and integration of sensory evidence over time. However, the memory system(s) mediating this accumulation are unclear. One candidate system is working memory (WM), which enables the temporary maintenance of information in a readily accessible state. Here, we show that individual variability in WM capacity is strongly correlated with the speed of evidence accumulation in speeded two-alternative forced choice tasks. This relationship generalized across different decision-making tasks, and could not be easily explained by variability in general arousal or vigilance. Moreover, we show that performing a difficult discrimination task while maintaining a concurrent memory load has a deleterious effect on the latter, suggesting that WM storage and decision making are directly linked.
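
    The evidence-accumulation account referenced above can be sketched with a simple random-walk (diffusion-style) simulation in Python, in which a higher drift rate, the quantity reported to correlate with WM capacity, yields faster and more accurate two-alternative decisions. All parameter values below are arbitrary illustrative choices, not fits to the study's data.

        import numpy as np

        def simulate_decisions(drift, bound=1.0, noise=1.0, dt=0.005, n=1000, seed=0):
            """Random-walk accumulator: returns (mean RT in s, proportion correct)."""
            rng = np.random.default_rng(seed)
            rts, correct = [], []
            for _ in range(n):
                x, t = 0.0, 0.0
                while abs(x) < bound:
                    x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                rts.append(t)
                correct.append(x > 0)                 # reaching the upper bound = correct response
            return float(np.mean(rts)), float(np.mean(correct))

        for drift in (0.5, 1.5):                      # "low" vs. "high" accumulation rate
            rt, acc = simulate_decisions(drift)
            print(f"drift={drift}: mean RT={rt:.2f} s, accuracy={acc:.1%}")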

  5. Visual statistical learning is related to natural language ability in adults: An ERP Study

    PubMed Central

    Daltrozzo, Jerome; Emerson, Samantha N.; Deocampo, Joanne; Singh, Sonia; Freggens, Marjorie; Branum-Martin, Lee; Conway, Christopher M.

    2017-01-01

    Statistical learning (SL) is believed to enable language acquisition by allowing individuals to learn regularities within linguistic input. However, neural evidence supporting a direct relationship between SL and language ability is scarce. We investigated whether there are associations between event-related potential (ERP) correlates of SL and language abilities while controlling for the general level of selective attention. Seventeen adults completed tests of visual SL, receptive vocabulary, grammatical ability, and sentence completion. Response times and ERPs showed that SL is related to receptive vocabulary and grammatical ability. ERPs indicated that the relationship between SL and grammatical ability was independent of attention while the association between SL and receptive vocabulary depended on attention. The implications of these dissociative relationships in terms of underlying mechanisms of SL and language are discussed. These results further elucidate the cognitive nature of the links between SL mechanisms and language abilities. PMID:28086142

  6. VisBOL: Web-Based Tools for Synthetic Biology Design Visualization.

    PubMed

    McLaughlin, James Alastair; Pocock, Matthew; Mısırlı, Göksel; Madsen, Curtis; Wipat, Anil

    2016-08-19

    VisBOL is a Web-based application that allows the rendering of genetic circuit designs, enabling synthetic biologists to visually convey designs in SBOL visual format. VisBOL designs can be exported to formats including PNG and SVG images to be embedded in Web pages, presentations and publications. The VisBOL tool enables the automated generation of visualizations from designs specified using the Synthetic Biology Open Language (SBOL) version 2.0, as well as a range of well-known bioinformatics formats including GenBank and Pigeoncad notation. VisBOL is provided both as a user accessible Web site and as an open-source (BSD) JavaScript library that can be used to embed diagrams within other content and software.

  7. [Development of fluorescent probes for bone imaging in vivo: fluorescent probes for intravital imaging of osteoclast activity].

    PubMed

    Minoshima, Masafumi; Kikuchi, Kazuya

    Fluorescent molecules are widely used as a tool to directly visualize target biomolecules in vivo. Fluorescent probes have the advantage that a desired function can be rendered through rational design. For in vivo bone imaging, fluorescent probes should be delivered to bone tissue upon administration. Recently, a fluorescent probe for detecting osteoclast activity was developed. The fluorescent probe has an acid-sensitive fluorescence property, specific delivery to bone tissue, and durability against laser irradiation, which enabled real-time intravital imaging of bone-resorbing osteoclasts over a long period of time.

  8. Operating a Geiger Müller tube using a PC sound card

    NASA Astrophysics Data System (ADS)

    Azooz, A. A.

    2009-01-01

    In this paper, a simple MATLAB-based PC program that enables the computer to function as a replacement for the electronic scaler-counter system associated with a Geiger-Müller (GM) tube is described. The program utilizes the ability of MATLAB to acquire data directly from the computer sound card. The signal from the GM tube is applied to the computer sound card via the line-in port. All standard GM experiments, pulse shape experiments, and statistical analysis experiments can be carried out using this system. A new visual demonstration of dead time effects is also presented.
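
    A minimal Python analogue of the counting step is shown below: GM pulses in a sound-card buffer are counted by detecting upward threshold crossings. The sample rate, threshold, and synthetic pulse train are illustrative assumptions; the original program is MATLAB-based and acquires the buffer from the sound card directly.

        import numpy as np

        fs = 44_100                                   # assumed sound-card sample rate (Hz)
        t = np.arange(0, 1.0, 1 / fs)
        rng = np.random.default_rng(0)
        signal = 0.01 * rng.standard_normal(t.size)   # baseline noise
        for start in rng.choice(t.size - 50, size=120, replace=False):
            signal[start:start + 50] += np.exp(-np.arange(50) / 10.0)   # synthetic GM pulses

        threshold = 0.3
        above = signal > threshold
        rising_edges = np.flatnonzero(~above[:-1] & above[1:])          # count upward crossings only
        print(f"counted {rising_edges.size} pulses in 1 s of buffer")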

  9. Real-time visual mosaicking and navigation on the seafloor

    NASA Astrophysics Data System (ADS)

    Richmond, Kristof

    Remote robotic exploration holds vast potential for gaining knowledge about extreme environments accessible to humans only with great difficulty. Robotic explorers have been sent to other solar system bodies, and on this planet into inaccessible areas such as caves and volcanoes. In fact, the largest unexplored land area on earth lies hidden in the airless cold and intense pressure of the ocean depths. Exploration in the oceans is further hindered by water's high absorption of electromagnetic radiation, which both inhibits remote sensing from the surface, and limits communications with the bottom. The Earth's oceans thus provide an attractive target for developing remote exploration capabilities. As a result, numerous robotic vehicles now routinely survey this environment, from remotely operated vehicles piloted over tethers from the surface to torpedo-shaped autonomous underwater vehicles surveying the mid-waters. However, these vehicles are limited in their ability to navigate relative to their environment. This limits their ability to return to sites with precision without the use of external navigation aids, and to maneuver near and interact with objects autonomously in the water and on the sea floor. The enabling of environment-relative positioning on fully autonomous underwater vehicles will greatly extend their power and utility for remote exploration in the furthest reaches of the Earth's waters---even under ice and under ground---and eventually in extraterrestrial liquid environments such as Europa's oceans. This thesis presents an operational, fielded system for visual navigation of underwater robotic vehicles in unexplored areas of the seafloor. The system does not depend on external sensing systems, using only instruments on board the vehicle. As an area is explored, a camera is used to capture images and a composite view, or visual mosaic, of the ocean bottom is created in real time. Side-to-side visual registration of images is combined with dead-reckoned navigation information in a framework allowing the creation and updating of large, locally consistent mosaics. These mosaics are used as maps in which the vehicle can navigate and localize itself with respect to points in the environment. The system achieves real-time performance in several ways. First, wherever possible, direct sensing of motion parameters is used in place of extracting them from visual data. Second, trajectories are chosen to enable a hierarchical search for side-to-side links which limits the amount of searching performed without sacrificing robustness. Finally, the map estimation is formulated as a sparse, linear information filter allowing rapid updating of large maps. The visual navigation enabled by the work in this thesis represents a new capability for remotely operated vehicles, and an enabling capability for a new generation of autonomous vehicles which explore and interact with remote, unknown and unstructured underwater environments. The real-time mosaic can be used on current tethered vehicles to create pilot aids and provide a vehicle user with situational awareness of the local environment and the position of the vehicle within it. For autonomous vehicles, the visual navigation system enables precise environment-relative positioning and mapping, without requiring external navigation systems, opening the way for ever-expanding autonomous exploration capabilities. 
    The utility of this system was demonstrated in the field at sites of scientific interest using the ROVs Ventana and Tiburon operated by the Monterey Bay Aquarium Research Institute. A number of sites in and around Monterey Bay, California were mosaicked using the system, culminating in a complete imaging of the wreck site of the USS Macon , where real-time visual mosaics containing thousands of images were generated while navigating using only sensor systems on board the vehicle.
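
    The map-estimation idea summarized above (fusing dead-reckoned motion with side-to-side image-registration links in a sparse, linear information filter) can be illustrated with a toy one-dimensional Python sketch. The dimensionality, noise values, and measurements are invented; a real mosaicking system works with multi-dimensional poses and far larger maps.

        import numpy as np

        n = 4                                         # poses along a short survey line
        Lam = np.zeros((n, n))                        # information matrix (sparse in practice)
        eta = np.zeros(n)                             # information vector

        def add_relative(i, j, z, var):
            """Fold in the constraint x[j] - x[i] = z with variance var (information form)."""
            H = np.zeros(n)
            H[i], H[j] = -1.0, 1.0
            Lam[:] += np.outer(H, H) / var
            eta[:] += H * z / var

        Lam[0, 0] += 1e6                              # anchor the first pose at 0
        add_relative(0, 1, 1.0, 0.10)                 # dead-reckoned steps (noisier)
        add_relative(1, 2, 1.1, 0.10)
        add_relative(2, 3, 0.9, 0.10)
        add_relative(0, 3, 3.0, 0.01)                 # side-to-side image-registration link (tighter)
        x = np.linalg.solve(Lam, eta)                 # recover the pose estimates
        print(np.round(x, 3))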

  10. A high-quality high-fidelity visualization of the September 11 attack on the World Trade Center.

    PubMed

    Rosen, Paul; Popescu, Voicu; Hoffmann, Christoph; Irfanoglu, Ayhan

    2008-01-01

In this application paper, we describe the efforts of a multidisciplinary team towards producing a visualization of the September 11 Attack on the North Tower of New York's World Trade Center. The visualization was designed to meet two requirements. First, the visualization had to depict the impact with high fidelity, by closely following the laws of physics. Second, the visualization had to be readily understandable to a nonexpert user. This was achieved by first designing and computing a finite-element analysis (FEA) simulation of the impact between the aircraft and the top 20 stories of the building, and then by visualizing the FEA results with a state-of-the-art commercial animation system. The visualization was enabled by an automatic translator that converts the simulation data into a 3D scene for the animation system. We built upon a previously developed translator. The translator was substantially extended to enable and control visualization of fire and of disintegrating elements, to better scale with the number of nodes and number of states, to handle beam elements with complex profiles, and to handle smoothed-particle hydrodynamics liquid representations. The resulting translator is a powerful automatic and scalable tool for high-quality visualization of FEA results.

  11. QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks.

    PubMed

    Thibodeau, Asa; Márquez, Eladio J; Luo, Oscar; Ruan, Yijun; Menghi, Francesca; Shin, Dong-Guk; Stitzel, Michael L; Vera-Licona, Paola; Ucar, Duygu

    2016-06-01

Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network-based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. QuIN's web server is available at http://quin.jax.org. QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and a MySQL database; the source code is available under the GPLv3 license on GitHub: https://github.com/UcarLab/QuIN/.
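
    QuIN itself is a Java/JavaScript web application, so the following Python sketch (using networkx and invented interaction data) only illustrates the kinds of operations the abstract highlights: querying a chromatin interaction network by gene name or chromosome location, and prioritizing anchors with a simple network-based measure.

      # Illustrative only, not QuIN's code: nodes are genomic anchors, edges are
      # chromatin interactions; queries and ranking mimic the abstract's features.
      import networkx as nx

      # Hypothetical ChIA-PET-style interactions: pairs of (chrom, start, end) anchors.
      interactions = [
          (("chr1", 1000, 2000), ("chr1", 50000, 51000)),
          (("chr1", 50000, 51000), ("chr1", 90000, 91000)),
          (("chr1", 1000, 2000), ("chr2", 5000, 6000)),
      ]
      annotations = {("chr1", 50000, 51000): "GENE_A"}   # hypothetical gene annotation

      G = nx.Graph()
      for a, b in interactions:
          G.add_edge(a, b)
      for node, gene in annotations.items():
          G.nodes[node]["gene"] = gene

      def query_by_location(graph, chrom, pos):
          """Anchors overlapping a chromosome position, plus their interaction partners."""
          hits = [n for n in graph if n[0] == chrom and n[1] <= pos <= n[2]]
          return {h: list(graph.neighbors(h)) for h in hits}

      def query_by_gene(graph, gene):
          """Anchors annotated with a gene name, plus their interaction partners."""
          hits = [n for n, d in graph.nodes(data=True) if d.get("gene") == gene]
          return {h: list(graph.neighbors(h)) for h in hits}

      def prioritize(graph, top=3):
          """Rank anchors by degree centrality as a simple 'critical target' proxy."""
          ranked = sorted(nx.degree_centrality(graph).items(),
                          key=lambda kv: kv[1], reverse=True)
          return ranked[:top]

      print(query_by_location(G, "chr1", 1500))
      print(query_by_gene(G, "GENE_A"))
      print(prioritize(G))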

  12. Cell type-specific manipulation with GFP-dependent Cre recombinase.

    PubMed

    Tang, Jonathan C Y; Rudolph, Stephanie; Dhande, Onkar S; Abraira, Victoria E; Choi, Seungwon; Lapan, Sylvain W; Drew, Iain R; Drokhlyansky, Eugene; Huberman, Andrew D; Regehr, Wade G; Cepko, Constance L

    2015-09-01

    There are many transgenic GFP reporter lines that allow the visualization of specific populations of cells. Using such lines for functional studies requires a method that transforms GFP into a molecule that enables genetic manipulation. We developed a method that exploits GFP for gene manipulation, Cre recombinase dependent on GFP (CRE-DOG), a split component system that uses GFP and its derivatives to directly induce Cre/loxP recombination. Using plasmid electroporation and AAV viral vectors, we delivered CRE-DOG to multiple GFP mouse lines, which led to effective recombination selectively in GFP-labeled cells. Furthermore, CRE-DOG enabled optogenetic control of these neurons. Beyond providing a new set of tools for manipulation of gene expression selectively in GFP(+) cells, we found that GFP can be used to reconstitute the activity of a protein not known to have a modular structure, suggesting that this strategy might be applicable to a wide range of proteins.

  13. Bimolecular fluorescence complementation: visualization of molecular interactions in living cells.

    PubMed

    Kerppola, Tom K

    2008-01-01

A variety of experimental methods have been developed for the analysis of protein interactions. The majority of these methods either require disruption of the cells to detect molecular interactions or rely on indirect detection of the protein interaction. The bimolecular fluorescence complementation (BiFC) assay provides a direct approach for the visualization of molecular interactions in living cells and organisms. The BiFC approach is based on the facilitated association between two fragments of a fluorescent protein when the fragments are brought together by an interaction between proteins fused to the fragments. The BiFC approach has been used for visualization of interactions among a variety of structurally diverse interaction partners in many different cell types. It enables detection of transient complexes as well as complexes formed by a subpopulation of the interaction partners. It is essential that each experiment include negative controls in which the interface between the interaction partners has been mutated or deleted. The BiFC assay has been adapted for simultaneous visualization of multiple protein complexes in the same cell and the competition for shared interaction partners. A ubiquitin-mediated fluorescence complementation assay has also been developed for visualization of the covalent modification of proteins by ubiquitin family peptides. These fluorescence complementation assays have a great potential to illuminate a variety of biological interactions in the future.

  14. DeviceEditor visual biological CAD canvas

    PubMed Central

    2012-01-01

    Background Biological Computer Aided Design (bioCAD) assists the de novo design and selection of existing genetic components to achieve a desired biological activity, as part of an integrated design-build-test cycle. To meet the emerging needs of Synthetic Biology, bioCAD tools must address the increasing prevalence of combinatorial library design, design rule specification, and scar-less multi-part DNA assembly. Results We report the development and deployment of web-based bioCAD software, DeviceEditor, which provides a graphical design environment that mimics the intuitive visual whiteboard design process practiced in biological laboratories. The key innovations of DeviceEditor include visual combinatorial library design, direct integration with scar-less multi-part DNA assembly design automation, and a graphical user interface for the creation and modification of design specification rules. We demonstrate how biological designs are rendered on the DeviceEditor canvas, and we present effective visualizations of genetic component ordering and combinatorial variations within complex designs. Conclusions DeviceEditor liberates researchers from DNA base-pair manipulation, and enables users to create successful prototypes using standardized, functional, and visual abstractions. Open and documented software interfaces support further integration of DeviceEditor with other bioCAD tools and software platforms. DeviceEditor saves researcher time and institutional resources through correct-by-construction design, the automation of tedious tasks, design reuse, and the minimization of DNA assembly costs. PMID:22373390
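
    As a rough illustration of what visual combinatorial library design expands to underneath the canvas, the sketch below (not DeviceEditor code; the part names and the design rule are hypothetical) enumerates the cross product of interchangeable parts per position and prunes it with a design-specification rule.

      # Rough illustration only: a combinatorial library as the cross product of
      # interchangeable parts in each bin, filtered by a design rule.
      from itertools import product

      bins = {                          # hypothetical part names, one bin per position
          "promoter":   ["pA", "pB"],
          "rbs":        ["rbs1", "rbs2"],
          "cds":        ["gfp", "rfp"],
          "terminator": ["t1"],
      }

      def rule_no_pairing(parts, a, b):
          """Example design-specification rule: parts a and b must not co-occur."""
          return not (a in parts and b in parts)

      library = []
      for combo in product(*bins.values()):
          design = dict(zip(bins.keys(), combo))
          if rule_no_pairing(set(combo), "pB", "rfp"):
              library.append(design)

      print(len(library), "constructs pass the rules")
      print(library[0])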

  15. X-ray intravital microscopy for functional imaging in rat hearts using synchrotron radiation coronary microangiography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Umetani, K.; Fukushima, K.

    2013-03-15

An X-ray intravital microscopy technique was developed to enable in vivo visualization of the coronary, cerebral, and pulmonary arteries in rats without exposure of organs and with spatial resolution in the micrometer range and temporal resolution in the millisecond range. We have refined the system continually in terms of the spatial resolution and exposure time. X-rays transmitted through an object are detected by an X-ray direct-conversion type detector, which incorporates an X-ray SATICON pickup tube. The spatial resolution has been improved to 6 μm, yielding sharp images of small arteries. The exposure time has been shortened to around 2 ms using a new rotating-disk X-ray shutter, enabling imaging of beating rat hearts. Quantitative evaluations of the X-ray intravital microscopy technique were extracted from measurements of the smallest-detectable vessel size and detection of the vessel function. The smallest-diameter vessel viewed for measurements is determined primarily by the concentration of iodinated contrast material. The iodine concentration depends on the injection technique. We used ex vivo rat hearts under Langendorff perfusion for accurate evaluation. After the contrast agent is injected into the origin of the aorta in an isolated perfused rat heart, the contrast agent is delivered directly into the coronary arteries with minimum dilution. The vascular internal diameter response of coronary arterial circulation is analyzed to evaluate the vessel function. Small blood vessels of more than about 50 μm diameters were visualized clearly at heart rates of around 300 beats/min. Vasodilation compared to the control was observed quantitatively using drug manipulation. Furthermore, the apparent increase in the number of small vessels with diameters of less than about 50 μm was observed after the vasoactive agents increased the diameters of invisible small blood vessels to visible sizes. This technique is expected to offer the potential for direct investigation of mechanisms of vascular dysfunctions.

  16. Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.

    PubMed

    Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin

    2018-01-01

    We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.
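
    One of the transformations described above, building a clustered overview from the hierarchical structure in node names, can be sketched in a few lines. This is not the visualizer's code; the op names are invented, and the slash-delimited namespace convention is the only assumption carried over from the abstract.

      # Sketch of namespace-based clustering of a dataflow graph (not the
      # TensorFlow Graph Visualizer's implementation).
      import networkx as nx

      # Hypothetical op names in a slash-delimited namespace style.
      edges = [
          ("input/x", "layer1/matmul"),
          ("layer1/weights", "layer1/matmul"),
          ("layer1/matmul", "layer1/relu"),
          ("layer1/relu", "layer2/matmul"),
          ("layer2/weights", "layer2/matmul"),
          ("layer2/matmul", "loss/softmax"),
      ]

      def top_level(name):
          """Cluster key: the first component of the hierarchical node name."""
          return name.split("/")[0]

      g = nx.DiGraph(edges)
      clustered = nx.DiGraph()
      for u, v in g.edges():
          cu, cv = top_level(u), top_level(v)
          if cu != cv:                      # drop edges internal to a cluster
              clustered.add_edge(cu, cv)

      print(sorted(clustered.edges()))
      # [('input', 'layer1'), ('layer1', 'layer2'), ('layer2', 'loss')]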

  17. Experiences in using DISCUS for visualizing human communication

    NASA Astrophysics Data System (ADS)

    Groehn, Matti; Nieminen, Marko; Haho, Paeivi; Smeds, Riitta

    2000-02-01

In this paper, we present further improvements to the DISCUS software that can be used to record and analyze the flow and contents of business process simulation session discussions. The tool was initially introduced at the 'Visual Data Exploration and Analysis IV' conference. The initial features of the tool enabled the visualization of discussion flow in business process simulation sessions and the creation of SOM analyses. The improvements to the tool consist of additional visualization possibilities that enable quick on-line analyses and improved graphical statistics. We have also created the very first interface to audio data and implemented two ways to visualize it. We also outline additional possibilities to use the tool in other application areas: these include usability testing and the possibility to use the tool for capturing design rationale in a product development process. The data gathered with DISCUS may be used in other applications, and further work may be done with data mining techniques.

  18. A Critical Review of the Use of Virtual Reality in Construction Engineering Education and Training.

    PubMed

    Wang, Peng; Wu, Peng; Wang, Jun; Chi, Hung-Lin; Wang, Xiangyu

    2018-06-08

Virtual Reality (VR) has been rapidly recognized and implemented in construction engineering education and training (CEET) in recent years due to its benefits of providing an engaging and immersive environment. The objective of this review is to critically collect and analyze the VR applications in CEET, covering all VR-related journal papers published from 1997 to 2017. The review follows a systematic three-stage analysis of VR technologies, applications, and future directions. It is found that the VR technologies adopted for CEET evolve over time, from desktop-based VR, immersive VR, 3D game-based VR, to Building Information Modelling (BIM)-enabled VR. A sibling technology, Augmented Reality (AR), has also emerged in recent years for CEET adoption. These technologies have been applied in architecture and design visualization, construction health and safety training, equipment and operational task training, as well as structural analysis. Future research directions, including the integration of VR with emerging education paradigms and visualization technologies, have also been provided. The findings can help both researchers and educators integrate VR into their education and training programs to improve training performance.

  19. Designing solid-liquid interphases for sodium batteries.

    PubMed

    Choudhury, Snehashis; Wei, Shuya; Ozhabes, Yalcin; Gunceler, Deniz; Zachman, Michael J; Tu, Zhengyuan; Shin, Jung Hwan; Nath, Pooja; Agrawal, Akanksha; Kourkoutis, Lena F; Arias, Tomas A; Archer, Lynden A

    2017-10-12

Secondary batteries based on earth-abundant sodium metal anodes are desirable for both stationary and portable electrical energy storage. Room-temperature sodium metal batteries are impractical today because morphological instability during recharge drives rough, dendritic electrodeposition. Chemical instability of liquid electrolytes also leads to premature cell failure as a result of parasitic reactions with the anode. Here we use joint density-functional theoretical analysis to show that the surface diffusion barrier for sodium ion transport is a sensitive function of the chemistry of solid-electrolyte interphase. In particular, we find that a sodium bromide interphase presents an exceptionally low energy barrier to ion transport, comparable to that of metallic magnesium. We evaluate this prediction by means of electrochemical measurements and direct visualization studies. These experiments reveal an approximately three-fold reduction in activation energy for ion transport at a sodium bromide interphase. Direct visualization of sodium electrodeposition confirms large improvements in stability of sodium deposition at sodium bromide-rich interphases. The chemistry at the interface between electrolyte and electrode plays a critical role in determining battery performance. Here, the authors show that a NaBr enriched solid-electrolyte interphase can lower the surface diffusion barrier for sodium ions, enabling stable electrodeposition.
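
    The practical significance of a reduced activation energy can be seen from the Arrhenius relation, where the hop rate scales as exp(-Ea/kBT). The barrier values in the sketch below are illustrative placeholders, not numbers taken from the paper.

      # Illustrative Arrhenius arithmetic only; the barrier values are invented.
      import math

      kB_eV = 8.617e-5           # Boltzmann constant in eV/K
      T = 298.0                  # room temperature, K

      Ea_high = 0.30             # hypothetical barrier, eV
      Ea_low = Ea_high / 3.0     # "approximately three-fold reduction"

      ratio = math.exp(-(Ea_low - Ea_high) / (kB_eV * T))
      print(f"hop-rate enhancement at {T:.0f} K: ~{ratio:.1e}x")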

  20. Motion-related resource allocation in dynamic wireless visual sensor network environments.

    PubMed

    Katsenou, Angeliki V; Kondi, Lisimachos P; Parsopoulos, Konstantinos E

    2014-01-01

    This paper investigates quality-driven cross-layer optimization for resource allocation in direct sequence code division multiple access wireless visual sensor networks. We consider a single-hop network topology, where each sensor transmits directly to a centralized control unit (CCU) that manages the available network resources. Our aim is to enable the CCU to jointly allocate the transmission power and source-channel coding rates for each node, under four different quality-driven criteria that take into consideration the varying motion characteristics of each recorded video. For this purpose, we studied two approaches with a different tradeoff of quality and complexity. The first one allocates the resources individually for each sensor, whereas the second clusters them according to the recorded level of motion. In order to address the dynamic nature of the recorded scenery and re-allocate the resources whenever it is dictated by the changes in the amount of motion in the scenery, we propose a mechanism based on the particle swarm optimization algorithm, combined with two restarting schemes that either exploit the previously determined resource allocation or conduct a rough estimation of it. Experimental simulations demonstrate the efficiency of the proposed approaches.
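
    The optimization loop behind such a scheme can be made concrete with a generic particle swarm sketch. The code below does not implement the authors' quality-driven criteria or restarting schemes; it minimizes a stand-in cost over hypothetical per-node transmit power and source-coding-rate variables.

      # Generic (standard) particle swarm optimization, not the paper's criteria:
      # each particle encodes per-node power and coding rate; the swarm minimizes
      # a stand-in distortion-plus-energy cost.
      import numpy as np

      rng = np.random.default_rng(0)
      n_nodes, dim = 3, 6                  # 3 sensors x (power, rate) -- hypothetical
      n_particles, iters = 20, 100
      lo, hi = 0.1, 1.0                    # normalized bounds on every variable

      def cost(x):
          power, rate = x[:n_nodes], x[n_nodes:]
          distortion = np.sum(1.0 / (rate * power))   # stand-in video-quality penalty
          return distortion + 2.0 * np.sum(power)     # plus an energy penalty

      pos = rng.uniform(lo, hi, (n_particles, dim))
      vel = np.zeros((n_particles, dim))
      pbest = pos.copy()
      pbest_val = np.array([cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()

      w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients
      for _ in range(iters):
          r1, r2 = rng.random((2, n_particles, dim))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, lo, hi)
          vals = np.array([cost(p) for p in pos])
          improved = vals < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("best cost:", pbest_val.min(), "allocation:", np.round(gbest, 2))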

  1. PeptideDepot: flexible relational database for visual analysis of quantitative proteomic data and integration of existing protein information.

    PubMed

    Yu, Kebing; Salomon, Arthur R

    2009-12-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through MS/MS. Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to various experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our high throughput autonomous proteomic pipeline used in the automated acquisition and post-acquisition analysis of proteomic data.

  2. Towards Infusing Giovanni with a Semantic and Provenance Aware Visualization System

    NASA Astrophysics Data System (ADS)

    Del Rio, N.; Pinheiro da Silva, P.; Leptoukh, G. G.; Lynnes, C.

    2011-12-01

Giovanni is a Web-based application developed by GES DISC that provides simple and intuitive ways to visualize, analyze, and access vast amounts of remotely sensed Earth science data. Currently, the Giovanni visualization module is only aware of the physical links (i.e., hard-coded) between data and services and consequently cannot be easily adapted to new visualization scenarios. VisKo, a semantically enabled visualization framework, can be leveraged by Giovanni as a semantic bridge between data and visualization. VisKo relates data and visualization services at conceptual (i.e., ontological) levels and relies on reasoning systems to leverage the conceptual relationships to automatically infer physical links, facilitating an adaptable environment for new visualization scenarios. This is particularly useful for Giovanni, which has been constantly retrofitted with new visualization software packages to keep up with advancement in visualization capabilities. During our prototype integration of Giovanni with VisKo, a number of future steps were identified that, if implemented, could cement the integration and promote our prototype to operational status. A number of integration issues arose, including the mediation of the different languages used by each system to characterize datasets; VisKo relies on semantic data characterization to "match up" data with visualization processes. It was necessary to identify mappings between Giovanni XML provenance and the Proof Markup Language (PML), which is understood by VisKo. Although a translator was implemented based on the identified mappings, a more elegant solution is to develop a domain data ontology specific to Giovanni and to "align" this ontology with PML, enabling VisKo to directly ingest the semantic descriptions of Giovanni data. Additionally, the relationship between dataset components (e.g., variables and attributes) and visualization plot components (e.g., geometries, axes, titles) should also be modeled. In Giovanni, meta-data descriptions are used to configure the different properties of the plots such as titles, color-tables, and variable-to-axis bindings. Giovanni services rely on a set of custom attributes and naming conventions that help identify the relationships between dataset components and plot properties. VisKo visualization services, however, are generic modules that do not rely on any domain-specific conventions for identifying relationships between dataset attributes and plot configuration. Rather, VisKo services rely on parameters to configure specific behaviors of the generic services. The relationship between VisKo parameters and plot properties, however, has yet to be formally documented, partly because VisKo regards plots as holistic entities without any internal structure from which to relate parameters. We understand the need for a visualization plot ontology that defines plot components, their retinal properties, such as position and color, and the relationship of the plot properties to controlling service parameter sets. The plot ontology would also be linked to our domain data ontology, providing VisKo with a comprehensive understanding of how data attributes can cue the configuration of plots, and how a specific plot configuration relates to service parameters.

  3. Supporting Students' Knowledge Integration with Technology-Enhanced Inquiry Curricula

    ERIC Educational Resources Information Center

    Chiu, Jennifer Lopseen

    2010-01-01

    Dynamic visualizations of scientific phenomena have the potential to transform how students learn and understand science. Dynamic visualizations enable interaction and experimentation with unobservable atomic-level phenomena. A series of studies clarify the conditions under which embedding dynamic visualizations in technology-enhanced inquiry…

  4. Microreact: visualizing and sharing data for genomic epidemiology and phylogeography

    PubMed Central

    Argimón, Silvia; Abudahab, Khalil; Goater, Richard J. E.; Fedosejev, Artemij; Bhai, Jyothish; Glasner, Corinna; Feil, Edward J.; Holden, Matthew T. G.; Yeats, Corin A.; Grundmann, Hajo; Spratt, Brian G.

    2016-01-01

    Visualization is frequently used to aid our interpretation of complex datasets. Within microbial genomics, visualizing the relationships between multiple genomes as a tree provides a framework onto which associated data (geographical, temporal, phenotypic and epidemiological) are added to generate hypotheses and to explore the dynamics of the system under investigation. Selected static images are then used within publications to highlight the key findings to a wider audience. However, these images are a very inadequate way of exploring and interpreting the richness of the data. There is, therefore, a need for flexible, interactive software that presents the population genomic outputs and associated data in a user-friendly manner for a wide range of end users, from trained bioinformaticians to front-line epidemiologists and health workers. Here, we present Microreact, a web application for the easy visualization of datasets consisting of any combination of trees, geographical, temporal and associated metadata. Data files can be uploaded to Microreact directly via the web browser or by linking to their location (e.g. from Google Drive/Dropbox or via API), and an integrated visualization via trees, maps, timelines and tables provides interactive querying of the data. The visualization can be shared as a permanent web link among collaborators, or embedded within publications to enable readers to explore and download the data. Microreact can act as an end point for any tool or bioinformatic pipeline that ultimately generates a tree, and provides a simple, yet powerful, visualization method that will aid research and discovery and the open sharing of datasets. PMID:28348833

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dorier, Matthieu; Sisneros, Roberto; Bautista Gomez, Leonard

While many parallel visualization tools now provide in situ visualization capabilities, the trend has been to feed such tools with large amounts of unprocessed output data and let them render everything at the highest possible resolution. This leads to an increased run time of simulations that still have to complete within a fixed-length job allocation. In this paper, we tackle the challenge of enabling in situ visualization under performance constraints. Our approach shuffles data across processes according to its content and filters out part of it in order to feed a visualization pipeline with only a reorganized subset of the data produced by the simulation. Our framework leverages fast, generic evaluation procedures to score blocks of data, using information theory, statistics, and linear algebra. It monitors its own performance and adapts dynamically to achieve appropriate visual fidelity within predefined performance constraints. Experiments on the Blue Waters supercomputer with the CM1 simulation show that our approach enables a 5x speedup with respect to the initial visualization pipeline and is able to meet performance constraints.
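
    A toy version of the content-aware filtering idea is sketched below: blocks of a synthetic field are scored with an information-theoretic measure (here, histogram entropy) and only the highest-scoring blocks that fit a fixed rendering budget are kept. The block size, budget, and data are invented for illustration and are not the authors' framework.

      # Toy content-aware block selection under a rendering budget.
      import numpy as np

      rng = np.random.default_rng(1)
      field = rng.normal(size=(64, 64))
      field[20:36, 20:36] += 5 * rng.random((16, 16))   # a "feature" worth keeping

      def block_entropy(block, bins=16):
          """Histogram entropy of a block, as a simple content score."""
          hist, _ = np.histogram(block, bins=bins)
          p = hist / hist.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      B = 16                                            # block edge length
      blocks = {(i, j): field[i:i + B, j:j + B]
                for i in range(0, 64, B) for j in range(0, 64, B)}
      scores = {k: block_entropy(v) for k, v in blocks.items()}

      budget = 4                                        # blocks we can afford to render
      selected = sorted(scores, key=scores.get, reverse=True)[:budget]
      print("rendering blocks:", selected)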

  6. 3D Printing of Biomolecular Models for Research and Pedagogy

    PubMed Central

    Da Veiga Beltrame, Eduardo; Tyrwhitt-Drake, James; Roy, Ian; Shalaby, Raed; Suckale, Jakob; Pomeranz Krummel, Daniel

    2017-01-01

    The construction of physical three-dimensional (3D) models of biomolecules can uniquely contribute to the study of the structure-function relationship. 3D structures are most often perceived using the two-dimensional and exclusively visual medium of the computer screen. Converting digital 3D molecular data into real objects enables information to be perceived through an expanded range of human senses, including direct stereoscopic vision, touch, and interaction. Such tangible models facilitate new insights, enable hypothesis testing, and serve as psychological or sensory anchors for conceptual information about the functions of biomolecules. Recent advances in consumer 3D printing technology enable, for the first time, the cost-effective fabrication of high-quality and scientifically accurate models of biomolecules in a variety of molecular representations. However, the optimization of the virtual model and its printing parameters is difficult and time consuming without detailed guidance. Here, we provide a guide on the digital design and physical fabrication of biomolecule models for research and pedagogy using open source or low-cost software and low-cost 3D printers that use fused filament fabrication technology. PMID:28362403

  7. Intelligent Visualization of Geo-Information on the Future Web

    NASA Astrophysics Data System (ADS)

    Slusallek, P.; Jochem, R.; Sons, K.; Hoffmann, H.

    2012-04-01

    Visualization is a key component of the "Observation Web" and will become even more important in the future as geo data becomes more widely accessible. The common statement that "Data that cannot be seen, does not exist" is especially true for non-experts, like most citizens. The Web provides the most interesting platform for making data easily and widely available. However, today's Web is not well suited for the interactive visualization and exploration that is often needed for geo data. Support for 3D data was added only recently and at an extremely low level (WebGL), but even the 2D visualization capabilities of HTML e.g. (images, canvas, SVG) are rather limited, especially regarding interactivity. We have developed XML3D as an extension to HTML-5. It allows for compactly describing 2D and 3D data directly as elements of an HTML-5 document. All graphics elements are part of the Document Object Model (DOM) and can be manipulated via the same set of DOM events and methods that millions of Web developers use on a daily basis. Thus, XML3D makes highly interactive 2D and 3D visualization easily usable, not only for geo data. XML3D is supported by any WebGL-capable browser but we also provide native implementations in Firefox and Chromium. As an example, we show how OpenStreetMap data can be mapped directly to XML3D and visualized interactively in any Web page. We show how this data can be easily augmented with additional data from the Web via a few lines of Javascript. We also show how embedded semantic data (via RDFa) allows for linking the visualization back to the data's origin, thus providing an immersive interface for interacting with and modifying the original data. XML3D is used as key input for standardization within the W3C Community Group on "Declarative 3D for the Web" chaired by the DFKI and has recently been selected as one of the Generic Enabler for the EU Future Internet initiative.

  8. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE PAGES

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew; ...

    2015-08-13

The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  9. BactoGeNIE: a large-scale comparative genome visualization for big displays

    PubMed Central

    2015-01-01

    Background The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. Results In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. Conclusions BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics. PMID:26329021

  10. BactoGeNIE: A large-scale comparative genome visualization for big displays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aurisano, Jillian; Reda, Khairi; Johnson, Andrew

The volume of complete bacterial genome sequence data available to comparative genomics researchers is rapidly increasing. However, visualizations in comparative genomics--which aim to enable analysis tasks across collections of genomes--suffer from visual scalability issues. While large, multi-tiled and high-resolution displays have the potential to address scalability issues, new approaches are needed to take advantage of such environments, in order to enable the effective visual analysis of large genomics datasets. In this paper, we present Bacterial Gene Neighborhood Investigation Environment, or BactoGeNIE, a novel and visually scalable design for comparative gene neighborhood analysis on large display environments. We evaluate BactoGeNIE through a case study on close to 700 draft Escherichia coli genomes, and present lessons learned from our design process. In conclusion, BactoGeNIE accommodates comparative tasks over substantially larger collections of neighborhoods than existing tools and explicitly addresses visual scalability. Given current trends in data generation, scalable designs of this type may inform visualization design for large-scale comparative research problems in genomics.

  11. Unmanned Aircraft System (UAS) Traffic Management (UTM): Enabling Civilian Low-Altitude Airspace and Unmanned Aerial System Operations

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal Hemchandra

    2016-01-01

    Just a year ago we laid out the UTM challenges and NASA's proposed solutions. During the past year NASA's goal continues to be to conduct research, development and testing to identify airspace operations requirements to enable large-scale visual and beyond visual line-of-sight UAS operations in the low-altitude airspace. Significant progress has been made, and NASA is continuing to move forward.

  12. Failure to use corollary discharge to remap visual target locations is associated with psychotic symptom severity in schizophrenia

    PubMed Central

    Rösler, Lara; Rolfs, Martin; van der Stigchel, Stefan; Neggers, Sebastiaan F. W.; Cahn, Wiepke; Kahn, René S.

    2015-01-01

    Corollary discharge (CD) refers to “copies” of motor signals sent to sensory areas, allowing prediction of future sensory states. They enable the putative mechanisms supporting the distinction between self-generated and externally generated sensations. Accordingly, many authors have suggested that disturbed CD engenders psychotic symptoms of schizophrenia, which are characterized by agency distortions. CD also supports perceived visual stability across saccadic eye movements and is used to predict the postsaccadic retinal coordinates of visual stimuli, a process called remapping. We tested whether schizophrenia patients (SZP) show remapping disturbances as evidenced by systematic transsaccadic mislocalizations of visual targets. SZP and healthy controls (HC) performed a task in which a saccadic target disappeared upon saccade initiation and, after a brief delay, reappeared at a horizontally displaced position. HC judged the direction of this displacement accurately, despite spatial errors in saccade landing site, indicating that their comparison of the actual to predicted postsaccadic target location relied on accurate CD. SZP performed worse and relied more on saccade landing site as a proxy for the presaccadic target, consistent with disturbed CD. This remapping failure was strongest in patients with more severe psychotic symptoms, consistent with the theoretical link between disturbed CD and phenomenological experiences in schizophrenia. PMID:26108951

  13. Visualization of DNA molecules in time during electrophoresis

    NASA Technical Reports Server (NTRS)

    Lubega, Seth

    1991-01-01

For several years individual DNA molecules have been observed and photographed during agarose gel electrophoresis. The DNA molecule is clearly the largest molecule known. Nevertheless, the largest molecule is still too small to be resolved using a conventional microscope. A technique developed by Morikawa and Yanagida has made it possible to visualize individual DNA molecules. When these long molecules are labeled with appropriate fluorescent dyes and observed under a fluorescence microscope, it is not possible to directly visualize their local ultrastructure, but because they are long light-emitting chains, their microscopic dynamical behavior can be observed. This visualization works on the same principle that enables one to observe a star through a telescope: the object emits light against a dark background. The dynamics of individual DNA molecules migrating through an agarose matrix during electrophoresis have been described by Smith et al. (1989), Schwartz and Koval (1989), and Bustamante et al. (1990). DNA molecules during agarose gel electrophoresis advance lengthwise through the gel in an extended configuration. They display an extension-contraction motion and tend to bunch up at their leading ends as the 'heads' find new pores through the gel. From time to time they get hooked on obstacles in the gel to form U-shaped configurations before they resume their linear configuration.

  14. Using open-source programs to create a web-based portal for hydrologic information

    NASA Astrophysics Data System (ADS)

    Kim, H.

    2013-12-01

Some hydrologic data sets, such as basin climatology, precipitation, and terrestrial water storage, are not easily obtainable and distributable due to their size and complexity. We present a Hydrologic Information Portal (HIP) that has been implemented at the University of California Center for Hydrologic Modeling (UCCHM) and that has been organized around the large river basins of North America. This portal can be easily accessed through a modern web browser, enabling easy access and visualization of such hydrologic data sets. The main features of our HIP include a set of data visualization tools that let users search, retrieve, analyze, integrate, organize, and map data within large river basins. Recent information technologies such as Google Maps, Tornado (a Python asynchronous web server), NumPy/SciPy (scientific libraries for Python) and d3.js (a visualization library for JavaScript) were incorporated into the HIP to ease navigation of large data sets. With such open source libraries, HIP can give public users a way to combine and explore various data sets by generating multiple chart types (line, bar, pie, scatter plot) directly from the Google Maps viewport. Every rendered object on the viewport, such as a basin shape, is clickable, and this is the first step in accessing the visualization of the data sets.
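
    On the server side, a portal of this kind can expose basin time series as JSON endpoints for browser-based charts. The sketch below assumes Tornado, as named in the abstract, but the route, basin names, and data are hypothetical placeholders, not the portal's actual API.

      # Minimal Tornado handler returning a basin's time series as JSON, which a
      # Google Maps / d3.js front end could request when a basin shape is clicked.
      import tornado.ioloop
      import tornado.web

      FAKE_STORAGE = {  # placeholder for real terrestrial-water-storage queries
          "columbia": [{"month": "2013-01", "mm": 12.3},
                       {"month": "2013-02", "mm": 8.7}],
      }

      class BasinSeriesHandler(tornado.web.RequestHandler):
          def get(self, basin):
              series = FAKE_STORAGE.get(basin.lower())
              if series is None:
                  raise tornado.web.HTTPError(404)
              self.write({"basin": basin, "series": series})  # dicts are serialized to JSON

      def make_app():
          return tornado.web.Application([(r"/api/basin/(\w+)/storage", BasinSeriesHandler)])

      if __name__ == "__main__":
          make_app().listen(8888)
          tornado.ioloop.IOLoop.current().start()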

  15. Visual examination apparatus

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)

    1976-01-01

    An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location including a projection system for displaying to a patient a series of visual stimuli. A response switch enables him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system thereby provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.

  16. Visualizing conserved gene location across microbe genomes

    NASA Astrophysics Data System (ADS)

    Shaw, Chris D.

    2009-01-01

This paper introduces an analysis-based zoomable visualization technique for displaying the location of genes across many related species of microbes. The purpose of this visualization is to enable a biologist to examine the layout of genes in the organism of interest with respect to the gene organization of related organisms. During the genomic annotation process, the ability to observe gene organization in common with previously annotated genomes can help a biologist better confirm the structure and function of newly analyzed microbe DNA sequences. We have developed a visualization and analysis tool that enables the biologist to observe and examine gene organization among genomes, in the context of the primary sequence of interest. This paper describes the visualization and analysis steps, and presents a case study using a number of Rickettsia genomes.

  17. Cytoscape: the network visualization tool for GenomeSpace workflows.

    PubMed

    Demchak, Barry; Hull, Tim; Reich, Michael; Liefeld, Ted; Smoot, Michael; Ideker, Trey; Mesirov, Jill P

    2014-01-01

    Modern genomic analysis often requires workflows incorporating multiple best-of-breed tools. GenomeSpace is a web-based visual workbench that combines a selection of these tools with mechanisms that create data flows between them. One such tool is Cytoscape 3, a popular application that enables analysis and visualization of graph-oriented genomic networks. As Cytoscape runs on the desktop, and not in a web browser, integrating it into GenomeSpace required special care in creating a seamless user experience and enabling appropriate data flows. In this paper, we present the design and operation of the Cytoscape GenomeSpace app, which accomplishes this integration, thereby providing critical analysis and visualization functionality for GenomeSpace users. It has been downloaded over 850 times since the release of its first version in September, 2013.

  18. Visualization and Analysis for Near-Real-Time Decision Making in Distributed Workflows

    DOE PAGES

    Pugmire, David; Kress, James; Choi, Jong; ...

    2016-08-04

Data-driven science is becoming increasingly common and complex, and it is placing tremendous stress on visualization and analysis frameworks. Data sources producing 10GB per second (and more) are becoming increasingly commonplace in simulation, sensor, and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query and interact with such large volumes of data in near-real-time requires a rich fusion of visualization and analysis techniques, middleware and workflow systems. Here, this paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions over large volumes of time-varying data.

  19. Cytoscape: the network visualization tool for GenomeSpace workflows

    PubMed Central

    Demchak, Barry; Hull, Tim; Reich, Michael; Liefeld, Ted; Smoot, Michael; Ideker, Trey; Mesirov, Jill P.

    2014-01-01

    Modern genomic analysis often requires workflows incorporating multiple best-of-breed tools. GenomeSpace is a web-based visual workbench that combines a selection of these tools with mechanisms that create data flows between them. One such tool is Cytoscape 3, a popular application that enables analysis and visualization of graph-oriented genomic networks. As Cytoscape runs on the desktop, and not in a web browser, integrating it into GenomeSpace required special care in creating a seamless user experience and enabling appropriate data flows. In this paper, we present the design and operation of the Cytoscape GenomeSpace app, which accomplishes this integration, thereby providing critical analysis and visualization functionality for GenomeSpace users. It has been downloaded over 850 times since the release of its first version in September, 2013. PMID:25165537

  20. Food shopping, sensory determinants of food choice and meal preparation by visually impaired people. Obstacles and expectations in daily food experiences.

    PubMed

    Kostyra, Eliza; Żakowska-Biemans, Sylwia; Śniegocka, Katarzyna; Piotrowska, Anna

    2017-06-01

The number of visually impaired and blind people is rising worldwide due to ageing of the global population, but research regarding the impact of visual impairment on the ability of a person to choose food and to prepare meals is scarce. The aim of this study was threefold: to investigate factors determining the choices of food products in people with various levels of impaired vision; to identify obstacles they face while purchasing food, preparing meals and eating out; and to determine what would help them in the areas of food shopping and meal preparation. The data was collected from 250 blind and visually impaired subjects, recruited with the support of the National Association of the Blind. The study revealed that the majority of the visually impaired make food purchases at a supermarket or local grocery and they tend to favour shopping for food via the Internet. Direct sale channels like farmers markets were rarely used by the visually impaired. The most frequently mentioned factors that facilitated their food shopping decisions were the assistance of salespersons, product labelling in Braille, scanners that enable the reading of labels and a permanent place for products on the shop shelves. Meal preparation, particularly peeling, slicing and frying, posed many challenges to the visually impaired. More than half of the respondents ate meals outside the home, mainly with family or friends. The helpfulness of the staff and a menu in Braille were crucial for them to have a positive dining out experience. The results of the study provide valuable insights into the food choices and eating experiences of visually impaired people, and also suggest some practical implications to improve their independence and quality of life. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Earlier Visual N1 Latencies in Expert Video-Game Players: A Temporal Basis of Enhanced Visuospatial Performance?

    PubMed Central

    Latham, Andrew J.; Patston, Lucy L. M.; Westermann, Christine; Kirk, Ian J.; Tippett, Lynette J.

    2013-01-01

    Increasing behavioural evidence suggests that expert video game players (VGPs) show enhanced visual attention and visuospatial abilities, but what underlies these enhancements remains unclear. We administered the Poffenberger paradigm with concurrent electroencephalogram (EEG) recording to assess occipital N1 latencies and interhemispheric transfer time (IHTT) in expert VGPs. Participants comprised 15 right-handed male expert VGPs and 16 non-VGP controls matched for age, handedness, IQ and years of education. Expert VGPs began playing before age 10, had a minimum 8 years experience, and maintained playtime of at least 20 hours per week over the last 6 months. Non-VGPs had little-to-no game play experience (maximum 1.5 years). Participants responded to checkerboard stimuli presented to the left and right visual fields while 128-channel EEG was recorded. Expert VGPs responded significantly more quickly than non-VGPs. Expert VGPs also had significantly earlier occipital N1s in direct visual pathways (the hemisphere contralateral to the visual field in which the stimulus was presented). IHTT was calculated by comparing the latencies of occipital N1 components between hemispheres. No significant between-group differences in electrophysiological estimates of IHTT were found. Shorter N1 latencies may enable expert VGPs to discriminate attended visual stimuli significantly earlier than non-VGPs and contribute to faster responding in visual tasks. As successful video-game play requires precise, time pressured, bimanual motor movements in response to complex visual stimuli, which in this sample began during early childhood, these differences may reflect the experience and training involved during the development of video-game expertise, but training studies are needed to test this prediction. PMID:24058667
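
    The IHTT estimate described above reduces to simple arithmetic on component latencies: the N1 latency in the hemisphere ipsilateral to the stimulated visual field (indirect pathway) minus the latency in the contralateral hemisphere (direct pathway). The values below are invented purely for illustration.

      # Toy illustration of the interhemispheric transfer time (IHTT) estimate;
      # latency values are invented, not taken from the study.
      left_visual_field = {"right_hemisphere_N1_ms": 145.0,   # direct pathway
                           "left_hemisphere_N1_ms": 158.0}    # indirect pathway

      ihtt_ms = (left_visual_field["left_hemisphere_N1_ms"]
                 - left_visual_field["right_hemisphere_N1_ms"])
      print(f"estimated IHTT for left-visual-field stimuli: {ihtt_ms:.1f} ms")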

  2. Earlier visual N1 latencies in expert video-game players: a temporal basis of enhanced visuospatial performance?

    PubMed

    Latham, Andrew J; Patston, Lucy L M; Westermann, Christine; Kirk, Ian J; Tippett, Lynette J

    2013-01-01

    Increasing behavioural evidence suggests that expert video game players (VGPs) show enhanced visual attention and visuospatial abilities, but what underlies these enhancements remains unclear. We administered the Poffenberger paradigm with concurrent electroencephalogram (EEG) recording to assess occipital N1 latencies and interhemispheric transfer time (IHTT) in expert VGPs. Participants comprised 15 right-handed male expert VGPs and 16 non-VGP controls matched for age, handedness, IQ and years of education. Expert VGPs began playing before age 10, had a minimum 8 years experience, and maintained playtime of at least 20 hours per week over the last 6 months. Non-VGPs had little-to-no game play experience (maximum 1.5 years). Participants responded to checkerboard stimuli presented to the left and right visual fields while 128-channel EEG was recorded. Expert VGPs responded significantly more quickly than non-VGPs. Expert VGPs also had significantly earlier occipital N1s in direct visual pathways (the hemisphere contralateral to the visual field in which the stimulus was presented). IHTT was calculated by comparing the latencies of occipital N1 components between hemispheres. No significant between-group differences in electrophysiological estimates of IHTT were found. Shorter N1 latencies may enable expert VGPs to discriminate attended visual stimuli significantly earlier than non-VGPs and contribute to faster responding in visual tasks. As successful video-game play requires precise, time pressured, bimanual motor movements in response to complex visual stimuli, which in this sample began during early childhood, these differences may reflect the experience and training involved during the development of video-game expertise, but training studies are needed to test this prediction.

  3. PRISM: An open source framework for the interactive design of GPU volume rendering shaders.

    PubMed

    Drouin, Simon; Collins, D Louis

    2018-01-01

Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, and provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution where critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those produced by a widely-adopted direct volume rendering implementation in VTK at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with 5 medical imaging expert subjects who have little or no experience with volume rendering. The PRISM framework has the potential to greatly accelerate development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel.
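
    The ray-integration step that such a framework makes replaceable can be illustrated with a CPU-side sketch. PRISM itself swaps shader code on the GPU; the Python below only shows the front-to-back compositing loop that a ray-integration block typically implements, with an invented transfer function.

      # CPU-side sketch of front-to-back ray integration (not PRISM's shader code).
      import numpy as np

      def transfer_function(value):
          """Hypothetical transfer function: map a scalar sample to (rgb, alpha)."""
          alpha = np.clip(value, 0.0, 1.0) * 0.1
          rgb = np.array([value, 0.2, 1.0 - value])
          return rgb, alpha

      def integrate_ray(samples):
          """Front-to-back compositing -- the step a new rendering effect would replace."""
          color = np.zeros(3)
          remaining = 1.0                       # transmittance left along the ray
          for s in samples:
              rgb, a = transfer_function(s)
              color += remaining * a * rgb
              remaining *= (1.0 - a)
              if remaining < 1e-3:              # early ray termination
                  break
          return color

      ray_samples = np.linspace(0.0, 1.0, 50)   # stand-in scalar values along one ray
      print(integrate_ray(ray_samples))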

  4. PRISM: An open source framework for the interactive design of GPU volume rendering shaders

    PubMed Central

    Collins, D. Louis

    2018-01-01

Direct volume rendering has become an essential tool to explore and analyse 3D medical images. Despite several advances in the field, it remains a challenge to produce an image that highlights the anatomy of interest, avoids occlusion of important structures, and provides an intuitive perception of shape and depth while retaining sufficient contextual information. Although the computer graphics community has proposed several solutions to address specific visualization problems, the medical imaging community still lacks a general volume rendering implementation that can address a wide variety of visualization use cases while avoiding complexity. In this paper, we propose a new open source framework called the Programmable Ray Integration Shading Model, or PRISM, that implements a complete GPU ray-casting solution where critical parts of the ray integration algorithm can be replaced to produce new volume rendering effects. A graphical user interface allows clinical users to easily experiment with pre-existing rendering effect building blocks drawn from an open database. For programmers, the interface enables real-time editing of the code inside the blocks. We show that in its default mode, the PRISM framework produces images very similar to those produced by a widely-adopted direct volume rendering implementation in VTK at comparable frame rates. More importantly, we demonstrate the flexibility of the framework by showing how several volume rendering techniques can be implemented in PRISM with no more than a few lines of code. Finally, we demonstrate the simplicity of our system in a usability study with 5 medical imaging expert subjects who have little or no experience with volume rendering. The PRISM framework has the potential to greatly accelerate development of volume rendering for medical applications by promoting sharing and enabling faster development iterations and easier collaboration between engineers and clinical personnel. PMID:29534069

  5. Google Glass-Directed Monitoring and Control of Microfluidic Biosensors and Actuators

    PubMed Central

    Zhang, Yu Shrike; Busignani, Fabio; Ribas, João; Aleman, Julio; Rodrigues, Talles Nascimento; Shaegh, Seyed Ali Mousavi; Massa, Solange; Rossi, Camilla Baj; Taurino, Irene; Shin, Su-Ryon; Calzone, Giovanni; Amaratunga, Givan Mark; Chambers, Douglas Leon; Jabari, Saman; Niu, Yuxi; Manoharan, Vijayan; Dokmeci, Mehmet Remzi; Carrara, Sandro; Demarchi, Danilo; Khademhosseini, Ali

    2016-01-01

    Google Glass is a recently designed wearable device capable of displaying information in a smartphone-like hands-free format by wireless communication. The Glass also provides convenient control over remote devices, primarily enabled by voice recognition commands. These unique features of the Google Glass make it useful for medical and biomedical applications where hands-free experiences are strongly preferred. Here, we report for the first time, an integral set of hardware, firmware, software, and Glassware that enabled wireless transmission of sensor data onto the Google Glass for on-demand data visualization and real-time analysis. Additionally, the platform allowed the user to control outputs entered through the Glass, therefore achieving bi-directional Glass-device interfacing. Using this versatile platform, we demonstrated its capability in monitoring physical and physiological parameters such as temperature, pH, and morphology of liver- and heart-on-chips. Furthermore, we showed the capability to remotely introduce pharmaceutical compounds into a microfluidic human primary liver bioreactor at desired time points while monitoring their effects through the Glass. We believe that such an innovative platform, along with its concept, has set up a premise in wearable monitoring and controlling technology for a wide variety of applications in biomedicine. PMID:26928456

  6. Google Glass-Directed Monitoring and Control of Microfluidic Biosensors and Actuators

    NASA Astrophysics Data System (ADS)

    Zhang, Yu Shrike; Busignani, Fabio; Ribas, João; Aleman, Julio; Rodrigues, Talles Nascimento; Shaegh, Seyed Ali Mousavi; Massa, Solange; Rossi, Camilla Baj; Taurino, Irene; Shin, Su-Ryon; Calzone, Giovanni; Amaratunga, Givan Mark; Chambers, Douglas Leon; Jabari, Saman; Niu, Yuxi; Manoharan, Vijayan; Dokmeci, Mehmet Remzi; Carrara, Sandro; Demarchi, Danilo; Khademhosseini, Ali

    2016-03-01

    Google Glass is a recently designed wearable device capable of displaying information in a smartphone-like hands-free format by wireless communication. The Glass also provides convenient control over remote devices, primarily enabled by voice recognition commands. These unique features of the Google Glass make it useful for medical and biomedical applications where hands-free experiences are strongly preferred. Here, we report for the first time, an integral set of hardware, firmware, software, and Glassware that enabled wireless transmission of sensor data onto the Google Glass for on-demand data visualization and real-time analysis. Additionally, the platform allowed the user to control outputs entered through the Glass, therefore achieving bi-directional Glass-device interfacing. Using this versatile platform, we demonstrated its capability in monitoring physical and physiological parameters such as temperature, pH, and morphology of liver- and heart-on-chips. Furthermore, we showed the capability to remotely introduce pharmaceutical compounds into a microfluidic human primary liver bioreactor at desired time points while monitoring their effects through the Glass. We believe that such an innovative platform, along with its concept, has set up a premise in wearable monitoring and controlling technology for a wide variety of applications in biomedicine.

  7. Google Glass-Directed Monitoring and Control of Microfluidic Biosensors and Actuators.

    PubMed

    Zhang, Yu Shrike; Busignani, Fabio; Ribas, João; Aleman, Julio; Rodrigues, Talles Nascimento; Shaegh, Seyed Ali Mousavi; Massa, Solange; Baj Rossi, Camilla; Taurino, Irene; Shin, Su-Ryon; Calzone, Giovanni; Amaratunga, Givan Mark; Chambers, Douglas Leon; Jabari, Saman; Niu, Yuxi; Manoharan, Vijayan; Dokmeci, Mehmet Remzi; Carrara, Sandro; Demarchi, Danilo; Khademhosseini, Ali

    2016-03-01

    Google Glass is a recently designed wearable device capable of displaying information in a smartphone-like hands-free format by wireless communication. The Glass also provides convenient control over remote devices, primarily enabled by voice recognition commands. These unique features of the Google Glass make it useful for medical and biomedical applications where hands-free experiences are strongly preferred. Here, we report for the first time, an integral set of hardware, firmware, software, and Glassware that enabled wireless transmission of sensor data onto the Google Glass for on-demand data visualization and real-time analysis. Additionally, the platform allowed the user to control outputs entered through the Glass, therefore achieving bi-directional Glass-device interfacing. Using this versatile platform, we demonstrated its capability in monitoring physical and physiological parameters such as temperature, pH, and morphology of liver- and heart-on-chips. Furthermore, we showed the capability to remotely introduce pharmaceutical compounds into a microfluidic human primary liver bioreactor at desired time points while monitoring their effects through the Glass. We believe that such an innovative platform, along with its concept, has set up a premise in wearable monitoring and controlling technology for a wide variety of applications in biomedicine.

  8. Orienteering in Knowledge Spaces: The Hyperbolic Geometry of Wikipedia Mathematics

    PubMed Central

    Leibon, Gregory; Rockmore, Daniel N.

    2013-01-01

    In this paper we show how the coupling of the notion of a network with directions with the adaptation of the four-point probe from materials testing gives rise to a natural geometry on such networks. This four-point probe geometry shares many of the properties of hyperbolic geometry wherein the network directions take the place of the sphere at infinity, enabling a navigation of the network in terms of pairs of directions: the geodesic through a pair of points is oriented from one direction to another direction, the pair of which are uniquely determined. We illustrate this in the interesting example of the pages of Wikipedia devoted to Mathematics, or “The MathWiki.” The applicability of these ideas extends beyond Wikipedia to provide a natural framework for visual search and to prescribe a natural mode of navigation for any kind of “knowledge space” in which higher order concepts aggregate various instances of information. Other examples would include genre or author organization of cultural objects such as books, movies, documents or even merchandise in an online store. PMID:23844017
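
    The "four-point" flavour of hyperbolicity invoked above can be made concrete with Gromov's classical four-point condition, which is related to, but not identical with, the four-point probe geometry the authors construct. The Python sketch below estimates the hyperbolicity constant delta of a small undirected graph by brute force over quadruples; it is meant only as an illustration of four-point-style computations on networks, under the assumption that networkx is available.

      from itertools import combinations
      import networkx as nx

      def four_point_delta(G):
          """Brute-force Gromov four-point hyperbolicity of an unweighted graph:
          for every quadruple, the two largest of the three pairwise distance sums
          differ by at most 2 * delta."""
          d = dict(nx.all_pairs_shortest_path_length(G))
          delta = 0.0
          for w, x, y, z in combinations(G.nodes, 4):
              s = sorted([d[w][x] + d[y][z], d[w][y] + d[x][z], d[w][z] + d[x][y]])
              delta = max(delta, (s[2] - s[1]) / 2.0)
          return delta

      # Tiny example: a tree is 0-hyperbolic, a cycle much less tree-like.
      tree = nx.balanced_tree(r=2, h=3)
      cycle = nx.cycle_graph(12)
      print("tree delta:", four_point_delta(tree))
      print("cycle delta:", four_point_delta(cycle))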

  9. Orienteering in knowledge spaces: the hyperbolic geometry of Wikipedia Mathematics.

    PubMed

    Leibon, Gregory; Rockmore, Daniel N

    2013-01-01

    In this paper we show how the coupling of the notion of a network with directions with the adaptation of the four-point probe from materials testing gives rise to a natural geometry on such networks. This four-point probe geometry shares many of the properties of hyperbolic geometry wherein the network directions take the place of the sphere at infinity, enabling a navigation of the network in terms of pairs of directions: the geodesic through a pair of points is oriented from one direction to another direction, the pair of which are uniquely determined. We illustrate this in the interesting example of the pages of Wikipedia devoted to Mathematics, or "The MathWiki." The applicability of these ideas extends beyond Wikipedia to provide a natural framework for visual search and to prescribe a natural mode of navigation for any kind of "knowledge space" in which higher order concepts aggregate various instances of information. Other examples would include genre or author organization of cultural objects such as books, movies, documents or even merchandise in an online store.

  10. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
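
    The distortion families listed above are straightforward to reproduce. The following Python sketch, assuming Pillow and NumPy are available, applies Gaussian blur, JPEG compression, additive white Gaussian noise, and a contrast change to an image; the parameter values are arbitrary examples rather than the levels used to build QLFW, and JPEG2000 is omitted because codec support varies by installation.

      import io
      import numpy as np
      from PIL import Image, ImageEnhance, ImageFilter

      def distort(img: Image.Image, kind: str, level: float) -> Image.Image:
          """Apply one QLFW-style distortion family to a PIL image."""
          if kind == "blur":
              return img.filter(ImageFilter.GaussianBlur(radius=level))
          if kind == "jpeg":
              buf = io.BytesIO()
              img.save(buf, format="JPEG", quality=int(level))   # lower quality = stronger distortion
              buf.seek(0)
              return Image.open(buf).convert("RGB")
          if kind == "noise":
              arr = np.asarray(img, dtype=float)
              arr += np.random.normal(0.0, level, arr.shape)      # additive white Gaussian noise
              return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
          if kind == "contrast":
              return ImageEnhance.Contrast(img).enhance(level)    # <1 reduces, >1 increases contrast
          raise ValueError(kind)

      # Example usage on a synthetic image (a real face crop would be used in practice).
      face = Image.fromarray((np.random.rand(128, 128, 3) * 255).astype(np.uint8))
      for kind, level in [("blur", 2.0), ("jpeg", 20), ("noise", 15.0), ("contrast", 0.5)]:
          distort(face, kind, level)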

  11. IN13B-1660: Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX

    NASA Technical Reports Server (NTRS)

    Chaudhary, Aashish; Votava, Petr; Nemani, Ramakrishna R.; Michaelis, Andrew; Kotfila, Chris

    2016-01-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging of HPC and cloud is a fairly new concept under active research and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both high-performance computing (HPC) and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines of both the production process and the data products, and enable sharing results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics or visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD), where we are developing a new QA pipeline for the 25PB system.

  12. Analytics and Visualization Pipelines for Big Data on the NASA Earth Exchange (NEX) and OpenNEX

    NASA Astrophysics Data System (ADS)

    Chaudhary, A.; Votava, P.; Nemani, R. R.; Michaelis, A.; Kotfila, C.

    2016-12-01

    We are developing capabilities for an integrated petabyte-scale Earth science collaborative analysis and visualization environment. The ultimate goal is to deploy this environment within the NASA Earth Exchange (NEX) and OpenNEX in order to enhance existing science data production pipelines in both high-performance computing (HPC) and cloud environments. Bridging of HPC and cloud is a fairly new concept under active research and this system significantly enhances the ability of the scientific community to accelerate analysis and visualization of Earth science data from NASA missions, model outputs and other sources. We have developed a web-based system that seamlessly interfaces with both high-performance computing (HPC) and cloud environments, providing tools that enable science teams to develop and deploy large-scale analysis, visualization and QA pipelines of both the production process and the data products, and enable sharing results with the community. Our project is developed in several stages, each addressing a separate challenge: workflow integration, parallel execution in either cloud or HPC environments, and big-data analytics or visualization. This work benefits a number of existing and upcoming projects supported by NEX, such as the Web Enabled Landsat Data (WELD), where we are developing a new QA pipeline for the 25PB system.

  13. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture.

    PubMed

    Trivedi, Chintan A; Bollmann, Johann H

    2013-01-01

    Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback.
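
    As a rough illustration of the Fourier analysis mentioned above, the sketch below (an assumption about the general methodology, not the authors' code) estimates the dominant tail-beat frequency of a swim bout from a synthetic tail-angle trace sampled at high speed.

      import numpy as np

      def dominant_frequency(tail_angle: np.ndarray, fs: float) -> float:
          """Return the frequency (Hz) of the largest non-DC peak in the power spectrum."""
          signal = tail_angle - tail_angle.mean()          # remove the DC component
          spectrum = np.abs(np.fft.rfft(signal)) ** 2
          freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
          return freqs[np.argmax(spectrum[1:]) + 1]        # skip the zero-frequency bin

      # Synthetic bout: a decaying 30 Hz tail beat inside a 200 ms bout, recorded at 1 kHz.
      fs = 1000.0
      t = np.arange(0, 0.2, 1.0 / fs)
      bout = np.sin(2 * np.pi * 30 * t) * np.exp(-t / 0.1) + 0.05 * np.random.randn(t.size)
      print(dominant_frequency(bout, fs))   # approximately 30 Hz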

  14. Comparison of the Visual Capabilities of an Amphibious and an Aquatic Goby That Inhabit Tidal Mudflats.

    PubMed

    Takiyama, Tomo; Hamasaki, Sawako; Yoshida, Masayuki

    2016-01-01

    The mudskipper Periophthalmus modestus and the yellowfin goby Acanthogobius flavimanus are gobiid teleosts that both inhabit the intertidal mudflats in estuaries. While P. modestus has an amphibious lifestyle and forages on the exposed mudflat during low tide, the aquatic A. flavimanus can be found at the same mudflat at high tide. This study primarily aimed to elucidate the differential adaptations of these organisms to their respective habitats by comparing visual capacities and motor control in orienting behavior during prey capture. Analyses of retinal ganglion cell topography demonstrated that both species possess an area in the dorsotemporal region of the retina, indicating high acuity in the lower frontal visual field. Additionally, P. modestus has a minor area in the nasal portion of the retina near the optic disc. The horizontally extended specialized area in P. modestus possibly reflects the need for optimized horizontal sight on the exposed mudflat. Behavioral experiments to determine postural and eye direction control when orienting toward the object of interest revealed that these species direct their visual axes to the target situated below eye level just before a rapid approach toward it. A characteristic feature of the orienting behavior of P. modestus was that they aimed at the target by using the specialized retinal area by rotating the eye and lifting the head before jumping to attack the target located above eye level. This behavior could be an adaptation to a terrestrial feeding habitat in which buoyancy is irrelevant. This study provides insights into the adaptive mechanisms of gobiid species and the evolutionary changes enabling them to forage on land. © 2016 S. Karger AG, Basel.

  15. An open-architecture approach to defect analysis software for mask inspection systems

    NASA Astrophysics Data System (ADS)

    Pereira, Mark; Pai, Ravi R.; Reddy, Murali Mohan; Krishna, Ravi M.

    2009-04-01

    Industry data suggests that Mask Inspection represents the second biggest component of Mask Cost and Mask Turn Around Time (TAT). Ever-decreasing defect size targets lead to more sensitive mask inspection across the chip, thus generating too many defects. Hence, more operator time is being spent in the analysis and disposition of defects. Also, the fact that multiple Mask Inspection Systems and Defect Analysis strategies would typically be in use in a Mask Shop or a Wafer Foundry further complicates the situation. In this scenario, there is a need for versatile, user-friendly and extensible Defect Analysis software that reduces operator analysis time and enables correct classification and disposition of mask defects by providing intuitive visual and analysis aids. We propose a new vendor-neutral defect analysis software, NxDAT, based on an open architecture. The open architecture of NxDAT makes it easily extensible to support defect analysis for mask inspection systems from different vendors. The capability to load results from mask inspection systems from different vendors either directly or through a common interface enables the functionality of establishing correlation between inspections carried out by mask inspection systems from different vendors. This capability of NxDAT enhances the effectiveness of defect analysis as it directly addresses the real-life scenario where multiple types of mask inspection systems from different vendors co-exist in mask shops or wafer foundries. The open architecture also potentially enables loading wafer inspection results as well as loading data from other related tools such as Review Tools, Repair Tools, CD-SEM tools, etc., and correlating them with the corresponding mask inspection results. A unique concept of a Plug-In interface to NxDAT further enhances the openness of the architecture of NxDAT by enabling end-users to add their own proprietary defect analysis and image processing algorithms. The plug-in interface makes it possible for end-users to make use of the knowledge collected through their years of experience in the mask inspection process by encapsulating that knowledge into software utilities and plugging them into NxDAT. The plug-in interface is designed with the intent of enabling pro-active mask defect analysis teams to build competitive differentiation into their defect analysis process while protecting their knowledge internally within their company. By providing interfaces with all major standard layout and mask data formats, NxDAT enables correlation of defect data on reticles with design and mask databases, further extending the effectiveness of defect analysis for D2DB inspection. NxDAT also includes many other advanced features for easy and fast navigation, visual display of defects, defect selection, multi-tier classification, defect clustering and gridding, sophisticated CD and contact measurement analysis, and repeatability analysis such as adder analysis, defect trend, capture rate, etc.
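
    A plug-in interface of the kind described above is commonly realised as a registry of classes implementing a fixed entry point. The Python sketch below is a generic illustration of that pattern under assumed, hypothetical names (DefectPlugin, register, classify); it is not the NxDAT API.

      from abc import ABC, abstractmethod

      class DefectPlugin(ABC):
          """Base class a user-supplied defect-analysis plug-in must implement."""
          name = "unnamed"

          @abstractmethod
          def classify(self, defect: dict) -> str:
              """Return a classification label for one defect record."""

      _REGISTRY = {}

      def register(cls):
          """Class decorator that makes a plug-in discoverable by the host application."""
          _REGISTRY[cls.name] = cls()
          return cls

      @register
      class SizeThresholdPlugin(DefectPlugin):
          name = "size_threshold"
          def classify(self, defect):
              return "critical" if defect.get("size_nm", 0) > 50 else "nuisance"

      # The host would iterate over registered plug-ins for every detected defect.
      defect = {"id": 17, "size_nm": 72, "x": 10.3, "y": 4.1}
      for plugin in _REGISTRY.values():
          print(plugin.name, "->", plugin.classify(defect))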

  16. Dynamics of the Action of Biocides in Pseudomonas aeruginosa Biofilms

    PubMed Central

    Bridier, A.; Dubois-Brissonnet, F.; Greub, G.; Thomas, V.; Briandet, R.

    2011-01-01

    The biocidal activity of peracetic acid (PAA) and benzalkonium chloride (BAC) on Pseudomonas aeruginosa biofilms was investigated by using a recently developed confocal laser scanning microscopy (CLSM) method that enables the direct and real-time visualization of cell inactivation within the structure. This technique is based on monitoring the loss of fluorescence that corresponds to the leakage of a fluorophore out of cells due to membrane permeabilization by the biocides. Although this approach has previously been used with success with various Gram-positive species, it is not directly applicable to the visualization of Gram-negative strains such as P. aeruginosa, particularly because of limitations regarding fluorescence staining. After adapting the staining procedure to P. aeruginosa, the action of PAA and BAC on the biofilm formed by strain ATCC 15442 was investigated. The results revealed specific inactivation patterns as a function of the mode of action of the biocides. While PAA treatment triggered a uniform loss of fluorescence in the structure, the action of BAC was first localized at the periphery of cell clusters and then gradually spread throughout the biofilm. Visualization of the action of BAC in biofilms formed by three clinical isolates then confirmed the presence of a delay in penetration, showing that diffusion-reaction limitations could provide a major explanation for the resistance of P. aeruginosa biofilms to this biocide. Biochemical analysis suggested a key role for extracellular matrix characteristics in these processes. PMID:21422224

  17. Program Supports Scientific Visualization

    NASA Technical Reports Server (NTRS)

    Keith, Stephan

    1994-01-01

    Primary purpose of General Visualization System (GVS) computer program is to support scientific visualization of data generated by panel-method computer program PMARC_12 (inventory number ARC-13362) on Silicon Graphics Iris workstation. Enables user to view PMARC geometries and wakes as wire frames or as light shaded objects. GVS is written in C language.

  18. Identifying Secondary-School Students' Difficulties When Reading Visual Representations Displayed in Physics Simulations

    ERIC Educational Resources Information Center

    López, Víctor; Pintó, Roser

    2017-01-01

    Computer simulations are often considered effective educational tools, since their visual and communicative power enable students to better understand physical systems and phenomena. However, previous studies have found that when students read visual representations some reading difficulties can arise, especially when these are complex or dynamic…

  19. Visualizing the Heliosphere

    NASA Technical Reports Server (NTRS)

    Bridgman, William T.; Shirah, Greg W.; Mitchell, Horace G.

    2008-01-01

    Today, scientific data and models can combine with modern animation tools to produce compelling visualizations to inform and educate. The Scientific Visualization Studio at Goddard Space Flight Center merges these techniques from the very different worlds of entertainment and science to enable scientists and the general public to 'see the unseeable' in new ways.

  20. Visual examination apparatus

    NASA Technical Reports Server (NTRS)

    Haines, R. F.; Fitzgerald, J. W.; Rositano, S. A. (Inventor)

    1973-01-01

    An automated visual examination apparatus for measuring visual sensitivity and mapping blind spot location is described. The apparatus includes a projection system for displaying to a patient a series of visual stimuli, a response switch enabling him to indicate his reaction to the stimuli, and a recording system responsive to both the visual stimuli per se and the patient's response. The recording system provides a correlated permanent record of both stimuli and response from which a substantive and readily apparent visual evaluation can be made.

  1. Self-Management of Patient Body Position, Pose, and Motion Using Wide-Field, Real-Time Optical Measurement Feedback: Results of a Volunteer Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parkhurst, James M.; Price, Gareth J. (E-mail: gareth.price@christie.nhs.uk; Faculty of Medical and Human Sciences, Manchester Academic Health Sciences Centre, University of Manchester, Manchester)

    2013-12-01

    Purpose: We present the results of a clinical feasibility study, performed in 10 healthy volunteers undergoing a simulated treatment over 3 sessions, to investigate the use of a wide-field visual feedback technique intended to help patients control their pose while reducing motion during radiation therapy treatment. Methods and Materials: An optical surface sensor is used to capture wide-area measurements of a subject's body surface with visualizations of these data displayed back to them in real time. In this study we hypothesize that this active feedback mechanism will enable patients to control their motion and help them maintain their setup pose and position. A capability hierarchy of 3 different level-of-detail abstractions of the measured surface data is systematically compared. Results: Use of the device enabled volunteers to increase their conformance to a reference surface, as measured by decreased variability across their body surfaces. The use of visual feedback also enabled volunteers to reduce their respiratory motion amplitude to 1.7 ± 0.6 mm compared with 2.7 ± 1.4 mm without visual feedback. Conclusions: The use of live feedback of their optically measured body surfaces enabled a set of volunteers to better manage their pose and motion when compared with free breathing. The method is suitable to be taken forward to patient studies.
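
    The conformance and motion metrics reported above can be summarised with simple surface statistics. The NumPy sketch below, written under the assumption that the measured and reference surfaces are sampled on the same grid, computes a per-point deviation summary and a crude peak-to-peak respiratory amplitude; it is not the study's analysis code.

      import numpy as np

      def surface_deviation(measured: np.ndarray, reference: np.ndarray) -> dict:
          """Summary of the signed deviation (mm) of a measured height map from a reference."""
          diff = measured - reference
          return {"rms_mm": float(np.sqrt(np.mean(diff ** 2))),
                  "max_abs_mm": float(np.max(np.abs(diff)))}

      def respiratory_amplitude(trace_mm: np.ndarray) -> float:
          """Robust peak-to-peak amplitude of a breathing trace, e.g. mean chest-wall height over time."""
          return float(np.percentile(trace_mm, 97.5) - np.percentile(trace_mm, 2.5))

      # Synthetic example: a ~2 mm breathing cycle sampled at 20 Hz for 30 s.
      t = np.arange(0, 30, 0.05)
      trace = 1.0 * np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)
      print(respiratory_amplitude(trace))     # roughly 2 mm peak to peak
      ref = np.zeros((64, 64))
      meas = ref + 0.5 * np.random.randn(64, 64)
      print(surface_deviation(meas, ref))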

  2. Competitive integration of visual and goal-related signals on neuronal accumulation rate: a correlate of oculomotor capture in the superior colliculus.

    PubMed

    White, Brian J; Marino, Robert A; Boehnke, Susan E; Itti, Laurent; Theeuwes, Jan; Munoz, Douglas P

    2013-10-01

    The mechanisms that underlie the integration of visual and goal-related signals for the production of saccades remain poorly understood. Here, we examined how spatial proximity of competing stimuli shapes goal-directed responses in the superior colliculus (SC), a midbrain structure closely associated with the control of visual attention and eye movements. Monkeys were trained to perform an oculomotor-capture task [Theeuwes, J., Kramer, A. F., Hahn, S., Irwin, D. E., & Zelinsky, G. J. Influence of attentional capture on oculomotor control. Journal of Experimental Psychology. Human Perception and Performance, 25, 1595-1608, 1999], in which a target singleton was revealed via an isoluminant color change in all but one item. On a portion of the trials, an additional salient item abruptly appeared near or far from the target. We quantified how spatial proximity between the abrupt-onset and the target shaped the goal-directed response. We found that the appearance of an abrupt-onset near the target induced a transient decrease in goal-directed discharge of SC visuomotor neurons. Although this was indicative of spatial competition, it was immediately followed by a rebound in presaccadic activation, which facilitated the saccadic response (i.e., it induced shorter saccadic RT). A similar suppression also occurred at most nontarget locations even in the absence of the abrupt-onset. This is indicative of a mechanism that enabled monkeys to quickly discount stimuli that shared the common nontarget feature. These results reveal a pattern of excitation/inhibition across the SC visuomotor map that acted to facilitate optimal behavior-the short duration suppression minimized the probability of capture by salient distractors, whereas a subsequent boost in accumulation rate ensured a fast goal-directed response. Such nonlinear dynamics should be incorporated into future biologically plausible models of saccade behavior.

  3. EMERALD: Coping with the Explosion of Seismic Data

    NASA Astrophysics Data System (ADS)

    West, J. D.; Fouch, M. J.; Arrowsmith, R.

    2009-12-01

    The geosciences are currently generating an unparalleled quantity of new public broadband seismic data with the establishment of large-scale seismic arrays such as the EarthScope USArray, which are enabling new and transformative scientific discoveries of the structure and dynamics of the Earth’s interior. Much of this explosion of data is a direct result of the formation of the IRIS consortium, which has enabled an unparalleled level of open exchange of seismic instrumentation, data, and methods. The production of these massive volumes of data has generated new and serious data management challenges for the seismological community. A significant challenge is the maintenance and updating of seismic metadata, which includes information such as station location, sensor orientation, instrument response, and clock timing data. This key information changes at unknown intervals, and the changes are not generally communicated to data users who have already downloaded and processed data. Another basic challenge is the ability to handle massive seismic datasets when waveform file volumes exceed the fundamental limitations of a computer’s operating system. A third, long-standing challenge is the difficulty of exchanging seismic processing codes between researchers; each scientist typically develops his or her own unique directory structure and file naming convention, requiring that codes developed by another researcher be rewritten before they can be used. To address these challenges, we are developing EMERALD (Explore, Manage, Edit, Reduce, & Analyze Large Datasets). The overarching goal of the EMERALD project is to enable more efficient and effective use of seismic datasets ranging from just a few hundred to millions of waveforms with a complete database-driven system, leading to higher quality seismic datasets for scientific analysis and enabling faster, more efficient scientific research. We will present a preliminary (beta) version of EMERALD, an integrated, extensible, standalone database server system based on the open-source PostgreSQL database engine. The system is designed for fast and easy processing of seismic datasets, and provides the necessary tools to manage very large datasets and all associated metadata. EMERALD provides methods for efficient preprocessing of seismic records; large record sets can be easily and quickly searched, reviewed, revised, reprocessed, and exported. EMERALD can retrieve and store station metadata and alert the user to metadata changes. The system provides many methods for visualizing data, analyzing dataset statistics, and tracking the processing history of individual datasets. EMERALD allows development and sharing of visualization and processing methods using any of 12 programming languages. EMERALD is designed to integrate existing software tools; the system provides wrapper functionality for existing widely-used programs such as GMT, SOD, and TauP. Users can interact with EMERALD via a web browser interface, or they can directly access their data from a variety of database-enabled external tools. Data can be imported and exported from the system in a variety of file formats, or can be directly requested and downloaded from the IRIS DMC from within EMERALD.
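
    As an illustration of the database-driven access EMERALD is built around, the sketch below runs a hypothetical metadata query against a PostgreSQL backend using psycopg2. The table and column names (waveforms, station, start_time) are invented for the example and do not reflect the actual EMERALD schema.

      import psycopg2

      def waveforms_for_station(dsn: str, station: str, after: str):
          """Fetch waveform metadata rows for one station recorded after a given time.
          Table and column names are placeholders, not the real EMERALD schema."""
          query = """
              SELECT waveform_id, station, channel, start_time
              FROM waveforms
              WHERE station = %s AND start_time >= %s
              ORDER BY start_time
          """
          with psycopg2.connect(dsn) as conn:
              with conn.cursor() as cur:
                  cur.execute(query, (station, after))
                  return cur.fetchall()

      # Example call (requires a reachable PostgreSQL instance):
      # rows = waveforms_for_station("dbname=emerald user=seismo", "TA.109C", "2009-01-01")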

  4. Remote visualization and scale analysis of large turbulence datasets

    NASA Astrophysics Data System (ADS)

    Livescu, D.; Pulido, J.; Burns, R.; Canada, C.; Ahrens, J.; Hamann, B.

    2015-12-01

    Accurate simulations of turbulent flows require solving all the dynamically relevant scales of motions. This technique, called Direct Numerical Simulation, has been successfully applied to a variety of simple flows; however, the large-scale flows encountered in Geophysical Fluid Dynamics (GFD) would require meshes outside the range of the most powerful supercomputers for the foreseeable future. Nevertheless, the current generation of petascale computers has enabled unprecedented simulations of many types of turbulent flows which focus on various GFD aspects, from the idealized configurations extensively studied in the past to more complex flows closer to the practical applications. The pace at which such simulations are performed only continues to increase; however, the simulations themselves are restricted to a small number of groups with access to large computational platforms. Yet the petabytes of turbulence data offer almost limitless information on many different aspects of the flow, from the hierarchy of turbulence moments, spectra and correlations, to structure-functions, geometrical properties, etc. The ability to share such datasets with other groups can significantly reduce the time to analyze the data, help the creative process and increase the pace of discovery. Using the largest DOE supercomputing platforms, we have performed some of the biggest turbulence simulations to date, in various configurations, addressing specific aspects of turbulence production and mixing mechanisms. Until recently, the visualization and analysis of such datasets was restricted by access to large supercomputers. The public Johns Hopkins Turbulence database simplifies the access to multi-Terabyte turbulence datasets and facilitates turbulence analysis through the use of commodity hardware. First, one of our datasets, which is part of the database, will be described and then a framework that adds high-speed visualization and wavelet support for multi-resolution analysis of turbulence will be highlighted. The addition of wavelet support reduces the latency and bandwidth requirements for visualization, allowing for many concurrent users, and enables new types of analyses, including scale decomposition and coherent feature extraction.
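
    The wavelet-based scale decomposition mentioned above can be prototyped in a few lines with PyWavelets. The sketch below, assuming a 1D velocity signal for brevity, splits the signal into large-scale and fine-scale parts and compares their energies; it illustrates the idea rather than the database's implementation.

      import numpy as np
      import pywt

      def scale_split(signal: np.ndarray, wavelet: str = "db4", level: int = 4):
          """Separate a 1D signal into a large-scale approximation and the summed fine-scale detail."""
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
          detail_only = [np.zeros_like(coeffs[0])] + list(coeffs[1:])
          large = pywt.waverec(approx_only, wavelet)[: len(signal)]
          small = pywt.waverec(detail_only, wavelet)[: len(signal)]
          return large, small

      # Synthetic "turbulent" signal: a slow mode plus broadband fluctuations.
      x = np.linspace(0, 2 * np.pi, 1024)
      u = np.sin(x) + 0.2 * np.random.randn(x.size)
      large, small = scale_split(u)
      print("large-scale energy:", np.sum(large ** 2), "small-scale energy:", np.sum(small ** 2))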

  5. A Robot Hand Testbed Designed for Enhancing Embodiment and Functional Neurorehabilitation of Body Schema in Subjects with Upper Limb Impairment or Loss

    PubMed Central

    Hellman, Randall B.; Chang, Eric; Tanner, Justin; Helms Tillery, Stephen I.; Santos, Veronica J.

    2015-01-01

    Many upper limb amputees experience an incessant, post-amputation “phantom limb pain” and report that their missing limbs feel paralyzed in an uncomfortable posture. One hypothesis is that efferent commands no longer generate expected afferent signals, such as proprioceptive feedback from changes in limb configuration, and that the mismatch of motor commands and visual feedback is interpreted as pain. Non-invasive therapeutic techniques for treating phantom limb pain, such as mirror visual feedback (MVF), rely on visualizations of postural changes. Advances in neural interfaces for artificial sensory feedback now make it possible to combine MVF with a high-tech “rubber hand” illusion, in which subjects develop a sense of embodiment with a fake hand when subjected to congruent visual and somatosensory feedback. We discuss clinical benefits that could arise from the confluence of known concepts such as MVF and the rubber hand illusion, and new technologies such as neural interfaces for sensory feedback and highly sensorized robot hand testbeds, such as the “BairClaw” presented here. Our multi-articulating, anthropomorphic robot testbed can be used to study proprioceptive and tactile sensory stimuli during physical finger–object interactions. Conceived for artificial grasp, manipulation, and haptic exploration, the BairClaw could also be used for future studies on the neurorehabilitation of somatosensory disorders due to upper limb impairment or loss. A remote actuation system enables the modular control of tendon-driven hands. The artificial proprioception system enables direct measurement of joint angles and tendon tensions while temperature, vibration, and skin deformation are provided by a multimodal tactile sensor. The provision of multimodal sensory feedback that is spatiotemporally consistent with commanded actions could lead to benefits such as reduced phantom limb pain, and increased prosthesis use due to improved functionality and reduced cognitive burden. PMID:25745391

  6. A robot hand testbed designed for enhancing embodiment and functional neurorehabilitation of body schema in subjects with upper limb impairment or loss.

    PubMed

    Hellman, Randall B; Chang, Eric; Tanner, Justin; Helms Tillery, Stephen I; Santos, Veronica J

    2015-01-01

    Many upper limb amputees experience an incessant, post-amputation "phantom limb pain" and report that their missing limbs feel paralyzed in an uncomfortable posture. One hypothesis is that efferent commands no longer generate expected afferent signals, such as proprioceptive feedback from changes in limb configuration, and that the mismatch of motor commands and visual feedback is interpreted as pain. Non-invasive therapeutic techniques for treating phantom limb pain, such as mirror visual feedback (MVF), rely on visualizations of postural changes. Advances in neural interfaces for artificial sensory feedback now make it possible to combine MVF with a high-tech "rubber hand" illusion, in which subjects develop a sense of embodiment with a fake hand when subjected to congruent visual and somatosensory feedback. We discuss clinical benefits that could arise from the confluence of known concepts such as MVF and the rubber hand illusion, and new technologies such as neural interfaces for sensory feedback and highly sensorized robot hand testbeds, such as the "BairClaw" presented here. Our multi-articulating, anthropomorphic robot testbed can be used to study proprioceptive and tactile sensory stimuli during physical finger-object interactions. Conceived for artificial grasp, manipulation, and haptic exploration, the BairClaw could also be used for future studies on the neurorehabilitation of somatosensory disorders due to upper limb impairment or loss. A remote actuation system enables the modular control of tendon-driven hands. The artificial proprioception system enables direct measurement of joint angles and tendon tensions while temperature, vibration, and skin deformation are provided by a multimodal tactile sensor. The provision of multimodal sensory feedback that is spatiotemporally consistent with commanded actions could lead to benefits such as reduced phantom limb pain, and increased prosthesis use due to improved functionality and reduced cognitive burden.

  7. Fluorescence In situ Hybridization: Cell-Based Genetic Diagnostic and Research Applications.

    PubMed

    Cui, Chenghua; Shu, Wei; Li, Peining

    2016-01-01

    Fluorescence in situ hybridization (FISH) is a macromolecule recognition technology based on the complementary nature of DNA or DNA/RNA double strands. Selected DNA strands incorporated with fluorophore-coupled nucleotides can be used as probes to hybridize onto the complementary sequences in tested cells and tissues and then visualized through a fluorescence microscope or an imaging system. This technology was initially developed as a physical mapping tool to delineate genes within chromosomes. Its high analytical resolution to a single gene level and high sensitivity and specificity enabled an immediate application for genetic diagnosis of constitutional common aneuploidies, microdeletion/microduplication syndromes, and subtelomeric rearrangements. FISH tests using panels of gene-specific probes for somatic recurrent losses, gains, and translocations have been routinely applied for hematologic and solid tumors and are one of the fastest-growing areas in cancer diagnosis. FISH has also been used to detect infectious microbes and parasites like malaria in human blood cells. Recent advances in FISH technology involve various methods for improving probe labeling efficiency and the use of super-resolution imaging systems for direct visualization of intra-nuclear chromosomal organization and profiling of RNA transcription in single cells. Cas9-mediated FISH (CASFISH) allowed in situ labeling of repetitive sequences and single-copy sequences without the disruption of nuclear genomic organization in fixed or living cells. Using oligopaint-FISH and super-resolution imaging enabled in situ visualization of chromosome haplotypes from differentially specified single-nucleotide polymorphism loci. Single molecule RNA FISH (smRNA-FISH) using combinatorial labeling or sequential barcoding by multiple rounds of hybridization was applied to measure mRNA expression of multiple genes within single cells. Research applications of these single-molecule, single-cell DNA and RNA FISH techniques have visualized intra-nuclear genomic structure and sub-cellular transcriptional dynamics of many genes and revealed their functions in various biological processes.

  8. A PCR detection method for rapid identification of Melissococcus pluton in honeybee larvae.

    PubMed

    Govan, V A; Brözel, V; Allsopp, M H; Davison, S

    1998-05-01

    Melissococcus pluton is the causative agent of European foulbrood, a disease of honeybee larvae. This bacterium is particularly difficult to isolate because of its stringent growth requirements and competition from other bacteria. PCR was used selectively to amplify specific rRNA gene sequences of M. pluton from pure culture, from crude cell lysates, and directly from infected bee larvae. The PCR primers were designed from M. pluton 16S rRNA sequence data. The PCR products were visualized by agarose gel electrophoresis and confirmed as originating from M. pluton by sequencing in both directions. Detection was highly specific, and the probes did not hybridize with DNA from other bacterial species tested. This method enabled the rapid and specific detection and identification of M. pluton from pure cultures and infected bee larvae.

  9. A PCR Detection Method for Rapid Identification of Melissococcus pluton in Honeybee Larvae

    PubMed Central

    Govan, V. A.; Brözel, V.; Allsopp, M. H.; Davison, S.

    1998-01-01

    Melissococcus pluton is the causative agent of European foulbrood, a disease of honeybee larvae. This bacterium is particularly difficult to isolate because of its stringent growth requirements and competition from other bacteria. PCR was used selectively to amplify specific rRNA gene sequences of M. pluton from pure culture, from crude cell lysates, and directly from infected bee larvae. The PCR primers were designed from M. pluton 16S rRNA sequence data. The PCR products were visualized by agarose gel electrophoresis and confirmed as originating from M. pluton by sequencing in both directions. Detection was highly specific, and the probes did not hybridize with DNA from other bacterial species tested. This method enabled the rapid and specific detection and identification of M. pluton from pure cultures and infected bee larvae. PMID:9572987

  10. Protein subcellular localization assays using split fluorescent proteins

    DOEpatents

    Waldo, Geoffrey S [Santa Fe, NM; Cabantous, Stephanie [Los Alamos, NM

    2009-09-08

    The invention provides protein subcellular localization assays using split fluorescent protein systems. The assays are conducted in living cells, do not require fixation and washing steps inherent in existing immunostaining and related techniques, and permit rapid, non-invasive, direct visualization of protein localization in living cells. The split fluorescent protein systems used in the practice of the invention generally comprise two or more self-complementing fragments of a fluorescent protein, such as GFP, wherein one or more of the fragments correspond to one or more beta-strand microdomains and are used to "tag" proteins of interest, and a complementary "assay" fragment of the fluorescent protein. Either or both of the fragments may be functionalized with a subcellular targeting sequence enabling it to be expressed in or directed to a particular subcellular compartment (i.e., the nucleus).

  11. Planetary Surface Visualization and Analytics

    NASA Astrophysics Data System (ADS)

    Law, E. S.; Solar System Treks Team

    2018-04-01

    An introduction and update of the Solar System Treks Project which provides a suite of interactive visualization and analysis tools to enable users (engineers, scientists, public) to access large amounts of mapped planetary data products.

  12. A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents

    PubMed Central

    Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha

    2017-01-01

    Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control—enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates. PMID:28446872
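
    The circular-array encoding of the home vector described above can be reduced to a few lines of NumPy: each of N heading-tuned cells accumulates the projection of every step onto its preferred direction, and a population-vector readout of the array points back to the nest. The code below is a minimal sketch of that idea, not the published model.

      import numpy as np

      N = 36                                   # number of heading-tuned cells
      preferred = np.linspace(0, 2 * np.pi, N, endpoint=False)

      def integrate_path(headings, step_lengths):
          """Accumulate path-integration activity over a sequence of steps."""
          activity = np.zeros(N)
          for theta, d in zip(headings, step_lengths):
              activity += d * np.cos(theta - preferred)   # cosine-tuned compass input
          return activity

      def decode_home_vector(activity):
          """Population-vector readout: direction and length of the vector back to the nest."""
          x = np.sum(activity * np.cos(preferred))
          y = np.sum(activity * np.sin(preferred))
          outbound_angle = np.arctan2(y, x)
          home_angle = (outbound_angle + np.pi) % (2 * np.pi)  # home lies opposite the net outbound direction
          return home_angle, np.hypot(x, y) * 2.0 / N

      # Outbound trip: 10 unit steps east then 10 north; home should lie to the south-west.
      headings = [0.0] * 10 + [np.pi / 2] * 10
      activity = integrate_path(headings, [1.0] * 20)
      angle, length = decode_home_vector(activity)
      print(np.degrees(angle), length)   # ~225 degrees, length ~14.1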

  13. A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents.

    PubMed

    Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha

    2017-01-01

    Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector for guiding them back home on a straight path. They further acquire and retrieve path integration-based vector memories globally to the nest or based on visual landmarks. Although existing computational models reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control-enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation performed under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent linking behavioral observations to their possible underlying neural substrates.

  14. CerebralWeb: a Cytoscape.js plug-in to visualize networks stratified by subcellular localization.

    PubMed

    Frias, Silvia; Bryan, Kenneth; Brinkman, Fiona S L; Lynn, David J

    2015-01-01

    CerebralWeb is a light-weight JavaScript plug-in that extends Cytoscape.js to enable fast and interactive visualization of molecular interaction networks stratified based on subcellular localization or other user-supplied annotation. The application is designed to be easily integrated into any website and is configurable to support customized network visualization. CerebralWeb also supports the automatic retrieval of Cerebral-compatible localizations for human, mouse and bovine genes via a web service and enables the automated parsing of Cytoscape compatible XGMML network files. CerebralWeb currently supports embedded network visualization on the InnateDB (www.innatedb.com) and Allergy and Asthma Portal (allergen.innatedb.com) database and analysis resources. Database tool URL: http://www.innatedb.com/CerebralWeb © The Author(s) 2015. Published by Oxford University Press.

  15. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  16. The wide-range ejector flowmeter: calibrated gas evacuation comprising both high and low gas flows.

    PubMed

    Waaben, J; Brinkløv, M M; Jørgensen, S

    1984-11-01

    The wide-range ejector flowmeter is an active scavenging system applying calibrated gas removal directly to the anaesthetic circuit. The evacuation rate can be adjusted on the flowmeter under visual control using the calibration scale ranging from 200 ml·min-1 to 15 l·min-1. The accuracy of the calibration was tested on three ejector flowmeters at 12 different presettings. The percentage deviation from the presetting varied from +18 to -19.4 per cent. The ejector flowmeter enables the provision of consistent and accurately calibrated extraction of waste gases and is applicable within a wide range of fresh gas flows.

  17. Photolithography of Dithiocarbamate-Anchored Monolayers and Polymers on Gold

    PubMed Central

    Leonov, Alexei P.; Wei, Alexander

    2011-01-01

    Dithiocarbamate (DTC)-anchored monolayers and polymers were investigated as positive resists for UV photolithography on planar and roughened Au surfaces. DTCs were formed in situ by the condensation of CS2 with monovalent or polyvalent amines such as linear polyethyleneimine (PEI) under mildly basic aqueous conditions, just prior to surface passivation. The robust adsorption of the polyvalent PEI-DTC to Au surfaces supported high levels of resistance to photoablation, providing opportunities to generate thin films with gradient functionality. Treatment of photopatterned substrates with alkanethiols produced binary coatings, enabling a direct visual comparison of DTC- and thiol-passivated surfaces against chemically induced corrosion using confocal microscopy. PMID:21894240

  18. On the predictions of the 11B solid state NMR parameters

    NASA Astrophysics Data System (ADS)

    Czernek, Jiří; Brus, Jiří

    2016-07-01

    A set of boron-containing compounds has been subjected to the prediction of the 11B solid state NMR spectral parameters using DFT-GIPAW methods that properly treat solid-phase effects. The quantification of the differences between measured and theoretical values has been presented, which is directly applicable in structural studies involving 11B nuclei. In particular, a simple scheme has been proposed, which is expected to provide an estimate of the 11B chemical shift within ±2.0 ppm of the experimental value. The computer program INFOR, which enables the visualization of the concomitant Euler rotations related to the tensorial transformations, has also been presented.
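
    The Euler rotations mentioned above are ordinary similarity transformations of a second-rank tensor. The NumPy sketch below rotates a diagonal 11B shielding tensor through ZYZ Euler angles; the example principal values are arbitrary, and the INFOR program itself is not reproduced here.

      import numpy as np

      def rot_z(angle):
          c, s = np.cos(angle), np.sin(angle)
          return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

      def rot_y(angle):
          c, s = np.cos(angle), np.sin(angle)
          return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

      def euler_zyz(alpha, beta, gamma):
          """Active ZYZ rotation matrix R(alpha, beta, gamma)."""
          return rot_z(alpha) @ rot_y(beta) @ rot_z(gamma)

      def rotate_tensor(sigma_pas, alpha, beta, gamma):
          """Transform a tensor from its principal axis system to a new frame: R sigma R^T."""
          R = euler_zyz(alpha, beta, gamma)
          return R @ sigma_pas @ R.T

      # Arbitrary example: a shielding tensor given by its principal values (ppm).
      sigma_pas = np.diag([80.0, 95.0, 120.0])
      sigma_lab = rotate_tensor(sigma_pas, np.radians(30), np.radians(60), np.radians(10))
      print(np.round(sigma_lab, 2))
      print("isotropic value preserved:", np.isclose(np.trace(sigma_lab) / 3, np.trace(sigma_pas) / 3))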

  19. Soft X-ray Foucault test: A path to diffraction-limited imaging

    NASA Astrophysics Data System (ADS)

    Ray-Chaudhuri, A. K.; Ng, W.; Liang, S.; Cerrina, F.

    1994-08-01

    We present the development of a soft X-ray Foucault test capable of characterizing the imaging properties of a soft X-ray optical system at its operational wavelength and its operational configuration. This optical test enables direct visual inspection of imaging aberrations and provides real-time feedback for the alignment of high resolution soft X-ray optical systems. A first application of this optical test was carried out on a Mo-Si multilayer-coated Schwarzschild objective as part of the MAXIMUM project. Results from the alignment procedure are presented as well as the possibility for testing in the hard X-ray regime.

  20. Laser Capture Microdissection in the Genomic and Proteomic Era: Targeting the Genetic Basis of Cancer

    PubMed Central

    Domazet, Barbara; MacLennan, Gregory T.; Lopez-Beltran, Antonio; Montironi, Rodolfo; Cheng, Liang

    2008-01-01

    The advent of new technologies has enabled deeper insight into processes at subcellular levels, which will ultimately improve diagnostic procedures and patient outcome. Thanks to cell enrichment methods, it is now possible to study cells in their native environment. This has greatly contributed to a rapid growth in several areas, such as gene expression analysis, proteomics, and metabolonomics. Laser capture microdissection (LCM) as a method of procuring subpopulations of cells under direct visual inspection is playing an important role in these areas. This review provides an overview of existing LCM technology and its downstream applications in genomics, proteomics, diagnostics and therapy. PMID:18787684

  1. Laser capture microdissection in the genomic and proteomic era: targeting the genetic basis of cancer.

    PubMed

    Domazet, Barbara; Maclennan, Gregory T; Lopez-Beltran, Antonio; Montironi, Rodolfo; Cheng, Liang

    2008-03-15

    The advent of new technologies has enabled deeper insight into processes at subcellular levels, which will ultimately improve diagnostic procedures and patient outcome. Thanks to cell enrichment methods, it is now possible to study cells in their native environment. This has greatly contributed to a rapid growth in several areas, such as gene expression analysis, proteomics, and metabolonomics. Laser capture microdissection (LCM) as a method of procuring subpopulations of cells under direct visual inspection is playing an important role in these areas. This review provides an overview of existing LCM technology and its downstream applications in genomics, proteomics, diagnostics and therapy.

  2. Spelling: A Visual Skill.

    ERIC Educational Resources Information Center

    Hendrickson, Homer

    1988-01-01

    Spelling problems arise due to problems with form discrimination and inadequate visualization. A child's sequence of visual development involves learning motor control and coordination, with vision directing and monitoring the movements; learning visual comparison of size, shape, directionality, and solidity; developing visual memory or recall;…

  3. Open Source Next Generation Visualization Software for Interplanetary Missions

    NASA Technical Reports Server (NTRS)

    Trimble, Jay; Rinker, George

    2016-01-01

    Mission control is evolving quickly, driven by the requirements of new missions, and enabled by modern computing capabilities. Distributed operations, access to data anywhere, data visualization for spacecraft analysis that spans multiple data sources, flexible reconfiguration to support multiple missions, and operator use cases, are driving the need for new capabilities. NASA's Advanced Multi-Mission Operations System (AMMOS), Ames Research Center (ARC) and the Jet Propulsion Laboratory (JPL) are collaborating to build a new generation of mission operations software for visualization, to enable mission control anywhere, on the desktop, tablet and phone. The software is built on an open source platform that is open for contributions (http://nasa.github.io/openmct).

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugmire, David; Kress, James; Choi, Jong

    Data-driven science is becoming increasingly common and complex, and it is placing tremendous stresses on visualization and analysis frameworks. Data sources producing 10 GB per second (and more) are becoming increasingly commonplace in simulation, sensor, and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query and interact with such large volumes of data in near-real-time requires a rich fusion of visualization and analysis techniques, middleware and workflow systems. This paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions about large volumes of time-varying data.

  5. How Does Technology-Enabled Active Learning Affect Undergraduate Students' Understanding of Electromagnetism Concepts?

    ERIC Educational Resources Information Center

    Dori, Yehudit Judy; Belcher, John

    2005-01-01

    Educational technology supports meaningful learning and enables the presentation of spatial and dynamic images, which portray relationships among complex concepts. The Technology-Enabled Active Learning (TEAL) Project at the Massachusetts Institute of Technology (MIT) involves media-rich software for simulation and visualization in freshman…

  6. An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training

    NASA Astrophysics Data System (ADS)

    Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel

    2017-03-01

    Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. In this study we describe the design, implementation, and evaluation of an interactive and stereoscopic visualization platform for exploring and understanding human anatomy. This system can present medical imaging data in three dimensions and allows for direct physical interaction and manipulation by the viewer. This should provide numerous benefits over traditional, 2D display and interaction modalities, and in our analysis, we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.

  7. Metabolome searcher: a high throughput tool for metabolite identification and metabolic pathway mapping directly from mass spectrometry and using genome restriction.

    PubMed

    Dhanasekaran, A Ranjitha; Pearson, Jon L; Ganesan, Balasubramanian; Weimer, Bart C

    2015-02-25

    Mass spectrometric analysis of microbial metabolism provides a long list of possible compounds. Restricting the identification of the possible compounds to those produced by the specific organism would benefit the identification process. Currently, identification of mass spectrometry (MS) data is commonly done using empirically derived compound databases. Unfortunately, most databases contain relatively few compounds, leaving long lists of unidentified molecules. Incorporating genome-encoded metabolism enables MS output identification that may not be included in databases. Using an organism's genome as a database restricts metabolite identification to only those compounds that the organism can produce. To address the challenge of metabolomic analysis from MS data, a web-based application to directly search genome-constructed metabolic databases was developed. The user query returns a genome-restricted list of possible compound identifications along with the putative metabolic pathways based on the name, formula, SMILES structure, and the compound mass as defined by the user. Multiple queries can be done simultaneously by submitting a text file created by the user or obtained from the MS analysis software. The user can also provide parameters specific to the experiment's MS analysis conditions, such as mass deviation, adducts, and detection mode during the query so as to provide additional levels of evidence to produce the tentative identification. The query results are provided as an HTML page and downloadable text file of possible compounds that are restricted to a specific genome. Hyperlinks provided in the HTML file connect the user to the curated metabolic databases housed in ProCyc, a Pathway Tools platform, as well as the KEGG Pathway database for visualization and metabolic pathway analysis. Metabolome Searcher, a web-based tool, facilitates putative compound identification of MS output based on genome-restricted metabolic capability. This enables researchers to rapidly extend the possible identifications of large data sets for metabolites that are not in compound databases. Putative compound names with their associated metabolic pathways from metabolomics data sets are returned to the user for additional biological interpretation and visualization. This novel approach enables compound identification by restricting the possible masses to those encoded in the genome.
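
    As a rough illustration of the genome-restricted mass lookup described above (not the Metabolome Searcher implementation itself), the following Python sketch matches observed m/z values against a small, hypothetical organism-specific compound list, applying an assumed adduct table and a ppm mass tolerance:

    # Minimal sketch of genome-restricted mass matching (illustrative, not the
    # Metabolome Searcher code). Compound masses are monoisotopic, in Da.
    PROTON = 1.007276  # mass added by protonation

    # Hypothetical genome-restricted compound list: name -> monoisotopic mass
    genome_compounds = {
        "L-lactate": 90.03169,
        "pyruvate": 88.01604,
        "L-glutamate": 147.05316,
    }

    # Assumed positive-mode adducts and their mass shifts
    adducts = {"[M+H]+": PROTON, "[M+Na]+": 22.98922}

    def match_mz(observed_mz, tolerance_ppm=10.0):
        """Return (compound, adduct, error_ppm) tuples within the ppm window."""
        hits = []
        for name, mass in genome_compounds.items():
            for adduct, shift in adducts.items():
                theoretical = mass + shift
                error_ppm = (observed_mz - theoretical) / theoretical * 1e6
                if abs(error_ppm) <= tolerance_ppm:
                    hits.append((name, adduct, round(error_ppm, 2)))
        return hits

    for mz in (91.0390, 148.0604):
        print(mz, "->", match_mz(mz))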

  8. Virtual Worlds, Virtual Literacy: An Educational Exploration

    ERIC Educational Resources Information Center

    Stoerger, Sharon

    2008-01-01

    Virtual worlds enable students to learn through seeing, knowing, and doing within visually rich and mentally engaging spaces. Rather than reading about events, students become part of the events through the adoption of a pre-set persona. Along with visual feedback that guides the players' activities and the development of visual skills, visual…

  9. Form + Theme + Context: Balancing Considerations for Meaningful Art Learning

    ERIC Educational Resources Information Center

    Sandell, Renee

    2006-01-01

    Today's students need visual literacy skills and knowledge that enable them to encode concepts as well as decode the meaning of society's images, ideas, and media of the past as well as the increasingly complex visual world. In this article, the author discusses how art teachers can help students understand the increasingly visual/material…

  10. Visual and Verbal Literacy.

    ERIC Educational Resources Information Center

    Stewig, John Warren

    Visual literacy--seeing with insight--enables child viewers of pictures to examine elements such as color, line, shape, form, depth, and detail to see what relations exist both among these components and between what is in the picture and their previous visual experience. The viewer can extract meaning and respond to it, either by talking or…

  11. Visual Data Comm: A Tool for Visualizing Data Communication in the Multi Sector Planner Study

    NASA Technical Reports Server (NTRS)

    Lee, Hwasoo Eric

    2010-01-01

    Data comm is a new technology proposed for the future air transport system as a potential tool to provide comprehensive data connectivity. It is a key enabler for managing 4D trajectories digitally, potentially resulting in improved flight times and increased throughput. Future concepts with data comm integration have been tested in a number of human-in-the-loop studies, but analyzing the results has proven to be particularly challenging because the future traffic environment in which data comm is fully enabled assumes high traffic density, resulting in data sets with large amounts of information. This paper describes the motivation, design, and current and potential future applications of Visual Data Comm (VDC), a tool for visualizing data, developed in Java using the Processing library, a toolkit designed for interactive visualization programming. The paper includes an example of an application of VDC to data from the most recent Multi Sector Planner study, conducted at NASA's Airspace Operations Laboratory in 2009, in which VDC was used to visualize and interpret data comm activities.

  12. Streaming Visual Analytics Workshop Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Kristin A.; Burtner, Edwin R.; Kritzstein, Brian P.

    How can we best enable users to understand complex emerging events and make appropriate assessments from streaming data? This was the central question addressed at a three-day workshop on streaming visual analytics. This workshop was organized by Pacific Northwest National Laboratory for a government sponsor. It brought together forty researchers and subject matter experts from government, industry, and academia. This report summarizes the outcomes from that workshop. It describes elements of the vision for a streaming visual analytic environment and a set of important research directions needed to achieve this vision. Streaming data analysis is in many ways the analysis and understanding of change. However, current visual analytics systems usually focus on static data collections, meaning that dynamically changing conditions are not appropriately addressed. The envisioned mixed-initiative streaming visual analytics environment creates a collaboration between the analyst and the system to support the analysis process. It raises the level of discourse from low-level data records to higher-level concepts. The system supports the analyst’s rapid orientation and reorientation as situations change. It provides an environment to support the analyst’s critical thinking. It infers tasks and interests based on the analyst’s interactions. The system works as both an assistant and a devil’s advocate, finding relevant data and alerts as well as considering alternative hypotheses. Finally, the system supports sharing of findings with others. Making such an environment a reality requires research in several areas. The workshop discussions focused on four broad areas: support for critical thinking, visual representation of change, mixed-initiative analysis, and the use of narratives for analysis and communication.

  13. `We put on the glasses and Moon comes closer!' Urban Second Graders Exploring the Earth, the Sun and Moon Through 3D Technologies in a Science and Literacy Unit

    NASA Astrophysics Data System (ADS)

    Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin

    2014-01-01

    This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day and night, Moon phases and seasons. These modules were used in a science and literacy unit for 35 second graders at an urban elementary school in Midwestern USA. Data included pre- and post-interviews, audio-taped lessons and classroom observations. Post-interviews demonstrated that children's knowledge of the shapes and the movements of the Earth and Moon, alternation of day and night, the occurrence of the seasons, and Moon's changing appearance increased. Second graders reported that they enjoyed expanding their knowledge through hands-on experiences; through its reality effect, 3D visualization enabled them to observe the space objects that move in the virtual space. The teachers noted that 3D visualization stimulated children's interest in space and that using 3D visualization in combination with other teaching methods-literacy experiences, videos and photos, simulations, discussions, and presentations-supported student learning. The teachers and the students still experienced challenges using 3D visualization due to technical problems with 3D vision and time constraints. We conclude that 3D visualization offers hands-on experiences for challenging science concepts and may support young children's ability to view phenomena that would typically be observed through direct, long-term observations in outer space. Results imply a reconsideration of assumed capabilities of young children to understand astronomical phenomena.

  14. Congruent representation of visual and acoustic space in the superior colliculus of the echolocating bat Phyllostomus discolor.

    PubMed

    Hoffmann, Susanne; Vega-Zuniga, Tomas; Greiter, Wolfgang; Krabichler, Quirin; Bley, Alexandra; Matthes, Mariana; Zimmer, Christiane; Firzlaff, Uwe; Luksch, Harald

    2016-11-01

    The midbrain superior colliculus (SC) commonly features a retinotopic representation of visual space in its superficial layers, which is congruent with maps formed by multisensory neurons and motor neurons in its deep layers. Information flow between layers is suggested to enable the SC to mediate goal-directed orienting movements. While most mammals strongly rely on vision for orienting, some species such as echolocating bats have developed alternative strategies, which raises the question how sensory maps are organized in these animals. We probed the visual system of the echolocating bat Phyllostomus discolor and found that binocular high acuity vision is frontally oriented and thus aligned with the biosonar system, whereas monocular visual fields cover a large area of peripheral space. For the first time in echolocating bats, we could show that in contrast with other mammals, visual processing is restricted to the superficial layers of the SC. The topographic representation of visual space, however, followed the general mammalian pattern. In addition, we found a clear topographic representation of sound azimuth in the deeper collicular layers, which was congruent with the superficial visual space map and with a previously documented map of orienting movements. Especially for bats navigating at high speed in densely structured environments, it is vitally important to transfer and coordinate spatial information between sensors and motor systems. Here, we demonstrate first evidence for the existence of congruent maps of sensory space in the bat SC that might serve to generate a unified representation of the environment to guide motor actions. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  15. QuIN: A Web Server for Querying and Visualizing Chromatin Interaction Networks

    PubMed Central

    Thibodeau, Asa; Márquez, Eladio J.; Luo, Oscar; Ruan, Yijun; Shin, Dong-Guk; Stitzel, Michael L.; Ucar, Duygu

    2016-01-01

    Recent studies of the human genome have indicated that regulatory elements (e.g. promoters and enhancers) at distal genomic locations can interact with each other via chromatin folding and affect gene expression levels. Genomic technologies for mapping interactions between DNA regions, e.g., ChIA-PET and HiC, can generate genome-wide maps of interactions between regulatory elements. These interaction datasets are important resources to infer distal gene targets of non-coding regulatory elements and to facilitate prioritization of critical loci for important cellular functions. With the increasing diversity and complexity of genomic information and public ontologies, making sense of these datasets demands integrative and easy-to-use software tools. Moreover, network representation of chromatin interaction maps enables effective data visualization, integration, and mining. Currently, there is no software that can take full advantage of network theory approaches for the analysis of chromatin interaction datasets. To fill this gap, we developed a web-based application, QuIN, which enables: 1) building and visualizing chromatin interaction networks, 2) annotating networks with user-provided private and publicly available functional genomics and interaction datasets, 3) querying network components based on gene name or chromosome location, and 4) utilizing network-based measures to identify and prioritize critical regulatory targets and their direct and indirect interactions. AVAILABILITY: QuIN’s web server is available at http://quin.jax.org. QuIN is developed in Java and JavaScript, utilizing an Apache Tomcat web server and a MySQL database, and the source code is available under the GPLv3 license on GitHub: https://github.com/UcarLab/QuIN/. PMID:27336171
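
    The network-based prioritization that QuIN performs can be approximated offline with a general-purpose graph library; the sketch below (not QuIN's own Java code) builds a toy interaction network with invented anchors, ranks them by degree centrality, and lists direct and indirect partners of a queried anchor:

    # Illustrative sketch (not QuIN itself): chromatin interactions as a graph,
    # with anchors ranked by degree centrality. Anchors and edges are invented.
    import networkx as nx

    # Each edge links two interaction anchors, weighted by read support.
    interactions = [
        ("chr1:1000-2000", "chr1:50000-51000", 12),
        ("chr1:1000-2000", "chr1:90000-91000", 5),
        ("chr1:50000-51000", "chr1:90000-91000", 3),
        ("chr1:90000-91000", "chr1:200000-201000", 7),
    ]

    G = nx.Graph()
    for a, b, support in interactions:
        G.add_edge(a, b, weight=support)

    # Degree centrality as a simple stand-in for network-based prioritization.
    for anchor, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
        print(f"{anchor}\tcentrality={score:.2f}")

    # Direct (1-step) and indirect (2-step) partners of a queried anchor.
    query = "chr1:1000-2000"
    direct = set(G.neighbors(query))
    within_two = set(nx.single_source_shortest_path_length(G, query, cutoff=2))
    indirect = within_two - direct - {query}
    print("direct:", direct, "indirect:", indirect)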

  16. In-flight flow visualization results from the X-29A aircraft at high angles of attack

    NASA Technical Reports Server (NTRS)

    Delfrate, John H.; Saltzman, John A.

    1992-01-01

    Flow visualization techniques were used on the X-29A aircraft at high angles of attack to study the vortical flow off the forebody and the surface flow on the wing and tail. The forebody vortex system was studied because asymmetries in the vortex system were suspected of inducing uncommanded yawing moments at zero sideslip. Smoke enabled visualization of the vortex system and correlation of its orientation with flight yawing moment data. Good agreement was found between vortex system asymmetries and the occurrence of yawing moments. Surface flow on the forward-swept wing of the X-29A was studied using tufts and flow cones. As angle of attack increased, separated flow initiated at the root and spread outboard encompassing the full wing by 30 deg angle of attack. In general, the progression of the separated flow correlated well with subscale model lift data. Surface flow on the vertical tail was also studied using tufts and flow cones. As angle of attack increased, separated flow initiated at the root and spread upward. The area of separated flow on the vertical tail at angles of attack greater than 20 deg correlated well with the marked decrease in aircraft directional stability.

  17. Restoring visual perception using microsystem technologies: engineering and manufacturing perspectives.

    PubMed

    Krisch, I; Hosticka, B J

    2007-01-01

    Microsystem technologies offer significant advantages in the development of neural prostheses. In the last two decades, it has become feasible to develop intelligent prostheses that are fully implantable into the human body with respect to functionality, complexity, size, weight, and compactness. Design and development enforce collaboration of various disciplines including physicians, engineers, and scientists. The retina implant system can be taken as one sophisticated example of a prosthesis which bypasses neural defects and enables direct electrical stimulation of nerve cells. This micro implantable visual prosthesis assists blind patients to return to the normal course of life. The retina implant is intended for patients suffering from retinitis pigmentosa or macular degeneration. In this contribution, we focus on the epiretinal prosthesis and discuss topics like system design, data and power transfer, fabrication, packaging and testing. In detail, the system is based upon an implantable micro electro stimulator which is powered and controlled via a wireless inductive link. Microelectronic circuits for data encoding and stimulation are assembled on flexible substrates with an integrated electrode array. The implant system is encapsulated using parylene C and silicone rubber. Results extracted from experiments in vivo demonstrate the retinotopic activation of the visual cortex.

  18. MALDI Mass Spectrometry Imaging for Visualizing In Situ Metabolism of Endogenous Metabolites and Dietary Phytochemicals

    PubMed Central

    Fujimura, Yoshinori; Miura, Daisuke

    2014-01-01

    Understanding the spatial distribution of bioactive small molecules is indispensable for elucidating their biological or pharmaceutical roles. Mass spectrometry imaging (MSI) enables determination of the distribution of ionizable molecules present in tissue sections of whole-body or single heterogeneous organ samples by direct ionization and detection. This emerging technique is now widely used for in situ label-free molecular imaging of endogenous or exogenous small molecules. MSI allows the simultaneous visualization of many types of molecules including a parent molecule and its metabolites. Thus, MSI has received much attention as a potential tool for pathological analysis, understanding pharmaceutical mechanisms, and biomarker discovery. On the other hand, several issues regarding the technical limitations of MSI are as of yet still unresolved. In this review, we describe the capabilities of the latest matrix-assisted laser desorption/ionization (MALDI)-MSI technology for visualizing in situ metabolism of endogenous metabolites or dietary phytochemicals (food factors), and also discuss the technical problems and new challenges, including MALDI matrix selection and metabolite identification, that need to be addressed for effective and widespread application of MSI in the diverse fields of biological, biomedical, and nutraceutical (food functionality) research. PMID:24957029

  19. Modality-Driven Classification and Visualization of Ensemble Variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald

    Paper for the IEEE Visualization Conference. Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space.

  20. Toward Serotonin Fluorescent False Neurotransmitters: Development of Fluorescent Dual Serotonin and Vesicular Monoamine Transporter Substrates for Visualizing Serotonin Neurons.

    PubMed

    Henke, Adam; Kovalyova, Yekaterina; Dunn, Matthew; Dreier, Dominik; Gubernator, Niko G; Dincheva, Iva; Hwu, Christopher; Šebej, Peter; Ansorge, Mark S; Sulzer, David; Sames, Dalibor

    2018-05-16

    Ongoing efforts in our laboratories focus on design of optical reporters known as fluorescent false neurotransmitters (FFNs) that enable the visualization of uptake into, packaging within, and release from individual monoaminergic neurons and presynaptic sites in the brain. Here, we introduce the molecular probe FFN246 as an expansion of the FFN platform to the serotonergic system. Combining the acridone fluorophore with the ethylamine recognition element of serotonin, we identified FFN54 and FFN246 as substrates for both the serotonin transporter and the vesicular monoamine transporter 2 (VMAT2). A systematic structure-activity study revealed the basic structural chemotype of aminoalkyl acridones required for serotonin transporter (SERT) activity and enabled lowering the background labeling of these probes while maintaining SERT activity, which proved essential for obtaining sufficient signal in the brain tissue (FFN246). We demonstrate the utility of FFN246 for direct examination of SERT activity and SERT inhibitors in 96-well cell culture assays, as well as specific labeling of serotonergic neurons of the dorsal raphe nucleus in the living tissue of acute mouse brain slices. While we found only minor FFN246 accumulation in serotonergic axons in murine brain tissue, FFN246 effectively traces serotonin uptake and packaging in the soma of serotonergic neurons with improved photophysical properties and loading parameters compared to known serotonin-based fluorescent tracers.

  1. In vitro study of α-synuclein protofibrils by cryo-EM suggests a Cu(2+)-dependent aggregation pathway.

    PubMed

    Zhang, Hangyu; Griggs, Amy; Rochet, Jean-Christophe; Stanciu, Lia A

    2013-06-18

    The aggregation of α-synuclein is thought to play a role in the death of dopamine neurons in Parkinson's disease (PD). Alpha-synuclein transitions through an aggregation pathway consisting of pathogenic species referred to as protofibrils (or oligomers), which ultimately convert to mature fibrils. The structural heterogeneity and instability of protofibrils have significantly impeded progress in understanding their structural characteristics and the amyloid aggregation process. Here, we report, to our knowledge for the first time, on α-synuclein protofibril structural characteristics obtained with cryo-electron microscopy. Statistical analysis of annular protofibrils revealed a constant wall thickness as a common feature. The visualization of the assembly steps enabled us to propose a novel, to our knowledge, mechanism for α-synuclein aggregation involving ring-opening and protofibril-protofibril interaction events. The ion channel-like protofibrils and their membrane permeability have also been found in other amyloid diseases, suggesting a common molecular mechanism of pathological aggregation. Our direct visualization of the aggregation pathway of α-synuclein opens up fresh opportunities to advance the understanding of protein aggregation mechanisms relevant to many amyloid diseases. In turn, this information would enable the development of additional therapeutic strategies aimed at suppressing toxic protofibrils of amyloid proteins involved in neurological disorders. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  2. Activity in human visual and parietal cortex reveals object-based attention in working memory.

    PubMed

    Peters, Benjamin; Kaiser, Jochen; Rahm, Benjamin; Bledowski, Christoph

    2015-02-25

    Visual attention enables observers to select behaviorally relevant information based on spatial locations, features, or objects. Attentional selection is not limited to physically present visual information, but can also operate on internal representations maintained in working memory (WM) in service of higher-order cognition. However, only little is known about whether attention to WM contents follows the same principles as attention to sensory stimuli. To address this question, we investigated in humans whether the typically observed effects of object-based attention in perception are also evident for object-based attentional selection of internal object representations in WM. In full accordance with effects in visual perception, the key behavioral and neuronal characteristics of object-based attention were observed in WM. Specifically, we found that reaction times were shorter when shifting attention to memory positions located on the currently attended object compared with equidistant positions on a different object. Furthermore, functional magnetic resonance imaging and multivariate pattern analysis of visuotopic activity in visual (areas V1-V4) and parietal cortex revealed that directing attention to one position of an object held in WM also enhanced brain activation for other positions on the same object, suggesting that attentional selection in WM activates the entire object. This study demonstrated that all characteristic features of object-based attention are present in WM and thus follows the same principles as in perception. Copyright © 2015 the authors 0270-6474/15/353360-10$15.00/0.

  3. Prey Capture Behavior Evoked by Simple Visual Stimuli in Larval Zebrafish

    PubMed Central

    Bianco, Isaac H.; Kampff, Adam R.; Engert, Florian

    2011-01-01

    Understanding how the nervous system recognizes salient stimuli in the environment and selects and executes the appropriate behavioral responses is a fundamental question in systems neuroscience. To facilitate the neuroethological study of visually guided behavior in larval zebrafish, we developed “virtual reality” assays in which precisely controlled visual cues can be presented to larvae whilst their behavior is automatically monitored using machine vision algorithms. Freely swimming larvae responded to moving stimuli in a size-dependent manner: they directed multiple low amplitude orienting turns (∼20°) toward small moving spots (1°) but reacted to larger spots (10°) with high-amplitude aversive turns (∼60°). The tracking of small spots led us to examine how larvae respond to prey during hunting routines. By analyzing movie sequences of larvae hunting paramecia, we discovered that all prey capture routines commence with eye convergence and larvae maintain their eyes in a highly converged position for the duration of the prey-tracking and capture swim phases. We adapted our virtual reality assay to deliver artificial visual cues to partially restrained larvae and found that small moving spots evoked convergent eye movements and J-turns of the tail, which are defining features of natural hunting. We propose that eye convergence represents the engagement of a predatory mode of behavior in larval fish and serves to increase the region of binocular visual space to enable stereoscopic targeting of prey. PMID:22203793

  4. Electrophysiological indices of surround suppression in humans

    PubMed Central

    Vanegas, M. Isabel; Blangero, Annabelle

    2014-01-01

    Surround suppression is a well-known example of contextual interaction in visual cortical neurophysiology, whereby the neural response to a stimulus presented within a neuron's classical receptive field is suppressed by surrounding stimuli. Human psychophysical reports present an obvious analog to the effects seen at the single-neuron level: stimuli are perceived as lower-contrast when embedded in a surround. Here we report on a visual paradigm that provides relatively direct, straightforward indices of surround suppression in human electrophysiology, enabling us to reproduce several well-known neurophysiological and psychophysical effects, and to conduct new analyses of temporal trends and retinal location effects. Steady-state visual evoked potentials (SSVEP) elicited by flickering “foreground” stimuli were measured in the context of various static surround patterns. Early visual cortex geometry and retinotopic organization were exploited to enhance SSVEP amplitude. The foreground response was strongly suppressed as a monotonic function of surround contrast. Furthermore, suppression was stronger for surrounds of matching orientation than orthogonally-oriented ones, and stronger at peripheral than foveal locations. These patterns were reproduced in psychophysical reports of perceived contrast, and peripheral electrophysiological suppression effects correlated with psychophysical effects across subjects. Temporal analysis of SSVEP amplitude revealed short-term contrast adaptation effects that caused the foreground signal to either fall or grow over time, depending on the relative contrast of the surround, consistent with stronger adaptation of the suppressive drive. This electrophysiology paradigm has clinical potential in indexing not just visual deficits but possibly gain control deficits expressed more widely in the disordered brain. PMID:25411464

  5. Meteorological Data Visualization in Multi-User Virtual Reality

    NASA Astrophysics Data System (ADS)

    Appleton, R.; van Maanen, P. P.; Fisher, W. I.; Krijnen, R.

    2017-12-01

    Due to their complexity and size, visualization of meteorological data is important. It enables precise examination and review of meteorological details and is used as a communication tool for reporting and education, and to demonstrate the importance of the data to policy makers. For the UCAR community specifically, it is important to explore all such possibilities. Virtual Reality (VR) technology enhances the visualization of volumetric and dynamic data in a more natural way than a standard desktop, keyboard, and mouse setup. The use of VR for data visualization is not new, but recent developments have made expensive hardware and complex setups unnecessary. The availability of consumer off-the-shelf VR hardware enabled us to create a very intuitive and low-cost way to visualize meteorological data. A VR viewer has been implemented using multiple HTC Vive headsets and allows visualization and analysis of meteorological data in NetCDF format (e.g., the NCEP North America Model (NAM)). Sources of atmospheric/meteorological data include radar and satellite as well as traditional weather stations. The data include typical meteorological information such as temperature, humidity, and air pressure, as well as data described by the Climate and Forecast (CF) metadata conventions (http://cfconventions.org). Other data such as lightning-strike data and ultra-high-resolution satellite data are also becoming available. Users can navigate freely around the data, which are presented in a virtual room at a scale of up to 3.5 × 3.5 meters. Multiple users can manipulate the model simultaneously. Possible manipulations include scaling/translating, filtering by value, and using a slicing tool to cut off specific sections of the data for a closer look. The slicing can be done in any direction using the concept of a 'virtual knife' in real time. Users can also scoop out parts of the data and walk through successive states of the model. Future plans include, among others, further improving performance to a higher update rate (to reduce possible motion sickness) and adding more advanced filtering and annotation capabilities. We are looking for cooperation with data owners with use cases such as those mentioned above. This will help in further improving and developing our tool and broadening its application into other domains.
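
    For readers who want to reproduce the kind of value filtering and 'virtual knife' slicing described above outside VR, the following Python sketch operates on a NetCDF file with the netCDF4 library; the file name, variable name, and dimension ordering are assumptions for illustration, not details taken from the viewer:

    # Offline sketch of the viewer's slicing and value filtering, using the
    # netCDF4 library. The file name, variable names, and (level, y, x)
    # dimension ordering are assumptions for illustration.
    import numpy as np
    from netCDF4 import Dataset

    with Dataset("nam_sample.nc") as nc:          # hypothetical NetCDF export
        temp = nc.variables["temperature"][:]     # assumed shape: (level, y, x), in K

    # "Virtual knife": take a vertical slice at a fixed x index.
    x_index = temp.shape[2] // 2
    vertical_slice = temp[:, :, x_index]

    # Filter by value: keep only grid cells warmer than 280 K, mask the rest.
    warm_only = np.ma.masked_less(temp, 280.0)

    print("slice shape:", vertical_slice.shape,
          "| fraction above 280 K:", warm_only.count() / temp.size)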

  6. PathwayAccess: CellDesigner plugins for pathway databases.

    PubMed

    Van Hemert, John L; Dickerson, Julie A

    2010-09-15

    CellDesigner provides a user-friendly interface for graphical biochemical pathway description. Many pathway databases are not directly exportable to CellDesigner models. PathwayAccess is an extensible suite of CellDesigner plugins, which connect CellDesigner directly to pathway databases using respective Java application programming interfaces. The process is streamlined for creating new PathwayAccess plugins for specific pathway databases. Three PathwayAccess plugins, MetNetAccess, BioCycAccess and ReactomeAccess, directly connect CellDesigner to the pathway databases MetNetDB, BioCyc and Reactome. PathwayAccess plugins enable CellDesigner users to expose pathway data to analytical CellDesigner functions, curate their pathway databases and visually integrate pathway data from different databases using standard Systems Biology Markup Language and Systems Biology Graphical Notation. Implemented in Java, PathwayAccess plugins run with CellDesigner version 4.0.1 and were tested on Ubuntu Linux, Windows XP and 7, and MacOSX. Source code, binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv.

  7. Engagement of neural circuits underlying 2D spatial navigation in a rodent virtual reality system.

    PubMed

    Aronov, Dmitriy; Tank, David W

    2014-10-22

    Virtual reality (VR) enables precise control of an animal's environment and otherwise impossible experimental manipulations. Neural activity in rodents has been studied on virtual 1D tracks. However, 2D navigation imposes additional requirements, such as the processing of head direction and environment boundaries, and it is unknown whether the neural circuits underlying 2D representations can be sufficiently engaged in VR. We implemented a VR setup for rats, including software and large-scale electrophysiology, that supports 2D navigation by allowing rotation and walking in any direction. The entorhinal-hippocampal circuit, including place, head direction, and grid cells, showed 2D activity patterns similar to those in the real world. Furthermore, border cells were observed, and hippocampal remapping was driven by environment shape, suggesting functional processing of virtual boundaries. These results illustrate that 2D spatial representations can be engaged by visual and rotational vestibular stimuli alone and suggest a novel VR tool for studying rat navigation.

  8. A client–server framework for 3D remote visualization of radiotherapy treatment space

    PubMed Central

    Santhanam, Anand P.; Min, Yugang; Dou, Tai H.; Kupelian, Patrick; Low, Daniel A.

    2013-01-01

    Radiotherapy is safely employed for treating wide variety of cancers. The radiotherapy workflow includes a precise positioning of the patient in the intended treatment position. While trained radiation therapists conduct patient positioning, consultation is occasionally required from other experts, including the radiation oncologist, dosimetrist, or medical physicist. In many circumstances, including rural clinics and developing countries, this expertise is not immediately available, so the patient positioning concerns of the treating therapists may not get addressed. In this paper, we present a framework to enable remotely located experts to virtually collaborate and be present inside the 3D treatment room when necessary. A multi-3D camera framework was used for acquiring the 3D treatment space. A client–server framework enabled the acquired 3D treatment room to be visualized in real-time. The computational tasks that would normally occur on the client side were offloaded to the server side to enable hardware flexibility on the client side. On the server side, a client specific real-time stereo rendering of the 3D treatment room was employed using a scalable multi graphics processing units (GPU) system. The rendered 3D images were then encoded using a GPU-based H.264 encoding for streaming. Results showed that for a stereo image size of 1280 × 960 pixels, experts with high-speed gigabit Ethernet connectivity were able to visualize the treatment space at approximately 81 frames per second. For experts remotely located and using a 100 Mbps network, the treatment space visualization occurred at 8–40 frames per second depending upon the network bandwidth. This work demonstrated the feasibility of remote real-time stereoscopic patient setup visualization, enabling expansion of high quality radiation therapy into challenging environments. PMID:23440605
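
    The reported frame rates can be put in rough perspective with a back-of-envelope bandwidth estimate. The sketch below assumes an H.264 compressed size of about 1 bit per pixel, which is an assumption (actual compression varies widely with content and encoder settings) and is not a figure from the paper; under that assumption a 100 Mbps link caps out near 40 fps for a 1280 × 960 stereo stream, consistent with the upper end of the reported range, while a gigabit link is fast enough that rendering and encoding, rather than the network, would limit the 81 fps figure:

    # Back-of-envelope estimate (an assumption-driven sketch, not from the paper):
    # frames per second a link could sustain for a 1280 x 960 stereo stream,
    # assuming H.264 compresses to roughly 1 bit per pixel.
    def max_fps(link_mbps, width=1280, height=960, views=2, bits_per_pixel=1.0):
        bits_per_frame = width * height * views * bits_per_pixel
        return link_mbps * 1e6 / bits_per_frame

    for link in (100, 1000):  # 100 Mbps link vs. gigabit Ethernet
        print(f"{link} Mbps -> network-limited upper bound of ~{max_fps(link):.0f} fps")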

  9. Exploratory Application of Augmented Reality/Mixed Reality Devices for Acute Care Procedure Training.

    PubMed

    Kobayashi, Leo; Zhang, Xiao Chi; Collins, Scott A; Karim, Naz; Merck, Derek L

    2018-01-01

    Augmented reality (AR), mixed reality (MR), and virtual reality devices are enabling technologies that may facilitate effective communication in healthcare between those with information and knowledge (clinician/specialist; expert; educator) and those seeking understanding and insight (patient/family; non-expert; learner). Investigators initiated an exploratory program to enable the study of AR/MR use-cases in acute care clinical and instructional settings. Academic clinician educators, computer scientists, and diagnostic imaging specialists conducted a proof-of-concept project to 1) implement a core holoimaging pipeline infrastructure and open-access repository at the study institution, and 2) use novel AR/MR techniques on off-the-shelf devices with holoimages generated by the infrastructure to demonstrate their potential role in the instructive communication of complex medical information. The study team successfully developed a medical holoimaging infrastructure methodology to identify, retrieve, and manipulate real patients' de-identified computed tomography and magnetic resonance imagesets for rendering, packaging, transfer, and display of modular holoimages onto AR/MR headset devices and connected displays. Holoimages containing key segmentations of cervical and thoracic anatomic structures and pathology were overlaid and registered onto physical task trainers for simulation-based "blind insertion" invasive procedural training. During the session, learners experienced and used task-relevant anatomic holoimages for central venous catheter and tube thoracostomy insertion training with enhanced visual cues and haptic feedback. Direct instructor access into the learner's AR/MR headset view of the task trainer was achieved for visual-axis interactive instructional guidance. Investigators implemented a core holoimaging pipeline infrastructure and modular open-access repository to generate and enable access to modular holoimages during exploratory pilot stage applications for invasive procedure training that featured innovative AR/MR techniques on off-the-shelf headset devices.

  10. Exploratory Application of Augmented Reality/Mixed Reality Devices for Acute Care Procedure Training

    PubMed Central

    Kobayashi, Leo; Zhang, Xiao Chi; Collins, Scott A.; Karim, Naz; Merck, Derek L.

    2018-01-01

    Introduction Augmented reality (AR), mixed reality (MR), and virtual reality devices are enabling technologies that may facilitate effective communication in healthcare between those with information and knowledge (clinician/specialist; expert; educator) and those seeking understanding and insight (patient/family; non-expert; learner). Investigators initiated an exploratory program to enable the study of AR/MR use-cases in acute care clinical and instructional settings. Methods Academic clinician educators, computer scientists, and diagnostic imaging specialists conducted a proof-of-concept project to 1) implement a core holoimaging pipeline infrastructure and open-access repository at the study institution, and 2) use novel AR/MR techniques on off-the-shelf devices with holoimages generated by the infrastructure to demonstrate their potential role in the instructive communication of complex medical information. Results The study team successfully developed a medical holoimaging infrastructure methodology to identify, retrieve, and manipulate real patients’ de-identified computed tomography and magnetic resonance imagesets for rendering, packaging, transfer, and display of modular holoimages onto AR/MR headset devices and connected displays. Holoimages containing key segmentations of cervical and thoracic anatomic structures and pathology were overlaid and registered onto physical task trainers for simulation-based “blind insertion” invasive procedural training. During the session, learners experienced and used task-relevant anatomic holoimages for central venous catheter and tube thoracostomy insertion training with enhanced visual cues and haptic feedback. Direct instructor access into the learner’s AR/MR headset view of the task trainer was achieved for visual-axis interactive instructional guidance. Conclusion Investigators implemented a core holoimaging pipeline infrastructure and modular open-access repository to generate and enable access to modular holoimages during exploratory pilot stage applications for invasive procedure training that featured innovative AR/MR techniques on off-the-shelf headset devices. PMID:29383074

  11. A standard-enabled workflow for synthetic biology.

    PubMed

    Myers, Chris J; Beal, Jacob; Gorochowski, Thomas E; Kuwahara, Hiroyuki; Madsen, Curtis; McLaughlin, James Alastair; Mısırlı, Göksel; Nguyen, Tramy; Oberortner, Ernst; Samineni, Meher; Wipat, Anil; Zhang, Michael; Zundel, Zach

    2017-06-15

    A synthetic biology workflow is composed of data repositories that provide information about genetic parts, sequence-level design tools to compose these parts into circuits, visualization tools to depict these designs, genetic design tools to select parts to create systems, and modeling and simulation tools to evaluate alternative design choices. Data standards enable the ready exchange of information within such a workflow, allowing repositories and tools from a diversity of sources to be connected. The present paper describes one such workflow that utilizes, among others, the Synthetic Biology Open Language (SBOL) to describe genetic designs, the Systems Biology Markup Language (SBML) to model these designs, and SBOL Visual to visualize these designs. We describe how a standard-enabled workflow can be used to produce multiple types of design information, with multiple repositories and software tools exchanging information using a variety of data standards. Recently, the ACS Synthetic Biology journal has recommended the use of SBOL in its publications. © 2017 The Author(s); published by Portland Press Limited on behalf of the Biochemical Society.

  12. WarpIV: In situ visualization and analysis of ion accelerator simulations

    DOE PAGES

    Rubel, Oliver; Loring, Burlen; Vay, Jean -Luc; ...

    2016-05-09

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  13. Effects of Peripheral Visual Field Loss on Eye Movements During Visual Search

    PubMed Central

    Wiecek, Emily; Pasquale, Louis R.; Fiser, Jozsef; Dakin, Steven; Bex, Peter J.

    2012-01-01

    Natural vision involves sequential eye movements that bring the fovea to locations selected by peripheral vision. How peripheral visual field loss (PVFL) affects this process is not well understood. We examine how the location and extent of PVFL affect eye movement behavior in a naturalistic visual search task. Ten patients with PVFL and 13 normally sighted subjects with full visual fields (FVF) completed 30 visual searches monocularly. Subjects located a 4° × 4° target, pseudo-randomly selected within a 26° × 11° natural image. Eye positions were recorded at 50 Hz. Search duration, fixation duration, saccade size, and number of saccades per trial were not significantly different between PVFL and FVF groups (p > 0.1). A χ2 test showed that the distributions of saccade directions for PVFL and FVF subjects were significantly different in 8 out of 10 cases (p < 0.01). Humphrey Visual Field pattern deviations for each subject were compared with the spatial distribution of eye movement directions. There were no significant correlations between saccade directional bias and visual field sensitivity across the 10 patients. Visual search performance was not significantly affected by PVFL. An analysis of eye movement directions revealed that patients with PVFL show a biased directional distribution that was not directly related to the locus of vision loss, challenging feed-forward models of eye movement control. Consequently, many patients do not optimally compensate for visual field loss during visual search. PMID:23162511

  14. [Trial of eye drops recognizer for visually disabled persons].

    PubMed

    Okamoto, Norio; Suzuki, Katsuhiko; Mimura, Osamu

    2009-01-01

    We developed a device to enable visually disabled persons to differentiate eye drops and their doses. The new instrument is composed of a voice generator and a two-dimensional bar-code reader (LS9208). We designed voice outputs for the visually disabled that state when (number of times) and where (right eye, left eye, or both) to administer eye drops. We then determined the minimum bar-code size that can be recognized. After attaching bar-codes of the appropriate size to the lateral or bottom surface of the eye drop containers, the readability of the bar-codes was compared. The minimum discriminable bar-code size was 6 mm high × 8.5 mm long. Bar-codes on the bottom surface could be more easily recognized than bar-codes on the side. Our newly developed device using bar-codes enables visually disabled persons to differentiate eye drops and their doses.
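
    The core of such a recognizer is a lookup from a scanned bar-code payload to a spoken instruction. The toy Python sketch below (not the device firmware) illustrates that mapping; the bar-code values, drug names, and the use of the pyttsx3 text-to-speech library are all illustrative assumptions:

    # Toy sketch of the recognizer's logic (not the device firmware): map a
    # scanned bar-code payload to a spoken instruction. The code values, drug
    # names, and the pyttsx3 text-to-speech call are illustrative assumptions.
    import pyttsx3

    # Hypothetical bar-code payloads attached to eye-drop bottles.
    instructions = {
        "ED001": "Timolol. Both eyes. Twice a day.",
        "ED002": "Latanoprost. Left eye. Once a day, at bedtime.",
    }

    def announce(barcode):
        message = instructions.get(barcode, "Unknown bottle. Please ask your pharmacist.")
        engine = pyttsx3.init()
        engine.say(message)
        engine.runAndWait()

    announce("ED001")  # in the device, the string would come from the LS9208 reader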

  15. An Optimized Centrifugal Method for Separation of Semen from Superabsorbent Polymers for Forensic Analysis.

    PubMed

    Camarena, Lucy R; Glasscock, Bailey K; Daniels, Demi; Ackley, Nicolle; Sciarretta, Marybeth; Seashols-Williams, Sarah J

    2017-03-01

    Connection of a perpetrator to a sexual assault is best performed through the confirmed presence of semen, thereby proving sexual contact. Evidentiary items can include sanitary napkins or diapers containing superabsorbent polymers (SAPs), complicating spermatozoa visualization and DNA analysis. In this report, we evaluated the impact of SAPs on the current forensic DNA workflow, developing an efficient centrifugal protocol for separating spermatozoa from SAP material. The optimized filtration method was compared to common practices of excising the top layer only, resulting in significantly higher sperm yields when a core sample of the substrate was taken. Direct isolation of the SAP-containing materials without filtering resulted in 20% sample failure; additionally, SAP material was observed in the final eluted DNA samples, causing physical interference. Thus, use of the described centrifugal-filtering method is a simple preliminary step that improves spermatozoa visualization and enables more consistent DNA yields, while also avoiding SAP interference. © 2016 American Academy of Forensic Sciences.

  16. Dual modal ultra-bright nanodots with aggregation-induced emission and gadolinium-chelation for vascular integrity and leakage detection.

    PubMed

    Feng, Guangxue; Li, Jackson Liang Yao; Claser, Carla; Balachander, Akhila; Tan, Yingrou; Goh, Chi Ching; Kwok, Immanuel Weng Han; Rénia, Laurent; Tang, Ben Zhong; Ng, Lai Guan; Liu, Bin

    2018-01-01

    The study of blood-brain barrier (BBB) functions is important for neurological disorder research. However, the lack of suitable tools and methods has hampered the progress of this field. Herein, we present a hybrid nanodot strategy, termed AIE-Gd dots, comprising a fluorogen with aggregation-induced emission (AIE) characteristics as the core to provide bright and stable fluorescence for optical imaging, and gadolinium (Gd) for accurate quantification of vascular leakage via inductively-coupled plasma mass spectrometry (ICP-MS). In this report, we demonstrate that AIE-Gd dots enable direct visualization of brain vascular networks under resting conditions, and that they form localized punctate aggregates and accumulate in the brain tissue during experimental cerebral malaria, indicative of hemorrhage and BBB malfunction. With their superior detection sensitivity and multimodality, we hereby propose that AIE-Gd dots can serve as a better alternative to Evans blue for visualization and quantification of changes in brain barrier functions. Copyright © 2017. Published by Elsevier Ltd.

  17. Label-free Chemical Imaging of Fungal Spore Walls by Raman Microscopy and Multivariate Curve Resolution Analysis

    PubMed Central

    Noothalapati, Hemanth; Sasaki, Takahiro; Kaino, Tomohiro; Kawamukai, Makoto; Ando, Masahiro; Hamaguchi, Hiro-o; Yamamoto, Tatsuyuki

    2016-01-01

    Fungal cell walls are medically important since they represent a drug target site for antifungal medication. So far there is no method to directly visualize structurally similar cell wall components such as α-glucan, β-glucan and mannan with high specificity, especially in a label-free manner. In this study, we have developed a Raman spectroscopy-based molecular imaging method combined with multivariate curve resolution analysis to enable detection and visualization of multiple polysaccharide components simultaneously at the single-cell level. Our results show that vegetative cell and ascus walls are made up of both α- and β-glucans, while the spore wall is exclusively made of α-glucan. Co-localization studies reveal that mannans are absent from the ascus wall and are distributed primarily in spores. Such a detailed picture is believed to further enhance our understanding of the dynamic spore wall architecture, eventually leading to advancements in drug discovery and development in the near future. PMID:27278218
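
    Multivariate curve resolution of hyperspectral Raman data factors a (pixels × wavenumbers) matrix into non-negative component spectra and per-pixel concentrations. As a stand-in for the authors' MCR-ALS analysis, the sketch below performs the same kind of factorization on synthetic spectra using scikit-learn's NMF; the band positions and mixing weights are invented for demonstration:

    # Stand-in for multivariate curve resolution (not the authors' MCR-ALS code):
    # factor a (pixels x wavenumbers) matrix of synthetic Raman spectra into
    # non-negative component spectra and per-pixel concentrations with NMF.
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)

    # Synthetic data: two Gaussian "component spectra" mixed with random weights.
    wavenumbers = np.linspace(600, 1800, 300)
    comp_a = np.exp(-((wavenumbers - 1100) / 30) ** 2)   # invented band
    comp_b = np.exp(-((wavenumbers - 1450) / 40) ** 2)   # invented band
    weights = rng.random((500, 2))                       # 500 "pixels"
    spectra = weights @ np.vstack([comp_a, comp_b]) + 0.01 * rng.random((500, 300))

    model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
    concentrations = model.fit_transform(spectra)   # (pixels, components)
    component_spectra = model.components_           # (components, wavenumbers)

    print("recovered component spectra:", component_spectra.shape)
    print("per-pixel concentration map:", concentrations.shape)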

  18. A Virtual Reality Visualization Tool for Neuron Tracing

    PubMed Central

    Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Angelucci, Alessandra; Pascucci, Valerio

    2017-01-01

    Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists. PMID:28866520

  19. Excel2Genie: A Microsoft Excel application to improve the flexibility of the Genie-2000 Spectroscopic software.

    PubMed

    Forgács, Attila; Balkay, László; Trón, Lajos; Raics, Péter

    2014-12-01

    Excel2Genie, a simple and user-friendly Microsoft Excel interface, has been developed for the Genie-2000 Spectroscopic Software of Canberra Industries. This Excel application can directly control a Canberra Multichannel Analyzer (MCA), process the acquired data, and visualize them. Combining Genie-2000 with Excel2Genie results in remarkably increased flexibility and the possibility of carrying out repetitive data acquisitions, even with changing parameters, and more sophisticated analysis. The developed software package comprises three worksheets that display the parameters and results of data acquisition, data analysis, and mathematical operations carried out on the measured gamma spectra, while also allowing control of these processes. Excel2Genie is freely available to assist gamma spectrum measurements and data evaluation by interested Canberra users. With access to the Visual Basic for Applications (VBA) source code of this application, users can modify the developed interface according to their intentions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Development of an omnidirectional gamma-ray imaging Compton camera for low-radiation-level environmental monitoring

    NASA Astrophysics Data System (ADS)

    Watanabe, Takara; Enomoto, Ryoji; Muraishi, Hiroshi; Katagiri, Hideaki; Kagaya, Mika; Fukushi, Masahiro; Kano, Daisuke; Satoh, Wataru; Takeda, Tohoru; Tanaka, Manobu M.; Tanaka, Souichi; Uchida, Tomohisa; Wada, Kiyoto; Wakamatsu, Ryo

    2018-02-01

    We have developed an omnidirectional gamma-ray imaging Compton camera for environmental monitoring at low levels of radiation. The camera consisted of only six 3.5 cm CsI(Tl) scintillator cubes, each of which was read out by a super-bialkali photomultiplier tube (PMT). Our camera enables the visualization of the position of gamma-ray sources in all directions (∼4π sr) over a wide energy range between 300 and 1400 keV. The angular resolution (σ) was found to be ∼11°, which was realized using an image-sharpening technique. A high detection efficiency of 18 cps/(µSv/h) for 511 keV (1.6 cps/MBq at 1 m) was achieved, indicating the capability of this camera to visualize hotspots in areas with low-radiation-level contamination from the order of µSv/h down to natural background levels. Our proposed technique can be easily used as a low-radiation-level imaging monitor in radiation control areas, such as medical and accelerator facilities.

  1. IBS: an illustrator for the presentation and visualization of biological sequences.

    PubMed

    Liu, Wenzhong; Xie, Yubin; Ma, Jiyong; Luo, Xiaotong; Nie, Peng; Zuo, Zhixiang; Lahrmann, Urs; Zhao, Qi; Zheng, Yueyuan; Zhao, Yong; Xue, Yu; Ren, Jian

    2015-10-15

    Biological sequence diagrams are fundamental for visualizing various functional elements in protein or nucleotide sequences that enable a summarization and presentation of existing information as well as means of intuitive new discoveries. Here, we present a software package called illustrator of biological sequences (IBS) that can be used for representing the organization of either protein or nucleotide sequences in a convenient, efficient and precise manner. Multiple options are provided in IBS, and biological sequences can be manipulated, recolored or rescaled in a user-defined mode. Also, the final representational artwork can be directly exported into a publication-quality figure. The standalone package of IBS was implemented in JAVA, while the online service was implemented in HTML5 and JavaScript. Both the standalone package and online service are freely available at http://ibs.biocuckoo.org. Contact: renjian.sysu@gmail.com or xueyu@hust.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  2. In situ real-time imaging of self-sorted supramolecular nanofibres

    NASA Astrophysics Data System (ADS)

    Onogi, Shoji; Shigemitsu, Hajime; Yoshii, Tatsuyuki; Tanida, Tatsuya; Ikeda, Masato; Kubota, Ryou; Hamachi, Itaru

    2016-08-01

    Self-sorted supramolecular nanofibres—a multicomponent system that consists of several types of fibre, each composed of distinct building units—play a crucial role in complex, well-organized systems with sophisticated functions, such as living cells. Designing and controlling self-sorting events in synthetic materials and understanding their structures and dynamics in detail are important elements in developing functional artificial systems. Here, we describe the in situ real-time imaging of self-sorted supramolecular nanofibre hydrogels consisting of a peptide gelator and an amphiphilic phosphate. The use of appropriate fluorescent probes enabled the visualization of self-sorted fibres entangled in two and three dimensions through confocal laser scanning microscopy and super-resolution imaging, with 80 nm resolution. In situ time-lapse imaging showed that the two types of fibre have different formation rates and that their respective physicochemical properties remain intact in the gel. Moreover, we directly visualized stochastic non-synchronous fibre formation and observed a cooperative mechanism.

  3. Science-Driven Computing: NERSC's Plan for 2006-2010

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, Horst D.; Kramer, William T.C.; Bailey, David H.

    NERSC has developed a five-year strategic plan focusing on three components: Science-Driven Systems, Science-Driven Services, and Science-Driven Analytics. (1) Science-Driven Systems: Balanced introduction of the best new technologies for complete computational systems--computing, storage, networking, visualization and analysis--coupled with the activities necessary to engage vendors in addressing the DOE computational science requirements in their future roadmaps. (2) Science-Driven Services: The entire range of support activities, from high-quality operations and user services to direct scientific support, that enable a broad range of scientists to effectively use NERSC systems in their research. NERSC will concentrate on resources needed to realize the promise of the new highly scalable architectures for scientific discovery in multidisciplinary computational science projects. (3) Science-Driven Analytics: The architectural and systems enhancements and services required to integrate NERSC's powerful computational and storage resources to provide scientists with new tools to effectively manipulate, visualize, and analyze the huge data sets derived from simulations and experiments.

  4. Visualizing Phenology and Climate Data at the National Scale

    NASA Astrophysics Data System (ADS)

    Rosemartin, A.; Marsh, L.

    2013-12-01

    Nature's Notebook is the USA National Phenology Network's national-scale plant and animal phenology observation program, designed to address the challenges posed by global change and its impacts on ecosystems and human health. Since its inception in 2009, 2,500 participants in Nature's Notebook have submitted 2.3 million records on the phenology of 17,000 organisms across the United States. An information architecture has been developed to facilitate collaboration and participatory data collection and digitization. Browser-based and mobile applications support data submission, and a MySQL/Drupal multi-site infrastructure enables data storage, access and discovery. Web services are available for both input and export of data resources. In this presentation we will focus on a tool for visualizing phenology data at the national scale. Effective data exploration for this multi-dimensional dataset requires the ability to plot sites, select species and phenophases, graph organismal phenology through time, and view integrated precipitation and temperature data. We will demonstrate the existing tool's capacity, discuss future directions and solicit feedback from the community.

  5. Instantaneous three-dimensional visualization of concentration distributions in turbulent flows with crossed-plane laser-induced fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Hoffmann, A.; Zimmermann, F.; Scharr, H.; Krömker, S.; Schulz, C.

    2005-01-01

    A laser-based technique for measuring instantaneous three-dimensional species concentration distributions in turbulent flows is presented. The laser beam from a single laser is formed into two crossed light sheets that illuminate the area of interest. The laser-induced fluorescence (LIF) signal emitted from excited species within both planes is detected with a single camera via a mirror arrangement. Image processing enables the reconstruction of the three-dimensional data set in close proximity to the cutting line of the two light sheets. Three-dimensional intensity gradients are computed and compared to the two-dimensional projections obtained from the two directly observed planes. Volume visualization by digital image processing gives unique insight into the three-dimensional structures within the turbulent processes. We apply this technique to measurements of toluene-LIF in a turbulent, non-reactive mixing process of toluene and air and to hydroxyl (OH) LIF in a turbulent methane-air flame upon excitation at 248 nm with a tunable KrF excimer laser.

  6. Transfer and scaffolding of perceptual grouping occurs across organizing principles in 3- to 7-month-old infants.

    PubMed

    Quinn, Paul C; Bhatt, Ramesh S

    2009-08-01

    Previous research has demonstrated that organizational principles become functional over different time courses of development: Lightness similarity is available at 3 months of age, but form similarity is not readily in evidence until 6 months of age. We investigated whether organization would transfer across principles and whether perceptual scaffolding can occur from an already functional principle to a not-yet-operational principle. Six- to 7-month-old infants (Experiment 1) and 3- to 4-month-old infants (Experiment 2) who were familiarized with arrays of elements organized by lightness similarity displayed a subsequent visual preference for a novel organization defined by form similarity. Results with the older infants demonstrate transfer in perceptual grouping: The organization defined by one grouping principle can direct a visual preference for a novel organization defined by a different grouping principle. Findings with the younger infants suggest that learning based on an already functional organizational process enables an organizational process that is not yet functional through perceptual scaffolding.

  7. Action-Driven Visual Object Tracking With Deep Reinforcement Learning.

    PubMed

    Yun, Sangdoo; Choi, Jongwon; Yoo, Youngjoon; Yun, Kimin; Choi, Jin Young

    2018-06-01

    In this paper, we propose an efficient visual tracker, which directly captures a bounding box containing the target object in a video by means of sequential actions learned using deep neural networks. The proposed deep neural network to control tracking actions is pretrained using various training video sequences and fine-tuned during actual tracking for online adaptation to changes of target and background. The pretraining is done by utilizing deep reinforcement learning (RL) as well as supervised learning. The use of RL enables even partially labeled data to be successfully utilized for semisupervised learning. Through evaluation on the object tracking benchmark data set, the proposed tracker is shown to achieve competitive performance at three times the speed of existing deep network-based trackers. The fast version of the proposed method, which operates in real time on a graphics processing unit, outperforms the state-of-the-art real-time trackers with an accuracy improvement of more than 8%.
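
    The tracker above refines a bounding box through a learned sequence of discrete actions. The sketch below illustrates only the action-loop skeleton of such a tracker: a box is repeatedly translated or scaled by whichever action most improves a score, and the loop stops when no action helps. The action set, step sizes, and the oracle IoU score standing in for the paper's trained policy network are all assumptions made for the example.

        # Discrete actions: translate, scale, or stop (a simplified action set;
        # the paper's learned action set and policy network are not reproduced here).
        ACTIONS = [(-5, 0, 1.0), (5, 0, 1.0), (0, -5, 1.0), (0, 5, 1.0),
                   (0, 0, 0.9), (0, 0, 1.1), None]          # None = "stop"

        def iou(a, b):
            """Intersection-over-union of two boxes given as (x, y, w, h)."""
            ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
            bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
            iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
            ih = max(0.0, min(ay2, by2) - max(ay1, by1))
            inter = iw * ih
            union = a[2] * a[3] + b[2] * b[3] - inter
            return inter / union if union > 0 else 0.0

        def apply_action(box, action):
            dx, dy, s = action
            return (box[0] + dx, box[1] + dy, box[2] * s, box[3] * s)

        def track_one_frame(box, score, max_actions=50):
            """Greedily apply the action that most improves the score; stop otherwise.
            In the real tracker this choice is made by a deep network trained with RL."""
            for _ in range(max_actions):
                best_action, best_score = None, score(box)
                for act in ACTIONS:
                    if act is None:
                        continue
                    s = score(apply_action(box, act))
                    if s > best_score:
                        best_action, best_score = act, s
                if best_action is None:                      # the "stop" action
                    return box
                box = apply_action(box, best_action)
            return box

        # Demo with an oracle score (IoU against a hidden target box).
        target = (60.0, 40.0, 30.0, 50.0)
        refined = track_one_frame((30.0, 20.0, 40.0, 40.0), lambda b: iou(b, target))
        print("refined box:", refined, "IoU:", round(iou(refined, target), 3))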

  8. Review of ultraresolution (10-100 megapixel) visualization systems built by tiling commercial display components

    NASA Astrophysics Data System (ADS)

    Hopper, Darrel G.; Haralson, David G.; Simpson, Matthew A.; Longo, Sam J.

    2002-08-01

    Ultra-resolution visualization systems are achieved by tiling many direct-view or projection displays. During the past few years, several such systems have been built from commercial electronics components (displays, computers, image generators, networks, communication links, and software). Civil applications driving this development have independently determined that they require images at 10-100 megapixel (Mpx) resolution to enable state-of-the-art research, engineering, design, stock exchanges, flight simulators, business information and enterprise control centers, education, art and entertainment. Military applications also press the art of the possible to improve the productivity of warfighters and lower the cost of providing for the national defense. The environment in some 80% of defense applications can be addressed by ruggedization of commercial components. This paper reviews the status of ultra-resolution systems based on commercial components and describes a vision for their integration into advanced yet affordable military command centers, simulator/trainers, and, eventually, crew stations in air, land, sea and space systems.

  9. IBS: an illustrator for the presentation and visualization of biological sequences

    PubMed Central

    Liu, Wenzhong; Xie, Yubin; Ma, Jiyong; Luo, Xiaotong; Nie, Peng; Zuo, Zhixiang; Lahrmann, Urs; Zhao, Qi; Zheng, Yueyuan; Zhao, Yong; Xue, Yu; Ren, Jian

    2015-01-01

    Summary: Biological sequence diagrams are fundamental for visualizing various functional elements in protein or nucleotide sequences that enable a summarization and presentation of existing information as well as means of intuitive new discoveries. Here, we present a software package called illustrator of biological sequences (IBS) that can be used for representing the organization of either protein or nucleotide sequences in a convenient, efficient and precise manner. Multiple options are provided in IBS, and biological sequences can be manipulated, recolored or rescaled in a user-defined mode. Also, the final representational artwork can be directly exported into a publication-quality figure. Availability and implementation: The standalone package of IBS was implemented in JAVA, while the online service was implemented in HTML5 and JavaScript. Both the standalone package and online service are freely available at http://ibs.biocuckoo.org. Contact: renjian.sysu@gmail.com or xueyu@hust.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26069263

  10. A Virtual Reality Visualization Tool for Neuron Tracing.

    PubMed

    Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Yarch, Jeff; Angelucci, Alessandra; Pascucci, Valerio

    2018-01-01

    Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.

  11. My Voice Heard: The Journey of a Young Man with a Cerebral Visual Impairment

    ERIC Educational Resources Information Center

    Macintyre-Beon, Catriona; Mitchell, Kate; Gallagher, Ian; Cockburn, Debbie; Dutton, Gordon N.; Bowman, Richard

    2012-01-01

    This longitudinal case study presents John's journey through childhood and adolescence, living with visual difficulties associated with a cerebral visual impairment. It highlights the day-to-day problems that John encountered, giving practical solutions and strategies that have enabled his dream of going to a university to be realized. John and…

  12. School, Family and Other Influences on Assistive Technology Use: Access and Challenges for Students with Visual Impairment in Singapore

    ERIC Educational Resources Information Center

    Wong, Meng Ee; Cohen, Libby

    2011-01-01

    Assistive technologies are essential enablers for individuals with visual impairments, but although Singapore is technologically advanced, students with visual impairments are not yet full participants in this technological society. This study investigates the barriers and challenges to the use of assistive technologies by students with visual…

  13. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of the point response, the pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.

  14. Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task.

    PubMed

    Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya

    2013-09-01

    Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can implicitly strengthen automatic change detection from an early stage in a cross-sensory manner, at least in the vision to audition direction.

  15. A Virtual Reality System for PTCD Simulation Using Direct Visuo-Haptic Rendering of Partially Segmented Image Data.

    PubMed

    Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz

    2016-01-01

    This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in Cuda and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.
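
    The simulator above speeds up its volume-deformation solver with a multigrid approach. As a minimal, generic illustration of why coarse-grid correction accelerates convergence (not the paper's GPU implementation or its deformation model), the sketch below repeatedly applies one two-grid cycle to a 1D Poisson problem and prints the error after each cycle; the grid size, smoother, and right-hand side are chosen only for the example.

        import numpy as np

        def jacobi(u, f, h, sweeps, omega=2/3):
            """Weighted-Jacobi smoothing for -u'' = f with zero boundary values."""
            for _ in range(sweeps):
                u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            return u

        def residual(u, f, h):
            r = np.zeros_like(u)
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            return r

        def two_grid(u, f, h):
            """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
            u = jacobi(u, f, h, sweeps=3)
            r = residual(u, f, h)
            rc = r[::2].copy()                        # restriction by injection
            n_c, hc = rc.size, 2 * h
            # Solve the small coarse problem directly (tridiagonal system).
            A = (np.diag(2 * np.ones(n_c - 2)) - np.diag(np.ones(n_c - 3), 1)
                 - np.diag(np.ones(n_c - 3), -1)) / (hc * hc)
            ec = np.zeros(n_c)
            ec[1:-1] = np.linalg.solve(A, rc[1:-1])
            e = np.interp(np.arange(u.size), np.arange(u.size)[::2], ec)  # linear prolongation
            return jacobi(u + e, f, h, sweeps=3)

        n = 129
        h = 1.0 / (n - 1)
        x = np.linspace(0.0, 1.0, n)
        f = np.pi ** 2 * np.sin(np.pi * x)            # exact solution: sin(pi x)
        u = np.zeros(n)
        for cycle in range(10):
            u = two_grid(u, f, h)
            print(cycle, np.max(np.abs(u - np.sin(np.pi * x))))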

  16. Edge compression techniques for visualization of dense directed graphs.

    PubMed

    Dwyer, Tim; Henry Riche, Nathalie; Marriott, Kim; Mears, Christopher

    2013-12-01

    We explore the effectiveness of visualizing dense directed graphs by replacing individual edges with edges connected to 'modules'-or groups of nodes-such that the new edges imply aggregate connectivity. We only consider techniques that offer a lossless compression: that is, where the entire graph can still be read from the compressed version. The techniques considered are: a simple grouping of nodes with identical neighbor sets; Modular Decomposition which permits internal structure in modules and allows them to be nested; and Power Graph Analysis which further allows edges to cross module boundaries. These techniques all have the same goal--to compress the set of edges that need to be rendered to fully convey connectivity--but each successive relaxation of the module definition permits fewer edges to be drawn in the rendered graph. Each successive technique also, we hypothesize, requires a higher degree of mental effort to interpret. We test this hypothetical trade-off with two studies involving human participants. For Power Graph Analysis we propose a novel optimal technique based on constraint programming. This enables us to explore the parameter space for the technique more precisely than could be achieved with a heuristic. Although applicable to many domains, we are motivated by--and discuss in particular--the application to software dependency analysis.
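
    The first and simplest technique named above groups nodes that have identical neighbor sets, so each group can be drawn as a single module without losing any edge information. The following Python sketch shows only that grouping step on a toy directed graph (node names and edges are illustrative; Modular Decomposition and Power Graph Analysis require considerably more machinery).

        from collections import defaultdict

        def group_identical_neighbors(nodes, edges):
            """Group nodes of a directed graph whose in- and out-neighbor sets are
            identical; each group can be rendered as one module without edge loss."""
            out_nbrs, in_nbrs = defaultdict(set), defaultdict(set)
            for u, v in edges:
                out_nbrs[u].add(v)
                in_nbrs[v].add(u)

            groups = defaultdict(list)
            for n in nodes:
                # Nodes with the same (out-set, in-set) signature are interchangeable
                # endpoints, so their edges can be aggregated into module edges.
                key = (frozenset(out_nbrs[n]), frozenset(in_nbrs[n]))
                groups[key].append(n)
            return [sorted(g) for g in groups.values()]

        nodes = ["a", "b", "c", "d", "e"]
        edges = [("a", "c"), ("b", "c"), ("a", "d"), ("b", "d"), ("c", "e"), ("d", "e")]
        print(group_identical_neighbors(nodes, edges))
        # -> [['a', 'b'], ['c', 'd'], ['e']]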

  17. PeptideDepot: Flexible Relational Database for Visual Analysis of Quantitative Proteomic Data and Integration of Existing Protein Information

    PubMed Central

    Yu, Kebing; Salomon, Arthur R.

    2010-01-01

    Recently, dramatic progress has been achieved in expanding the sensitivity, resolution, mass accuracy, and scan rate of mass spectrometers able to fragment and identify peptides through tandem mass spectrometry (MS/MS). Unfortunately, this enhanced ability to acquire proteomic data has not been accompanied by a concomitant increase in the availability of flexible tools allowing users to rapidly assimilate, explore, and analyze this data and adapt to a variety of experimental workflows with minimal user intervention. Here we fill this critical gap by providing a flexible relational database called PeptideDepot for organization of expansive proteomic data sets, collation of proteomic data with available protein information resources, and visual comparison of multiple quantitative proteomic experiments. Our software design, built upon the synergistic combination of a MySQL database for safe warehousing of proteomic data with a FileMaker-driven graphical user interface for flexible adaptation to diverse workflows, enables proteomic end-users to directly tailor the presentation of proteomic data to the unique analysis requirements of the individual proteomics lab. PeptideDepot may be deployed as an independent software tool or integrated directly with our High Throughput Autonomous Proteomic Pipeline (HTAPP) used in the automated acquisition and post-acquisition analysis of proteomic data. PMID:19834895

  18. MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.

    PubMed

    Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik

    2016-01-01

    Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
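
    MotionFlow's overview formulates each motion sequence as transitions between static poses and aggregates the sequences into a tree. The Python sketch below shows only that aggregation step on hypothetical pose labels, building a prefix tree whose node counts summarize how often each pose path occurs; the pose clustering, interaction, and visualization layers of the real system are not represented.

        def build_pattern_tree(sequences):
            """Aggregate pose-label sequences into a prefix tree whose counts
            summarize how often each transition path occurs (a much-simplified
            analogue of MotionFlow's aggregation step)."""
            tree = {"count": 0, "children": {}}
            for seq in sequences:
                node = tree
                node["count"] += 1
                for pose in seq:
                    node = node["children"].setdefault(pose, {"count": 0, "children": {}})
                    node["count"] += 1
            return tree

        def print_tree(node, label="root", depth=0):
            print("  " * depth + f"{label} ({node['count']})")
            for pose, child in node["children"].items():
                print_tree(child, pose, depth + 1)

        # Hypothetical pose-labelled gesture sequences.
        sequences = [
            ["stand", "raise_arm", "wave", "lower_arm"],
            ["stand", "raise_arm", "wave", "wave", "lower_arm"],
            ["stand", "crouch", "stand"],
        ]
        print_tree(build_pattern_tree(sequences))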

  19. Design and implementation of bimolecular fluorescence complementation (BiFC) assays for the visualization of protein interactions in living cells.

    PubMed

    Kerppola, Tom K

    2006-01-01

    Bimolecular fluorescence complementation (BiFC) analysis enables direct visualization of protein interactions in living cells. The BiFC assay is based on the discoveries that two non-fluorescent fragments of a fluorescent protein can form a fluorescent complex and that the association of the fragments can be facilitated when they are fused to two proteins that interact with each other. BiFC must be confirmed by parallel analysis of proteins in which the interaction interface has been mutated. It is not necessary for the interaction partners to juxtapose the fragments within a specific distance of each other because they can associate when they are tethered to a complex with flexible linkers. It is also not necessary for the interaction partners to form a complex with a long half-life or a high occupancy since the fragments can associate in a transient complex and un-associated fusion proteins do not interfere with detection of the complex. Many interactions can be visualized when the fusion proteins are expressed at levels comparable to their endogenous counterparts. The BiFC assay has been used for the visualization of interactions between many types of proteins in different subcellular locations and in different cell types and organisms. It is technically straightforward and can be performed using a regular fluorescence microscope and standard molecular biology and cell culture reagents.

  20. Generalizing the extensibility of a dynamic geometry software

    NASA Astrophysics Data System (ADS)

    Herceg, Đorđe; Radaković, Davorka; Herceg, Dejana

    2012-09-01

    Plug-and-play visual components in Dynamic Geometry Software (DGS) enable the development of visually attractive, rich and highly interactive dynamic drawings. We are developing SLGeometry, a DGS that contains a custom programming language, a computer algebra system (CAS engine) and a graphics subsystem. The basic extensibility framework of SLGeometry supports the dynamic addition of new functions from attribute-annotated classes that register runtime metadata in code. We present a general plug-in framework for dynamically importing arbitrary Silverlight user interface (UI) controls into SLGeometry at runtime. The CAS engine maintains a metadata store that describes each imported visual component and enables two-way communication between the expressions stored in the engine and the UI controls on the screen.
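
    SLGeometry itself is a Silverlight/C# system whose extensibility relies on attribute-annotated classes that register runtime metadata. As a language-neutral analogue of that registration pattern (illustrative names only, not SLGeometry's API), the Python sketch below uses a decorator to record each function's metadata in a registry that an expression engine could consult at runtime.

        # Minimal registry in the spirit of "annotated classes that register runtime
        # metadata": the decorator records name, argument types, return type, and an
        # implementation object. All names here are illustrative assumptions.
        FUNCTION_REGISTRY = {}

        def geometry_function(name, arg_types, returns):
            def decorate(cls):
                FUNCTION_REGISTRY[name] = {"arg_types": arg_types,
                                           "returns": returns,
                                           "impl": cls()}
                return cls
            return decorate

        @geometry_function("Midpoint", arg_types=("point", "point"), returns="point")
        class Midpoint:
            def evaluate(self, a, b):
                return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

        @geometry_function("Distance", arg_types=("point", "point"), returns="number")
        class Distance:
            def evaluate(self, a, b):
                return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

        # The "engine" resolves a function purely from the registry's metadata.
        def call(name, *args):
            entry = FUNCTION_REGISTRY[name]
            assert len(args) == len(entry["arg_types"])
            return entry["impl"].evaluate(*args)

        print(call("Midpoint", (0, 0), (4, 2)))   # (2.0, 1.0)
        print(call("Distance", (0, 0), (3, 4)))   # 5.0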

  1. D3: A Collaborative Infrastructure for Aerospace Design

    NASA Technical Reports Server (NTRS)

    Walton, Joan; Filman, Robert E.; Knight, Chris; Korsmeyer, David J.; Lee, Diana D.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    DARWIN is a NASA-developed, Internet-based system that enables aerospace researchers to securely and remotely access and collaborate on the analysis of aerospace vehicle design data, primarily the results of wind-tunnel testing and numeric (e.g., computational fluid dynamics) model executions. DARWIN captures, stores and indexes data, manages derived knowledge (such as visualizations across multiple data sets) and provides an environment for designers to collaborate in the analysis of test results. DARWIN is an interesting application because it supports high volumes of data, integrates multiple modalities of data display (e.g., images and data visualizations), and provides non-trivial access control mechanisms. DARWIN enables collaboration by allowing users to share not only visualizations of the data, but also commentary about, and views of, the data.

  2. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Maxine D.; Leigh, Jason

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high-speed networks, such as Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation’s Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy’s Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for “Development of the Next-Generation CAVE Virtual Environment (NG-CAVE),” enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications enabled by the CAVE2/Blaze visual computing system are advancing scientific research and education in the U.S. and globally and helping to train the next-generation workforce.

  3. Deciphering protein signatures using color, morphological, and topological analysis of immunohistochemically stained human tissues

    NASA Astrophysics Data System (ADS)

    Zerhouni, Erwan; Prisacari, Bogdan; Zhong, Qing; Wild, Peter; Gabrani, Maria

    2016-03-01

    Images of tissue specimens enable evidence-based study of disease susceptibility and stratification. Moreover, staining technologies make it possible to evidence molecular expression patterns through multicolor visualization, thus enabling personalized disease treatment and prevention. However, translating molecular expression imaging into direct health benefits has been slow, and two major factors contribute to this. On the one hand, disease susceptibility and progression form a complex, multifactorial molecular process. Diseases such as cancer exhibit cellular heterogeneity, impeding the differentiation between diverse grades or types of cell formations. On the other hand, the relative quantification of selected features in the stained tissue is ambiguous, tedious, time consuming and prone to clerical error, leading to intra- and inter-observer variability and low throughput. Image analysis of digital histopathology images is a fast-developing and exciting area of disease research that aims to address the above limitations. We have developed a computational framework that extracts unique signatures using color, morphological and topological information and allows the combination thereof. The integration of the above information enables diagnosis of disease with an AUC as high as 0.97. Multiple stainings show significant improvement with respect to most proteins, with an AUC as high as 0.99.

  4. Detection of Proteins on Blot Membranes

    PubMed Central

    Goldman, Aaron; Harper, Sandra; Speicher, David W.

    2017-01-01

    Staining of blot membranes enables the visualization of bound proteins. Proteins are usually transferred to blot membranes by electroblotting, by direct spotting of protein solutions, or by contact blots. Staining allows the efficiency of transfer to the membrane to be monitored. This unit describes protocols for staining proteins after electroblotting from polyacrylamide gels to blot membranes such as polyvinylidene difluoride (PVDF), nitrocellulose, or nylon membranes. The same methods can be used if proteins are directly spotted, either manually or using robotics. Protocols are included for seven general protein stains (amido black, Coomassie blue, Ponceau S, colloidal gold, colloidal silver, India ink, and MemCode) and three fluorescent protein stains (fluorescamine, IAEDANS, and SYPRO Ruby). Also included is an in-depth discussion of the different blot membrane types and the compatibility of different protein stains with downstream applications, such as immunoblotting or N-terminal Edman sequencing. PMID:27801518

  5. Detection of Proteins on Blot Membranes.

    PubMed

    Goldman, Aaron; Harper, Sandra; Speicher, David W

    2016-11-01

    Staining of blot membranes enables the visualization of bound proteins. Proteins are usually transferred to blot membranes by electroblotting, by direct spotting of protein solutions, or by contact blots. Staining allows the efficiency of transfer to the membrane to be monitored. This unit describes protocols for staining proteins after electroblotting from polyacrylamide gels to blot membranes such as polyvinylidene difluoride (PVDF), nitrocellulose, or nylon membranes. The same methods can be used if proteins are directly spotted, either manually or using robotics. Protocols are included for seven general protein stains (amido black, Coomassie blue, Ponceau S, colloidal gold, colloidal silver, India ink, and MemCode) and three fluorescent protein stains (fluorescamine, IAEDANS, and SYPRO Ruby). Also included is an in-depth discussion of the different blot membrane types and the compatibility of different protein stains with downstream applications, such as immunoblotting or N-terminal Edman sequencing. © 2016 by John Wiley & Sons, Inc. Copyright © 2016 John Wiley & Sons, Inc.

  6. Site-Specific Immunosuppression in Vascularized Composite Allotransplantation: Prospects and Potential

    PubMed Central

    Schnider, Jonas T.; Weinstock, Matthias; Plock, Jan A.; Solari, Mario G.; Venkataramanan, Raman; Zheng, Xin Xiao; Gorantla, Vijay S.

    2013-01-01

    Skin is the most immunogenic component of a vascularized composite allograft (VCA) and is the primary trigger and target of rejection. The skin is directly accessible for visual monitoring of acute rejection (AR) and for directed biopsy, timely therapeutic intervention, and management of AR. Logically, antirejection drugs, biologics, or other agents delivered locally to the VCA may reduce the need for systemic immunosuppression with its adverse effects. Topical FK 506 (tacrolimus) and steroids have been used in clinical VCA as an adjunct to systemic therapy with unclear beneficial effects. However, there are no commercially available topical formulations for other widely used systemic immunosuppressive drugs such as mycophenolic acid, sirolimus, and everolimus. Investigating the site-specific therapeutic effects and efficacy of systemically active agents may enable optimizing the dosing, frequency, and duration of overall immunosuppression in VCA with minimization or elimination of long-term drug-related toxicity. PMID:23476677

  7. Real-Space Mapping of the Chiral Near-Field Distributions in Spiral Antennas and Planar Metasurfaces.

    PubMed

    Schnell, M; Sarriugarte, P; Neuman, T; Khanikaev, A B; Shvets, G; Aizpurua, J; Hillenbrand, R

    2016-01-13

    Chiral antennas and metasurfaces can be designed to react differently to left- and right-handed circularly polarized light, which enables novel optical properties such as giant optical activity and negative refraction. Here, we demonstrate that the underlying chiral near-field distributions can be directly mapped with scattering-type scanning near-field optical microscopy employing circularly polarized illumination. We apply our technique to visualize, for the first time, the circular-polarization selective nanofocusing of infrared light in Archimedean spiral antennas, and explain this chiral optical effect by directional launching of traveling waves in analogy to antenna theory. Moreover, we near-field image single-layer rosette and asymmetric dipole-monopole metasurfaces and find negligible and strong chiral optical near-field contrast, respectively. Our technique paves the way for near-field characterization of optical chirality in metal nanostructures, which will be essential for the future development of chiral antennas and metasurfaces and their applications.

  8. Emergence of binocular functional properties in a monocular neural circuit

    PubMed Central

    Ramdya, Pavan; Engert, Florian

    2010-01-01

    Sensory circuits frequently integrate converging inputs while maintaining precise functional relationships between them. For example, in mammals with stereopsis, neurons at the first stages of binocular visual processing show a close alignment of receptive-field properties for each eye. Still, basic questions about the global wiring mechanisms that enable this functional alignment remain unanswered, including whether the addition of a second retinal input to an otherwise monocular neural circuit is sufficient for the emergence of these binocular properties. We addressed this question by inducing a de novo binocular retinal projection to the larval zebrafish optic tectum and examining recipient neuronal populations using in vivo two-photon calcium imaging. Notably, neurons in rewired tecta were predominantly binocular and showed matching direction selectivity for each eye. We found that a model based on local inhibitory circuitry that computes direction selectivity using the topographic structure of both retinal inputs can account for the emergence of this binocular feature. PMID:19160507

  9. Optical Coherence Tomography Enabling Non Destructive Metrology of Layered Polymeric GRIN Material

    PubMed Central

    Meemon, Panomsak; Yao, Jianing; Lee, Kye-Sung; Thompson, Kevin P.; Ponting, Michael; Baer, Eric; Rolland, Jannick P.

    2013-01-01

    Gradient Refractive INdex (GRIN) optical components have historically fallen short of theoretical expectations. A recent breakthrough is the manufacturing of nanolayered spherical GRIN (S-GRIN) polymer optical elements, where the construction method yields refractive index gradients that exceed 0.08. Here we report on the application of optical coherence tomography (OCT), including micron-class axial and lateral resolution advances, as effective, innovative methods for performing nondestructive diagnostic metrology on S-GRIN. We show that OCT can be used to visualize and quantify characteristics of the material throughout the manufacturing process. Specifically, internal film structure may be revealed and data are processed to extract sub-surface profiles of each internal film of the material to quantify 3D film thickness and homogeneity. The technique provides direct feedback into the fabrication process directed at optimizing the quality of the nanolayered S-GRIN polymer optical components.

  10. Improving the Accessibility and Use of NASA Earth Science Data

    NASA Technical Reports Server (NTRS)

    Tisdale, Matthew; Tisdale, Brian

    2015-01-01

    Many of the NASA Langley Atmospheric Science Data Center (ASDC) Distributed Active Archive Center (DAAC) multidimensional tropospheric and atmospheric chemistry data products are stored in HDF4, HDF5 or NetCDF format, which traditionally have been difficult to analyze and visualize with geospatial tools. With rising demand from diverse end-user communities for geospatial tools that handle multidimensional products, several applications, such as ArcGIS, have refined their software. Many geospatial applications now provide functionality that enables the end user to store, serve, and analyze each individual variable along its time and vertical dimensions; to use NetCDF, GRIB, and HDF raster data formats directly across applications; and to publish output as REST image services or WMS layers for time- and space-enabled web application development. During this webinar, participants will learn how to leverage geospatial applications such as ArcGIS, OPeNDAP and ncWMS in the production of Earth science information, and in increasing data accessibility and usability.
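
    Because the products described above are multidimensional NetCDF/HDF variables with time and vertical dimensions, a few lines of the widely used netCDF4 Python library are enough to inspect and slice such a file outside of any GIS application. The file name and variable name below are placeholders, not an actual ASDC product layout.

        # Minimal sketch of inspecting and slicing a multidimensional NetCDF product.
        from netCDF4 import Dataset

        with Dataset("tropospheric_product.nc") as ds:     # hypothetical file name
            print(ds.variables.keys())                      # discover available variables
            var = ds.variables["no2_column"]                # hypothetical variable name
            print(var.dimensions, var.shape)                # e.g. ('time', 'lat', 'lon')
            first_time_step = var[0, :, :]                  # slice a single time step
            print("mean value at t=0:", first_time_step.mean())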

  11. IGA-ADS: Isogeometric analysis FEM using ADS solver

    NASA Astrophysics Data System (ADS)

    Łoś, Marcin M.; Woźniak, Maciej; Paszyński, Maciej; Lenharth, Andrew; Hassaan, Muhamm Amber; Pingali, Keshav

    2017-08-01

    In this paper we present a fast explicit solver for the solution of non-stationary problems using L2 projections with the isogeometric finite element method. The solver has been implemented within the GALOIS framework and enables parallel multi-core simulations of different time-dependent problems in 1D, 2D, or 3D. The solver framework is prepared in a way that enables direct implementation of the selected PDE and its corresponding boundary conditions. We describe the installation, the implementation of three exemplary PDEs, and the execution of the simulations on multi-core Linux cluster nodes. The three case studies cover heat transfer, linear elasticity, and non-linear flow in heterogeneous media. The package generates output suitable for interfacing with the Gnuplot and ParaView visualization software. The exemplary simulations show near-perfect scalability on the Gilbert shared-memory node with four Intel® Xeon® CPU E7-4860 processors, each with 10 physical cores (40 cores in total).
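
    The speed of the solver described above comes from the alternating-directions structure of isogeometric L2 projections: the tensor-product mass matrix factors into one-dimensional matrices, so each time step reduces to a sequence of cheap direction-by-direction solves. The Python sketch below verifies that equivalence on a tiny example, using random symmetric positive-definite stand-ins for the 1D B-spline mass matrices (the matrices and sizes are illustrative, not taken from IGA-ADS).

        import numpy as np

        rng = np.random.default_rng(1)
        nx, ny = 6, 5

        def spd(n):
            a = rng.random((n, n))
            return a @ a.T + n * np.eye(n)     # random SPD stand-in for a 1D mass matrix

        Mx, My = spd(nx), spd(ny)              # one "mass matrix" per direction
        B = rng.random((nx, ny))               # right-hand side of the L2 projection

        # Alternating-directions solve: one 1D solve per direction.
        U = np.linalg.solve(Mx, B)             # sweep in x
        U = np.linalg.solve(My, U.T).T         # sweep in y

        # Check against the full Kronecker-product system (Mx ⊗ My) u = b.
        u_full = np.linalg.solve(np.kron(Mx, My), B.reshape(-1))
        print(np.allclose(U.reshape(-1), u_full))   # True

    The full Kronecker system couples all nx*ny unknowns at once, whereas the alternating-directions version only ever factors the small one-dimensional matrices, which is what keeps the explicit scheme fast as the mesh grows.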

  12. Compartmentalized microchannel array for high-throughput analysis of single cell polarized growth and dynamics

    DOE PAGES

    Geng, Tao; Bredeweg, Erin L.; Szymanski, Craig J.; ...

    2015-11-04

    Interrogating polarized growth is technologically challenging due to extensive cellular branching and uncontrollable environmental conditions in conventional assays. Here we present a robust and high-performance microfluidic system that enables observations of polarized growth with enhanced temporal and spatial control over prolonged periods. The system has built-in tunability and versatility to accommodate a variety of science applications requiring precisely controlled environments. Using the model filamentous fungus, Neurospora crassa, this microfluidic system enabled direct visualization and analysis of cellular heterogeneity in a clonal fungal cell population, nuclear distribution and dynamics at the subhyphal level, and quantitative dynamics of gene expression with single hyphal compartment resolution in response to carbon source starvation and exchange experiments. Although the microfluidic device is demonstrated on filamentous fungi, our technology is immediately extensible to a wide array of other biosystems that exhibit similar polarized cell growth, with applications ranging from bioenergy production to human health.

  13. Collaboration tools and techniques for large model datasets

    USGS Publications Warehouse

    Signell, R.P.; Carniel, S.; Chiggiato, J.; Janekovic, I.; Pullen, J.; Sherwood, C.R.

    2008-01-01

    In MREA and many other marine applications, it is common to have multiple models running with different grids, run by different institutions. Techniques and tools are described for low-bandwidth delivery of data from large multidimensional datasets, such as those from meteorological and oceanographic models, directly into generic analysis and visualization tools. Output is stored using the NetCDF CF Metadata Conventions, and then delivered to collaborators over the web via OPeNDAP. OPeNDAP datasets served by different institutions are then organized via THREDDS catalogs. Tools and procedures are then used which enable scientists to explore data on the original model grids using tools they are familiar with. It is also low-bandwidth, enabling users to extract just the data they require, an important feature for access from ship or remote areas. The entire implementation is simple enough to be handled by modelers working with their webmasters - no advanced programming support is necessary. ?? 2007 Elsevier B.V. All rights reserved.
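
    The low-bandwidth workflow described above relies on OPeNDAP serving, which lets a collaborator pull only the subset of a large model dataset they need. The Python sketch below shows that pattern with xarray; the URL, variable, and coordinate names are placeholders for whatever a given THREDDS catalog actually serves.

        # Low-bandwidth access sketch: open a model dataset over OPeNDAP and pull
        # only a small subset. Endpoint and names below are hypothetical.
        import xarray as xr

        url = "http://example.org/thredds/dodsC/ocean_model/latest.nc"
        ds = xr.open_dataset(url)                 # lazy: no data transferred yet

        subset = ds["temperature"].isel(time=-1).sel(
            lat=slice(40.0, 46.0), lon=slice(12.0, 20.0)
        )                                         # still lazy
        values = subset.values                    # only this slab is downloaded
        print(subset.dims, values.shape)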

  14. The Influence of New Technologies on the Visual Attention of CSIs Performing a Crime Scene Investigation.

    PubMed

    de Gruijter, Madeleine; de Poot, Christianne J; Elffers, Henk

    2016-01-01

    Currently, a series of promising new tools are under development that will enable crime scene investigators (CSIs) to analyze traces in situ during the crime scene investigation or enable them to detect blood and provide information on the age of blood. An experiment is conducted with thirty CSIs investigating a violent robbery at a mock crime scene to study the influence of such technologies on the perception and interpretation of traces during the first phase of the investigation. Results show that in their search for traces, CSIs are not directed by the availability of technologies, which is a reassuring finding. Qualitative findings suggest that CSIs are generally more focused on analyzing perpetrator traces than on reconstructing the event. A focus on perpetrator traces might become a risk when other crime-related traces are overlooked, and when analyzed traces are in fact not crime-related and in consequence lead to the identification of innocent suspects. © 2015 American Academy of Forensic Sciences.

  15. Cell Type-Specific Manipulation with GFP-Dependent Cre Recombinase

    PubMed Central

    Tang, Jonathan C Y; Rudolph, Stephanie; Dhande, Onkar S; Abraira, Victoria E; Choi, Seungwon; Lapan, Sylvain; Drew, Iain R; Drokhlyansky, Eugene; Huberman, Andrew D; Regehr, Wade G; Cepko, Constance L

    2016-01-01

    Summary There are many transgenic GFP reporter lines that allow visualization of specific populations of cells. Using such lines for functional studies requires a method that transforms GFP into a molecule that enables genetic manipulation. Here we report the creation of a method that exploits GFP for gene manipulation, Cre Recombinase Dependent on GFP (CRE-DOG), a split component system that uses GFP and its derivatives to directly induce Cre/loxP recombination. Using plasmid electroporation and AAV viral vectors, we delivered CRE-DOG to multiple GFP mouse lines, leading to effective recombination selectively in GFP-labeled cells. Further, CRE-DOG enabled optogenetic control of these neurons. Beyond providing a new set of tools for manipulation of gene expression selectively in GFP+ cells, we demonstrate that GFP can be used to reconstitute the activity of a protein not known to have a modular structure, suggesting that this strategy might be applicable to a wide range of proteins. PMID:26258682

  16. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    NASA Astrophysics Data System (ADS)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  17. UTM Safely Enabling UAS Operations in Low-Altitude Airspace

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal

    2017-01-01

    Conduct research, development and testing to identify the airspace operations requirements that enable large-scale visual and beyond-visual-line-of-sight UAS operations in low-altitude airspace, using a build-a-little-test-a-little strategy that progresses from remote to urban areas. Low density: no traffic management required, but an understanding of airspace constraints is needed. Cooperative traffic management: understanding of airspace constraints and of other operations. Manned and unmanned traffic management: scalable and heterogeneous operations. The UTM construct is consistent with the FAA's risk-based strategy. The UTM research platform is used for simulations and tests, and UTM offers a path towards scalability.

  18. UTM Safely Enabling UAS Operations in Low-Altitude Airspace

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal H.

    2016-01-01

    Conduct research, development and testing to identify the airspace operations requirements that enable large-scale visual and beyond-visual-line-of-sight UAS operations in low-altitude airspace, using a build-a-little-test-a-little strategy that progresses from remote to urban areas. Low density: no traffic management required, but an understanding of airspace constraints is needed. Cooperative traffic management: understanding of airspace constraints and of other operations. Manned and unmanned traffic management: scalable and heterogeneous operations. The UTM construct is consistent with the FAA's risk-based strategy. The UTM research platform is used for simulations and tests, and UTM offers a path towards scalability.

  19. Airway mechanics and methods used to visualize smooth muscle dynamics in vitro.

    PubMed

    Cooper, P R; McParland, B E; Mitchell, H W; Noble, P B; Politi, A Z; Ressmeyer, A R; West, A R

    2009-10-01

    Contraction of airway smooth muscle (ASM) is regulated by the physiological, structural and mechanical environment in the lung. We review two in vitro techniques, lung slices and airway segment preparations, that enable in situ ASM contraction and airway narrowing to be visualized. Lung slices and airway segment approaches bridge a gap between cell culture and isolated ASM, and whole animal studies. Imaging techniques enable key upstream events involved in airway narrowing, such as ASM cell signalling and structural and mechanical events impinging on ASM, to be investigated.

  20. Visualizing Structure and Dynamics of Disaccharide Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, J. F.; Beckham, G. T.; Himmel, M. E.

    2012-01-01

    We examine the effect of several solvent models on the conformational properties and dynamics of disaccharides such as cellobiose and lactose. Significant variation in the timescales of large-scale conformational transformations is observed. Molecular dynamics simulation provides enough detail to enable insight through visualization of multidimensional data sets. We present a new way to visualize conformational space for disaccharides with Ramachandran plots.

  1. VID-R and SCAN: Tools and Methods for the Automated Analysis of Visual Records.

    ERIC Educational Resources Information Center

    Ekman, Paul; And Others

    The VID-R (Visual Information Display and Retrieval) system that enables computer-aided analysis of visual records is composed of a film-to-television chain, two videotape recorders with complete remote control of functions, a video-disc recorder, three high-resolution television monitors, a teletype, a PDP-8, a video and audio interface, three…

  2. Landscape control points: a procedure for predicting and monitoring visual impacts

    Treesearch

    R. Burton Litton

    1973-01-01

    The visual impacts of alterations to the landscape can be studied by setting up Landscape Control Points–a network of permanently established observation sites. Such observations enable the forest manager to anticipate visual impacts of management decision, select from a choice of alternative solutions, cover an area for comprehensive viewing, and establish a method to...

  3. Survey of Network Visualization Tools

    DTIC Science & Technology

    2007-12-01

    [Excerpt from the report's tool-attribute tables] Dimensionality: 2D. Deployment type: components for tool building; standalone tool. OS: Windows. Extensibility: ActiveX, Visual Basic. Interoperability: Daisy is fully compliant with Microsoft's ActiveX, therefore other Windows-based programs can… other functions that improve analytic decision making. Available in ActiveX, C++, Java, and .NET editions. Tom Sawyer Visualization: enables you to…

  4. Enabling Efficient Intelligence Analysis in Degraded Environments

    DTIC Science & Technology

    2013-06-01

    [Excerpt from the report abstract] …evolution analysis; a Magnets Grid widget for multidimensional information exploration; and a record browser of Visual Summary Cards widget for fast visual identification of… …attention and inattentional blindness. It also explores and develops various techniques to represent information in a salient way and provide efficient…

  5. Developing Students' Critical Thinking Skills through Visual Literacy in the New Secondary School Curriculum in Hong Kong

    ERIC Educational Resources Information Center

    Cheung, Chi-Kim; Jhaveri, Aditi Dubey

    2016-01-01

    This paper argues that the planned introduction of visual literacy into the New Secondary School Curriculum can play a crucial role in enabling students to think critically and creatively in Hong Kong's highly visual landscape. As Hong Kong's educational system remains entrenched in long-established and conventional pedagogies, the primacy given…

  6. Trends in the sand: Directional evolution in the shell shape of recessing scallops (Bivalvia: Pectinidae).

    PubMed

    Sherratt, Emma; Alejandrino, Alvin; Kraemer, Andrew C; Serb, Jeanne M; Adams, Dean C

    2016-09-01

    Directional evolution is one of the most compelling evolutionary patterns observed in macroevolution. Yet, despite its importance, detecting such trends in multivariate data remains a challenge. In this study, we evaluate multivariate evolution of shell shape in 93 bivalved scallop species, combining geometric morphometrics and phylogenetic comparative methods. Phylomorphospace visualization described the history of morphological diversification in the group, revealing that taxa with a recessing life habit were the most distinctive in shell shape and appeared to display a directional trend. To evaluate this hypothesis empirically, we extended existing methods by characterizing the mean directional evolution in phylomorphospace for recessing scallops. We then compared this pattern to what was expected under several alternative evolutionary scenarios using phylogenetic simulations. The observed pattern did not fall within the distribution obtained under multivariate Brownian motion, enabling us to reject this evolutionary scenario. By contrast, the observed pattern was more similar to, and fell within, the distribution obtained from simulations using Brownian motion combined with a directional trend. Thus, the observed data are consistent with a pattern of directional evolution for this lineage of recessing scallops. We discuss this putative directional evolutionary trend in terms of its potential adaptive role in exploiting novel habitats. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.
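
    A minimal sketch of the type of null comparison described here, simulating trait evolution under Brownian motion with and without a directional trend in a hypothetical two-trait morphospace; it is not the authors' phylogenetic implementation, only an illustration of how a drift term shifts the distribution of evolved trait values.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_bm(n_steps=200, n_traits=2, sigma=0.1, trend=None):
          """Accumulate Gaussian increments; 'trend' adds a constant drift per step."""
          steps = rng.normal(0.0, sigma, size=(n_steps, n_traits))
          if trend is not None:
              steps += np.asarray(trend)          # directional component
          return steps.cumsum(axis=0)             # trajectory through trait space

      # Distribution of endpoints under pure BM vs. BM plus a directional trend.
      pure_bm = np.array([simulate_bm()[-1] for _ in range(1000)])
      trended = np.array([simulate_bm(trend=[0.02, -0.01])[-1] for _ in range(1000)])

      print("pure BM endpoint mean:   ", pure_bm.mean(axis=0))
      print("trended BM endpoint mean:", trended.mean(axis=0))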

  7. Visual Prostheses: The Enabling Technology to Give Sight to the Blind

    PubMed Central

    Maghami, Mohammad Hossein; Sodagar, Amir Masoud; Lashay, Alireza; Riazi-Esfahani, Hamid; Riazi-Esfahani, Mohammad

    2014-01-01

    Millions of patients are either slowly losing their vision or are already blind due to retinal degenerative diseases such as retinitis pigmentosa (RP) and age-related macular degeneration (AMD), or because of accidents or injuries. Employment of artificial means to treat extreme vision impairment has come closer to reality during the past few decades. Currently, many research groups work towards effective solutions to restore a rudimentary sense of vision to the blind. Aside from efforts to replace damaged parts of the retina with engineered living tissues or microfabricated photoreceptor arrays, implantable electronic microsystems, referred to as visual prostheses, are also sought as promising solutions to restore vision. From a functional point of view, visual prostheses receive image information from the outside world and deliver it to the natural visual system, enabling the subject to receive a meaningful perception of the image. This paper provides an overview of technical design aspects and clinical test results of visual prostheses, highlights past and recent progress in realizing chronic high-resolution visual implants, and discusses some technical challenges confronted when trying to enhance the functional quality of such devices. PMID:25709777

  8. Neural Substrates of Visual Spatial Coding and Visual Feedback Control for Hand Movements in Allocentric and Target-Directed Tasks

    PubMed Central

    Thaler, Lore; Goodale, Melvyn A.

    2011-01-01

    Neuropsychological evidence suggests that different brain areas may be involved in movements that are directed at visual targets (e.g., pointing or reaching), and movements that are based on allocentric visual information (e.g., drawing or copying). Here we used fMRI to investigate the neural correlates of these two types of movements in healthy volunteers. Subjects (n = 14) performed right hand movements in either a target-directed task (moving a cursor to a target dot) or an allocentric task (moving a cursor to reproduce the distance and direction between two distal target dots) with or without visual feedback about their hand movement. Movements were monitored with an MR compatible touch panel. A whole brain analysis revealed that movements in allocentric conditions led to an increase in activity in the fundus of the left intra-parietal sulcus (IPS), in posterior IPS, in bilateral dorsal premotor cortex (PMd), and in the lateral occipital complex (LOC). Visual feedback in both target-directed and allocentric conditions led to an increase in activity in area MT+, superior parietal–occipital cortex (SPOC), and posterior IPS (all bilateral). In addition, we found that visual feedback affected brain activity differently in target-directed as compared to allocentric conditions, particularly in the pre-supplementary motor area, PMd, IPS, and parieto-occipital cortex. Our results, in combination with previous findings, suggest that the LOC is essential for allocentric visual coding and that SPOC is involved in visual feedback control. The differences in brain activity between target-directed and allocentric visual feedback conditions may be related to behavioral differences in visual feedback control. Our results advance the understanding of the visual coordinate frame used by the LOC. In addition, because of the nature of the allocentric task, our results have relevance for the understanding of neural substrates of magnitude estimation and vector coding of movements. PMID:21941474

  9. Teaching Visual Literacy for the 21st Century.

    ERIC Educational Resources Information Center

    Glasgow, Jacqueline N.

    1994-01-01

    Discusses teaching visual literacy by teaching students how to decode advertising images, thus enabling them to move away from being passive receivers of messages to active unravelers. Shows how teachers can use concepts from semiotics to deconstruct advertising messages. (SR)

  10. Integrated Data Visualization and Virtual Reality Tool

    NASA Technical Reports Server (NTRS)

    Dryer, David A.

    1998-01-01

    The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.

  11. Towards a Web-Enabled Geovisualization and Analytics Platform for the Energy and Water Nexus

    NASA Astrophysics Data System (ADS)

    Sanyal, J.; Chandola, V.; Sorokine, A.; Allen, M.; Berres, A.; Pang, H.; Karthik, R.; Nugent, P.; McManamay, R.; Stewart, R.; Bhaduri, B. L.

    2017-12-01

    Interactive data analytics are playing an increasingly vital role in the generation of new, critical insights regarding the complex dynamics of the energy/water nexus (EWN) and its interactions with climate variability and change. Integration of impacts, adaptation, and vulnerability (IAV) science with emerging, and increasingly critical, data science capabilities offers promising potential to meet the needs of the EWN community. To enable the exploration of pertinent research questions, a web-based geospatial visualization platform is being built that integrates a data analysis toolbox with advanced data fusion and data visualization capabilities to create a knowledge discovery framework for the EWN. When fully built out, the system will offer several geospatial visualization capabilities, including statistical visual analytics, clustering, principal-component analysis, and dynamic time warping; support uncertainty visualization and the exploration of data provenance; and support machine learning discoveries to render diverse types of geospatial data and facilitate interactive analysis. Key components of the system architecture include NASA's WebWorldWind, the Globus toolkit, PostgreSQL, and other custom-built software modules.
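
    One of the analysis capabilities mentioned, dynamic time warping, shown as a minimal textbook sketch (not the platform's implementation); it compares two hypothetical, time-shifted series such as demand curves from two regions.

      import numpy as np

      def dtw_distance(a, b):
          """Classic dynamic-programming DTW distance between two 1-D series."""
          n, m = len(a), len(b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = abs(a[i - 1] - b[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j],      # series a advances
                                       cost[i, j - 1],      # series b advances
                                       cost[i - 1, j - 1])  # both advance
          return cost[n, m]

      # Two hypothetical, time-shifted curves; DTW tolerates the misalignment.
      t = np.linspace(0, 2 * np.pi, 100)
      print(dtw_distance(np.sin(t), np.sin(t - 0.5)))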

  12. Endoscopic Surgery for Symptomatic Unicameral Bone Cyst of the Proximal Femur

    PubMed Central

    Miyamoto, Wataru; Takao, Masato; Yasui, Youichi; Miki, Shinya; Matsushita, Takashi

    2013-01-01

    Recently, surgical treatment of a symptomatic unicameral cyst of the proximal femur has been achieved with less invasive procedures than traditional open curettage with an autologous bone graft. In this article we introduce endoscopic surgery for a symptomatic unicameral cyst of the proximal femur. The presented technique, which includes minimally invasive endoscopic curettage of the cyst and injection of a bone substitute, not only minimizes muscle damage around the femur but also enables sufficient curettage of the fibrous membrane in the cyst wall and the bony septum through direct detailed visualization by an endoscope. Furthermore, sufficient initial strength after curettage can be obtained by injecting calcium phosphate cement as a bone substitute. PMID:24892010

  13. Endoscopic Surgery for Symptomatic Unicameral Bone Cyst of the Proximal Femur.

    PubMed

    Miyamoto, Wataru; Takao, Masato; Yasui, Youichi; Miki, Shinya; Matsushita, Takashi

    2013-11-01

    Recently, surgical treatment of a symptomatic unicameral cyst of the proximal femur has been achieved with less invasive procedures than traditional open curettage with an autologous bone graft. In this article we introduce endoscopic surgery for a symptomatic unicameral cyst of the proximal femur. The presented technique, which includes minimally invasive endoscopic curettage of the cyst and injection of a bone substitute, not only minimizes muscle damage around the femur but also enables sufficient curettage of the fibrous membrane in the cyst wall and the bony septum through direct detailed visualization by an endoscope. Furthermore, sufficient initial strength after curettage can be obtained by injecting calcium phosphate cement as a bone substitute.

  14. Real-time imaging of specific genomic loci in eukaryotic cells using the ANCHOR DNA labelling system.

    PubMed

    Germier, Thomas; Sylvain, Audibert; Silvia, Kocanova; David, Lane; Kerstin, Bystricky

    2018-06-01

    Spatio-temporal organization of the cell nucleus adapts to and regulates genomic processes. Microscopy approaches that enable direct monitoring of specific chromatin sites in single cells and in real time are needed to better understand the dynamics involved. In this chapter, we describe the principle and development of ANCHOR, a novel tool for DNA labelling in eukaryotic cells. Protocols for use of ANCHOR to visualize a single genomic locus in eukaryotic cells are presented. We describe an approach for live cell imaging of a DNA locus during the entire cell cycle in human breast cancer cells. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance.

    PubMed

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-12-01

    Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridmantis, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webmantis and visualmantis, to facilitate the setup of computational experiments via hybridmantis. The visualization tools visualmantis and webmantis enable the user to control simulation properties through a user interface. In the case of webmantis, control via a web browser allows access through mobile devices such as smartphones or tablets. webmantis acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. The output consists of the point response, the pulse-height spectrum, and optical transport statistics generated by hybridmantis. The users can download the output images and statistics through a zip file for future reference. In addition, webmantis provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. The visualization tools visualmantis and webmantis provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying input parameters to receiving visual feedback for the model predictions.
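
    A minimal sketch of what a pulse-height spectrum such as the one produced here looks like, built from a hypothetical array of per-event detected optical photon counts; it is not tied to the hybridmantis output format.

      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(1)

      # Hypothetical per-x-ray-event counts of detected optical photons.
      detected_photons = rng.normal(loc=500, scale=60, size=10_000).clip(min=0)

      # Pulse-height spectrum: histogram of the detected signal per event.
      counts, edges = np.histogram(detected_photons, bins=100)
      plt.step(edges[:-1], counts, where="post")
      plt.xlabel("detected optical photons per event")
      plt.ylabel("number of events")
      plt.title("Pulse-height spectrum (illustrative)")
      plt.show()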

  16. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams

    PubMed Central

    Rouinfar, Amy; Agra, Elise; Larson, Adam M.; Rebello, N. Sanjay; Loschky, Lester C.

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues that draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming an impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions. PMID:25324804

  17. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    PubMed

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues that draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming an impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  18. Transcranial direct current stimulation enhances recovery of stereopsis in adults with amblyopia.

    PubMed

    Spiegel, Daniel P; Li, Jinrong; Hess, Robert F; Byblow, Winston D; Deng, Daming; Yu, Minbin; Thompson, Benjamin

    2013-10-01

    Amblyopia is a neurodevelopmental disorder of vision caused by abnormal visual experience during early childhood that is often considered to be untreatable in adulthood. Recently, it has been shown that a novel dichoptic videogame-based treatment for amblyopia can improve visual function in adult patients, at least in part, by reducing inhibition of inputs from the amblyopic eye to the visual cortex. Non-invasive anodal transcranial direct current stimulation has been shown to reduce the activity of inhibitory cortical interneurons when applied to the primary motor or visual cortex. In this double-blind, sham-controlled cross-over study we tested the hypothesis that anodal transcranial direct current stimulation of the visual cortex would enhance the therapeutic effects of dichoptic videogame-based treatment. A homogeneous group of 16 young adults (mean age 22.1 ± 1.1 years) with amblyopia were studied to compare the effect of dichoptic treatment alone and dichoptic treatment combined with visual cortex direct current stimulation on measures of binocular (stereopsis) and monocular (visual acuity) visual function. The combined treatment led to greater improvements in stereoacuity than dichoptic treatment alone, indicating that direct current stimulation of the visual cortex boosts the efficacy of dichoptic videogame-based treatment. This intervention warrants further evaluation as a novel therapeutic approach for adults with amblyopia.

  19. The wide window of face detection.

    PubMed

    Hershler, Orit; Golan, Tal; Bentin, Shlomo; Hochstein, Shaul

    2010-08-20

    Faces are detected more rapidly than other objects in visual scenes and search arrays, but the cause for this face advantage has been contested. In the present study, we found that under conditions of spatial uncertainty, faces were easier to detect than control targets (dog faces, clocks and cars) even in the absence of surrounding stimuli, making an explanation based only on low-level differences unlikely. This advantage improved with eccentricity in the visual field, enabling face detection in wider visual windows, and pointing to selective sparing of face detection at greater eccentricities. This face advantage might be due to perceptual factors favoring face detection. In addition, the relative face advantage is greater under flanked than non-flanked conditions, suggesting an additional, possibly attention-related benefit enabling face detection in groups of distracters.

  20. Human Connectome Project Informatics: quality control, database services, and data visualization

    PubMed Central

    Marcus, Daniel S.; Harms, Michael P.; Snyder, Abraham Z.; Jenkinson, Mark; Wilson, J Anthony; Glasser, Matthew F.; Barch, Deanna M.; Archie, Kevin A.; Burgess, Gregory C.; Ramaratnam, Mohana; Hodge, Michael; Horton, William; Herrick, Rick; Olsen, Timothy; McKay, Michael; House, Matthew; Hileman, Michael; Reid, Erin; Harwell, John; Coalson, Timothy; Schindler, Jon; Elam, Jennifer S.; Curtiss, Sandra W.; Van Essen, David C.

    2013-01-01

    The Human Connectome Project (HCP) has developed protocols, standard operating and quality control procedures, and a suite of informatics tools to enable high throughput data collection, data sharing, automated data processing and analysis, and data mining and visualization. Quality control procedures include methods to maintain data collection consistency over time, to measure head motion, and to establish quantitative modality-specific overall quality assessments. Database services developed as customizations of the XNAT imaging informatics platform support both internal daily operations and open access data sharing. The Connectome Workbench visualization environment enables user interaction with HCP data and is increasingly integrated with the HCP's database services. Here we describe the current state of these procedures and tools and their application in the ongoing HCP study. PMID:23707591

  1. Inferring the direction of implied motion depends on visual awareness

    PubMed Central

    Faivre, Nathan; Koch, Christof

    2014-01-01

    Visual awareness of an event, object, or scene is, in essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction. PMID:24706951

  2. Inferring the direction of implied motion depends on visual awareness.

    PubMed

    Faivre, Nathan; Koch, Christof

    2014-04-04

    Visual awareness of an event, object, or scene is, in essence, an integrated experience, whereby different visual features composing an object (e.g., orientation, color, shape) appear as a unified percept and are processed as a whole. Here, we tested in human observers whether perceptual integration of static motion cues depends on awareness by measuring the capacity to infer the direction of motion implied by a static visible or invisible image under continuous flash suppression. Using measures of directional adaptation, we found that visible but not invisible implied motion adaptors biased the perception of real motion probes. In a control experiment, we found that invisible adaptors implying motion primed the perception of subsequent probes when they were identical (i.e., repetition priming), but not when they only shared the same direction (i.e., direction priming). Furthermore, using a model of visual processing, we argue that repetition priming effects are likely to arise as early as in the primary visual cortex. We conclude that although invisible images implying motion undergo some form of nonconscious processing, visual awareness is necessary to make inferences about motion direction.

  3. [Individual differences in sense of direction and psychological stress associated with mobility in visually impaired people].

    PubMed

    Matsunaka, Kumiko; Shibata, Yuki; Yamamoto, Toshikazu

    2008-08-01

    Study 1 investigated individual differences in spatial cognition amongst visually impaired students and sighted controls, as well as the extent to which visual status contributes to these individual differences. Fifty-eight visually impaired and 255 sighted university students evaluated their sense of direction via self-ratings. Visual impairment contributed to the factors associated with the use and understanding of maps, confirming that maps are generally unfamiliar to visually impaired people. The relationship between psychological stress associated with mobility and individual differences in sense of direction was investigated in Study 2. A stress checklist was administered to the 51 visually impaired students who participated in Study 1. Psychological stress level was related to the understanding and use of maps, as well as to orientation and renewal, that is, course correction after getting lost. Central visual field deficits were associated with greater mobility-related stress levels than peripheral visual field deficits.

  4. Action Recognition and Movement Direction Discrimination Tasks Are Associated with Different Adaptation Patterns

    PubMed Central

    de la Rosa, Stephan; Ekramnia, Mina; Bülthoff, Heinrich H.

    2016-01-01

    The ability to discriminate between different actions is essential for action recognition and social interactions. Surprisingly previous research has often probed action recognition mechanisms with tasks that did not require participants to discriminate between actions, e.g., left-right direction discrimination tasks. It is not known to what degree visual processes in direction discrimination tasks are also involved in the discrimination of actions, e.g., when telling apart a handshake from a high-five. Here, we examined whether action discrimination is influenced by movement direction and whether direction discrimination depends on the type of action. We used an action adaptation paradigm to target action and direction discrimination specific visual processes. In separate conditions participants visually adapted to forward and backward moving handshake and high-five actions. Participants subsequently categorized either the action or the movement direction of an ambiguous action. The results showed that direction discrimination adaptation effects were modulated by the type of action but action discrimination adaptation effects were unaffected by movement direction. These results suggest that action discrimination and direction categorization rely on partly different visual information. We propose that action discrimination tasks should be considered for the exploration of visual action recognition mechanisms. PMID:26941633

  5. Neural Responses to Visual Food Cues According to Weight Status: A Systematic Review of Functional Magnetic Resonance Imaging Studies

    PubMed Central

    Pursey, Kirrilly M.; Stanwell, Peter; Callister, Robert J.; Brain, Katherine; Collins, Clare E.; Burrows, Tracy L.

    2014-01-01

    Emerging evidence from recent neuroimaging studies suggests that specific food-related behaviors contribute to the development of obesity. The aim of this review was to report the neural responses to visual food cues, as assessed by functional magnetic resonance imaging (fMRI), in humans of differing weight status. Published studies to 2014 were retrieved and included if they used visual food cues, studied humans >18 years old, reported weight status, and included fMRI outcomes. Sixty studies were identified that investigated the neural responses of healthy weight participants (n = 26), healthy weight compared to obese participants (n = 17), and weight-loss interventions (n = 12). High-calorie food images were used in the majority of studies (n = 36), however, image selection justification was only provided in 19 studies. Obese individuals had increased activation of reward-related brain areas including the insula and orbitofrontal cortex in response to visual food cues compared to healthy weight individuals, and this was particularly evident in response to energy dense cues. Additionally, obese individuals were more responsive to food images when satiated. Meta-analysis of changes in neural activation post-weight loss revealed small areas of convergence across studies in brain areas related to emotion, memory, and learning, including the cingulate gyrus, lentiform nucleus, and precuneus. Differential activation patterns to visual food cues were observed between obese, healthy weight, and weight-loss populations. Future studies require standardization of nutrition variables and fMRI outcomes to enable more direct comparisons between studies. PMID:25988110

  6. Neural responses to visual food cues according to weight status: a systematic review of functional magnetic resonance imaging studies.

    PubMed

    Pursey, Kirrilly M; Stanwell, Peter; Callister, Robert J; Brain, Katherine; Collins, Clare E; Burrows, Tracy L

    2014-01-01

    Emerging evidence from recent neuroimaging studies suggests that specific food-related behaviors contribute to the development of obesity. The aim of this review was to report the neural responses to visual food cues, as assessed by functional magnetic resonance imaging (fMRI), in humans of differing weight status. Published studies to 2014 were retrieved and included if they used visual food cues, studied humans >18 years old, reported weight status, and included fMRI outcomes. Sixty studies were identified that investigated the neural responses of healthy weight participants (n = 26), healthy weight compared to obese participants (n = 17), and weight-loss interventions (n = 12). High-calorie food images were used in the majority of studies (n = 36), however, image selection justification was only provided in 19 studies. Obese individuals had increased activation of reward-related brain areas including the insula and orbitofrontal cortex in response to visual food cues compared to healthy weight individuals, and this was particularly evident in response to energy dense cues. Additionally, obese individuals were more responsive to food images when satiated. Meta-analysis of changes in neural activation post-weight loss revealed small areas of convergence across studies in brain areas related to emotion, memory, and learning, including the cingulate gyrus, lentiform nucleus, and precuneus. Differential activation patterns to visual food cues were observed between obese, healthy weight, and weight-loss populations. Future studies require standardization of nutrition variables and fMRI outcomes to enable more direct comparisons between studies.

  7. Y0: An innovative tool for spatial data analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Jeremy C.

    1993-08-01

    This paper describes an advanced analysis and visualization tool, called Y0 (pronounced "Why not?!"), that has been developed to directly support the scientific process for earth and space science research. Y0 aids the scientific research process by enabling the user to formulate algorithms and models within an integrated environment, and then interactively explore the solution space with the aid of appropriate visualizations. Y0 has been designed to provide strong support for both quantitative analysis and rich visualization. The user's algorithm or model is defined in terms of algebraic formulas in cells on worksheets, in a similar fashion to spreadsheet programs. Y0 is specifically designed to provide the data types and rich function set necessary for effective analysis and manipulation of remote sensing data. This includes various types of arrays, geometric objects, and objects for representing geographic coordinate system mappings. Visualization of results is tailored to the needs of remote sensing, with straightforward methods of composing, comparing, and animating imagery and graphical information, with reference to geographical coordinate systems. Y0 is based on advanced object-oriented technology. It is implemented in C++ for use in Unix environments, with a user interface based on the X window system. Y0 has been delivered under contract to Unidata, a group which provides data and software support to atmospheric researchers in universities affiliated with UCAR. This paper will explore the key concepts in Y0, describe its utility for remote sensing analysis and visualization, and will give a specific example of its application to the problem of measuring glacier flow rates from Landsat imagery.
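
    The glacier-flow example at the end amounts to measuring feature displacement between two co-registered images; below is a minimal phase-correlation sketch under that assumption. It is not Y0 code, the sign/shift convention should be checked against real data, and the synthetic patch is made up.

      import numpy as np

      def phase_correlation_shift(img1, img2):
          """Estimate the integer (dy, dx) shift of img2 relative to img1.
          Convention: img2 ~= np.roll(img1, (dy, dx), axis=(0, 1))."""
          f1 = np.fft.fft2(img1)
          f2 = np.fft.fft2(img2)
          cross_power = np.conj(f1) * f2
          cross_power /= np.abs(cross_power) + 1e-12     # keep only the phase
          corr = np.fft.ifft2(cross_power).real
          dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
          # Map peaks in the upper half of the range to negative shifts.
          if dy > img1.shape[0] // 2:
              dy -= img1.shape[0]
          if dx > img1.shape[1] // 2:
              dx -= img1.shape[1]
          return dy, dx

      # Self-check with a synthetic patch and a known shift.
      rng = np.random.default_rng(0)
      patch = rng.random((128, 128))
      shifted = np.roll(patch, (7, -3), axis=(0, 1))
      print(phase_correlation_shift(patch, shifted))   # expected (7, -3)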

  8. Visualizing Simulated Electrical Fields from Electroencephalography and Transcranial Electric Brain Stimulation: A Comparative Evaluation

    PubMed Central

    Eichelbaum, Sebastian; Dannhauer, Moritz; Hlawitschka, Mario; Brooks, Dana; Knösche, Thomas R.; Scheuermann, Gerik

    2014-01-01

    Electrical activity of neuronal populations is a crucial aspect of brain activity. This activity is not measured directly but recorded as electrical potential changes using head surface electrodes (electroencephalogram, EEG). Head surface electrodes can also be deployed to inject electrical currents in order to modulate brain activity (transcranial electric stimulation techniques) for therapeutic and neuroscientific purposes. In electroencephalography and noninvasive electric brain stimulation, electrical fields mediate between electrical signal sources and regions of interest (ROI). These fields can be very complicated in structure, and are influenced in a complex way by the conductivity profile of the human head. Visualization techniques play a central role in grasping the nature of those fields because they allow for an effective conveyance of complex data and enable quick qualitative and quantitative assessments. Volume conduction effects of particular head model parameterizations (e.g., skull thickness and layering), of brain anomalies (e.g., holes in the skull, tumors), of the location and extent of active brain areas (e.g., high concentrations of current densities), and of the region around current-injecting electrodes can be investigated using visualization. Here, we evaluate a number of widely used visualization techniques, based on either the potential distribution or the current flow. In particular, we focus on the extractability of quantitative and qualitative information from the obtained images, their effective integration of anatomical context information, and their interaction. We present illustrative examples from clinically and neuroscientifically relevant cases and discuss the pros and cons of the various visualization techniques. PMID:24821532
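
    A minimal sketch of the two visualization styles compared here (potential distribution vs. current flow), using an idealized current dipole in a homogeneous 2-D conductor rather than a realistic head model; the conductivity, dipole moment, and grid are made up for illustration.

      import numpy as np
      import matplotlib.pyplot as plt

      # Idealized 2-D slab: potential of a current dipole in a homogeneous
      # conductor, V ~ p.r / (2*pi*sigma*|r|^2) in 2-D (illustrative only).
      sigma = 0.33                      # assumed conductivity, S/m
      p = np.array([1.0, 0.0])          # dipole moment along x (arbitrary units)
      x, y = np.meshgrid(np.linspace(-0.1, 0.1, 200), np.linspace(-0.1, 0.1, 200))
      r2 = x**2 + y**2 + 1e-6           # avoid the singularity at the origin
      V = (p[0] * x + p[1] * y) / (2 * np.pi * sigma * r2)

      # Current density J = -sigma * grad(V), evaluated numerically.
      dVdy, dVdx = np.gradient(V, y[:, 0], x[0, :])
      Jx, Jy = -sigma * dVdx, -sigma * dVdy

      plt.contourf(x, y, V, levels=41, cmap="RdBu_r")      # potential distribution
      s = slice(None, None, 10)
      plt.quiver(x[s, s], y[s, s], Jx[s, s], Jy[s, s])      # current flow
      plt.gca().set_aspect("equal")
      plt.title("Dipole potential and current flow (idealized sketch)")
      plt.show()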

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Loring, Burlen; Vay, Jean -Luc

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications, from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  10. The Development of the Text Reception Threshold Test: A Visual Analogue of the Speech Reception Threshold Test

    ERIC Educational Resources Information Center

    Zekveld, Adriana A.; George, Erwin L. J.; Kramer, Sophia E.; Goverts, S. Theo; Houtgast, Tammo

    2007-01-01

    Purpose: In this study, the authors aimed to develop a visual analogue of the widely used Speech Reception Threshold (SRT; R. Plomp & A. M. Mimpen, 1979b) test. The Text Reception Threshold (TRT) test, in which visually presented sentences are masked by a bar pattern, enables the quantification of modality-aspecific variance in speech-in-noise…

  11. Running VisIt Software on the Peregrine System | High-Performance Computing

    Science.gov Websites

    kilobyte range. VisIt features a robust remote visualization capability. VisIt can be started on a local machine and used to visualize data on a remote compute cluster. The remote machine must be able to send... the VisIt module must be loaded as part of this process. To enable remote visualization, the 'module load...

  12. Anisotropies in the perceived spatial displacement of motion-defined contours: opposite biases in the upper-left and lower-right visual quadrants.

    PubMed

    Fan, Zhao; Harris, John

    2010-10-12

    In a recent study (Fan, Z., & Harris, J. (2008). Perceived spatial displacement of motion-defined contours in peripheral vision. Vision Research, 48(28), 2793-2804), we demonstrated that virtual contours defined by two regions of dots moving in opposite directions were displaced perceptually in the direction of motion of the dots in the more eccentric region when the contours were viewed in the right visual field. Here, we show that the magnitude and/or direction of these displacements varies in different quadrants of the visual field. When contours were presented in the lower visual field, the direction of perceived contour displacement was consistent with that when both contours were presented in the right visual field. However, this illusory motion-induced spatial displacement disappeared when both contours were presented in the upper visual field. Also, perceived contour displacement in the direction of the more eccentric dots was larger in the right than in the left visual field, perhaps because of a hemispheric asymmetry in attentional allocation. Quadrant-based analyses suggest that the pattern of results arises from opposite directions of perceived contour displacement in the upper-left and lower-right visual quadrants, which depend on the relative strengths of two effects: a greater sensitivity to centripetal motion, and an asymmetry in the allocation of spatial attention. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Exogenous Attention Enables Perceptual Learning

    PubMed Central

    Szpiro, Sarit F. A.; Carrasco, Marisa

    2015-01-01

    Practice can improve visual perception, and these improvements are considered to be a form of brain plasticity. Training-induced learning is time-consuming and requires hundreds of trials across multiple days. The process of learning acquisition is understudied. Can learning acquisition be potentiated by manipulating visual attentional cues? We developed a protocol in which we used task-irrelevant cues for between-groups manipulation of attention during training. We found that training with exogenous attention can enable the acquisition of learning. Remarkably, this learning was maintained even when observers were subsequently tested under neutral conditions, which indicates that a change in perception was involved. Our study is the first to isolate the effects of exogenous attention and to demonstrate its efficacy to enable learning. We propose that exogenous attention boosts perceptual learning by enhancing stimulus encoding. PMID:26502745

  14. Information visualization: Beyond traditional engineering

    NASA Technical Reports Server (NTRS)

    Thomas, James J.

    1995-01-01

    This presentation addresses a different aspect of the human-computer interface: specifically, the human-information interface. This interface will be dominated by an emerging technology called Information Visualization (IV). IV goes beyond the traditional views of computer graphics and CAD and enables new approaches for engineering. IV must visualize text, documents, sound, images, and video in such a way that the human can rapidly interact with and understand the content structure of information entities. IV is the interactive visual interface between humans and their information resources.

  15. A parallel coordinates style interface for exploratory volume visualization.

    PubMed

    Tory, Melanie; Potts, Simeon; Möller, Torsten

    2005-01-01

    We present a user interface, based on parallel coordinates, that facilitates exploration of volume data. By explicitly representing the visualization parameter space, the interface provides an overview of rendering options and enables users to easily explore different parameters. Rendered images are stored in an integrated history bar that facilitates backtracking to previous visualization options. Initial usability testing showed clear agreement between users and experts of various backgrounds (usability, graphic design, volume visualization, and medical physics) that the proposed user interface is a valuable data exploration tool.
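
    A minimal sketch of the parallel-coordinates idea underlying the interface, using pandas' built-in plotting rather than the authors' volume-rendering tool; the parameter names and values are hypothetical.

      import pandas as pd
      import matplotlib.pyplot as plt
      from pandas.plotting import parallel_coordinates

      # Hypothetical table of volume-rendering parameter settings, one row per
      # rendered image, labelled by which preset produced it.
      df = pd.DataFrame({
          "opacity":   [0.2, 0.4, 0.8, 0.3, 0.9, 0.6],
          "iso_value": [80, 120, 200, 90, 210, 150],
          "zoom":      [1.0, 1.2, 2.0, 1.1, 2.2, 1.5],
          "preset":    ["soft", "soft", "bone", "soft", "bone", "bone"],
      })

      # Each rendering becomes a polyline across the parameter axes.
      parallel_coordinates(df, class_column="preset", colormap="tab10")
      plt.title("Parameter space overview (illustrative)")
      plt.show()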

  16. Merging and Visualization of Archived Oceanographic Acoustic, Optical, and Sensor Data to Support Improved Access and Interpretation

    NASA Astrophysics Data System (ADS)

    Malik, M. A.; Cantwell, K. L.; Reser, B.; Gray, L. M.

    2016-02-01

    Marine researchers and managers routinely rely on interdisciplinary data sets collected using hull-mounted sonars, towed sensors, or submersible vehicles. These data sets can be broadly categorized into acoustic remote sensing, imagery-based observations, water property measurements, and physical samples. The resulting raw data sets are overwhelmingly large and complex, and often require specialized software and training to process. To address these challenges, NOAA's Office of Ocean Exploration and Research (OER) is developing tools to improve the discoverability of raw data sets and the integration of quality-controlled processed data in order to facilitate re-use of archived oceanographic data. The majority of recently collected OER raw oceanographic data can be retrieved from national data archives (e.g., NCEI and the NOAA Central Library). Merging of disparate data sets by scientists with diverse expertise, however, remains problematic. Initial efforts at OER have focused on merging geospatial acoustic remote sensing data with imagery and water property measurements that typically lack direct geo-referencing. OER has developed 'smart' ship and submersible tracks that provide a synopsis of the geospatial coverage of various data sets. Tools under development enable scientists to quickly assess the relevance of archived OER data to their respective research or management interests, and enable quick access to the desired raw and processed data sets. Pre-processing and visualization that combine various data sets also offer benefits by streamlining data quality assurance and quality control efforts.

  17. N-Way FRET Microscopy of Multiple Protein-Protein Interactions in Live Cells

    PubMed Central

    Hoppe, Adam D.; Scott, Brandon L.; Welliver, Timothy P.; Straight, Samuel W.; Swanson, Joel A.

    2013-01-01

    Fluorescence Resonance Energy Transfer (FRET) microscopy has emerged as a powerful tool to visualize nanoscale protein-protein interactions while capturing their microscale organization and millisecond dynamics. Recently, FRET microscopy was extended to imaging of multiple donor-acceptor pairs, thereby enabling visualization of multiple biochemical events within a single living cell. These methods require numerous equations that must be defined on a case-by-case basis. Here, we present a universal multispectral microscopy method (N-Way FRET) to enable quantitative imaging for any number of interacting and non-interacting FRET pairs. This approach redefines linear unmixing to incorporate the excitation and emission couplings created by FRET, which cannot be accounted for in conventional linear unmixing. Experiments on a three-fluorophore system using blue, yellow and red fluorescent proteins validate the method in living cells. In addition, we propose a simple linear algebra scheme for error propagation from input data to estimate the uncertainty in the computed FRET images. We demonstrate the strength of this approach by monitoring the oligomerization of three FP-tagged HIV Gag proteins whose tight association in the viral capsid is readily observed. Replacement of one FP-Gag molecule with a lipid raft-targeted FP allowed direct observation of Gag oligomerization with no association between FP-Gag and raft-targeted FP. The N-Way FRET method provides a new toolbox for capturing multiple molecular processes with high spatial and temporal resolution in living cells. PMID:23762252
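
    A minimal sketch of spectral linear unmixing by least squares, the generic operation that N-Way FRET extends; the mixing matrix and simulated pixel below are hypothetical and do not include the FRET excitation/emission coupling terms the paper adds.

      import numpy as np

      # Hypothetical reference spectra (columns) for three fluorophores, sampled
      # in five detection channels: mixing matrix M (channels x fluorophores).
      M = np.array([
          [0.9, 0.1, 0.0],
          [0.6, 0.3, 0.1],
          [0.2, 0.7, 0.2],
          [0.1, 0.4, 0.6],
          [0.0, 0.1, 0.8],
      ])

      # Simulated measured pixel: known abundances plus a little noise.
      true_abundance = np.array([2.0, 1.0, 0.5])
      measured = M @ true_abundance + np.random.default_rng(0).normal(0, 0.01, 5)

      # Conventional linear unmixing: solve M a ~= measured in the least-squares sense.
      abundance, *_ = np.linalg.lstsq(M, measured, rcond=None)
      print(abundance)   # should be close to [2.0, 1.0, 0.5]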

  18. Visual direction finding by fishes

    NASA Technical Reports Server (NTRS)

    Waterman, T. H.

    1972-01-01

    The use of visual orientation, in the absence of landmarks, for underwater direction finding by fishes is reviewed. Celestial directional cues, observed directly near the water surface or indirectly at an asymptotic depth, are suggested as possible orientation aids.

  19. Dynamix: dynamic visualization by automatic selection of informative tracks from hundreds of genomic datasets.

    PubMed

    Monfort, Matthias; Furlong, Eileen E M; Girardot, Charles

    2017-07-15

    Visualization of genomic data is fundamental for gaining insights into genome function. Yet, co-visualization of a large number of datasets remains a challenge in all popular genome browsers and the development of new visualization methods is needed to improve the usability and user experience of genome browsers. We present Dynamix, a JBrowse plugin that enables the parallel inspection of hundreds of genomic datasets. Dynamix takes advantage of a priori knowledge to automatically display data tracks with signal within a genomic region of interest. As the user navigates through the genome, Dynamix automatically updates data tracks and limits all manual operations otherwise needed to adjust the data visible on screen. Dynamix also introduces a new carousel view that optimizes screen utilization by enabling users to independently scroll through groups of tracks. Dynamix is hosted at http://furlonglab.embl.de/Dynamix . charles.girardot@embl.de. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
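
    A minimal sketch of the core idea (automatically showing only tracks with signal in the region in view), assuming per-track coverage has already been binned along the genome; the track names, bin size, and threshold are made up, and this is not the Dynamix/JBrowse code.

      import numpy as np

      # Hypothetical binned coverage, one array per data track (same bin size).
      tracks = {
          "ChIP_rep1": np.random.default_rng(0).poisson(0.2, 10_000),
          "ChIP_rep2": np.random.default_rng(1).poisson(0.2, 10_000),
          "RNA_stage5": np.random.default_rng(2).poisson(5.0, 10_000),
      }

      def tracks_with_signal(tracks, start_bin, end_bin, min_mean=1.0):
          """Return the names of tracks worth displaying for the visible region."""
          visible = []
          for name, coverage in tracks.items():
              if coverage[start_bin:end_bin].mean() >= min_mean:
                  visible.append(name)
          return visible

      # As the user navigates, re-evaluate which tracks to draw.
      print(tracks_with_signal(tracks, 4_000, 4_200))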

  20. Typograph: Multiscale Spatial Exploration of Text Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Endert, Alexander; Burtner, Edwin R.; Cramer, Nicholas O.

    2013-10-06

    Visualizing large document collections using a spatial layout of terms can enable quick overviews of information. These visual metaphors (e.g., word clouds, tag clouds, etc.) traditionally show a series of terms organized by space-filling algorithms. However, often lacking in these views is the ability to interactively explore the information to gain more detail, and the location and rendering of the terms are often not based on mathematical models that maintain relative distances from other information based on similarity metrics. In this paper, we present Typograph, a multi-scale spatial exploration visualization for large document collections. Building on term-based visualization methods, Typograph enables multiple levels of detail (terms, phrases, snippets, and full documents) within a single spatialization. Further, the information is placed based on its relative similarity to other information to create the "near = similar" geographic metaphor. This paper discusses the design principles and functionality of Typograph and presents a use case analyzing Wikipedia to demonstrate usage.
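
    A minimal sketch of the "near = similar" placement idea using TF-IDF term vectors and classical MDS from scikit-learn; this illustrates the metaphor, not Typograph's layout algorithm, and the toy corpus is made up.

      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.manifold import MDS
      from sklearn.metrics import pairwise_distances

      docs = [
          "solar power grid energy storage",
          "wind power turbine energy grid",
          "river water reservoir irrigation",
          "groundwater aquifer water supply",
      ]

      # Represent each term by the documents it appears in, then embed terms so
      # that co-occurring (similar) terms land near each other on screen.
      tfidf = TfidfVectorizer().fit(docs)
      term_vectors = tfidf.transform(docs).T.toarray()          # terms x docs
      dissim = pairwise_distances(term_vectors, metric="cosine")
      coords = MDS(n_components=2, dissimilarity="precomputed",
                   random_state=0).fit_transform(dissim)

      for term, (x, y) in zip(tfidf.get_feature_names_out(), coords):
          print(f"{term:12s} {x:6.2f} {y:6.2f}")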

  1. Longitudinally polarized shear wave optical coherence elastography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Miao, Yusi; Zhu, Jiang; Qi, Li; Qu, Yueqiao; He, Youmin; Gao, Yiwei; Chen, Zhongping

    2017-02-01

    Shear wave measurement enables quantitative assessment of tissue viscoelasticity. In previous studies, a transverse shear wave was measured using optical coherence elastography (OCE), which gives poor resolution along the force direction because the shear wave propagates perpendicular to the applied force. In this study, for the first time to our knowledge, we introduce an OCE method to detect a longitudinally polarized shear wave that propagates along the force direction. The direction of vibration induced by a piezo transducer (PZT) is parallel to the direction of wave propagation, which is perpendicular to the OCT beam. A Doppler variance method is used to visualize the transverse displacement. Both homogeneous phantoms and a side-by-side two-layer phantom were measured. The elastic moduli from mechanical tests closely matched the values measured by the OCE system. Furthermore, we developed 3D computational models using finite element analysis to confirm the shear wave propagation in the longitudinal direction. The simulation shows that a longitudinally polarized shear wave is present as a plane wave in the near field of a planar source due to diffraction effects. This imaging technique provides a novel method for the assessment of elastic properties along the force direction, which can be especially useful for imaging layered tissue.
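
    A minimal worked example of the standard relationship used in shear-wave elastography to go from wave speed to stiffness (shear modulus mu = rho * c^2, and Young's modulus E ~ 3*mu for nearly incompressible soft tissue); the numbers are illustrative, not values from this study.

      # Convert a measured shear-wave speed to tissue stiffness estimates.
      rho = 1000.0        # assumed tissue density, kg/m^3
      c_shear = 2.5       # example shear-wave speed, m/s

      mu = rho * c_shear**2          # shear modulus, Pa
      E = 3.0 * mu                   # Young's modulus, nearly incompressible tissue

      print(f"shear modulus ~ {mu/1e3:.1f} kPa, Young's modulus ~ {E/1e3:.1f} kPa")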

  2. Augmenting distractor filtering via transcranial magnetic stimulation of the lateral occipital cortex.

    PubMed

    Eštočinová, Jana; Lo Gerfo, Emanuele; Della Libera, Chiara; Chelazzi, Leonardo; Santandrea, Elisa

    2016-11-01

    Visual selective attention (VSA) optimizes perception and behavioral control by enabling efficient selection of relevant information and filtering of distractors. While focusing resources on task-relevant information helps counteract distraction, dedicated filtering mechanisms have recently been demonstrated, allowing neural systems to implement suitable policies for the suppression of potential interference. Limited evidence is presently available concerning the neural underpinnings of these mechanisms, and whether neural circuitry within the visual cortex might play a causal role in their instantiation, a possibility that we directly tested here. In two related experiments, transcranial magnetic stimulation (TMS) was applied over the lateral occipital cortex of healthy humans at different times during the execution of a behavioral task which entailed varying levels of distractor interference and need for attentional engagement. While earlier TMS boosted target selection, stimulation within a restricted time epoch close to (and in the course of) stimulus presentation engendered selective enhancement of distractor suppression, by affecting the ongoing, reactive instantiation of attentional filtering mechanisms required by specific task conditions. The results attest to a causal role of mid-tier ventral visual areas in distractor filtering and offer insights into the mechanisms through which TMS may have affected ongoing neural activity in the stimulated tissue. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Alpha shape theory for 3D visualization and volumetric measurement of brain tumor progression using magnetic resonance images.

    PubMed

    Hamoud Al-Tamimi, Mohammed Sabbih; Sulong, Ghazali; Shuaib, Ibrahim Lutfi

    2015-07-01

    Resection of brain tumors is a delicate task in surgery due to its direct influence on the patients' survival rate. Determining the extent of tumor resection vis-à-vis volume and dimensions in pre- and post-operative Magnetic Resonance Images (MRI) requires accurate estimation and comparison. The active contour segmentation technique is used to segment brain tumors on pre-operative MR images using self-developed software. Tumor volume is acquired from its contours via alpha shape theory. A graphical user interface is developed for rendering, visualizing and estimating the volume of a brain tumor. The Internet Brain Segmentation Repository (IBSR) dataset is employed to analyze and determine the repeatability and reproducibility of tumor volume. Accuracy of the method is validated by comparing the volume estimated using the proposed method with the gold standard. Segmentation by the active contour technique is found to be capable of detecting brain tumor boundaries. Furthermore, the volume description and visualization enable an interactive examination of the tumor tissue and its surroundings. Our results demonstrate that alpha shape theory is superior to other existing standard methods for precise volumetric measurement of the tumor. Copyright © 2015 Elsevier Inc. All rights reserved.
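
    A minimal sketch of estimating a volume from segmented contour points; for simplicity it uses a convex hull (scipy) as a stand-in for the alpha-shape step described in the paper, which matters when the lesion surface is non-convex. The point cloud here is synthetic.

      import numpy as np
      from scipy.spatial import ConvexHull

      rng = np.random.default_rng(0)

      # Synthetic stand-in for stacked contour points of a segmented lesion (mm).
      points = rng.normal(size=(2000, 3)) * [12.0, 10.0, 8.0]

      # Convex-hull volume as a simplified surrogate for an alpha-shape volume.
      hull = ConvexHull(points)
      volume_mm3 = hull.volume
      print(f"estimated volume: {volume_mm3 / 1000:.1f} cm^3")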

  4. A Structure-Based Distance Metric for High-Dimensional Space Exploration with Multi-Dimensional Scaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Hyun Jung; McDonnell, Kevin T.; Zelenyuk, Alla

    2014-03-01

    Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our MDS plots also exhibit similar visual relationships as the method of parallel coordinates which is often used alongside to visualize the high-dimensional data in raw form. We then cast our metric into a bi-scale framework which distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
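
    A minimal sketch of one way to realize a "structure" distance between parallel-coordinates polylines: describe each point by the shape of its profile across the dimension axes (after per-axis normalization) rather than by its raw coordinates, then feed the precomputed dissimilarities to MDS. This is an illustration of the idea, not the authors' metric.

      import numpy as np
      from sklearn.manifold import MDS

      rng = np.random.default_rng(0)

      # Toy high-dimensional data: two clusters whose polyline "shapes" differ.
      X = np.vstack([
          rng.normal([1, 5, 1, 5, 1], 0.3, size=(30, 5)),   # zig-zag profile
          rng.normal([5, 1, 5, 1, 5], 0.3, size=(30, 5)),   # opposite zig-zag
      ])

      # Normalize each axis, then describe each point by the differences between
      # consecutive axes, i.e. the path of its polyline across the axes.
      Z = (X - X.mean(axis=0)) / X.std(axis=0)
      profiles = np.diff(Z, axis=1)

      # Structural dissimilarity = Euclidean distance between profile shapes.
      D = np.linalg.norm(profiles[:, None, :] - profiles[None, :, :], axis=-1)

      coords = MDS(n_components=2, dissimilarity="precomputed",
                   random_state=0).fit_transform(D)
      print(coords[:3], coords[-3:], sep="\n")   # the two clusters separate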

  5. Visualizing landscape hydrology as a means of education - The water cycle in a box

    NASA Astrophysics Data System (ADS)

    Lehr, Christian; Rauneker, Philipp; Fahle, Marcus; Hohenbrink, Tobias; Böttcher, Steven; Natkhin, Marco; Thomas, Björn; Dannowski, Ralf; Schwien, Bernd; Lischeid, Gunnar

    2016-04-01

    We used an aquarium to construct a physical model of the water cycle. The model can be used to visualize the movement of the water through the landscape from precipitation and infiltration via surface and subsurface flow to discharge into the sea. The model consists of two aquifers that are divided by a loamy aquitard. The 'geological' setting enables us to establish confined groundwater conditions and to demonstrate the functioning of artesian wells. Furthermore, small experiments with colored water as tracer can be performed to identify flow paths below the ground, simulate water supply problems like pollution of drinking water wells from inflowing contaminated groundwater, or demonstrate changes in subsurface flow direction due to changes in the predominant pressure gradients. Hydrological basics such as the connectivity of streams, lakes and the surrounding groundwater or the dependence of groundwater flow velocity on different substrates can be directly visualized. We used the model as an instructive tool in education and for public relations. We presented the model to different audiences, from primary school pupils to laymen, students of hydrology up to university professors. The model was presented to the scientific community as part of the "Face of the Earth" exhibition at the EGU General Assembly 2014. Independent of the antecedent knowledge of the audience, the predominant reactions were very positive. The model often acted as an icebreaker to get a conversation on hydrological topics started. Because of the great interest, we prepared video material and photo documentation of 1) the construction of the model and 2) the visualization of steady and dynamic hydrological situations. The videos will be published soon under a Creative Commons license and the collected material will be made accessible online. Accompanying documents will address professionals in hydrology as well as non-experts. In the PICO session, we will present details about the construction of the model and its main features. Further, short videos of specific processes and experiments will be shown.

  6. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture

    PubMed Central

    Trivedi, Chintan A.; Bollmann, Johann H.

    2013-01-01

    Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback. PMID:23675322

  7. Unmanned Aircraft Systems Traffic Management (UTM) Safely Enabling UAS Operations in Low-Altitude Airspace

    NASA Technical Reports Server (NTRS)

    Kopardekar, Parimal H.

    2017-01-01

    Conduct research, development and testing to identify airspace operations requirements that enable large-scale visual and beyond-visual-line-of-sight UAS operations in low-altitude airspace, using a build-a-little-test-a-little strategy that progresses from remote areas to urban areas. Low density: no traffic management required, but an understanding of airspace constraints. Cooperative traffic management: understanding of airspace constraints and other operations. Manned and unmanned traffic management: scalable and heterogeneous operations. The UTM construct is consistent with the FAA's risk-based strategy. The UTM research platform is used for simulations and tests. UTM offers a path towards scalability.

  8. Visual performance for trip hazard detection when using incandescent and led miner cap lamps.

    PubMed

    Sammarco, John J; Gallagher, Sean; Reyes, Miguel

    2010-04-01

    Accident data for 2003-2007 indicate that slips, trips, and falls (STFs) are the second leading accident class (17.8%, n=2,441) of lost-time injuries in underground mining. Proper lighting plays a critical role in enabling miners to detect STF hazards in this environment. Often, the only lighting available to the miner is from a cap lamp worn on the miner's helmet. The focus of this research was to determine if the spectral content of light from light-emitting diode (LED) cap lamps enabled visual performance improvements for the detection of tripping hazards as compared to incandescent cap lamps that are traditionally used in underground mining. A secondary objective was to determine the effects of aging on visual performance. The visual performance of 30 subjects was quantified by measuring each subject's speed and accuracy in detecting objects positioned on the floor both in the near field, at 1.83 meters, and far field, at 3.66 meters. Near field objects were positioned at 0 degrees and +/-20 degrees off axis, while far field objects were positioned at 0 degrees and +/-10 degrees off axis. Three age groups were designated: group A consisted of subjects 18 to 25 years old, group B consisted of subjects 40 to 50 years old, and group C consisted of subjects 51 years and older. Results of the visual performance comparison for a commercially available LED, a prototype LED, and an incandescent cap lamp indicate that the location of objects on the floor, the type of cap lamp used, and subject age all had significant influences on the time required to identify potential trip hazards. The LED-based cap lamps enabled detection times that were an average of 0.96 seconds faster compared to the incandescent cap lamp. Use of the LED cap lamps resulted in average detection times that were about 13.6% faster than those recorded for the incandescent cap lamp. The visual performance differences between the commercially available LED and prototype LED cap lamp were not statistically significant. It can be inferred from these data that the spectral content from LED-based cap lamps could enable significant visual performance improvements for miners in the detection of trip hazards. Published by Elsevier Ltd.

  9. Causal evidence for retina dependent and independent visual motion computations in mouse cortex

    PubMed Central

    Hillier, Daniel; Fiscella, Michele; Drinnenberg, Antonia; Trenholm, Stuart; Rompani, Santiago B.; Raics, Zoltan; Katona, Gergely; Juettner, Josephine; Hierlemann, Andreas; Rozsa, Balazs; Roska, Botond

    2017-01-01

    How neuronal computations in the sensory periphery contribute to computations in the cortex is not well understood. We examined this question in the context of visual-motion processing in the retina and primary visual cortex (V1) of mice. We disrupted retinal direction selectivity – either exclusively along the horizontal axis using FRMD7 mutants or along all directions by ablating starburst amacrine cells – and monitored neuronal activity in layer 2/3 of V1 during stimulation with visual motion. In control mice, we found an overrepresentation of cortical cells preferring posterior visual motion, the dominant motion direction an animal experiences when it moves forward. In mice with disrupted retinal direction selectivity, the overrepresentation of posterior-motion-preferring cortical cells disappeared, and their response at higher stimulus speeds was reduced. This work reveals the existence of two functionally distinct, sensory-periphery-dependent and -independent computations of visual motion in the cortex. PMID:28530661

  10. Effectiveness of transcranial direct current stimulation and visual illusion on neuropathic pain in spinal cord injury

    PubMed Central

    Kumru, Hatice; Pelayo, Raul; Vidal, Joan; Tormos, Josep Maria; Fregni, Felipe; Navarro, Xavier; Pascual-Leone, Alvaro

    2010-01-01

    The aim of this study was to evaluate the analgesic effect of transcranial direct current stimulation of the motor cortex and techniques of visual illusion, applied isolated or combined, in patients with neuropathic pain following spinal cord injury. In a sham controlled, double-blind, parallel group design, 39 patients were randomized into four groups receiving transcranial direct current stimulation with walking visual illusion or with control illusion and sham stimulation with visual illusion or with control illusion. For transcranial direct current stimulation, the anode was placed over the primary motor cortex. Each patient received ten treatment sessions during two consecutive weeks. Clinical assessment was performed before, after the last day of treatment, after 2 and 4 weeks follow-up and after 12 weeks. Clinical assessment included overall pain intensity perception, Neuropathic Pain Symptom Inventory and Brief Pain Inventory. The combination of transcranial direct current stimulation and visual illusion reduced the intensity of neuropathic pain significantly more than any of the single interventions. Patients receiving transcranial direct current stimulation and visual illusion experienced a significant improvement in all pain subtypes, while patients in the transcranial direct current stimulation group showed improvement in continuous and paroxysmal pain, and those in the visual illusion group improved only in continuous pain and dysaesthesias. At 12 weeks after treatment, the combined treatment group still presented significant improvement on the overall pain intensity perception, whereas no improvements were reported in the other three groups. Our results demonstrate that transcranial direct current stimulation and visual illusion can be effective in the management of neuropathic pain following spinal cord injury, with minimal side effects and with good tolerability. PMID:20685806

  11. Dynamic single photon emission computed tomography—basic principles and cardiac applications

    PubMed Central

    Gullberg, Grant T; Reutter, Bryan W; Sitek, Arkadiusz; Maltz, Jonathan S; Budinger, Thomas F

    2011-01-01

    The very nature of nuclear medicine, the visual representation of injected radiopharmaceuticals, implies imaging of dynamic processes such as the uptake and wash-out of radiotracers from body organs. For years, nuclear medicine has been touted as the modality of choice for evaluating function in health and disease. This evaluation is greatly enhanced using single photon emission computed tomography (SPECT), which permits three-dimensional (3D) visualization of tracer distributions in the body. However, to fully realize the potential of the technique requires the imaging of in vivo dynamic processes of flow and metabolism. Tissue motion and deformation must also be addressed. Absolute quantification of these dynamic processes in the body has the potential to improve diagnosis. This paper presents a review of advancements toward the realization of the potential of dynamic SPECT imaging and a brief history of the development of the instrumentation. A major portion of the paper is devoted to the review of special data processing methods that have been developed for extracting kinetics from dynamic cardiac SPECT data acquired using rotating detector heads that move as radiopharmaceuticals exchange between biological compartments. Recent developments in multi-resolution spatiotemporal methods enable one to estimate kinetic parameters of compartment models of dynamic processes using data acquired from a single camera head with slow gantry rotation. The estimation of kinetic parameters directly from projection measurements improves bias and variance over the conventional method of first reconstructing 3D dynamic images, generating time–activity curves from selected regions of interest and then estimating the kinetic parameters from the generated time–activity curves. Although the potential applications of SPECT for imaging dynamic processes have not been fully realized in the clinic, it is hoped that this review illuminates the potential of SPECT for dynamic imaging, especially in light of new developments that enable measurement of dynamic processes directly from projection measurements. PMID:20858925
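
    As a minimal sketch of the conventional, indirect route that the review contrasts with direct estimation from projections, the code below fits a one-tissue-compartment model to a region-of-interest time-activity curve; times, tissue_curve, and blood_curve are hypothetical, uniformly sampled inputs, not data from the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    def one_compartment(times, K1, k2, blood_curve):
        # C_tissue(t) = K1 * [blood input convolved with exp(-k2 t)]; uniform time steps assumed
        dt = times[1] - times[0]
        kernel = np.exp(-k2 * times)
        return K1 * np.convolve(blood_curve, kernel)[: len(times)] * dt

    def fit_kinetics(times, tissue_curve, blood_curve):
        """Estimate uptake (K1) and washout (k2) rate constants from a time-activity curve."""
        model = lambda t, K1, k2: one_compartment(t, K1, k2, blood_curve)
        (K1, k2), _ = curve_fit(model, times, tissue_curve, p0=[0.5, 0.1], bounds=(0.0, np.inf))
        return K1, k2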

  12. Direct Imaging of ER Calcium with Targeted-Esterase Induced Dye Loading (TED)

    PubMed Central

    Samtleben, Samira; Jaepel, Juliane; Fecher, Caroline; Andreska, Thomas; Rehberg, Markus; Blum, Robert

    2013-01-01

    Visualization of calcium dynamics is important to understand the role of calcium in cell physiology. To examine calcium dynamics, synthetic fluorescent Ca2+ indicators have become popular. Here we demonstrate TED (= targeted-esterase induced dye loading), a method to improve the release of Ca2+ indicator dyes in the ER lumen of different cell types. To date, TED was used in cell lines, glial cells, and neurons in vitro. TED is based on efficient, recombinant targeting of a high carboxylesterase activity to the ER lumen using vector-constructs that express Carboxylesterases (CES). The latest TED vectors contain a core element of CES2 fused to a red fluorescent protein, thus enabling simultaneous two-color imaging. The dynamics of free calcium in the ER are imaged in one color, while the corresponding ER structure appears in red. At the beginning of the procedure, cells are transduced with a lentivirus. Subsequently, the infected cells are seeded on coverslips to finally enable live cell imaging. Then, living cells are incubated with the acetoxymethyl ester (AM-ester) form of low-affinity Ca2+ indicators, for instance Fluo5N-AM, Mag-Fluo4-AM, or Mag-Fura2-AM. The esterase activity in the ER cleaves off hydrophobic side chains from the AM form of the Ca2+ indicator and a hydrophilic fluorescent dye/Ca2+ complex is formed and trapped in the ER lumen. After dye loading, the cells are analyzed with an inverted confocal laser scanning microscope. Cells are continuously perfused with Ringer-like solutions and the ER calcium dynamics are directly visualized by time-lapse imaging. Calcium release from the ER is identified by a decrease in fluorescence intensity in regions of interest, whereas the refilling of the ER calcium store produces an increase in fluorescence intensity. Finally, the change in fluorescence intensity over time is determined by calculation of ΔF/F0. PMID:23685703
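
    The ΔF/F0 quantity mentioned at the end of the protocol can be computed as in the short sketch below; the choice of baseline frames is a hypothetical example, not part of the published protocol.

    import numpy as np

    def delta_f_over_f0(trace, baseline_frames=20):
        """trace: 1D array of mean ROI fluorescence per frame; returns ΔF/F0 per frame."""
        f0 = trace[:baseline_frames].mean()  # baseline fluorescence before stimulation
        return (trace - f0) / f0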

  13. Direct imaging of ER calcium with targeted-esterase induced dye loading (TED).

    PubMed

    Samtleben, Samira; Jaepel, Juliane; Fecher, Caroline; Andreska, Thomas; Rehberg, Markus; Blum, Robert

    2013-05-07

    Visualization of calcium dynamics is important to understand the role of calcium in cell physiology. To examine calcium dynamics, synthetic fluorescent Ca(2+) indicators have become popular. Here we demonstrate TED (= targeted-esterase induced dye loading), a method to improve the release of Ca(2+) indicator dyes in the ER lumen of different cell types. To date, TED was used in cell lines, glial cells, and neurons in vitro. TED is based on efficient, recombinant targeting of a high carboxylesterase activity to the ER lumen using vector-constructs that express Carboxylesterases (CES). The latest TED vectors contain a core element of CES2 fused to a red fluorescent protein, thus enabling simultaneous two-color imaging. The dynamics of free calcium in the ER are imaged in one color, while the corresponding ER structure appears in red. At the beginning of the procedure, cells are transduced with a lentivirus. Subsequently, the infected cells are seeded on coverslips to finally enable live cell imaging. Then, living cells are incubated with the acetoxymethyl ester (AM-ester) form of low-affinity Ca(2+) indicators, for instance Fluo5N-AM, Mag-Fluo4-AM, or Mag-Fura2-AM. The esterase activity in the ER cleaves off hydrophobic side chains from the AM form of the Ca(2+) indicator and a hydrophilic fluorescent dye/Ca(2+) complex is formed and trapped in the ER lumen. After dye loading, the cells are analyzed with an inverted confocal laser scanning microscope. Cells are continuously perfused with Ringer-like solutions and the ER calcium dynamics are directly visualized by time-lapse imaging. Calcium release from the ER is identified by a decrease in fluorescence intensity in regions of interest, whereas the refilling of the ER calcium store produces an increase in fluorescence intensity. Finally, the change in fluorescence intensity over time is determined by calculation of ΔF/F0.

  14. TOPICAL REVIEW: Dynamic single photon emission computed tomography—basic principles and cardiac applications

    NASA Astrophysics Data System (ADS)

    Gullberg, Grant T.; Reutter, Bryan W.; Sitek, Arkadiusz; Maltz, Jonathan S.; Budinger, Thomas F.

    2010-10-01

    The very nature of nuclear medicine, the visual representation of injected radiopharmaceuticals, implies imaging of dynamic processes such as the uptake and wash-out of radiotracers from body organs. For years, nuclear medicine has been touted as the modality of choice for evaluating function in health and disease. This evaluation is greatly enhanced using single photon emission computed tomography (SPECT), which permits three-dimensional (3D) visualization of tracer distributions in the body. However, to fully realize the potential of the technique requires the imaging of in vivo dynamic processes of flow and metabolism. Tissue motion and deformation must also be addressed. Absolute quantification of these dynamic processes in the body has the potential to improve diagnosis. This paper presents a review of advancements toward the realization of the potential of dynamic SPECT imaging and a brief history of the development of the instrumentation. A major portion of the paper is devoted to the review of special data processing methods that have been developed for extracting kinetics from dynamic cardiac SPECT data acquired using rotating detector heads that move as radiopharmaceuticals exchange between biological compartments. Recent developments in multi-resolution spatiotemporal methods enable one to estimate kinetic parameters of compartment models of dynamic processes using data acquired from a single camera head with slow gantry rotation. The estimation of kinetic parameters directly from projection measurements improves bias and variance over the conventional method of first reconstructing 3D dynamic images, generating time-activity curves from selected regions of interest and then estimating the kinetic parameters from the generated time-activity curves. Although the potential applications of SPECT for imaging dynamic processes have not been fully realized in the clinic, it is hoped that this review illuminates the potential of SPECT for dynamic imaging, especially in light of new developments that enable measurement of dynamic processes directly from projection measurements.

  15. Predicting Visual Semantic Descriptive Terms from Radiological Image Data: Preliminary Results with Liver Lesions in CT

    PubMed Central

    Depeursinge, Adrien; Kurtz, Camille; Beaulieu, Christopher F.; Napel, Sandy; Rubin, Daniel L.

    2014-01-01

    We describe a framework to model visual semantics of liver lesions in CT images in order to predict the visual semantic terms (VST) reported by radiologists in describing these lesions. Computational models of VST are learned from image data using high-order steerable Riesz wavelets and support vector machines (SVM). The organization of scales and directions that is specific to every VST is modeled as linear combinations of directional Riesz wavelets. The models obtained are steerable, which means that any orientation of the model can be synthesized from linear combinations of the basis filters. The latter property is leveraged to model VST independently of their local orientation. In a first step, these models are used to predict the presence of each semantic term that describes liver lesions. In a second step, the distances between all VST models are calculated to establish a non-hierarchical computationally-derived ontology of VST containing inter-term synonymy and complementarity. A preliminary evaluation of the proposed framework was carried out using 74 liver lesions annotated with a set of 18 VSTs from the RadLex ontology. A leave-one-patient-out cross-validation resulted in an average area under the ROC curve of 0.853 for predicting the presence of each VST when using SVMs in a feature space combining the magnitudes of the steered models with CT intensities. Likelihood maps are created for each VST, which enables high transparency of the information modeled. The computationally-derived ontology obtained from the VST models was found to be consistent with the underlying semantics of the visual terms. It was found to be complementary to the RadLex ontology, and constitutes a potential method to link the image content to visual semantics. The proposed framework is expected to foster human-computer synergies for the interpretation of radiological images while using rotation-covariant computational models of VSTs to (1) quantify their local likelihood and (2) explicitly link them with pixel-based image content in the context of a given imaging domain. PMID:24808406
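
    A sketch of the evaluation scheme described above, assuming precomputed feature vectors (not the authors' steered Riesz-wavelet features): leave-one-patient-out cross-validation of an SVM that predicts the presence of a single visual semantic term; features, labels, and patient_ids are hypothetical arrays.

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict
    from sklearn.metrics import roc_auc_score

    def vst_auc(features, labels, patient_ids):
        """Area under the ROC curve for one visual semantic term, leaving one patient out per fold."""
        clf = SVC(kernel="rbf", probability=True)
        scores = cross_val_predict(clf, features, labels, groups=patient_ids,
                                   cv=LeaveOneGroupOut(), method="predict_proba")[:, 1]
        return roc_auc_score(labels, scores)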

  16. The global lambda visualization facility: An international ultra-high-definition wide-area visualization collaboratory

    USGS Publications Warehouse

    Leigh, J.; Renambot, L.; Johnson, Aaron H.; Jeong, B.; Jagodic, R.; Schwarz, N.; Svistula, D.; Singh, R.; Aguilera, J.; Wang, X.; Vishwanath, V.; Lopez, B.; Sandin, D.; Peterka, T.; Girado, J.; Kooima, R.; Ge, J.; Long, L.; Verlo, A.; DeFanti, T.A.; Brown, M.; Cox, D.; Patterson, R.; Dorn, P.; Wefel, P.; Levy, S.; Talandis, J.; Reitzer, J.; Prudhomme, T.; Coffin, T.; Davis, B.; Wielinga, P.; Stolk, B.; Bum, Koo G.; Kim, J.; Han, S.; Corrie, B.; Zimmerman, T.; Boulanger, P.; Garcia, M.

    2006-01-01

    The research outlined in this paper marks an initial global cooperative effort between visualization and collaboration researchers to build a persistent virtual visualization facility linked by ultra-high-speed optical networks. The goal is to enable the comprehensive and synergistic research and development of the necessary hardware, software and interaction techniques to realize the next generation of end-user tools for scientists to collaborate on the global Lambda Grid. This paper outlines some of the visualization research projects that were demonstrated at the iGrid 2005 workshop in San Diego, California.

  17. Mapping arealisation of the visual cortex of non-primate species: lessons for development and evolution

    PubMed Central

    Homman-Ludiye, Jihane; Bourne, James A.

    2014-01-01

    The integration of the visual stimulus takes place at the level of the neocortex, organized in anatomically distinct and functionally unique areas. Primates, including humans, are heavily dependent on vision, with approximately 50% of their neocortical surface dedicated to visual processing and possess many more visual areas than any other mammal, making them the model of choice to study visual cortical arealisation. However, in order to identify the mechanisms responsible for patterning the developing neocortex, specifying area identity as well as elucidate events that have enabled the evolution of the complex primate visual cortex, it is essential to gain access to the cortical maps of alternative species. To this end, species including the mouse have driven the identification of cellular markers, which possess an area-specific expression profile, the development of new tools to label connections and technological advance in imaging techniques enabling monitoring of cortical activity in a behaving animal. In this review we present non-primate species that have contributed to elucidating the evolution and development of the visual cortex. We describe the current understanding of the mechanisms supporting the establishment of areal borders during development, mainly gained in the mouse thanks to the availability of genetically modified lines but also the limitations of the mouse model and the need for alternate species. PMID:25071460

  18. 75 FR 53262 - Privacy Act of 1974; System of Records

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-31

    ... a new Privacy Act system of records, JUSTICE/FBI- 021, the Data Integration and Visualization System... Act system of records, the Data Integration and Visualization System (DIVS), Justice/FBI-021. The... investigative mission by enabling access, search, integration, and analytics across multiple existing databases...

  19. Design study of a high-resolution breast-dedicated PET system built from cadmium zinc telluride detectors

    PubMed Central

    Peng, Hao; Levin, Craig S

    2013-01-01

    We studied the performance of a dual-panel positron emission tomography (PET) camera dedicated to breast cancer imaging using Monte Carlo simulation. The proposed system consists of two 4 cm thick 12 × 15 cm2 area cadmium zinc telluride (CZT) panels with adjustable separation, which can be put in close proximity to the breast and/or axillary nodes. Unique characteristics distinguishing the proposed system from previous efforts in breast-dedicated PET instrumentation are the deployment of CZT detectors with superior spatial and energy resolution, using a cross-strip electrode readout scheme to enable 3D positioning of individual photon interaction coordinates in the CZT, which includes directly measured photon depth-of-interaction (DOI), and arranging the detector slabs edge-on with respect to incoming 511 keV photons for high photon sensitivity. The simulation results show that the proposed CZT dual-panel PET system is able to achieve superior performance in terms of photon sensitivity, noise equivalent count rate, spatial resolution and lesion visualization. The proposed system is expected to achieve ~32% photon sensitivity for a point source at the center and a 4 cm panel separation. For a simplified breast phantom adjacent to heart and torso compartments, the peak noise equivalent count (NEC) rate is predicted to be ~94.2 kcts s−1 (breast volume: 720 cm3 and activity concentration: 3.7 kBq cm−3) for a ~10% energy window around 511 keV and ~8 ns coincidence time window. The system achieves 1 mm intrinsic spatial resolution anywhere between the two panels with a 4 cm panel separation if the detectors have DOI resolution less than 2 mm. For a 3 mm DOI resolution, the system exhibits excellent sphere resolution uniformity (σrms/mean ≤ 10%) across a 4 cm width FOV. Simulation results indicate that the system exhibits superior hot sphere visualization and is expected to visualize 2 mm diameter spheres with a 5:1 activity concentration ratio within roughly 7 min imaging time. Furthermore, we observe that the degree of spatial resolution degradation along the direction orthogonal to the two panels that is typical of a limited angle tomography configuration is mitigated by having high-resolution DOI capabilities that enable more accurate positioning of oblique response lines. PMID:20400807
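
    For reference, the noise-equivalent count rate quoted above is conventionally computed from the true, scattered, and random coincidence rates as sketched below; the symbols and the randoms factor k are the standard ones, not values taken from the paper.

    def nec_rate(trues, scatters, randoms, k=2.0):
        """Noise-equivalent count rate: NEC = T^2 / (T + S + k*R); k depends on how randoms are estimated."""
        return trues ** 2 / (trues + scatters + k * randoms)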

  20. Experience Report: Visual Programming in the Real World

    NASA Technical Reports Server (NTRS)

    Baroth, E.; Hartsough, C

    1994-01-01

    This paper reports direct experience with two commercial, widely used visual programming environments. While neither of these systems is object oriented, the tools have transformed the development process and indicate a direction for visual object oriented tools to proceed.

  1. Placing blood on the target: a challenge for visually impaired persons.

    PubMed

    Cleary, M E; Hamilton, J E

    1993-01-01

    An individualized, blood glucose self-monitoring procedure for those who are visually impaired must be developed, taught, practiced, observed, and reviewed. Effective teaching requires understanding functional vision loss, observing safety precautions, organizing the work area, obtaining an adequate blood sample, ensuring accurate placement of blood on the strip, and cleaning up. Thoroughness and repetition enable the visually impaired person to perform the procedure safely and confidently.

  2. A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data

    PubMed Central

    Venkat, A.; Christensen, C.; Gyulassy, A.; Summa, B.; Federer, F.; Angelucci, A.; Pascucci, V.

    2017-01-01

    The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the VISUS framework used in an interactive setting with the microscopy data. PMID:28638896
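
    As a loose illustration of why hierarchical, locality-preserving layouts such as the IDX format make streaming cheap (this is a generic Morton/Z-order index, not the HZ-order layout used by IDX): interleaving coordinate bits keeps spatially neighboring samples close together on disk.

    def morton3d(x: int, y: int, z: int, bits: int = 10) -> int:
        """Interleave the low `bits` bits of x, y, z into a single Z-order index."""
        code = 0
        for i in range(bits):
            code |= ((x >> i) & 1) << (3 * i)
            code |= ((y >> i) & 1) << (3 * i + 1)
            code |= ((z >> i) & 1) << (3 * i + 2)
        return code

    # Sorting voxel coordinates by morton3d(x, y, z) groups nearby voxels into the same file
    # blocks, so coarse-to-fine requests touch only a small, contiguous subset of the data.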

  3. A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data.

    PubMed

    Venkat, A; Christensen, C; Gyulassy, A; Summa, B; Federer, F; Angelucci, A; Pascucci, V

    2016-08-01

    The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the VISUS framework used in an interactive setting with the microscopy data.

  4. Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.

    PubMed

    Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M

    2015-01-01

    This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual content, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of these data as a tangible haptic experience has not been fully explored, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments participated in our experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments by providing an enhanced interactive experience where they can remotely access public places (art galleries and museums) with the aid of haptic modality and robotic telepresence.
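
    A minimal sketch of the first step such a system performs, turning an RGB-D depth frame into a 3D point cloud with the pinhole camera model; the intrinsics fx, fy, cx, cy are hypothetical and this is not the authors' implementation.

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        """depth: (H, W) array of depths in meters; returns (H*W, 3) XYZ points in the camera frame."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)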

  5. BlockLogo: visualization of peptide and sequence motif conservation

    PubMed Central

    Olsen, Lars Rønn; Kudahl, Ulrich Johan; Simon, Christian; Sun, Jing; Schönbach, Christian; Reinherz, Ellis L.; Zhang, Guang Lan; Brusic, Vladimir

    2013-01-01

    BlockLogo is a web-server application for visualization of protein and nucleotide fragments, continuous protein sequence motifs, and discontinuous sequence motifs using calculation of block entropy from multiple sequence alignments. The user input consists of a multiple sequence alignment, selection of motif positions, type of sequence, and output format definition. The output has BlockLogo along with the sequence logo, and a table of motif frequencies. We deployed BlockLogo as an online application and have demonstrated its utility through examples that show visualization of T-cell epitopes and B-cell epitopes (both continuous and discontinuous). Our additional example shows a visualization and analysis of structural motifs that determine specificity of peptide binding to HLA-DR molecules. The BlockLogo server also employs selected experimentally validated prediction algorithms to enable on-the-fly prediction of MHC binding affinity to 15 common HLA class I and class II alleles as well as visual analysis of discontinuous epitopes from multiple sequence alignments. It enables the visualization and analysis of structural and functional motifs that are usually described as regular expressions. It provides a compact view of discontinuous motifs composed of distant positions within biological sequences. BlockLogo is available at: http://research4.dfci.harvard.edu/cvc/blocklogo/ and http://methilab.bu.edu/blocklogo/ PMID:24001880
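
    A minimal sketch (not the BlockLogo implementation) of the per-column Shannon entropy underlying a sequence or block logo, computed from a multiple sequence alignment supplied as equal-length strings.

    import math
    from collections import Counter

    def column_entropies(alignment):
        """alignment: list of aligned sequences (equal-length strings); returns entropy in bits per column."""
        entropies = []
        for column in zip(*alignment):
            counts = Counter(c for c in column if c != "-")  # ignore gap characters
            total = sum(counts.values())
            if total == 0:
                entropies.append(0.0)  # all-gap column
                continue
            entropies.append(-sum((n / total) * math.log2(n / total) for n in counts.values()))
        return entropies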

  6. Live-cell visualization of gasdermin D-driven pyroptotic cell death.

    PubMed

    Rathkey, Joseph K; Benson, Bryan L; Chirieleison, Steven M; Yang, Jie; Xiao, Tsan S; Dubyak, George R; Huang, Alex Y; Abbott, Derek W

    2017-09-01

    Pyroptosis is a form of cell death important in defenses against pathogens that can also result in a potent and sometimes pathological inflammatory response. During pyroptosis, GSDMD (gasdermin D), the pore-forming effector protein, is cleaved, forms oligomers, and inserts into the membranes of the cell, resulting in rapid cell death. However, the potent cell death induction caused by GSDMD has complicated our ability to understand the biology of this protein. Studies aimed at visualizing GSDMD have relied on expression of GSDMD fragments in epithelial cell lines that naturally lack GSDMD expression and also lack the proteases necessary to cleave GSDMD. In this work, we performed mutagenesis and molecular modeling to strategically place tags and fluorescent proteins within GSDMD that support native pyroptosis and facilitate live-cell imaging of pyroptotic cell death. Here, we demonstrate that these fusion proteins are cleaved by caspases-1 and -11 at Asp-276. Mutations that disrupted the predicted p30-p20 autoinhibitory interface resulted in GSDMD aggregation, supporting the oligomerizing activity of these mutations. Furthermore, we show that these novel GSDMD fusions execute inflammasome-dependent pyroptotic cell death in response to multiple stimuli and allow for visualization of the morphological changes associated with pyroptotic cell death in real time. This work therefore provides new tools that not only expand the molecular understanding of pyroptosis but also enable its direct visualization. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  7. Surgical evaluation of a novel tethered robotic capsule endoscope using micro-patterned treads.

    PubMed

    Sliker, Levin J; Kern, Madalyn D; Schoen, Jonathan A; Rentschler, Mark E

    2012-10-01

    The state-of-the-art technology for gastrointestinal (GI) tract exploration is a capsule endoscope (CE). Capsule endoscopes are pill-sized devices that provide visual feedback of the GI tract as they move passively through the patient. These passive devices could benefit from a mobility system enabling maneuverability and controllability. Potential benefits of a tethered robotic capsule endoscope (tRCE) include faster travel speeds, reaction force generation for biopsy, and decreased capsule retention. In this work, a tethered CE is developed with an active locomotion system for mobility within a collapsed lumen. Micro-patterned polydimethylsiloxane (PDMS) treads are implemented onto a custom capsule housing as a mobility method. The tRCE housing contains a direct current (DC) motor and gear train to drive the treads, a video camera for visual feedback, and two light sources (infrared and visible) for illumination. The device was placed within the insufflated abdomen of a live anesthetized pig to evaluate mobility performance on a planar tissue surface, as well as within the cecum to evaluate mobility performance in a collapsed lumen. The tRCE was capable of forward and reverse mobility for both planar and collapsed lumen tissue environments. Also, using an onboard visual system, the tRCE was capable of demonstrating visual feedback within an insufflated, anesthetized porcine abdomen. Proof-of-concept in vivo tRCE mobility using micro-patterned PDMS treads was shown. This suggests that a similar method could be implemented in future smaller, faster, and untethered RCEs.

  8. Development of Four Dimensional Human Model that Enables Deformation of Skin, Organs and Blood Vessel System During Body Movement - Visualizing Movements of the Musculoskeletal System.

    PubMed

    Suzuki, Naoki; Hattori, Asaki; Hashizume, Makoto

    2016-01-01

    We constructed a four-dimensional human model that is able to visualize the structure of the whole human body, including its inner structures, in real time, allowing us to analyze dynamic changes of the human body in the temporal, spatial and quantitative domains. To verify whether our model generates changes consistent with real human body dynamics, we measured a participant's skin expansion and compared it to that of the model under the same body movement. We also made a contribution to the field of orthopedics, devising a display method that allows the observer to more easily see the changes in the complex skeletal muscle system during body movements, which in the past were difficult to visualize.

  9. eLoom and Flatland: specification, simulation and visualization engines for the study of arbitrary hierarchical neural architectures.

    PubMed

    Caudell, Thomas P; Xiao, Yunhai; Healy, Michael J

    2003-01-01

    eLoom is an open source graph simulation software tool, developed at the University of New Mexico (UNM), that enables users to specify and simulate neural network models. Its specification language and libraries enable users to construct and simulate arbitrary, potentially hierarchical network structures on serial and parallel processing systems. In addition, eLoom is integrated with UNM's Flatland, an open source virtual environments development tool, to provide real-time visualizations of network structure and activity. Visualization is a useful method for understanding both learning and computation in artificial neural networks. Through animated 3D pictorial representations of the state and flow of information in the network, a better understanding of network functionality is achieved. ART-1, LAPART-II, MLP, and SOM neural networks are presented to illustrate eLoom and Flatland's capabilities.

  10. Reliability and relative weighting of visual and nonvisual information for perceiving direction of self-motion during walking

    PubMed Central

    Saunders, Jeffrey A.

    2014-01-01

    Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures. PMID:24648194
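
    The optimal-integration prediction referred to above follows the standard inverse-variance weighting rule, sketched here with hypothetical single-cue standard deviations (the paper reports observed variability, not these exact inputs).

    def optimal_combination(sigma_visual, sigma_nonvisual):
        """Return the predicted weight on vision and the combined heading standard deviation."""
        w_visual = sigma_nonvisual ** 2 / (sigma_visual ** 2 + sigma_nonvisual ** 2)
        sigma_combined = (sigma_visual ** 2 * sigma_nonvisual ** 2 /
                          (sigma_visual ** 2 + sigma_nonvisual ** 2)) ** 0.5
        return w_visual, sigma_combined

    # Example: optimal_combination(4.0, 2.4) gives the predicted weight on optic flow and the
    # combined variability when the nonvisual cue is the more reliable of the two.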

  11. Excitation model of pacemaker cardiomyocytes of cardiac conduction system

    NASA Astrophysics Data System (ADS)

    Grigoriev, M.; Babich, L.

    2015-11-01

    The myocardium includes typical cardiomyocytes and atypical pacemaker cells, which form the cardiac conduction system. Under normal conditions, excitation from the atrioventricular node propagates in only one direction; retrograde conduction of pulses is not possible. The most important prerequisite for the work of cardiomyocytes is the anatomical integrity of the conduction system. Periodic changes in the contractile force of cardiomyocytes arise from two mechanisms of self-regulation, heterometric and homeometric. A graphical representation of excitation pulse propagation along the heart muscle gives a more accurate understanding of arrhythmia mechanisms. Such models can visualize the essence of excitation dynamics, but they lack a predictive function for estimating outcomes. An integrative mathematical model enables further investigation of the general laws of active myocardial behavior and allows the mechanisms underlying disturbances of the electrical and contractile function of cardiomyocytes to be determined. Currently, there is no full understanding of the topography of pacemakers and their ionic mechanisms. There is therefore a need to develop mathematical modeling and comparative studies of the electrophysiological arrangement of the cells of the atrioventricular junction and the ventricular conduction system.
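
    As a generic illustration of the kind of excitable-cell model discussed above (not the integrative model the authors call for), the sketch below integrates a FitzHugh-Nagumo-type system with forward Euler; the parameters are illustrative and chosen so that the simulated cell fires rhythmically, pacemaker-like.

    import numpy as np

    def fitzhugh_nagumo(T=500.0, dt=0.01, a=0.7, b=0.8, eps=0.08, I=0.5):
        """Integrate dv/dt = v - v^3/3 - w + I and dw/dt = eps*(v + a - b*w) with forward Euler."""
        n = int(T / dt)
        v = np.zeros(n)
        w = np.zeros(n)
        for i in range(n - 1):
            v[i + 1] = v[i] + dt * (v[i] - v[i] ** 3 / 3.0 - w[i] + I)
            w[i + 1] = w[i] + dt * eps * (v[i] + a - b * w[i])
        return v, w  # v shows sustained oscillations for this constant drive I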

  12. Direct observation of lubricant additives using tomography techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yunyun; Sanchez, Carlos; Parkinson, Dilworth Y.

    Lubricants play important roles in daily activities such as driving, walking, and cooking. The current understanding of mechanisms of lubrication, particularly in mechanical systems, has been limited by the lack of capability in direct observation. Here, we report an in situ approach to directly observe the motion of additive particles in grease under the influence of shear. Using the K-edge tomography technique, it is possible to detect particular additives in a grease and observe their distribution through 3D visualization. A commercial grease as a reference was studied with and without an inorganic additive of Fe3O4 microparticles. The results showed that it was possible to identify these particles and track their movement. Under a shear stress, Fe3O4 particles were found to adhere to the edge of calcium complex thickeners commonly used in grease. Due to sliding, the grease formed a film with increased density. This approach enables in-line monitoring of a lubricant and future investigation in mechanisms of lubrication.

  13. Cyberinfrastructure for Atmospheric Discovery

    NASA Astrophysics Data System (ADS)

    Wilhelmson, R.; Moore, C. W.

    2004-12-01

    Each year across the United States, floods, tornadoes, hail, strong winds, lightning, hurricanes, and winter storms cause hundreds of deaths, routinely disrupt transportation and commerce, and result in billions of dollars in annual economic losses. MEAD and LEAD are two recent efforts aimed at developing the cyberinfrastructure for studying and forecasting these events through collection, integration, and analysis of observational data coupled with numerical simulation, data mining, and visualization. MEAD (Modeling Environment for Atmospheric Discovery) has been funded for two years as an NCSA (National Center for Supercomputing Applications) Alliance Expedition. The goal of this expedition has been the development/adaptation of cyberinfrastructure that will enable research simulations, data mining, machine learning and visualization of hurricanes and storms utilizing high-performance computing environments, including the TeraGrid. Portal, grid, and web infrastructure are being tested that will enable the launching of hundreds of individual WRF (Weather Research and Forecasting) simulations. In a similar way, multiple Regional Ocean Modeling System (ROMS) or WRF/ROMS simulations can be carried out. Metadata and the resulting large volumes of data will then be made available for further study and for educational purposes using analysis, mining, and visualization services. Initial coupling of the ROMS and WRF codes has been completed and parallel I/O is being implemented for these models. Management of these activities (services) is being enabled through Grid workflow technologies (e.g. OGCE). LEAD (Linked Environments for Atmospheric Discovery) is a recently funded 5-year, large NSF ITR grant that involves 9 institutions who are developing a comprehensive national cyberinfrastructure in mesoscale meteorology, particularly one that can interoperate with others being developed. LEAD is addressing the fundamental information technology (IT) research challenges needed to create an integrated, scalable framework for identifying, accessing, preparing, assimilating, predicting, managing, analyzing, mining, and visualizing a broad array of meteorological data and model output, independent of format and physical location. A transforming element of LEAD is Workflow Orchestration for On-Demand, Real-Time, Dynamically-Adaptive Systems (WOORDS), which allows the use of analysis tools, forecast models, and data repositories as dynamically adaptive, on-demand, Grid-enabled systems that can a) change configuration rapidly and automatically in response to weather; b) continually be steered by new data; c) respond to decision-driven inputs from users; d) initiate other processes automatically; and e) steer remote observing technologies to optimize data collection for the problem at hand. Although LEAD efforts are primarily directed at mesoscale meteorology, the IT services being developed have general applicability to other geoscience and environmental sciences. Integration of traditional and new data sources is a crucial component in LEAD for data analysis and assimilation, for integration (ensemble mining) of data from sets of simulations, and for comparing results to observational data. As part of the integration effort, LEAD is creating a myLEAD metadata catalog service: a personal metacatalog that extends the Globus MCS system and is built on top of the OGSA-DAI system developed at the National e-Science Center in Edinburgh, Scotland.

  14. Localized direction selective responses in the dendrites of visual interneurons of the fly

    PubMed Central

    2010-01-01

    Background The various tasks of visual systems, including course control, collision avoidance and the detection of small objects, require at the neuronal level the dendritic integration and subsequent processing of many spatially distributed visual motion inputs. While much is known about the pooled output in these systems, as in the medial superior temporal cortex of monkeys or in the lobula plate of the insect visual system, the motion tuning of the elements that provide the input has yet received little attention. In order to visualize the motion tuning of these inputs we examined the dendritic activation patterns of neurons that are selective for the characteristic patterns of wide-field motion, the lobula-plate tangential cells (LPTCs) of the blowfly. These neurons are known to sample direction-selective motion information from large parts of the visual field and combine these signals into axonal and dendro-dendritic outputs. Results Fluorescence imaging of intracellular calcium concentration allowed us to take a direct look at the local dendritic activity and the resulting local preferred directions in LPTC dendrites during activation by wide-field motion in different directions. These 'calcium response fields' resembled a retinotopic dendritic map of local preferred directions in the receptive field, the layout of which is a distinguishing feature of different LPTCs. Conclusions Our study reveals how neurons acquire selectivity for distinct visual motion patterns by dendritic integration of the local inputs with different preferred directions. With their spatial layout of directional responses, the dendrites of the LPTCs we investigated thus served as matched filters for wide-field motion patterns. PMID:20384983

  15. An interactive, multi-touch videowall for scientific data exploration

    NASA Astrophysics Data System (ADS)

    Blower, Jon; Griffiths, Guy; van Meersbergen, Maarten; Lusher, Scott; Styles, Jon

    2014-05-01

    The use of videowalls for scientific data exploration is rising as hardware becomes cheaper and the availability of software and multimedia content grows. Most videowalls are used primarily for outreach and communication purposes, but there is increasing interest in using large display screens to support exploratory visualization as an integral part of scientific research. In this PICO presentation we will present a brief overview of a new videowall system at the University of Reading, which is designed specifically to support interactive, exploratory visualization activities in climate science and Earth Observation. The videowall consists of eight 42-inch full-HD screens (in 4x2 formation), giving a total resolution of about 16 megapixels. The display is managed by a videowall controller, which can direct video to the screen from up to four external laptops, a purpose-built graphics workstation, or any combination thereof. A multi-touch overlay provides the capability for the user to interact directly with the data. There are many ways to use the videowall, and a key technical challenge is to make the most of the touch capabilities - touch has the potential to greatly reduce the learning curve in interactive data exploration, but most software is not yet designed for this purpose. In the PICO we will present an overview of some ways in which the wall can be employed in science, seeking feedback and discussion from the community. The system was inspired by an existing and highly-successful system (known as the "Collaboratorium") at the Netherlands e-Science Center (NLeSC). We will demonstrate how we have adapted NLeSC's visualization software to our system for touch-enabled multi-screen climate data exploration.

  16. The fundus photo has met its match: optical coherence tomography and adaptive optics ophthalmoscopy are here to stay

    PubMed Central

    Morgan, Jessica I. W.

    2016-01-01

    Purpose Over the past 25 years, optical coherence tomography (OCT) and adaptive optics (AO) ophthalmoscopy have revolutionised our ability to non-invasively observe the living retina. The purpose of this review is to highlight the techniques and human clinical applications of recent advances in OCT and adaptive optics scanning laser/light ophthalmoscopy (AOSLO) ophthalmic imaging. Recent findings Optical coherence tomography retinal and optic nerve head (ONH) imaging technology allows high resolution in the axial direction resulting in cross-sectional visualisation of retinal and ONH lamination. Complementary AO ophthalmoscopy gives high resolution in the transverse direction resulting in en face visualisation of retinal cell mosaics. Innovative detection schemes applied to OCT and AOSLO technologies (such as spectral domain OCT, OCT angiography, confocal and non-confocal AOSLO, fluorescence, and AO-OCT) have enabled high contrast between retinal and ONH structures in three dimensions and have allowed in vivo retinal imaging to approach that of histological quality. In addition, both OCT and AOSLO have shown the capability to detect retinal reflectance changes in response to visual stimuli, paving the way for future studies to investigate objective biomarkers of visual function at the cellular level. Increasingly, these imaging techniques are being applied to clinical studies of the normal and diseased visual system. Summary Optical coherence tomography and AOSLO technologies are capable of elucidating the structure and function of the retina and ONH noninvasively with unprecedented resolution and contrast. The techniques have proven their worth in both basic science and clinical applications and each will continue to be utilised in future studies for many years to come. PMID:27112222

  17. The fundus photo has met its match: optical coherence tomography and adaptive optics ophthalmoscopy are here to stay.

    PubMed

    Morgan, Jessica I W

    2016-05-01

    Over the past 25 years, optical coherence tomography (OCT) and adaptive optics (AO) ophthalmoscopy have revolutionised our ability to non-invasively observe the living retina. The purpose of this review is to highlight the techniques and human clinical applications of recent advances in OCT and adaptive optics scanning laser/light ophthalmoscopy (AOSLO) ophthalmic imaging. Optical coherence tomography retinal and optic nerve head (ONH) imaging technology allows high resolution in the axial direction resulting in cross-sectional visualisation of retinal and ONH lamination. Complementary AO ophthalmoscopy gives high resolution in the transverse direction resulting in en face visualisation of retinal cell mosaics. Innovative detection schemes applied to OCT and AOSLO technologies (such as spectral domain OCT, OCT angiography, confocal and non-confocal AOSLO, fluorescence, and AO-OCT) have enabled high contrast between retinal and ONH structures in three dimensions and have allowed in vivo retinal imaging to approach that of histological quality. In addition, both OCT and AOSLO have shown the capability to detect retinal reflectance changes in response to visual stimuli, paving the way for future studies to investigate objective biomarkers of visual function at the cellular level. Increasingly, these imaging techniques are being applied to clinical studies of the normal and diseased visual system. Optical coherence tomography and AOSLO technologies are capable of elucidating the structure and function of the retina and ONH noninvasively with unprecedented resolution and contrast. The techniques have proven their worth in both basic science and clinical applications and each will continue to be utilised in future studies for many years to come. © 2016 The Authors Ophthalmic & Physiological Optics © 2016 The College of Optometrists.

  18. Implementation of an ADME enabling selection and visualization tool for drug discovery.

    PubMed

    Stoner, Chad L; Gifford, Eric; Stankovic, Charles; Lepsy, Christopher S; Brodfuehrer, Joanne; Prasad, J V N Vara; Surendran, Narayanan

    2004-05-01

    The pharmaceutical industry has large investments in compound library enrichment, high throughput biological screening, and biopharmaceutical (ADME) screening. As the number of compounds submitted for in vitro ADME screens increases, data analysis, interpretation, and reporting will become rate limiting in providing ADME-structure-activity relationship information to guide the synthetic strategy for chemical series. To meet these challenges, a software tool was developed and implemented that enables scientists to explore in vitro and in silico ADME and chemistry data in a multidimensional framework. The present work integrates physicochemical and ADME data, encompassing results for Caco-2 permeability, human liver microsomal half-life, rat liver microsomal half-life, kinetic solubility, measured log P, rule of 5 descriptors (molecular weight, hydrogen bond acceptors, hydrogen bond donors, calculated log P), polar surface area, chemical stability, and CYP450 3A4 inhibition. To facilitate interpretation of these data, a semicustomized software solution using Spotfire was designed that allows for multidimensional data analysis and visualization. The solution also enables simultaneous viewing and export of chemical structures with the corresponding ADME properties, enabling a more facile analysis of ADME-structure-activity relationships. In vitro and in silico ADME data were generated for 358 compounds from a series of human immunodeficiency virus protease inhibitors, resulting in a data set of 5370 experimental values, which were subsequently analyzed and visualized using the customized Spotfire application. Implementation of this analysis and visualization tool has accelerated the selection of molecules for further development based on optimum ADME characteristics, and provided medicinal chemistry with specific, data-driven structural recommendations for improvements in the ADME profile. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association. J Pharm Sci 93: 1131-1141, 2004
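
    The paper's own tool was a customized Spotfire application; as a rough, hypothetical illustration of the same kind of multidimensional ADME triage, the Python sketch below flags Lipinski rule-of-5 violations and plots two of the listed ADME endpoints against each other. All column names and values are invented for the example and are not from the study.

      # Illustrative sketch only: the paper used a customized Spotfire application.
      # Column names (caco2_perm, hlm_t_half, mw, logp, hbd, hba) are hypothetical.
      import pandas as pd
      import matplotlib.pyplot as plt

      compounds = pd.DataFrame({
          "id":         ["cpd-1", "cpd-2", "cpd-3"],
          "mw":         [480.0, 620.5, 510.3],      # molecular weight
          "logp":       [3.2, 6.1, 4.8],            # measured log P
          "hbd":        [2, 5, 3],                  # hydrogen bond donors
          "hba":        [7, 12, 9],                 # hydrogen bond acceptors
          "caco2_perm": [12.0, 1.5, 8.4],           # permeability, 1e-6 cm/s
          "hlm_t_half": [45.0, 8.0, 22.0],          # human liver microsome t1/2, min
      })

      # Count Lipinski rule-of-5 violations per compound.
      compounds["ro5_violations"] = (
          (compounds["mw"] > 500).astype(int)
          + (compounds["logp"] > 5).astype(int)
          + (compounds["hbd"] > 5).astype(int)
          + (compounds["hba"] > 10).astype(int)
      )

      # Two ADME dimensions at once, colored by rule-of-5 violations.
      plt.scatter(compounds["caco2_perm"], compounds["hlm_t_half"],
                  c=compounds["ro5_violations"], cmap="viridis")
      plt.colorbar(label="rule-of-5 violations")
      plt.xlabel("Caco-2 permeability (1e-6 cm/s)")
      plt.ylabel("HLM half-life (min)")
      plt.show()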

  19. Image-guided feedback for ophthalmic microsurgery using multimodal intraoperative swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Li, Jianwei D.; Malone, Joseph D.; El-Haddad, Mohamed T.; Arquitola, Amber M.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2017-02-01

    Surgical interventions for ocular diseases involve manipulations of semi-transparent structures in the eye, but limited visualization of these tissue layers remains a critical barrier to developing novel surgical techniques and improving clinical outcomes. We addressed limitations in image-guided ophthalmic microsurgery by using microscope-integrated multimodal intraoperative swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography (iSS-SESLO-OCT). We previously demonstrated in vivo human ophthalmic imaging using SS-SESLO-OCT, which enabled simultaneous acquisition of en face SESLO images with every OCT cross-section. Here, we integrated our new 400 kHz iSS-SESLO-OCT, which used a buffered Axsun 1060 nm swept-source, with a surgical microscope and TrueVision stereoscopic viewing system to provide image-based feedback. In vivo human imaging performance was demonstrated on a healthy volunteer, and simulated surgical maneuvers were performed in ex vivo porcine eyes. Densely sampled static volumes and volumes subsampled at 10 volumes-per-second were used to visualize tissue deformations and surgical dynamics during corneal sweeps, compressions, and dissections, and retinal sweeps, compressions, and elevations. En face SESLO images enabled orientation and co-registration with the widefield surgical microscope view while OCT imaging enabled depth-resolved visualization of surgical instrument positions relative to anatomic structures-of-interest. TrueVision heads-up display allowed for side-by-side viewing of the surgical field with SESLO and OCT previews for real-time feedback, and we demonstrated novel integrated segmentation overlays for augmented-reality surgical guidance. Integration of these complementary imaging modalities may benefit surgical outcomes by enabling real-time intraoperative visualization of surgical plans, instrument positions, tissue deformations, and image-based surrogate biomarkers correlated with completion of surgical goals.

  20. Bumblebee calligraphy: the design and control of flight motifs in the learning and return flights of Bombus terrestris.

    PubMed

    Philippides, Andrew; de Ibarra, Natalie Hempel; Riabinina, Olena; Collett, Thomas S

    2013-03-15

    Many wasps and bees learn the position of their nest relative to nearby visual features during elaborate 'learning' flights that they perform on leaving the nest. Return flights to the nest are thought to be patterned so that insects can reach their nest by matching their current view to views of their surroundings stored during learning flights. To understand how ground-nesting bumblebees might implement such a matching process, we have video-recorded the bees' learning and return flights and analysed the similarities and differences between the principal motifs of their flights. Loops that take bees away from and bring them back towards the nest are common during learning flights and less so in return flights. Zigzags are more prominent on return flights. Both motifs tend to be nest based. Bees often both fly towards and face the nest in the middle of loops and at the turns of zigzags. Before and after flight direction and body orientation are aligned, the two diverge from each other so that the nest is held within the bees' fronto-lateral visual field while flight direction relative to the nest can fluctuate more widely. These and other parallels between loops and zigzags suggest that they are stable variations of an underlying pattern, which enable bees to store and reacquire similar nest-focused views during learning and return flights.

  1. Search performance is better predicted by tileability than presence of a unique basic feature.

    PubMed

    Chang, Honghua; Rosenholtz, Ruth

    2016-08-01

    Traditional models of visual search such as feature integration theory (FIT; Treisman & Gelade, 1980), have suggested that a key factor determining task difficulty consists of whether or not the search target contains a "basic feature" not found in the other display items (distractors). Here we discriminate between such traditional models and our recent texture tiling model (TTM) of search (Rosenholtz, Huang, Raj, Balas, & Ilie, 2012b), by designing new experiments that directly pit these models against each other. Doing so is nontrivial, for two reasons. First, the visual representation in TTM is fully specified, and makes clear testable predictions, but its complexity makes getting intuitions difficult. Here we elucidate a rule of thumb for TTM, which enables us to easily design new and interesting search experiments. FIT, on the other hand, is somewhat ill-defined and hard to pin down. To get around this, rather than designing totally new search experiments, we start with five classic experiments that FIT already claims to explain: T among Ls, 2 among 5s, Q among Os, O among Qs, and an orientation/luminance-contrast conjunction search. We find that fairly subtle changes in these search tasks lead to significant changes in performance, in a direction predicted by TTM, providing definitive evidence in favor of the texture tiling model as opposed to traditional views of search.

  2. In situ visualization of newly synthesized proteins in environmental microbes using amino acid tagging and click chemistry

    PubMed Central

    Hatzenpichler, Roland; Scheller, Silvan; Tavormina, Patricia L; Babin, Brett M; Tirrell, David A; Orphan, Victoria J

    2014-01-01

    Here we describe the application of a new click chemistry method for fluorescent tracking of protein synthesis in individual microorganisms within environmental samples. This technique, termed bioorthogonal non-canonical amino acid tagging (BONCAT), is based on the in vivo incorporation of the non-canonical amino acid L-azidohomoalanine (AHA), a surrogate for L-methionine, followed by fluorescent labelling of AHA-containing cellular proteins by azide-alkyne click chemistry. BONCAT was evaluated with a range of phylogenetically and physiologically diverse archaeal and bacterial pure cultures and enrichments, and used to visualize translationally active cells within complex environmental samples including an oral biofilm, freshwater and anoxic sediment. We also developed combined assays that couple BONCAT with ribosomal RNA (rRNA)-targeted fluorescence in situ hybridization (FISH), enabling a direct link between taxonomic identity and translational activity. Using a methanotrophic enrichment culture incubated under different conditions, we demonstrate the potential of BONCAT-FISH to study microbial physiology in situ. A direct comparison of anabolic activity using BONCAT and stable isotope labelling by nano-scale secondary ion mass spectrometry (¹⁵NH₃ assimilation) for individual cells within a sediment-sourced enrichment culture showed concordance between AHA-positive cells and ¹⁵N enrichment. BONCAT-FISH offers a fast, inexpensive and straightforward fluorescence microscopy method for studying the in situ activity of environmental microbes on a single-cell level. PMID:24571640

  3. Application of surface analytical methods for hazardous situation in the Adriatic Sea: monitoring of organic matter dynamics and oil pollution

    NASA Astrophysics Data System (ADS)

    Pletikapić, Galja; Ivošević DeNardis, Nadica

    2017-01-01

    Surface analytical methods are applied to examine the environmental status of seawaters. The present overview emphasizes advantages of combining surface analytical methods, applied to a hazardous situation in the Adriatic Sea, such as monitoring of the first aggregation phases of dissolved organic matter in order to potentially predict the massive mucilage formation and testing of oil spill cleanup. Such an approach, based on fast and direct characterization of organic matter and its high-resolution visualization, sets a continuous-scale description of organic matter from micro- to nanometre scales. Electrochemical method of chronoamperometry at the dropping mercury electrode meets the requirements for monitoring purposes due to the simple and fast analysis of a large number of natural seawater samples enabling simultaneous differentiation of organic constituents. In contrast, atomic force microscopy allows direct visualization of biotic and abiotic particles and provides an insight into structural organization of marine organic matter at micro- and nanometre scales. In the future, merging data at different spatial scales, taking into account experimental input on micrometre scale, observations on metre scale and modelling on kilometre scale, will be important for developing sophisticated technological platforms for knowledge transfer, reports and maps applicable for the marine environmental protection and management of the coastal area, especially for tourism, fishery and cruiser trafficking.

  4. How barn owls (Tyto alba) visually follow moving voles (Microtus socialis) before attacking them.

    PubMed

    Fux, Michal; Eilam, David

    2009-09-07

    The present study focused on the movements that owls perform before they swoop down on their prey. The working hypothesis was that owl head movements reflect the capacity to efficiently follow a moving prey both visually and auditorily. To test this hypothesis, five tame barn owls (Tyto alba) were each exposed 10 times to a live vole in a laboratory setting that enabled us to simultaneously record the behavior of both owl and vole. Bi-dimensional analysis of the horizontal and vertical projections of movements revealed that owl head movements increased in amplitude parallel to the vole's direction of movement (sideways or away from/toward the owl). However, the owls also performed relatively large repetitive horizontal head movements when the voles were progressing in any direction, suggesting that these movements were critical for the owl to accurately locate the prey, independent of prey behavior. From the pattern of head movements we conclude that owls orient toward the prospective clash point, and then return to the target itself (the vole) - a pattern that fits an interception rather than a tracking mode of following a moving target. The large horizontal component of head movement in following live prey may indicate that barn owls either have a horizontally narrow fovea or that these movements serve in forming a motion parallax along with preserving image acuity on a horizontally wide fovea.

  5. Search performance is better predicted by tileability than presence of a unique basic feature

    PubMed Central

    Chang, Honghua; Rosenholtz, Ruth

    2016-01-01

    Traditional models of visual search such as feature integration theory (FIT; Treisman & Gelade, 1980), have suggested that a key factor determining task difficulty consists of whether or not the search target contains a “basic feature” not found in the other display items (distractors). Here we discriminate between such traditional models and our recent texture tiling model (TTM) of search (Rosenholtz, Huang, Raj, Balas, & Ilie, 2012b), by designing new experiments that directly pit these models against each other. Doing so is nontrivial, for two reasons. First, the visual representation in TTM is fully specified, and makes clear testable predictions, but its complexity makes getting intuitions difficult. Here we elucidate a rule of thumb for TTM, which enables us to easily design new and interesting search experiments. FIT, on the other hand, is somewhat ill-defined and hard to pin down. To get around this, rather than designing totally new search experiments, we start with five classic experiments that FIT already claims to explain: T among Ls, 2 among 5s, Q among Os, O among Qs, and an orientation/luminance-contrast conjunction search. We find that fairly subtle changes in these search tasks lead to significant changes in performance, in a direction predicted by TTM, providing definitive evidence in favor of the texture tiling model as opposed to traditional views of search. PMID:27548090

  6. Nanophotonics-enabled smart windows, buildings and wearables

    NASA Astrophysics Data System (ADS)

    Smith, Geoff; Gentle, Angus; Arnold, Matthew; Cortie, Michael

    2016-06-01

    Design and production of spectrally smart windows, walls, roofs and fabrics has a long history, which includes early examples of applied nanophotonics. Evolving nanoscience has a special role to play as it provides the means to improve the functionality of these everyday materials. Improvement in the quality of human experience in any location at any time of year is the goal. Energy savings, thermal and visual comfort indoors and outdoors, visual experience, air quality and better health are all made possible by materials, whose "smartness" is aimed at designed responses to environmental energy flows. The spectral and angle of incidence responses of these nanomaterials must thus take account of the spectral and directional aspects of solar energy and of atmospheric thermal radiation plus the visible and color sensitivity of the human eye. The structures required may use resonant absorption, multilayer stacks, optical anisotropy and scattering to achieve their functionality. These structures are, in turn, constructed out of particles, columns, ultrathin layers, voids, wires, pure and doped oxides, metals, polymers or transparent conductors (TCs). The need to cater for wavelengths stretching from 0.3 to 35 μm including ultraviolet-visible, near-infrared (IR) and thermal or Planck radiation, with a spectrally and directionally complex atmosphere, and both being dynamic, means that hierarchical and graded nanostructures often feature. Nature has evolved to deal with the same energy flows, so biomimicry is sometimes a useful guide.

  7. Visual orientation performances of desert ants (Cataglyphis bicolor) toward astromenotactic directions and horizon landmarks

    NASA Technical Reports Server (NTRS)

    Wehner, R.

    1972-01-01

    Experimental data, on the visual orientation of desert ants toward astromenotactic courses and horizon landmarks involving the cooperation of different direction finding systems, are given. Attempts were made to: (1) determine if the ants choose a compromise direction between astromenotactic angles and the direction toward horizon landmarks when both angles compete with each other or whether they decide alternatively; (2) analyze adaptations of the visual system to the special demands of direction finding by astromenotactic orientation or pattern recognition; and (3) determine parameters of visual learning behavior. Results show separate orientation mechanisms are responsible for the orientation of the ant toward astromenotactic angles and horizon landmarks. If both systems compete with each other, the ants switch over from one system to the other and do not perform a compromise direction.

  8. Exogenous Attention Enables Perceptual Learning.

    PubMed

    Szpiro, Sarit F A; Carrasco, Marisa

    2015-12-01

    Practice can improve visual perception, and these improvements are considered to be a form of brain plasticity. Training-induced learning is time-consuming and requires hundreds of trials across multiple days. The process of learning acquisition is understudied. Can learning acquisition be potentiated by manipulating visual attentional cues? We developed a protocol in which we used task-irrelevant cues for between-groups manipulation of attention during training. We found that training with exogenous attention can enable the acquisition of learning. Remarkably, this learning was maintained even when observers were subsequently tested under neutral conditions, which indicates that a change in perception was involved. Our study is the first to isolate the effects of exogenous attention and to demonstrate its efficacy to enable learning. We propose that exogenous attention boosts perceptual learning by enhancing stimulus encoding. © The Author(s) 2015.

  9. Direct Imaging of Cerebral Thromboemboli Using Computed Tomography and Fibrin-targeted Gold Nanoparticles

    PubMed Central

    Kim, Jeong-Yeon; Ryu, Ju Hee; Schellingerhout, Dawid; Sun, In-Cheol; Lee, Su-Kyoung; Jeon, Sangmin; Kim, Jiwon; Kwon, Ick Chan; Nahrendorf, Matthias; Ahn, Cheol-Hee; Kim, Kwangmeyung; Kim, Dong-Eog

    2015-01-01

    Computed tomography (CT) is the current standard for time-critical decision-making in stroke patients, informing decisions on thrombolytic therapy with tissue plasminogen activator (tPA), which has a narrow therapeutic index. We aimed to develop a CT-based method to directly visualize cerebrovascular thrombi and guide thrombolytic therapy. Glycol-chitosan-coated gold nanoparticles (GC-AuNPs) were synthesized and conjugated to fibrin-targeting peptides, forming fib-GC-AuNP. This targeted imaging agent and non-targeted control agent were characterized in vitro and in vivo in C57Bl/6 mice (n = 107) with FeCl3-induced carotid thrombosis and/or embolic ischemic stroke. Fibrin-binding capacity was superior with fib-GC-AuNPs compared to GC-AuNPs, with thrombi visualized as high density on micro-CT (μCT). μCT imaging using fib-GC-AuNP allowed the prompt detection and quantification of cerebral thrombi, and monitoring of tPA-mediated thrombolytic effect, which reflected histological stroke outcome. Furthermore, recurrent thrombosis could be diagnosed by μCT without further nanoparticle administration for up to 3 weeks. fib-GC-AuNP-based direct cerebral thrombus imaging greatly enhances the value and information obtainable by regular CT, has multiple uses in basic/translational vascular research, and will likely allow personalized thrombolytic therapy in the clinic by a) optimizing tPA dosing to match thrombus burden, b) enabling the rational triage of patients to more radical therapies such as endovascular clot-retrieval, and c) potentially serving as a theranostic platform for targeted delivery of concurrent thrombolysis. PMID:26199648

  10. An interactive visualization tool for mobile objects

    NASA Astrophysics Data System (ADS)

    Kobayashi, Tetsuo

    Recent advancements in mobile devices---such as Global Positioning System (GPS), cellular phones, car navigation system, and radio-frequency identification (RFID)---have greatly influenced the nature and volume of data about individual-based movement in space and time. Due to the prevalence of mobile devices, vast amounts of mobile objects data are being produced and stored in databases, overwhelming the capacity of traditional spatial analytical methods. There is a growing need for discovering unexpected patterns, trends, and relationships that are hidden in the massive mobile objects data. Geographic visualization (GVis) and knowledge discovery in databases (KDD) are two major research fields that are associated with knowledge discovery and construction. Their major research challenges are the integration of GVis and KDD, enhancing the ability to handle large volume mobile objects data, and high interactivity between the computer and users of GVis and KDD tools. This dissertation proposes a visualization toolkit to enable highly interactive visual data exploration for mobile objects datasets. Vector algebraic representation and online analytical processing (OLAP) are utilized for managing and querying the mobile object data to accomplish high interactivity of the visualization tool. In addition, reconstructing trajectories at user-defined levels of temporal granularity with time aggregation methods allows exploration of the individual objects at different levels of movement generality. At a given level of generality, individual paths can be combined into synthetic summary paths based on three similarity measures, namely, locational similarity, directional similarity, and geometric similarity functions. A visualization toolkit based on the space-time cube concept exploits these functionalities to create a user-interactive environment for exploring mobile objects data. Furthermore, the characteristics of visualized trajectories are exported to be utilized for data mining, which leads to the integration of GVis and KDD. Case studies using three movement datasets (personal travel data survey in Lexington, Kentucky, wild chicken movement data in Thailand, and self-tracking data in Utah) demonstrate the potential of the system to extract meaningful patterns from the otherwise difficult to comprehend collections of space-time trajectories.
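
    The dissertation's exact similarity functions are not given in the abstract; the sketch below is a minimal, assumption-laden Python illustration of two of the named measures (locational and directional similarity) for equal-length, time-aligned trajectories.

      # Minimal sketch of two of the similarity measures named in the abstract
      # (locational and directional); the dissertation's exact definitions are not
      # reproduced here, and equal-length, time-aligned trajectories are assumed.
      import numpy as np

      def locational_similarity(a, b):
          """Mean point-wise Euclidean distance (lower = more similar)."""
          a, b = np.asarray(a, float), np.asarray(b, float)
          return np.linalg.norm(a - b, axis=1).mean()

      def directional_similarity(a, b):
          """Mean cosine of the angle between successive displacement vectors."""
          da = np.diff(np.asarray(a, float), axis=0)
          db = np.diff(np.asarray(b, float), axis=0)
          cos = np.sum(da * db, axis=1) / (
              np.linalg.norm(da, axis=1) * np.linalg.norm(db, axis=1) + 1e-12)
          return cos.mean()  # 1.0 = same headings, -1.0 = opposite headings

      traj1 = [(0, 0), (1, 1), (2, 2), (3, 3)]
      traj2 = [(0, 1), (1, 2), (2, 3), (3, 4)]
      print(locational_similarity(traj1, traj2))   # 1.0
      print(directional_similarity(traj1, traj2))  # 1.0 (parallel paths)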

  11. Computable visually observed phenotype ontological framework for plants

    PubMed Central

    2011-01-01

    Background The ability to search for and precisely compare similar phenotypic appearances within and across species has vast potential in plant science and genetic research. The difficulty in doing so lies in the fact that many visual phenotypic data, especially visually observed phenotypes that often times cannot be directly measured quantitatively, are in the form of text annotations, and these descriptions are plagued by semantic ambiguity, heterogeneity, and low granularity. Though several bio-ontologies have been developed to standardize phenotypic (and genotypic) information and permit comparisons across species, these semantic issues persist and prevent precise analysis and retrieval of information. A framework suitable for the modeling and analysis of precise computable representations of such phenotypic appearances is needed. Results We have developed a new framework called the Computable Visually Observed Phenotype Ontological Framework for plants. This work provides a novel quantitative view of descriptions of plant phenotypes that leverages existing bio-ontologies and utilizes a computational approach to capture and represent domain knowledge in a machine-interpretable form. This is accomplished by means of a robust and accurate semantic mapping module that automatically maps high-level semantics to low-level measurements computed from phenotype imagery. The framework was applied to two different plant species with semantic rules mined and an ontology constructed. Rule quality was evaluated and showed high quality rules for most semantics. This framework also facilitates automatic annotation of phenotype images and can be adopted by different plant communities to aid in their research. Conclusions The Computable Visually Observed Phenotype Ontological Framework for plants has been developed for more efficient and accurate management of visually observed phenotypes, which play a significant role in plant genomics research. The uniqueness of this framework is its ability to bridge the knowledge of informaticians and plant science researchers by translating descriptions of visually observed phenotypes into standardized, machine-understandable representations, thus enabling the development of advanced information retrieval and phenotype annotation analysis tools for the plant science community. PMID:21702966

  12. Domain Visualization Using VxInsight[R] for Science and Technology Management.

    ERIC Educational Resources Information Center

    Boyack, Kevin W.; Wylie, Brian N.; Davidson, George S.

    2002-01-01

    Presents the application of a knowledge visualization tool, VxInsight[R], to enable domain analysis for science and technology management. Uses data mining from sources of bibliographic information to define subsets of relevant information and discusses citation mapping, text mapping, and journal mapping. (Author/LRW)

  13. An Evaluation of Multimodal Interactions with Technology while Learning Science Concepts

    ERIC Educational Resources Information Center

    Anastopoulou, Stamatina; Sharples, Mike; Baber, Chris

    2011-01-01

    This paper explores the value of employing multiple modalities to facilitate science learning with technology. In particular, it is argued that when multiple modalities are employed, learners construct strong relations between physical movement and visual representations of motion. Body interactions with visual representations, enabled by…

  14. Art Education and At-Risk Youth: Enabling Factors of Visual Expressions.

    ERIC Educational Resources Information Center

    O'Thearling, Sibyl; Bickley-Green, Cynthia Ann

    1996-01-01

    Examines a visual art program for at-risk students that attempts to increase self-esteem, stimulate inquiry, and develop critical thinking through art criticism and self- expression. Summarizes the responses of 11 at-risk students and 35 general education college students to the question, "What is art?" (MJP)

  15. Solar System Visualizations

    NASA Technical Reports Server (NTRS)

    Brown, Alison M.

    2005-01-01

    Solar System Visualization products enable scientists to compare models and measurements in new ways that enhance the scientific discovery process, enhance the information content and understanding of the science results for both science colleagues and the public, and create visually appealing and intellectually stimulating visualization products. Missions supported include MER, MRO, and Cassini. Image products produced include pan and zoom animations of large mosaics to reveal the details of surface features and topography, animations into registered multi-resolution mosaics to provide context for microscopic images, 3D anaglyphs from left and right stereo pairs, and screen captures from video footage. Specific products include a three-part context animation of the Cassini Enceladus encounter highlighting images from 350 to 4 meters per pixel resolution; Mars Reconnaissance Orbiter screen captures illustrating various instruments during assembly and testing at the Payload Hazardous Servicing Facility at Kennedy Space Center; and an animation of Mars Exploration Rover Opportunity's 'Rub al Khali' panorama where the rover was stuck in the deep fine sand for more than a month. This task creates new visualization products that enable new science results and enhance the public's understanding of the Solar System and NASA's missions of exploration.
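
    As a side note on one of the products mentioned above, a red-cyan anaglyph is commonly assembled by taking the red channel from the left-eye image and the green and blue channels from the right-eye image. The short Python sketch below illustrates that general recipe; the file names are hypothetical and this is not the production pipeline used by the task.

      # A common red-cyan anaglyph construction: red channel from the left image,
      # green and blue from the right.  File names are hypothetical; both images
      # are assumed to be the same size.  Illustration of the general technique,
      # not the pipeline used by the task described above.
      import numpy as np
      from PIL import Image

      left = np.asarray(Image.open("left.png").convert("RGB"))
      right = np.asarray(Image.open("right.png").convert("RGB"))

      anaglyph = np.zeros_like(left)
      anaglyph[..., 0] = left[..., 0]      # red from the left eye
      anaglyph[..., 1:] = right[..., 1:]   # green and blue from the right eye

      Image.fromarray(anaglyph).save("anaglyph.png")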

  16. Three-dimensional retinal imaging with high-speed ultrahigh-resolution optical coherence tomography.

    PubMed

    Wojtkowski, Maciej; Srinivasan, Vivek; Fujimoto, James G; Ko, Tony; Schuman, Joel S; Kowalczyk, Andrzej; Duker, Jay S

    2005-10-01

    To demonstrate high-speed, ultrahigh-resolution, 3-dimensional optical coherence tomography (3D OCT) and new protocols for retinal imaging. Ultrahigh-resolution OCT using broadband light sources achieves axial image resolutions of approximately 2 μm, compared with the standard 10-μm resolution of current commercial OCT instruments. High-speed OCT using spectral/Fourier domain detection enables dramatic increases in imaging speeds. Three-dimensional OCT retinal imaging is performed in normal human subjects using high-speed ultrahigh-resolution OCT. Three-dimensional OCT data of the macula and optic disc are acquired using a dense raster scan pattern. New processing and display methods for generating virtual OCT fundus images; cross-sectional OCT images with arbitrary orientations; quantitative maps of retinal, nerve fiber layer, and other intraretinal layer thicknesses; and optic nerve head topographic parameters are demonstrated. Three-dimensional OCT imaging enables new imaging protocols that improve visualization and mapping of retinal microstructure. An OCT fundus image can be generated directly from the 3D OCT data, which enables precise and repeatable registration of cross-sectional OCT images and thickness maps with fundus features. Optical coherence tomography images with arbitrary orientations, such as circumpapillary scans, can be generated from 3D OCT data. Mapping of total retinal thickness and thicknesses of the nerve fiber layer, photoreceptor layer, and other intraretinal layers is demonstrated. Measurement of optic nerve head topography and disc parameters is also possible. Three-dimensional OCT enables measurements that are similar to those of standard instruments, including the StratusOCT, GDx, HRT, and RTA. Three-dimensional OCT imaging can be performed using high-speed ultrahigh-resolution OCT. Three-dimensional OCT provides comprehensive visualization and mapping of retinal microstructures. The high data acquisition speeds enable high-density data sets with large numbers of transverse positions on the retina, which reduces the possibility of missing focal pathologies. In addition to providing image information such as OCT cross-sectional images, OCT fundus images, and 3D rendering, quantitative measurement and mapping of intraretinal layer thickness and topographic features of the optic disc are possible. We hope that 3D OCT imaging may help to elucidate the structural changes associated with retinal disease as well as improve early diagnosis and monitoring of disease progression and response to treatment.
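
    The OCT fundus image described above is typically obtained by integrating each A-scan's reflectance along depth, which is why it registers exactly with cross-sections extracted from the same volume. A minimal Python sketch of that projection, using a synthetic volume and an assumed (y, x, depth) axis order, is shown below.

      # Sketch of generating an en face "OCT fundus image" from a 3-D OCT volume by
      # integrating reflectance along depth, as described in the abstract.  The
      # volume here is synthetic; the (y, x, depth) axis order is an assumption.
      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(0)
      volume = rng.random((256, 256, 512))   # (y-scans, x-scans, depth samples)

      fundus = volume.sum(axis=-1)           # integrate each A-scan along depth
      fundus = (fundus - fundus.min()) / (fundus.max() - fundus.min())  # normalize

      plt.imshow(fundus, cmap="gray")
      plt.title("En face OCT fundus projection (synthetic data)")
      plt.axis("off")
      plt.show()

      # Any cross-sectional image, e.g. a horizontal B-scan, comes from the same
      # volume and is therefore registered to the fundus projection by construction.
      b_scan = volume[128, :, :]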

  17. Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
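
    The report does not reproduce the model's equations, but the kind of closed-form accuracy expression it refers to is related to the standard signal-detection result for localizing a target among M positions, P(correct) = ∫ φ(x − d′) Φ(x)^(M−1) dx, where φ and Φ are the standard normal density and distribution functions. The sketch below evaluates that generic formula numerically; it is not the authors' guided-search extension.

      # Standard signal-detection-theory accuracy for localizing one target among
      # M positions (max-observer rule): P(c) = integral of pdf(x - d') * cdf(x)^(M-1).
      # This is the generic SDT result the abstract alludes to, not the authors'
      # specific guided-search extension.
      import numpy as np
      from scipy.stats import norm
      from scipy.integrate import quad

      def p_correct(d_prime, m):
          """Probability that the target location yields the maximum response."""
          integrand = lambda x: norm.pdf(x - d_prime) * norm.cdf(x) ** (m - 1)
          val, _ = quad(integrand, -np.inf, np.inf)
          return val

      print(p_correct(0.0, 4))   # ~0.25, chance level for 4 locations
      print(p_correct(2.0, 4))   # higher accuracy for a more detectable target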

  18. Quantitative ex-vivo micro-computed tomographic imaging of blood vessels and necrotic regions within tumors.

    PubMed

    Downey, Charlene M; Singla, Arvind K; Villemaire, Michelle L; Buie, Helen R; Boyd, Steven K; Jirik, Frank R

    2012-01-01

    Techniques for visualizing and quantifying the microvasculature of tumors are essential not only for studying angiogenic processes but also for monitoring the effects of anti-angiogenic treatments. Given the relatively limited information that can be gleaned from conventional 2-D histological analyses, there has been considerable interest in methods that enable the 3-D assessment of the vasculature. To this end, we employed a polymerizing intravascular contrast medium (Microfil) and micro-computed tomography (micro-CT) in combination with a maximal spheres direct 3-D analysis method to visualize and quantify ex-vivo vessel structural features, and to define regions of hypoperfusion within tumors that would be indicative of necrosis. Employing these techniques we quantified the effects of a vascular disrupting agent on the tumor vasculature. The methods described herein for quantifying whole tumor vascularity represent a significant advance in the 3-D study of tumor angiogenesis and evaluation of novel therapeutics, and will also find potential application in other fields where quantification of blood vessel structure and necrosis are important outcome parameters.
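
    The maximal-spheres analysis itself belongs to the micro-CT software used in the study; as a rough 2-D analogue of the underlying idea, the sketch below reads approximate local vessel radii from a Euclidean distance transform sampled along the centerline of a synthetic vessel mask.

      # Rough 2-D analogue of a maximal-spheres thickness measurement: the Euclidean
      # distance transform evaluated on the vessel centerline approximates the local
      # radius.  This illustrates the general idea only, not the micro-CT software's
      # actual algorithm.
      import numpy as np
      from scipy.ndimage import distance_transform_edt
      from skimage.morphology import skeletonize

      mask = np.zeros((64, 64), dtype=bool)
      mask[30:36, 5:60] = True               # a synthetic "vessel" 6 pixels thick

      dist = distance_transform_edt(mask)    # distance to the nearest background pixel
      centerline = skeletonize(mask)
      radii = dist[centerline]               # approximate local radii along the vessel

      print("mean radius (px):", radii.mean())
      print("max radius (px):", radii.max())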

  19. Increasing Electrochemiluminescence Intensity of a Wireless Electrode Array Chip by Thousands of Times Using a Diode for Sensitive Visual Detection by a Digital Camera.

    PubMed

    Qi, Liming; Xia, Yong; Qi, Wenjing; Gao, Wenyue; Wu, Fengxia; Xu, Guobao

    2016-01-19

    A wireless electrochemiluminescence (ECL) electrode microarray chip and a dramatic increase in ECL obtained by embedding a diode in an electromagnetic receiver coil are reported here for the first time. The newly designed device consists of a chip and a transmitter. The chip has an electromagnetic receiver coil, a mini-diode, and a gold electrode array. The mini-diode can rectify alternating current into direct current and thus enhance ECL intensities by 18,000 times, enabling sensitive visual detection using common cameras or smart phones as low-cost detectors. The detection limit of hydrogen peroxide using a digital camera is comparable to that using photomultiplier tube (PMT)-based detectors. Coupled with a PMT-based detector, the device can detect luminol with higher sensitivity with linear ranges from 10 nM to 1 mM. Because of its advantages, including high sensitivity, high throughput, low cost, high portability, and simplicity, the device is promising for point-of-care testing, drug screening, and high throughput analysis.

  20. Teledermatology: from historical perspective to emerging techniques of the modern era: part II: Emerging technologies in teledermatology, limitations and future directions.

    PubMed

    Coates, Sarah J; Kvedar, Joseph; Granstein, Richard D

    2015-04-01

    Telemedicine is the use of telecommunications technology to support health care at a distance. Dermatology relies on visual cues that are easily captured by imaging technologies, making it ideally suited for this care model. Advances in telecommunications technology have made it possible to deliver high-quality skin care when patient and provider are separated by both time and space. Most recently, mobile devices that connect users through cellular data networks have enabled teledermatologists to instantly communicate with primary care providers throughout the world. The availability of teledermoscopy provides an additional layer of visual information to enhance the quality of teleconsultations. Teledermatopathology has become increasingly feasible because of advances in digitization of entire microscopic slides and robot-assisted microscopy. Barriers to additional expansion of these services include underdeveloped infrastructure in remote regions, fragmented electronic medical records, and varying degrees of reimbursement. Teleconsultants also confront special legal and ethical challenges as they work toward building a global network of practicing physicians. Copyright © 2014 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  1. Visualization of protein interactions in living Drosophila embryos by the bimolecular fluorescence complementation assay.

    PubMed

    Hudry, Bruno; Viala, Séverine; Graba, Yacine; Merabet, Samir

    2011-01-28

    Protein interactions control the regulatory networks underlying developmental processes. The understanding of developmental complexity will, therefore, require the characterization of protein interactions within their proper environment. The bimolecular fluorescence complementation (BiFC) technology offers this possibility as it enables the direct visualization of protein interactions in living cells. However, its potential has rarely been applied in embryos of animal model organisms and was only performed under transient protein expression levels. Using a Hox protein partnership as a test case, we investigated the suitability of BiFC for the study of protein interactions in the living Drosophila embryo. Importantly, all BiFC parameters were established with constructs that were stably expressed under the control of endogenous promoters. Under these physiological conditions, we showed that BiFC is specific and sensitive enough to analyse dynamic protein interactions. We next used BiFC in a candidate interaction screen, which led to the identification of several Hox protein partners. Our results establish the general suitability of BiFC for revealing and studying protein interactions in their physiological context during the rapid course of Drosophila embryonic development.

  2. Tools for visualization of phosphoinositides in the cell nucleus.

    PubMed

    Kalasova, Ilona; Fáberová, Veronika; Kalendová, Alžběta; Yildirim, Sukriye; Uličná, Lívia; Venit, Tomáš; Hozák, Pavel

    2016-04-01

    Phosphoinositides (PIs) are glycerol-based phospholipids containing hydrophilic inositol ring. The inositol ring is mono-, bis-, or tris-phosphorylated yielding seven PIs members. Ample evidence shows that PIs localize both to the cytoplasm and to the nucleus. However, tools for direct visualization of nuclear PIs are limited and many studies thus employ indirect approaches, such as staining of their metabolic enzymes. Since localization and mobility of PIs differ from their metabolic enzymes, these approaches may result in incomplete data. In this paper, we tested commercially available PIs antibodies by light microscopy on fixed cells, tested their specificity using protein-lipid overlay assay and blocking assay, and compared their staining patterns. Additionally, we prepared recombinant PIs-binding domains and tested them on both fixed and live cells by light microscopy. The results provide a useful overview of usability of the tools tested and stress that the selection of adequate tools is critical. Knowing the localization of individual PIs in various functional compartments should enable us to better understand the roles of PIs in the cell nucleus.

  3. Molecular visualizing and quantifying immune-associated peroxynitrite fluxes in phagocytes and mouse inflammation model.

    PubMed

    Li, Zan; Yan, Shi-Hai; Chen, Chen; Geng, Zhi-Rong; Chang, Jia-Yin; Chen, Chun-Xia; Huang, Bing-Huan; Wang, Zhi-Lin

    2017-04-15

    Reactions of peroxynitrite (ONOO⁻) with biomolecules can lead to cytotoxic and cytoprotective events. Due to the difficulty of directly and unambiguously measuring its levels, most of the beneficial effects associated with ONOO⁻ in vivo remain controversial or poorly characterized. Recently, optical imaging has served as a powerful noninvasive approach to studying ONOO⁻ in living systems. However, ratiometric probes for ONOO⁻ are currently lacking. Herein, we report the design, synthesis, and biological evaluation of F482, a novel fluorescence indicator that relies on ONOO⁻-induced diene oxidation. The remarkable sensitivity, selectivity, and photostability of F482 enabled us to visualize basal ONOO⁻ in immune-stimulated phagocyte cells and quantify its generation in phagosomes by high-throughput flow cytometry analysis. With the aid of in vivo ONOO⁻ imaging in a mouse inflammation model assisted by F482, we envision that F482 will find widespread applications in the study of the ONOO⁻ biology associated with physiological and pathological processes in vitro and in vivo. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. A graph algebra for scalable visual analytics.

    PubMed

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.
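
    The abstract names selection and aggregation as the algebra's atomic operators without giving their formal definitions; the Python sketch below shows one plausible, simplified reading of those two operators over a plain node/edge representation, not the paper's formal algebra.

      # Minimal sketch of two atomic graph-algebra operators named in the abstract,
      # selection and aggregation, over a plain edge list with node attributes.
      # These are illustrative definitions only.

      def select(nodes, edges, predicate):
          """Keep nodes satisfying predicate and the edges between them."""
          kept = {n for n, attrs in nodes.items() if predicate(attrs)}
          return ({n: nodes[n] for n in kept},
                  [(u, v) for (u, v) in edges if u in kept and v in kept])

      def aggregate(nodes, edges, key):
          """Collapse nodes sharing the same key value into super-nodes."""
          group = {n: key(attrs) for n, attrs in nodes.items()}
          super_nodes = {g: {"members": [n for n in nodes if group[n] == g]}
                         for g in set(group.values())}
          super_edges = {(group[u], group[v]) for (u, v) in edges
                         if group[u] != group[v]}
          return super_nodes, sorted(super_edges)

      nodes = {"a": {"type": "host"}, "b": {"type": "host"}, "c": {"type": "user"}}
      edges = [("a", "b"), ("b", "c")]
      print(select(nodes, edges, lambda attrs: attrs["type"] == "host"))
      print(aggregate(nodes, edges, key=lambda attrs: attrs["type"]))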

  5. Real-space identification of intermolecular bonding with atomic force microscopy.

    PubMed

    Zhang, Jun; Chen, Pengcheng; Yuan, Bingkai; Ji, Wei; Cheng, Zhihai; Qiu, Xiaohui

    2013-11-01

    We report a real-space visualization of the formation of hydrogen bonding in 8-hydroxyquinoline (8-hq) molecular assemblies on a Cu(111) substrate, using noncontact atomic force microscopy (NC-AFM). The atomically resolved molecular structures enable a precise determination of the characteristics of hydrogen bonding networks, including the bonding sites, orientations, and lengths. The observation of bond contrast was interpreted by ab initio density functional calculations, which indicated the electron density contribution from the hybridized electronic state of the hydrogen bond. Intermolecular coordination between the dehydrogenated 8-hq and Cu adatoms was also revealed by the submolecular resolution AFM characterization. The direct identification of local bonding configurations by NC-AFM would facilitate detailed investigations of intermolecular interactions in complex molecules with multiple active sites.

  6. Myocardial Mapping With Cardiac Magnetic Resonance: The Diagnostic Value of Novel Sequences.

    PubMed

    Sanz, Javier; LaRocca, Gina; Mirelis, Jesús G

    2016-09-01

    Cardiac magnetic resonance has evolved into a crucial modality for the evaluation of cardiomyopathy due to its ability to characterize myocardial structure and function. In the last few years, interest has increased in the potential of "mapping" techniques that provide direct and objective quantification of myocardial properties such as T1, T2, and T2* times. These approaches enable the detection of abnormalities that affect the myocardium in a diffuse fashion and/or may be too subtle for visual recognition. This article reviews the current state of myocardial T1 and T2-mapping in both health and disease. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  7. [Angioscanning in the diagnosis of breast neoplasms].

    PubMed

    Trishkin, V A; Fadeev, N P; Tetel'baum, B I; Dymarskiĭ, L Iu

    1976-01-01

    Angioscanning with macroalbumin J131 was performed in 30 patients with different mammary gland diseases (breast cancer in 22, breast sarcoma in 1, cystic fibroadenomatosis in 6, and one patient without any breast pathology). Twenty-eight of the thirty patients underwent surgery, and the diagnosis was confirmed histologically. Injection of macroalbumin J131 into the subclavian artery enabled the authors to visualize malignant neoplasms located mainly in the external quadrants of the mammary gland. The smallest tumor revealed by angioscanning was 1.5 cm in diameter. The method of injecting the isotope directly into the subclavian artery, employed by the authors, may be recommended for patients whose primary tumor is localized in the external half of the gland and in the axillary process.

  8. DEVELOPMENTS IN GRworkbench

    NASA Astrophysics Data System (ADS)

    Moylan, Andrew; Scott, Susan M.; Searle, Anthony C.

    2006-02-01

    The software tool GRworkbench is an ongoing project in visual, numerical General Relativity at The Australian National University. Recently, GRworkbench has been significantly extended to facilitate numerical experimentation in analytically-defined space-times. The numerical differential geometric engine has been rewritten using functional programming techniques, enabling objects which are normally defined as functions in the formalism of differential geometry and General Relativity to be directly represented as function variables in the C++ code of GRworkbench. The new functional differential geometric engine allows for more accurate and efficient visualisation of objects in space-times and makes new, efficient computational techniques available. Motivated by the desire to investigate a recent scientific claim using GRworkbench, new tools for numerical experimentation have been implemented, allowing for the simulation of complex physical situations.
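
    GRworkbench itself is written in C++, but the idea of representing differential-geometric objects directly as function values can be conveyed with a small Python toy: the metric is an ordinary callable, and Christoffel symbols are assembled from its numerical derivatives. This is an illustration of the style only, not GRworkbench's engine.

      # Toy sketch (Python rather than GRworkbench's C++) of treating differential-
      # geometric objects as function values: the metric is a callable, and the
      # Christoffel symbols are built from its central-difference derivatives.
      import numpy as np

      def schwarzschild_metric(x, M=1.0):
          """Diagonal Schwarzschild metric in (t, r, theta, phi), with G = c = 1."""
          t, r, theta, phi = x
          f = 1.0 - 2.0 * M / r
          return np.diag([-f, 1.0 / f, r**2, (r * np.sin(theta))**2])

      def christoffel(metric, x, h=1e-6):
          """Gamma^a_bc = 1/2 g^ad (d_b g_dc + d_c g_db - d_d g_bc)."""
          g_inv = np.linalg.inv(metric(x))
          dg = np.zeros((4, 4, 4))            # dg[d, b, c] = d_d g_bc
          for d in range(4):
              dx = np.zeros(4)
              dx[d] = h
              dg[d] = (metric(x + dx) - metric(x - dx)) / (2.0 * h)
          gamma = np.zeros((4, 4, 4))
          for a in range(4):
              for b in range(4):
                  for c in range(4):
                      gamma[a, b, c] = 0.5 * sum(
                          g_inv[a, d] * (dg[b, d, c] + dg[c, d, b] - dg[d, b, c])
                          for d in range(4))
          return gamma

      x = np.array([0.0, 10.0, np.pi / 2, 0.0])
      Gamma = christoffel(schwarzschild_metric, x)
      print(Gamma[1, 0, 0])   # Gamma^r_tt = (M/r^2)(1 - 2M/r) = 0.008 for M=1, r=10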

  9. Quantitative imaging of mammalian transcriptional dynamics: from single cells to whole embryos.

    PubMed

    Zhao, Ziqing W; White, Melanie D; Bissiere, Stephanie; Levi, Valeria; Plachta, Nicolas

    2016-12-23

    Probing dynamic processes occurring within the cell nucleus at the quantitative level has long been a challenge in mammalian biology. Advances in bio-imaging techniques over the past decade have enabled us to directly visualize nuclear processes in situ with unprecedented spatial and temporal resolution and single-molecule sensitivity. Here, using transcription as our primary focus, we survey recent imaging studies that specifically emphasize the quantitative understanding of nuclear dynamics in both time and space. These analyses not only inform on previously hidden physical parameters and mechanistic details, but also reveal a hierarchical organizational landscape for coordinating a wide range of transcriptional processes shared by mammalian systems of varying complexity, from single cells to whole embryos.

  10. Design, Synthesis, and Isomerization Studies of Light-Driven Molecular Motors for Single Molecular Imaging

    PubMed Central

    2018-01-01

    The design of a multicomponent system that aims at the direct visualization of a synthetic rotary motor at the single-molecule level on surfaces is presented. The synthesis of two functional motors enabling photochemical rotation and fluorescent detection is described. The light-driven molecular motor is found to operate in the presence of a fluorescent tag if a rigid long rod (32 Å) is installed between both photoactive moieties. The photochemical isomerization and subsequent thermal helix inversion steps are confirmed by ¹H NMR and UV–vis absorption spectroscopies. In addition, the tetra-acid-functionalized motor can be successfully grafted onto amine-coated quartz, and it is shown that the light-responsive rotary motion on surfaces is preserved. PMID:29741383

  11. Coherent Raman Scattering Microscopy in Biology and Medicine.

    PubMed

    Zhang, Chi; Zhang, Delong; Cheng, Ji-Xin

    2015-01-01

    Advancements in coherent Raman scattering (CRS) microscopy have enabled label-free visualization and analysis of functional, endogenous biomolecules in living systems. When compared with spontaneous Raman microscopy, a key advantage of CRS microscopy is the dramatic improvement in imaging speed, which gives rise to real-time vibrational imaging of live biological samples. Using molecular vibrational signatures, recently developed hyperspectral CRS microscopy has improved the readout of chemical information available from CRS images. In this article, we review recent achievements in CRS microscopy, focusing on the theory of the CRS signal-to-noise ratio, imaging speed, technical developments, and applications of CRS imaging in bioscience and clinical settings. In addition, we present possible future directions that the use of this technology may take.

  12. Coherent Raman Scattering Microscopy in Biology and Medicine

    PubMed Central

    Zhang, Chi; Zhang, Delong; Cheng, Ji-Xin

    2016-01-01

    Advancements in coherent Raman scattering (CRS) microscopy have enabled label-free visualization and analysis of functional, endogenous biomolecules in living systems. When compared with spontaneous Raman microscopy, a key advantage of CRS microscopy is the dramatic improvement in imaging speed, which gives rise to real-time vibrational imaging of live biological samples. Using molecular vibrational signatures, recently developed hyperspectral CRS microscopy has improved the readout of chemical information available from CRS images. In this article, we review recent achievements in CRS microscopy, focusing on the theory of the CRS signal-to-noise ratio, imaging speed, technical developments, and applications of CRS imaging in bioscience and clinical settings. In addition, we present possible future directions that the use of this technology may take. PMID:26514285

  13. 47 CFR 80.293 - Check bearings by authorized ship personnel.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ....293 Section 80.293 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL... comparison of simultaneous visual and radio direction finder bearings. At least one comparison bearing must... visual bearing relative to the ship's heading and the difference between the visual and radio direction...

  14. The triticeae toolbox: combining phenotype and genotype data to advance small-grains breeding

    USDA-ARS?s Scientific Manuscript database

    The Triticeae Toolbox (http://triticeaetoolbox.org; T3) is the database schema enabling plant breeders and researchers to combine, visualize, and interrogate the wealth of phenotype and genotype data generated by the Triticeae Coordinated Agricultural Project (TCAP). T3 enables users to define speci...

  15. Cloud-Based Computational Tools for Earth Science Applications

    NASA Astrophysics Data System (ADS)

    Arendt, A. A.; Fatland, R.; Howe, B.

    2015-12-01

    Earth scientists are increasingly required to think across disciplines and utilize a wide range of datasets in order to solve complex environmental challenges. Although significant progress has been made in distributing data, researchers must still invest heavily in developing computational tools to accommodate their specific domain. Here we document our development of lightweight computational data systems aimed at enabling rapid data distribution, analytics and problem solving tools for Earth science applications. Our goal is for these systems to be easily deployable, scalable and flexible to accommodate new research directions. As an example we describe "Ice2Ocean", a software system aimed at predicting runoff from snow and ice in the Gulf of Alaska region. Our backend components include relational database software to handle tabular and vector datasets, Python tools (NumPy, pandas and xray) for rapid querying of gridded climate data, and an energy and mass balance hydrological simulation model (SnowModel). These components are hosted in a cloud environment for direct access across research teams, and can also be accessed via API web services using a REST interface. This API is a vital component of our system architecture, as it enables quick integration of our analytical tools across disciplines, and can be accessed by any existing data distribution centers. We will showcase several data integration and visualization examples to illustrate how our system has expanded our ability to conduct cross-disciplinary research.
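
    The kind of gridded-climate query attributed above to the Python stack (NumPy, pandas and xray, now maintained as xarray) might look like the following sketch; the file name, variable name, and coordinate conventions are assumptions for illustration only.

      # Minimal sketch of a gridded-climate query of the kind the abstract attributes
      # to its Python stack.  The file name, the "air_temperature" variable, and the
      # lat/lon/time coordinate names and orientations are hypothetical.
      import xarray as xr

      ds = xr.open_dataset("gulf_of_alaska_temperature.nc")   # hypothetical file

      # Subset a latitude/longitude box around the Gulf of Alaska and average by season.
      box = ds["air_temperature"].sel(lat=slice(55, 62), lon=slice(-155, -135))
      seasonal_mean = box.groupby("time.season").mean(dim="time")

      # A result like this could be returned as JSON by a small REST endpoint, in the
      # spirit of the Ice2Ocean services described above.
      print(seasonal_mean)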

  16. Emissions Scenario Portal for Visualization of Low Carbon Pathways

    NASA Astrophysics Data System (ADS)

    Friedrich, J.; Hennig, R. J.; Mountford, H.; Altamirano, J. C.; Ge, M.; Fransen, T.

    2016-12-01

    This presentation centers on a new project developed collaboratively by the World Resources Institute (WRI), Google Inc., and the Deep Decarbonization Pathways Project (DDPP). The project aims to develop an online, open portal, the Emissions Scenario Portal (ESP), to enable users to easily visualize a range of future greenhouse gas emission pathways linked to different scenarios of economic and energy developments, drawing from a variety of modeling tools. It is targeted at users who are not modelling experts but are instead policy analysts or advisors, investment analysts, and similar professionals who draw on modelled scenarios to inform their work, and who can benefit from better access to, and transparency around, the wide range of emerging scenarios on ambitious climate action. The ESP will provide information from scenarios in a visually appealing and easy-to-understand manner that enables these users to recognize the opportunities to reduce GHG emissions, the implications of the different scenarios, and the underlying assumptions. To facilitate the application of the portal and tools in policy dialogues, a series of country-specific and potentially sector-specific workshops with key decision-makers and analysts, supported by relevant analysis, will be organized by the key partners and also in broader collaboration with others who might wish to convene relevant groups around the information. This project will provide opportunities for modelers to increase their outreach and visibility in the public space and to directly interact with key audiences of emissions scenarios, such as policy analysts and advisors. The information displayed on the portal will cover a wide range of indicators, sectors and important scenario characteristics such as macroeconomic information, emission factors and policy as well as technology assumptions in order to facilitate comparison. These indicators have been selected based on existing standards (such as the IIASA AR5 database, the Greenhouse Gas Protocol and accounting literature) and stakeholder consultations. Example use cases include technical advisers for governments, NGO/civil society advocates, investors and bankers, modelers and academics, and business sustainability officers.

  17. Integrated Computational System for Aerodynamic Steering and Visualization

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1999-01-01

    In February of 1994, an effort from the Fluid Dynamics and Information Sciences Divisions at NASA Ames Research Center with McDonnell Douglas Aerospace Company and Stanford University was initiated to develop, demonstrate, validate and disseminate automated software for numerical aerodynamic simulation. The goal of the initiative was to develop a tri-discipline approach encompassing CFD, Intelligent Systems, and Automated Flow Feature Recognition to improve the utility of CFD in the design cycle. This approach would then be represented through an intelligent computational system which could accept an engineer's definition of a problem and construct an optimal and reliable CFD solution. Stanford University's role focused on developing technologies that advance visualization capabilities for analysis of CFD data, extract specific flow features useful for the design process, and compare CFD data with experimental data. During the years 1995-1997, Stanford University focused on developing techniques in the area of tensor visualization and flow feature extraction. Software libraries were created enabling feature extraction and exploration of tensor fields. As a proof of concept, a prototype system called the Integrated Computational System (ICS) was developed to demonstrate the CFD design cycle. The current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment are not needed in the comparison. This is often a problem with many data comparison techniques. In addition, since only topology-based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will (1) briefly review the technologies developed during 1995-1997, (2) describe current technologies in the area of comparison techniques, (3) describe the theory of our new method researched during the grant year, (4) summarize a few of the results, and finally (5) discuss work from the last 6 months that directly extends the grant.
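
    The grant's actual comparison method is not detailed here; as a simplified stand-in for the idea of comparing vector fields through their topology, the Python sketch below locates candidate critical points of two 2-D fields (cells where both velocity components change sign) and scores their similarity by the distance between the resulting point sets.

      # Rough sketch of a topology-based comparison in the spirit the report
      # describes: locate candidate critical points of two 2-D vector fields and
      # compare the resulting point sets.  A simplified stand-in, not the method
      # developed under the grant.
      import numpy as np

      def critical_cells(u, v):
          """Grid cells in which both velocity components change sign."""
          cells = []
          for i in range(u.shape[0] - 1):
              for j in range(u.shape[1] - 1):
                  u_cell = u[i:i+2, j:j+2]
                  v_cell = v[i:i+2, j:j+2]
                  if u_cell.min() < 0 < u_cell.max() and v_cell.min() < 0 < v_cell.max():
                      cells.append((i, j))
          return cells

      # Two synthetic fields: a vortex centred at the origin and a shifted copy.
      y, x = np.mgrid[-1:1:32j, -1:1:32j]
      u1, v1 = -y, x
      u2, v2 = -(y - 0.2), (x - 0.2)

      c1, c2 = critical_cells(u1, v1), critical_cells(u2, v2)
      # A crude similarity score: smallest distance between the critical-point sets.
      d = min(np.hypot(i1 - i2, j1 - j2) for (i1, j1) in c1 for (i2, j2) in c2)
      print(len(c1), len(c2), d)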

  18. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be the control of regional electric power systems. As these systems encompass broader computer networks than ever, their construction is becoming very difficult. Conventional text-oriented environments are useful for programming individual processors. However, they are clearly insufficient for programming a large and complicated system that includes large numbers of interconnected computers; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphical programming environment in which complicated software can be visualized in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (the capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse the block diagram (which is useful for checking relationships among large numbers of processes or processors) and the time chart (which is useful for checking precise timing for synchronization) into a single 3D space. The 3D representation provides a capability for direct and intuitive planning and understanding of the complicated relationships among many concurrent processes. To realize the 3D representation, technology enabling easy handling of virtual 3D objects is a necessity. Using a stereo display system and a gesture input device (VPL DataGlove), a prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment has been implemented on the workstation. According to preliminary assessments, a 50 percent reduction in programming effort was achieved by using the virtual 3D environment. The authors expect that the 3D environment has considerable potential in the field of software engineering.

  19. Constructive, collaborative, contextual, and self-directed learning in surface anatomy education.

    PubMed

    Bergman, Esther M; Sieben, Judith M; Smailbegovic, Ida; de Bruin, Anique B H; Scherpbier, Albert J J A; van der Vleuten, Cees P M

    2013-01-01

    Anatomy education often consists of a combination of lectures and laboratory sessions, the latter frequently including surface anatomy. Studying surface anatomy enables students to elaborate on their knowledge of the cadaver's static anatomy by allowing them to visualize how structures, especially those of the musculoskeletal system, move and function in a living human being. A recent development in teaching methods for surface anatomy is body painting, which several studies suggest increases both student motivation and knowledge acquisition. This article focuses on a teaching approach and is a translational contribution to the existing literature. In line with best evidence medical education, the aim of this article is twofold: to briefly inform teachers about constructivist learning theory and elaborate on the principles of constructive, collaborative, contextual, and self-directed learning; and to provide teachers with an example of how to implement these learning principles to change the approach to teaching surface anatomy. Student evaluations of this new approach demonstrate that the application of these learning principles leads to higher student satisfaction. However, research suggests that even better results could be achieved by further adjustments in the application of the contextual and self-directed learning principles. Successful implementation and guidance of peer physical examination is crucial for the described approach, but research shows that other options, such as using life models, seem to work equally well. Future research on surface anatomy should focus on increasing students' ability to apply anatomical knowledge and on defining the settings in which certain teaching methods and approaches have a positive effect. Copyright © 2012 American Association of Anatomists.

  20. Advanced correlation grid: Analysis and visualisation of functional connectivity among multiple spike trains.

    PubMed

    Masud, Mohammad Shahed; Borisyuk, Roman; Stuart, Liz

    2017-07-15

    This study analyses multiple spike train (MST) data, defines its functional connectivity, and subsequently visualises an accurate diagram of connections. This is a challenging problem; for example, it is difficult to distinguish the common input to two spike trains from a direct functional connection between them. The new method presented in this paper is based on the traditional pairwise cross-correlation function (CCF) and a new combination of statistical techniques. First, the CCF is used to create the Advanced Correlation Grid (ACG), in which both the significant peak of the CCF and the corresponding time delay are used for detailed analysis of connectivity. Second, these two features of functional connectivity are used to classify connections. Finally, a visualization technique is used to represent the topology of the functional connections. Examples are presented to demonstrate the new Advanced Correlation Grid method and to show how it enables discrimination between (i) influence from one spike train on another through an intermediate spike train and (ii) influence from one common spike train on another pair of analysed spike trains. The ACG method enables scientists to automatically distinguish direct connections from spurious connections, such as common-source and indirect connections, whereas existing methods require in-depth analysis to identify such connections. The ACG is a new and effective method for studying the functional connectivity of multiple spike trains: it can accurately identify all direct connections and can automatically distinguish common-source and indirect connections. Copyright © 2017 Elsevier B.V. All rights reserved.
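
    As a rough illustration of the first step described above, the sketch below bins two spike-time arrays, computes their normalized pairwise cross-correlation, and returns the peak value together with its time lag, the two features the ACG uses to classify connections. It is a minimal sketch under assumed units (milliseconds) and bin sizes, not the authors' implementation; the function name and parameters are hypothetical.

      import numpy as np

      def ccf_peak(spikes_a, spikes_b, bin_ms=1.0, duration_ms=2000.0, max_lag_ms=50.0):
          """Bin two spike-time arrays (in ms) and return (peak CCF value, lag in ms)."""
          edges = np.arange(0.0, duration_ms + bin_ms, bin_ms)
          a = np.histogram(spikes_a, bins=edges)[0].astype(float)
          b = np.histogram(spikes_b, bins=edges)[0].astype(float)
          a -= a.mean()
          b -= b.mean()
          # Full normalized cross-correlation, then restrict to lags within +/- max_lag_ms.
          ccf = np.correlate(a, b, mode="full") / (a.std() * b.std() * len(a))
          lags = np.arange(-(len(a) - 1), len(a)) * bin_ms
          keep = np.abs(lags) <= max_lag_ms
          ccf, lags = ccf[keep], lags[keep]
          i = int(np.argmax(np.abs(ccf)))
          return ccf[i], lags[i]

      # Toy example: train B echoes train A with a ~5 ms delay, so a single strong
      # peak should appear at a lag of about 5 ms (sign depends on the lag convention).
      rng = np.random.default_rng(0)
      a_times = np.sort(rng.uniform(0.0, 2000.0, 200))
      b_times = a_times + 5.0 + rng.normal(0.0, 0.5, a_times.size)
      print(ccf_peak(a_times, b_times))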
