Interaction Junk: User Interaction-Based Evaluation of Visual Analytic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; North, Chris
2012-10-14
With the growing need for visualization to aid users in understanding large, complex datasets, the ability for users to interact with and explore these datasets is critical. As visual analytic systems have advanced to leverage powerful computational models and data analytics capabilities, the modes by which users engage and interact with the information are limited. Often, users are taxed with directly manipulating parameters of these models through traditional GUIs (e.g., using sliders to directly manipulate the value of a parameter). However, the purpose of user interaction in visual analytic systems is to enable visual data exploration, where users can focus on their task, as opposed to the tool or system. As a result, users can engage freely in data exploration and decision-making, for the purpose of gaining insight. In this position paper, we discuss how evaluating visual analytic systems can be approached through user interaction analysis, where the goal is to minimize the cognitive translation between the visual metaphor and the mode of interaction (i.e., reducing the “Interaction junk”). We motivate this concept through a discussion of traditional GUIs used in visual analytics for direct manipulation of model parameters, and the importance of designing interactions that support visual data exploration.
Interactive Exploration of Cosmological Dark-Matter Simulation Data.
Scherzinger, Aaron; Brix, Tobias; Drees, Dominik; Volker, Andreas; Radkov, Kiril; Santalidis, Niko; Fieguth, Alexander; Hinrichs, Klaus H
2017-01-01
The winning entry of the 2015 IEEE Scientific Visualization Contest, this article describes a visualization tool for cosmological data resulting from dark-matter simulations. The proposed system helps users explore all aspects of the data at once and receive more detailed information about structures of interest at any time. Moreover, novel methods for visualizing and interactively exploring dark-matter halo substructures are proposed.
Wu, Yubao; Zhu, Xiaofeng; Chen, Jian; Zhang, Xiang
2013-11-01
Epistasis (gene-gene interaction) detection in large-scale genetic association studies has recently drawn extensive research interest, as many complex traits are likely caused by the joint effect of multiple genetic factors. The large number of possible interactions poses both statistical and computational challenges. A variety of approaches have been developed to address the analytical challenges in epistatic interaction detection. These methods usually output the identified genetic interactions and store them in flat file formats. It is highly desirable to develop an effective visualization tool to further investigate the detected interactions and unravel hidden interaction patterns. We have developed EINVis, a novel visualization tool that is specifically designed to analyze and explore genetic interactions. EINVis displays interactions among genetic markers as a network. It utilizes a circular layout (specifically, a tree ring view) to simultaneously visualize the hierarchical interactions between single nucleotide polymorphisms (SNPs), genes, and chromosomes, and the network structure formed by these interactions. Using EINVis, the user can distinguish marginal effects from interactions, track interactions involving more than two markers, visualize interactions at different levels, and detect proxy SNPs based on linkage disequilibrium. EINVis is an effective and user-friendly free visualization tool for analyzing and exploring genetic interactions. It is publicly available with detailed documentation and an online tutorial on the web at http://filer.case.edu/yxw407/einvis/.
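As a rough illustration of the kind of network view described above (a sketch of the general idea only, not EINVis itself), the following Python snippet lays out a handful of invented SNP-SNP interactions on a circular layout with networkx:

```python
# Minimal sketch of drawing a genetic-interaction network on a circular layout.
# Illustration of the general idea, not EINVis; the SNP names and weights are
# made up for the example.
import networkx as nx
import matplotlib.pyplot as plt

interactions = [                     # (SNP A, SNP B, interaction strength)
    ("rs0001", "rs0002", 0.9),
    ("rs0001", "rs0003", 0.4),
    ("rs0002", "rs0004", 0.7),
    ("rs0003", "rs0005", 0.5),
    ("rs0004", "rs0005", 0.8),
]

G = nx.Graph()
G.add_weighted_edges_from(interactions)

pos = nx.circular_layout(G)          # place all SNPs on a circle
widths = [3 * G[u][v]["weight"] for u, v in G.edges()]

nx.draw_networkx_nodes(G, pos, node_size=600)
nx.draw_networkx_labels(G, pos, font_size=8)
nx.draw_networkx_edges(G, pos, width=widths)
plt.axis("off")
plt.show()
```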
Three visualization approaches for communicating and exploring PIT tag data
Letcher, Benjamin; Walker, Jeffrey D.; O'Donnell, Matthew; Whiteley, Andrew R.; Nislow, Keith; Coombs, Jason
2018-01-01
As the number, size and complexity of ecological datasets has increased, narrative and interactive raw data visualizations have emerged as important tools for exploring and understanding these large datasets. As a demonstration, we developed three visualizations to communicate and explore passive integrated transponder tag data from two long-term field studies. We created three independent visualizations for the same dataset, allowing separate entry points for users with different goals and experience levels. The first visualization uses a narrative approach to introduce users to the study. The second visualization provides interactive cross-filters that allow users to explore multi-variate relationships in the dataset. The last visualization allows users to visualize the movement histories of individual fish within the stream network. This suite of visualization tools allows a progressive discovery of more detailed information and should make the data accessible to users with a wide variety of backgrounds and interests.
Chronodes: Interactive Multifocus Exploration of Event Sequences
POLACK, PETER J.; CHEN, SHANG-TSE; KAHNG, MINSUK; DE BARBARO, KAYA; BASOLE, RAHUL; SHARMIN, MOUSHUMI; CHAU, DUEN HORNG
2018-01-01
The advent of mobile health (mHealth) technologies challenges the capabilities of current visualizations, interactive tools, and algorithms. We present Chronodes, an interactive system that unifies data mining and human-centric visualization techniques to support explorative analysis of longitudinal mHealth data. Chronodes extracts and visualizes frequent event sequences that reveal chronological patterns across multiple participant timelines of mHealth data. It then combines novel interaction and visualization techniques to enable multifocus event sequence analysis, which allows health researchers to interactively define, explore, and compare groups of participant behaviors using event sequence combinations. Through summarizing insights gained from a pilot study with 20 behavioral and biomedical health experts, we discuss Chronodes’s efficacy and potential impact in the mHealth domain. Ultimately, we outline important open challenges in mHealth, and offer recommendations and design guidelines for future research. PMID:29515937
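To make the notion of frequent event sequences concrete, here is a minimal sketch (not Chronodes' algorithm) that counts how many invented participant timelines contain each short event pattern:

```python
# Simplified sketch of frequent event-sequence extraction: count contiguous
# 2- and 3-event patterns across participant timelines and keep those that
# appear in many timelines. Event names are illustrative only.
from collections import Counter

timelines = [
    ["wake", "exercise", "meal", "stress", "meal", "sleep"],
    ["wake", "meal", "stress", "meal", "exercise", "sleep"],
    ["wake", "exercise", "meal", "meal", "sleep"],
]

def ngrams(seq, n):
    return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

# Count in how many timelines each pattern appears at least once.
support = Counter()
for tl in timelines:
    seen = set()
    for n in (2, 3):
        seen.update(ngrams(tl, n))
    support.update(seen)

min_support = 2
frequent = {p: c for p, c in support.items() if c >= min_support}
for pattern, count in sorted(frequent.items(), key=lambda kv: -kv[1]):
    print(" -> ".join(pattern), f"(in {count} of {len(timelines)} timelines)")
```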
Dabek, Filip; Caban, Jesus J
2017-01-01
Despite the recent popularity of visual analytics focusing on big data, little is known about how to support users who use visualization techniques to explore multi-dimensional datasets and accomplish specific tasks. Our lack of models that can assist end-users during the data exploration process has made it challenging to learn from the user's interactive and analytical process. The ability to model how a user interacts with a specific visualization technique and what difficulties they face is paramount in supporting individuals with discovering new patterns within their complex datasets. This paper introduces the notion of visualization systems understanding and modeling user interactions, with the intent of guiding a user through a task and thereby enhancing visual data exploration. The challenges faced and the necessary future steps are discussed, and, to provide a working example, a grammar-based model is presented that can learn from user interactions, determine the common patterns among a number of subjects using a K-Reversible algorithm, build a set of rules, and apply those rules in the form of suggestions to new users with the goal of guiding them along their visual analytic process. A formal evaluation study with 300 subjects was performed, showing that our grammar-based model is effective at capturing the interactive process followed by users and that further research in this area has the potential to positively impact how users interact with a visualization system.
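As a loose illustration of turning interaction logs into suggestions, the sketch below uses a simple bigram frequency model over invented interaction events; it is a stand-in for the idea only, not the paper's K-Reversible grammar induction:

```python
# Much-simplified stand-in for learning rules from prior users' interaction
# logs and turning them into suggestions: a bigram model over interaction
# events. Event names are invented for illustration.
from collections import Counter, defaultdict

logs = [
    ["load_data", "scatterplot", "zoom", "select", "filter", "export"],
    ["load_data", "scatterplot", "select", "filter", "export"],
    ["load_data", "histogram", "filter", "scatterplot", "select"],
]

transitions = defaultdict(Counter)
for session in logs:
    for current, nxt in zip(session, session[1:]):
        transitions[current][nxt] += 1

def suggest(last_action, k=2):
    """Suggest the k most common follow-up actions seen in prior sessions."""
    return [a for a, _ in transitions[last_action].most_common(k)]

print(suggest("scatterplot"))   # ['select', 'zoom']
print(suggest("filter"))        # ['export', 'scatterplot']
```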
Görg, Carsten; Liu, Zhicheng; Kihm, Jaeyeon; Choo, Jaegul; Park, Haesun; Stasko, John
2013-10-01
Investigators across many disciplines and organizations must sift through large collections of text documents to understand and piece together information. Whether they are fighting crime, curing diseases, deciding what car to buy, or researching a new field, inevitably investigators will encounter text documents. Taking a visual analytics approach, we integrate multiple text analysis algorithms with a suite of interactive visualizations to provide a flexible and powerful environment that allows analysts to explore collections of documents while sensemaking. Our particular focus is on the process of integrating automated analyses with interactive visualizations in a smooth and fluid manner. We illustrate this integration through two example scenarios: an academic researcher examining InfoVis and VAST conference papers and a consumer exploring car reviews while pondering a purchase decision. Finally, we provide lessons learned toward the design and implementation of visual analytics systems for document exploration and understanding.
Scientific Visualization of Radio Astronomy Data using Gesture Interaction
NASA Astrophysics Data System (ADS)
Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.
2015-09-01
MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.
Interactive visual exploration and analysis of origin-destination data
NASA Astrophysics Data System (ADS)
Ding, Linfang; Meng, Liqiu; Yang, Jian; Krisp, Jukka M.
2018-05-01
In this paper, we propose a visual analytics approach for the exploration of spatiotemporal interaction patterns of massive origin-destination data. Firstly, we visually query the movement database for data at certain time windows. Secondly, we conduct interactive clustering to allow the users to select input variables/features (e.g., origins, destinations, distance, and duration) and to adjust clustering parameters (e.g. distance threshold). The agglomerative hierarchical clustering method is applied for the multivariate clustering of the origin-destination data. Thirdly, we design a parallel coordinates plot for visualizing the precomputed clusters and for further exploration of interesting clusters. Finally, we propose a gradient line rendering technique to show the spatial and directional distribution of origin-destination clusters on a map view. We implement the visual analytics approach in a web-based interactive environment and apply it to real-world floating car data from Shanghai. The experiment results show the origin/destination hotspots and their spatial interaction patterns. They also demonstrate the effectiveness of our proposed approach.
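A minimal sketch of the clustering step described above, using SciPy's agglomerative hierarchical clustering on synthetic origin-destination coordinates with a distance-threshold cut (an illustration of the general approach, not the authors' implementation):

```python
# Agglomerative hierarchical clustering of origin-destination records on
# selected features, cut at a distance threshold. Coordinates are synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Each row: origin_x, origin_y, dest_x, dest_y (e.g. projected coordinates).
od = np.vstack([
    rng.normal([0, 0, 5, 5], 0.3, size=(50, 4)),    # one flow bundle
    rng.normal([5, 0, 0, 5], 0.3, size=(50, 4)),    # another flow bundle
])

Z = linkage(od, method="average")                    # agglomerative clustering
labels = fcluster(Z, t=2.0, criterion="distance")    # distance-threshold cut

for c in np.unique(labels):
    print(f"cluster {c}: {np.sum(labels == c)} trips")
```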
Interactive Visualization of Dependencies
ERIC Educational Resources Information Center
Moreno, Camilo Arango; Bischof, Walter F.; Hoover, H. James
2012-01-01
We present an interactive tool for browsing course requisites as a case study of dependency visualization. This tool uses multiple interactive visualizations to allow the user to explore the dependencies between courses. A usability study revealed that the proposed browser provides significant advantages over traditional methods, in terms of…
Integrating visualization and interaction research to improve scientific workflows.
Keefe, Daniel F
2010-01-01
Scientific-visualization research is, nearly by necessity, interdisciplinary. In addition to their collaborators in application domains (for example, cell biology), researchers regularly build on close ties with disciplines related to visualization, such as graphics, human-computer interaction, and cognitive science. One of these ties is the connection between visualization and interaction research. This isn't a new direction for scientific visualization (see the "Early Connections" sidebar). However, momentum recently seems to be increasing toward integrating visualization research (for example, effective visual presentation of data) with interaction research (for example, innovative interactive techniques that facilitate manipulating and exploring data). We see evidence of this trend in several places, including the visualization literature and conferences.
Living Liquid: Design and Evaluation of an Exploratory Visualization Tool for Museum Visitors.
Ma, J; Liao, I; Ma, Kwan-Liu; Frazier, J
2012-12-01
Interactive visualizations can allow science museum visitors to explore new worlds by seeing and interacting with scientific data. However, designing interactive visualizations for informal learning environments, such as museums, presents several challenges. First, visualizations must engage visitors on a personal level. Second, visitors often lack the background to interpret visualizations of scientific data. Third, visitors have very limited time at individual exhibits in museums. This paper examines these design considerations through the iterative development and evaluation of an interactive exhibit as a visualization tool that gives museumgoers access to scientific data generated and used by researchers. The exhibit prototype, Living Liquid, encourages visitors to ask and answer their own questions while exploring the time-varying global distribution of simulated marine microbes using a touchscreen interface. Iterative development proceeded through three rounds of formative evaluations using think-aloud protocols and interviews, each round informing a key visualization design decision: (1) what to visualize to initiate inquiry, (2) how to link data at the microscopic scale to global patterns, and (3) how to include additional data that allows visitors to pursue their own questions. Data from visitor evaluations suggests that, when designing visualizations for public audiences, one should (1) avoid distracting visitors from data that they should explore, (2) incorporate background information into the visualization, (3) favor understandability over scientific accuracy, and (4) layer data accessibility to structure inquiry. Lessons learned from this case study add to our growing understanding of how to use visualizations to actively engage learners with scientific data.
View-Dependent Streamline Deformation and Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, Xin; Edwards, John; Chen, Chun-Ming; Shen, Han-Wei; Johnson, Chris R.; Wong, Pak Chung
Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few streamlines risks losing important features. We propose a new streamline exploration approach that visually manipulates the cluttered streamlines by pulling visible layers apart and revealing the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual clutter in 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, the users can move their focus and examine the vector or tensor field freely.
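The lens metaphor can be illustrated with a much-simplified, screen-space sketch: projected streamline points inside a circular lens are pushed radially outward with a smooth falloff, opening a gap over the region of interest. This is only an illustration of the idea, not the paper's view-dependent deformation algorithm:

```python
# Simplified 2D sketch of the lens idea: projected streamline vertices inside
# a circular lens are displaced radially outward with a smooth falloff, so a
# gap opens and reveals what lies underneath.
import numpy as np

def lens_displace(points, center, radius, strength=0.5):
    """points: (N, 2) projected streamline vertices."""
    offset = points - center
    dist = np.linalg.norm(offset, axis=1, keepdims=True)
    inside = dist < radius
    # Smooth falloff: maximal push near the center, zero at the lens boundary.
    falloff = np.clip(1.0 - dist / radius, 0.0, 1.0)
    direction = np.divide(offset, dist, out=np.zeros_like(offset), where=dist > 0)
    return np.where(inside, points + strength * radius * falloff * direction, points)

# One straight "streamline" crossing the lens.
line = np.column_stack([np.linspace(-1.0, 1.0, 11), np.zeros(11)])
deformed = lens_displace(line, center=np.array([0.0, 0.0]), radius=0.5)
print(deformed.round(2))
```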
Characterizing Interaction with Visual Mathematical Representations
ERIC Educational Resources Information Center
Sedig, Kamran; Sumner, Mark
2006-01-01
This paper presents a characterization of computer-based interactions by which learners can explore and investigate visual mathematical representations (VMRs). VMRs (e.g., geometric structures, graphs, and diagrams) refer to graphical representations that visually encode properties and relationships of mathematical structures and concepts.…
ERIC Educational Resources Information Center
Eckhoff, Angela
2013-01-01
In many early childhood classrooms, visual arts experiences occur around a communal arts table. A shared workspace allows for spontaneous conversation and exploration of the art-making process of peers and teachers. In this setting, conversation can play an important role in visual arts experiences as children explore new media, skills, and ideas.…
SLIDE - a web-based tool for interactive visualization of large-scale -omics data.
Ghosh, Soumita; Datta, Abhik; Tan, Kaisen; Choi, Hyungwon
2018-06-28
Data visualization is often regarded as a post hoc step for verifying statistically significant results in the analysis of high-throughput data sets. This common practice leaves a large amount of raw data behind, from which more information can be extracted. However, existing solutions do not provide capabilities to explore large-scale raw datasets using biologically sensible queries, nor do they allow user-interaction-based real-time customization of graphics. To address these drawbacks, we have designed an open-source, web-based tool called Systems-Level Interactive Data Exploration, or SLIDE, to visualize large-scale -omics data interactively. SLIDE's interface makes it easier for scientists to explore quantitative expression data at multiple resolutions on a single screen. SLIDE is publicly available under the BSD license, both as an online version and as a stand-alone version, at https://github.com/soumitag/SLIDE. Supplementary information is available at Bioinformatics online.
3D Exploration of Meteorological Data: Facing the challenges of operational forecasters
NASA Astrophysics Data System (ADS)
Koutek, Michal; Debie, Frans; van der Neut, Ian
2016-04-01
In the past years the Royal Netherlands Meteorological Institute (KNMI) has been working on innovation in the field of meteorological data visualization. We are dealing with Numerical Weather Prediction (NWP) model data and observational data, i.e. satellite images, precipitation radar, and ground and air-borne measurements. These multidimensional, multivariate data are geo-referenced and can be combined in 3D space to provide more intuitive views of atmospheric phenomena. We developed the Weather3DeXplorer (W3DX), a visualization framework for processing, interactive exploration, and visualization using Virtual Reality (VR) technology. We have achieved great success with research studies on extreme weather situations. In this paper we will elaborate on what we have learned from applying interactive 3D visualization in the operational weather room. We will explain how important it is to control the degrees of freedom that are given to the users (forecasters/scientists) during interaction; 3D camera and 3D slicing-plane navigation appear to be rather difficult for users when not implemented properly. We will present a novel approach to operational 3D visualization user interfaces (UIs) that largely eliminates the obstacle and the time it usually takes to set up the visualization parameters and an appropriate camera view on a certain atmospheric phenomenon. We have found our inspiration in the way our operational forecasters work in the weather room. We decided to form a bridge between 2D visualization images and interactive 3D exploration. Our method combines Web-based 2D UIs and a pre-rendered 3D visualization catalog for the latest NWP model runs with immediate entry into an interactive 3D session for a selected visualization setting. Finally, we would like to present the first user experiences with this approach.
BIOLOGICAL NETWORK EXPLORATION WITH CYTOSCAPE 3
Su, Gang; Morris, John H.; Demchak, Barry; Bader, Gary D.
2014-01-01
Cytoscape is one of the most popular open-source software tools for the visual exploration of biomedical networks composed of protein, gene and other types of interactions. It offers researchers a versatile and interactive visualization interface for exploring complex biological interconnections supported by diverse annotation and experimental data, thereby facilitating research tasks such as predicting gene function and pathway construction. Cytoscape provides core functionality to load, visualize, search, filter and save networks, and hundreds of Apps extend this functionality to address specific research needs. The latest generation of Cytoscape (version 3.0 and later) has substantial improvements in function, user interface and performance relative to previous versions. This protocol aims to jump-start new users with specific protocols for basic Cytoscape functions, such as installing Cytoscape and Cytoscape Apps, loading data, visualizing and navigating the network, visualizing network associated data (attributes) and identifying clusters. It also highlights new features that benefit experienced users. PMID:25199793
Vabalas, Andrius; Freeth, Megan
2016-01-01
The current study investigated whether the amount of autistic traits shown by an individual is associated with viewing behaviour during a face-to-face interaction. The eye movements of 36 neurotypical university students were recorded using a mobile eye-tracking device. High amounts of autistic traits were neither associated with reduced looking to the social partner overall, nor with reduced looking to the face. However, individuals who were high in autistic traits exhibited reduced visual exploration during the face-to-face interaction overall, as demonstrated by shorter and less frequent saccades. Visual exploration was not related to social anxiety. This study suggests that there are systematic individual differences in visual exploration during social interactions and these are related to amount of autistic traits.
CollaborationViz: Interactive Visual Exploration of Biomedical Research Collaboration Networks
Bian, Jiang; Xie, Mengjun; Hudson, Teresa J.; Eswaran, Hari; Brochhausen, Mathias; Hanna, Josh; Hogan, William R.
2014-01-01
Social network analysis (SNA) helps us understand patterns of interaction between social entities. A number of SNA studies have shed light on the characteristics of research collaboration networks (RCNs). Especially in the Clinical Translational Science Award (CTSA) community, SNA provides us a set of effective tools to quantitatively assess research collaborations and the impact of CTSA. However, descriptive network statistics are difficult for non-experts to understand. In this article, we present our experiences of building meaningful network visualizations to facilitate a series of visual analysis tasks. The basis of our design is multidimensional, visual aggregation of network dynamics. The resulting visualizations can help uncover hidden structures in the networks, elicit new observations of the network dynamics, compare different investigators and investigator groups, determine critical factors to the network evolution, and help direct further analyses. We applied our visualization techniques to explore the biomedical RCNs at the University of Arkansas for Medical Sciences, a CTSA institution. We also created CollaborationViz, an open-source visual analytical tool to help network researchers and administrators apprehend the network dynamics of research collaborations through interactive visualization. PMID:25405477
The Visual Geophysical Exploration Environment: A Multi-dimensional Scientific Visualization
NASA Astrophysics Data System (ADS)
Pandya, R. E.; Domenico, B.; Murray, D.; Marlino, M. R.
2003-12-01
The Visual Geophysical Exploration Environment (VGEE) is an online learning environment designed to help undergraduate students understand fundamental Earth system science concepts. The guiding principle of the VGEE is the importance of hands-on interaction with scientific visualization and data. The VGEE consists of four elements: 1) an online, inquiry-based curriculum for guiding student exploration; 2) a suite of El Nino-related data sets adapted for student use; 3) a learner-centered interface to a scientific visualization tool; and 4) a set of concept models (interactive tools that help students understand fundamental scientific concepts). There are two key innovations featured in this interactive poster session. One is the integration of concept models and the visualization tool. Concept models are simple, interactive, Java-based illustrations of fundamental physical principles. We developed eight concept models and integrated them into the visualization tool to enable students to probe data. The ability to probe data using a concept model addresses the common problem of transfer: the difficulty students have in applying theoretical knowledge to everyday phenomena. The other innovation is a visualization environment and data that are discoverable in digital libraries, and installed, configured, and used for investigations over the web. By collaborating with the Integrated Data Viewer developers, we were able to embed a web-launchable visualization tool and access to distributed data sets into the online curricula. The Thematic Real-time Environmental Data Distributed Services (THREDDS) project is working to provide catalogs of datasets that can be used in new VGEE curricula under development. By cataloging these curricula in the Digital Library for Earth System Education (DLESE), learners and educators can discover the data and visualization tool within a framework that guides their use.
Interactive (statistical) visualisation and exploration of a billion objects with vaex
NASA Astrophysics Data System (ADS)
Breddels, M. A.
2017-06-01
With new catalogues arriving, such as Gaia DR1 containing more than a billion objects, new methods of handling and visualizing these data volumes are needed. We show that by calculating statistics on a regular (N-dimensional) grid, visualizations of a billion objects can be done within a second on a modern desktop computer. This is achieved using memory mapping of hdf5 files together with a simple binning algorithm, which are part of a Python library called vaex. This enables efficient interactive exploration of large datasets, making science exploration of large catalogues feasible. Vaex is a Python library and an application which allows for interactive exploration and visualization. The motivation for developing vaex is the catalogue of the Gaia satellite; however, vaex can also be used on SPH or N-body simulations, any other (future) catalogues such as SDSS, Pan-STARRS, LSST, etc., or other tabular data. The homepage for vaex is http://vaex.astro.rug.nl.
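The core idea, reducing a huge table to statistics on a regular grid before drawing anything, can be sketched without vaex's own API; the file and dataset names below are assumptions for the example:

```python
# Sketch of the binning idea behind this style of visualization (not vaex's
# API): stream columns from an HDF5 file in chunks and reduce them to counts
# on a regular 2D grid, which is what actually gets drawn. The file name,
# dataset names, and data limits are assumed for the example.
import h5py
import numpy as np

bins = 512
limits = [[-10, 10], [-10, 10]]     # assumed data range
counts = np.zeros((bins, bins))

with h5py.File("catalogue.hdf5", "r") as f:
    x_ds, y_ds = f["x"], f["y"]
    chunk = 10_000_000
    for start in range(0, x_ds.shape[0], chunk):
        x = x_ds[start:start + chunk]
        y = y_ds[start:start + chunk]
        h, _, _ = np.histogram2d(x, y, bins=bins, range=limits)
        counts += h

image = np.log1p(counts)            # log-scaled density image to display
print(image.shape, image.sum())
```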
Interactive Visualization of Healthcare Data Using Tableau.
Ko, Inseok; Chang, Hyejung
2017-10-01
Big data analysis is receiving increasing attention in many industries, including healthcare. Visualization plays an important role not only in intuitively showing the results of data analysis but also in the whole process of collecting, cleaning, analyzing, and sharing data. This paper presents a procedure for the interactive visualization and analysis of healthcare data using Tableau as a business intelligence tool. Starting with installation of the Tableau Desktop Personal version 10.3, this paper describes the process of understanding and visualizing healthcare data using an example. The example data on colon cancer patients were obtained from health insurance claims for the years 2012 and 2013, provided by the Health Insurance Review and Assessment Service. To explore the visualization of healthcare data using Tableau for beginners, this paper describes the creation of a simple view for the average length of stay of colon cancer patients. Since Tableau provides various visualizations and customizations, the level of analysis can be increased with small multiples, view filtering, mark cards, and Tableau charts. Tableau is software that can help users explore and understand their data by creating interactive visualizations. It has the advantages that it can be used in conjunction with almost any database, and that it is easy to use by dragging and dropping to create an interactive visualization in the desired format.
GLO-STIX: Graph-Level Operations for Specifying Techniques and Interactive eXploration
Stolper, Charles D.; Kahng, Minsuk; Lin, Zhiyuan; Foerster, Florian; Goel, Aakash; Stasko, John; Chau, Duen Horng
2015-01-01
The field of graph visualization has produced a wealth of visualization techniques for accomplishing a variety of analysis tasks. Therefore analysts often rely on a suite of different techniques, and visual graph analysis application builders strive to provide this breadth of techniques. To provide a holistic model for specifying network visualization techniques (as opposed to considering each technique in isolation) we present the Graph-Level Operations (GLO) model. We describe a method for identifying GLOs and apply it to identify five classes of GLOs, which can be flexibly combined to re-create six canonical graph visualization techniques. We discuss advantages of the GLO model, including potentially discovering new, effective network visualization techniques and easing the engineering challenges of building multi-technique graph visualization applications. Finally, we implement the GLOs that we identified into the GLO-STIX prototype system that enables an analyst to interactively explore a graph by applying GLOs. PMID:26005315
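The composability idea can be sketched as small functions that each rewrite node display properties; the concrete operations below are illustrative and are not the GLO classes identified in the paper:

```python
# Illustrative sketch of "graph-level operations": each operation is a small
# function that rewrites node display properties, and a visualization
# technique is a composition of such operations.
import networkx as nx

G = nx.karate_club_graph()
display = {n: {"x": 0.0, "y": 0.0, "size": 1.0} for n in G.nodes}

def position_by_force_layout(G, display):
    for n, (x, y) in nx.spring_layout(G, seed=1).items():
        display[n]["x"], display[n]["y"] = x, y
    return display

def size_by_degree(G, display):
    for n in G.nodes:
        display[n]["size"] = 1.0 + G.degree[n]
    return display

def apply(G, display, *operations):
    for op in operations:
        display = op(G, display)
    return display

# A "technique" is just a composition of graph-level operations.
display = apply(G, display, position_by_force_layout, size_by_degree)
print(display[0])
```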
Devlin, Joseph C; Battaglia, Thomas; Blaser, Martin J; Ruggles, Kelly V
2018-06-25
Exploration of large data sets, such as shotgun metagenomic sequence or expression data, by biomedical experts and medical professionals remains a major bottleneck in the scientific discovery process. Although tools for this purpose exist for 16S ribosomal RNA sequencing analysis, there is a growing but still insufficient number of user-friendly interactive visualization workflows for easy data exploration and figure generation. The development of such platforms is necessary to accelerate and streamline microbiome laboratory research. We developed the Workflow Hub for Automated Metagenomic Exploration (WHAM!) as a web-based interactive tool capable of user-directed data visualization and statistical analysis of annotated shotgun metagenomic and metatranscriptomic data sets. WHAM! includes exploratory and hypothesis-based gene and taxa search modules for visualizing differences in microbial taxa and gene family expression across experimental groups, and for creating publication-quality figures without the need for a command line interface or in-house bioinformatics. WHAM! is an interactive and customizable tool for downstream metagenomic and metatranscriptomic analysis, providing a user-friendly interface that allows easy data exploration by microbiome and ecological experts to facilitate discovery in multi-dimensional and large-scale data sets.
Visual exploration and analysis of human-robot interaction rules
NASA Astrophysics Data System (ADS)
Zhang, Hui; Boyles, Michael J.
2013-01-01
We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming interfaces, information visualization, and visual data mining methods to facilitate designing, comprehending, and evaluating HRI interfaces.
What Google Maps can do for biomedical data dissemination: examples and a design study.
Jianu, Radu; Laidlaw, David H
2013-05-04
Biologists often need to assess whether unfamiliar datasets warrant the time investment required for more detailed exploration. Basing such assessments on brief descriptions provided by data publishers is unwieldy for large datasets that contain insights dependent on specific scientific questions. Alternatively, using complex software systems for a preliminary analysis may be deemed as too time consuming in itself, especially for unfamiliar data types and formats. This may lead to wasted analysis time and discarding of potentially useful data. We present an exploration of design opportunities that the Google Maps interface offers to biomedical data visualization. In particular, we focus on synergies between visualization techniques and Google Maps that facilitate the development of biological visualizations which have both low-overhead and sufficient expressivity to support the exploration of data at multiple scales. The methods we explore rely on displaying pre-rendered visualizations of biological data in browsers, with sparse yet powerful interactions, by using the Google Maps API. We structure our discussion around five visualizations: a gene co-regulation visualization, a heatmap viewer, a genome browser, a protein interaction network, and a planar visualization of white matter in the brain. Feedback from collaborative work with domain experts suggests that our Google Maps visualizations offer multiple, scale-dependent perspectives and can be particularly helpful for unfamiliar datasets due to their accessibility. We also find that users, particularly those less experienced with computer use, are attracted by the familiarity of the Google Maps API. Our five implementations introduce design elements that can benefit visualization developers. We describe a low-overhead approach that lets biologists access readily analyzed views of unfamiliar scientific datasets. We rely on pre-computed visualizations prepared by data experts, accompanied by sparse and intuitive interactions, and distributed via the familiar Google Maps framework. Our contributions are an evaluation demonstrating the validity and opportunities of this approach, a set of design guidelines benefiting those wanting to create such visualizations, and five concrete example visualizations.
Applying Pragmatics Principles for Interaction with Visual Analytics.
Hoque, Enamul; Setlur, Vidya; Tory, Melanie; Dykeman, Isaac
2018-01-01
Interactive visual data analysis is most productive when users can focus on answering the questions they have about their data, rather than focusing on how to operate the interface to the analysis tool. One viable approach to engaging users in interactive conversations with their data is a natural language interface to visualizations. These interfaces have the potential to be both more expressive and more accessible than other interaction paradigms. We explore how principles from language pragmatics can be applied to the flow of visual analytical conversations, using natural language as an input modality. We evaluate the effectiveness of pragmatics support in our system Evizeon, and present design considerations for conversation interfaces to visual analytics tools.
Supporting students' knowledge integration with technology-enhanced inquiry curricula
NASA Astrophysics Data System (ADS)
Chiu, Jennifer Lopseen
Dynamic visualizations of scientific phenomena have the potential to transform how students learn and understand science. Dynamic visualizations enable interaction and experimentation with unobservable atomic-level phenomena. A series of studies clarify the conditions under which embedding dynamic visualizations in technology-enhanced inquiry instruction can help students develop robust and durable chemistry knowledge. Using the knowledge integration perspective, I designed Chemical Reactions, a technology-enhanced curriculum unit, with a partnership of teachers, educational researchers, and chemists. This unit guides students in an exploration of how energy and chemical reactions relate to climate change. It uses powerful dynamic visualizations to connect atomic level interactions to the accumulation of greenhouse gases. The series of studies were conducted in typical classrooms in eleven high schools across the country. This dissertation describes four studies that contribute to understanding of how visualizations can be used to transform chemistry learning. The efficacy study investigated the impact of the Chemical Reactions unit compared to traditional instruction using pre-, post- and delayed posttest assessments. The self-monitoring study used self-ratings in combination with embedded assessments to explore how explanation prompts help students learn from dynamic visualizations. The self-regulation study used log files of students' interactions with the learning environment to investigate how external feedback and explanation prompts influence students' exploration of dynamic visualizations. The explanation study compared specific and general explanation prompts to explore the processes by which explanations benefit learning with dynamic visualizations. These studies delineate the conditions under which dynamic visualizations embedded in inquiry instruction can enhance student outcomes. The studies reveal that visualizations can be deceptively clear, deterring learners from exploring details. Asking students to generate explanations helps them realize what they don't understand and can spur students to revisit visualizations to remedy gaps in their knowledge. The studies demonstrate that science instruction focused on complex topics can succeed by combining visualizations with generative activities to encourage knowledge integration. Students are more successful at monitoring their progress and remedying gaps in knowledge when required to distinguish among alternative explanations. The results inform the design of technology-enhanced science instruction for typical classrooms.
The Spinel Explorer--Interactive Visual Analysis of Spinel Group Minerals.
Luján Ganuza, María; Ferracutti, Gabriela; Gargiulo, María Florencia; Castro, Silvia Mabel; Bjerg, Ernesto; Gröller, Eduard; Matković, Krešimir
2014-12-01
Geologists usually deal with rocks that are up to several thousand million years old. They try to reconstruct the tectonic settings where these rocks were formed and the history of events that affected them through geological time. The spinel group minerals provide useful information regarding the geological environment in which the host rocks were formed. They constitute excellent indicators of geological environments (tectonic settings) and are of invaluable help in the search for mineral deposits of economic interest. The current workflow requires the scientists to work with different applications to analyze spinel data. They do use specific diagrams, but these are usually not interactive. The current workflow thus hinders domain experts from fully exploiting the potential of tediously and expensively collected data. In this paper, we introduce the Spinel Explorer, an interactive visual analysis application for spinel group minerals. The design of the Spinel Explorer and of the newly introduced interactions is the result of a careful study of geologists' tasks. The Spinel Explorer includes most of the diagrams commonly used for analyzing spinel group minerals, including 2D binary plots, ternary plots, and 3D spinel prism plots. Besides specific plots, conventional information visualization views are also integrated in the Spinel Explorer. All views are interactive and linked. The Spinel Explorer supports conventional statistics commonly used in spinel mineral exploration. The statistics views and different data derivation techniques are fully integrated in the system. Besides the Spinel Explorer as a newly proposed interactive exploration system, we also describe the identified analysis tasks and propose a new workflow. We evaluate the Spinel Explorer using real-life data from two locations in Argentina: the Frontal Cordillera in the Central Andes and Patagonia. We describe the new findings of the geologists, which would have been much more difficult to achieve using the current workflow only. Very positive feedback from geologists confirms the usefulness of the Spinel Explorer.
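One of the diagram types mentioned above, the ternary plot, reduces to a simple coordinate transform. The sketch below maps invented three-component compositions into a triangle with matplotlib; it illustrates the plot type only, not the Spinel Explorer:

```python
# Minimal ternary-plot sketch: each analysis is a three-component composition
# (here labeled Cr, Al, Fe3+) normalized to 1 and mapped to a point inside a
# triangle. The compositions are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

comps = np.array([   # columns: Cr, Al, Fe3+ (example values)
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.3, 0.2, 0.5],
])
comps = comps / comps.sum(axis=1, keepdims=True)   # normalize to 1

a, b, c = comps[:, 0], comps[:, 1], comps[:, 2]
# Standard ternary-to-Cartesian mapping (vertex A at (0,0), B at (1,0), C at top).
x = 0.5 * (2 * b + c) / (a + b + c)
y = (np.sqrt(3) / 2) * c / (a + b + c)

triangle = np.array([[0, 0], [1, 0], [0.5, np.sqrt(3) / 2], [0, 0]])
plt.plot(triangle[:, 0], triangle[:, 1], color="gray")
plt.scatter(x, y)
for label, cx, cy in [("Cr", 0, 0), ("Al", 1, 0), ("Fe3+", 0.5, np.sqrt(3) / 2)]:
    plt.annotate(label, (cx, cy))
plt.axis("equal")
plt.axis("off")
plt.show()
```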
E-Books Plus: Role of Interactive Visuals in Exploration of Mathematical Information and E-Learning
ERIC Educational Resources Information Center
Rowhani, Sonja; Sedig, Kamran
2005-01-01
E-books promise to become a widespread delivery mechanism for educational resources. However, current e-books do not take full advantage of the power of computing tools. In particular, interaction with the content is often reduced to navigation through the information. This article investigates how adding interactive visuals to an e-book…
ActiviTree: interactive visual exploration of sequences in event-based data using graph similarity.
Vrotsou, Katerina; Johansson, Jimmy; Cooper, Matthew
2009-01-01
The identification of significant sequences in large and complex event-based temporal data is a challenging problem with applications in many areas of today's information intensive society. Pure visual representations can be used for the analysis, but are constrained to small data sets. Algorithmic search mechanisms used for larger data sets become expensive as the data size increases and typically focus on frequency of occurrence to reduce the computational complexity, often overlooking important infrequent sequences and outliers. In this paper we introduce an interactive visual data mining approach based on an adaptation of techniques developed for web searching, combined with an intuitive visual interface, to facilitate user-centred exploration of the data and identification of sequences significant to that user. The search algorithm used in the exploration executes in negligible time, even for large data, and so no pre-processing of the selected data is required, making this a completely interactive experience for the user. Our particular application area is social science diary data but the technique is applicable across many other disciplines.
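A toy sketch of one ingredient of this style of analysis: turn event sequences into a weighted transition graph and rank events with a web-search-style link analysis. PageRank is used here as a stand-in for the paper's own scoring, and the diary-style events are invented:

```python
# Build a weighted transition graph from event sequences and rank events with
# PageRank (a stand-in for the paper's web-search-inspired scoring).
import networkx as nx

sequences = [
    ["sleep", "breakfast", "commute", "work", "lunch", "work", "commute", "dinner"],
    ["sleep", "breakfast", "work", "lunch", "work", "gym", "dinner"],
    ["sleep", "commute", "work", "lunch", "work", "commute", "gym", "dinner"],
]

G = nx.DiGraph()
for seq in sequences:
    for a, b in zip(seq, seq[1:]):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

scores = nx.pagerank(G, weight="weight")
for event, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{event:10s} {score:.3f}")
```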
VisGets: coordinated visualizations for web-based information exploration and discovery.
Dörk, Marian; Carpendale, Sheelagh; Collins, Christopher; Williamson, Carey
2008-01-01
In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets--interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.
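The coordinated-filter idea, where each widget contributes one filter and the result set is their conjunction, can be sketched with pandas on a few invented news items (an illustration only, not the VisGets implementation):

```python
# Cross-filtering sketch: a temporal, a spatial, and a topical filter are
# combined into one query, as linked query widgets would. Items are invented.
import pandas as pd

items = pd.DataFrame({
    "time":  pd.to_datetime(["2008-01-03", "2008-01-05", "2008-02-10", "2008-02-11"]),
    "lat":   [51.5, 48.9, 40.7, 51.5],
    "lon":   [-0.1, 2.3, -74.0, -0.1],
    "topic": ["election", "sports", "election", "weather"],
})

def filter_items(df, time_range=None, bbox=None, topics=None):
    mask = pd.Series(True, index=df.index)
    if time_range:
        mask &= df["time"].between(*time_range)
    if bbox:                       # (lat_min, lat_max, lon_min, lon_max)
        mask &= df["lat"].between(bbox[0], bbox[1]) & df["lon"].between(bbox[2], bbox[3])
    if topics:
        mask &= df["topic"].isin(topics)
    return df[mask]

print(filter_items(items,
                   time_range=("2008-01-01", "2008-01-31"),
                   bbox=(45, 55, -5, 5),
                   topics=["election", "sports"]))
```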
The 3D widgets for exploratory scientific visualization
NASA Technical Reports Server (NTRS)
Herndon, Kenneth P.; Meyer, Tom
1995-01-01
Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.
Empirical Comparison of Visualization Tools for Larger-Scale Network Analysis
Pavlopoulos, Georgios A.; Paez-Espino, David; Kyrpides, Nikos C.; ...
2017-07-18
Gene expression, signal transduction, protein/chemical interactions, biomedical literature co-occurrences, and other concepts are often captured in biological network representations where nodes represent a certain bioentity and edges the connections between them. While many tools to manipulate, visualize, and interactively explore such networks already exist, only a few of them can scale up and follow today's indisputable information growth. In this review, we briefly list a catalog of available network visualization tools and, from a user-experience point of view, we identify four candidate tools suitable for larger-scale network analysis, visualization, and exploration. Lastly, we comment on their strengths and their weaknesses and empirically discuss their scalability, user friendliness, and post-visualization capabilities.
Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering.
Endert, A; Fiaux, P; North, C
2012-12-01
Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.
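A much-simplified sketch of the keyword re-weighting idea: when the user drags two documents together, the terms they share are up-weighted and a similarity-based layout is recomputed. This illustrates the concept only and is not ForceSPIRE's model:

```python
# Keyword re-weighting sketch: a drag interaction up-weights shared terms,
# then a weighted-cosine-similarity layout is recomputed with MDS.
# Documents are invented for illustration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import MDS

docs = [
    "bank fraud investigation report",
    "suspicious bank transfer and fraud",
    "soccer match report and scores",
    "championship soccer transfer news",
]

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()
weights = np.ones(X.shape[1])                 # one weight per keyword

def layout(X, weights):
    Xw = X * weights                          # apply keyword weights
    norms = np.linalg.norm(Xw, axis=1, keepdims=True)
    Xw = Xw / np.maximum(norms, 1e-12)
    dist = np.clip(1.0 - Xw @ Xw.T, 0.0, None)   # cosine distance
    np.fill_diagonal(dist, 0.0)
    return MDS(n_components=2, dissimilarity="precomputed",
               random_state=0).fit_transform(dist)

# User drags documents 0 and 1 together -> up-weight their shared terms.
shared = (X[0] > 0) & (X[1] > 0)
weights[shared] *= 2.0

print(layout(X, weights).round(2))
```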
Can Generating Representations Enhance Learning with Dynamic Visualizations?
ERIC Educational Resources Information Center
Zhang, Zhihui Helen; Linn, Marcia C.
2011-01-01
This study explores the impact of asking middle school students to generate drawings of their ideas about chemical reactions on integrated understanding. Students explored atomic interactions during hydrogen combustion using a dynamic visualization. The generation group drew their ideas about how the reaction takes place at the molecular level.…
An Interactive Visualization Framework to Support Exploration and Analysis of TBI/PTSD Clinical Data
2017-05-01
Caban, Jesus (award W81XWH-15-2-0016)
This report describes techniques to overcome some of the challenges and complexities of the data; the approach uses novel adaptive window-based frequency sequence mining.
Applications of image processing and visualization in the evaluation of murder and assault
NASA Astrophysics Data System (ADS)
Oliver, William R.; Rosenman, Julian G.; Boxwala, Aziz; Stotts, David; Smith, John; Soltys, Mitchell; Symon, James; Cullip, Tim; Wagner, Glenn
1994-09-01
Recent advances in image processing and visualization are of increasing use in the investigation of violent crime. The Digital Image Processing Laboratory at the Armed Forces Institute of Pathology, in collaboration with groups at the University of North Carolina at Chapel Hill, is actively exploring visualization applications including image processing of trauma images, 3D visualization, forensic database management, and telemedicine. Examples of recent applications are presented. Future directions of effort include interactive consultation and image manipulation tools for forensic data exploration.
An Evaluation of Multimodal Interactions with Technology while Learning Science Concepts
ERIC Educational Resources Information Center
Anastopoulou, Stamatina; Sharples, Mike; Baber, Chris
2011-01-01
This paper explores the value of employing multiple modalities to facilitate science learning with technology. In particular, it is argued that when multiple modalities are employed, learners construct strong relations between physical movement and visual representations of motion. Body interactions with visual representations, enabled by…
Bach, Benjamin; Sicat, Ronell; Beyer, Johanna; Cordeil, Maxime; Pfister, Hanspeter
2018-01-01
We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers; however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that better match human perceptual and interaction capabilities to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.
StreamExplorer: A Multi-Stage System for Visually Exploring Events in Social Streams.
Wu, Yingcai; Chen, Zhutian; Sun, Guodao; Xie, Xiao; Cao, Nan; Liu, Shixia; Cui, Weiwei
2017-10-18
Analyzing social streams is important for many applications, such as crisis management. However, the considerable diversity, increasing volume, and high dynamics of social streams of large events continue to be significant challenges that must be overcome to ensure effective exploration. We propose a novel framework by which to handle complex social streams on a budget PC. This framework features two components: 1) an online method to detect important time periods (i.e., subevents), and 2) a tailored GPU-assisted Self-Organizing Map (SOM) method, which clusters the tweets of subevents stably and efficiently. Based on the framework, we present StreamExplorer to facilitate the visual analysis, tracking, and comparison of a social stream at three levels. At a macroscopic level, StreamExplorer uses a new glyph-based timeline visualization, which presents a quick multi-faceted overview of the ebb and flow of a social stream. At a mesoscopic level, a map visualization is employed to visually summarize the social stream from either a topical or geographical aspect. At a microscopic level, users can employ interactive lenses to visually examine and explore the social stream from different perspectives. Two case studies and a task-based evaluation are used to demonstrate the effectiveness and usefulness of StreamExplorer.
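To make the clustering step above concrete, the following is a minimal, CPU-only sketch of a Self-Organizing Map over tweet feature vectors (plain NumPy; the random vectors stand in for TF-IDF features, and none of StreamExplorer's GPU acceleration or stability mechanisms is reproduced).

```python
# Minimal Self-Organizing Map sketch for clustering tweet feature vectors
# (illustrative only; not StreamExplorer's GPU-assisted implementation).
import numpy as np

def train_som(X, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small SOM on row vectors X (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, X.shape[1]))
    # Grid coordinates, used to compute neighborhood distances on the map.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(X)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(X):
            # Best-matching unit: grid cell whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Decay learning rate and neighborhood radius over time.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            # Gaussian neighborhood around the BMU on the map grid.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
            step += 1
    return weights

# Usage: assign each tweet vector to its best-matching grid cell.
X = np.random.rand(200, 50)          # stand-in for TF-IDF tweet vectors
som = train_som(X)
cells = [np.unravel_index(np.argmin(np.linalg.norm(som - x, axis=2)), som.shape[:2])
         for x in X]
```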
Beyond Control Panels: Direct Manipulation for Visual Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Bradel, Lauren; North, Chris
2013-07-19
Information Visualization strives to provide visual representations through which users can think about and gain insight into information. By leveraging the visual and cognitive systems of humans, complex relationships and phenomena occurring within datasets can be uncovered by exploring information visually. Interaction metaphors for such visualizations are designed to give users direct control over the filters, queries, and other parameters controlling how the data is visually represented. Through the evolution of information visualization, more complex mathematical and data analytic models are being used to visualize relationships and patterns in data – creating the field of Visual Analytics. However, the expectations for how users interact with these visualizations have remained largely unchanged – focused primarily on the direct manipulation of parameters of the underlying mathematical models. In this article we present an opportunity to evolve the methodology for user interaction from the direct manipulation of parameters through visual control panels, to interactions designed specifically for visual analytic systems. Instead of focusing on traditional direct manipulation of mathematical parameters, the evolution of the field can be realized through direct manipulation within the visual representation – where users can not only gain insight, but also interact. This article describes future directions and research challenges that fundamentally change the meaning of direct manipulation with regard to visual analytics, advancing the Science of Interaction.
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Matasci, Naim
2011-03-01
The explosion of online scientific data from experiments, simulations, and observations has given rise to an avalanche of algorithmic, visualization and imaging methods. There has also been enormous growth in the introduction of tools that provide interactive interfaces for exploring these data dynamically. Most systems, however, do not support the realtime exploration of patterns and relationships across tools and do not provide guidance on which colors, colormaps or visual metaphors will be most effective. In this paper, we introduce a general architecture for sharing metadata between applications and a "Metadata Mapper" component that allows the analyst to decide how metadata from one component should be represented in another, guided by perceptual rules. This system is designed to support "brushing [1]," in which highlighting a region of interest in one application automatically highlights corresponding values in another, allowing the scientist to develop insights from multiple sources. Our work builds on the component-based iPlant Cyberinfrastructure [2] and provides a general approach to supporting interactive, exploration across independent visualization and visual analysis components.
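A toy sketch of the linking idea described above: a shared broker broadcasts a brushed selection from one view to the others. The broker and view callbacks are hypothetical names for illustration, not the iPlant components referenced in the abstract.

```python
# Minimal sketch of cross-tool "brushing" via a shared metadata broker
# (illustrative only; names and structure are hypothetical).
class MetadataBroker:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def publish(self, selection, source):
        # Broadcast the selected record IDs to every other registered view.
        for cb in self._subscribers:
            if cb is not source:
                cb(selection)

broker = MetadataBroker()

def scatterplot_highlight(ids):
    print("scatterplot highlights records:", sorted(ids))

def table_highlight(ids):
    print("table highlights records:", sorted(ids))

broker.subscribe(scatterplot_highlight)
broker.subscribe(table_highlight)

# Brushing a region in one view publishes the selected IDs to the others.
broker.publish({3, 7, 12}, source=table_highlight)
```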
Integrating natural language processing and web GIS for interactive knowledge domain visualization
NASA Astrophysics Data System (ADS)
Du, Fangming
Recent years have seen a powerful shift towards data-rich environments throughout society. This has extended to a change in how the artifacts and products of scientific knowledge production can be analyzed and understood. Bottom-up approaches are on the rise that combine access to huge amounts of academic publications with advanced computer graphics and data processing tools, including natural language processing. Knowledge domain visualization is one of those multi-technology approaches, with its aim of turning domain-specific human knowledge into highly visual representations in order to better understand the structure and evolution of domain knowledge. For example, network visualizations built from co-author relations contained in academic publications can provide insight on how scholars collaborate with each other in one or multiple domains, and visualizations built from the text content of articles can help us understand the topical structure of knowledge domains. These knowledge domain visualizations need to support interactive viewing and exploration by users. Such spatialization efforts are increasingly looking to geography and GIS as a source of metaphors and practical technology solutions, even when non-georeferenced information is managed, analyzed, and visualized. When it comes to deploying spatialized representations online, web mapping and web GIS can provide practical technology solutions for interactive viewing of knowledge domain visualizations, from panning and zooming to the overlay of additional information. This thesis presents a novel combination of advanced natural language processing - in the form of topic modeling - with dimensionality reduction through self-organizing maps and the deployment of web mapping/GIS technology towards intuitive, GIS-like, exploration of a knowledge domain visualization. A complete workflow is proposed and implemented that processes any corpus of input text documents into a map form and leverages a web application framework to let users explore knowledge domain maps interactively. This workflow is implemented and demonstrated for a data set of more than 66,000 conference abstracts.
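As a rough illustration of the workflow described above, the sketch below turns a handful of toy abstracts into document-topic mixtures and 2D map coordinates. It uses off-the-shelf scikit-learn LDA and t-SNE as stand-ins; the thesis itself combines topic modeling with self-organizing maps and a web-GIS front end, which are not reproduced here.

```python
# Sketch of the "documents -> topics -> 2D map coordinates" pipeline
# (scikit-learn LDA + t-SNE as stand-ins for the thesis's topic-model/SOM workflow).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.manifold import TSNE

abstracts = [
    "visual analytics for exploring large document collections",
    "web gis supports interactive mapping of spatial data",
    "topic models reveal latent structure in scientific text",
    "self organizing maps project high dimensional data to a grid",
    "knowledge domains evolve as scholars publish and collaborate",
    "natural language processing extracts terms from publications",
]

# Term counts -> document-topic mixtures.
counts = CountVectorizer(stop_words="english").fit_transform(abstracts)
doc_topics = LatentDirichletAllocation(n_components=3, random_state=0).fit_transform(counts)

# Project topic mixtures to 2D "map" coordinates for web display.
xy = TSNE(n_components=2, perplexity=3, random_state=0).fit_transform(doc_topics)
for doc, (x, y) in zip(abstracts, xy):
    print(f"({x:7.2f}, {y:7.2f})  {doc[:40]}")
```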
Visual Processing: Hungry Like the Mouse.
Piscopo, Denise M; Niell, Cristopher M
2016-09-07
In this issue of Neuron, Burgess et al. (2016) explore how motivational state interacts with visual processing, by examining hunger modulation of food-associated visual responses in postrhinal cortical neurons and their inputs from amygdala. Copyright © 2016 Elsevier Inc. All rights reserved.
Towards a Web-Enabled Geovisualization and Analytics Platform for the Energy and Water Nexus
NASA Astrophysics Data System (ADS)
Sanyal, J.; Chandola, V.; Sorokine, A.; Allen, M.; Berres, A.; Pang, H.; Karthik, R.; Nugent, P.; McManamay, R.; Stewart, R.; Bhaduri, B. L.
2017-12-01
Interactive data analytics are playing an increasingly vital role in the generation of new, critical insights regarding the complex dynamics of the energy/water nexus (EWN) and its interactions with climate variability and change. Integration of impacts, adaptation, and vulnerability (IAV) science with emerging, and increasingly critical, data science capabilities offers a promising potential to meet the needs of the EWN community. To enable the exploration of pertinent research questions, a web-based geospatial visualization platform is being built that integrates a data analysis toolbox with advanced data fusion and data visualization capabilities to create a knowledge discovery framework for the EWN. The system, when fully built out, will offer several geospatial visualization capabilities, including statistical visual analytics, clustering, principal-component analysis, and dynamic time warping; it will support uncertainty visualization and the exploration of data provenance, as well as machine-learning-driven discovery, to render diverse types of geospatial data and facilitate interactive analysis. Key components in the system architecture include NASA's WebWorldWind, the Globus toolkit, PostgreSQL, and other custom-built software modules.
An interactive visualization tool for mobile objects
NASA Astrophysics Data System (ADS)
Kobayashi, Tetsuo
Recent advancements in mobile devices---such as Global Positioning System (GPS), cellular phones, car navigation system, and radio-frequency identification (RFID)---have greatly influenced the nature and volume of data about individual-based movement in space and time. Due to the prevalence of mobile devices, vast amounts of mobile objects data are being produced and stored in databases, overwhelming the capacity of traditional spatial analytical methods. There is a growing need for discovering unexpected patterns, trends, and relationships that are hidden in the massive mobile objects data. Geographic visualization (GVis) and knowledge discovery in databases (KDD) are two major research fields that are associated with knowledge discovery and construction. Their major research challenges are the integration of GVis and KDD, enhancing the ability to handle large volume mobile objects data, and high interactivity between the computer and users of GVis and KDD tools. This dissertation proposes a visualization toolkit to enable highly interactive visual data exploration for mobile objects datasets. Vector algebraic representation and online analytical processing (OLAP) are utilized for managing and querying the mobile object data to accomplish high interactivity of the visualization tool. In addition, reconstructing trajectories at user-defined levels of temporal granularity with time aggregation methods allows exploration of the individual objects at different levels of movement generality. At a given level of generality, individual paths can be combined into synthetic summary paths based on three similarity measures, namely, locational similarity, directional similarity, and geometric similarity functions. A visualization toolkit based on the space-time cube concept exploits these functionalities to create a user-interactive environment for exploring mobile objects data. Furthermore, the characteristics of visualized trajectories are exported to be utilized for data mining, which leads to the integration of GVis and KDD. Case studies using three movement datasets (personal travel data survey in Lexington, Kentucky, wild chicken movement data in Thailand, and self-tracking data in Utah) demonstrate the potential of the system to extract meaningful patterns from the otherwise difficult to comprehend collections of space-time trajectories.
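The three similarity measures named above could be approximated along the following lines for two trajectories sampled at the same number of points; these are simple illustrative stand-ins, not the dissertation's definitions.

```python
# Illustrative stand-ins for locational, directional, and geometric similarity
# between two equally sampled trajectories (the dissertation's exact measures differ).
import numpy as np

def locational_similarity(a, b):
    """Smaller mean point-to-point distance -> higher similarity."""
    d = np.linalg.norm(a - b, axis=1).mean()
    return 1.0 / (1.0 + d)

def directional_similarity(a, b):
    """Mean cosine similarity between corresponding segment headings."""
    da, db = np.diff(a, axis=0), np.diff(b, axis=0)
    cos = np.sum(da * db, axis=1) / (np.linalg.norm(da, axis=1) * np.linalg.norm(db, axis=1) + 1e-12)
    return float(cos.mean())

def geometric_similarity(a, b):
    """Compare shapes after removing absolute location (centering both paths)."""
    return locational_similarity(a - a.mean(axis=0), b - b.mean(axis=0))

t = np.linspace(0, 1, 50)[:, None]
path1 = np.hstack([t, np.sin(3 * t)])
path2 = np.hstack([t, np.sin(3 * t) + 0.5])   # same shape, shifted north
print(locational_similarity(path1, path2))     # penalized by the offset
print(directional_similarity(path1, path2))    # ~1.0: headings match
print(geometric_similarity(path1, path2))      # ~1.0: shapes match after centering
```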
RealSurf - A Tool for the Interactive Visualization of Mathematical Models
NASA Astrophysics Data System (ADS)
Stussak, Christian; Schenzel, Peter
For applications in fine art, architecture, and engineering it is often important to visualize and explore complex mathematical models. In the past, static models of such surfaces were collected in museums and mathematical institutes. To examine their properties, including aesthetic ones, it is helpful to explore them interactively in 3D and in real time. For the class of implicitly defined algebraic surfaces we developed the tool RealSurf. Here we give an introduction to the program and some hints for designing interesting surfaces.
Query2Question: Translating Visualization Interaction into Natural Language.
Nafari, Maryam; Weaver, Chris
2015-06-01
Richly interactive visualization tools are increasingly popular for data exploration and analysis in a wide variety of domains. Existing systems and techniques for recording provenance of interaction focus either on comprehensive automated recording of low-level interaction events or on idiosyncratic manual transcription of high-level analysis activities. In this paper, we present the architecture and translation design of a query-to-question (Q2Q) system that automatically records user interactions and presents them semantically using natural language (written English). Q2Q takes advantage of domain knowledge and uses natural language generation (NLG) techniques to translate and transcribe a progression of interactive visualization states into a visual log of styled text that complements and effectively extends the functionality of visualization tools. We present Q2Q as a means to support a cross-examination process in which questions rather than interactions are the focus of analytic reasoning and action. We describe the architecture and implementation of the Q2Q system, discuss key design factors and variations that affect question generation, and present several visualizations that incorporate Q2Q for analysis in a variety of knowledge domains.
Design and outcomes of an acoustic data visualization seminar.
Robinson, Philip W; Pätynen, Jukka; Haapaniemi, Aki; Kuusinen, Antti; Leskinen, Petri; Zan-Bi, Morley; Lokki, Tapio
2014-01-01
Recently, the Department of Media Technology at Aalto University offered a seminar entitled Applied Data Analysis and Visualization. The course used spatial impulse response measurements from concert halls as the context to explore high-dimensional data visualization methods. Students were encouraged to represent source and receiver positions, spatial aspects, and temporal development of sound fields, frequency characteristics, and comparisons between halls, using animations and interactive graphics. The primary learning objectives were for the students to translate their skills across disciplines and gain a working understanding of high-dimensional data visualization techniques. Accompanying files present examples of student-generated, animated and interactive visualizations.
Parallel Planes Information Visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bush, Brian
2015-12-26
This software presents a user-provided multivariate dataset as an interactive three-dimensional visualization so that the user can explore the correlation between variables in the observations and the distribution of observations among the variables.
ERIC Educational Resources Information Center
Rossetto, Marietta; Chiera-Macchia, Antonella
2011-01-01
This study investigated the use of comics (Cary, 2004) in a guided writing experience in secondary school Italian language learning. The main focus of the peer group interaction task included the exploration of visual sequencing and visual integration (Bailey, O'Grady-Jones, & McGown, 1995) using image and text to create a comic strip narrative in…
Explorative visual analytics on interval-based genomic data and their metadata.
Jalili, Vahid; Matteucci, Matteo; Masseroli, Marco; Ceri, Stefano
2017-12-04
With the wide spread of public repositories of NGS processed data, the availability of user-friendly and effective tools for data exploration, analysis and visualization is becoming very relevant. These tools enable interactive analytics, an exploratory approach for the seamless "sense-making" of data through on-the-fly integration of analysis and visualization phases, suggested not only for evaluating processing results, but also for designing and adapting NGS data analysis pipelines. This paper presents abstractions for supporting the early analysis of NGS processed data and their implementation in an associated tool, named GenoMetric Space Explorer (GeMSE). This tool serves the needs of the GenoMetric Query Language, an innovative cloud-based system for computing complex queries over heterogeneous processed data. It can also be used starting from any text files in standard BED, BroadPeak, NarrowPeak, GTF, or general tab-delimited format, containing numerical features of genomic regions; metadata can be provided as text files in tab-delimited attribute-value format. GeMSE allows interactive analytics, consisting of on-the-fly cycling among steps of data exploration, analysis and visualization that help biologists and bioinformaticians in making sense of heterogeneous genomic datasets. By means of an explorative interaction support, users can trace past activities and quickly recover their results, seamlessly going backward and forward in the analysis steps and comparative visualizations of heatmaps. GeMSE's effective application and practical usefulness are demonstrated through significant use cases of biological interest. GeMSE is available at http://www.bioinformatics.deib.polimi.it/GeMSE/, and its source code is available at https://github.com/Genometric/GeMSE under GPLv3 open-source license.
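For orientation, here is a minimal sketch of loading BED-style interval records of the kind GeMSE accepts and summarizing a numeric feature per chromosome. The records and column layout are toy examples; GeMSE's own exploration, clustering, and heatmap features are not reproduced.

```python
# Toy BED-like records (chrom, start, end, name, score) summarized per chromosome.
import io
import pandas as pd

bed_text = "\n".join([
    "chr1\t100\t500\tpeak1\t7.2",
    "chr1\t900\t1400\tpeak2\t3.1",
    "chr2\t200\t260\tpeak3\t9.8",
    "chr2\t400\t450\tpeak4\t1.5",
])
regions = pd.read_csv(io.StringIO(bed_text), sep="\t", header=None,
                      names=["chrom", "start", "end", "name", "score"])

# Basic exploration: region length and per-chromosome score statistics.
regions["length"] = regions["end"] - regions["start"]
summary = regions.groupby("chrom").agg(
    n_regions=("name", "size"),
    mean_score=("score", "mean"),
    total_bp=("length", "sum"),
)
print(summary.sort_values("mean_score", ascending=False))
```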
Urban Space Explorer: A Visual Analytics System for Urban Planning.
Karduni, Alireza; Cho, Isaac; Wessel, Ginette; Ribarsky, William; Sauda, Eric; Dou, Wenwen
2017-01-01
Understanding people's behavior is fundamental to many planning professions (including transportation, community development, economic development, and urban design) that rely on data about frequently traveled routes, places, and social and cultural practices. Based on the results of a practitioner survey, the authors designed Urban Space Explorer, a visual analytics system that utilizes mobile social media to enable interactive exploration of public-space-related activity along spatial, temporal, and semantic dimensions.
ARIES: Enabling Visual Exploration and Organization of Art Image Collections.
Crissaff, Lhaylla; Wood Ruby, Louisa; Deutch, Samantha; DuBois, R Luke; Fekete, Jean-Daniel; Freire, Juliana; Silva, Claudio
2018-01-01
Art historians have traditionally used physical light boxes to prepare exhibits or curate collections. On a light box, they can place slides or printed images, move the images around at will, group them as desired, and visually compare them. The transition to digital images has rendered this workflow obsolete. Now, art historians lack well-designed, unified interactive software tools that effectively support the operations they perform with physical light boxes. To address this problem, we designed ARIES (ARt Image Exploration Space), an interactive image manipulation system that enables the exploration and organization of fine digital art. The system allows images to be compared in multiple ways, offering dynamic overlays analogous to a physical light box, and supporting advanced image comparisons and feature-matching functions, available through computational image processing. We demonstrate the effectiveness of our system to support art historians' tasks through real use cases.
A prototype system based on visual interactive SDM called VGC
NASA Astrophysics Data System (ADS)
Jia, Zelu; Liu, Yaolin; Liu, Yanfang
2009-10-01
In many application domains, data is collected and referenced by its geo-spatial location. Spatial data mining, or the discovery of interesting patterns in such databases, is an important capability in the development of database systems. Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis, and road traffic accident analysis. It demands efficient solutions for many new, expensive, and complicated problems. For spatial data mining of large data sets to be effective, it is also important to include humans in the data exploration process and combine their flexibility, creativity, and general knowledge with the enormous storage capacity and computational power of today's computers. Visual spatial data mining applies human visual perception to the exploration of large data sets. Presenting data in an interactive, graphical form often fosters new insights, encouraging the formation and validation of new hypotheses to the end of better problem-solving and gaining deeper domain knowledge. In this paper, a visual interactive spatial data mining prototype system (visual geo-classify) based on VC++6.0 and MapObject2.0 is designed and developed; the basic spatial data mining algorithms used are decision trees and Bayesian networks, and data classification is realized through training, learning, and the integration of the two. The results indicate that it is a practical and extensible visual interactive spatial data mining tool.
Pseudohaptic interaction with knot diagrams
NASA Astrophysics Data System (ADS)
Weng, Jianguang; Zhang, Hui
2012-07-01
To make progress in understanding knot theory, we need to interact with the projected representations of mathematical knots, which are continuous in three dimensions (3-D) but significantly interrupted in the projective images. One way to achieve such a goal is to design an interactive system that allows us to sketch two-dimensional (2-D) knot diagrams by taking advantage of a collision-sensing controller and explore their underlying smooth structures through a continuous motion. Recent advances of interaction techniques have been made that allow progress in this direction. Pseudohaptics that simulate haptic effects using pure visual feedback can be used to develop such an interactive system. We outline one such pseudohaptic knot diagram interface. Our interface derives from the familiar pencil-and-paper process of drawing 2-D knot diagrams and provides haptic-like sensations to facilitate the creation and exploration of knot diagrams. A centerpiece of the interaction model simulates a physically reactive mouse cursor, which is exploited to resolve the apparent conflict between the continuous structure of the actual smooth knot and the visual discontinuities in the knot diagram representation. Another value in exploiting pseudohaptics is that an acceleration (or deceleration) of the mouse cursor (or surface locator) can be used to indicate the slope of the curve (or surface) of which the projective image is being explored. By exploiting these additional visual cues, we proceed to a full-featured extension to a pseudohaptic four-dimensional (4-D) visualization system that simulates the continuous navigation on 4-D objects and allows us to sense the bumps and holes in the fourth dimension. Preliminary tests of the software show that main features of the interface overcome some expected perceptual limitations in our interaction with 2-D knot diagrams of 3-D knots and 3-D projective images of 4-D mathematical objects.
Interactive 3D visualization for theoretical virtual observatories
NASA Astrophysics Data System (ADS)
Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.
2018-06-01
Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.
Interactive 3D visualization speeds well, reservoir planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petzet, G.A.
1997-11-24
Texaco Exploration and Production has begun making expeditious analyses and drilling decisions that result from interactive, large-screen visualization of seismic and other three-dimensional data. A pumpkin-shaped room, or pod, inside a 3,500 sq ft state-of-the-art facility in southwest Houston houses a supercomputer and projection equipment that Texaco said will help its people sharply reduce 3D seismic project cycle time, boost production from existing fields, and find more reserves. Oil- and gas-related applications of the visualization center include reservoir engineering, plant walkthrough simulation for facilities/piping design, and new-field exploration. The center houses a Silicon Graphics Onyx2 InfiniteReality supercomputer configured with 8 processors, 3 graphics pipelines, and 6 gigabytes of main memory.
VISTILES: Coordinating and Combining Co-located Mobile Devices for Visual Data Exploration.
Langner, Ricardo; Horak, Tom; Dachselt, Raimund
2017-08-29
We present VISTILES, a conceptual framework that uses a set of mobile devices to distribute and coordinate visualization views for the exploration of multivariate data. In contrast to desktop-based interfaces for information visualization, mobile devices offer the potential to provide a dynamic and user-defined interface supporting co-located collaborative data exploration with different individual workflows. As part of our framework, we contribute concepts that enable users to interact with coordinated & multiple views (CMV) that are distributed across several mobile devices. The major components of the framework are: (i) dynamic and flexible layouts for CMV focusing on the distribution of views and (ii) an interaction concept for smart adaptations and combinations of visualizations utilizing explicit side-by-side arrangements of devices. As a result, users can benefit from the possibility to combine devices and organize them in meaningful spatial layouts. Furthermore, we present a web-based prototype implementation as a specific instance of our concepts. This implementation provides a practical application case enabling users to explore a multivariate data collection. We also illustrate the design process including feedback from a preliminary user study, which informed the design of both the concepts and the final prototype.
Falcon: A Temporal Visual Analysis System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A.
2016-09-05
Flexible visual exploration of long, high-resolution time series from multiple sensor streams is a challenge in several domains. Falcon is a visual analytics approach that helps researchers acquire a deep understanding of patterns in log and imagery data. Falcon allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations with multiple levels of detail. These capabilities are applicable to the analysis of any quantitative time series.
Visual Links: Discovery in Art and Science.
ERIC Educational Resources Information Center
Dake, Dennis M.
Some specific aspects of the process of discovery are explored as they are experienced in the visual arts and the physical sciences. Both fields use the same visual/brain processing system, and both disciplines share an imaginative and productive interest in the disciplined use of imagistic thinking. Many productive interactions between visual…
Visual Juxtaposition as Qualitative Inquiry in Educational Research
ERIC Educational Resources Information Center
Metcalfe, Amy Scott
2015-01-01
Visual juxtaposition is inquiry through contrast, facilitated by side-by-side positioning of two images, or images and text. When combined with a theoretical foundation that explores interactions between the material and discursive elements of visual data, juxtaposition creates opportunities for qualitative analysis that are not as readily…
NASA Astrophysics Data System (ADS)
West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram
2014-02-01
Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large scale data exploration in which users interact with data by forming multiscale patterns that they alternatively disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real-time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid dynamics systems.
Visualizing Biological Data in Museums: Visitor Learning with an Interactive Tree of Life Exhibit
ERIC Educational Resources Information Center
Horn, Michael S.; Phillips, Brenda C.; Evans, Evelyn Margaret; Block, Florian; Diamond, Judy; Shen, Chia
2016-01-01
In this study, we investigate museum visitor learning and engagement at an interactive visualization of an evolutionary tree of life consisting of over 70,000 species. The study was conducted at two natural history museums where visitors collaboratively explored the tree of life using direct touch gestures on a multi-touch tabletop display. In the…
Matsumiya, Kazumichi
2013-10-01
Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.
TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections.
Kim, Minjeong; Kang, Kyeongpil; Park, Deokgun; Choo, Jaegul; Elmqvist, Niklas
2017-01-01
Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets.
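For reference, the baseline topic-extraction step that TopicLens accelerates can be sketched with standard scikit-learn NMF on TF-IDF vectors. The documents below are toy examples, and the paper's efficient, interactively steerable NMF and embedding variants are not reproduced here.

```python
# Baseline NMF topic extraction (standard scikit-learn; the paper contributes
# faster, lens-driven variants that are not reproduced in this sketch).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [
    "interactive lens for exploring document collections",
    "topic models summarize large text corpora",
    "embedding documents in two dimensions for visualization",
    "matrix factorization uncovers latent topics",
    "users steer the analysis through direct interaction",
    "scatterplots show clusters of related documents",
]

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(docs)
nmf = NMF(n_components=3, init="nndsvda", random_state=0)
doc_topic = nmf.fit_transform(tfidf)       # per-document topic weights

# Print the strongest terms for each extracted topic.
terms = vec.get_feature_names_out()
for k, row in enumerate(nmf.components_):
    top = row.argsort()[::-1][:4]
    print(f"topic {k}:", ", ".join(terms[i] for i in top))
```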
mHealth Visual Discovery Dashboard.
Fang, Dezhi; Hohman, Fred; Polack, Peter; Sarker, Hillol; Kahng, Minsuk; Sharmin, Moushumi; al'Absi, Mustafa; Chau, Duen Horng
2017-09-01
We present Discovery Dashboard, a visual analytics system for exploring large volumes of time series data from mobile medical field studies. Discovery Dashboard offers interactive exploration tools and a data mining motif discovery algorithm to help researchers formulate hypotheses, discover trends and patterns, and ultimately gain a deeper understanding of their data. Discovery Dashboard emphasizes user freedom and flexibility during the data exploration process and enables researchers to do things previously challenging or impossible to do - in the web-browser and in real time. We demonstrate our system visualizing data from a mobile sensor study conducted at the University of Minnesota that included 52 participants who were trying to quit smoking.
Geometric quantification of features in large flow fields.
Kendall, Wesley; Huang, Jian; Peterka, Tom
2012-01-01
Interactive exploration of flow features in large-scale 3D unsteady-flow data is one of the most challenging visualization problems today. To comprehensively explore the complex feature spaces in these datasets, a proposed system employs a scalable framework for investigating a multitude of characteristics from traced field lines. This capability supports the examination of various neighborhood-based geometric attributes in concert with other scalar quantities. Such an analysis wasn't previously possible because of the large computational overhead and I/O requirements. The system integrates visual analytics methods by letting users procedurally and interactively describe and extract high-level flow features. An exploration of various phenomena in a large global ocean-modeling simulation demonstrates the approach's generality and expressiveness as well as its efficacy.
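A minimal sketch of the kind of per-line geometric attributes such a framework derives from traced field lines (arc length and discrete curvature), which users could then filter on to extract features; the framework's scalable, parallel implementation is not reproduced.

```python
# Simple geometric attributes of a traced field line, usable as filter criteria
# (illustrative only; not the paper's scalable implementation).
import numpy as np

def fieldline_attributes(points):
    """points: (n, 3) array of positions along one traced field line."""
    v = np.gradient(points, axis=0)                 # discrete velocity
    a = np.gradient(v, axis=0)                      # discrete acceleration
    speed = np.linalg.norm(v, axis=1)
    curvature = np.linalg.norm(np.cross(v, a), axis=1) / (speed ** 3 + 1e-12)
    arc_length = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    return {"arc_length": float(arc_length),
            "mean_curvature": float(curvature.mean()),
            "max_curvature": float(curvature.max())}

# Example: a helical field line is long and uniformly curved.
t = np.linspace(0, 4 * np.pi, 400)
helix = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
print(fieldline_attributes(helix))

# Feature extraction as attribute filtering: keep strongly curved lines.
lines = [helix, np.column_stack([t, t, t])]        # a helix and a straight line
curved = [ln for ln in lines if fieldline_attributes(ln)["mean_curvature"] > 0.5]
```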
Belle2VR: A Virtual-Reality Visualization of Subatomic Particle Physics in the Belle II Experiment.
Duer, Zach; Piilonen, Leo; Glasson, George
2018-05-01
Belle2VR is an interactive virtual-reality visualization of subatomic particle physics, designed by an interdisciplinary team as an educational tool for learning about and exploring subatomic particle collisions. This article describes the tool, discusses visualization design decisions, and outlines our process for collaborative development.
DOT National Transportation Integrated Search
2009-12-01
The goals of integration should be: supporting domain-oriented data analysis through the use of a knowledge-augmented visual analytics system. In this project, we focus on providing interactive data exploration for bridge management. ...
The Effects of Pictorial Complexity and Cognitive Style on Visual Recall Memory.
ERIC Educational Resources Information Center
Jesky, Romaine R.; Berry, Louis H.
The effect of the interaction between cognitive style differences (field dependence/field independence) and various degrees of visual complexity on pictorial recall memory was explored using three sets of visuals in three different formats--line drawing, black and white, and color. The subjects were 86 undergraduate students enrolled in two core…
What you say matters: exploring visual-verbal interactions in visual working memory.
Mate, Judit; Allen, Richard J; Baqués, Josep
2012-01-01
The aim of this study was to explore whether the content of a simple concurrent verbal load task determines the extent of its interference on memory for coloured shapes. The task consisted of remembering four visual items while repeating aloud a pair of words that varied in terms of imageability and relatedness to the task set. At test, a cue appeared that was either the colour or the shape of one of the previously seen objects, with participants required to select the object's other feature from a visual array. During encoding and retention, there were four verbal load conditions: (a) a related, shape-colour pair (from outside the experimental set, i.e., "pink square"); (b) a pair of unrelated but visually imageable, concrete, words (i.e., "big elephant"); (c) a pair of unrelated and abstract words (i.e., "critical event"); and (d) no verbal load. Results showed differential effects of these verbal load conditions. In particular, imageable words (concrete and related conditions) interfered to a greater degree than abstract words. Possible implications for how visual working memory interacts with verbal memory and long-term memory are discussed.
NASA Astrophysics Data System (ADS)
Matott, L. S.; Hymiak, B.; Reslink, C. F.; Baxter, C.; Aziz, S.
2012-12-01
As part of the NSF-sponsored 'URGE (Undergraduate Research Group Experiences) to Compute' program, Dr. Matott has been collaborating with talented Math majors to explore the design of cost-effective systems to safeguard groundwater supplies from contaminated sites. Such activity is aided by a combination of groundwater modeling, simulation-based optimization, and high-performance computing - disciplines largely unfamiliar to the students at the outset of the program. To help train and engage the students, a number of interactive and graphical software packages were utilized. Examples include: (1) a tutorial for exploring the behavior of evolutionary algorithms and other heuristic optimizers commonly used in simulation-based optimization; (2) an interactive groundwater modeling package for exploring alternative pump-and-treat containment scenarios at a contaminated site in Billings, Montana; (3) the R software package for visualizing various concepts related to subsurface hydrology; and (4) a job visualization tool for exploring the behavior of numerical experiments run on a large distributed computing cluster. Further engagement and excitement in the program was fostered by entering (and winning) a computer art competition run by the Coalition for Academic Scientific Computation (CASC). The winning submission visualizes an exhaustively mapped optimization cost surface and dramatically illustrates the phenomena of artificial minima - valley locations that correspond to designs whose costs are only partially optimal.
Intuitive Exploration of Volumetric Data Using Dynamic Galleries.
Jönsson, Daniel; Falk, Martin; Ynnerman, Anders
2016-01-01
In this work we present a volume exploration method designed to be used by novice users and visitors to science centers and museums. The volumetric digitalization of artifacts in museums is of rapidly increasing interest as enhanced user experience through interactive data visualization can be achieved. This is, however, a challenging task since the vast majority of visitors are not familiar with the concepts commonly used in data exploration, such as mapping of visual properties from values in the data domain using transfer functions. Interacting in the data domain is an effective way to filter away undesired information but it is difficult to predict where the values lie in the spatial domain. In this work we make extensive use of dynamic previews instantly generated as the user explores the data domain. The previews allow the user to predict what effect changes in the data domain will have on the rendered image without being aware that visual parameters are set in the data domain. Each preview represents a subrange of the data domain where overview and details are given on demand through zooming and panning. The method has been designed with touch interfaces as the target platform for interaction. We provide a qualitative evaluation performed with visitors to a science center to show the utility of the approach.
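The data-domain previews described above can be sketched as follows: a simple opacity transfer function is applied over a gallery of value subranges, and each preview is a cheap projection of the resulting opacities. This is illustrative only; the random volume stands in for a scanned artifact, and the paper's rendering and touch interaction are not reproduced.

```python
# Opacity-only previews for subranges of the data (value) domain.
import numpy as np

def transfer_function(values, lo, hi):
    """Opacity ramp: 0 below lo, rising linearly to 1 at hi and above."""
    return np.clip((values - lo) / max(hi - lo, 1e-12), 0.0, 1.0)

def preview(volume, lo, hi):
    """Cheap preview image: maximum-opacity projection along one axis."""
    return transfer_function(volume, lo, hi).max(axis=0)

volume = np.random.rand(64, 64, 64)          # stand-in for a scanned artifact
edges = np.linspace(0.0, 1.0, 5)             # four adjacent value subranges
gallery = [preview(volume, a, b) for a, b in zip(edges[:-1], edges[1:])]
print([img.shape for img in gallery])        # four 64x64 preview images
```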
ERIC Educational Resources Information Center
Oh, Yunjin; Lee, Soon Min
2016-01-01
This study explored whether learning-related anxiety would negatively affect intention to persist with e-learning among students with visual impairment, and examined the roles of three online interactions in the relationship between learning-related anxiety and intention to persist with e-learning. For this study, a convenience sample of…
Visual Analytics for Heterogeneous Geoscience Data
NASA Astrophysics Data System (ADS)
Pan, Y.; Yu, L.; Zhu, F.; Rilee, M. L.; Kuo, K. S.; Jiang, H.; Yu, H.
2017-12-01
Geoscience data obtained from diverse sources have been routinely leveraged by scientists to study various phenomena. The principal data sources include observations and model simulation outputs. These data are characterized by spatiotemporal heterogeneity originating from different instrument design specifications and/or computational model requirements used in data generation processes. Such inherent heterogeneity poses several challenges in exploring and analyzing geoscience data. First, scientists often wish to identify features or patterns co-located among multiple data sources to derive and validate certain hypotheses. Heterogeneous data make it a tedious task to search such features in dissimilar datasets. Second, features of geoscience data are typically multivariate. It is challenging to tackle the high dimensionality of geoscience data and explore the relations among multiple variables in a scalable fashion. Third, there is a lack of transparency in traditional automated approaches, such as feature detection or clustering, in that scientists cannot intuitively interact with their analysis processes and interpret results. To address these issues, we present a new scalable approach that can assist scientists in analyzing voluminous and diverse geoscience data. We expose a high-level query interface that allows users to easily express their customized queries to search features of interest across multiple heterogeneous datasets. For identified features, we develop a visualization interface that enables interactive exploration and analytics in a linked-view manner. Specific visualization techniques, ranging from scatter plots to parallel coordinates, are employed in each view to allow users to explore various aspects of features. Different views are linked and refreshed according to user interactions in any individual view. In such a manner, a user can interactively and iteratively gain understanding of the data through a variety of visual analytics operations. We demonstrate with use cases how scientists can combine the query and visualization interfaces to enable a customized workflow facilitating studies using heterogeneous geoscience datasets.
Exploring Gigabyte Datasets in Real Time: Architectures, Interfaces and Time-Critical Design
NASA Technical Reports Server (NTRS)
Bryson, Steve; Gerald-Yamasaki, Michael (Technical Monitor)
1998-01-01
Architectures and Interfaces: The implications of real-time interaction on software architecture design: decoupling of interaction/graphics and computation into asynchronous processes. The performance requirements of graphics and computation for interaction. Time management in such an architecture. Examples of how visualization algorithms must be modified for high performance. Brief survey of interaction techniques and design, including direct manipulation and manipulation via widgets. The talk discusses how human factors considerations drove the design and implementation of the virtual wind tunnel. Time-Critical Design: A survey of time-critical techniques for both computation and rendering. Emphasis on the assignment of a time budget to both the overall visualization environment and to each individual visualization technique in the environment. The estimation of the benefit and cost of an individual technique. Examples of the modification of visualization algorithms to allow time-critical control.
NASA Astrophysics Data System (ADS)
Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel
2017-03-01
Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. In this study we describe the design, implementation, and evaluation of an interactive and stereoscopic visualization platform for exploring and understanding human anatomy. This system can present medical imaging data in three dimensions and allows for direct physical interaction and manipulation by the viewer. This should provide numerous benefits over traditional, 2D display and interaction modalities, and in our analysis, we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.
VISUAL DATA MINING IN ATMOSPHERIC SCIENCE DATA
This paper discusses the use of simple visual tools to explore multivariate spatially-referenced data. It describes interactive approaches such as linked brushing, and dynamic methods such as the grand tour, applied to studying the Comprehensive Ocean-Atmosphere Data Set (COADS).
Modeling and evaluating user behavior in exploratory visual analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reda, Khairi; Johnson, Andrew E.; Papka, Michael E.
Empirical evaluation methods for visualizations have traditionally focused on assessing the outcome of the visual analytic process as opposed to characterizing how that process unfolds. There are only a handful of methods that can be used to systematically study how people use visualizations, making it difficult for researchers to capture and characterize the subtlety of cognitive and interaction behaviors users exhibit during visual analysis. To validate and improve visualization design, however, it is important for researchers to be able to assess and understand how users interact with visualization systems under realistic scenarios. This paper presents a methodology for modeling and evaluating the behavior of users in exploratory visual analysis. We model visual exploration using a Markov chain process comprising transitions between mental, interaction, and computational states. These states and the transitions between them can be deduced from a variety of sources, including verbal transcripts, videos and audio recordings, and log files. This model enables the evaluator to characterize the cognitive and computational processes that are essential to insight acquisition in exploratory visual analysis, and reconstruct the dynamics of interaction between the user and the visualization system. We illustrate this model with two exemplar user studies, and demonstrate the qualitative and quantitative analytical tools it affords.
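A minimal sketch of the modeling step: given a coded sequence of analyst states (the state labels below are hypothetical, not the paper's coding scheme), estimate the Markov transition matrix by counting observed transitions and row-normalizing.

```python
# Estimate a Markov transition matrix from a coded session of analyst states.
import numpy as np

states = ["observe", "hypothesize", "interact", "compute"]
index = {s: i for i, s in enumerate(states)}

# A coded session, e.g. transcribed from logs or think-aloud video (toy data).
session = ["observe", "interact", "compute", "observe", "hypothesize",
           "interact", "compute", "observe", "interact", "observe"]

counts = np.zeros((len(states), len(states)))
for a, b in zip(session, session[1:]):
    counts[index[a], index[b]] += 1

# Row-normalize to transition probabilities (rows with no visits stay zero).
row_sums = counts.sum(axis=1, keepdims=True)
transition = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(transition.round(2))
```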
Visualization of metabolic interaction networks in microbial communities using VisANT 5.0
Granger, Brian R.; Chang, Yi-Chien; Wang, Yan; ...
2016-04-15
Here, the complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT's unique meta-graph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the "symbiotic layout" of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues.
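The cross-feeding interdependency mentioned above can be represented, in miniature, as a directed species-metabolite graph. The sketch below uses NetworkX with made-up fluxes and generic node names; it is not COMETS output and not VisANT's meta-graph implementation.

```python
# Toy cross-feeding network: species secrete metabolites that the partner consumes.
import networkx as nx

G = nx.DiGraph()
# Edge direction encodes flow: species -> metabolite (secretion),
# metabolite -> species (uptake). Flux values are made up.
edges = [
    ("E_coli", "acetate", 1.8), ("acetate", "S_enterica", 1.8),
    ("S_enterica", "methionine", 0.2), ("methionine", "E_coli", 0.2),
]
for u, v, flux in edges:
    G.add_edge(u, v, flux=flux)

# A metabolite is "cross-fed" if its producers and consumers are different species.
species = {"E_coli", "S_enterica"}
cross_fed = [m for m in G if m not in species
             and {u for u, _ in G.in_edges(m)} != {w for _, w in G.out_edges(m)}]
print("cross-fed metabolites:", cross_fed)
```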
ERIC Educational Resources Information Center
Mirel, Barbara; Kumar, Anuj; Nong, Paige; Su, Gang; Meng, Fan
2016-01-01
Life scientists increasingly use visual analytics to explore large data sets and generate hypotheses. Undergraduate biology majors should be learning these same methods. Yet visual analytics is one of the most underdeveloped areas of undergraduate biology education. This study sought to determine the feasibility of undergraduate biology majors…
PedVizApi: a Java API for the interactive, visual analysis of extended pedigrees.
Fuchsberger, Christian; Falchi, Mario; Forer, Lukas; Pramstaller, Peter P
2008-01-15
PedVizApi is a Java API (application program interface) for the visual analysis of large and complex pedigrees. It provides all the necessary functionality for the interactive exploration of extended genealogies. While available packages are mostly focused on a static representation or cannot be added to an existing application, PedVizApi is a highly flexible open source library for the efficient construction of visual-based applications for the analysis of family data. An extensive demo application and an R interface are provided. http://www.pedvizapi.org
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Elena S.; McCue, Lee Ann; Rutledge, Alexandra C.
2012-04-25
Visual Exploration and Statistics to Promote Annotation (VESPA) is an interactive visual analysis software tool that facilitates the discovery of structural mis-annotations in prokaryotic genomes. VESPA integrates high-throughput peptide-centric proteomics data and oligo-centric or RNA-Seq transcriptomics data into a genomic context. The data may be interrogated via visual analysis across multiple levels of genomic resolution, linked searches, exports and interaction with BLAST to rapidly identify location of interest within the genome and evaluate potential mis-annotations.
KML Tours: A New Platform for Exploring and Sharing Geospatial Data
NASA Astrophysics Data System (ADS)
Barcay, D. P.; Weiss-Malik, M.
2009-12-01
Google Earth and other virtual globes have allowed millions of people to explore the world from their own home. This technology has also raised the bar for professional visualizations: enabling interactive 3D visualizations to be created from massive data-sets, and shared using the KML language. For academics and professionals alike, an engaging presentation of your geospatial data is generally expected and can be the most effective form of advertisement. To that end, we released 'Touring' in Google Earth 5.0: a new medium for cinematic expression, visualized in Google Earth and written as extensions to the KML language. In a KML tour, the author has fine-grained control over the entire visual experience: precisely moving the virtual camera through the world while dynamically modifying the content, style, position, and visibility of the displayed data. An author can synchronize audio to this experience, bringing further immersion to a visualization. KML tours can help engage a broad user-base and convey subtle concepts that aren't immediately apparent in traditional geospatial content. Unlike a pre-rendered video, a KML Tour maintains the rich interactivity of Google Earth, allowing users to continue exploring your content, and to mash-up other content with your visualization. This session will include conceptual explanations of the Touring feature in Google Earth, the structure of the touring KML extensions, as well as examples of compelling tours.
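As a rough sense of what the touring extensions look like, the following sketch writes a minimal gx:Tour document from Python. The coordinates, durations, and output filename are placeholders, and the element set shown is only the small subset of the touring schema needed for a fly-through; it is not taken from the session materials.

```python
# Minimal sketch of a KML tour document written from Python.
# Coordinates, durations, and the output filename are illustrative placeholders.
waypoints = [
    # (longitude, latitude, altitude_m, fly_duration_s)
    (-122.0840, 37.4220, 500.0, 4.0),
    (-122.0900, 37.4300, 800.0, 6.0),
]

flytos = "\n".join(
    f"""      <gx:FlyTo>
        <gx:duration>{dur}</gx:duration>
        <Camera>
          <longitude>{lon}</longitude>
          <latitude>{lat}</latitude>
          <altitude>{alt}</altitude>
          <tilt>45</tilt>
        </Camera>
      </gx:FlyTo>"""
    for lon, lat, alt, dur in waypoints
)

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:gx="http://www.google.com/kml/ext/2.2">
  <gx:Tour>
    <name>Example tour</name>
    <gx:Playlist>
{flytos}
    </gx:Playlist>
  </gx:Tour>
</kml>"""

with open("example_tour.kml", "w") as f:  # open in Google Earth to play the tour
    f.write(kml)
```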
Integrated web visualizations for protein-protein interaction databases.
Jeanquartier, Fleur; Jean-Quartier, Claire; Holzinger, Andreas
2015-06-16
Understanding living systems is crucial for curing diseases. To achieve this task we have to understand biological networks based on protein-protein interactions. Bioinformatics has come up with a great number of databases and tools that support analysts in exploring protein-protein interactions on an integrated level for knowledge discovery. They provide predictions and correlations, indicate possibilities for future experimental research and fill the gaps to complete the picture of biochemical processes. There are numerous and huge databases of protein-protein interactions used to gain insights into answering some of the many questions of systems biology. Many computational resources integrate interaction data with additional information on molecular background. However, the vast number of diverse Bioinformatics resources poses an obstacle to the goal of understanding. We present a survey of databases that enable the visual analysis of protein networks. We selected M=10 out of N=53 resources supporting visualization and tested them against the following set of criteria: interoperability, data integration, quantity of possible interactions, data visualization quality and data coverage. The study reveals differences in usability, visualization features and quality as well as the quantity of interactions. StringDB is the recommended first choice. CPDB presents a comprehensive dataset and IntAct lets the user change the network layout. A comprehensive comparison table is available via web. The supplementary table can be accessed on http://tinyurl.com/PPI-DB-Comparison-2015. Only some web resources featuring graph visualization can be successfully applied to interactive visual analysis of protein-protein interactions. Study results underline the necessity for further enhancements of visualization integration in biochemical analysis tools. Identified challenges include data comprehensiveness, confidence, interactive features, and visualization maturity.
Monaco, Simona; Gallivan, Jason P; Figley, Teresa D; Singhal, Anthony; Culham, Jody C
2017-11-29
The role of the early visual cortex and higher-order occipitotemporal cortex has been studied extensively for visual recognition and to a lesser degree for haptic recognition and visually guided actions. Using a slow event-related fMRI experiment, we investigated whether tactile and visual exploration of objects recruit the same "visual" areas (and in the case of visual cortex, the same retinotopic zones) and if these areas show reactivation during delayed actions in the dark toward haptically explored objects (and if so, whether this reactivation might be due to imagery). We examined activation during visual or haptic exploration of objects and action execution (grasping or reaching) separated by an 18 s delay. Twenty-nine human volunteers (13 females) participated in this study. Participants had their eyes open and fixated on a point in the dark. The objects were placed below the fixation point and accordingly visual exploration activated the cuneus, which processes retinotopic locations in the lower visual field. Strikingly, the occipital pole (OP), representing foveal locations, showed higher activation for tactile than visual exploration, although the stimulus was unseen and location in the visual field was peripheral. Moreover, the lateral occipital tactile-visual area (LOtv) showed comparable activation for tactile and visual exploration. Psychophysiological interaction analysis indicated that the OP showed stronger functional connectivity with anterior intraparietal sulcus and LOtv during the haptic than visual exploration of shapes in the dark. After the delay, the cuneus, OP, and LOtv showed reactivation that was independent of the sensory modality used to explore the object. These results show that haptic actions not only activate "visual" areas during object touch, but also that this information appears to be used in guiding grasping actions toward targets after a delay. SIGNIFICANCE STATEMENT Visual presentation of an object activates shape-processing areas and retinotopic locations in early visual areas. Moreover, if the object is grasped in the dark after a delay, these areas show "reactivation." Here, we show that these areas are also activated and reactivated for haptic object exploration and haptically guided grasping. Touch-related activity occurs not only in the retinotopic location of the visual stimulus, but also at the occipital pole (OP), corresponding to the foveal representation, even though the stimulus was unseen and located peripherally. That is, the same "visual" regions are implicated in both visual and haptic exploration; however, touch also recruits high-acuity central representation within early visual areas during both haptic exploration of objects and subsequent actions toward them. Functional connectivity analysis shows that the OP is more strongly connected with ventral and dorsal stream areas when participants explore an object in the dark than when they view it. Copyright © 2017 the authors 0270-6474/17/3711572-20$15.00/0.
Zhang, Yiye; Padman, Rema
2017-01-01
Patients with multiple chronic conditions (MCC) pose an increasingly complex health management challenge worldwide, particularly due to the significant gap in our understanding of how to provide coordinated care. Drawing on our prior research on learning data-driven clinical pathways from actual practice data, this paper describes a prototype, interactive platform for visualizing the pathways of MCC to support shared decision making. Created using a Python web framework, a JavaScript library, and our clinical pathway learning algorithm, the visualization platform allows clinicians and patients to learn the dominant patterns of co-progression of multiple clinical events from their own data, and interactively explore and interpret the pathways. We demonstrate functionalities of the platform using a cluster of 36 patients, identified from a dataset of 1,084 patients, who are diagnosed with at least chronic kidney disease, hypertension, and diabetes. Future evaluation studies will explore the use of this platform to better understand and manage MCC.
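The abstract names a Python web framework and a JavaScript library without giving code. A minimal sketch of what the server side might look like, assuming Flask and an already-learned pathway graph, is shown below; the endpoint name and the pathway data are invented, not the authors' system.

```python
# Hypothetical minimal server for an interactive clinical-pathway view (assumes Flask).
# The pathway data below is invented; a real deployment would load learned pathways.
from flask import Flask, jsonify

app = Flask(__name__)

# Toy "clinical pathway" as a node-link structure a JavaScript front end could render.
PATHWAY = {
    "nodes": [
        {"id": "CKD_stage_2"},
        {"id": "hypertension_dx"},
        {"id": "diabetes_dx"},
    ],
    "links": [
        {"source": "hypertension_dx", "target": "CKD_stage_2", "weight": 0.6},
        {"source": "diabetes_dx", "target": "CKD_stage_2", "weight": 0.4},
    ],
}

@app.route("/api/pathway/<cluster_id>")
def pathway(cluster_id):
    # A real system would look up the learned pathway for this patient cluster.
    return jsonify(PATHWAY)

if __name__ == "__main__":
    app.run(debug=True)
```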
SOMFlow: Guided Exploratory Cluster Analysis with Self-Organizing Maps and Analytic Provenance.
Sacha, Dominik; Kraus, Matthias; Bernard, Jurgen; Behrisch, Michael; Schreck, Tobias; Asano, Yuki; Keim, Daniel A
2018-01-01
Clustering is a core building block for data analysis, aiming to extract otherwise hidden structures and relations from raw datasets, such as particular groups that can be effectively related, compared, and interpreted. A plethora of visual-interactive cluster analysis techniques has been proposed to date, however, arriving at useful clusterings often requires several rounds of user interactions to fine-tune the data preprocessing and algorithms. We present a multi-stage Visual Analytics (VA) approach for iterative cluster refinement together with an implementation (SOMFlow) that uses Self-Organizing Maps (SOM) to analyze time series data. It supports exploration by offering the analyst a visual platform to analyze intermediate results, adapt the underlying computations, iteratively partition the data, and to reflect previous analytical activities. The history of previous decisions is explicitly visualized within a flow graph, allowing to compare earlier cluster refinements and to explore relations. We further leverage quality and interestingness measures to guide the analyst in the discovery of useful patterns, relations, and data partitions. We conducted two pair analytics experiments together with a subject matter expert in speech intonation research to demonstrate that the approach is effective for interactive data analysis, supporting enhanced understanding of clustering results as well as the interactive process itself.
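SOMFlow itself is not reproduced here; the sketch below only illustrates the underlying SOM step that such an analysis starts from, using the third-party MiniSom package on synthetic feature vectors (the grid size, feature dimension, and data are assumptions for illustration).

```python
# Sketch of the SOM step underlying a SOMFlow-style analysis (not the SOMFlow tool itself).
# Uses the MiniSom package (pip install minisom); the data here is synthetic.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
# Pretend feature vectors for 200 short time-series segments (e.g., pitch contours).
data = rng.normal(size=(200, 16))

som = MiniSom(x=6, y=6, input_len=16, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(data, num_iteration=2000)

# Map each segment to its best-matching unit; cells group similar segments,
# which is the starting point for the iterative refinement a flow graph would track.
cells = {}
for i, v in enumerate(data):
    cells.setdefault(som.winner(v), []).append(i)

for cell, members in sorted(cells.items()):
    print(cell, len(members), "segments")
```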
A web-portal for interactive data exploration, visualization, and hypothesis testing
Bartsch, Hauke; Thompson, Wesley K.; Jernigan, Terry L.; Dale, Anders M.
2014-01-01
Clinical research studies generate data that need to be shared and statistically analyzed by their participating institutions. The distributed nature of research and the different domains involved present major challenges to data sharing, exploration, and visualization. The Data Portal infrastructure was developed to support ongoing research in the areas of neurocognition, imaging, and genetics. Researchers benefit from the integration of data sources across domains, the explicit representation of knowledge from domain experts, and user interfaces providing convenient access to project specific data resources and algorithms. The system provides an interactive approach to statistical analysis, data mining, and hypothesis testing over the lifetime of a study and fulfills a mandate of public sharing by integrating data sharing into a system built for active data exploration. The web-based platform removes barriers for research and supports the ongoing exploration of data. PMID:24723882
Meghdadi, Amir H; Irani, Pourang
2013-12-01
We propose a novel video visual analytics system for interactive exploration of surveillance video data. Our approach consists of providing analysts with various views of information related to moving objects in a video. To do this we first extract each object's movement path. We visualize each movement by (a) creating a single action shot image (a still image that coalesces multiple frames), (b) plotting its trajectory in a space-time cube and (c) displaying an overall timeline view of all the movements. The action shots provide a still view of the moving object while the path view presents movement properties such as speed and location. We also provide tools for spatial and temporal filtering based on regions of interest. This allows analysts to filter out large amounts of movement activities while the action shot representation summarizes the content of each movement. We incorporated this multi-part visual representation of moving objects in sViSIT, a tool to facilitate browsing through the video content by interactive querying and retrieval of data. Based on our interaction with security personnel who routinely interact with surveillance video data, we identified some of the most common tasks performed. This resulted in designing a user study to measure time-to-completion of the various tasks. These generally required searching for specific events of interest (targets) in videos. Fourteen different tasks were designed and a total of 120 min of surveillance video were recorded (indoor and outdoor locations recording movements of people and vehicles). The time-to-completion of these tasks was compared against manual fast-forward video browsing guided by movement detection. We demonstrate how our system can facilitate lengthy video exploration and significantly reduce browsing time to find events of interest. Reports from expert users identify positive aspects of our approach, which we summarize in our recommendations for future video visual analytics systems.
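The "action shot" idea, coalescing multiple frames of a moving object into one still, can be sketched with NumPy alone: keep, per pixel, the frame that deviates most from a median background. This is only a conceptual illustration with synthetic frames, not the authors' implementation.

```python
# Conceptual "action shot": per pixel, keep the frame most different from the background.
# Frames here are synthetic grayscale arrays; a real pipeline would decode video frames.
import numpy as np

rng = np.random.default_rng(1)
frames = rng.normal(128, 5, size=(30, 120, 160))        # 30 noisy background frames
for t in range(30):                                      # paint a bright moving blob
    x = 20 + 4 * t
    frames[t, 50:70, x:x + 12] = 255

background = np.median(frames, axis=0)                   # static background estimate
deviation = np.abs(frames - background)                  # motion energy per frame/pixel
best_frame = np.argmax(deviation, axis=0)                # which frame moved most here

rows, cols = np.indices(background.shape)
action_shot = frames[best_frame, rows, cols]             # coalesced still image

print(action_shot.shape, action_shot.max())
```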
A Web-Based Interactive Platform for Co-Clustering Spatio-Temporal Data
NASA Astrophysics Data System (ADS)
Wu, X.; Poorthuis, A.; Zurita-Milla, R.; Kraak, M.-J.
2017-09-01
Since current studies on clustering analysis mainly focus on exploring spatial or temporal patterns separately, a co-clustering algorithm is utilized in this study to enable the concurrent analysis of spatio-temporal patterns. To allow users to adopt and adapt the algorithm for their own analysis, it is integrated within the server side of an interactive web-based platform. The client side of the platform, running within any modern browser, is a graphical user interface (GUI) with multiple linked visualizations that facilitates the understanding, exploration and interpretation of the raw dataset and co-clustering results. Users can also upload their own datasets and adjust clustering parameters within the platform. To illustrate the use of this platform, an annual temperature dataset from 28 weather stations over 20 years in the Netherlands is used. After the dataset is loaded, it is visualized in a set of linked visualizations: a geographical map, a timeline and a heatmap. This aids the user in understanding the nature of their dataset and the appropriate selection of co-clustering parameters. Once the dataset is processed by the co-clustering algorithm, the results are visualized in the small multiples, a heatmap and a timeline to provide various views for better understanding and also further interpretation. Since the visualization and analysis are integrated in a seamless platform, the user can explore different sets of co-clustering parameters and instantly view the results in order to do iterative, exploratory data analysis. As such, this interactive web-based platform allows users to analyze spatio-temporal data using the co-clustering method and also helps the understanding of the results using multiple linked visualizations.
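The co-clustering step can be approximated with scikit-learn's SpectralCoclustering on a station-by-year matrix, as in the sketch below. The temperature values are synthetic and the platform's own algorithm may differ; this only shows the shape of the computation (simultaneous row and column cluster labels).

```python
# Sketch of co-clustering a station-by-year temperature matrix (synthetic data).
# The platform's own algorithm may differ; SpectralCoclustering is used for illustration.
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(0)
temps = rng.normal(10.0, 1.0, size=(28, 20))   # 28 stations x 20 years
temps[:14, :10] += 2.0                          # plant a warm block to recover

model = SpectralCoclustering(n_clusters=2, random_state=0)
model.fit(temps)

print("station clusters:", model.row_labels_)   # one label per station (row)
print("year clusters:   ", model.column_labels_)  # one label per year (column)
```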
NASA Astrophysics Data System (ADS)
Moore, C. A.; Gertman, V.; Olsoy, P.; Mitchell, J.; Glenn, N. F.; Joshi, A.; Norpchen, D.; Shrestha, R.; Pernice, M.; Spaete, L.; Grover, S.; Whiting, E.; Lee, R.
2011-12-01
Immersive virtual reality environments such as the IQ-Station or CAVE° (Cave Automated Virtual Environment) offer new and exciting ways to visualize and explore scientific data and are powerful research and educational tools. Combining remote sensing data from a range of sensor platforms in immersive 3D environments can enhance the spectral, textural, spatial, and temporal attributes of the data, which enables scientists to interact and analyze the data in ways never before possible. Visualization and analysis of large remote sensing datasets in immersive environments requires software customization for integrating LiDAR point cloud data with hyperspectral raster imagery, the generation of quantitative tools for multidimensional analysis, and the development of methods to capture 3D visualizations for stereographic playback. This study uses hyperspectral and LiDAR data acquired over the China Hat geologic study area near Soda Springs, Idaho, USA. The data are fused into a 3D image cube for interactive data exploration and several methods of recording and playback are investigated that include: 1) creating and implementing a Virtual Reality User Interface (VRUI) patch configuration file to enable recording and playback of VRUI interactive sessions within the CAVE and 2) using the LiDAR and hyperspectral remote sensing data and GIS data to create an ArcScene 3D animated flyover, where left- and right-eye visuals are captured from two independent monitors for playback in a stereoscopic player. These visualizations can be used as outreach tools to demonstrate how integrated data and geotechnology techniques can help scientists see, explore, and more adequately comprehend scientific phenomena, both real and abstract.
Head-mounted eye tracking: a new method to describe infant looking.
Franchak, John M; Kretch, Kari S; Soska, Kasey C; Adolph, Karen E
2011-01-01
Despite hundreds of studies describing infants' visual exploration of experimental stimuli, researchers know little about where infants look during everyday interactions. The current study describes the first method for studying visual behavior during natural interactions in mobile infants. Six 14-month-old infants wore a head-mounted eye-tracker that recorded gaze during free play with mothers. Results revealed that infants' visual exploration is opportunistic and depends on the availability of information and the constraints of infants' own bodies. Looks to mothers' faces were rare following infant-directed utterances but more likely if mothers were sitting at infants' eye level. Gaze toward the destination of infants' hand movements was common during manual actions and crawling, but looks toward obstacles during leg movements were less frequent. © 2011 The Authors. Child Development © 2011 Society for Research in Child Development, Inc.
FuryExplorer: visual-interactive exploration of horse motion capture data
NASA Astrophysics Data System (ADS)
Wilhelm, Nils; Vögele, Anna; Zsoldos, Rebeka; Licka, Theresia; Krüger, Björn; Bernard, Jürgen
2015-01-01
The analysis of equine motion has a long tradition. Equine biomechanics aims at detecting characteristics of horses indicative of good performance. In veterinary medicine especially, gait analysis plays an important role in diagnostics and in the emerging research on the long-term effects of athletic exercise. More recently, the incorporation of motion capture technology has contributed to easier and faster analysis, with a trend from mere observation of horses towards the analysis of multivariate time-oriented data. However, because this topic has only recently been raised within an interdisciplinary context, there is still a lack of visual-interactive interfaces to facilitate time-series data analysis and information discourse for the veterinary and biomechanics communities. In this design study, we bring visual analytics technology into the respective domains, which, to the best of our knowledge, has not been attempted before. Based on requirements developed in the domain characterization phase, we present a visual-interactive system for the exploration of horse motion data. The system provides multiple views which enable domain experts to explore frequent poses and motions, but also to drill down to interesting subsets, possibly containing unexpected patterns. We show the applicability of the system in two exploratory use cases, one on the comparison of different gait motions, and one on the analysis of lameness recovery. Finally, we present the results of a summative user study conducted in the environment of the domain experts. The overall outcome was a significant improvement in effectiveness and efficiency in the analytical workflow of the domain experts.
ERIC Educational Resources Information Center
Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin
2014-01-01
This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day…
iCanPlot: Visual Exploration of High-Throughput Omics Data Using Interactive Canvas Plotting
Sinha, Amit U.; Armstrong, Scott A.
2012-01-01
Increasing use of high throughput genomic scale assays requires effective visualization and analysis techniques to facilitate data interpretation. Moreover, existing tools often require programming skills, which discourages bench scientists from examining their own data. We have created iCanPlot, a compelling platform for visual data exploration based on the latest technologies. Using the recently adopted HTML5 Canvas element, we have developed a highly interactive tool to visualize tabular data and identify interesting patterns in an intuitive fashion without the need of any specialized computing skills. A module for geneset overlap analysis has been implemented on the Google App Engine platform: when the user selects a region of interest in the plot, the genes in the region are analyzed on the fly. The visualization and analysis are amalgamated for a seamless experience. Further, users can easily upload their data for analysis—which also makes it simple to share the analysis with collaborators. We illustrate the power of iCanPlot by showing an example of how it can be used to interpret histone modifications in the context of gene expression. PMID:22393367
Quantitative Imaging In Pathology (QUIP) | Informatics Technology for Cancer Research (ITCR)
This site hosts web accessible applications, tools and data designed to support analysis, management, and exploration of whole slide tissue images for cancer research. The following tools are included: caMicroscope: A digital pathology data management and visualization platform that enables interactive viewing of whole slide tissue images and segmentation results. caMicroscope can also be used independently of QUIP. FeatureExplorer: An interactive tool to allow patient-level feature exploration across multiple dimensions.
Usaj, Matej; Tan, Yizhao; Wang, Wen; VanderSluis, Benjamin; Zou, Albert; Myers, Chad L.; Costanzo, Michael; Andrews, Brenda; Boone, Charles
2017-01-01
Providing access to quantitative genomic data is key to ensure large-scale data validation and promote new discoveries. TheCellMap.org serves as a central repository for storing and analyzing quantitative genetic interaction data produced by genome-scale Synthetic Genetic Array (SGA) experiments with the budding yeast Saccharomyces cerevisiae. In particular, TheCellMap.org allows users to easily access, visualize, explore, and functionally annotate genetic interactions, or to extract and reorganize subnetworks, using data-driven network layouts in an intuitive and interactive manner. PMID:28325812
Usaj, Matej; Tan, Yizhao; Wang, Wen; VanderSluis, Benjamin; Zou, Albert; Myers, Chad L; Costanzo, Michael; Andrews, Brenda; Boone, Charles
2017-05-05
Providing access to quantitative genomic data is key to ensure large-scale data validation and promote new discoveries. TheCellMap.org serves as a central repository for storing and analyzing quantitative genetic interaction data produced by genome-scale Synthetic Genetic Array (SGA) experiments with the budding yeast Saccharomyces cerevisiae In particular, TheCellMap.org allows users to easily access, visualize, explore, and functionally annotate genetic interactions, or to extract and reorganize subnetworks, using data-driven network layouts in an intuitive and interactive manner. Copyright © 2017 Usaj et al.
NASA Astrophysics Data System (ADS)
Shipman, J. S.; Anderson, J. W.
2017-12-01
An ideal tool for ecologists and land managers to investigate the impacts of both projected environmental changes and policy alternatives is the creation of immersive, interactive, virtual landscapes. As a new frontier in visualizing and understanding geospatial data, virtual landscapes require a new toolbox for data visualization that includes traditional GIS tools and uncommon tools such as the Unity3d game engine. Game engines provide capabilities to not only explore data but to build and interact with dynamic models collaboratively. These virtual worlds can be used to display and illustrate data in ways that are often more understandable and plausible to both stakeholders and policy makers than traditional maps. Within this context we will present funded research that has been developed utilizing virtual landscapes for geographic visualization and decision support among varied stakeholders. We will highlight the challenges and lessons learned when developing interactive virtual environments that require large multidisciplinary team efforts with varied competences. The results will emphasize the importance of visualization and interactive virtual environments and the link with emerging research disciplines within Visual Analytics.
NASA Astrophysics Data System (ADS)
Ozturk, D.; Chaudhary, A.; Votava, P.; Kotfila, C.
2016-12-01
Jointly developed by Kitware and NASA Ames, GeoNotebook is an open source tool designed to give the maximum amount of flexibility to analysts, while dramatically simplifying the process of exploring geospatially indexed datasets. Packages like Fiona (backed by GDAL), Shapely, Descartes, Geopandas, and PySAL provide a stack of technologies for reading, transforming, and analyzing geospatial data. Combined with the Jupyter notebook and libraries like matplotlib/Basemap, it is possible to generate detailed geospatial visualizations. Unfortunately, the visualizations generated are either static or do not perform well for very large datasets. Also, this setup requires a great deal of boilerplate code to create and maintain. Other extensions exist to remedy these problems, but they provide a separate map for each input cell and do not support map interactions that feed back into the Python environment. To support interactive data exploration and visualization on large datasets we have developed an extension to the Jupyter notebook that provides a single dynamic map that can be managed from the Python environment, and that can communicate back with a server which can perform operations like data subsetting on a cloud-based cluster.
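The "boilerplate" the abstract refers to is essentially the static per-cell map pattern sketched below; the file path and column name are placeholder assumptions. GeoNotebook's contribution is replacing this with a single interactive map tied back to the Python kernel.

```python
# Typical static-map boilerplate in a notebook; "regions.geojson" and "value"
# are placeholder names for whatever geospatially indexed data is being explored.
import geopandas as gpd
import matplotlib.pyplot as plt

gdf = gpd.read_file("regions.geojson")          # read vector data (GDAL/Fiona under the hood)
gdf = gdf.to_crs(epsg=3857)                     # reproject for display

ax = gdf.plot(column="value", cmap="viridis", legend=True, figsize=(8, 6))
ax.set_axis_off()
ax.set_title("Static choropleth (one plot per notebook cell)")
plt.savefig("regions.png", dpi=150)
```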
When the display matters: A multifaceted perspective on 3D geovisualizations
NASA Astrophysics Data System (ADS)
Juřík, Vojtěch; Herman, Lukáš; Šašinka, Čeněk; Stachoň, Zdeněk; Chmelík, Jiří
2017-04-01
This study explores the influence of stereoscopic (real) 3D and monoscopic (pseudo) 3D visualization on the human ability to reckon altitude information in noninteractive and interactive 3D geovisualizations. A two-phase experiment was carried out to compare the performance of two groups of participants, one of them using the real 3D and the other one pseudo 3D visualization of geographical data. A homogeneous group of 61 psychology students, inexperienced in processing of geographical data, were tested with respect to their efficiency at identifying altitudes of the displayed landscape. The first phase of the experiment was designed as non-interactive, where static 3D visual displays were presented; the second phase was designed as interactive and the participants were allowed to explore the scene by adjusting the position of the virtual camera. The investigated variables included accuracy at altitude identification, time demands and the amount of the participant's motor activity performed during interaction with geovisualization. The interface was created using a Motion Capture system, Wii Remote Controller, widescreen projection and the passive Dolby 3D technology (for real 3D vision). The real 3D visual display was shown to significantly increase the accuracy of the landscape altitude identification in non-interactive tasks. As expected, in the interactive phase the differences in accuracy between groups flattened out due to the possibility of interaction, and there were no other statistically significant differences in completion times or motor activity. The increased number of omitted objects in the real 3D condition was further subjected to an exploratory analysis.
Chevallier, Coralie; Parish-Morris, Julia; McVey, Alana; Rump, Keiran M; Sasson, Noah J; Herrington, John D; Schultz, Robert T
2015-10-01
Autism Spectrum Disorder (ASD) is characterized by social impairments that have been related to deficits in social attention, including diminished gaze to faces. Eye-tracking studies are commonly used to examine social attention and social motivation in ASD, but they vary in sensitivity. In this study, we hypothesized that the ecological nature of the social stimuli would affect participants' social attention, with gaze behavior during more naturalistic scenes being most predictive of ASD vs. typical development. Eighty-one children with and without ASD participated in three eye-tracking tasks that differed in the ecological relevance of the social stimuli. In the "Static Visual Exploration" task, static images of objects and people were presented; in the "Dynamic Visual Exploration" task, video clips of individual faces and objects were presented side-by-side; in the "Interactive Visual Exploration" task, video clips of children playing with objects in a naturalistic context were presented. Our analyses uncovered a three-way interaction between Task, Social vs. Object Stimuli, and Diagnosis. This interaction was driven by group differences on one task only-the Interactive task. Bayesian analyses confirmed that the other two tasks were insensitive to group membership. In addition, receiver operating characteristic analyses demonstrated that, unlike the other two tasks, the Interactive task had significant classification power. The ecological relevance of social stimuli is an important factor to consider for eye-tracking studies aiming to measure social attention and motivation in ASD. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Mirel, Barbara; Kumar, Anuj; Nong, Paige; Su, Gang; Meng, Fan
2016-02-01
Life scientists increasingly use visual analytics to explore large data sets and generate hypotheses. Undergraduate biology majors should be learning these same methods. Yet visual analytics is one of the most underdeveloped areas of undergraduate biology education. This study sought to determine the feasibility of undergraduate biology majors conducting exploratory analysis using the same interactive data visualizations as practicing scientists. We examined 22 upper level undergraduates in a genomics course as they engaged in a case-based inquiry with an interactive heat map. We qualitatively and quantitatively analyzed students' visual analytic behaviors, reasoning and outcomes to identify student performance patterns, commonly shared efficiencies and task completion. We analyzed students' successes and difficulties in applying knowledge and skills relevant to the visual analytics case and related gaps in knowledge and skill to associated tool designs. Findings show that undergraduate engagement in visual analytics is feasible and could be further strengthened through tool usability improvements. We identify these improvements. We speculate, as well, on instructional considerations that our findings suggested may also enhance visual analytics in case-based modules.
Kumar, Anuj; Nong, Paige; Su, Gang; Meng, Fan
2016-01-01
Life scientists increasingly use visual analytics to explore large data sets and generate hypotheses. Undergraduate biology majors should be learning these same methods. Yet visual analytics is one of the most underdeveloped areas of undergraduate biology education. This study sought to determine the feasibility of undergraduate biology majors conducting exploratory analysis using the same interactive data visualizations as practicing scientists. We examined 22 upper level undergraduates in a genomics course as they engaged in a case-based inquiry with an interactive heat map. We qualitatively and quantitatively analyzed students’ visual analytic behaviors, reasoning and outcomes to identify student performance patterns, commonly shared efficiencies and task completion. We analyzed students’ successes and difficulties in applying knowledge and skills relevant to the visual analytics case and related gaps in knowledge and skill to associated tool designs. Findings show that undergraduate engagement in visual analytics is feasible and could be further strengthened through tool usability improvements. We identify these improvements. We speculate, as well, on instructional considerations that our findings suggested may also enhance visual analytics in case-based modules. PMID:26877625
Interactive metagenomic visualization in a Web browser.
Ondov, Brian D; Bergman, Nicholas H; Phillippy, Adam M
2011-09-30
A critical output of metagenomic studies is the estimation of abundances of taxonomical or functional groups. The inherent uncertainty in assignments to these groups makes it important to consider both their hierarchical contexts and their prediction confidence. The current tools for visualizing metagenomic data, however, omit or distort quantitative hierarchical relationships and lack the facility for displaying secondary variables. Here we present Krona, a new visualization tool that allows intuitive exploration of relative abundances and confidences within the complex hierarchies of metagenomic classifications. Krona combines a variant of radial, space-filling displays with parametric coloring and interactive polar-coordinate zooming. The HTML5 and JavaScript implementation enables fully interactive charts that can be explored with any modern Web browser, without the need for installed software or plug-ins. This Web-based architecture also allows each chart to be an independent document, making them easy to share via e-mail or post to a standard Web server. To illustrate Krona's utility, we describe its application to various metagenomic data sets and its compatibility with popular metagenomic analysis tools. Krona is both a powerful metagenomic visualization tool and a demonstration of the potential of HTML5 for highly accessible bioinformatic visualizations. Its rich and interactive displays facilitate more informed interpretations of metagenomic analyses, while its implementation as a browser-based application makes it extremely portable and easily adopted into existing analysis packages. Both the Krona rendering code and conversion tools are freely available under a BSD open-source license, and available from: http://krona.sourceforge.net.
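Krona's radial, space-filling layout is its own HTML5/JavaScript implementation; for a rough sense of the idiom, a hierarchical abundance table can be rendered as a sunburst with Plotly, as sketched below. The taxonomy, read counts, and output filename are invented for illustration.

```python
# Rough analogue of a Krona-style radial abundance chart using Plotly's sunburst.
# The taxonomy and counts are invented; Krona's own rendering is separate HTML5 code.
import pandas as pd
import plotly.express as px

df = pd.DataFrame({
    "domain": ["Bacteria", "Bacteria", "Bacteria", "Archaea"],
    "phylum": ["Firmicutes", "Firmicutes", "Bacteroidetes", "Euryarchaeota"],
    "genus":  ["Bacillus", "Clostridium", "Bacteroides", "Methanobrevibacter"],
    "reads":  [1200, 800, 1500, 300],
})

# Each ring of the sunburst corresponds to one level of the classification hierarchy.
fig = px.sunburst(df, path=["domain", "phylum", "genus"], values="reads")
fig.write_html("abundance_sunburst.html")   # open in any modern browser
```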
Interactive Visualization of Complex Seismic Data and Models Using Bokeh
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Chengping; Ammon, Charles J.; Maceira, Monica
Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using an open-source Python package, Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach has minimal requirements on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide us an effective and efficient approach to explore large data sets and models.
Interactive Visualization of Complex Seismic Data and Models Using Bokeh
Chai, Chengping; Ammon, Charles J.; Maceira, Monica; ...
2018-02-14
Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using an open-source Python package, Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach has minimal requirements on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide us an effective and efficient approach to explore large data sets and models.
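As a flavor of the Bokeh approach described above, the sketch below plots three synthetic seismogram components in linked, pannable panels. It is a generic illustration assuming a recent Bokeh release, not the authors' code, and the traces are made up.

```python
# Generic Bokeh sketch of a three-component seismogram view (synthetic traces).
import numpy as np
from bokeh.plotting import figure, output_file, show
from bokeh.layouts import column

t = np.linspace(0, 60, 3000)                        # 60 s of samples
rng = np.random.default_rng(0)
components = {c: np.exp(-0.05 * t) * np.sin(2 * np.pi * f * t) + 0.05 * rng.normal(size=t.size)
              for c, f in [("Z", 1.0), ("N", 0.8), ("E", 1.2)]}

panels = []
for name, trace in components.items():
    p = figure(height=180, width=700, title=f"Component {name}",
               x_axis_label="time (s)", tools="pan,wheel_zoom,box_zoom,reset")
    if panels:                                      # share the x-range so panning stays linked
        p.x_range = panels[0].x_range
    p.line(t, trace, line_width=1)
    panels.append(p)

output_file("seismograms.html")
show(column(*panels))
```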
A new metaphor for projection-based visual analysis and data exploration
NASA Astrophysics Data System (ADS)
Schreck, Tobias; Panse, Christian
2007-01-01
In many important application domains such as Business and Finance, Process Monitoring, and Security, huge and quickly increasing volumes of complex data are collected. Strong efforts are underway developing automatic and interactive analysis tools for mining useful information from these data repositories. Many data analysis algorithms require an appropriate definition of similarity (or distance) between data instances to allow meaningful clustering, classification, and retrieval, among other analysis tasks. Projection-based data visualization is highly interesting (a) for visual discrimination analysis of a data set within a given similarity definition, and (b) for comparative analysis of similarity characteristics of a given data set represented by different similarity definitions. We introduce an intuitive and effective novel approach for projection-based similarity visualization for interactive discrimination analysis, data exploration, and visual evaluation of metric space effectiveness. The approach is based on the convex hull metaphor for visually aggregating sets of points in projected space, and it can be used with a variety of different projection techniques. The effectiveness of the approach is demonstrated by application on two well-known data sets. Statistical evidence supporting the validity of the hull metaphor is presented. We advocate the hull-based approach over the standard symbol-based approach to projection visualization, as it allows a more effective perception of similarity relationships and class distribution characteristics.
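The hull metaphor can be illustrated directly: project labeled data to 2D and draw the convex hull of each class's points instead of (or on top of) individual symbols. The data below is synthetic and PCA stands in for whatever projection technique is used; the paper's datasets and projections differ.

```python
# Convex-hull aggregation of class point sets in a 2D projection (synthetic data).
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import ConvexHull
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc=c, scale=0.8, size=(60, 5)) for c in (0.0, 3.0, 6.0)])
labels = np.repeat([0, 1, 2], 60)

XY = PCA(n_components=2).fit_transform(X)           # any projection technique could be used

fig, ax = plt.subplots(figsize=(6, 5))
for k, color in zip(range(3), ["tab:blue", "tab:orange", "tab:green"]):
    pts = XY[labels == k]
    hull = ConvexHull(pts)
    poly = pts[hull.vertices]                       # hull vertices in drawing order
    ax.fill(poly[:, 0], poly[:, 1], alpha=0.25, color=color)   # hull replaces point symbols
    ax.plot(poly[:, 0], poly[:, 1], color=color, linewidth=1)
ax.set_title("Convex-hull view of three projected classes")
plt.savefig("hulls.png", dpi=150)
```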
ProteoLens: a visual analytic tool for multi-scale database-driven biological network data mining.
Huan, Tianxiao; Sivachenko, Andrey Y; Harrison, Scott H; Chen, Jake Y
2008-08-12
New systems biology studies require researchers to understand how interplay among myriads of biomolecular entities is orchestrated in order to achieve high-level cellular and physiological functions. Many software tools have been developed in the past decade to help researchers visually navigate large networks of biomolecular interactions with built-in template-based query capabilities. To further advance researchers' ability to interrogate global physiological states of cells through multi-scale visual network explorations, new visualization software tools still need to be developed to empower the analysis. A robust visual data analysis platform driven by database management systems to perform bi-directional data processing-to-visualizations with declarative querying capabilities is needed. We developed ProteoLens as a JAVA-based visual analytic software tool for creating, annotating and exploring multi-scale biological networks. It supports direct database connectivity to either Oracle or PostgreSQL database tables/views, on which SQL statements using both Data Definition Languages (DDL) and Data Manipulation languages (DML) may be specified. The robust query languages embedded directly within the visualization software help users to bring their network data into a visualization context for annotation and exploration. ProteoLens supports graph/network represented data in standard Graph Modeling Language (GML) formats, and this enables interoperation with a wide range of other visual layout tools. The architectural design of ProteoLens enables the de-coupling of complex network data visualization tasks into two distinct phases: 1) creating network data association rules, which are mapping rules between network node IDs or edge IDs and data attributes such as functional annotations, expression levels, scores, synonyms, descriptions etc; 2) applying network data association rules to build the network and perform the visual annotation of graph nodes and edges according to associated data values. We demonstrated the advantages of these new capabilities through three biological network visualization case studies: human disease association network, drug-target interaction network and protein-peptide mapping network. The architectural design of ProteoLens makes it suitable for bioinformatics expert data analysts who are experienced with relational database management to perform large-scale integrated network visual explorations. ProteoLens is a promising visual analytic platform that will facilitate knowledge discoveries in future network and systems biology studies.
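The "association rule" idea of mapping query results onto node attributes can be sketched with sqlite3 and NetworkX. The table and column names below are invented, and ProteoLens itself targets Oracle/PostgreSQL with its own rule syntax; the sketch only shows the general pattern of joining an attribute table onto graph node IDs.

```python
# Sketch of database-driven network annotation: SQL rows -> node attributes (invented schema).
import sqlite3
import networkx as nx

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE interactions (a TEXT, b TEXT);
    INSERT INTO interactions VALUES ('TP53', 'MDM2'), ('TP53', 'BRCA1'), ('BRCA1', 'RAD51');
    CREATE TABLE expression (gene TEXT, log2fc REAL);
    INSERT INTO expression VALUES ('TP53', 1.8), ('MDM2', -0.4), ('BRCA1', 0.9), ('RAD51', 2.3);
""")

# Build the network from the edge table.
G = nx.Graph(conn.execute("SELECT a, b FROM interactions").fetchall())

# "Association rule": join an attribute table onto node IDs and store it on the graph,
# ready to be mapped to visual properties such as node color or size.
for gene, log2fc in conn.execute("SELECT gene, log2fc FROM expression"):
    if gene in G:
        G.nodes[gene]["log2fc"] = log2fc

for n, d in G.nodes(data=True):
    print(n, d)
```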
ePlant and the 3D data display initiative: integrative systems biology on the world wide web.
Fucile, Geoffrey; Di Biase, David; Nahal, Hardeep; La, Garon; Khodabandeh, Shokoufeh; Chen, Yani; Easley, Kante; Christendat, Dinesh; Kelley, Lawrence; Provart, Nicholas J
2011-01-10
Visualization tools for biological data are often limited in their ability to interactively integrate data at multiple scales. These computational tools are also typically limited by two-dimensional displays and programmatic implementations that require separate configurations for each of the user's computing devices and recompilation for functional expansion. Towards overcoming these limitations we have developed "ePlant" (http://bar.utoronto.ca/eplant) - a suite of open-source world wide web-based tools for the visualization of large-scale data sets from the model organism Arabidopsis thaliana. These tools display data spanning multiple biological scales on interactive three-dimensional models. Currently, ePlant consists of the following modules: a sequence conservation explorer that includes homology relationships and single nucleotide polymorphism data, a protein structure model explorer, a molecular interaction network explorer, a gene product subcellular localization explorer, and a gene expression pattern explorer. The ePlant's protein structure explorer module represents experimentally determined and theoretical structures covering >70% of the Arabidopsis proteome. The ePlant framework is accessed entirely through a web browser, and is therefore platform-independent. It can be applied to any model organism. To facilitate the development of three-dimensional displays of biological data on the world wide web we have established the "3D Data Display Initiative" (http://3ddi.org).
Visualizing Mobility of Public Transportation System.
Zeng, Wei; Fu, Chi-Wing; Arisona, Stefan Müller; Erath, Alexander; Qu, Huamin
2014-12-01
Public transportation systems (PTSs) play an important role in modern cities, providing shared/massive transportation services that are essential for the general public. However, due to their increasing complexity, designing effective methods to visualize and explore PTS is highly challenging. Most existing techniques employ network visualization methods and focus on showing the network topology across stops while ignoring various mobility-related factors such as riding time, transfer time, waiting time, and round-the-clock patterns. This work aims to visualize and explore passenger mobility in a PTS with a family of analytical tasks based on inputs from transportation researchers. After exploring different design alternatives, we come up with an integrated solution with three visualization modules: isochrone map view for geographical information, isotime flow map view for effective temporal information comparison and manipulation, and OD-pair journey view for detailed visual analysis of mobility factors along routes between specific origin-destination pairs. The isotime flow map linearizes a flow map into a parallel isoline representation, maximizing the visualization of mobility information along the horizontal time axis while presenting clear and smooth pathways from origin to destinations. Moreover, we devise several interactive visual query methods for users to easily explore the dynamics of PTS mobility over space and time. Lastly, we also construct a PTS mobility model from millions of real passenger trajectories, and evaluate our visualization techniques with assorted case studies with the transportation researchers.
A pseudo-haptic knot diagram interface
NASA Astrophysics Data System (ADS)
Zhang, Hui; Weng, Jianguang; Hanson, Andrew J.
2011-01-01
To make progress in understanding knot theory, we will need to interact with the projected representations of mathematical knots, which are of course continuous in 3D but significantly interrupted in the projective images. One way to achieve such a goal would be to design an interactive system that allows us to sketch 2D knot diagrams by taking advantage of a collision-sensing controller and explore their underlying smooth structures through a continuous motion. Recent advances in interaction techniques allow progress to be made in this direction. Pseudo-haptics that simulates haptic effects using pure visual feedback can be used to develop such an interactive system. This paper outlines one such pseudo-haptic knot diagram interface. Our interface derives from the familiar pencil-and-paper process of drawing 2D knot diagrams and provides haptic-like sensations to facilitate the creation and exploration of knot diagrams. A centerpiece of the interaction model simulates a "physically" reactive mouse cursor, which is exploited to resolve the apparent conflict between the continuous structure of the actual smooth knot and the visual discontinuities in the knot diagram representation. Another value in exploiting pseudo-haptics is that an acceleration (or deceleration) of the mouse cursor (or surface locator) can be used to indicate the slope of the curve (or surface) whose projective image is being explored. By exploiting these additional visual cues, we proceed to a full-featured extension to a pseudo-haptic 4D visualization system that simulates the continuous navigation on 4D objects and allows us to sense the bumps and holes in the fourth dimension. Preliminary tests of the software show that main features of the interface overcome some expected perceptual limitations in our interaction with 2D knot diagrams of 3D knots and 3D projective images of 4D mathematical objects.
Microreact: visualizing and sharing data for genomic epidemiology and phylogeography
Argimón, Silvia; Abudahab, Khalil; Goater, Richard J. E.; Fedosejev, Artemij; Bhai, Jyothish; Glasner, Corinna; Feil, Edward J.; Holden, Matthew T. G.; Yeats, Corin A.; Grundmann, Hajo; Spratt, Brian G.
2016-01-01
Visualization is frequently used to aid our interpretation of complex datasets. Within microbial genomics, visualizing the relationships between multiple genomes as a tree provides a framework onto which associated data (geographical, temporal, phenotypic and epidemiological) are added to generate hypotheses and to explore the dynamics of the system under investigation. Selected static images are then used within publications to highlight the key findings to a wider audience. However, these images are a very inadequate way of exploring and interpreting the richness of the data. There is, therefore, a need for flexible, interactive software that presents the population genomic outputs and associated data in a user-friendly manner for a wide range of end users, from trained bioinformaticians to front-line epidemiologists and health workers. Here, we present Microreact, a web application for the easy visualization of datasets consisting of any combination of trees, geographical, temporal and associated metadata. Data files can be uploaded to Microreact directly via the web browser or by linking to their location (e.g. from Google Drive/Dropbox or via API), and an integrated visualization via trees, maps, timelines and tables provides interactive querying of the data. The visualization can be shared as a permanent web link among collaborators, or embedded within publications to enable readers to explore and download the data. Microreact can act as an end point for any tool or bioinformatic pipeline that ultimately generates a tree, and provides a simple, yet powerful, visualization method that will aid research and discovery and the open sharing of datasets. PMID:28348833
Buchanan, Verica; Lu, Yafeng; McNeese, Nathan; Steptoe, Michael; Maciejewski, Ross; Cooke, Nancy
2017-03-01
Historically, domains such as business intelligence would require a single analyst to engage with data, develop a model, answer operational questions, and predict future behaviors. However, as the problems and domains become more complex, organizations are employing teams of analysts to explore and model data to generate knowledge. Furthermore, given the rapid increase in data collection, organizations are struggling to develop practices for intelligence analysis in the era of big data. Currently, a variety of machine learning and data mining techniques are available to model data and to generate insights and predictions, and developments in the field of visual analytics have focused on how to effectively link data mining algorithms with interactive visuals to enable analysts to explore, understand, and interact with data and data models. Although studies have explored the role of single analysts in the visual analytics pipeline, little work has explored the role of teamwork and visual analytics in the analysis of big data. In this article, we present an experiment integrating statistical models, visual analytics techniques, and user experiments to study the role of teamwork in predictive analytics. We frame our experiment around the analysis of social media data for box office prediction problems and compare the prediction performance of teams, groups, and individuals. Our results indicate that a team's performance is mediated by the team's characteristics such as openness of individual members to others' positions and the type of planning that goes into the team's analysis. These findings have important implications for how organizations should create teams in order to make effective use of information from their analytic models.
Balconi, Michela; Vanutelli, Maria Elide
2016-01-01
The present research explored the effect of cross-modal integration of emotional cues (auditory and visual (AV)) compared with only visual (V) emotional cues in observing interspecies interactions. The brain activity was monitored when subjects processed AV and V situations, which represented an emotional (positive or negative), interspecies (human-animal) interaction. Congruence (emotionally congruous or incongruous visual and auditory patterns) was also modulated. Electroencephalography brain oscillations (from delta to beta) were analyzed, and cortical source localization (by standardized Low Resolution Brain Electromagnetic Tomography) was applied to the data. Frequency band analysis (mainly low-frequency delta and theta) showed a significant increase in brain activity in response to negative compared to positive interactions within the right hemisphere. Moreover, differences were found based on stimulation type, with an increased effect for AV compared with V. Finally, the delta band supported a lateralized right dorsolateral prefrontal cortex (DLPFC) activity in response to negative and incongruous interspecies interactions, mainly for AV. The contribution of cross-modality, congruence (incongruous patterns), and lateralization (right DLPFC) in response to interspecies emotional interactions was discussed in light of a "negative lateralized effect."
Visualization of Metabolic Interaction Networks in Microbial Communities Using VisANT 5.0
Wang, Yan; DeLisi, Charles; Segrè, Daniel; Hu, Zhenjun
2016-01-01
The complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT’s unique metagraph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the “symbiotic layout” of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues. VisANT is freely available at: http://visant.bu.edu and COMETS at http://comets.bu.edu. PMID:27081850
Visualization of Metabolic Interaction Networks in Microbial Communities Using VisANT 5.0.
Granger, Brian R; Chang, Yi-Chien; Wang, Yan; DeLisi, Charles; Segrè, Daniel; Hu, Zhenjun
2016-04-01
The complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT's unique metagraph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the "symbiotic layout" of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues. VisANT is freely available at: http://visant.bu.edu and COMETS at http://comets.bu.edu.
ERIC Educational Resources Information Center
Chandramouli, Magesh; Chittamuru, Siva-Teja
2016-01-01
This paper explains the design of a graphics-based virtual environment for instructing computer hardware concepts to students, especially those at the beginner level. Photorealistic visualizations and simulations are designed and programmed with interactive features allowing students to practice, explore, and test themselves on computer hardware…
Music Therapy in the Treatment of Social Isolation in Visually Impaired Children.
ERIC Educational Resources Information Center
Gourgey, Charles
1998-01-01
Reviews the literature on the use of music therapy with visually impaired and socially isolated children. Describes ways that music therapy can help the child explore his environment, modify blindisms (stereotypic, autistic-like behaviors), and encourage social awareness and interaction with other children. (DB)
A novel visualization model for web search results.
Nguyen, Tien N; Zhang, Jin
2006-01-01
This paper presents an interactive visualization system, named WebSearchViz, for visualizing Web search results and facilitating users' navigation and exploration. The metaphor in our model is the solar system with its planets and asteroids revolving around the sun. Location, color, movement, and spatial distance of objects in the visual space are used to represent the semantic relationships between a query and relevant Web pages. In particular, the movement of objects and their speeds add a new dimension to the visual space, illustrating the degree of relevance between a query and the Web search results in the context of users' subjects of interest. By interacting with the visual space, users are able to observe the semantic relevance between a query and a resulting Web page with respect to their subjects of interest, context information, or concern. Users' subjects of interest can be dynamically changed, redefined, added to, or deleted from the visual space.
Perceptualization of geometry using intelligent haptic and visual sensing
NASA Astrophysics Data System (ADS)
Weng, Jianguang; Zhang, Hui
2013-01-01
We present a set of paradigms for investigating geometric structures using haptic and visual sensing. Our principal test cases include smoothly embedded geometric shapes such as knotted curves in 3D and knotted surfaces in 4D, which produce numerous self-intersections when projected to one lower dimension. One can exploit a touch-responsive 3D interactive probe to haptically override this conflicting evidence in the rendered images, forcing continuity in the haptic representation to emphasize the true topology. In our work, we exploit predictive haptic guidance, a "computer-simulated hand" with supplementary force suggestions, to support intelligent exploration of geometric shapes, smoothing the interaction and maximizing the probability of recognition. The cognitive load can be reduced further by enabling attention-driven visual sensing during haptic exploration. Our methods combine to reveal the full richness of the haptic exploration of geometric structures and to overcome the limitations of traditional 4D visualization.
Visualizing the process of interaction in a 3D environment
NASA Astrophysics Data System (ADS)
Vaidya, Vivek; Suryanarayanan, Srikanth; Krishnan, Kajoli; Mullick, Rakesh
2007-03-01
As the imaging modalities used in medicine transition to increasingly three-dimensional data, the question of how best to interact with and analyze this data becomes ever more pressing. Immersive virtual reality systems seem to hold promise in tackling this, but how individuals learn and interact in these environments is not fully understood. Here we show several methods by which user interaction in a virtual reality environment can be visualized and how this can allow us to gain greater insight into the process of interaction and learning in these systems. Also explored is the possibility of using this method to improve understanding and management of ergonomic issues within an interface.
Integration and visualization of systems biology data in context of the genome
2010-01-01
Background High-density tiling arrays and new sequencing technologies are generating rapidly increasing volumes of transcriptome and protein-DNA interaction data. Visualization and exploration of this data is critical to understanding the regulatory logic encoded in the genome by which the cell dynamically affects its physiology and interacts with its environment. Results The Gaggle Genome Browser is a cross-platform desktop program for interactively visualizing high-throughput data in the context of the genome. Important features include dynamic panning and zooming, keyword search and open interoperability through the Gaggle framework. Users may bookmark locations on the genome with descriptive annotations and share these bookmarks with other users. The program handles large sets of user-generated data using an in-process database and leverages the facilities of SQL and the R environment for importing and manipulating data. A key aspect of the Gaggle Genome Browser is interoperability. By connecting to the Gaggle framework, the genome browser joins a suite of interconnected bioinformatics tools for analysis and visualization with connectivity to major public repositories of sequences, interactions and pathways. To this flexible environment for exploring and combining data, the Gaggle Genome Browser adds the ability to visualize diverse types of data in relation to its coordinates on the genome. Conclusions Genomic coordinates function as a common key by which disparate biological data types can be related to one another. In the Gaggle Genome Browser, heterogeneous data are joined by their location on the genome to create information-rich visualizations yielding insight into genome organization, transcription and its regulation and, ultimately, a better understanding of the mechanisms that enable the cell to dynamically respond to its environment. PMID:20642854
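The browser's key idea, joining heterogeneous tracks by their genomic coordinates in an in-process database, can be illustrated with a small sketch; the tables, column names, and values below are hypothetical and are not the Gaggle Genome Browser's actual schema.

```python
import sqlite3

# Minimal sketch: two heterogeneous feature tracks keyed by genomic coordinates,
# joined by interval overlap. Table and column names are hypothetical; the
# Gaggle Genome Browser's actual schema is not described in the abstract.
conn = sqlite3.connect(":memory:")  # in-process database, as in the browser
cur = conn.cursor()
cur.execute("CREATE TABLE transcripts (chrom TEXT, start_pos INT, end_pos INT, level REAL)")
cur.execute("CREATE TABLE binding_sites (chrom TEXT, start_pos INT, end_pos INT, tf TEXT)")
cur.executemany("INSERT INTO transcripts VALUES (?, ?, ?, ?)",
                [("chr1", 100, 900, 5.2), ("chr1", 2000, 2600, 1.1)])
cur.executemany("INSERT INTO binding_sites VALUES (?, ?, ?, ?)",
                [("chr1", 850, 870, "TFB"), ("chr1", 5000, 5020, "TFB")])

# Genomic coordinates act as the common key joining the two data types.
rows = cur.execute("""
    SELECT t.chrom, t.start_pos, t.end_pos, t.level, b.tf
    FROM transcripts t
    JOIN binding_sites b
      ON t.chrom = b.chrom
     AND b.start_pos < t.end_pos
     AND b.end_pos > t.start_pos
""").fetchall()
print(rows)  # [('chr1', 100, 900, 5.2, 'TFB')]
```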
VANLO - Interactive visual exploration of aligned biological networks
Brasch, Steffen; Linsen, Lars; Fuellen, Georg
2009-01-01
Background Protein-protein interaction (PPI) is fundamental to many biological processes. In the course of evolution, biological networks such as protein-protein interaction networks have developed. Biological networks of different species can be aligned by finding instances (e.g. proteins) with the same common ancestor in the evolutionary process, so-called orthologs. For a better understanding of the evolution of biological networks, such aligned networks have to be explored. Visualization can play a key role in making the various relationships transparent. Results We present a novel visualization system for aligned biological networks in 3D space that naturally embeds existing 2D layouts. In addition to displaying the intra-network connectivities, we also provide insight into how the individual networks relate to each other by placing aligned entities on top of each other in separate layers. We optimize the layout of the entire alignment graph in a global fashion that takes into account inter- as well as intra-network relationships. The layout algorithm includes a step of merging aligned networks into one graph, laying out the graph with respect to application-specific requirements, splitting the merged graph again into individual networks, and displaying the network alignment in layers. In addition to representing the data in a static way, we also provide different interaction techniques to explore the data with respect to application-specific tasks. Conclusion Our system provides an intuitive global understanding of aligned PPI networks and it allows the investigation of key biological questions. We evaluate our system by applying it to real-world examples documenting how our system can be used to investigate the data with respect to these key questions. Our tool VANLO (Visualization of Aligned Networks with Layout Optimization) can be accessed at . PMID:19821976
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, Kristin C; Brunhart-Lupo, Nicholas J; Bush, Brian W
We have developed a framework for the exploration, design, and planning of energy systems that combines interactive visualization with machine-learning based approximations of simulations through a general-purpose dataflow API. Our system provides a visual interface allowing users to explore an ensemble of energy simulations representing a subset of the complex input parameter space, and to spawn new simulations to 'fill in' input regions corresponding to new energy system scenarios. Unfortunately, many energy simulations are far too slow to provide interactive responses. To support interactive feedback, we are developing reduced-form models via machine learning techniques, which provide statistically sound estimates of the full simulations at a fraction of the computational cost and which are used as proxies for the full-form models. Fast computation and an agile dataflow enhance engagement with energy simulations, and allow researchers to better allocate computational resources to capture informative relationships within the system, providing a low-cost method for validating and quality-checking large-scale modeling efforts.
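The reduced-form proxy approach can be sketched with a generic regression surrogate. Everything below (the stand-in "simulator", its two parameters, and the ensemble size) is a placeholder used to illustrate the idea, not the framework's actual models or API.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical slow simulation: maps input parameters to an output metric.
def slow_simulation(params):
    x1, x2 = params
    return np.sin(3 * x1) * np.exp(-x2) + 0.1 * x1 * x2  # stand-in for a costly model run

# Build a training ensemble covering a subset of the input parameter space.
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(200, 2))
y_train = np.array([slow_simulation(p) for p in X_train])

# Reduced-form model: a fast, statistically fitted proxy for the simulator.
surrogate = GradientBoostingRegressor().fit(X_train, y_train)

# Interactive exploration queries the proxy instead of the full simulation.
X_query = rng.uniform(0, 1, size=(5, 2))
print(surrogate.predict(X_query))             # near-instant estimates
print([slow_simulation(p) for p in X_query])  # full-form values for comparison
```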
Spherical Panoramas for Astrophysical Data Visualization
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2017-05-01
Data immersion has advantages in astrophysical visualization. Complex multi-dimensional data and phase spaces can be explored in a seamless and interactive viewing environment. Putting the user in the data is a first step toward immersive data analysis. We present a technique for creating 360° spherical panoramas with astrophysical data. The three-dimensional software package Blender and the Google Spatial Media module are used together to immerse users in data exploration. Several examples employing these methods exhibit how the technique works using different types of astronomical data.
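A minimal sketch of the rendering half of that workflow, assuming Blender's Python API (bpy) with the Cycles engine; property paths vary between Blender versions, and the output path is a placeholder. The rendered equirectangular frame would then be tagged with spherical metadata using the Google Spatial Media tools before upload.

```python
import bpy  # only available inside Blender's bundled Python interpreter

scene = bpy.context.scene
scene.render.engine = 'CYCLES'

# Configure the active camera for a 360-degree equirectangular panorama.
cam = scene.camera.data
cam.type = 'PANO'
cam.cycles.panorama_type = 'EQUIRECTANGULAR'  # in newer Blender versions this property may live on cam directly

# A 2:1 aspect ratio is conventional for equirectangular output.
scene.render.resolution_x = 4096
scene.render.resolution_y = 2048
scene.render.filepath = '/tmp/panorama.png'   # placeholder output path

bpy.ops.render.render(write_still=True)
```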
Py-SPHViewer: Cosmological simulations using Smoothed Particle Hydrodynamics
NASA Astrophysics Data System (ADS)
Benítez-Llambay, Alejandro
2017-12-01
Py-SPHViewer visualizes and explores N-body + Hydrodynamics simulations. The code interpolates the underlying density field (or any other property) traced by a set of particles, using the Smoothed Particle Hydrodynamics (SPH) interpolation scheme, thus producing not only beautiful but also useful scientific images. Py-SPHViewer enables the user to explore simulated volumes using different projections. Py-SPHViewer also provides a natural way to visualize (in a self-consistent fashion) gas dynamical simulations, which use the same technique to compute the interactions between particles.
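Typical usage can be sketched with the package's QuickView convenience wrapper; the particle positions and masses below are random placeholders, and the argument names follow the package documentation as remembered, so they may differ between versions.

```python
import numpy as np
import matplotlib.pyplot as plt
from sphviewer.tools import QuickView  # convenience wrapper shipped with Py-SPHViewer

# Toy particle distribution standing in for an N-body snapshot
# (positions and masses are random placeholders, not real simulation data).
rng = np.random.default_rng(1)
pos = rng.normal(0.0, 1.0, size=(10000, 3))
mass = np.ones(len(pos))

# SPH-interpolate the density field onto an image using an orthographic
# projection (r='infinity'); keyword names may vary across package versions.
qv = QuickView(pos, mass=mass, r='infinity', plot=False)
img = qv.get_image()      # 2D array of projected density
extent = qv.get_extent()  # physical extent of the image, useful for plotting

plt.imshow(np.log10(img + 1e-10), extent=extent, origin='lower', cmap='inferno')
plt.savefig('projected_density.png', dpi=150)
```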
Visual analytics in cheminformatics: user-supervised descriptor selection for QSAR methods.
Martínez, María Jimena; Ponzoni, Ignacio; Díaz, Mónica F; Vazquez, Gustavo E; Soto, Axel J
2015-01-01
The design of QSAR/QSPR models is a challenging problem, where the selection of the most relevant descriptors constitutes a key step of the process. Several feature selection methods that address this step concentrate on statistical associations among descriptors and target properties, whereas chemical knowledge is left out of the analysis. For this reason, the interpretability and generality of the QSAR/QSPR models obtained by these feature selection methods are drastically affected. Therefore, an approach for integrating domain experts' knowledge in the selection process is needed to increase confidence in the final set of descriptors. In this paper we propose a software tool, named Visual and Interactive DEscriptor ANalysis (VIDEAN), that combines statistical methods with interactive visualizations for choosing a set of descriptors for predicting a target property. Domain expertise can be added to the feature selection process by means of an interactive visual exploration of the data, aided by statistical tools and metrics based on information theory. Coordinated visual representations are presented for capturing different relationships and interactions among descriptors, target properties, and candidate subsets of descriptors. The capabilities of the proposed software were assessed through different scenarios. These scenarios reveal how an expert can use the tool to choose one subset of descriptors from a group of candidate subsets, or how to modify existing descriptor subsets and even incorporate new descriptors according to his or her own knowledge of the target property. The reported experiences showed the suitability of our software for selecting sets of descriptors with low cardinality, high interpretability, low redundancy, and high statistical performance in a visual, exploratory way. Therefore, the resulting tool allows the integration of a chemist's expertise in the descriptor selection process with low cognitive effort, in contrast with the alternative of an ad-hoc manual analysis of the selected descriptors. Graphical abstract: VIDEAN allows the visual analysis of candidate subsets of descriptors for QSAR/QSPR. In the two panels on the top, users can interactively explore numerical correlations as well as co-occurrences in the candidate subsets through two interactive graphs.
Gaia Data Release 1. The archive visualisation service
NASA Astrophysics Data System (ADS)
Moitinho, A.; Krone-Martins, A.; Savietto, H.; Barros, M.; Barata, C.; Falcão, A. J.; Fernandes, T.; Alves, J.; Silva, A. F.; Gomes, M.; Bakker, J.; Brown, A. G. A.; González-Núñez, J.; Gracia-Abril, G.; Gutiérrez-Sánchez, R.; Hernández, J.; Jordan, S.; Luri, X.; Merin, B.; Mignard, F.; Mora, A.; Navarro, V.; O'Mullane, W.; Sagristà Sellés, T.; Salgado, J.; Segovia, J. C.; Utrilla, E.; Arenou, F.; de Bruijne, J. H. J.; Jansen, F.; McCaughrean, M.; O'Flaherty, K. S.; Taylor, M. B.; Vallenari, A.
2017-09-01
Context. The first Gaia data release (DR1) delivered a catalogue of astrometry and photometry for over a billion astronomical sources. Within the panoply of methods used for data exploration, visualisation is often the starting point and even the guiding reference for scientific thought. However, this is a volume of data that cannot be efficiently explored using traditional tools, techniques, and habits. Aims: We aim to provide a global visual exploration service for the Gaia archive, something that is not possible out of the box for most people. The service has two main goals. The first is to provide a software platform for interactive visual exploration of the archive contents, using common personal computers and mobile devices available to most users. The second aim is to produce intelligible and appealing visual representations of the enormous information content of the archive. Methods: The interactive exploration service follows a client-server design. The server runs close to the data, at the archive, and is responsible for hiding as far as possible the complexity and volume of the Gaia data from the client. This is achieved by serving visual detail on demand. Levels of detail are pre-computed using data aggregation and subsampling techniques. For DR1, the client is a web application that provides an interactive multi-panel visualisation workspace as well as a graphical user interface. Results: The Gaia archive Visualisation Service offers a web-based multi-panel interactive visualisation desktop in a browser tab. It currently provides highly configurable 1D histograms and 2D scatter plots of Gaia DR1 and the Tycho-Gaia Astrometric Solution (TGAS) with linked views. An innovative feature is the creation of ADQL queries from visually defined regions in plots. These visual queries are ready for use in the Gaia Archive Search/data retrieval service. In addition, regions around user-selected objects can be further examined with automatically generated SIMBAD searches. Integration of the Aladin Lite and JS9 applications adds support for the visualisation of HiPS and FITS maps. The production of the all-sky source density map that became the iconic image of Gaia DR1 is described in detail. Conclusions: On the day of DR1, over seven thousand users accessed the Gaia Archive visualisation portal. The system, running on a single machine, proved robust and did not fail while enabling thousands of users to visualise and explore the over one billion sources in DR1. There are still several limitations, most notably that users may only choose from a list of pre-computed visualisations. Thus, other visualisation applications that can complement the archive service are examined. Finally, development plans for Data Release 2 are presented.
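The level-of-detail strategy (pre-computing aggregated views so the client never has to touch the full billion-row catalogue) can be sketched generically; the toy catalogue and the per-level binning scheme below are illustrative and are not the service's actual implementation.

```python
import numpy as np

# Toy "catalogue": sky positions for many sources (placeholders for Gaia rows).
rng = np.random.default_rng(0)
ra = rng.uniform(0.0, 360.0, size=2_000_000)
dec = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, size=ra.size)))  # uniform on the sphere

def density_level(ra, dec, level):
    """Pre-compute one level of detail as a 2D source-density histogram.

    Each level doubles the resolution; a server would store every level and
    serve only the resolution (and tiles) the client viewport actually needs.
    """
    nx, ny = 360 * 2**level, 180 * 2**level
    counts, _, _ = np.histogram2d(ra, dec, bins=[nx, ny],
                                  range=[[0, 360], [-90, 90]])
    return counts

levels = {lvl: density_level(ra, dec, lvl) for lvl in range(4)}
for lvl, grid in levels.items():
    print(lvl, grid.shape, int(grid.max()))
```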
Interactive metagenomic visualization in a Web browser
2011-01-01
Background A critical output of metagenomic studies is the estimation of abundances of taxonomical or functional groups. The inherent uncertainty in assignments to these groups makes it important to consider both their hierarchical contexts and their prediction confidence. The current tools for visualizing metagenomic data, however, omit or distort quantitative hierarchical relationships and lack the facility for displaying secondary variables. Results Here we present Krona, a new visualization tool that allows intuitive exploration of relative abundances and confidences within the complex hierarchies of metagenomic classifications. Krona combines a variant of radial, space-filling displays with parametric coloring and interactive polar-coordinate zooming. The HTML5 and JavaScript implementation enables fully interactive charts that can be explored with any modern Web browser, without the need for installed software or plug-ins. This Web-based architecture also allows each chart to be an independent document, making them easy to share via e-mail or post to a standard Web server. To illustrate Krona's utility, we describe its application to various metagenomic data sets and its compatibility with popular metagenomic analysis tools. Conclusions Krona is both a powerful metagenomic visualization tool and a demonstration of the potential of HTML5 for highly accessible bioinformatic visualizations. Its rich and interactive displays facilitate more informed interpretations of metagenomic analyses, while its implementation as a browser-based application makes it extremely portable and easily adopted into existing analysis packages. Both the Krona rendering code and conversion tools are freely available under a BSD open-source license, and available from: http://krona.sourceforge.net. PMID:21961884
Web-based visual analysis for high-throughput genomics
2013-01-01
Background Visualization plays an essential role in genomics research by making it possible to observe correlations and trends in large datasets as well as communicate findings to others. Visual analysis, which combines visualization with analysis tools to enable seamless use of both approaches for scientific investigation, offers a powerful method for performing complex genomic analyses. However, there are numerous challenges that arise when creating rich, interactive Web-based visualizations/visual analysis applications for high-throughput genomics. These challenges include managing data flow from Web server to Web browser, integrating analysis tools and visualizations, and sharing visualizations with colleagues. Results We have created a platform that simplifies the creation of Web-based visualization/visual analysis applications for high-throughput genomics. This platform provides components that make it simple to efficiently query very large datasets, draw common representations of genomic data, integrate with analysis tools, and share or publish fully interactive visualizations. Using this platform, we have created a Circos-style genome-wide viewer, a generic scatter plot for correlation analysis, an interactive phylogenetic tree, a scalable genome browser for next-generation sequencing data, and an application for systematically exploring tool parameter spaces to find good parameter values. All visualizations are interactive and fully customizable. The platform is integrated with the Galaxy (http://galaxyproject.org) genomics workbench, making it easy to integrate new visual applications into Galaxy. Conclusions Visualization and visual analysis play an important role in high-throughput genomics experiments, and approaches are needed to make it easier to create applications for these activities. Our framework provides a foundation for creating Web-based visualizations and integrating them into Galaxy. Finally, the visualizations we have created using the framework are useful tools for high-throughput genomics experiments. PMID:23758618
Interactive Visualizations of Complex Seismic Data and Models
NASA Astrophysics Data System (ADS)
Chai, C.; Ammon, C. J.; Maceira, M.; Herrmann, R. B.
2016-12-01
The volume and complexity of seismic data and models have increased dramatically thanks to dense seismic station deployments and advances in data modeling and processing. Seismic observations such as receiver functions and surface-wave dispersion are multidimensional: latitude, longitude, time, amplitude and latitude, longitude, period, and velocity. Three-dimensional seismic velocity models are characterized with three spatial dimensions and one additional dimension for the speed. In these circumstances, exploring the data and models and assessing the data fits is a challenge. A few professional packages are available to visualize these complex data and models. However, most of these packages rely on expensive commercial software or require a substantial time investment to master, and even when that effort is complete, communicating the results to others remains a problem. A traditional approach during the model interpretation stage is to examine data fits and model features using a large number of static displays. Publications include a few key slices or cross-sections of these high-dimensional data, but this prevents others from directly exploring the model and corresponding data fits. In this presentation, we share interactive visualization examples of complex seismic data and models that are based on open-source tools and are easy to implement. Model and data are linked in an intuitive and informative web-browser based display that can be used to explore the model and the features in the data that influence various aspects of the model. We encode the model and data into HTML files and present high-dimensional information using two approaches. The first uses a Python package to pack both data and interactive plots in a single file. The second approach uses JavaScript, CSS, and HTML to build a dynamic webpage for seismic data visualization. The tools have proven useful and led to deeper insight into 3D seismic models and the data that were used to construct them. Such easy-to-use interactive displays are essential in teaching environments - user-friendly interactivity allows students to explore large, complex data sets and models at their own pace, enabling a more accessible learning experience.
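As a rough illustration of the first approach (packing data and an interactive plot into a single HTML file), here is a sketch using Bokeh as a stand-in; the abstract does not name the actual Python package, and the dispersion values below are synthetic placeholders.

```python
import numpy as np
from bokeh.plotting import figure, output_file, save

# Placeholder surface-wave dispersion curve (period vs. velocity); real use
# would load measured or model-predicted values instead of this synthetic one.
period = np.linspace(5, 100, 40)
velocity = 3.0 + 1.2 * (1 - np.exp(-period / 40.0))

p = figure(title="Surface-wave dispersion (illustrative)",
           x_axis_label="Period (s)", y_axis_label="Group velocity (km/s)",
           tools="pan,wheel_zoom,box_zoom,reset,hover")
p.line(period, velocity, line_width=2)
p.scatter(period, velocity, size=5)

# Data and the interactive plot are embedded in one shareable HTML file.
output_file("dispersion.html", title="Interactive dispersion curve")
save(p)
```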
Ovis: A Framework for Visual Analysis of Ocean Forecast Ensembles.
Höllt, Thomas; Magdy, Ahmed; Zhan, Peng; Chen, Guoning; Gopalakrishnan, Ganesh; Hoteit, Ibrahim; Hansen, Charles D; Hadwiger, Markus
2014-08-01
We present a novel integrated visualization system that enables interactive visual analysis of ensemble simulations of the sea surface height that is used in ocean forecasting. The position of eddies can be derived directly from the sea surface height and our visualization approach enables their interactive exploration and analysis. The behavior of eddies is important in different application settings of which we present two in this paper. First, we show an application for interactive planning of placement as well as operation of off-shore structures using real-world ensemble simulation data of the Gulf of Mexico. Off-shore structures, such as those used for oil exploration, are vulnerable to hazards caused by eddies, and the oil and gas industry relies on ocean forecasts for efficient operations. We enable analysis of the spatial domain, as well as the temporal evolution, for planning the placement and operation of structures. Eddies are also important for marine life. They transport water over large distances and with it also heat and other physical properties as well as biological organisms. In the second application we present the usefulness of our tool, which could be used for planning the paths of autonomous underwater vehicles, so called gliders, for marine scientists to study simulation data of the largely unexplored Red Sea.
Fast 3D Surface Extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sewell, Christopher Meyer; Patchett, John M.; Ahrens, James P.
Ocean scientists searching for isosurfaces and/or thresholds of interest in high-resolution 3D datasets previously faced a tedious and time-consuming interactive exploration experience. PISTON research and development activities are enabling ocean scientists to rapidly and interactively explore isosurfaces and thresholds in their large data sets using a simple slider with real-time calculation and visualization of these features. Ocean scientists can now visualize more features in less time, helping them gain a better understanding of the high-resolution data sets they work with on a daily basis. Isosurface timings (512^3 grid): VTK 7.7 s, Parallel VTK (48-core) 1.3 s, PISTON OpenMP (48-core) 0.2 s, PISTON CUDA (Quadro 6000) 0.1 s.
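For comparison, the isosurface-on-a-slider workflow can be sketched on the CPU with scikit-image's marching cubes (PISTON itself is a data-parallel C++/CUDA library, so this is only a generic illustration on a synthetic volume).

```python
import numpy as np
from skimage import measure  # pip install scikit-image

# Synthetic 3D scalar field standing in for a high-resolution ocean variable.
n = 128
x, y, z = np.mgrid[-1:1:n*1j, -1:1:n*1j, -1:1:n*1j]
volume = np.sin(4 * x) * np.cos(4 * y) + z

def extract_isosurface(volume, isovalue):
    """Return a triangle mesh (vertices, faces) for one isovalue.

    Re-running this as the slider moves is the interaction PISTON accelerates;
    in older scikit-image releases the function is named marching_cubes_lewiner.
    """
    verts, faces, normals, values = measure.marching_cubes(volume, level=isovalue)
    return verts, faces

for iso in (-0.5, 0.0, 0.5):  # the values a user would sweep with the slider
    verts, faces = extract_isosurface(volume, iso)
    print(f"isovalue {iso:+.1f}: {len(verts)} vertices, {len(faces)} triangles")
```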
Human microbiome visualization using 3D technology.
Moore, Jason H; Lari, Richard Cowper Sal; Hill, Douglas; Hibberd, Patricia L; Madan, Juliette C
2011-01-01
High-throughput sequencing technology has opened the door to the study of the human microbiome and its relationship with health and disease. This is both an opportunity and a significant biocomputing challenge. We present here a 3D visualization methodology and freely-available software package for facilitating the exploration and analysis of high-dimensional human microbiome data. Our visualization approach harnesses the power of commercial video game development engines to provide an interactive medium in the form of a 3D heat map for exploration of microbial species and their relative abundance in different patients. The advantage of this approach is that the third dimension provides additional layers of information that cannot be visualized using a traditional 2D heat map. We demonstrate the usefulness of this visualization approach using microbiome data collected from a sample of premature babies with and without sepsis.
Conceptual Understanding of Geological Concepts by Students with Visual Impairments
ERIC Educational Resources Information Center
Wild, Tiffany A.; Hilson, Margilee P.; Farrand, Kathleen M.
2013-01-01
Eighteen middle and high school students with visual impairments participated in a weeklong field-based geology summer camp. This paper reports the curriculum, strategies, and what the students learned about Earth science by climbing in and out of caves, collecting fossils, exploring a bog, and interacting with experts in the field. Students were…
The Effect of User Characteristics on the Efficiency of Visual Querying
ERIC Educational Resources Information Center
Bak, Peter; Meyer, Joachim
2011-01-01
Information systems increasingly provide options for visually inspecting data during the process of information discovery and exploration. Little research has dealt so far with user interactions with these systems, and specifically with the effects of characteristics of the displayed data and the user on performance with such systems. The study…
Visualization of Differences in Data Measuring Mathematical Skills
ERIC Educational Resources Information Center
Zoubek, Lukas; Burda, Michal
2009-01-01
Identification of significant differences in sets of data is a common task of data mining. This paper describes a novel visualization technique that allows the user to interactively explore and analyze differences in mean values of analyzed attributes. Statistical tests of hypotheses are used to identify the significant differences and the results…
BiSet: Semantic Edge Bundling with Biclusters for Sensemaking.
Sun, Maoyuan; Mi, Peng; North, Chris; Ramakrishnan, Naren
2016-01-01
Identifying coordinated relationships is an important task in data analytics. For example, an intelligence analyst might want to discover three suspicious people who all visited the same four cities. Existing techniques that display individual relationships, such as between lists of entities, require repetitious manual selection and significant mental aggregation in cluttered visualizations to find coordinated relationships. In this paper, we present BiSet, a visual analytics technique to support interactive exploration of coordinated relationships. In BiSet, we model coordinated relationships as biclusters and algorithmically mine them from a dataset. Then, we visualize the biclusters in context as bundled edges between sets of related entities. Thus, bundles enable analysts to infer task-oriented semantic insights about potentially coordinated activities. We treat bundles as first-class objects and add a new layer, "in-between", to contain these bundle objects. Based on this, bundles serve to organize entities represented in lists and visually reveal their membership. Users can interact with edge bundles to organize related entities, and vice versa, for sensemaking purposes. With a usage scenario, we demonstrate how BiSet supports the exploration of coordinated relationships in text analytics.
PDB-Explorer: a web-based interactive map of the protein data bank in shape space.
Jin, Xian; Awale, Mahendra; Zasso, Michaël; Kostro, Daniel; Patiny, Luc; Reymond, Jean-Louis
2015-10-23
The RCSB Protein Data Bank (PDB) provides public access to experimentally determined 3D-structures of biological macromolecules (proteins, peptides and nucleic acids). While various tools are available to explore the PDB, options to access the global structural diversity of the entire PDB and to perceive relationships between PDB structures remain very limited. A 136-dimensional atom pair 3D-fingerprint for proteins (3DP), counting categorized atom pairs at increasing through-space distances, was designed to represent the molecular shape of PDB-entries. Nearest-neighbor search examples are reported, exemplifying the ability of 3DP-similarity to identify closely related biomolecules, from small peptides to enzymes and large multiprotein complexes such as virus particles. Principal component analysis was used to obtain a visualization of the PDB in 3DP-space. The 3DP property space groups proteins and protein assemblies according to their 3D-shape similarity, yet shows exquisite ability to distinguish between closely related structures. An interactive website called PDB-Explorer is presented, featuring a color-coded interactive map of the PDB in 3DP-space. Each pixel of the map contains one or more PDB-entries which are directly visualized as ribbon diagrams when the pixel is selected. The PDB-Explorer website allows performing 3DP-nearest neighbor searches of any PDB-entry or of any structure uploaded as a protein-type PDB file. All functionalities on the website are implemented in JavaScript in a platform-independent manner and draw data from a server that is updated daily with the latest PDB additions, ensuring complete and up-to-date coverage. The essentially instantaneous 3DP-similarity search with the PDB-Explorer provides results comparable to those of much slower 3D-alignment algorithms, and automatically clusters proteins from the same superfamilies in tight groups. A chemical space classification of the PDB based on molecular shape was obtained using a new atom-pair 3D-fingerprint for proteins and implemented in a web-based database exploration tool comprising an interactive color-coded map of the PDB chemical space and a nearest-neighbor search tool. The PDB-Explorer website is freely available at www.cheminfo.org/pdbexplorer and represents an unprecedented opportunity to interactively visualize and explore the structural diversity of the PDB.
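The shape-fingerprint idea, counting atom pairs per through-space distance bin and then projecting the fingerprints to a 2D map, can be sketched generically; the bin edges, the omission of atom-pair categories, and the random coordinates below are illustrative simplifications of the published 136-dimensional 3DP.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA

def distance_fingerprint(coords, bins):
    """Count atom pairs falling into each through-space distance bin."""
    d = pdist(coords)                       # all pairwise distances
    counts, _ = np.histogram(d, bins=bins)
    return counts / max(len(d), 1)          # normalize by the number of pairs

# Toy "structures": random coordinate sets standing in for PDB entries.
rng = np.random.default_rng(0)
structures = [rng.normal(scale=rng.uniform(5, 20), size=(rng.integers(50, 300), 3))
              for _ in range(100)]

bins = np.linspace(0, 60, 18)               # illustrative distance bins (angstroms)
fps = np.array([distance_fingerprint(c, bins) for c in structures])

# Project the fingerprint space to 2D for a map-style overview, as in PDB-Explorer.
xy = PCA(n_components=2).fit_transform(fps)
print(xy[:5])
```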
Declarative language design for interactive visualization.
Heer, Jeffrey; Bostock, Michael
2010-01-01
We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.
NASA Astrophysics Data System (ADS)
Craig, Paul; Kennedy, Jessie
2008-01-01
An increasingly common approach taken by taxonomists to define the relationships between taxa in alternative hierarchical classifications is to use a set-based notation that states the relationship between two taxa from alternative classifications. Textual recording of these relationships is cumbersome and difficult for taxonomists to manage. While text-based GUI tools that ease the process are beginning to appear, these have several limitations. Interactive visual tools offer greater potential to allow taxonomists to explore the taxa in these hierarchies and specify such relationships. This paper describes the Concept Relationship Editor, an interactive visualisation tool designed to support the assertion of relationships between taxonomic classifications. The tool operates using an interactive space-filling adjacency layout which allows users to expand multiple lists of taxa with common parents so they can explore and assert relationships between two classifications.
OpinionSeer: interactive visualization of hotel customer feedback.
Wu, Yingcai; Wei, Furu; Liu, Shixia; Au, Norman; Cui, Weiwei; Zhou, Hong; Qu, Huamin
2010-01-01
The rapid development of Web technology has resulted in an increasing number of hotel customers sharing their opinions on the hotel services. Effective visual analysis of online customer opinions is needed, as it has a significant impact on building a successful business. In this paper, we present OpinionSeer, an interactive visualization system that could visually analyze a large collection of online hotel customer reviews. The system is built on a new visualization-centric opinion mining technique that considers uncertainty for faithfully modeling and analyzing customer opinions. A new visual representation is developed to convey customer opinions by augmenting well-established scatterplots and radial visualization. To provide multiple-level exploration, we introduce subjective logic to handle and organize subjective opinions with degrees of uncertainty. Several case studies illustrate the effectiveness and usefulness of OpinionSeer on analyzing relationships among multiple data dimensions and comparing opinions of different groups. Aside from data on hotel customer feedback, OpinionSeer could also be applied to visually analyze customer opinions on other products or services.
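The subjective-logic step can be illustrated with the standard mapping from positive and negative evidence counts to a binomial opinion (belief, disbelief, uncertainty); the review counts below are invented, and OpinionSeer's exact formulation may differ.

```python
def binomial_opinion(positive, negative, base_rate=0.5, prior_weight=2.0):
    """Map evidence counts to a subjective-logic opinion (b, d, u) plus its
    probability expectation, using the common non-informative prior weight W=2."""
    total = positive + negative + prior_weight
    b = positive / total       # belief
    d = negative / total       # disbelief
    u = prior_weight / total   # uncertainty (shrinks as evidence accumulates)
    expectation = b + base_rate * u
    return b, d, u, expectation

# Hypothetical counts of positive/negative mentions of "room cleanliness"
# extracted from two hotels' reviews; more evidence means less uncertainty.
for hotel, (pos, neg) in {"Hotel A": (40, 10), "Hotel B": (4, 1)}.items():
    b, d, u, e = binomial_opinion(pos, neg)
    print(f"{hotel}: belief={b:.2f} disbelief={d:.2f} uncertainty={u:.2f} E={e:.2f}")
```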
An interactive, multi-touch videowall for scientific data exploration
NASA Astrophysics Data System (ADS)
Blower, Jon; Griffiths, Guy; van Meersbergen, Maarten; Lusher, Scott; Styles, Jon
2014-05-01
The use of videowalls for scientific data exploration is rising as hardware becomes cheaper and the availability of software and multimedia content grows. Most videowalls are used primarily for outreach and communication purposes, but there is increasing interest in using large display screens to support exploratory visualization as an integral part of scientific research. In this PICO presentation we will present a brief overview of a new videowall system at the University of Reading, which is designed specifically to support interactive, exploratory visualization activities in climate science and Earth Observation. The videowall consists of eight 42-inch full-HD screens (in 4x2 formation), giving a total resolution of about 16 megapixels. The display is managed by a videowall controller, which can direct video to the screen from up to four external laptops, a purpose-built graphics workstation, or any combination thereof. A multi-touch overlay provides the capability for the user to interact directly with the data. There are many ways to use the videowall, and a key technical challenge is to make the most of the touch capabilities - touch has the potential to greatly reduce the learning curve in interactive data exploration, but most software is not yet designed for this purpose. In the PICO we will present an overview of some ways in which the wall can be employed in science, seeking feedback and discussion from the community. The system was inspired by an existing and highly-successful system (known as the "Collaboratorium") at the Netherlands e-Science Center (NLeSC). We will demonstrate how we have adapted NLeSC's visualization software to our system for touch-enabled multi-screen climate data exploration.
Xia, Kai; Dong, Dong; Han, Jing-Dong J
2006-01-01
Background Although protein-protein interaction (PPI) networks have been explored by various experimental methods, the maps so built are still limited in coverage and accuracy. To further expand the PPI network and to extract more accurate information from existing maps, studies have been carried out to integrate various types of functional relationship data. A frequently updated database of computationally analyzed potential PPIs, providing biological researchers with rapid and easy access to analyze original data as a biological network, is still lacking. Results By applying a probabilistic model, we integrated 27 heterogeneous genomic, proteomic and functional annotation datasets to predict PPI networks in human. In addition to previously studied data types, we show that phenotypic distances and genetic interactions can also be integrated to predict PPIs. We further built an easy-to-use, updatable integrated PPI database, the Integrated Network Database (IntNetDB), online, to provide automatic prediction and visualization of PPI networks among genes of interest. The networks can be visualized in SVG (Scalable Vector Graphics) format for zooming in or out. IntNetDB also provides a tool to extract topologically highly connected network neighborhoods from a specific network for further exploration and research. Using the MCODE (Molecular Complex Detection) algorithm, 190 such neighborhoods were detected among all the predicted interactions. The predicted PPIs can also be mapped to worm, fly and mouse interologs. Conclusion IntNetDB includes 180,010 predicted protein-protein interactions among 9,901 human proteins and represents a useful resource for the research community. Our study has increased prediction coverage by five-fold. IntNetDB also provides easy-to-use network visualization and analysis tools that allow biological researchers unfamiliar with computational biology to access and analyze data over the internet. The web interface of IntNetDB is freely accessible at . Visualization requires Mozilla version 1.8 (or higher) or Internet Explorer with installation of SVGviewer. PMID:17112386
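A common way to combine heterogeneous evidence for PPI prediction is a naive-Bayes product of likelihood ratios; the sketch below illustrates that general scheme with made-up ratios and a made-up prior, not IntNetDB's actual trained model.

```python
import numpy as np

def posterior_interaction_prob(likelihood_ratios, prior=1e-3):
    """Combine per-dataset likelihood ratios under a naive-Bayes assumption.

    Each ratio is P(evidence | interacting) / P(evidence | not interacting);
    the ratios are multiplied into the prior odds to give posterior odds.
    """
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * np.prod(likelihood_ratios)
    return posterior_odds / (1.0 + posterior_odds)

# Hypothetical evidence for one candidate protein pair: co-expression,
# phenotypic distance, a genetic interaction, and a shared-domain match.
ratios = [4.0, 2.5, 8.0, 1.2]
print(f"P(interaction | evidence) = {posterior_interaction_prob(ratios):.3f}")
```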
Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.
Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M
2015-01-01
This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual content, such as art galleries and museums, by using a telepresence robot, an RGB-D sensor (color and depth camera), and a haptic interface. Recent improvements in RGB-D sensors have enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of this data as a tangible haptic experience has not been sufficiently addressed, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments took part in our experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments, providing an enhanced interactive experience in which they can remotely access public places (art galleries and museums) with the aid of the haptic modality and robotic telepresence.
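The step from an RGB-D depth frame to the 3D point cloud that feeds the haptic renderer can be sketched with the pinhole camera model; the intrinsics and the synthetic depth image below are placeholders, not values from any particular sensor.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) to camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop invalid (zero-depth) pixels

# Placeholder intrinsics roughly in the range of consumer RGB-D sensors.
fx = fy = 525.0
cx, cy = 319.5, 239.5
depth = np.full((480, 640), 1.5)      # synthetic flat wall 1.5 m away
cloud = depth_to_point_cloud(depth, fx, fy, cx, cy)
print(cloud.shape)                    # (307200, 3) points to feed the haptic renderer
```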
PanACEA: a bioinformatics tool for the exploration and visualization of bacterial pan-chromosomes.
Clarke, Thomas H; Brinkac, Lauren M; Inman, Jason M; Sutton, Granger; Fouts, Derrick E
2018-06-27
Bacterial pan-genomes, comprising conserved and variable genes across multiple sequenced bacterial genomes, allow for identification of genomic regions that are phylogenetically discriminating or functionally important. Pan-genomes consist of large amounts of data, which can restrict researchers' ability to locate and analyze these regions. Multiple software packages are available to visualize pan-genomes, but currently their ability to address these concerns is limited by using only pre-computed data sets, prioritizing core over variable gene clusters, or by not accounting for pan-chromosome positioning in the viewer. We introduce PanACEA (Pan-genome Atlas with Chromosome Explorer and Analyzer), which utilizes locally-computed interactive web-pages to view ordered pan-genome data. It consists of multi-tiered, hierarchical display pages that extend from pan-chromosomes to both core and variable regions to single genes. Regions and genes are functionally annotated to allow for rapid searching and visual identification of regions of interest, with the option that user-supplied genomic phylogenies and metadata can be incorporated. PanACEA's memory and time requirements are within the capacities of standard laptops. The capability of PanACEA as a research tool is demonstrated by highlighting a variable region important in differentiating strains of Enterobacter hormaechei. PanACEA can rapidly translate the results of pan-chromosome programs into an intuitive and interactive visual representation. It will empower researchers to visually explore and identify regions of the pan-chromosome that are most biologically interesting, and to obtain publication-quality images of these regions.
jSquid: a Java applet for graphical on-line network exploration.
Klammer, Martin; Roopra, Sanjit; Sonnhammer, Erik L L
2008-06-15
jSquid is a graph visualization tool for exploring graphs from protein-protein interaction or functional coupling networks. The tool was designed for the FunCoup web site, but can be used for any similar network-exploration purpose. The program offers various visualization and graph-manipulation techniques to increase its utility for the user. jSquid is available for direct use and download at http://jSquid.sbc.su.se, including source code under the GPLv3 license and input examples. It requires Java version 5 or higher to run properly. Contact: erik.sonnhammer@sbc.su.se. Supplementary data are available at Bioinformatics online.
Data visualization in interactive maps and time series
NASA Astrophysics Data System (ADS)
Maigne, Vanessa; Evano, Pascal; Brockmann, Patrick; Peylin, Philippe; Ciais, Philippe
2014-05-01
State-of-the-art data visualization has little in common with the plots and maps we used a few years ago. Many open-source tools are now available to provide access to scientific data and implement accessible, interactive, and flexible web applications. Here we present a web site, opened in November 2013, for creating custom global and regional maps and time series from research models and datasets. For maps, we explore and access data sources from a THREDDS Data Server (TDS) with the OGC WMS protocol (using the ncWMS implementation), then create interactive maps with the OpenLayers JavaScript library and extra information layers from a GeoServer. Maps become dynamic, zoomable, synchronously connected to each other, and exportable to Google Earth. For time series, we extract data from a TDS with the NetCDF Subset Service (NCSS), then display interactive graphs with a custom library based on the Data-Driven Documents JavaScript library (D3.js). This time series application provides dynamic functionalities such as interpolation, interactive zoom on different axes, display of point values, and export to different formats. These tools were implemented for the Global Carbon Atlas (http://www.globalcarbonatlas.org): a web portal to explore, visualize, and interpret global and regional carbon fluxes from various model simulations arising from both human activities and natural processes, a work led by the Global Carbon Project.
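The map side of this workflow reduces to standard OGC WMS GetMap requests against the THREDDS/ncWMS endpoint, which the OpenLayers client issues for each view. A rough sketch with the requests library is below; the server URL, layer name, style, and time value are placeholders.

```python
import requests

# Placeholder endpoint and layer; a real deployment would point at the
# project's THREDDS Data Server with ncWMS enabled.
WMS_URL = "https://example.org/thredds/wms/carbon/fluxes.nc"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "co2_flux",          # hypothetical variable name
    "STYLES": "boxfill/rainbow",   # ncWMS-style palette name, illustrative
    "CRS": "CRS:84",
    "BBOX": "-180,-90,180,90",
    "WIDTH": 1024,
    "HEIGHT": 512,
    "FORMAT": "image/png",
    "TRANSPARENT": "true",
    "TIME": "2010-06-15T00:00:00Z",
}

resp = requests.get(WMS_URL, params=params, timeout=30)
resp.raise_for_status()
with open("co2_flux_map.png", "wb") as fh:
    fh.write(resp.content)  # the overlay image a web map client would display
```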
McNally, Colin P.; Eng, Alexander; Noecker, Cecilia; Gagne-Maynard, William C.; Borenstein, Elhanan
2018-01-01
The abundance of both taxonomic groups and gene categories in microbiome samples can now be easily assayed via various sequencing technologies, and visualized using a variety of software tools. However, the assemblage of taxa in the microbiome and its gene content are clearly linked, and tools for visualizing the relationship between these two facets of microbiome composition and for facilitating exploratory analysis of their co-variation are lacking. Here we introduce BURRITO, a web tool for interactive visualization of microbiome multi-omic data with paired taxonomic and functional information. BURRITO simultaneously visualizes the taxonomic and functional compositions of multiple samples and dynamically highlights relationships between taxa and functions to capture the underlying structure of these data. Users can browse for taxa and functions of interest and interactively explore the share of each function attributed to each taxon across samples. BURRITO supports multiple input formats for taxonomic and metagenomic data, allows adjustment of data granularity, and can export generated visualizations as static publication-ready formatted figures. In this paper, we describe the functionality of BURRITO, and provide illustrative examples of its utility for visualizing various trends in the relationship between the composition of taxa and functions in complex microbiomes. PMID:29545787
Innovative Visualization Techniques applied to a Flood Scenario
NASA Astrophysics Data System (ADS)
Falcão, António; Ho, Quan; Lopes, Pedro; Malamud, Bruce D.; Ribeiro, Rita; Jern, Mikael
2013-04-01
The large and ever-increasing amounts of multi-dimensional, time-varying and geospatial digital information from multiple sources represent a major challenge for today's analysts. We present a set of visualization techniques that can be used for the interactive analysis of geo-referenced and time-sampled data sets, providing an integrated mechanism that aids users in collaboratively exploring, presenting and communicating visually complex and dynamic data. Here we present these concepts in the context of a 4-hour flood scenario from Lisbon in 2010, with data that include measures of water column (flood height) every 10 minutes at a 4.5 m x 4.5 m resolution, topography, building damage, building information, and online base maps. Techniques we use include web-based linked views, multiple charts, map layers and storytelling. We explain in more detail two of these that are not yet in common use for data visualization: storytelling and web-based linked views. Visual storytelling is a method for providing a guided but interactive process of visualizing data, allowing more engaging data exploration through interactive web-enabled visualizations. Within storytelling, a snapshot mechanism helps the author of a story to highlight data views of particular interest and subsequently share them or guide others within the data analysis process. This allows a person to select relevant attributes for a snapshot, such as highlighted regions for comparison, the time step, or class values for the colour legend, and to capture the current application state, which can then be provided as a hyperlink and recreated by someone else. Since data can be embedded within this snapshot, it is possible to interactively visualize and manipulate it. The second technique, web-based linked views, uses multiple windows that interactively respond to user selections, so that when an object is selected or changed in one window, it automatically updates in all the other windows. These concepts can be part of a collaborative platform, where multiple people share and work together on the data via online access, which also allows its remote usage from a mobile platform. Storytelling augments analysis and decision-making capabilities, allowing users to assimilate complex situations and reach informed decisions, in addition to helping the public visualize information. In our visualization scenario, developed in the context of the VA-4D project for the European Space Agency (see http://www.ca3-uninova.org/project_va4d), we make use of the GAV (GeoAnalytics Visualization) framework, a web-oriented visual analytics application based on multiple interactive views. The final visualization that we produce includes multiple interactive views, including a dynamic multi-layer map surrounded by other visualizations such as bar charts, time graphs and scatter plots. The map provides flood and building information on top of a base city map (street maps and/or satellite imagery provided by online map services such as Google Maps, Bing Maps, etc.). Damage over time for selected buildings, damage for all buildings at a chosen time period, and the correlation between damage and water depth can be analysed in the other views. This interactive web-based visualization, which incorporates the ideas of storytelling, web-based linked views, and other visualization techniques for a 4-hour flood event in Lisbon in 2010, can be found online at http://www.ncomva.se/flash/projects/esa/flooding/.
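The snapshot mechanism amounts to serializing the current application state and passing it around as a link; a minimal sketch of that idea is below (the state fields are invented for illustration, and GAV's actual snapshot format is not described in the abstract).

```python
import base64
import json
from urllib.parse import urlencode, urlparse, parse_qs

def make_snapshot_link(base_url, state):
    """Serialize a view state and embed it in a shareable hyperlink."""
    payload = base64.urlsafe_b64encode(json.dumps(state).encode()).decode()
    return f"{base_url}?{urlencode({'snapshot': payload})}"

def restore_snapshot(link):
    """Recover the application state from a snapshot link."""
    payload = parse_qs(urlparse(link).query)["snapshot"][0]
    return json.loads(base64.urlsafe_b64decode(payload))

# Hypothetical state for the flood scenario: time step, highlighted buildings,
# colour-legend classes, and the active map layers.
state = {"time_step": 18, "selected_buildings": [1021, 1187],
         "legend_classes": [0.0, 0.5, 1.0, 2.0], "layers": ["flood", "damage"]}

link = make_snapshot_link("https://example.org/flood-story", state)
print(link)
print(restore_snapshot(link) == state)  # True: the receiver recreates the same view
```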
Viewing Objects and Planning Actions: On the Potentiation of Grasping Behaviours by Visual Objects
ERIC Educational Resources Information Center
Makris, Stergios; Hadar, Aviad A.; Yarrow, Kielan
2011-01-01
How do humans interact with tools? Gibson (1979) suggested that humans perceive directly what tools afford in terms of meaningful actions. This "affordances" hypothesis implies that visual objects can potentiate motor responses even in the absence of an intention to act. Here we explore the temporal evolution of motor plans afforded by common…
ERIC Educational Resources Information Center
Ryoo, Kihyun; Bedell, Kristin
2017-01-01
Although extensive research has shown the educational value of different types of interactive visualizations on students' science learning in general, how such technologies can contribute to English learners' (ELs) understanding of complex scientific concepts has not been sufficiently explored to date. This mixed-methods study investigated how…
ERIC Educational Resources Information Center
Liang, Hai-Ning; Sedig, Kamran
2010-01-01
Many students find it difficult to engage with mathematical concepts. As a relatively new class of learning tools, visualization tools may be able to promote higher levels of engagement with mathematical concepts. Often, development of new tools may outpace empirical evaluations of the effectiveness of these tools, especially in educational…
Using WorldWide Telescope in Observing, Research and Presentation
NASA Astrophysics Data System (ADS)
Roberts, Douglas A.; Fay, J.
2014-01-01
WorldWide Telescope (WWT) is free software that enables researchers to interactively explore observational data using a user-friendly interface. Reference all-sky datasets and pointed observations are available as layers, along with the ability to easily overlay additional FITS images and catalog data. Connections to the Astrophysics Data System (ADS) are included, which enable visual investigation using WWT to drive document searches in ADS. WWT can be used to capture and share visual exploration with colleagues during observational planning and analysis. Finally, researchers can use WorldWide Telescope to create videos for professional, education and outreach presentations. I will conclude with an example of how I have used WWT in a research project. Specifically, I will discuss how WorldWide Telescope helped our group to prepare for radio observations and, following them, to analyze multi-wavelength data taken in the inner parsec of the Galaxy. A concluding video will show how WWT brought together disparate datasets in a unified interactive visualization environment.
The KUPNetViz: a biological network viewer for multiple -omics datasets in kidney diseases.
Moulos, Panagiotis; Klein, Julie; Jupp, Simon; Stevens, Robert; Bascands, Jean-Loup; Schanstra, Joost P
2013-07-24
Constant technological advances have allowed scientists in biology to migrate from conventional single-omics to multi-omics experimental approaches, challenging bioinformatics to bridge this multi-tiered information. Ongoing research in renal biology is no exception. The results of large-scale and/or high throughput experiments, presenting a wealth of information on kidney disease are scattered across the web. To tackle this problem, we recently presented the KUPKB, a multi-omics data repository for renal diseases. In this article, we describe KUPNetViz, a biological graph exploration tool allowing the exploration of KUPKB data through the visualization of biomolecule interactions. KUPNetViz enables the integration of multi-layered experimental data over different species, renal locations and renal diseases to protein-protein interaction networks and allows association with biological functions, biochemical pathways and other functional elements such as miRNAs. KUPNetViz focuses on the simplicity of its usage and the clarity of resulting networks by reducing and/or automating advanced functionalities present in other biological network visualization packages. In addition, it allows the extrapolation of biomolecule interactions across different species, leading to the formulations of new plausible hypotheses, adequate experiment design and to the suggestion of novel biological mechanisms. We demonstrate the value of KUPNetViz by two usage examples: the integration of calreticulin as a key player in a larger interaction network in renal graft rejection and the novel observation of the strong association of interleukin-6 with polycystic kidney disease. The KUPNetViz is an interactive and flexible biological network visualization and exploration tool. It provides renal biologists with biological network snapshots of the complex integrated data of the KUPKB allowing the formulation of new hypotheses in a user friendly manner.
imDEV: a graphical user interface to R multivariate analysis tools in Microsoft Excel
USDA-ARS?s Scientific Manuscript database
Interactive modules for data exploration and visualization (imDEV) is a Microsoft Excel spreadsheet embedded application providing an integrated environment for the analysis of omics data sets with a user-friendly interface. Individual modules were designed to provide toolsets to enable interactive ...
Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research
NASA Astrophysics Data System (ADS)
Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.
2008-12-01
In both research and education, learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software, Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OS X and Linux platforms.
An active visual search interface for Medline.
Xuan, Weijian; Dai, Manhong; Mirel, Barbara; Wilson, Justin; Athey, Brian; Watson, Stanley J; Meng, Fan
2007-01-01
Searching the Medline database is almost a daily necessity for many biomedical researchers. However, available Medline search solutions are mainly designed for the quick retrieval of a small set of most relevant documents. Because of this search model, they are not suitable for the large-scale exploration of literature and the underlying biomedical conceptual relationships, which are common tasks in the age of high throughput experimental data analysis and cross-discipline research. We try to develop a new Medline exploration approach by incorporating interactive visualization together with powerful grouping, summary, sorting and active external content retrieval functions. Our solution, PubViz, is based on the FLEX platform designed for interactive web applications and its prototype is publicly available at: http://brainarray.mbni.med.umich.edu/Brainarray/DataMining/PubViz.
Typograph: Multiscale Spatial Exploration of Text Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Burtner, Edwin R.; Cramer, Nicholas O.
2013-10-06
Visualizing large document collections using a spatial layout of terms can enable quick overviews of information. These visual metaphors (e.g., word clouds, tag clouds, etc.) traditionally show a series of terms organized by space-filling algorithms. However, often lacking in these views is the ability to interactively explore the information to gain more detail, and the location and rendering of the terms are often not based on mathematical models that maintain relative distances from other information based on similarity metrics. In this paper, we present Typograph, a multi-scale spatial exploration visualization for large document collections. Based on the term-based visualization methods, Typograph enables multiple levels of detail (terms, phrases, snippets, and full documents) within the single spatialization. Further, the information is placed based on its relative similarity to other information to create the “near = similar” geographic metaphor. This paper discusses the design principles and functionality of Typograph and presents a use case analyzing Wikipedia to demonstrate usage.
A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data
Venkat, A.; Christensen, C.; Gyulassy, A.; Summa, B.; Federer, F.; Angelucci, A.; Pascucci, V.
2017-01-01
The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the VISUS framework used in an interactive setting with the microscopy data. PMID:28638896
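The abstract above mentions a hierarchical, cache-oblivious layout (the IDX format) without spelling out its internals. As a rough, generic illustration of why bit-interleaved indices keep nearby voxels close together on disk, here is a minimal Python sketch of Morton (Z-order) indexing; it is not the actual IDX format, and the function names are only for this example.

```python
# Minimal sketch: Morton (Z-order) indexing of 3D voxel coordinates.
# A generic illustration of hierarchical, locality-preserving layouts for
# gridded data; it is NOT the actual IDX file format used by ViSUS.

def part1by2(n: int) -> int:
    """Spread the bits of a 21-bit integer so they occupy every third bit."""
    n &= 0x1FFFFF                        # keep 21 bits
    n = (n | (n << 32)) & 0x1F00000000FFFF
    n = (n | (n << 16)) & 0x1F0000FF0000FF
    n = (n | (n << 8))  & 0x100F00F00F00F00F
    n = (n | (n << 4))  & 0x10C30C30C30C30C3
    n = (n | (n << 2))  & 0x1249249249249249
    return n

def morton3d(x: int, y: int, z: int) -> int:
    """Interleave x, y, z bits into a single Z-order index."""
    return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

if __name__ == "__main__":
    # Neighbouring voxels map to nearby indices, which keeps bricks of the
    # volume contiguous on disk and cache-friendly at any resolution level.
    print(morton3d(3, 1, 2))   # -> 43
```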
Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega
2015-04-14
This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work.
RankExplorer: Visualization of Ranking Changes in Large Time Series Data.
Shi, Conglei; Cui, Weiwei; Liu, Shixia; Xu, Panpan; Chen, Wei; Qu, Huamin
2012-12-01
For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute searching number for each word as well as their relative rankings. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal the ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregation values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking changes over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to some real time series data and the case studies demonstrate that our method can reveal the underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.
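As a hedged sketch of the kind of preprocessing such a view needs (not the paper's actual segmentation algorithm), the snippet below ranks synthetic item counts per time step and collapses them into a handful of ranking categories whose aggregated values could feed a ThemeRiver-style display; all data and names here are made up.

```python
# Minimal sketch: per-time-step ranks and coarse "ranking categories"
# (a simple stand-in for RankExplorer's segmentation step).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
items = [f"word_{i}" for i in range(1000)]
times = pd.date_range("2012-01-01", periods=52, freq="W")

# rows = time steps, columns = items, values = e.g. weekly search counts
counts = pd.DataFrame(rng.poisson(100, size=(len(times), len(items))),
                      index=times, columns=items)

ranks = counts.rank(axis=1, ascending=False, method="first")

# Assign each item to one of 5 ranking categories based on its median rank,
# so thousands of curves collapse to a few bands.
median_rank = ranks.median(axis=0).rank(method="first")
categories = pd.qcut(median_rank, q=5,
                     labels=["top", "high", "mid", "low", "bottom"])

# Aggregate value per category over time, the kind of series a
# ThemeRiver-style view would stack.
aggregated = counts.T.groupby(categories, observed=True).sum().T
print(aggregated.head())
```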
GODIVA2: interactive visualization of environmental data on the Web.
Blower, J D; Haines, K; Santokhee, A; Liu, C L
2009-03-13
GODIVA2 is a dynamic website that provides visual access to several terabytes of physically distributed, four-dimensional environmental data. It allows users to explore large datasets interactively without the need to install new software or download and understand complex data. Through the use of open international standards, GODIVA2 maintains a high level of interoperability with third-party systems, allowing diverse datasets to be mutually compared. Scientists can use the system to search for features in large datasets and to diagnose the output from numerical simulations and data processing algorithms. Data providers around Europe have adopted GODIVA2 as an INSPIRE-compliant dynamic quick-view system for providing visual access to their data.
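The abstract does not name the standards involved, but systems of this kind typically expose an OGC Web Map Service (WMS) interface; assuming such an interface, a client-side request for one rendered layer could look roughly like the Python sketch below (the endpoint URL and layer name are placeholders).

```python
# Hedged sketch: fetching a rendered map layer via an OGC WMS GetMap request,
# the kind of open-standard call a browser front end can sit on top of.
import requests

WMS_ENDPOINT = "https://example.org/wms"   # hypothetical server
params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "sea_surface_temperature",   # hypothetical layer name
    "CRS": "EPSG:4326",
    "BBOX": "-90,-180,90,180",             # lat/lon axis order in WMS 1.3.0
    "WIDTH": "1024",
    "HEIGHT": "512",
    "FORMAT": "image/png",
    "TIME": "2009-03-13T00:00:00Z",        # optional time dimension
}

resp = requests.get(WMS_ENDPOINT, params=params, timeout=30)
resp.raise_for_status()
with open("sst.png", "wb") as fh:
    fh.write(resp.content)                 # the tile a web client would display
```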
Zhang, Baohong; Zhao, Shanrong; Neuhaus, Isaac
2018-05-03
We present a bioinformatics and systems biology visualization toolkit harmonizing real time interactive exploring and analyzing of big data, full-fledged customizing of look-n-feel, and producing multi-panel publication-ready figures in PDF format simultaneously. Source code and detailed user guides are available at http://canvasxpress.org, https://baohongz.github.io/canvasDesigner, and https://baohongz.github.io/canvasDesigner/demo_video.html. isaac.neuhaus@bms.com, baohong.zhang@pfizer.com, shanrong.zhao@pfizer.com. Supplementary materials are available at https://goo.gl/1uQygs.
Olechnovic, Kliment; Margelevicius, Mindaugas; Venclovas, Ceslovas
2011-03-01
We present Voroprot, an interactive cross-platform software tool that provides a unique set of capabilities for exploring geometric features of protein structure. Voroprot allows the construction and visualization of the Apollonius diagram (also known as the additively weighted Voronoi diagram), the Apollonius graph, protein alpha shapes, interatomic contact surfaces, solvent accessible surfaces, pockets and cavities inside protein structure. Voroprot is available for Windows, Linux and Mac OS X operating systems and can be downloaded from http://www.ibt.lt/bioinformatics/voroprot/.
Interactive web-based identification and visualization of transcript shared sequences.
Azhir, Alaleh; Merino, Louis-Henri; Nauen, David W
2018-05-12
We have developed TraC (Transcript Consensus), a web-based tool for detecting and visualizing shared sequences among two or more mRNA transcripts such as splice variants. Results including exon-exon boundaries are returned in a highly intuitive, data-rich, interactive plot that permits users to explore the similarities and differences of multiple transcript sequences. The online tool (http://labs.pathology.jhu.edu/nauen/trac/) is free to use. The source code is freely available for download (https://github.com/nauenlab/TraC). Copyright © 2018 Elsevier Inc. All rights reserved.
Khramtsova, Ekaterina A; Stranger, Barbara E
2017-02-01
Over the last decade, genome-wide association studies (GWAS) have generated vast amounts of analysis results, requiring development of novel tools for data visualization. Quantile–quantile (QQ) plots and Manhattan plots are classical tools which have been utilized to visually summarize GWAS results and identify genetic variants significantly associated with traits of interest. However, static visualizations are limiting in the information that can be shown. Here, we present Assocplots, a Python package for viewing and exploring GWAS results not only using classic static Manhattan and QQ plots, but also through a dynamic extension which allows users to interactively visualize the relationships between GWAS results from multiple cohorts or studies. The Assocplots package is open source and distributed under the MIT license via GitHub (https://github.com/khramts/assocplots) along with examples, documentation and installation instructions. ekhramts@medicine.bsd.uchicago.edu or bstranger@medicine.bsd.uchicago.edu
Explore the virtual side of earth science
1998-01-01
Scientists have always struggled to find an appropriate technology that could represent three-dimensional (3-D) data, facilitate dynamic analysis, and encourage on-the-fly interactivity. In the recent past, scientific visualization has increased the scientist's ability to visualize information, but it has not provided the interactive environment necessary for rapidly changing the model or for viewing the model in ways not predetermined by the visualization specialist. Virtual Reality Modeling Language (VRML 2.0) is a new environment for visualizing 3-D information spaces and is accessible through the Internet with current browser technologies. Researchers from the U.S. Geological Survey (USGS) are using VRML as a scientific visualization tool to help convey complex scientific concepts to various audiences. Kevin W. Laurent, computer scientist, and Maura J. Hogan, technical information specialist, have created a collection of VRML models available through the Internet at Virtual Earth Science (virtual.er.usgs.gov).
Nowke, Christian; Diaz-Pier, Sandra; Weyers, Benjamin; Hentschel, Bernd; Morrison, Abigail; Kuhlen, Torsten W.; Peyser, Alexander
2018-01-01
Simulation models in many scientific fields can have non-unique solutions or unique solutions which can be difficult to find. Moreover, in evolving systems, unique final state solutions can be reached by multiple different trajectories. Neuroscience is no exception. Often, neural network models are subject to parameter fitting to obtain desirable output comparable to experimental data. Parameter fitting without sufficient constraints and a systematic exploration of the possible solution space can lead to conclusions valid only around local minima or around non-minima. To address this issue, we have developed an interactive tool for visualizing and steering parameters in neural network simulation models. In this work, we focus particularly on connectivity generation, since finding suitable connectivity configurations for neural network models constitutes a complex parameter search scenario. The development of the tool has been guided by several use cases—the tool allows researchers to steer the parameters of the connectivity generation during the simulation, thus quickly growing networks composed of multiple populations with a targeted mean activity. The flexibility of the software allows scientists to explore other connectivity and neuron variables apart from the ones presented as use cases. With this tool, we enable an interactive exploration of parameter spaces and a better understanding of neural network models and grapple with the crucial problem of non-unique network solutions and trajectories. In addition, we observe a reduction in turn around times for the assessment of these models, due to interactive visualization while the simulation is computed. PMID:29937723
Towards human-computer synergetic analysis of large-scale biological data.
Singh, Rahul; Yang, Hui; Dalziel, Ben; Asarnow, Daniel; Murad, William; Foote, David; Gormley, Matthew; Stillman, Jonathan; Fisher, Susan
2013-01-01
Advances in technology have led to the generation of massive amounts of complex and multifarious biological data in areas ranging from genomics to structural biology. The volume and complexity of such data lead to significant challenges in terms of its analysis, especially when one seeks to generate hypotheses or explore the underlying biological processes. At the state-of-the-art, the application of automated algorithms followed by perusal and analysis of the results by an expert continues to be the predominant paradigm for analyzing biological data. This paradigm works well in many problem domains. However, it is also limiting, since domain experts are forced to apply their instincts and expertise such as contextual reasoning, hypothesis formulation, and exploratory analysis after the algorithm has produced its results. In many areas where the organization and interaction of the biological processes is poorly understood and exploratory analysis is crucial, what is needed is to integrate domain expertise during the data analysis process and use it to drive the analysis itself. In the context of the aforementioned background, the results presented in this paper describe advancements along two methodological directions. First, given the context of biological data, we utilize and extend a design approach called experiential computing from multimedia information system design. This paradigm combines information visualization and human-computer interaction with algorithms for exploratory analysis of large-scale and complex data. In the proposed approach, emphasis is laid on: (1) allowing users to directly visualize, interact, experience, and explore the data through interoperable visualization-based and algorithmic components, (2) supporting unified query and presentation spaces to facilitate experimentation and exploration, (3) providing external contextual information by assimilating relevant supplementary data, and (4) encouraging user-directed information visualization, data exploration, and hypotheses formulation. Second, to illustrate the proposed design paradigm and measure its efficacy, we describe two prototype web applications. The first, called XMAS (Experiential Microarray Analysis System), is designed for analysis of time-series transcriptional data. The second system, called PSPACE (Protein Space Explorer), is designed for holistic analysis of structural and structure-function relationships using interactive low-dimensional maps of the protein structure space. Both these systems promote and facilitate human-computer synergy, where cognitive elements such as domain knowledge, contextual reasoning, and purpose-driven exploration are integrated with a host of powerful algorithmic operations that support large-scale data analysis, multifaceted data visualization, and multi-source information integration. The proposed design philosophy combines visualization, algorithmic components, and cognitive expertise into a seamless processing-analysis-exploration framework that facilitates sense-making, exploration, and discovery. Using XMAS, we present case studies that analyze transcriptional data from two highly complex domains: gene expression in the placenta during human pregnancy and reaction of marine organisms to heat stress. With PSPACE, we demonstrate how complex structure-function relationships can be explored. These results demonstrate the novelty, advantages, and distinctions of the proposed paradigm.
Furthermore, the results also highlight how domain insights can be combined with algorithms to discover meaningful knowledge and formulate evidence-based hypotheses during the data analysis process. Finally, user studies against comparable systems indicate that both XMAS and PSPACE deliver results with better interpretability while placing lower cognitive loads on the users. XMAS is available at: http://tintin.sfsu.edu:8080/xmas. PSPACE is available at: http://pspace.info/.
MotionFlow: Visual Abstraction and Aggregation of Sequential Patterns in Human Motion Tracking Data.
Jang, Sujin; Elmqvist, Niklas; Ramani, Karthik
2016-01-01
Pattern analysis of human motions, which is useful in many research areas, requires understanding and comparison of different styles of motion patterns. However, working with human motion tracking data to support such analysis poses great challenges. In this paper, we propose MotionFlow, a visual analytics system that provides an effective overview of various motion patterns based on an interactive flow visualization. This visualization formulates a motion sequence as transitions between static poses, and aggregates these sequences into a tree diagram to construct a set of motion patterns. The system also allows the users to directly reflect the context of data and their perception of pose similarities in generating representative pose states. We provide local and global controls over the partition-based clustering process. To support the users in organizing unstructured motion data into pattern groups, we designed a set of interactions that enables searching for similar motion sequences from the data, detailed exploration of data subsets, and creating and modifying the group of motion patterns. To evaluate the usability of MotionFlow, we conducted a user study with six researchers with expertise in gesture-based interaction design. They used MotionFlow to explore and organize unstructured motion tracking data. Results show that the researchers were able to easily learn how to use MotionFlow, and the system effectively supported their pattern analysis activities, including leveraging their perception and domain knowledge.
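A minimal sketch of the aggregation idea described above, assuming pose labels have already been produced by some upstream clustering of tracking frames: sequences of labels are folded into a prefix tree whose per-node counts summarize shared motion patterns. This is an illustration of the data structure, not MotionFlow's implementation.

```python
# Minimal sketch: aggregating discrete pose sequences into a prefix tree,
# the kind of structure a MotionFlow-style tree diagram summarizes.
from collections import defaultdict

def make_node():
    return {"count": 0, "children": defaultdict(make_node)}

def add_sequence(root, poses):
    """Insert one motion sequence (list of pose labels) into the tree."""
    node = root
    for pose in poses:
        node = node["children"][pose]
        node["count"] += 1

sequences = [
    ["stand", "raise_arm", "wave", "lower_arm"],   # hypothetical pose labels
    ["stand", "raise_arm", "point"],
    ["stand", "crouch", "stand"],
]

root = make_node()
for seq in sequences:
    add_sequence(root, seq)

def dump(node, label="<start>", depth=0):
    """Print the aggregated pattern tree with per-branch counts."""
    print("  " * depth + f"{label} ({node['count']})")
    for child_label, child in node["children"].items():
        dump(child, child_label, depth + 1)

dump(root)
```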
Peterson, Elena S; McCue, Lee Ann; Schrimpe-Rutledge, Alexandra C; Jensen, Jeffrey L; Walker, Hyunjoo; Kobold, Markus A; Webb, Samantha R; Payne, Samuel H; Ansong, Charles; Adkins, Joshua N; Cannon, William R; Webb-Robertson, Bobbie-Jo M
2012-04-05
The procedural aspects of genome sequencing and assembly have become relatively inexpensive, yet the full, accurate structural annotation of these genomes remains a challenge. Next-generation sequencing transcriptomics (RNA-Seq), global microarrays, and tandem mass spectrometry (MS/MS)-based proteomics have demonstrated immense value to genome curators as individual sources of information; however, integrating these data types to validate and improve structural annotation remains a major challenge. Current visual and statistical analytic tools are focused on a single data type, or existing software tools are retrofitted to analyze new data forms. We present Visual Exploration and Statistics to Promote Annotation (VESPA), a new interactive visual analysis software tool focused on assisting scientists with the annotation of prokaryotic genomes through the integration of proteomics and transcriptomics data with current genome location coordinates. VESPA is a desktop Java™ application that integrates high-throughput proteomics data (peptide-centric) and transcriptomics (probe or RNA-Seq) data into a genomic context, all of which can be visualized at three levels of genomic resolution. Data is interrogated via searches linked to the genome visualizations to find regions with high likelihood of mis-annotation. Search results are linked to exports for further validation outside of VESPA, or potential coding regions can be analyzed concurrently with the software through interaction with BLAST. We demonstrate VESPA on two use cases (Yersinia pestis Pestoides F and Synechococcus sp. PCC 7002) to show the rapid manner in which mis-annotations can be found and explored using either proteomics data alone or in combination with transcriptomic data. VESPA is an interactive visual analytics tool that integrates high-throughput data into a genomic context to facilitate the discovery of structural mis-annotations in prokaryotic genomes. Data is evaluated via visual analysis across multiple levels of genomic resolution, linked searches and interaction with existing bioinformatics tools. We highlight the novel functionality of VESPA and core programming requirements for visualization of these large heterogeneous datasets for a client-side application. The software is freely available at https://www.biopilot.org/docs/Software/Vespa.php.
NASA Astrophysics Data System (ADS)
Ducasse, J.; Macé, M.; Jouffrais, C.
2015-08-01
Visual maps must be transcribed into (interactive) raised-line maps to be accessible for visually impaired people. However, these tactile maps suffer from several shortcomings: they are long and expensive to produce, they cannot display a large amount of information, and they are not dynamically modifiable. A number of methods have been developed to automate the production of raised-line maps, but there is not yet any tactile map editor on the market. Tangible interactions proved to be an efficient way to help a visually impaired user manipulate spatial representations. Contrary to raised-line maps, tangible maps can be autonomously constructed and edited. In this paper, we present the scenarios and the main expected contributions of the AccessiMap project, which is based on the availability of many sources of open spatial data: 1/ facilitating the production of interactive tactile maps with the development of an open-source web-based editor; 2/ investigating the use of tangible interfaces for the autonomous construction and exploration of a map by a visually impaired user.
Cross-modal interaction between visual and olfactory learning in Apis cerana.
Zhang, Li-Zhen; Zhang, Shao-Wu; Wang, Zi-Long; Yan, Wei-Yu; Zeng, Zhi-Jiang
2014-10-01
The power of the small honeybee brain carrying out behavioral and cognitive tasks has been shown repeatedly to be highly impressive. The present study investigates, for the first time, the cross-modal interaction between visual and olfactory learning in Apis cerana. To explore the role and molecular mechanisms of cross-modal learning in A. cerana, the honeybees were trained and tested in a modified Y-maze with seven visual and five olfactory stimuli, where a robust visual threshold for black/white gratings (period of 2.8°-3.8°) and a relative olfactory threshold (concentration of 50-25%) were obtained. Meanwhile, the expression levels of five genes (AcCREB, Acdop1, Acdop2, Acdop3, Actyr1) related to learning and memory were analyzed under different training conditions by real-time RT-PCR. The experimental results indicate that A. cerana could exhibit cross-modal interactions between visual and olfactory learning by reducing the threshold level of the conditioning stimuli, and that these genes may play important roles in the learning process of honeybees.
Graph Visualization for RDF Graphs with SPARQL-EndPoints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sukumar, Sreenivas R; Bond, Nathaniel
2014-07-11
RDF graphs are hard to visualize as triples. This software module is a web interface that connects to a SPARQL endpoint and retrieves graph data that the user can explore interactively and seamlessly. The software, written in Python and JavaScript, has been tested to work on screens ranging from smartphones to large displays such as EVEREST.
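As a hedged sketch of the back-end call such a web interface makes, the snippet below queries a SPARQL endpoint over the standard HTTP protocol and flattens the JSON results into edges a front end could draw; the endpoint URL is a placeholder and the query is deliberately generic.

```python
# Minimal sketch: pulling a small set of triples from a SPARQL endpoint
# using the standard SPARQL Protocol and JSON results format.
import requests

ENDPOINT = "https://example.org/sparql"    # hypothetical SPARQL endpoint

query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 100"

resp = requests.get(
    ENDPOINT,
    params={"query": query},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

# Convert the SPARQL JSON results into (subject, predicate, object) tuples
# that a front end could lay out as nodes and edges.
edges = [
    (b["s"]["value"], b["p"]["value"], b["o"]["value"])
    for b in resp.json()["results"]["bindings"]
]
for s, p, o in edges[:5]:
    print(s, "--", p, "-->", o)
```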
ERIC Educational Resources Information Center
Astle, Duncan E.; Scerif, Gaia
2011-01-01
An ever increasing amount of research in the fields of developmental psychology and adult cognitive neuroscience explores attentional control as a driver of visual short-term and working memory capacity limits ("VSTM" and "VWM", respectively). However, these literatures have thus far been disparate: they use different measures or different labels,…
NASA Astrophysics Data System (ADS)
Demir, I.; Krajewski, W. F.
2013-12-01
As geoscientists are confronted with increasingly massive datasets from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate the understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize and share large data sets with general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data, and modify the parameters to create custom views of the data to gain insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component to build comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates information visualization and communication tools developed within the light of these challenges.
Typograph: Multiscale Spatial Exploration of Text Documents
DOE Office of Scientific and Technical Information (OSTI.GOV)
Endert, Alexander; Burtner, Edwin R.; Cramer, Nicholas O.
2013-12-01
Visualizing large document collections using a spatial layout of terms can enable quick overviews of information. However, these metaphors (e.g., word clouds, tag clouds, etc.) often lack interactivity to explore the information, and the location and rendering of the terms are often not based on mathematical models that maintain relative distances from other information based on similarity metrics. Further, transitioning between levels of detail (i.e., from terms to full documents) can be challenging. In this paper, we present Typograph, a multi-scale spatial exploration visualization for large document collections. Based on the term-based visualization methods, Typograph enables multiple levels of detail (terms, phrases, snippets, and full documents) within the single spatialization. Further, the information is placed based on its relative similarity to other information to create the “near = similar” geographic metaphor. This paper discusses the design principles and functionality of Typograph and presents a use case analyzing Wikipedia to demonstrate usage.
Brain Modulyzer: Interactive Visual Analysis of Functional Brain Connectivity
Murugesan, Sugeerth; Bouchard, Kristopher; Brown, Jesse A.; ...
2016-05-09
Here, we present Brain Modulyzer, an interactive visual exploration tool for functional magnetic resonance imaging (fMRI) brain scans, aimed at analyzing the correlation between different brain regions when resting or when performing mental tasks. Brain Modulyzer combines multiple coordinated views—such as heat maps, node link diagrams, and anatomical views—using brushing and linking to provide an anatomical context for brain connectivity data. Integrating methods from graph theory and analysis, e.g., community detection and derived graph measures, makes it possible to explore the modular and hierarchical organization of functional brain networks. Providing immediate feedback by displaying analysis results instantaneously while changing parameters gives neuroscientists a powerful means to comprehend complex brain structure more effectively and efficiently and supports forming hypotheses that can then be validated via statistical analysis. In order to demonstrate the utility of our tool, we also present two case studies—exploring progressive supranuclear palsy, as well as memory encoding and retrieval.
Gnadt, William; Grossberg, Stephen
2008-06-01
How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded. Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states and learns more efficient sequences to rewarded goals as exploration proceeds.
3DProIN: Protein-Protein Interaction Networks and Structure Visualization.
Li, Hui; Liu, Chunmei
2014-06-14
3DProIN is a computational tool to visualize protein-protein interaction networks in both two-dimensional (2D) and three-dimensional (3D) views. It models protein-protein interactions in a graph and explores the biologically relevant features of the tertiary structures of each protein in the network. Properties such as color, shape and name of each node (protein) of the network can be edited in either 2D or 3D views. 3DProIN is implemented using 3D Java and C programming languages. The internet crawl technique is also used to parse dynamically grasped protein interactions from the Protein Data Bank (PDB). It is a Java applet component that is embedded in the web page, and it can be used on different platforms, including Linux, Mac and Windows, using web browsers such as Firefox, Internet Explorer, Chrome and Safari. It was also converted into a Mac app and submitted to the App Store as a free app. Mac users can also download the app from our website. 3DProIN is available for academic research at http://bicompute.appspot.com.
NASA Astrophysics Data System (ADS)
Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris
2015-04-01
Many geoscience applications can benefit from testing many combinations of input parameters for geochemical simulation models. It is, however, a challenge to screen the input and output data from the model to identify the significant relationships between input parameters and output variables. To address this problem, we propose a Visual Analytics approach that has been developed in an ongoing collaboration between computer science and geoscience researchers. Our Visual Analytics approach uses visualization methods of hierarchical horizontal axis, multi-factor stacked bar charts and interactive semi-automated filtering for input and output data together with automatic sensitivity analysis. This guides the users towards significant relationships. We implement our approach as an interactive data exploration tool. It is designed with flexibility in mind, so that a diverse set of tasks such as inverse modeling, sensitivity analysis and model parameter refinement can be supported. Here we demonstrate the capabilities of our approach by two examples for gas storage applications. For the first example, our Visual Analytics approach enabled the analyst to observe how the element concentrations change around previously established baselines in response to thousands of different combinations of mineral phases. This supported combinatorial inverse modeling for interpreting observations about the chemical composition of the formation fluids at the Ketzin pilot site for CO2 storage. The results indicate that, within the experimental error range, the formation fluid cannot be considered at local thermodynamic equilibrium with the mineral assemblage of the reservoir rock. This is a valuable insight from the predictive geochemical modeling for the Ketzin site. For the second example, our approach supports sensitivity analysis for a reaction involving the reductive dissolution of pyrite with formation of pyrrhotite in the presence of gaseous hydrogen. We determine that this reaction is thermodynamically favorable under a broad range of conditions. This includes low temperatures and the absence of microbial catalysts. Our approach has potential for use in other applications that involve exploration of relationships in geochemical simulation model data.
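A minimal sketch of the screening idea (sample many parameter combinations, run the model, rank inputs by a simple sensitivity score), with a hypothetical stand-in for the geochemical simulator and a plain rank correlation instead of the authors' actual sensitivity analysis:

```python
# Rough sketch: parameter sweep plus a simple sensitivity ranking.
# `run_model` is a hypothetical stand-in for a geochemical simulator.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
param_names = ["temperature", "pH", "pyrite_fraction", "H2_pressure"]

def run_model(params):
    # Hypothetical surrogate returning one output variable
    # (e.g. a dissolved-element concentration).
    _t, ph, pyr, h2 = params
    return 0.8 * pyr + 0.3 * h2 - 0.1 * ph + rng.normal(0, 0.05)

samples = rng.uniform(0.0, 1.0, size=(2000, len(param_names)))
outputs = np.array([run_model(row) for row in samples])

# Rank parameters by absolute rank correlation with the output.
sensitivity = {
    name: abs(spearmanr(samples[:, i], outputs)[0])
    for i, name in enumerate(param_names)
}
for name, score in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
    print(f"{name:16s} {score:.2f}")
```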
Harnessing the web information ecosystem with wiki-based visualization dashboards.
McKeon, Matt
2009-01-01
We describe the design and deployment of Dashiki, a public website where users may collaboratively build visualization dashboards through a combination of a wiki-like syntax and interactive editors. Our goals are to extend existing research on social data analysis into presentation and organization of data from multiple sources, explore new metaphors for these activities, and participate more fully in the web's information ecology by providing tighter integration with real-time data. To support these goals, our design includes novel and low-barrier mechanisms for editing and layout of dashboard pages and visualizations, connection to data sources, and coordinating interaction between visualizations. In addition to describing these technologies, we provide a preliminary report on the public launch of a prototype based on this design, including a description of the activities of our users derived from observation and interviews.
Progressive Visual Analytics: User-Driven Visual Exploration of In-Progress Analytics.
Stolper, Charles D; Perer, Adam; Gotz, David
2014-12-01
As datasets grow and analytic algorithms become more complex, the typical workflow of analysts launching an analytic, waiting for it to complete, inspecting the results, and then re-launching the computation with adjusted parameters is not realistic for many real-world tasks. This paper presents an alternative workflow, progressive visual analytics, which enables an analyst to inspect partial results of an algorithm as they become available and interact with the algorithm to prioritize subspaces of interest. Progressive visual analytics depends on adapting analytical algorithms to produce meaningful partial results and enable analyst intervention without sacrificing computational speed. The paradigm also depends on adapting information visualization techniques to incorporate the constantly refining results without overwhelming analysts and provide interactions to support an analyst directing the analytic. The contributions of this paper include: a description of the progressive visual analytics paradigm; design goals for both the algorithms and visualizations in progressive visual analytics systems; an example progressive visual analytics system (Progressive Insights) for analyzing common patterns in a collection of event sequences; and an evaluation of Progressive Insights and the progressive visual analytics paradigm by clinical researchers analyzing electronic medical records.
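The progressive pattern can be sketched as an analytic that yields meaningful partial results after every chunk of data, so the front end can render them and the analyst can steer or stop early. The mini-batch k-means below is only a stand-in analytic, not the Progressive Insights system:

```python
# Minimal sketch of a progressive analytic implemented as a generator that
# yields partial results after each chunk of data.
import numpy as np

def progressive_kmeans(data, k=3, batch_size=256, rng=None):
    rng = rng or np.random.default_rng(0)
    centroids = data[rng.choice(len(data), k, replace=False)]
    counts = np.zeros(k)
    for start in range(0, len(data), batch_size):
        batch = data[start:start + batch_size]
        # Assign batch points to the nearest centroid and update incrementally.
        nearest = np.argmin(
            np.linalg.norm(batch[:, None, :] - centroids[None, :, :], axis=2),
            axis=1,
        )
        for j in range(k):
            pts = batch[nearest == j]
            if len(pts):
                counts[j] += len(pts)
                centroids[j] += (pts.mean(axis=0) - centroids[j]) * len(pts) / counts[j]
        yield start + len(batch), centroids.copy()   # partial result

data = np.random.default_rng(1).normal(size=(2_000, 2))
for seen, partial_centroids in progressive_kmeans(data):
    # In a real system this is where the visualization would refresh and the
    # analyst could re-prioritize or stop the computation early.
    print(f"after {seen:5d} points:", np.round(partial_centroids, 2).tolist())
```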
PIVOT: platform for interactive analysis and visualization of transcriptomics data.
Zhu, Qin; Fisher, Stephen A; Dueck, Hannah; Middleton, Sarah; Khaladkar, Mugdha; Kim, Junhyong
2018-01-05
Many R packages have been developed for transcriptome analysis but their use often requires familiarity with R and integrating results of different packages requires scripts to wrangle the datatypes. Furthermore, exploratory data analyses often generate multiple derived datasets such as data subsets or data transformations, which can be difficult to track. Here we present PIVOT, an R-based platform that wraps open source transcriptome analysis packages with a uniform user interface and graphical data management that allows non-programmers to interactively explore transcriptomics data. PIVOT supports more than 40 popular open source packages for transcriptome analysis and provides an extensive set of tools for statistical data manipulations. A graph-based visual interface is used to represent the links between derived datasets, allowing easy tracking of data versions. PIVOT further supports automatic report generation, publication-quality plots, and program/data state saving, such that all analysis can be saved, shared and reproduced. PIVOT will allow researchers with broad background to easily access sophisticated transcriptome analysis tools and interactively explore transcriptome datasets.
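PIVOT itself is an R tool and its internals are not given here; as a language-agnostic sketch of the graph-based bookkeeping it describes, the snippet below records derived datasets and the operations that produced them in a small directed graph, so any result can be traced back to the raw input (dataset names and operations are invented).

```python
# Hedged sketch: tracking derived datasets as nodes in a directed graph,
# with the operation that produced each one stored on the edge.
import networkx as nx

lineage = nx.DiGraph()
lineage.add_node("raw_counts", rows=25_000, cols=96)

def derive(graph, parent, child, operation, **attrs):
    """Register a derived dataset and remember how it was produced."""
    graph.add_node(child, **attrs)
    graph.add_edge(parent, child, operation=operation)

derive(lineage, "raw_counts", "filtered", "remove genes with <10 total counts")
derive(lineage, "filtered", "normalized", "size-factor normalization")
derive(lineage, "normalized", "pca_scores", "PCA, 10 components")
derive(lineage, "filtered", "subset_neurons", "subset: neuronal samples only")

# Trace any dataset back to the raw input for reporting and reproducibility.
for step in nx.shortest_path(lineage, "raw_counts", "pca_scores"):
    print(step)
```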
Brughmans, Tom; de Waal, Maaike S; Hofman, Corinne L; Brandes, Ulrik
2018-01-01
This paper presents a study of the visual properties of natural and Amerindian cultural landscapes in late pre-colonial East-Guadeloupe and of how these visual properties affected social interactions. Through a review of descriptive and formal visibility studies in Caribbean archaeology, it reveals that the ability of visual properties to affect past human behaviour is frequently evoked but the more complex of these hypotheses are rarely studied formally. To explore such complex hypotheses, the current study applies a range of techniques: total viewsheds, cumulative viewsheds, visual neighbourhood configurations and visibility networks. Experiments were performed to explore the control of seascapes, the functioning of hypothetical smoke signalling networks, the correlation of these visual properties with stylistic similarities of material culture found at sites and the change of visual properties over time. The results of these experiments suggest that only few sites in Eastern Guadeloupe are located in areas that are particularly suitable to visually control possible sea routes for short- and long-distance exchange; that visual control over sea areas was not a factor of importance for the existence of micro-style areas; that during the early phase of the Late Ceramic Age networks per landmass are connected and dense and that they incorporate all sites, a structure that would allow hypothetical smoke signalling networks; and that the visual properties of locations of the late sites Morne Souffleur and Morne Cybèle-1 were not ideal for defensive purposes. These results led us to propose a multi-scalar hypothesis for how lines of sight between settlements in the Lesser Antilles could have structured past human behaviour: short-distance visibility networks represent the structuring of navigation and communication within landmasses, whereas the landmasses themselves served as focal points for regional navigation and interaction. We conclude by emphasising that since our archaeological theories about visual properties usually take a multi-scalar landscape perspective, there is a need for this perspective to be reflected in our formal visibility methods as is made possible by the methods used in this paper.
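The visibility analyses mentioned above all rest on a pairwise line-of-sight test over an elevation model; a minimal, generic version of that test (with synthetic elevations and site coordinates, not the study's data or GIS toolchain) is sketched below.

```python
# Minimal sketch of a pairwise line-of-sight test on a gridded elevation
# model, the building block behind viewsheds and intervisibility networks.
import numpy as np

def line_of_sight(dem, a, b, observer_height=1.6, n_samples=200):
    """True if cell b is visible from cell a on elevation grid `dem`."""
    (r0, c0), (r1, c1) = a, b
    t = np.linspace(0.0, 1.0, n_samples)
    rows = np.round(r0 + t * (r1 - r0)).astype(int)
    cols = np.round(c0 + t * (c1 - c0)).astype(int)
    terrain = dem[rows, cols]
    # Elevation of the straight sight line from observer eye to target cell.
    start = dem[r0, c0] + observer_height
    sight = start + t * (dem[r1, c1] - start)
    # Visible if the terrain never rises above the sight line in between.
    return bool(np.all(terrain[1:-1] <= sight[1:-1]))

rng = np.random.default_rng(7)
dem = rng.uniform(0, 300, size=(100, 100))          # synthetic elevations (m)
sites = {"A": (10, 10), "B": (80, 75), "C": (50, 20)}  # hypothetical sites

# Build a simple intervisibility network among the site locations.
edges = [(s, t) for s in sites for t in sites
         if s < t and line_of_sight(dem, sites[s], sites[t])]
print(edges)
```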
López, David; Oehlberg, Lora; Doger, Candemir; Isenberg, Tobias
2016-05-01
We discuss touch-based navigation of 3D visualizations in a combined monoscopic and stereoscopic viewing environment. We identify a set of interaction modes, and a workflow that helps users transition between these modes to improve their interaction experience. In our discussion we analyze, in particular, the control-display space mapping between the different reference frames of the stereoscopic and monoscopic displays. We show how this mapping supports interactive data exploration, but may also lead to conflicts between the stereoscopic and monoscopic views due to users' movement in space; we resolve these problems through synchronization. To support our discussion, we present results from an exploratory observational evaluation with domain experts in fluid mechanics and structural biology. These experts explored domain-specific datasets using variations of a system that embodies the interaction modes and workflows; we report on their interactions and qualitative feedback on the system and its workflow.
MetExploreViz: web component for interactive metabolic network visualization.
Chazalviel, Maxime; Frainay, Clément; Poupin, Nathalie; Vinson, Florence; Merlet, Benjamin; Gloaguen, Yoann; Cottret, Ludovic; Jourdan, Fabien
2017-09-15
MetExploreViz is an open source web component that can be easily embedded in any web site. It provides features dedicated to the visualization of metabolic networks and pathways and thus offers a flexible solution to analyze omics data in a biochemical context. Documentation and a link to the Git code repository (GPL 3.0 license) are available at this URL: http://metexplore.toulouse.inra.fr/metexploreViz/doc/. Tutorial is available at this URL. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Interactive visualization of public health indicators to support policymaking: An exploratory study
Zakkar, Moutasem; Sedig, Kamran
2017-01-01
Purpose: The purpose of this study is to examine the use of interactive visualizations to represent data/information related to social determinants of health and public health indicators, and to investigate the benefits of such visualizations for health policymaking. Methods: The study developed a prototype for an online interactive visualization tool that represents the social determinants of health. The study participants explored and used the tool. The tool was evaluated using the informal user experience evaluation method, which involves prospective users using and playing with the tool and providing feedback through interviews. Results: Using visualizations to represent and interact with health indicators has advantages over traditional representation techniques that do not allow users to interact with the information. Communicating healthcare indicators to policymakers is a complex task because of the complexity of the indicators, diversity of audiences, and different audience needs. This complexity can lead to information misinterpretation, which occurs when users of the health data ignore or do not know why, where, and how the data has been produced, or where and how it can be used. Conclusions: Public health policymaking is a complex process, and data is only one element among others needed in this complex process. Researchers and healthcare organizations should conduct a strategic evaluation to assess the usability of interactive visualizations and decision support tools before investing in these tools. Such evaluation should take into consideration the cost, ease of use, learnability, and efficiency of those tools, and the factors that influence policymaking. PMID:29026455
TreePlus: interactive exploration of networks with enhanced tree layouts.
Lee, Bongshin; Parr, Cynthia S; Plaisant, Catherine; Bederson, Benjamin B; Veksler, Vladislav D; Gray, Wayne D; Kotfila, Christopher
2006-01-01
Despite extensive research, it is still difficult to produce effective interactive layouts for large graphs. Dense layout and occlusion make food webs, ontologies, and social networks difficult to understand and interact with. We propose a new interactive Visual Analytics component called TreePlus that is based on a tree-style layout. TreePlus reveals the missing graph structure with visualization and interaction while maintaining good readability. To support exploration of the local structure of the graph and gathering of information from the extensive reading of labels, we use a guiding metaphor of "Plant a seed and watch it grow." It allows users to start with a node and expand the graph as needed, which complements the classic overview techniques that can be effective at (but often limited to) revealing clusters. We describe our design goals, describe the interface, and report on a controlled user study with 28 participants comparing TreePlus with a traditional graph interface for six tasks. In general, the advantage of TreePlus over the traditional interface increased as the density of the displayed data increased. Participants also reported higher levels of confidence in their answers with TreePlus and most of them preferred TreePlus.
Integrative interactive visualization of crystal structure, band structure, and Brillouin zone
NASA Astrophysics Data System (ADS)
Hanson, Robert; Hinke, Ben; van Koevering, Matthew; Oses, Corey; Toher, Cormac; Hicks, David; Gossett, Eric; Plata Ramos, Jose; Curtarolo, Stefano; Aflow Collaboration
The AFLOW library is an open-access database for high throughput ab-initio calculations that serves as a resource for the dissemination of computational results in the area of materials science. Our project aims to create an interactive web-based visualization of any structure in the AFLOW database that has associated band structure data, in a way that allows novel simultaneous exploration of the crystal structure, band structure, and Brillouin zone. Interactivity is obtained using two synchronized JSmol implementations, one for the crystal structure and one for the Brillouin zone, along with a D3-based band-structure diagram produced on the fly from data obtained from the AFLOW database. The current website portal (http://aflowlib.mems.duke.edu/users/jmolers/matt/website) allows interactive access and visualization of crystal structure, Brillouin zone and band structure for more than 55,000 inorganic crystal structures. This work was supported by the US Navy Office of Naval Research through a Broad Area Announcement administered by Duke University.
2015-01-01
Background: Though cluster analysis has become a routine analytic task for bioinformatics research, it is still arduous for researchers to assess the quality of a clustering result. To select the best clustering method and its parameters for a dataset, researchers have to run multiple clustering algorithms and compare them. However, such a comparison task with multiple clustering results is cognitively demanding and laborious. Results: In this paper, we present XCluSim, a visual analytics tool that enables users to interactively compare multiple clustering results based on the Visual Information Seeking Mantra. We build a taxonomy for categorizing existing techniques of clustering results visualization in terms of the Gestalt principles of grouping. Using the taxonomy, we choose the most appropriate interactive visualizations for presenting individual clustering results from different types of clustering algorithms. The efficacy of XCluSim is shown through case studies with a bioinformatician. Conclusions: Compared to other relevant tools, XCluSim enables users to compare multiple clustering results in a more scalable manner. Moreover, XCluSim supports diverse clustering algorithms and dedicated visualizations and interactions for different types of clustering results, allowing more effective exploration of details on demand. Through case studies with a bioinformatics researcher, we received positive feedback on the functionalities of XCluSim, including its ability to help identify stably clustered items across multiple clustering results. PMID:26328893
ERIC Educational Resources Information Center
Bianchi, June
2008-01-01
The article presents the rationale, methodology, and selected outcomes from "More than a body's work," a collaborative, international, arts educational interactive research project. The project, taking place in both New York and England, explored the ways in which young people construct and "perform" identity through the…
GRIDVIEW: Recent Improvements in Research and Education Software for Exploring Mars Topography
NASA Technical Reports Server (NTRS)
Roark, J. H.; Frey, H. V.
2001-01-01
We have developed an Interactive Data Language (IDL) scientific visualization software tool called GRIDVIEW that can be used in research and education to explore and study the most recent Mars Orbiter Laser Altimeter (MOLA) gridded topography of Mars (http://denali.gsfc.nasa.gov/mola_pub/gridview). Additional information is contained in the original extended abstract.
Hyperspace geography: visualizing fitness landscapes beyond 4D.
Wiles, Janet; Tonkes, Bradley
2006-01-01
Human perception is finely tuned to extract structure about the 4D world of time and space as well as properties such as color and texture. Developing intuitions about spatial structure beyond 4D requires exploiting other perceptual and cognitive abilities. One of the most natural ways to explore complex spaces is for a user to actively navigate through them, using local explorations and global summaries to develop intuitions about structure, and then testing the developing ideas by further exploration. This article provides a brief overview of a technique for visualizing surfaces defined over moderate-dimensional binary spaces, by recursively unfolding them onto a 2D hypergraph. We briefly summarize the uses of a freely available Web-based visualization tool, Hyperspace Graph Paper (HSGP), for exploring fitness landscapes and search algorithms in evolutionary computation. HSGP provides a way for a user to actively explore a landscape, from simple tasks such as mapping the neighborhood structure of different points, to seeing global properties such as the size and distribution of basins of attraction or how different search algorithms interact with landscape structure. It has been most useful for exploring recursive and repetitive landscapes, and its strength is that it allows intuitions to be developed through active navigation by the user, and exploits the visual system's ability to detect pattern and texture. The technique is most effective when applied to continuous functions over Boolean variables using 4 to 16 dimensions.
Visual Culture and Electronic Government: Exploring a New Generation of E-Government
NASA Astrophysics Data System (ADS)
Bekkers, Victor; Moody, Rebecca
E-government is becoming more picture-oriented. What meaning do stakeholders attach to visual events and visualization? Comparative case study research shows that the functional meaning primarily refers to registration, integration, transparency and communication. The political meaning refers to new ways of framing in order to secure specific interests and claims. The institutional meaning is ambiguous: either it improves the position of citizens, or it reinforces the existing bias presented by governments. Hence, we expect that the emergence of a visualized public space, through the omnipresent penetration of (mobile) multimedia technologies, will influence government-citizen interactions.
Spatial Paradigm for Information Retrieval and Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
The SPIRE system consists of software for visual analysis of primarily text based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial proximity display (Galaxies or Themescape) where items (documents and/or themes) visually close to each other are known to have content which is close to each other. Innovative interaction techniques then allow for dynamic visual analysis of large text based information spaces.
SPIRE1.03. Spatial Paradigm for Information Retrieval and Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, K.J.; Bohn, S.; Crow, V.
The SPIRE system consists of software for visual analysis of primarily text based information sources. This technology enables the content analysis of text documents without reading all the documents. It employs several algorithms for text and word proximity analysis. It identifies the key themes within the text documents. From this analysis, it projects the results onto a visual spatial proximity display (Galaxies or Themescape) where items (documents and/or themes) visually close to each other are known to have content which is close to each other. Innovative interaction techniques then allow for dynamic visual analysis of large text based information spaces.
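The two SPIRE records above describe projecting documents onto a spatial display where visual proximity encodes content similarity, without naming the underlying algorithms. As a loose illustration of that idea only, and not SPIRE's actual pipeline, the following Python sketch builds TF-IDF features and reduces them to 2D coordinates so that documents sharing vocabulary land near each other; the example documents are made up.

```python
# Minimal sketch of a Galaxies-style document proximity layout.
# This is NOT the SPIRE implementation; it only illustrates the general idea:
# derive term-based features, then project documents so that content-similar
# documents land near each other in 2D.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "ocean acidification and carbon dioxide data",
    "carbon dioxide levels in the atmosphere and ocean",
    "gene expression clustering in tumor samples",
    "clustering algorithms for gene expression data",
]

# Term-frequency / inverse-document-frequency features approximate "themes".
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

# Project onto 2 latent dimensions; nearby points share vocabulary/themes.
svd = TruncatedSVD(n_components=2, random_state=0)
coords = svd.fit_transform(X)

for doc, (x, y) in zip(docs, coords):
    print(f"({x:+.2f}, {y:+.2f})  {doc[:40]}")
```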
WeightLifter: Visual Weight Space Exploration for Multi-Criteria Decision Making.
Pajer, Stephan; Streit, Marc; Torsney-Weir, Thomas; Spechtenhauser, Florian; Möller, Torsten; Piringer, Harald
2017-01-01
A common strategy in Multi-Criteria Decision Making (MCDM) is to rank alternative solutions by weighted summary scores. Weights, however, are often abstract to the decision maker and can only be set by vague intuition. While previous work supports a point-wise exploration of weight spaces, we argue that MCDM can benefit from a regional and global visual analysis of weight spaces. Our main contribution is WeightLifter, a novel interactive visualization technique for weight-based MCDM that facilitates the exploration of weight spaces with up to ten criteria. Our technique enables users to better understand the sensitivity of a decision to changes of weights, to efficiently localize weight regions where a given solution ranks high, and to filter out solutions which do not rank high enough for any plausible combination of weights. We provide a comprehensive requirement analysis for weight-based MCDM and describe an interactive workflow that meets these requirements. For evaluation, we describe a usage scenario of WeightLifter in automotive engineering and report qualitative feedback from users of a deployed version as well as preliminary feedback from decision makers in multiple domains. This feedback confirms that WeightLifter increases both the efficiency of weight-based MCDM and the awareness of uncertainty in the ultimate decisions.
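The WeightLifter abstract centers on ranking alternatives by weighted summary scores and on understanding how sensitive a ranking is to the weights. The sketch below is a minimal, hypothetical illustration of that setup (synthetic scores, Dirichlet-sampled weights), not WeightLifter's implementation: it ranks alternatives by weighted sum and estimates how often each one comes out on top across plausible weight vectors.

```python
# Toy sketch of weight-based multi-criteria ranking, in the spirit of the
# weighted summary scores described above (not the WeightLifter implementation).
import numpy as np

rng = np.random.default_rng(0)

# 5 alternative solutions scored on 3 criteria (higher is better), hypothetical data.
scores = np.array([
    [0.9, 0.2, 0.5],
    [0.6, 0.8, 0.4],
    [0.3, 0.9, 0.9],
    [0.7, 0.5, 0.6],
    [0.5, 0.4, 0.8],
])

def rank(weights):
    """Rank alternatives by weighted sum (rank 1 = best)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise weights to sum to 1
    total = scores @ w
    order = np.argsort(-total)           # descending by total score
    ranks = np.empty(len(total), dtype=int)
    ranks[order] = np.arange(1, len(total) + 1)
    return ranks

# Sensitivity check: sample many plausible weight vectors and record how often
# each alternative ranks first -- a crude, regional view of the weight space.
samples = rng.dirichlet(alpha=np.ones(3), size=5000)
first_counts = np.zeros(len(scores), dtype=int)
for w in samples:
    first_counts[np.argmin(rank(w))] += 1

print("P(alternative ranks first):", first_counts / len(samples))
```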
Schofield, E C; Carver, T; Achuthan, P; Freire-Pritchett, P; Spivakov, M; Todd, J A; Burren, O S
2016-08-15
Promoter capture Hi-C (PCHi-C) allows the genome-wide interrogation of physical interactions between distal DNA regulatory elements and gene promoters in multiple tissue contexts. Visual integration of the resultant chromosome interaction maps with other sources of genomic annotations can provide insight into underlying regulatory mechanisms. We have developed Capture HiC Plotter (CHiCP), a web-based tool that allows interactive exploration of PCHi-C interaction maps and integration with both public and user-defined genomic datasets. CHiCP is freely accessible from www.chicp.org and supports most major HTML5-compliant web browsers. Full source code and installation instructions are available from http://github.com/D-I-L/django-chicp. Contact: ob219@cam.ac.uk. © The Author 2016. Published by Oxford University Press. All rights reserved.
Gao, Peng; Liu, Peng; Su, Hongsen; Qiao, Liang
2015-04-01
Integrating a visualization toolkit with the interaction, bidirectional communication, and graphics rendering capabilities provided by HTML5, we explored and experimented with the feasibility of remote medical image reconstruction and interaction purely on the Web. We propose a server-centric method that does not require downloading large medical datasets to the client, thereby avoiding concerns about network transmission load and the three-dimensional (3D) rendering capability of client hardware. The method integrates remote medical image reconstruction and interaction seamlessly into the Web and is applicable to lower-end computers and mobile devices. Finally, we tested this method over the Internet and achieved real-time performance. This Web-based 3D reconstruction and interaction method, which works across Internet terminals and performance-limited devices, may be useful for remote medical assistance.
A generalized 3D framework for visualization of planetary data.
NASA Astrophysics Data System (ADS)
Larsen, K. W.; De Wolfe, A. W.; Putnam, B.; Lindholm, D. M.; Nguyen, D.
2016-12-01
As the volume and variety of data returned from planetary exploration missions continues to expand, new tools and technologies are needed to explore the data and answer questions about the formation and evolution of the solar system. We have developed a 3D visualization framework that enables the exploration of planetary data from multiple instruments on the MAVEN mission to Mars. This framework not only provides the opportunity for cross-instrument visualization, but is extended to include model data as well, helping to bridge the gap between theory and observation. This is made possible through the use of new web technologies, namely LATIS, a data server that can stream data and spacecraft ephemerides to a web browser, and Cesium, a JavaScript library for 3D globes. The common visualization framework we have developed is flexible and modular so that it can easily be adapted for additional missions. In addition to demonstrating the combined data and modeling capabilities of the system for the MAVEN mission, we will display the first ever near real-time 'QuickLook', interactive, 4D data visualization for the Magnetospheric Multiscale Mission (MMS). In this application, data from all four spacecraft can be manipulated and visualized as soon as the data is ingested into the MMS Science Data Center, less than one day after collection.
compuGUT: An in silico platform for simulating intestinal fermentation
NASA Astrophysics Data System (ADS)
Moorthy, Arun S.; Eberl, Hermann J.
The microbiota inhabiting the colon and its effect on health is a topic of significant interest. In this paper, we describe the compuGUT - a simulation tool developed to assist in exploring interactions between intestinal microbiota and their environment. The primary numerical machinery is implemented in C, and the accessory scripts for loading and visualization are prepared in bash (LINUX) and R. SUNDIALS libraries are employed for numerical integration, and googleVis API for interactive visualization. Supplementary material includes a concise description of the underlying mathematical model, and detailed characterization of numerical errors and computing times associated with implementation parameters.
MTVis: tree exploration using a multitouch interface
NASA Astrophysics Data System (ADS)
Andrews, David; Teoh, Soon Tee
2010-01-01
We present MTVis, a multi-touch interactive tree visualization system. The multi-touch interface display hardware is built using the LED-LP technology, and the tree layout is based on RINGS, but enhanced with multitouch interactions. We describe the features of the system, and how the multi-touch interface enhances the user's experience in exploring the tree data structure. In particular, the multi-touch interface allows the user to simultaneously control two child nodes of the root, and rotate them so that some nodes are magnified, while preserving the layout of the tree. We also describe the other meaningful touch screen gestures the users can use to intuitively explore the tree.
ERIC Educational Resources Information Center
Torrens, Paul M.; Griffin, William A.
2013-01-01
The authors describe an observational and analytic methodology for recording and interpreting dynamic microprocesses that occur during social interaction, making use of space--time data collection techniques, spatial-statistical analysis, and visualization. The scheme has three investigative foci: Structure, Activity Composition, and Clustering.…
Learning Protein Structure with Peers in an AR-Enhanced Learning Environment
ERIC Educational Resources Information Center
Chen, Yu-Chien
2013-01-01
Augmented reality (AR) is an interactive system that allows users to interact with virtual objects and the real world at the same time. The purpose of this dissertation was to explore how AR, as a new visualization tool, that can demonstrate spatial relationships by representing three dimensional objects and animations, facilitates students to…
A distributed analysis and visualization system for model and observational data
NASA Technical Reports Server (NTRS)
Wilhelmson, Robert B.
1994-01-01
Software was developed with NASA support to aid in the analysis and display of the massive amounts of data generated from satellites, observational field programs, and from model simulations. This software was developed in the context of the PATHFINDER (Probing ATmospHeric Flows in an Interactive and Distributed EnviRonment) Project. The overall aim of this project is to create a flexible, modular, and distributed environment for data handling, modeling simulations, data analysis, and visualization of atmospheric and fluid flows. Software completed with NASA support includes GEMPAK analysis, data handling, and display modules for which collaborators at NASA had primary responsibility, and prototype software modules for three-dimensional interactive and distributed control and display as well as data handling, for which NCSA was responsible. Overall process control was handled through a scientific and visualization application builder from Silicon Graphics known as the Iris Explorer. In addition, the GEMPAK related work (GEMVIS) was also ported to the Advanced Visualization System (AVS) application builder. Many modules were developed to enhance those already available in Iris Explorer including HDF file support, improved visualization and display, simple lattice math, and the handling of metadata through development of a new grid datatype. Complete source and runtime binaries along with on-line documentation are available via the World Wide Web at: http://redrock.ncsa.uiuc.edu/PATHFINDER/pathre12/top/top.html.
Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega
2015-01-01
This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they are emitted from scene points. Each scene point is rendered individually resulting in more realistic and accurate 3D visualization compared to other 3D displaying technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gesture tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup was also evaluated in a user study with test subjects. The results of the study revealed high user preference for free hand interaction with light field display as well as relatively low cognitive demand of this technique. Further, our results also revealed some limitations and adjustments of the proposed setup to be addressed in future work. PMID:25875189
CrosstalkNet: A Visualization Tool for Differential Co-expression Networks and Communities.
Manem, Venkata; Adam, George Alexandru; Gruosso, Tina; Gigoux, Mathieu; Bertos, Nicholas; Park, Morag; Haibe-Kains, Benjamin
2018-04-15
Variations in physiological conditions can rewire molecular interactions between biological compartments, which can yield novel insights into gain or loss of interactions specific to perturbations of interest. Networks are a promising tool to elucidate intercellular interactions, yet exploration of these large-scale networks remains a challenge due to their high dimensionality. To retrieve and mine interactions, we developed CrosstalkNet, a user-friendly, web-based network visualization tool that provides a statistical framework to infer condition-specific interactions coupled with a community detection algorithm for bipartite graphs to identify significantly dense subnetworks. As a case study, we used CrosstalkNet to mine a set of 54 and 22 gene-expression profiles from breast tumor and normal samples, respectively, with epithelial and stromal compartments extracted via laser microdissection. We show how CrosstalkNet can be used to explore large-scale co-expression networks and to obtain insights into the biological processes that govern cross-talk between different tumor compartments. Significance: This web application enables researchers to mine complex networks and to decipher novel biological processes in tumor epithelial-stroma cross-talk as well as in other studies of intercompartmental interactions. Cancer Res; 78(8); 2140-3. ©2018 AACR.
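CrosstalkNet's statistical framework for inferring condition-specific interactions is not spelled out in the abstract; the sketch below only illustrates the basic notion of a differential co-expression edge, flagging gene pairs whose correlation changes markedly between tumor and normal profiles. The data and the cutoff are synthetic placeholders.

```python
# Illustrative sketch of differential co-expression edges between two conditions
# (synthetic data; CrosstalkNet's own statistical framework is more involved).
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_tumor, n_normal = 20, 54, 22           # sample sizes echo the case study
tumor = rng.normal(size=(n_tumor, n_genes))
normal = rng.normal(size=(n_normal, n_genes))

corr_t = np.corrcoef(tumor, rowvar=False)          # gene-gene correlations, tumor
corr_n = np.corrcoef(normal, rowvar=False)         # gene-gene correlations, normal

# An edge is considered "rewired" if the correlation difference exceeds a cutoff.
cutoff = 0.5                                       # hypothetical threshold
diff = np.abs(corr_t - corr_n)
edges = [(i, j, diff[i, j])
         for i in range(n_genes) for j in range(i + 1, n_genes)
         if diff[i, j] > cutoff]

print(f"{len(edges)} candidate differential edges (|delta r| > {cutoff})")
```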
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naqvi, S
2014-06-15
Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and the prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which, therefore, gradually decay away from memory. For example, few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system, requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools are developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics, e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays trajectories of a collection of electrons under an arbitrary space/time-dependent Lorentz force using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics. The simulations serve as virtual experiments that give deeper and longer-lasting understanding of core principles. The student can then make sound judgements in novel situations encountered beyond routine clinical activities.
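As a rough, much-simplified analogue of the teaching code described above, and not the authors' tool, the following toy transports photons through a 1D slab, lets individual interaction channels be switched on or off, and tallies "crossers" versus absorbed and scattered histories; the attenuation coefficients are arbitrary.

```python
# Grossly simplified 1D photon-transport toy illustrating togglable interaction
# channels and particle labeling; it is NOT the simulation tool described above.
import math
import random

def transport(n_photons, thickness, mu, channels_on):
    """mu: attenuation coefficient per interaction type in 1/cm (hypothetical values)."""
    rng = random.Random(42)
    active = {k: v for k, v in mu.items() if channels_on.get(k, True)}
    mu_total = sum(active.values())
    tally = {"crossed": 0, "absorbed": 0, "n_scatters": 0}
    for _ in range(n_photons):
        x = 0.0
        while True:
            x += -math.log(1.0 - rng.random()) / mu_total    # sample free path
            if x >= thickness:
                tally["crossed"] += 1                        # label: "crosser"
                break
            # choose an interaction type with probability proportional to its coefficient
            r, acc, kind = rng.random() * mu_total, 0.0, None
            for k, coeff in active.items():
                acc += coeff
                if r <= acc:
                    kind = k
                    break
            if kind == "photoelectric":                      # absorbing channel
                tally["absorbed"] += 1
                break
            tally["n_scatters"] += 1                         # scattering direction ignored (toy)
    return tally

# Turning the Compton channel off leaves only absorption, visibly changing the tallies.
print(transport(10_000, thickness=5.0,
                mu={"photoelectric": 0.05, "compton": 0.15},
                channels_on={"compton": True}))
```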
Preisig, Basil C; Eggenberger, Noëmi; Zito, Giuseppe; Vanbellingen, Tim; Schumacher, Rahel; Hopfner, Simone; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Müri, René M
2015-03-01
Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues they prompt the cooperative process of turn taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who are restricted in verbal comprehension, adapt their visual exploration strategies. Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present and absent), gaze direction (to the speaker or to the listener), and region of interest (ROI), including hands, face, and body. Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less. Copyright © 2014 Elsevier Ltd. All rights reserved.
A tool for exploring space-time patterns: an animation user research.
Ogao, Patrick J
2006-08-29
Ever since Dr. John Snow (1813-1854) used a case map to identify a water well as the source of a cholera outbreak in London in the 1800s, spatio-temporal maps have become vital tools in a wide range of disease mapping and control initiatives. The increasing use of spatio-temporal maps in these life-threatening sectors warrants that they are accurate, and easy to interpret to enable prompt decision making by health experts. Similar spatio-temporal maps are observed in urban growth and census mapping--all critical aspects of a country's socio-economic development. In this paper, a user test study was carried out to determine the effectiveness of spatio-temporal maps (animation) in exploring geospatial structures encompassing disease, urban and census mapping. Three types of animation were used, namely passive, interactive and inference-based animation, with the key differences between them being the level of interactivity and the complementary domain knowledge that each offers to the user. Passive animation maintains a view-only status. The user has no control over its contents and dynamic variables. Interactive animation provides users with the basic media player controls, navigation and orientation tools. Inference-based animation incorporates these interactive capabilities together with a complementary automated intelligent view that alerts users to interesting patterns, trends or anomalies that may be inherent in the data sets. The test focused on the role of animation's passive and interactive capabilities in exploring space-time patterns by engaging test subjects in a think-aloud evaluation protocol. The test subjects were selected from a geoinformatics (map reading, interpretation and analysis abilities) background. Every test subject used each of the three types of animation, and their performance in each session was assessed. The results show that interactivity in animation is a preferred exploratory tool for identifying, interpreting and providing explanations about observed geospatial phenomena. Also, exploring geospatial data structures using animation is best achieved using provocative interactive tools such as the inference-based animation. The visual methods employed using the three types of animation are all related, and together these patterns confirm the exploratory cognitive structure and processes for visualization tools. The generic types of animation as defined in this paper play a crucial role in facilitating the visualization of geospatial data. These animations can be created and their contents defined based on the user's presentational and exploratory needs. For highly explorative tasks, maintaining a link between the data sets and the animation is crucial to enabling a rich and effective knowledge discovery environment.
A tool for exploring space-time patterns : an animation user research
Ogao, Patrick J
2006-01-01
Background: Ever since Dr. John Snow (1813–1854) used a case map to identify a water well as the source of a cholera outbreak in London in the 1800s, spatio-temporal maps have become vital tools in a wide range of disease mapping and control initiatives. The increasing use of spatio-temporal maps in these life-threatening sectors warrants that they are accurate, and easy to interpret to enable prompt decision making by health experts. Similar spatio-temporal maps are observed in urban growth and census mapping – all critical aspects of a country's socio-economic development. In this paper, a user test study was carried out to determine the effectiveness of spatio-temporal maps (animation) in exploring geospatial structures encompassing disease, urban and census mapping. Results: Three types of animation were used, namely passive, interactive and inference-based animation, with the key differences between them being the level of interactivity and the complementary domain knowledge that each offers to the user. Passive animation maintains a view-only status. The user has no control over its contents and dynamic variables. Interactive animation provides users with the basic media player controls, navigation and orientation tools. Inference-based animation incorporates these interactive capabilities together with a complementary automated intelligent view that alerts users to interesting patterns, trends or anomalies that may be inherent in the data sets. The test focused on the role of animation's passive and interactive capabilities in exploring space-time patterns by engaging test subjects in a think-aloud evaluation protocol. The test subjects were selected from a geoinformatics (map reading, interpretation and analysis abilities) background. Every test subject used each of the three types of animation, and their performance in each session was assessed. The results show that interactivity in animation is a preferred exploratory tool for identifying, interpreting and providing explanations about observed geospatial phenomena. Also, exploring geospatial data structures using animation is best achieved using provocative interactive tools such as the inference-based animation. The visual methods employed using the three types of animation are all related, and together these patterns confirm the exploratory cognitive structure and processes for visualization tools. Conclusion: The generic types of animation as defined in this paper play a crucial role in facilitating the visualization of geospatial data. These animations can be created and their contents defined based on the user's presentational and exploratory needs. For highly explorative tasks, maintaining a link between the data sets and the animation is crucial to enabling a rich and effective knowledge discovery environment. PMID:16938138
imDEV: a graphical user interface to R multivariate analysis tools in Microsoft Excel.
Grapov, Dmitry; Newman, John W
2012-09-01
Interactive modules for Data Exploration and Visualization (imDEV) is a Microsoft Excel spreadsheet embedded application providing an integrated environment for the analysis of omics data through a user-friendly interface. Individual modules enable interactive and dynamic analyses of large data by interfacing R's multivariate statistics and highly customizable visualizations with the spreadsheet environment, aiding robust inferences and generating information-rich data visualizations. This tool provides access to multiple comparisons with false discovery correction, hierarchical clustering, principal and independent component analyses, partial least squares regression and discriminant analysis, through an intuitive interface for creating high-quality two- and three-dimensional visualizations including scatter plot matrices, distribution plots, dendrograms, heat maps, biplots, trellis biplots and correlation networks. Freely available for download at http://sourceforge.net/projects/imdev/. Implemented in R and VBA and supported by Microsoft Excel (2003, 2007 and 2010).
Tax, Chantal M. W.; Chamberland, Maxime; van Stralen, Marijn; Viergever, Max A.; Whittingstall, Kevin; Fortin, David; Descoteaux, Maxime; Leemans, Alexander
2015-01-01
Fiber tractography plays an important role in exploring the architectural organization of fiber trajectories, both in fundamental neuroscience and in clinical applications. With the advent of diffusion MRI (dMRI) approaches that can also model “crossing fibers”, the complexity of the fiber network as reconstructed with tractography has increased tremendously. Many pathways interdigitate and overlap, which hampers an unequivocal 3D visualization of the network and impedes an efficient study of its organization. We propose a novel fiber tractography visualization approach that interactively and selectively adapts the transparency rendering of fiber trajectories as a function of their orientation to enhance the visibility of the spatial context. More specifically, pathways that are oriented (locally or globally) along a user-specified opacity axis can be made more transparent or opaque. This substantially improves the 3D visualization of the fiber network and the exploration of tissue configurations that would otherwise be largely covered by other pathways. We present examples of fiber bundle extraction and neurosurgical planning cases where the added benefit of our new visualization scheme is demonstrated over conventional fiber visualization approaches. PMID:26444010
Tax, Chantal M W; Chamberland, Maxime; van Stralen, Marijn; Viergever, Max A; Whittingstall, Kevin; Fortin, David; Descoteaux, Maxime; Leemans, Alexander
2015-01-01
Fiber tractography plays an important role in exploring the architectural organization of fiber trajectories, both in fundamental neuroscience and in clinical applications. With the advent of diffusion MRI (dMRI) approaches that can also model "crossing fibers", the complexity of the fiber network as reconstructed with tractography has increased tremendously. Many pathways interdigitate and overlap, which hampers an unequivocal 3D visualization of the network and impedes an efficient study of its organization. We propose a novel fiber tractography visualization approach that interactively and selectively adapts the transparency rendering of fiber trajectories as a function of their orientation to enhance the visibility of the spatial context. More specifically, pathways that are oriented (locally or globally) along a user-specified opacity axis can be made more transparent or opaque. This substantially improves the 3D visualization of the fiber network and the exploration of tissue configurations that would otherwise be largely covered by other pathways. We present examples of fiber bundle extraction and neurosurgical planning cases where the added benefit of our new visualization scheme is demonstrated over conventional fiber visualization approaches.
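The two records above describe modulating fiber transparency as a function of orientation relative to a user-specified opacity axis. The sketch below shows one plausible per-streamline weighting based on the mean absolute dot product with that axis; it mirrors the idea, not the authors' exact transfer function.

```python
# Sketch of orientation-dependent streamline opacity: fibers aligned with a
# user-chosen axis are faded out (or emphasised). This mirrors the idea above,
# not the authors' exact transfer function.
import numpy as np

def streamline_opacity(streamline, axis, fade_aligned=True):
    """streamline: (N, 3) array of points; axis: unit 3-vector chosen by the user."""
    segs = np.diff(streamline, axis=0)                       # local segment directions
    segs = segs / np.linalg.norm(segs, axis=1, keepdims=True)
    alignment = np.mean(np.abs(segs @ axis))                 # 1 = parallel, 0 = orthogonal
    return 1.0 - alignment if fade_aligned else alignment

axis = np.array([0.0, 0.0, 1.0])                             # e.g. fade fibers running along z
fiber = np.cumsum(np.random.default_rng(0).normal(size=(50, 3)), axis=0)
print(f"opacity = {streamline_opacity(fiber, axis):.2f}")
```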
How Dynamic Visualization Technology can Support Molecular Reasoning
NASA Astrophysics Data System (ADS)
Levy, Dalit
2013-10-01
This paper reports the results of a study aimed at exploring the advantages of dynamic visualization for the development of better understanding of molecular processes. We designed a technology-enhanced curriculum module in which high school chemistry students conduct virtual experiments with dynamic molecular visualizations of solid, liquid, and gas. They interact with the visualizations and carry out inquiry activities to make and refine connections between observable phenomena and atomic level processes related to phase change. The explanations proposed by 300 pairs of students in response to pre/post-assessment items have been analyzed using a scale for measuring the level of molecular reasoning. Results indicate that from pretest to posttest, students make progress in their level of molecular reasoning and are better able to connect intermolecular forces and phase change in their explanations. The paper presents the results through the lens of improvement patterns and the metaphor of the "ladder of molecular reasoning," and discusses how this adds to our understanding of the benefits of interacting with dynamic molecular visualizations.
Visualization of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Gerald-Yamasaki, Michael; Hultquist, Jeff; Bryson, Steve; Kenwright, David; Lane, David; Walatka, Pamela; Clucas, Jean; Watson, Velvin; Lasinski, T. A. (Technical Monitor)
1995-01-01
Scientific visualization serves the dual purpose of exploration and exposition of the results of numerical simulations of fluid flow. Along with the basic visualization process which transforms source data into images, there are four additional components to a complete visualization system: Source Data Processing, User Interface and Control, Presentation, and Information Management. The requirements imposed by the desired mode of operation (i.e. real-time, interactive, or batch) and the source data have their effect on each of these visualization system components. The special requirements imposed by the wide variety and size of the source data provided by the numerical simulation of fluid flow present an enormous challenge to the visualization system designer. We describe the visualization system components including specific visualization techniques and how the mode of operation and source data requirements affect the construction of computational fluid dynamics visualization systems.
MaGnET: Malaria Genome Exploration Tool.
Sharman, Joanna L; Gerloff, Dietlind L
2013-09-15
The Malaria Genome Exploration Tool (MaGnET) is a software tool enabling intuitive 'exploration-style' visualization of functional genomics data relating to the malaria parasite, Plasmodium falciparum. MaGnET provides innovative integrated graphic displays for different datasets, including genomic location of genes, mRNA expression data, protein-protein interactions and more. Any selection of genes to explore made by the user is easily carried over between the different viewers for different datasets, and can be changed interactively at any point (without returning to a search). Free online use (Java Web Start) or download (Java application archive and MySQL database; requires local MySQL installation) at http://malariagenomeexplorer.org. Contact: joanna.sharman@ed.ac.uk or dgerloff@ffame.org. Supplementary data are available at Bioinformatics online.
MotionExplorer: exploratory search in human motion capture data based on hierarchical aggregation.
Bernard, Jürgen; Wilhelm, Nils; Krüger, Björn; May, Thorsten; Schreck, Tobias; Kohlhammer, Jörn
2013-12-01
We present MotionExplorer, an exploratory search and analysis system for sequences of human motion in large motion capture data collections. This special type of multivariate time series data is relevant in many research fields including medicine, sports and animation. Key tasks in working with motion data include analysis of motion states and transitions, and synthesis of motion vectors by interpolation and combination. In the practice of research and application of human motion data, challenges exist in providing visual summaries and drill-down functionality for handling large motion data collections. We find that this domain can benefit from appropriate visual retrieval and analysis support to handle these tasks in presence of large motion data. To address this need, we developed MotionExplorer together with domain experts as an exploratory search system based on interactive aggregation and visualization of motion states as a basis for data navigation, exploration, and search. Based on an overview-first type visualization, users are able to search for interesting sub-sequences of motion based on a query-by-example metaphor, and explore search results by details on demand. We developed MotionExplorer in close collaboration with the targeted users who are researchers working on human motion synthesis and analysis, including a summative field study. Additionally, we conducted a laboratory design study to substantially improve MotionExplorer towards an intuitive, usable and robust design. MotionExplorer enables the search in human motion capture data with only a few mouse clicks. The researchers unanimously confirm that the system can efficiently support their work.
Amazing Space: Explanations, Investigations, & 3D Visualizations
NASA Astrophysics Data System (ADS)
Summers, Frank
2011-05-01
The Amazing Space website is STScI's online resource for communicating Hubble discoveries and other astronomical wonders to students and teachers everywhere. Our team has developed a broad suite of materials, readings, activities, and visuals that are not only engaging and exciting, but also standards-based and fully supported so that they can be easily used within state and national curricula. These products include stunning imagery, grade-level readings, trading card games, online interactives, and scientific visualizations. We are currently exploring the potential use of stereo 3D in astronomy education.
NASA Technical Reports Server (NTRS)
Wheeler, Kevin; Timucin, Dogan; Rabbette, Maura; Curry, Charles; Allan, Mark; Lvov, Nikolay; Clanton, Sam; Pilewskie, Peter
2002-01-01
The goal of visual inference programming is to develop a software framework for data analysis and to provide machine learning algorithms for interactive data exploration and visualization. The topics include: 1) Intelligent Data Understanding (IDU) framework; 2) Challenge problems; 3) What's new here; 4) Framework features; 5) Wiring diagram; 6) Generated script; 7) Results of script; 8) Initial algorithms; 9) Independent Component Analysis for instrument diagnosis; 10) Output sensory mapping virtual joystick; 11) Output sensory mapping typing; 12) Closed-loop feedback mu-rhythm control; 13) Closed-loop training; 14) Data sources; and 15) Algorithms. This paper is in viewgraph form.
Semantics of directly manipulating spatializations.
Hu, Xinran; Bradel, Lauren; Maiti, Dipayan; House, Leanna; North, Chris; Leman, Scotland
2013-12-01
When high-dimensional data is visualized in a 2D plane by using parametric projection algorithms, users may wish to manipulate the layout of the data points to better reflect their domain knowledge or to explore alternative structures. However, few users are well-versed in the algorithms behind the visualizations, making parameter tweaking more of a guessing game than a series of decisive interactions. Translating user interactions into algorithmic input is a key component of Visual to Parametric Interaction (V2PI) [13]. Instead of adjusting parameters, users directly move data points on the screen, which then updates the underlying statistical model. However, we have found that some data points that are not moved by the user are just as important in the interactions as the data points that are moved. Users frequently move some data points with respect to some other 'unmoved' data points that they consider as spatially contextual. However, in current V2PI interactions, these points are not explicitly identified when directly manipulating the moved points. We design a richer set of interactions that makes this context more explicit, and a new algorithm and sophisticated weighting scheme that incorporates the importance of these unmoved data points into V2PI.
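In the spirit of the interaction-to-parameter translation described above, though not the paper's actual algorithm or weighting scheme, the sketch below fits a linear 2D projection to a layout by weighted least squares, giving dragged points a high weight and the contextual 'unmoved' points a smaller but nonzero one, so both influence the updated mapping. All data and weights are hypothetical.

```python
# Toy illustration of incorporating both moved and contextual ("unmoved") points
# when updating a 2D layout from direct manipulation. This is a weighted
# least-squares stand-in, not the V2PI algorithm from the paper.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))            # 8 high-dimensional (5-D) points, hypothetical data

# Desired 2D layout after the user's edits: indices 0-1 were dragged,
# indices 2-3 were left in place but serve as spatial context.
Y = rng.normal(size=(8, 2))
moved, context = [0, 1], [2, 3]

w = np.zeros(len(X))
w[moved], w[context] = 1.0, 0.3        # hypothetical weights: moved points dominate

sw = np.sqrt(w)[:, None]
P, *_ = np.linalg.lstsq(sw * X, sw * Y, rcond=None)   # 5x2 projection matrix

new_layout = X @ P                     # re-project everything with the updated mapping
print(new_layout.round(2))
```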
NASA Astrophysics Data System (ADS)
Hess, M. R.; Petrovic, V.; Kuester, F.
2017-08-01
Digital documentation of cultural heritage structures is increasingly more common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry techniques for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where these data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is on the classification of different building materials with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector or local surface geometry. Multiple case studies are presented here to demonstrate the flexibility and utility of the presented point cloud visualization framework to achieve classification objectives.
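The framework above evaluates user-defined functions over per-point attributes such as color, laser intensity and surface normals. A minimal sketch of what such a classification function could look like follows; the attribute thresholds and material labels are invented for illustration.

```python
# Minimal sketch of a user-defined per-point classification rule over color,
# laser intensity, and surface normal; thresholds and labels are hypothetical,
# chosen only to illustrate the kind of function such a framework evaluates.
import numpy as np

def classify_points(rgb, intensity, normals):
    """rgb in [0,1]^(N,3), intensity in [0,1]^(N,), normals as unit vectors (N,3)."""
    labels = np.full(len(rgb), "unknown", dtype=object)
    reddish = (rgb[:, 0] > 0.5) & (rgb[:, 0] > rgb[:, 1] + 0.1)
    bright = intensity > 0.7
    horizontal = np.abs(normals[:, 2]) > 0.9       # normal roughly vertical => flat surface
    labels[reddish & ~horizontal] = "brick"
    labels[bright & horizontal] = "plaster_floor"
    return labels

rng = np.random.default_rng(3)
n = 5
rgb = rng.random((n, 3))
intensity = rng.random(n)
normals = rng.normal(size=(n, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(classify_points(rgb, intensity, normals))
```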
NASA Astrophysics Data System (ADS)
Miller, M. K.; Rossiter, A.; Spitzer, W.
2016-12-01
The Exploratorium, a hands-on science museum, explores local environmental conditions of San Francisco Bay to connect audiences to the larger global implications of ocean acidification and climate change. The work is centered in the Fisher Bay Observatory at Pier 15, a glass-walled gallery sited for explorations of urban San Francisco and the Bay. Interactive exhibits, high-resolution data visualizations, and mediated activities and conversations communicate to public audiences the impacts of excess carbon dioxide in the atmosphere and ocean. Through a 10-year education partnership with NOAA and two environmental literacy grants funded by its Office of Education, the Exploratorium has been part of two distinct but complementary strategies to increase climate literacy beyond traditional classroom settings. We will discuss two projects that address the ways complex scientific information can be transformed into learning opportunities for the public, providing information citizens can use for decision-making in their personal lives and their communities. The Visualizing Change project developed "visual narratives" that combine scientific visualizations and other images with storytelling about the science and potential solutions of climate impacts on the ocean. The narratives were designed to engage curiosity and provide the public with hopeful and useful information to stimulate solutions-oriented behavior rather than to communicate despair about climate change. Training workshops for aquarium and museum docents prepare informal educators to use the narratives and help them frame productive conversations with the public. The Carbon Networks project, led by the Exploratorium, uses local and Pacific Rim data to explore the current state of climate change and ocean acidification. The Exploratorium collects and displays local ocean and atmosphere data as a member of the Central and Northern California Ocean Observing System and as an observing station for NOAA's Pacific Marine Environment Lab's carbon buoy network. Other Carbon Network partners, the Pacific Science Center and Waikiki Aquarium, also have access to local carbon data from NOAA. The project collectively explores the development of hands-on activities, teaching resources, and workshops for museum educators and classroom teachers.
2011-01-01
The goal of visual analytics is to facilitate the discourse between the user and the data by providing dynamic displays and versatile visual interaction opportunities with the data that can support analytical reasoning and the exploration of data from multiple user-customisable aspects. This paper introduces geospatial visual analytics, a specialised subtype of visual analytics, and provides pointers to a number of learning resources about the subject, as well as some examples of human health, surveillance, emergency management and epidemiology-related geospatial visual analytics applications and examples of free software tools that readers can experiment with, such as Google Public Data Explorer. The authors also present a practical demonstration of geospatial visual analytics using partial data for 35 countries from a publicly available World Health Organization (WHO) mortality dataset and Microsoft Live Labs Pivot technology, a free, general purpose visual analytics tool that offers a fresh way to visually browse and arrange massive amounts of data and images online and also supports geographic and temporal classifications of datasets featuring geospatial and temporal components. Interested readers can download a Zip archive (included with the manuscript as an additional file) containing all files, modules and library functions used to deploy the WHO mortality data Pivot collection described in this paper. PMID:21410968
Kamel Boulos, Maged N; Viangteeravat, Teeradache; Anyanwu, Matthew N; Ra Nagisetty, Venkateswara; Kuscu, Emin
2011-03-16
The goal of visual analytics is to facilitate the discourse between the user and the data by providing dynamic displays and versatile visual interaction opportunities with the data that can support analytical reasoning and the exploration of data from multiple user-customisable aspects. This paper introduces geospatial visual analytics, a specialised subtype of visual analytics, and provides pointers to a number of learning resources about the subject, as well as some examples of human health, surveillance, emergency management and epidemiology-related geospatial visual analytics applications and examples of free software tools that readers can experiment with, such as Google Public Data Explorer. The authors also present a practical demonstration of geospatial visual analytics using partial data for 35 countries from a publicly available World Health Organization (WHO) mortality dataset and Microsoft Live Labs Pivot technology, a free, general purpose visual analytics tool that offers a fresh way to visually browse and arrange massive amounts of data and images online and also supports geographic and temporal classifications of datasets featuring geospatial and temporal components. Interested readers can download a Zip archive (included with the manuscript as an additional file) containing all files, modules and library functions used to deploy the WHO mortality data Pivot collection described in this paper.
Semantics by analogy for illustrative volume visualization
Gerl, Moritz; Rautek, Peter; Isenberg, Tobias; Gröller, Eduard
2012-01-01
We present an interactive graphical approach for the explicit specification of semantics for volume visualization. This explicit and graphical specification of semantics for volumetric features allows us to visually assign meaning to both input and output parameters of the visualization mapping. This is in contrast to the implicit way of specifying semantics using transfer functions. In particular, we demonstrate how to realize a dynamic specification of semantics which allows to flexibly explore a wide range of mappings. Our approach is based on three concepts. First, we use semantic shader augmentation to automatically add rule-based rendering functionality to static visualization mappings in a shader program, while preserving the visual abstraction that the initial shader encodes. With this technique we extend recent developments that define a mapping between data attributes and visual attributes with rules, which are evaluated using fuzzy logic. Second, we let users define the semantics by analogy through brushing on renderings of the data attributes of interest. Third, the rules are specified graphically in an interface that provides visual clues for potential modifications. Together, the presented methods offer a high degree of freedom in the specification and exploration of rule-based mappings and avoid the limitations of a linguistic rule formulation. PMID:23576827
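The paper maps data attributes to visual attributes through rules evaluated with fuzzy logic inside augmented shaders. The Python sketch below shows only the generic shape of such a rule ("IF density IS high THEN opacity IS high"); the membership function and attribute names are made up, and nothing here reflects the actual shader-augmentation mechanism.

```python
# Generic fuzzy-rule sketch: "IF density IS high THEN opacity IS high".
# Membership functions and attribute names are invented for illustration; the
# paper realises such rules inside augmented shaders, not in Python.
def high(x, lo=0.4, hi=0.8):
    """Piecewise-linear membership of 'high' on [0, 1]."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def opacity_from_density(density, base_opacity=0.1, max_opacity=0.9):
    truth = high(density)                       # degree to which the rule fires
    return base_opacity + truth * (max_opacity - base_opacity)

for d in (0.2, 0.5, 0.9):
    print(f"density={d:.1f} -> opacity={opacity_from_density(d):.2f}")
```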
Feedback and Elaboration within a Computer-Based Simulation: A Dual Coding Perspective.
ERIC Educational Resources Information Center
Rieber, Lloyd P.; And Others
The purpose of this study was to explore how adult users interact and learn during a computer-based simulation given visual and verbal forms of feedback coupled with embedded elaborations of the content. A total of 52 college students interacted with a computer-based simulation of Newton's laws of motion in which they had control over the motion…
The Impact of the Webcam on an Online L2 Interaction
ERIC Educational Resources Information Center
Guichon, Nicolas; Cohen, Cathy
2014-01-01
It is intuitively felt that visual cues should enhance online communication, and this experimental study aims to test this prediction by exploring the value provided by a webcam in an online L2 pedagogical teacher-to-learner interaction. A total of 40 French undergraduate students with a B2 level in English were asked to describe in English four…
Visual exploration of high-dimensional data through subspace analysis and dynamic projections
Liu, S.; Wang, B.; Thiagarajan, J. J.; ...
2015-06-01
Here, we introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
Visual Exploration of High-Dimensional Data through Subspace Analysis and Dynamic Projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, S.; Wang, B.; Thiagarajan, Jayaraman J.
2015-06-01
We introduce a novel interactive framework for visualizing and exploring high-dimensional datasets based on subspace analysis and dynamic projections. We assume the high-dimensional dataset can be represented by a mixture of low-dimensional linear subspaces with mixed dimensions, and provide a method to reliably estimate the intrinsic dimension and linear basis of each subspace extracted from the subspace clustering. Subsequently, we use these bases to define unique 2D linear projections as viewpoints from which to visualize the data. To understand the relationships among the different projections and to discover hidden patterns, we connect these projections through dynamic projections that create smooth animated transitions between pairs of projections. We introduce the view transition graph, which provides flexible navigation among these projections to facilitate an intuitive exploration. Finally, we provide detailed comparisons with related systems, and use real-world examples to demonstrate the novelty and usability of our proposed framework.
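Both records above derive 2D linear projections from estimated subspace bases and animate between pairs of projections. As a simple stand-in for that transition scheme, not the paper's method, the sketch below interpolates two orthonormal bases and re-orthonormalizes each frame before projecting the data.

```python
# Sketch of animating between two 2D linear projections of high-dimensional data
# by interpolating their bases and re-orthonormalising each frame. A simple
# stand-in for the dynamic projections described above, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))                 # hypothetical 6-D dataset

def random_basis(d, rng):
    """Random orthonormal d x 2 basis via QR decomposition."""
    Q, _ = np.linalg.qr(rng.normal(size=(d, 2)))
    return Q

A, B = random_basis(6, rng), random_basis(6, rng)   # two projection viewpoints

def frame(t):
    """2D coordinates at interpolation parameter t in [0, 1]."""
    M = (1 - t) * A + t * B
    Q, _ = np.linalg.qr(M)                    # keep the interpolated frame orthonormal
    return X @ Q                              # (200, 2) coordinates for this frame

for t in (0.0, 0.5, 1.0):
    print(t, frame(t)[:1].round(2))
```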
SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics
Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis
2015-01-01
Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most “useful” or “interesting”. The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics. PMID:26779379
SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics.
Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis
2015-09-01
Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most "useful" or "interesting". The two major obstacles in recommending interesting visualizations are (a) scale: evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility: identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics.
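As a rough illustration of the deviation-based utility idea: a candidate view (grouping attribute, measure, aggregate) can be scored by how far its normalized aggregate distribution on the query subset departs from the same distribution on the full reference dataset. The sketch below uses pandas group-bys and an L2 distance purely for illustration; SeeDB's actual distance metric and its pruning/sharing optimizations are not reproduced here, and all column names are hypothetical.

```python
import numpy as np
import pandas as pd

def view_utility(df, subset_mask, group_col, measure_col, agg="mean"):
    """Deviation-based utility of one (group, measure, agg) view:
    distance between the normalized aggregate vector computed on the
    query subset and on the full (reference) dataset."""
    target = df[subset_mask].groupby(group_col)[measure_col].agg(agg)
    reference = df.groupby(group_col)[measure_col].agg(agg)
    # Align group labels and normalize each aggregate vector.
    target, reference = target.align(reference, fill_value=0.0)
    p = target / target.sum() if target.sum() else target
    q = reference / reference.sum() if reference.sum() else reference
    return np.linalg.norm(p - q, ord=2)    # Euclidean deviation

# Rank candidate grouping attributes for a subset of interest (toy data).
df = pd.DataFrame({"region": np.random.choice(list("NESW"), 1000),
                   "product": np.random.choice(list("ABC"), 1000),
                   "sales": np.random.gamma(2.0, 10.0, 1000)})
subset = df["product"] == "A"
scores = {dim: view_utility(df, subset, dim, "sales")
          for dim in ["region", "product"]}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```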
Alam, Zaid; Peddinti, Gopal
2017-01-01
The advent of the polypharmacology paradigm in drug discovery calls for novel chemoinformatic tools for analyzing compounds’ multi-targeting activities. Such tools should provide an intuitive representation of the chemical space through capturing and visualizing underlying patterns of compound similarities linked to their polypharmacological effects. Most of the existing compound-centric chemoinformatics tools lack interactive options and user interfaces that are critical for the real-time needs of chemical biologists carrying out compound screening experiments. Toward that end, we introduce C-SPADE, an open-source exploratory web-tool for interactive analysis and visualization of drug profiling assays (biochemical, cell-based or cell-free) using compound-centric similarity clustering. C-SPADE allows the users to visually map the chemical diversity of a screening panel, explore investigational compounds in terms of their similarity to the screening panel, perform polypharmacological analyses and guide drug-target interaction predictions. C-SPADE requires only the raw drug profiling data as input, and it automatically retrieves the structural information and constructs the compound clusters in real-time, thereby reducing the time required for manual analysis in drug development or repurposing applications. The web-tool provides a customizable visual workspace that can either be downloaded as figure or Newick tree file or shared as a hyperlink with other users. C-SPADE is freely available at http://cspade.fimm.fi/. PMID:28472495
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granger, Brian R.; Chang, Yi -Chien; Wang, Yan
Here, the complexity of metabolic networks in microbial communities poses an unresolved visualization and interpretation challenge. We address this challenge in the newly expanded version of a software tool for the analysis of biological networks, VisANT 5.0. We focus in particular on facilitating the visual exploration of metabolic interaction between microbes in a community, e.g. as predicted by COMETS (Computation of Microbial Ecosystems in Time and Space), a dynamic stoichiometric modeling framework. Using VisANT's unique meta-graph implementation, we show how one can use VisANT 5.0 to explore different time-dependent ecosystem-level metabolic networks. In particular, we analyze the metabolic interaction network between two bacteria previously shown to display an obligate cross-feeding interdependency. In addition, we illustrate how a putative minimal gut microbiome community could be represented in our framework, making it possible to highlight interactions across multiple coexisting species. We envisage that the "symbiotic layout" of VisANT can be employed as a general tool for the analysis of metabolism in complex microbial communities as well as heterogeneous human tissues.
NASA Technical Reports Server (NTRS)
Biegel, Bryan A. (Technical Monitor); Sandstrom, Timothy A.; Henze, Chris; Levit, Creon
2003-01-01
This paper presents the hyperwall, a visualization cluster that uses coordinated visualizations for interactive exploration of multidimensional data and simulations. The system strongly leverages the human eye-brain system with a generous 7x7 array of flat panel LCD screens powered by a Beowulf cluster. With each screen backed by a workstation-class PC, graphics- and compute-intensive applications can be applied to a broad range of data. Navigational tools are presented that allow for investigation of high-dimensional spaces.
Common Ground: An Interactive Visual Exploration and Discovery for Complex Health Data
2014-04-01
annotate other ontologies for the visual interface client. Finally, we are actively working on software development of both a backend server and the...the following infrastructure and resources. For the development and management of the ontologies, we installed a framework consisting of a server...that is being developed by Google. Using these 9 technologies, we developed an HTML5 client that runs on Windows, Mac OSX, Linux and mobile systems
Integrative Genomics Viewer (IGV) | Informatics Technology for Cancer Research (ITCR)
The Integrative Genomics Viewer (IGV) is a high-performance visualization tool for interactive exploration of large, integrated genomic datasets. It supports a wide variety of data types, including array-based and next-generation sequence data, and genomic annotations.
Designing and Using Open-Ended Software to Promote Conceptual Change.
ERIC Educational Resources Information Center
Horwitz, Paul; Barowy, Bill
1994-01-01
Describes a project that explores the use of interactive computer software for teaching Einstein's special theory of relativity to secondary students. Also describes ways to use computers to help students visualize and experiment with otherwise inaccessible phenomena. (ZWH)
Telescopic multi-resolution augmented reality
NASA Astrophysics Data System (ADS)
Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold
2014-05-01
To ensure a self-consistent scaling approximation, the underlying microscopic fluctuation components can naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR-AR by introducing an energy-matter interaction engine framework that is based on known physics, biology, and chemistry principles. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system that is driven by real-world measurements. Such an interdisciplinary approach enables us to achieve not only multi-scale, telescopic visualization of real and virtual information but also the ability to conduct thought experiments through AR. As a result of the consistency, this framework allows us to explore a large dimensionality parameter space of measured and unmeasured regions. Towards this direction, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework to optimize testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training as well as 'make believe' entertainment industries in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, to be pedagogically visualized in a zoom-in-and-out, consistent, multi-scale approximation.
Titanbrowse: a new paradigm for access, visualization and analysis of hyperspectral imaging
NASA Astrophysics Data System (ADS)
Penteado, Paulo F.
2016-10-01
Currently there are archives and tools to explore remote sensing imaging, but these lack some functionality needed for hyperspectral imagers: 1) Querying and serving only whole datacubes is not enough, since in each cube there is typically a large variation in observation geometry over the spatial pixels. Thus, often the most useful unit for selecting observations of interest is not a whole cube but rather a single spectrum. 2) Pixel-specific geometric data included in the standard pipelines is calculated at only one point per pixel. Particularly for selections of pixels from many different cubes, or observations near the limb, it is necessary to know the actual extent of each pixel. 3) Database queries need to be driven not only by metadata, but also by the spectral data. For instance, one query might look for atypical values of some band, or atypical relations between bands, denoting spectral features (such as ratios or differences between bands). 4) There is the need to evaluate arbitrary, dynamically-defined, complex functions of the data (beyond just simple arithmetic operations), both for selection in the queries, and for visualization, to interactively tune the queries to the observations of interest. 5) Making the most useful query for some analysis often requires interactive visualization integrated with data selection and processing, because the user needs to explore how different functions of the data vary over the observations without having to download data and import it into visualization software. 6) Complementary to interactive use, an API allowing programmatic access to the system is needed for systematic data analyses. 7) Direct access to calibrated and georeferenced data is needed, without the need to download data and software and learn to process it. We present titanbrowse, a database, exploration and visualization system for Cassini VIMS observations of Titan, designed to fulfill the aforementioned needs. While it originally ran on data in the user's computer, we are now developing an online version, so that users do not need to download software and data. The server, which we maintain, processes the queries and communicates the results to the client the user runs. http://ppenteado.net/titanbrowse.
Enabling scientific workflows in virtual reality
Kreylos, O.; Bawden, G.; Bernardin, T.; Billen, M.I.; Cowgill, E.S.; Gold, R.D.; Hamann, B.; Jadamec, M.; Kellogg, L.H.; Staadt, O.G.; Sumner, D.Y.
2006-01-01
To advance research and improve the scientific return on data collection and interpretation efforts in the geosciences, we have developed methods of interactive visualization, with a special focus on immersive virtual reality (VR) environments. Earth sciences employ a strongly visual approach to the measurement and analysis of geologic data due to the spatial and temporal scales over which such data ranges. As observations and simulations increase in size and complexity, the Earth sciences are challenged to manage and interpret increasing amounts of data. Reaping the full intellectual benefits of immersive VR requires us to tailor exploratory approaches to scientific problems. These applications build on the visualization method's strengths, using both 3D perception and interaction with data and models, to take advantage of the skills and training of the geological scientists exploring their data in the VR environment. This interactive approach has enabled us to develop a suite of tools that are adaptable to a range of problems in the geosciences and beyond. Copyright © 2008 by the Association for Computing Machinery, Inc.
Interactive visualization and analysis of multimodal datasets for surgical applications.
Kirmizibayrak, Can; Yim, Yeny; Wakid, Mike; Hahn, James
2012-12-01
Surgeons use information from multiple sources when making surgical decisions. These include volumetric datasets (such as CT, PET, MRI, and their variants), 2D datasets (such as endoscopic videos), and vector-valued datasets (such as computer simulations). Presenting all the information to the user in an effective manner is a challenging problem. In this paper, we present a visualization approach that displays the information from various sources in a single coherent view. The system allows the user to explore and manipulate volumetric datasets, display analysis of dataset values in local regions, combine 2D and 3D imaging modalities and display results of vector-based computer simulations. Several interaction methods are discussed: in addition to traditional interfaces including mouse and trackers, gesture-based natural interaction methods are shown to control these visualizations with real-time performance. An example of a medical application (medialization laryngoplasty) is presented to demonstrate how the combination of different modalities can be used in a surgical setting with our approach.
ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation.
Hohman, Fred; Hodas, Nathan; Chau, Duen Horng
2017-05-01
Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as "black-boxes" due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user's data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.
An overview of 3D software visualization.
Teyseyre, Alfredo R; Campo, Marcelo R
2009-01-01
Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. During many years, visualization in 2D space has been actively studied, but in the last decade, researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects like: visual representations, interaction issues, evaluation methods and development tools. We also perform a survey of some representative tools to support different tasks, i.e., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude identifying future research directions.
Level-2 Milestone 4797: Early Users on Max, Sequoia Visualization Cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cupps, Kim C.
This report documents that an early user has run successfully on Max, the Sequoia visualization cluster, satisfying ASC L2 milestone 4797: Early Users on Sequoia Visualization System (Max), due December 31, 2013. The Max visualization and data analysis cluster will provide Sequoia users with compute cycles and an interactive option for data exploration and analysis. The system will be integrated in the first quarter of FY14 and the system is expected to be moved to the classified network by the second quarter of FY14. The goal of this milestone is to have early users running their visualization and data analysis work on the Max cluster on the classified network.
Kwon, Daehong; Lee, Daehwan; Kim, Juyeon; Lee, Jongin; Sim, Mikang; Kim, Jaebum
2018-05-09
Proteins perform biological functions through cascading interactions with each other by forming protein complexes. As a result, interactions among proteins, called protein-protein interactions (PPIs), are not completely free from selection constraint during evolution. Therefore, the identification and analysis of PPI changes during evolution can give us new insight into the evolution of functions. Although many algorithms, databases and websites have been developed to help the study of PPIs, most of them are limited to visualizing the structure and features of PPIs in a chosen single species with limited functions in the visualization perspective. This leads to difficulties in the identification of different patterns of PPIs in different species and their functional consequences. To resolve these issues, we developed a web application, called INTER-Species Protein Interaction Analysis (INTERSPIA). Given a set of proteins of the user's interest, INTERSPIA first discovers additional proteins that are functionally associated with the input proteins and searches for different patterns of PPIs in multiple species through a server-side pipeline, and second, visualizes the dynamics of PPIs in multiple species using an easy-to-use web interface. INTERSPIA is freely available at http://bioinfo.konkuk.ac.kr/INTERSPIA/.
Realistic terrain visualization based on 3D virtual world technology
NASA Astrophysics Data System (ADS)
Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai
2009-09-01
The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that help engage in geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain and other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for the construction of a mirror world or a sandbox model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.
Realistic terrain visualization based on 3D virtual world technology
NASA Astrophysics Data System (ADS)
Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai
2010-11-01
The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that help engage in geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain and other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for the construction of a mirror world or a sandbox model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.
Livnat, Yarden; Galli, Nathan; Samore, Matthew H; Gundlapalli, Adi V
2012-01-01
Advances in surveillance science have supported public health agencies in tracking and responding to disease outbreaks. Increasingly, epidemiologists have been tasked with interpreting multiple streams of heterogeneous data arising from varied surveillance systems. As a result, public health personnel have experienced an overload of plots and charts as information visualization techniques have not kept pace with the rapid expansion in data availability. This study sought to advance the science of public health surveillance data visualization by conceptualizing a visual paradigm that provides an ‘epidemiological canvas’ for detection, monitoring, exploration and discovery of regional infectious disease activity and by developing a software prototype of an ‘infectious disease weather map’. Design objectives were elucidated and the conceptual model was developed using cognitive task analysis with public health epidemiologists. The software prototype was pilot tested using retrospective data from a large, regional pediatric hospital, and gastrointestinal and respiratory disease outbreaks were re-created as a proof of concept. PMID:22358039
Matisse: A Visual Analytics System for Exploring Emotion Trends in Social Media Text Streams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Drouhard, Margaret MEG G; Beaver, Justin M
Dynamically mining textual information streams to gain real-time situational awareness is especially challenging with social media systems where throughput and velocity properties push the limits of a static analytical approach. In this paper, we describe an interactive visual analytics system, called Matisse, that aids with the discovery and investigation of trends in streaming text. Matisse addresses the challenges inherent to text stream mining through the following technical contributions: (1) robust stream data management, (2) automated sentiment/emotion analytics, (3) interactive coordinated visualizations, and (4) a flexible drill-down interaction scheme that accesses multiple levels of detail. In addition to positive/negative sentiment prediction, Matisse provides fine-grained emotion classification based on Valence, Arousal, and Dominance dimensions and a novel machine learning process. Information from the sentiment/emotion analytics is fused with raw data and summary information to feed temporal, geospatial, term frequency, and scatterplot visualizations using a multi-scale, coordinated interaction model. After describing these techniques, we conclude with a practical case study focused on analyzing the Twitter sample stream during the week of the 2013 Boston Marathon bombings. The case study demonstrates the effectiveness of Matisse at providing guided situational awareness of significant trends in social media streams by orchestrating computational power and human cognition.
Visualization and interaction tools for aerial photograph mosaics
NASA Astrophysics Data System (ADS)
Fernandes, João Pedro; Fonseca, Alexandra; Pereira, Luís; Faria, Adriano; Figueira, Helder; Henriques, Inês; Garção, Rita; Câmara, António
1997-05-01
This paper describes the development of a digital spatial library based on mosaics of digital orthophotos, called Interactive Portugal, that will enable users both to retrieve geospatial information existing in the Portuguese National System for Geographic Information World Wide Web server, and to develop local databases connected to the main system. A set of navigation, interaction, and visualization tools are proposed and discussed. They include sketching, dynamic sketching, and navigation capabilities over the digital orthophotos mosaics. Main applications of this digital spatial library are pointed out and discussed, namely for education, professional, and tourism markets. Future developments are considered. These developments are related to user reactions, technological advancements, and projects that also aim at delivering and exploring digital imagery on the World Wide Web. Future capabilities for site selection and change detection are also considered.
Visual analysis and exploration of complex corporate shareholder networks
NASA Astrophysics Data System (ADS)
Tekušová, Tatiana; Kohlhammer, Jörn
2008-01-01
The analysis of large corporate shareholder network structures is an important task in corporate governance, in financing, and in financial investment domains. In a modern economy, large structures of cross-corporation, cross-border shareholder relationships exist, forming complex networks. These networks are often difficult to analyze with traditional approaches. An efficient visualization of the networks helps to reveal the interdependent shareholding formations and the controlling patterns. In this paper, we propose an effective visualization tool that supports the financial analyst in understanding complex shareholding networks. We develop an interactive visual analysis system by combining state-of-the-art visualization technologies with economic analysis methods. Our system is capable of revealing patterns in large corporate shareholder networks, allows the visual identification of the ultimate shareholders, and supports the visual analysis of integrated cash flow and control rights. We apply our system to an extensive real-world database of shareholder relationships, showing its usefulness for effective visual analysis.
SmartR: an open-source platform for interactive visual analytics for translational research data
Herzinger, Sascha; Gu, Wei; Satagopam, Venkata; Eifes, Serge; Rege, Kavita; Barbosa-Silva, Adriano; Schneider, Reinhard
2017-01-01
Summary: In translational research, efficient knowledge exchange between the different fields of expertise is crucial. An open platform that is capable of storing a multitude of data types such as clinical, pre-clinical or OMICS data combined with strong visual analytical capabilities will significantly accelerate the scientific progress by making data more accessible and hypothesis generation easier. The open data warehouse tranSMART is capable of storing a variety of data types and has a growing user community including both academic institutions and pharmaceutical companies. tranSMART, however, currently lacks interactive and dynamic visual analytics and does not permit any post-processing interaction or exploration. For this reason, we developed SmartR, a plugin for tranSMART, that equips the platform not only with several dynamic visual analytical workflows, but also provides its own framework for the addition of new custom workflows. Modern web technologies such as D3.js or AngularJS were used to build a set of standard visualizations that were heavily improved with dynamic elements. Availability and Implementation: The source code is licensed under the Apache 2.0 License and is freely available on GitHub: https://github.com/transmart/SmartR. Contact: reinhard.schneider@uni.lu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28334291
SmartR: an open-source platform for interactive visual analytics for translational research data.
Herzinger, Sascha; Gu, Wei; Satagopam, Venkata; Eifes, Serge; Rege, Kavita; Barbosa-Silva, Adriano; Schneider, Reinhard
2017-07-15
In translational research, efficient knowledge exchange between the different fields of expertise is crucial. An open platform that is capable of storing a multitude of data types such as clinical, pre-clinical or OMICS data combined with strong visual analytical capabilities will significantly accelerate the scientific progress by making data more accessible and hypothesis generation easier. The open data warehouse tranSMART is capable of storing a variety of data types and has a growing user community including both academic institutions and pharmaceutical companies. tranSMART, however, currently lacks interactive and dynamic visual analytics and does not permit any post-processing interaction or exploration. For this reason, we developed SmartR, a plugin for tranSMART, that equips the platform not only with several dynamic visual analytical workflows, but also provides its own framework for the addition of new custom workflows. Modern web technologies such as D3.js or AngularJS were used to build a set of standard visualizations that were heavily improved with dynamic elements. The source code is licensed under the Apache 2.0 License and is freely available on GitHub: https://github.com/transmart/SmartR. Contact: reinhard.schneider@uni.lu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Reading impairment in schizophrenia: dysconnectivity within the visual system.
Vinckier, Fabien; Cohen, Laurent; Oppenheim, Catherine; Salvador, Alexandre; Picard, Hernan; Amado, Isabelle; Krebs, Marie-Odile; Gaillard, Raphaël
2014-01-01
Patients with schizophrenia suffer from perceptual visual deficits. It remains unclear whether those deficits result from an isolated impairment of a localized brain process or from a more diffuse long-range dysconnectivity within the visual system. We aimed to explore, with a reading paradigm, the functioning of both ventral and dorsal visual pathways and their interaction in schizophrenia. Patients with schizophrenia and control subjects were studied using event-related functional MRI (fMRI) while reading words that were progressively degraded through word rotation or letter spacing. Reading intact or minimally degraded single words involves mainly the ventral visual pathway. Conversely, reading in non-optimal conditions involves both the ventral and the dorsal pathway. The reading paradigm thus allowed us to study the functioning of both pathways and their interaction. Behaviourally, patients with schizophrenia were selectively impaired at reading highly degraded words. While fMRI activation level was not different between patients and controls, functional connectivity between the ventral and dorsal visual pathways increased with word degradation in control subjects, but not in patients. Moreover, there was a negative correlation between the patients' behavioural sensitivity to stimulus degradation and dorso-ventral connectivity. This study suggests that perceptual visual deficits in schizophrenia could be related to dysconnectivity between dorsal and ventral visual pathways. © 2013 Published by Elsevier Ltd.
Yu, Bowen; Doraiswamy, Harish; Chen, Xi; Miraldi, Emily; Arrieta-Ortiz, Mario Luis; Hafemeister, Christoph; Madar, Aviv; Bonneau, Richard; Silva, Cláudio T
2014-12-01
Elucidation of transcriptional regulatory networks (TRNs) is a fundamental goal in biology, and one of the most important components of TRNs are transcription factors (TFs), proteins that specifically bind to gene promoter and enhancer regions to alter target gene expression patterns. Advances in genomic technologies as well as advances in computational biology have led to multiple large regulatory network models (directed networks) each with a large corpus of supporting data and gene-annotation. There are multiple possible biological motivations for exploring large regulatory network models, including: validating TF-target gene relationships, figuring out co-regulation patterns, and exploring the coordination of cell processes in response to changes in cell state or environment. Here we focus on queries aimed at validating regulatory network models, and on coordinating visualization of primary data and directed weighted gene regulatory networks. The large size of both the network models and the primary data can make such coordinated queries cumbersome with existing tools and, in particular, inhibits the sharing of results between collaborators. In this work, we develop and demonstrate a web-based framework for coordinating visualization and exploration of expression data (RNA-seq, microarray), network models and gene-binding data (ChIP-seq). Using specialized data structures and multiple coordinated views, we design an efficient querying model to support interactive analysis of the data. Finally, we show the effectiveness of our framework through case studies for the mouse immune system (a dataset focused on a subset of key cellular functions) and a model bacteria (a small genome with high data-completeness).
NASA Astrophysics Data System (ADS)
Liu, Shuai; Chen, Ge; Yao, Shifeng; Tian, Fenglin; Liu, Wei
2017-07-01
This paper presents a novel integrated marine visualization framework that focuses on processing and analyzing multi-dimensional spatiotemporal marine data in one workflow. Effective marine data visualization is needed for extracting useful patterns, recognizing changes, and understanding physical processes in oceanographic research. However, the multi-source, multi-format, multi-dimensional characteristics of marine data pose a challenge for interactive and feasible (timely) marine data analysis and visualization in one workflow. A global multi-resolution virtual terrain environment is also needed to give oceanographers and the public a real geographic background reference and to help them identify the geographical variation of ocean phenomena. This paper introduces a data integration and processing method to efficiently visualize and analyze the heterogeneous marine data. Based on the data we processed, several GPU-based visualization methods are explored to interactively demonstrate marine data. GPU-tessellated global terrain rendering using ETOPO1 data is realized, and video memory usage is controlled to ensure high efficiency. A modified ray-casting algorithm for the uneven multi-section Argo volume data is also presented, and the transfer function is designed to analyze the 3D structure of ocean phenomena. Based on the framework we designed, an integrated visualization system is realized. The effectiveness and efficiency of the framework are demonstrated. This system is expected to make a significant contribution to the demonstration and understanding of marine physical processes in a virtual global environment.
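The ray casting and transfer function mentioned above are GPU-based and tailored to uneven multi-section Argo data; as a hedged, CPU-only illustration of the core idea, the sketch below composites samples front to back through a toy scalar volume with a simple 1D transfer function (nearest-neighbour sampling and a uniform grid are simplifications, and none of the names below come from the paper).

```python
import numpy as np

def transfer_function(value):
    """Map a normalized scalar sample to RGBA (toy 1D transfer function)."""
    r = np.clip(value * 2.0, 0.0, 1.0)
    b = np.clip(1.0 - value * 2.0, 0.0, 1.0)
    alpha = 0.05 + 0.4 * value            # denser samples are more opaque
    return np.array([r, value, b]), alpha

def cast_ray(volume, origin, direction, step=0.5, max_steps=400):
    """Front-to-back compositing along one ray through a 3D scalar volume."""
    color = np.zeros(3)
    transmittance = 1.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(max_steps):
        idx = np.round(pos).astype(int)   # nearest-neighbour sampling
        if np.any(idx < 0) or np.any(idx >= volume.shape):
            break                         # ray left the volume
        rgb, alpha = transfer_function(volume[tuple(idx)])
        color += transmittance * alpha * rgb
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-3:          # early ray termination
            break
        pos += step * d
    return color

vol = np.random.rand(64, 64, 64)          # stand-in for gridded ocean data
pixel = cast_ray(vol, origin=(0, 32, 32), direction=(1, 0, 0))
```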
Social Network Extraction and Analysis Based on Multimodal Dyadic Interaction
Escalera, Sergio; Baró, Xavier; Vitrià, Jordi; Radeva, Petia; Raducanu, Bogdan
2012-01-01
Social interactions are a very important component in people’s lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to New York Times’ Blogging Heads opinion blog. The Social Network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links’ weights are a measure of the “influence” a person has over the other. The states of the Influence Model encode automatically extracted audio/visual features from our videos using state-of-the art algorithms. Our results are reported in terms of accuracy of audio/visual data fusion for speaker segmentation and centrality measures used to characterize the extracted social network. PMID:22438733
Multisensory effects on somatosensation: a trimodal visuo-vestibular-tactile interaction
Kaliuzhna, Mariia; Ferrè, Elisa Raffaella; Herbelin, Bruno; Blanke, Olaf; Haggard, Patrick
2016-01-01
Vestibular information about self-motion is combined with other sensory signals. Previous research described both visuo-vestibular and vestibular-tactile bilateral interactions, but the simultaneous interaction between all three sensory modalities has not been explored. Here we exploit a previously reported visuo-vestibular integration to investigate multisensory effects on tactile sensitivity in humans. Tactile sensitivity was measured during passive whole body rotations alone or in conjunction with optic flow, creating either purely vestibular or visuo-vestibular sensations of self-motion. Our results demonstrate that tactile sensitivity is modulated by perceived self-motion, as provided by a combined visuo-vestibular percept, and not by the visual and vestibular cues independently. We propose a hierarchical multisensory interaction that underpins somatosensory modulation: visual and vestibular cues are first combined to produce a multisensory self-motion percept. Somatosensory processing is then enhanced according to the degree of perceived self-motion. PMID:27198907
Delta: a new web-based 3D genome visualization and analysis platform.
Tang, Bixia; Li, Feifei; Li, Jing; Zhao, Wenming; Zhang, Zhihua
2018-04-15
Delta is an integrative visualization and analysis platform to facilitate visually annotating and exploring the 3D physical architecture of genomes. Delta takes a Hi-C or ChIA-PET contact matrix as input and predicts the topologically associating domains and chromatin loops in the genome. It then generates a physical 3D model which represents the plausible consensus 3D structure of the genome. Delta features a highly interactive visualization tool which enhances the integration of genome topology/physical structure with extensive genome annotation by juxtaposing the 3D model with diverse genomic assay outputs. Finally, by visually comparing the 3D model of the β-globin gene locus and its annotation, we speculated a plausible transitory interaction pattern in the locus. Experimental evidence was found to support this speculation by literature survey. This served as an example of intuitive hypothesis testing with the help of Delta. Delta is freely accessible from http://delta.big.ac.cn, and the source code is available at https://github.com/zhangzhwlab/delta. Contact: zhangzhihua@big.ac.cn. Supplementary data are available at Bioinformatics online.
PaintOmics 3: a web resource for the pathway analysis and visualization of multi-omics data.
Hernández-de-Diego, Rafael; Tarazona, Sonia; Martínez-Mira, Carlos; Balzano-Nogueira, Leandro; Furió-Tarí, Pedro; Pappas, Georgios J; Conesa, Ana
2018-05-25
The increasing availability of multi-omic platforms poses new challenges to data analysis. Joint visualization of multi-omics data is instrumental in better understanding interconnections across molecular layers and in fully utilizing the multi-omic resources available to make biological discoveries. We present here PaintOmics 3, a web-based resource for the integrated visualization of multiple omic data types onto KEGG pathway diagrams. PaintOmics 3 combines server-end capabilities for data analysis with the potential of modern web resources for data visualization, providing researchers with a powerful framework for interactive exploration of their multi-omics information. Unlike other visualization tools, PaintOmics 3 covers a comprehensive pathway analysis workflow, including automatic feature name/identifier conversion, multi-layered feature matching, pathway enrichment, network analysis, interactive heatmaps, trend charts, and more. It accepts a wide variety of omic types, including transcriptomics, proteomics and metabolomics, as well as region-based approaches such as ATAC-seq or ChIP-seq data. The tool is freely available at www.paintomics.org.
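PaintOmics 3 lists pathway enrichment as one step of its workflow, but the abstract does not state the exact statistic used. The sketch below shows one commonly used formulation, a one-sided hypergeometric test on the overlap between the user's significant features and a pathway's gene set; the gene identifiers are hypothetical and this is not necessarily the tool's implementation.

```python
from scipy.stats import hypergeom

def pathway_enrichment(significant, pathway, universe):
    """One-sided hypergeometric p-value for over-representation of a
    pathway's genes among the significant features."""
    significant, pathway, universe = map(set, (significant, pathway, universe))
    k = len(significant & pathway & universe)   # hits inside the pathway
    M = len(universe)                           # all measured features
    n = len(pathway & universe)                 # pathway size in the universe
    N = len(significant & universe)             # number of "draws"
    # P(X >= k) for X ~ Hypergeometric(M, n, N)
    return hypergeom.sf(k - 1, M, n, N)

universe = {f"g{i}" for i in range(2000)}
pathway = {f"g{i}" for i in range(50)}              # hypothetical pathway set
significant = {f"g{i}" for i in range(0, 200, 4)}   # hypothetical DE genes
print(pathway_enrichment(significant, pathway, universe))
```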
imDEV: a graphical user interface to R multivariate analysis tools in Microsoft Excel
Grapov, Dmitry; Newman, John W.
2012-01-01
Summary: Interactive modules for Data Exploration and Visualization (imDEV) is a Microsoft Excel spreadsheet-embedded application providing an integrated environment for the analysis of omics data through a user-friendly interface. Individual modules enable interactive and dynamic analyses of large data by interfacing R's multivariate statistics and highly customizable visualizations with the spreadsheet environment, aiding robust inferences and generating information-rich data visualizations. This tool provides access to multiple comparisons with false discovery correction, hierarchical clustering, principal and independent component analyses, partial least squares regression and discriminant analysis, through an intuitive interface for creating high-quality two- and three-dimensional visualizations including scatter plot matrices, distribution plots, dendrograms, heat maps, biplots, trellis biplots and correlation networks. Availability and implementation: Freely available for download at http://sourceforge.net/projects/imdev/. Implemented in R and VBA and supported by Microsoft Excel (2003, 2007 and 2010). Contact: John.Newman@ars.usda.gov Supplementary Information: Installation instructions, tutorials and users manual are available at http://sourceforge.net/projects/imdev/. PMID:22815358
Fast interactive exploration of 4D MRI flow data
NASA Astrophysics Data System (ADS)
Hennemuth, A.; Friman, O.; Schumann, C.; Bock, J.; Drexl, J.; Huellebrand, M.; Markl, M.; Peitgen, H.-O.
2011-03-01
1- or 2-directional MRI blood flow mapping sequences are an integral part of standard MR protocols for diagnosis and therapy control in heart diseases. Recent progress in rapid MRI has made it possible to acquire volumetric, 3-directional cine images in reasonable scan time. In addition to flow and velocity measurements relative to arbitrarily oriented image planes, the analysis of 3-dimensional trajectories enables the visualization of flow patterns, local features of flow trajectories or possible paths into specific regions. The anatomical and functional information allows for advanced hemodynamic analysis in different application areas like stroke risk assessment, congenital and acquired heart disease, aneurysms or abdominal collaterals and cranial blood flow. The complexity of the 4D MRI flow datasets and the flow related image analysis tasks makes the development of fast comprehensive data exploration software for advanced flow analysis a challenging task. Most existing tools address only individual aspects of the analysis pipeline such as pre-processing, quantification or visualization, or are difficult to use for clinicians. The goal of the presented work is to provide a software solution that supports the whole image analysis pipeline and enables data exploration with fast intuitive interaction and visualization methods. The implemented methods facilitate the segmentation and inspection of different vascular systems. Arbitrary 2- or 3-dimensional regions for quantitative analysis and particle tracing can be defined interactively. Synchronized views of animated 3D path lines, 2D velocity or flow overlays and flow curves offer a detailed insight into local hemodynamics. The application of the analysis pipeline is shown for 6 cases from clinical practice, illustrating the usefulness for different clinical questions. Initial user tests show that the software is intuitive to learn and even inexperienced users achieve good results within reasonable processing times.
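Particle tracing of the kind described above amounts to integrating seed points through the measured velocity field. Below is a minimal sketch, assuming a regular (T, X, Y, Z) grid, nearest-neighbour sampling and classical RK4; a production tool would interpolate in space and time and restrict particles to the segmented vessel, and all names here are illustrative.

```python
import numpy as np

def sample_velocity(field, pos, t):
    """Nearest-neighbour lookup in a (T, X, Y, Z, 3) velocity field."""
    ti = int(np.clip(round(t), 0, field.shape[0] - 1))
    idx = np.clip(np.round(pos).astype(int), 0,
                  np.array(field.shape[1:4]) - 1)
    return field[ti, idx[0], idx[1], idx[2]]

def trace_path_line(field, seed, t0=0.0, dt=0.2, steps=200):
    """Integrate one path line with classical fourth-order Runge-Kutta."""
    pos, t = np.asarray(seed, dtype=float), t0
    path = [pos.copy()]
    for _ in range(steps):
        k1 = sample_velocity(field, pos, t)
        k2 = sample_velocity(field, pos + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = sample_velocity(field, pos + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = sample_velocity(field, pos + dt * k3, t + dt)
        pos = pos + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        path.append(pos.copy())
    return np.array(path)

flow = np.random.randn(20, 32, 32, 32, 3) * 0.5   # toy 4D velocity data
line = trace_path_line(flow, seed=(16.0, 16.0, 16.0))
```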
Supporting creativity and appreciation of uncertainty in exploring geo-coded public health data.
Thew, S L; Sutcliffe, A; de Bruijn, O; McNaught, J; Procter, R; Jarvis, Paul; Buchan, I
2011-01-01
We present a prototype visualisation tool, ADVISES (Adaptive Visualization for e-Science), designed to support epidemiologists and public health practitioners in exploring geo-coded datasets and generating spatial epidemiological hypotheses. The tool is designed to support creative thinking while providing the means for the user to evaluate the validity of the visualization in terms of statistical uncertainty. We present an overview of the application and the results of an evaluation exploring public health researchers' responses to maps as a new way of viewing familiar data, in particular the use of thematic maps with adjoining descriptive statistics and forest plots to support the generation and evaluation of new hypotheses. A series of qualitative evaluations involved one experienced researcher asking 21 volunteers to interact with the system to perform a series of relatively complex, realistic map-building and exploration tasks, using a 'think aloud' protocol, followed by a semi-structured interview. The volunteers were academic epidemiologists and UK National Health Service analysts. All users quickly and confidently created maps, and went on to spend substantial amounts of time exploring and interacting with the system, generating hypotheses about their maps. Our findings suggest that the tool is able to support creativity and statistical appreciation among public health professionals and epidemiologists building thematic maps. Software such as this, introduced appropriately, could increase the capability of existing personnel for generating public health intelligence.
Interactive Visual Analysis within Dynamic Ocean Models
NASA Astrophysics Data System (ADS)
Butkiewicz, T.
2012-12-01
The many observation and simulation based ocean models available today can provide crucial insights for all fields of marine research and can serve as valuable references when planning data collection missions. However, the increasing size and complexity of these models makes leveraging their contents difficult for end users. Through a combination of data visualization techniques, interactive analysis tools, and new hardware technologies, the data within these models can be made more accessible to domain scientists. We present an interactive system that supports exploratory visual analysis within large-scale ocean flow models. The currents and eddies within the models are illustrated using effective, particle-based flow visualization techniques. Stereoscopic displays and rendering methods are employed to ensure that the user can correctly perceive the complex 3D structures of depth-dependent flow patterns. Interactive analysis tools are provided which allow the user to experiment through the introduction of their customizable virtual dye particles into the models to explore regions of interest. A multi-touch interface provides natural, efficient interaction, with custom multi-touch gestures simplifying the otherwise challenging tasks of navigating and positioning tools within a 3D environment. We demonstrate the potential applications of our visual analysis environment with two examples of real-world significance: Firstly, an example of using customized particles with physics-based behaviors to simulate pollutant release scenarios, including predicting the oil plume path for the 2010 Deepwater Horizon oil spill disaster. Secondly, an interactive tool for plotting and revising proposed autonomous underwater vehicle mission pathlines with respect to the surrounding flow patterns predicted by the model; as these survey vessels have extremely limited energy budgets, designing more efficient paths allows for greater survey areas.
Viewpoints: Interactive Exploration of Large Multivariate Earth and Space Science Data Sets
NASA Astrophysics Data System (ADS)
Levit, C.; Gazis, P. R.
2006-05-01
Analysis and visualization of extremely large and complex data sets may be one of the most significant challenges facing earth and space science investigators in the forthcoming decades. While advances in hardware speed and storage technology have roughly kept up with (indeed, have driven) increases in database size, the same is not true of our ability to manage the complexity of these data. Current missions, instruments, and simulations produce so much data of such high dimensionality that they outstrip the capabilities of traditional visualization and analysis software. This problem can only be expected to get worse as data volumes increase by orders of magnitude in future missions and in ever-larger supercomputer simulations. For large multivariate data (more than 10^5 samples or records with more than 5 variables per sample) the interactive graphics response of most existing statistical analysis, machine learning, exploratory data analysis, and/or visualization tools such as Torch, MLC++, Matlab, S++/R, and IDL stutters, stalls, or stops working altogether. Fortunately, the graphics processing units (GPUs) built in to all professional desktop and laptop computers currently on the market are capable of transforming, filtering, and rendering hundreds of millions of points per second. We present a prototype open-source cross-platform application which leverages much of the power latent in the GPU to enable smooth interactive exploration and analysis of large high-dimensional data using a variety of classical and recent techniques. The targeted application is the interactive analysis of large, complex, multivariate data sets, with dimensionalities that may surpass 100 and sample sizes that may exceed 10^6-10^8.
A methodology for coupling a visual enhancement device to human visual attention
NASA Astrophysics Data System (ADS)
Todorovic, Aleksandar; Black, John A., Jr.; Panchanathan, Sethuraman
2009-02-01
The Human Variation Model views disability as simply "an extension of the natural physical, social, and cultural variability of mankind." Given this human variation, it can be difficult to distinguish between a prosthetic device such as a pair of glasses (which extends limited visual abilities into the "normal" range) and a visual enhancement device such as a pair of binoculars (which extends visual abilities beyond the "normal" range). Indeed, there is no inherent reason why the design of visual prosthetic devices should be limited to just providing "normal" vision. One obvious enhancement to human vision would be the ability to visually "zoom" in on objects that are of particular interest to the viewer. Indeed, it could be argued that humans already have a limited zoom capability, which is provided by their high-resolution foveal vision. However, humans still find additional zooming useful, as evidenced by their purchases of binoculars equipped with mechanized zoom features. The fact that these zoom features are manually controlled raises two questions: (1) Could a visual enhancement device be developed to monitor attention and control visual zoom automatically? (2) If such a device were developed, would its use be experienced by users as a simple extension of their natural vision? This paper details the results of work with two research platforms called the Remote Visual Explorer (ReVEx) and the Interactive Visual Explorer (InVEx) that were developed specifically to answer these two questions.
Touch influences perceived gloss
Adams, Wendy J.; Kerrigan, Iona S.; Graf, Erich W.
2016-01-01
Identifying an object’s material properties supports recognition and action planning: we grasp objects according to how heavy, hard or slippery we expect them to be. Visual cues to material qualities such as gloss have recently received attention, but how they interact with haptic (touch) information has been largely overlooked. Here, we show that touch modulates gloss perception: objects that feel slippery are perceived as glossier (more shiny). Participants explored virtual objects that varied in look and feel. A discrimination paradigm (Experiment 1) revealed that observers integrate visual gloss with haptic information. Observers could easily detect an increase in glossiness when it was paired with a decrease in friction. In contrast, increased glossiness coupled with decreased slipperiness produced a small perceptual change: the visual and haptic changes counteracted each other. Subjective ratings (Experiment 2) reflected a similar interaction – slippery objects were rated as glossier and vice versa. The sensory system treats visual gloss and haptic friction as correlated cues to surface material. Although friction is not a perfect predictor of gloss, the visual system appears to know and use a probabilistic relationship between these variables to bias perception – a sensible strategy given the ambiguity of visual cues to gloss. PMID:26915492
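The integration result above is often described with the standard maximum-likelihood cue-combination rule, in which each cue is weighted by its reliability (inverse variance). The sketch below is that textbook model rather than the authors' stated analysis, and the numerical values are made up for illustration.

```python
def combine_cues(estimate_visual, var_visual, estimate_haptic, var_haptic):
    """Reliability-weighted (maximum-likelihood) combination of two cues."""
    w_visual = (1.0 / var_visual) / (1.0 / var_visual + 1.0 / var_haptic)
    w_haptic = 1.0 - w_visual
    combined = w_visual * estimate_visual + w_haptic * estimate_haptic
    combined_var = 1.0 / (1.0 / var_visual + 1.0 / var_haptic)
    return combined, combined_var

# A slippery feel (read as "glossy") pulls the combined percept upward
# even when the visual gloss estimate on its own is only moderate.
gloss, gloss_var = combine_cues(estimate_visual=0.5, var_visual=0.04,
                                estimate_haptic=0.8, var_haptic=0.08)
```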
BiNA: A Visual Analytics Tool for Biological Network Data
Gerasch, Andreas; Faber, Daniel; Küntzer, Jan; Niermann, Peter; Kohlbacher, Oliver; Lenhof, Hans-Peter; Kaufmann, Michael
2014-01-01
Interactive visual analysis of biological high-throughput data in the context of the underlying networks is an essential task in modern biomedicine with applications ranging from metabolic engineering to personalized medicine. The complexity and heterogeneity of data sets require flexible software architectures for data analysis. Concise and easily readable graphical representation of data and interactive navigation of large data sets are essential in this context. We present BiNA - the Biological Network Analyzer - a flexible open-source software for analyzing and visualizing biological networks. Highly configurable visualization styles for regulatory and metabolic network data offer sophisticated drawings and intuitive navigation and exploration techniques using hierarchical graph concepts. The generic projection and analysis framework provides powerful functionalities for visual analyses of high-throughput omics data in the context of networks, in particular for the differential analysis and the analysis of time series data. A direct interface to an underlying data warehouse provides fast access to a wide range of semantically integrated biological network databases. A plugin system allows simple customization and integration of new analysis algorithms or visual representations. BiNA is available under the 3-clause BSD license at http://bina.unipax.info/. PMID:24551056
Integrative Analysis of Complex Cancer Genomics and Clinical Profiles Using the cBioPortal
Gao, Jianjiong; Aksoy, Bülent Arman; Dogrusoz, Ugur; Dresdner, Gideon; Gross, Benjamin; Sumer, S. Onur; Sun, Yichao; Jacobsen, Anders; Sinha, Rileen; Larsson, Erik; Cerami, Ethan; Sander, Chris; Schultz, Nikolaus
2014-01-01
The cBioPortal for Cancer Genomics (http://cbioportal.org) provides a Web resource for exploring, visualizing, and analyzing multidimensional cancer genomics data. The portal reduces molecular profiling data from cancer tissues and cell lines into readily understandable genetic, epigenetic, gene expression, and proteomic events. The query interface combined with customized data storage enables researchers to interactively explore genetic alterations across samples, genes, and pathways and, when available in the underlying data, to link these to clinical outcomes. The portal provides graphical summaries of gene-level data from multiple platforms, network visualization and analysis, survival analysis, patient-centric queries, and software programmatic access. The intuitive Web interface of the portal makes complex cancer genomics profiles accessible to researchers and clinicians without requiring bioinformatics expertise, thus facilitating biological discoveries. Here, we provide a practical guide to the analysis and visualization features of the cBioPortal for Cancer Genomics. PMID:23550210
Visual Exploration of Genetic Association with Voxel-based Imaging Phenotypes in an MCI/AD Study
Kim, Sungeun; Shen, Li; Saykin, Andrew J.; West, John D.
2010-01-01
Neuroimaging genomics is a new transdisciplinary research field, which aims to examine genetic effects on brain via integrated analyses of high throughput neuroimaging and genomic data. We report our recent work on (1) developing an imaging genomic browsing system that allows for whole genome and entire brain analyses based on visual exploration and (2) applying the system to the imaging genomic analysis of an existing MCI/AD cohort. Voxel-based morphometry is used to define imaging phenotypes. ANCOVA is employed to evaluate the effect of the interaction of genotypes and diagnosis in relation to imaging phenotypes while controlling for relevant covariates. Encouraging experimental results suggest that the proposed system has substantial potential for enabling discovery of imaging genomic associations through visual evaluation and for localizing candidate imaging regions and genomic regions for refined statistical modeling. PMID:19963597
WORDGRAPH: Keyword-in-Context Visualization for NETSPEAK's Wildcard Search.
Riehmann, Patrick; Gruendl, Henning; Potthast, Martin; Trenkmann, Martin; Stein, Benno; Froehlich, Benno
2012-09-01
The WORDGRAPH helps writers in visually choosing phrases while writing a text. It checks for the commonness of phrases and allows for the retrieval of alternatives by means of wildcard queries. To support such queries, we implement a scalable retrieval engine, which returns high-quality results within milliseconds using a probabilistic retrieval strategy. The results are displayed as WORDGRAPH visualization or as a textual list. The graphical interface provides an effective means for interactive exploration of search results using filter techniques, query expansion, and navigation. Our observations indicate that, of three investigated retrieval tasks, the textual interface is sufficient for the phrase verification task, whereas both interfaces support context-sensitive word choice, and the WORDGRAPH best supports the exploration of a phrase's context or the underlying corpus. Our user study confirms these observations and shows that WORDGRAPH is generally the preferred interface over the textual result list for queries containing multiple wildcards.
Photorealistic virtual anatomy based on Chinese Visible Human data.
Heng, P A; Zhang, S X; Xie, Y M; Wong, T T; Chui, Y P; Cheng, C Y
2006-04-01
Virtual reality based learning of human anatomy is feasible when a database of 3D organ models is available for the learner to explore, visualize, and dissect in virtual space interactively. In this article, we present our latest work on photorealistic virtual anatomy applications based on the Chinese Visible Human (CVH) data. We have focused on the development of state-of-the-art virtual environments that feature interactive photo-realistic visualization and dissection of virtual anatomical models constructed from ultra-high resolution CVH datasets. We also outline our latest progress in applying these highly accurate virtual and functional organ models to generate realistic look and feel to advanced surgical simulators. (c) 2006 Wiley-Liss, Inc.
ERIC Educational Resources Information Center
Charleer, Sven; Klerkx, Joris; Duval, Erik
2014-01-01
This article explores how information visualization techniques can be applied to learning analytics data to help teachers and students deal with the abundance of learner traces. We also investigate how the affordances of large interactive surfaces can facilitate a collaborative sense-making environment for multiple students and teachers to explore…
Visualization of geologic stress perturbations using Mohr diagrams.
Crossno, Patricia; Rogers, David H; Brannon, Rebecca M; Coblentz, David; Fredrich, Joanne T
2005-01-01
Huge salt formations, trapping large untapped oil and gas reservoirs, lie in the deepwater region of the Gulf of Mexico. Drilling in this region is high-risk and drilling failures have led to well abandonments, with each costing tens of millions of dollars. Salt tectonics plays a central role in these failures. To explore the geomechanical interactions between salt and the surrounding sand and shale formations, scientists have simulated the stresses in and around salt diapirs in the Gulf of Mexico using nonlinear finite element geomechanical modeling. In this paper, we describe novel techniques developed to visualize the simulated subsurface stress field. We present an adaptation of the Mohr diagram, a traditional paper-and-pencil graphical method long used by the material mechanics community for estimating coordinate transformations for stress tensors, as a new tensor glyph for dynamically exploring tensor variables within three-dimensional finite element models. This interactive glyph can be used as either a probe or a filter through brushing and linking.
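As an aside, the quantities a Mohr diagram encodes are straightforward to compute; the sketch below (not the authors' glyph implementation, and with a made-up stress tensor) derives principal stresses and the three Mohr circles with NumPy.

```python
# Illustrative sketch: compute the principal stresses and Mohr circles of a
# symmetric 3x3 stress tensor. The tensor values are made up for illustration.
import numpy as np

sigma = np.array([[30.0,  5.0,  0.0],   # example stress tensor (MPa)
                  [ 5.0, 20.0,  3.0],
                  [ 0.0,  3.0, 10.0]])

# Principal stresses are the eigenvalues of the symmetric stress tensor.
principal = np.sort(np.linalg.eigvalsh(sigma))[::-1]  # sigma1 >= sigma2 >= sigma3
s1, s2, s3 = principal

# Each Mohr circle is defined by a pair of principal stresses:
# center = (si + sj) / 2 on the normal-stress axis, radius = (si - sj) / 2.
circles = [((a + b) / 2.0, (a - b) / 2.0) for a, b in [(s1, s3), (s1, s2), (s2, s3)]]
for center, radius in circles:
    print(f"center = {center:6.2f} MPa, radius = {radius:6.2f} MPa")
```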
Visual Systems for Interactive Exploration and Mining of Large-Scale Neuroimaging Data Archives
Bowman, Ian; Joshi, Shantanu H.; Van Horn, John D.
2012-01-01
While technological advancements in neuroimaging scanner engineering have improved the efficiency of data acquisition, electronic data capture methods will likewise significantly expedite the populating of large-scale neuroimaging databases. As they do, and as these archives grow in size, a particular challenge lies in examining and interacting with the information that these resources contain through the development of compelling, user-driven approaches for data exploration and mining. In this article, we introduce the informatics visualization for neuroimaging (INVIZIAN) framework for the graphical rendering of, and dynamic interaction with, the contents of large-scale neuroimaging data sets. We describe the rationale behind INVIZIAN, detail its development, and demonstrate its usage in examining a collection of over 900 T1-anatomical magnetic resonance imaging (MRI) image volumes from across a diverse set of clinical neuroimaging studies drawn from a leading neuroimaging database. Using a collection of cortical surface metrics and means for examining brain similarity, INVIZIAN graphically displays brain surfaces as points in a coordinate space and enables classification of clusters of neuroanatomically similar MRI images and data mining. As an initial step toward addressing the need for such user-friendly tools, INVIZIAN provides a highly unique means to interact with large quantities of electronic brain imaging archives in ways suitable for hypothesis generation and data mining. PMID:22536181
Interactions of attention, emotion and motivation.
Raymond, Jane
2009-01-01
Although successful visually guided action begins with sensory processes and ends with motor control, the intervening processes related to the appropriate selection of information for processing are especially critical because of the brain's limited capacity to handle information. Three important mechanisms--attention, emotion and motivation--contribute to the prioritization and selection of information. In this chapter, the interplay between these systems is discussed with emphasis placed on interactions between attention (or immediate task relevance of stimuli) and emotion (or affective evaluation of stimuli), and between attention and motivation (or the predicted value of stimuli). Although numerous studies have shown that emotional stimuli modulate mechanisms of selective attention in humans, little work has been directed at exploring whether such interactions can be reciprocal, that is, whether attention can influence emotional response. Recent work on this question (showing that distracting information is typically devalued upon later encounters) is reviewed in the first half of the chapter. In the second half, recent experiments are reviewed that explore how prior value-prediction learning (i.e., learning to associate potential outcomes, good or bad, with specific stimuli) plays a role in visual selection and conscious perception. The results indicate that some aspects of motivation act on selection independently of traditionally defined attention and other aspects interact with it.
Olivers, Christian N L; Meijer, Frank; Theeuwes, Jan
2006-10-01
In 7 experiments, the authors explored whether visual attention (the ability to select relevant visual information) and visual working memory (the ability to retain relevant visual information) share the same content representations. The presence of singleton distractors interfered more strongly with a visual search task when it was accompanied by an additional memory task. Singleton distractors interfered even more when they were identical or related to the object held in memory, but only when it was difficult to verbalize the memory content. Furthermore, this content-specific interaction occurred for features that were relevant to the memory task but not for irrelevant features of the same object or for once-remembered objects that could be forgotten. Finally, memory-related distractors attracted more eye movements but did not result in longer fixations. The results demonstrate memory-driven attentional capture on the basis of content-specific representations. Copyright 2006 APA.
MaGnET: Malaria Genome Exploration Tool
Sharman, Joanna L.; Gerloff, Dietlind L.
2013-01-01
Summary: The Malaria Genome Exploration Tool (MaGnET) is a software tool enabling intuitive ‘exploration-style’ visualization of functional genomics data relating to the malaria parasite, Plasmodium falciparum. MaGnET provides innovative integrated graphic displays for different datasets, including genomic location of genes, mRNA expression data, protein–protein interactions and more. Any selection of genes to explore made by the user is easily carried over between the different viewers for different datasets, and can be changed interactively at any point (without returning to a search). Availability and Implementation: Free online use (Java Web Start) or download (Java application archive and MySQL database; requires local MySQL installation) at http://malariagenomeexplorer.org Contact: joanna.sharman@ed.ac.uk or dgerloff@ffame.org Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23894142
Abstracting Attribute Space for Transfer Function Exploration and Design.
Maciejewski, Ross; Jang, Yun; Woo, Insoo; Jänicke, Heike; Gaither, Kelly P; Ebert, David S
2013-01-01
Currently, user centered transfer function design begins with the user interacting with a one or two-dimensional histogram of the volumetric attribute space. The attribute space is visualized as a function of the number of voxels, allowing the user to explore the data in terms of the attribute size/magnitude. However, such visualizations provide the user with no information on the relationship between various attribute spaces (e.g., density, temperature, pressure, x, y, z) within the multivariate data. In this work, we propose a modification to the attribute space visualization in which the user is no longer presented with the magnitude of the attribute; instead, the user is presented with an information metric detailing the relationship between attributes of the multivariate volumetric data. In this way, the user can guide their exploration based on the relationship between the attribute magnitude and user selected attribute information as opposed to being constrained by only visualizing the magnitude of the attribute. We refer to this modification to the traditional histogram widget as an abstract attribute space representation. Our system utilizes common one and two-dimensional histogram widgets where the bins of the abstract attribute space now correspond to an attribute relationship in terms of the mean, standard deviation, entropy, or skewness. In this manner, we exploit the relationships and correlations present in the underlying data with respect to the dimension(s) under examination. These relationships are often times key to insight and allow us to guide attribute discovery as opposed to automatic extraction schemes which try to calculate and extract distinct attributes a priori. In this way, our system aids in the knowledge discovery of the interaction of properties within volumetric data.
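The abstract attribute space idea (summarizing a second attribute per histogram bin by mean, standard deviation, or entropy rather than by voxel counts) can be illustrated with a short sketch; the data and attribute names below are synthetic, not from the paper.

```python
# Sketch of the general idea behind an "abstract attribute space": bin one
# attribute (here "density") and, per bin, summarize a second attribute
# ("temperature") by mean, standard deviation, and entropy. Synthetic data.
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)
density = rng.random(100_000)
temperature = density * 300 + rng.normal(0, 20, size=density.size)

bins = np.linspace(0.0, 1.0, 33)          # 32 histogram bins over density
idx = np.digitize(density, bins) - 1

for b in range(len(bins) - 1):
    t = temperature[idx == b]
    if t.size == 0:
        continue
    hist, _ = np.histogram(t, bins=16)     # discretize temperature within the bin
    probs = hist / hist.sum()
    h = entropy(probs[probs > 0])          # Shannon entropy of the distribution
    print(f"bin {b:2d}: mean={t.mean():7.1f}  std={t.std():6.1f}  entropy={h:4.2f}")
```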
Spatial Visualization in Introductory Geology Courses
NASA Astrophysics Data System (ADS)
Reynolds, S. J.
2004-12-01
Visualization is critical to solving most geologic problems, which involve events and processes across a broad range of space and time. Accordingly, spatial visualization is an essential part of undergraduate geology courses. In such courses, students learn to visualize three-dimensional topography from two-dimensional contour maps, to observe landscapes and extract clues about how that landscape formed, and to imagine the three-dimensional geometries of geologic structures and how these are expressed on the Earth's surface or on geologic maps. From such data, students reconstruct the geologic history of areas, trying to visualize the sequence of ancient events that formed a landscape. To understand the role of visualization in student learning, we developed numerous interactive QuickTime Virtual Reality animations to teach students the most important visualization skills and approaches. For topography, students can spin and tilt contour-draped, shaded-relief terrains, flood virtual landscapes with water, and slice into terrains to understand profiles. To explore 3D geometries of geologic structures, they interact with virtual blocks that can be spun, sliced into, faulted, and made partially transparent to reveal internal structures. They can tilt planes to see how they interact with topography, and spin and tilt geologic maps draped over digital topography. The GeoWall system allows students to see some of these materials in true stereo. We used various assessments to research the effectiveness of these materials and to document visualization strategies students use. Our research indicates that, compared to control groups, students using such materials improve more in their geologic visualization abilities and in their general visualization abilities as measured by a standard spatial visualization test. Also, females achieve greater gains, improving their general visualization abilities to the same level as males. Misconceptions that students carry obstruct learning, but are largely undocumented. Many students, for example, cannot visualize that the landscape in which rock layers were deposited was different than the landscape in which the rocks are exposed today, even in the Grand Canyon.
A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
2016-01-01
Objective Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today’s keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users’ information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. Materials and Methods The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. Results The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Conclusion Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single or a collection of videos. PMID:26335986
Virtual Environments in Scientific Visualization
NASA Technical Reports Server (NTRS)
Bryson, Steve; Lisinski, T. A. (Technical Monitor)
1994-01-01
Virtual environment technology is a new way of approaching the interface between computers and humans. Emphasizing display and user control that conforms to the user's natural ways of perceiving and thinking about space, virtual environment technologies enhance the ability to perceive and interact with computer-generated graphic information. This enhancement potentially has a major effect on the field of scientific visualization. Current examples of this technology include the Virtual Windtunnel being developed at NASA Ames Research Center. Other major institutions such as the National Center for Supercomputing Applications and SRI International are also exploring this technology. This talk will describe several implementations of virtual environments for use in scientific visualization. Examples include the visualization of unsteady fluid flows (the virtual windtunnel), the visualization of geodesics in curved spacetime, surface manipulation, and examples developed at various laboratories.
Explanatory and illustrative visualization of special and general relativity.
Weiskopf, Daniel; Borchers, Marc; Ertl, Thomas; Falk, Martin; Fechtig, Oliver; Frank, Regine; Grave, Frank; King, Andreas; Kraus, Ute; Müller, Thomas; Nollert, Hans-Peter; Rica Mendez, Isabel; Ruder, Hanns; Schafhitzel, Tobias; Schär, Sonja; Zahn, Corvin; Zatloukal, Michael
2006-01-01
This paper describes methods for explanatory and illustrative visualizations used to communicate aspects of Einstein's theories of special and general relativity, their geometric structure, and of the related fields of cosmology and astrophysics. Our illustrations target a general audience of laypersons interested in relativity. We discuss visualization strategies, motivated by physics education and the didactics of mathematics, and describe what kind of visualization methods have proven to be useful for different types of media, such as still images in popular science magazines, film contributions to TV shows, oral presentations, or interactive museum installations. Our primary approach is to adopt an egocentric point of view: The recipients of a visualization participate in a visually enriched thought experiment that allows them to experience or explore a relativistic scenario. In addition, we often combine egocentric visualizations with more abstract illustrations based on an outside view in order to provide several presentations of the same phenomenon. Although our visualization tools often build upon existing methods and implementations, the underlying techniques have been improved by several novel technical contributions like image-based special relativistic rendering on GPUs, special relativistic 4D ray tracing for accelerating scene objects, an extension of general relativistic ray tracing to manifolds described by multiple charts, GPU-based interactive visualization of gravitational light deflection, as well as planetary terrain rendering. The usefulness and effectiveness of our visualizations are demonstrated by reporting on experiences with, and feedback from, recipients of visualizations and collaborators.
Stepping Into Science Data: Data Visualization in Virtual Reality
NASA Astrophysics Data System (ADS)
Skolnik, S.
2017-12-01
Have you ever seen people get really excited about science data? Navteca, along with the Earth Science Technology Office (ESTO) within the Earth Science Division of NASA's Science Mission Directorate, has been exploring virtual reality (VR) technology for the next generation of Earth science technology information systems. One of their first joint experiments was visualizing climate data from the Goddard Earth Observing System Model (GEOS) in VR, and the resulting visualizations greatly excited the scientific community. This presentation will share the value of VR for science, such as the capability of permitting the observer to interact with data rendered in real-time, make selections, and view volumetric data in an innovative way. Using interactive VR hardware (headset and controllers), the viewer steps into the data visualizations, physically moving through three-dimensional structures that are traditionally displayed as layers or slices, such as cloud and storm systems from NASA's Global Precipitation Measurement (GPM). Results from displaying this precipitation and cloud data show that there is interesting potential for scientific visualization, 3D/4D visualizations, and inter-disciplinary studies using VR. Additionally, VR visualizations can be leveraged as 360-degree content for scientific communication and outreach, and VR can be used as a tool to engage policy and decision makers, as well as the public.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Beaver, Justin M; Bogen II, Paul L.
In this paper, we introduce a new visual analytics system, called Matisse, that allows exploration of global trends in textual information streams with specific application to social media platforms. Despite the potential for real-time situational awareness using these services, interactive analysis of such semi-structured textual information is a challenge due to the high-throughput and high-velocity properties. Matisse addresses these challenges through the following contributions: (1) robust stream data management, (2) automated sentiment/emotion analytics, (3) inferential temporal, geospatial, and term-frequency visualizations, and (4) a flexible drill-down interaction scheme that progresses from macroscale to microscale views. In addition to describing these contributions, our work-in-progress paper concludes with a practical case study focused on the analysis of Twitter 1% sample stream information captured during the week of the Boston Marathon bombings.
Discovering Tradeoffs, Vulnerabilities, and Dependencies within Water Resources Systems
NASA Astrophysics Data System (ADS)
Reed, P. M.
2015-12-01
There is a growing recognition and interest in using emerging computational tools for discovering the tradeoffs that emerge across complex combinations of infrastructure options, adaptive operations, and signposts. As a field concerned with "deep uncertainties", it is logically consistent to include a more direct acknowledgement that our choices for dealing with computationally demanding simulations, advanced search algorithms, and sensitivity analysis tools are themselves subject to failures that could adversely bias our understanding of how systems' vulnerabilities change with proposed actions. Balancing simplicity versus complexity in our computational frameworks is nontrivial given that we are often exploring high-impact, irreversible decisions. It is not always clear that accepted models even encompass important failure modes. Moreover, as they become more complex and computationally demanding, the benefits and consequences of simplifications are often untested. This presentation discusses our efforts to address these challenges through our "many-objective robust decision making" (MORDM) framework for the design and management water resources systems. The MORDM framework has four core components: (1) elicited problem conception and formulation, (2) parallel many-objective search, (3) interactive visual analytics, and (4) negotiated selection of robust alternatives. Problem conception and formulation is the process of abstracting a practical design problem into a mathematical representation. We build on the emerging work in visual analytics to exploit interactive visualization of both the design space and the objective space in multiple heterogeneous linked views that permit exploration and discovery. Many-objective search produces tradeoff solutions from potentially competing problem formulations that can each consider up to ten conflicting objectives based on current computational search capabilities. Negotiated design selection uses interactive visualization, reformulation, and optimization to discover desirable designs for implementation. Multi-city urban water supply portfolio planning will be used to illustrate the MORDM framework.
Data Visualization and Analysis for Climate Studies using NASA Giovanni Online System
NASA Technical Reports Server (NTRS)
Rui, Hualan; Leptoukh, Gregory; Lloyd, Steven
2008-01-01
With many global earth observation systems and missions focused on climate systems and the associated large volumes of observational data available for exploring and explaining how climate is changing and why, there is an urgent need for climate services. Giovanni, the NASA GES DISC Interactive Online Visualization ANd ANalysis Infrastructure, is a simple-to-use yet powerful tool for analysing these data for research on global warming and climate change, as well as for applications to weather, air quality, agriculture, and water resources.
NASA Astrophysics Data System (ADS)
Engelhardt, Larry
2015-12-01
We discuss how computers can be used to solve the ordinary differential equations that provide a quantum mechanical description of magnetic resonance. By varying the parameters in these equations and visually exploring how these parameters affect the results, students can quickly gain insights into the nature of magnetic resonance that go beyond the standard presentation found in quantum mechanics textbooks. The results were generated using an IPython notebook, which we provide as an online supplement with interactive plots and animations.
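A minimal sketch in the same spirit, assuming the standard Bloch-equation description of magnetic resonance rather than the authors' notebook: integrate the undamped Bloch equation with SciPy and observe the longitudinal magnetization invert when the drive is on resonance.

```python
# Integrate the undamped Bloch equation dM/dt = gamma * (M x B(t)) for a spin
# in a static field B0 plus a rotating field B1. Parameter values are
# illustrative; detuning omega away from gamma*B0 suppresses the inversion.
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1.0          # gyromagnetic ratio (arbitrary units)
B0, B1 = 10.0, 0.5   # static and rotating field amplitudes
omega = gamma * B0   # drive frequency, set on resonance here

def bloch(t, M):
    B = np.array([B1 * np.cos(omega * t), -B1 * np.sin(omega * t), B0])
    return gamma * np.cross(M, B)

t_end = 4 * np.pi / (gamma * B1)                  # roughly two Rabi periods
sol = solve_ivp(bloch, (0.0, t_end), [0.0, 0.0, 1.0],
                dense_output=True, max_step=0.01)

t = np.linspace(0.0, t_end, 500)
Mz = sol.sol(t)[2]
print("minimum Mz on resonance:", Mz.min())       # approaches -1 when omega = gamma*B0
```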
Balconi, Michela; Vanutelli, Maria Elide
2016-01-01
The brain activity, considered in its hemodynamic (optical imaging: functional Near-Infrared Spectroscopy, fNIRS) and electrophysiological components (event-related potentials, ERPs, N200), was monitored when subjects observed (visual stimulation, V) or observed and heard (visual + auditory stimulation, VU) situations which represented inter-species (human-animal) interactions, with an emotionally positive (cooperative) or negative (uncooperative) content. In addition, the cortical lateralization effect (more left or right dorsolateral prefrontal cortex, DLPFC) was explored. Both ERP and fNIRS showed significant effects due to emotional interactions, which were discussed in light of cross-modal integration effects. The significance of the inter-species effect for emotional behavior was considered. In addition, hemodynamic and EEG consonant results and their value as integrated measures were discussed in light of the valence effect. PMID:26976052
Real-time, interactive, visually updated simulator system for telepresence
NASA Technical Reports Server (NTRS)
Schebor, Frederick S.; Turney, Jerry L.; Marzwell, Neville I.
1991-01-01
Time delays and limited sensory feedback of remote telerobotic systems tend to disorient teleoperators and dramatically decrease the operator's performance. To remove the effects of time delays, key components were designed and developed of a prototype forward simulation subsystem, the Global-Local Environment Telerobotic Simulator (GLETS) that buffers the operator from the remote task. GLETS totally immerses an operator in a real-time, interactive, simulated, visually updated artificial environment of the remote telerobotic site. Using GLETS, the operator will, in effect, enter into a telerobotic virtual reality and can easily form a gestalt of the virtual 'local site' that matches the operator's normal interactions with the remote site. In addition to use in space based telerobotics, GLETS, due to its extendable architecture, can also be used in other teleoperational environments such as toxic material handling, construction, and undersea exploration.
Zelenyuk, Alla; Imre, Dan; Wilson, Jacqueline; Zhang, Zhiyuan; Wang, Jun; Mueller, Klaus
2015-02-01
Understanding the effect of aerosols on climate requires knowledge of the size and chemical composition of individual aerosol particles-two fundamental properties that determine an aerosol's optical properties and ability to serve as cloud condensation or ice nuclei. Here we present our aircraft-compatible single particle mass spectrometers, SPLAT II and its new, miniaturized version, miniSPLAT that measure in-situ and in real-time the size and chemical composition of individual aerosol particles with extremely high sensitivity, temporal resolution, and sizing precision on the order of a monolayer. Although miniSPLAT's size, weight, and power consumption are significantly smaller, its performance is on par with SPLAT II. Both instruments operate in dual data acquisition mode to measure, in addition to single particle size and composition, particle number concentrations, size distributions, density, and asphericity with high temporal resolution. We also present ND-Scope, our newly developed interactive visual analytics software package. ND-Scope is designed to explore and visualize the vast amount of complex, multidimensional data acquired by our single particle mass spectrometers, along with other aerosol and cloud characterization instruments on-board aircraft. We demonstrate that ND-Scope makes it possible to visualize the relationships between different observables and to view the data in a geo-spatial context, using the interactive and fully coupled Google Earth and Parallel Coordinates displays. Here we illustrate the utility of ND-Scope to visualize the spatial distribution of atmospheric particles of different compositions, and explore the relationship between individual particle compositions and their activity as cloud condensation nuclei.
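The parallel-coordinates view that ND-Scope couples with Google Earth can be approximated, for illustration only, with pandas and matplotlib on synthetic per-particle observables; this is not the ND-Scope implementation.

```python
# Small sketch of a parallel-coordinates display of per-particle observables.
# All values and class labels are synthetic.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "size_um":     rng.lognormal(mean=-1.0, sigma=0.4, size=n),
    "sulfate":     rng.random(n),
    "organics":    rng.random(n),
    "asphericity": rng.random(n),
})
# Label particles by a simple composition rule to mimic class coloring.
df["class"] = np.where(df["sulfate"] > df["organics"], "sulfate-rich", "organic-rich")

parallel_coordinates(df, class_column="class", alpha=0.3)
plt.title("Per-particle observables (synthetic)")
plt.show()
```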
SOSPEX, an interactive tool to explore SOFIA spectral cubes
NASA Astrophysics Data System (ADS)
Fadda, Dario; Chambers, Edward T.
2018-01-01
We present SOSPEX (SOFIA SPectral EXplorer), an interactive tool to visualize and analyze spectral cubes obtained with the FIFI-LS and GREAT instruments onboard the SOFIA Infrared Observatory. This software package is written in Python 3 and it is available either through Github or Anaconda. Through this GUI it is possible to explore directly the spectral cubes produced by the SOFIA pipeline and archived in the SOFIA Science Archive. Spectral cubes are visualized showing their spatial and spectral dimensions in two different windows. By selecting a part of the spectrum, the flux from the corresponding slice of the cube is visualized in the spatial window. On the other hand, it is possible to define apertures on the spatial window to show the corresponding spectral energy distribution in the spectral window. Flux isocontours can be overlaid on external images in the spatial window while line names, atmospheric transmission, or external spectra can be overplotted on the spectral window. Atmospheric models with specific parameters can be retrieved, compared to the spectra and applied to the uncorrected FIFI-LS cubes in the cases where the standard values give unsatisfactory results. Subcubes can be selected and saved as FITS files by cropping or cutting the original cubes. Lines and continuum can be fitted in the spectral window, saving the results in JSON files which can be reloaded later. Finally, in the case of spatially extended observations, it is possible to compute spectral moments as a function of position to obtain velocity dispersion maps or velocity diagrams.
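The core operations SOSPEX exposes interactively (collapsing a wavelength range into a map and extracting an aperture spectrum) reduce to simple array manipulations; the sketch below uses astropy on a hypothetical FITS cube and does not reflect the actual FIFI-LS or GREAT product layout.

```python
# Sketch: load a spectral cube from a FITS file, collapse a wavelength slice
# into a map, and extract a spectrum from a small box aperture. The file name
# and HDU layout are hypothetical.
import numpy as np
from astropy.io import fits

with fits.open("sofia_cube.fits") as hdul:      # hypothetical file
    cube = hdul[0].data                          # assume shape (n_wave, ny, nx)

# Spatial view: average flux over a chosen range of spectral channels.
flux_map = np.nanmean(cube[100:120, :, :], axis=0)

# Spectral view: mean spectrum inside a small box aperture around (y0, x0).
y0, x0, r = 32, 40, 3
spectrum = np.nanmean(cube[:, y0 - r:y0 + r + 1, x0 - r:x0 + r + 1], axis=(1, 2))

print(flux_map.shape, spectrum.shape)
```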
The DIMA web resource--exploring the protein domain network.
Pagel, Philipp; Oesterheld, Matthias; Stümpflen, Volker; Frishman, Dmitrij
2006-04-15
Conserved domains represent essential building blocks of most known proteins. Owing to their role as modular components carrying out specific functions they form a network based both on functional relations and direct physical interactions. We have previously shown that domain interaction networks provide substantially novel information with respect to networks built on full-length protein chains. In this work we present a comprehensive web resource for exploring the Domain Interaction MAp (DIMA), interactively. The tool aims at integration of multiple data sources and prediction techniques, two of which have been implemented so far: domain phylogenetic profiling and experimentally demonstrated domain contacts from known three-dimensional structures. A powerful yet simple user interface enables the user to compute, visualize, navigate and download domain networks based on specific search criteria. http://mips.gsf.de/genre/proj/dima
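Domain phylogenetic profiling, one of the two prediction techniques DIMA integrates, rests on comparing presence/absence patterns across genomes; the toy sketch below (with made-up profiles, not DIMA data) shows the basic similarity computation.

```python
# Toy sketch of phylogenetic profiling for domains: domains with similar
# presence/absence patterns across genomes are candidate functional partners.
# The profiles below are invented for illustration.
import numpy as np

genomes = ["E.coli", "B.subtilis", "S.cerevisiae", "H.sapiens", "A.thaliana", "M.jannaschii"]
profiles = {
    "PF00069": np.array([1, 1, 1, 1, 1, 0]),   # hypothetical domain profiles
    "PF00169": np.array([0, 1, 1, 1, 1, 0]),
    "PF00072": np.array([1, 1, 0, 0, 0, 1]),
}

def jaccard(a, b):
    """Jaccard similarity of two binary presence/absence vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

names = list(profiles)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        s = jaccard(profiles[names[i]], profiles[names[j]])
        print(f"{names[i]} -- {names[j]}: similarity {s:.2f}")
```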
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillen, David S.
Analysis activities for Nonproliferation and Arms Control verification require the use of many types of data. Tabular structured data, such as Excel spreadsheets and relational databases, have traditionally been used for data mining activities, where specific queries are issued against data to look for matching results. The application of visual analytics tools to structured data enables further exploration of datasets to promote discovery of previously unknown results. This paper discusses the application of a specific visual analytics tool to datasets related to the field of Arms Control and Nonproliferation to promote the use of visual analytics more broadly in this domain. Visual analytics focuses on analytical reasoning facilitated by interactive visual interfaces (Wong and Thomas 2004). It promotes exploratory analysis of data, and complements data mining technologies where known patterns can be mined for. Also, with a human in the loop, analysts can bring in domain knowledge and subject matter expertise. Visual analytics has not widely been applied to this domain. In this paper, we will focus on one type of data: structured data, and show the results of applying a specific visual analytics tool to answer questions in the Arms Control and Nonproliferation domain. We chose to use the T.Rex tool, a visual analytics tool developed at PNNL, which uses a variety of visual exploration patterns to discover relationships in structured datasets, including a facet view, graph view, matrix view, and timeline view. The facet view enables discovery of relationships between categorical information, such as countries and locations. The graph tool visualizes node-link relationship patterns, such as the flow of materials being shipped between parties. The matrix visualization shows highly correlated categories of information. The timeline view shows temporal patterns in data. In this paper, we will use T.Rex with two different datasets to demonstrate how interactive exploration of the data can aid an analyst with arms control and nonproliferation verification activities. Using a dataset from PIERS (PIERS 2014), we will show how container shipment imports and exports can aid an analyst in understanding the shipping patterns between two countries. We will also use T.Rex to examine a collection of research publications from the IAEA International Nuclear Information System (IAEA 2014) to discover collaborations of concern. We hope this paper will encourage the use of visual analytics for structured data in the field of nonproliferation and arms control verification. Our paper outlines some of the challenges that exist before broad adoption of these kinds of tools can occur and offers next steps to overcome these challenges.
ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hohman, Frederick M.; Hodas, Nathan O.; Chau, Duen Horng
Deep learning is the driving force behind many recent technologies; however, deep neural networks are often viewed as “black-boxes” due to their internal complexity that is hard to understand. Little research focuses on helping people explore and understand the relationship between a user’s data and the learned representations in deep learning models. We present our ongoing work, ShapeShop, an interactive system for visualizing and understanding what semantics a neural network model has learned. Built using standard web technologies, ShapeShop allows users to experiment with and compare deep learning models to help explore the robustness of image classifiers.
My Ideal City (mic): Virtual Environments to Design the Future Town
NASA Astrophysics Data System (ADS)
Borgherini, M.; Garbin, E.
2011-09-01
MIC is an EU-funded project that explores the use of shared virtual environments as part of a public discussion on the issues of building the city of the future. An interactive exploration of four European cities, whose digital city models translate urban places, family problems, and citizens' wishes, offers a chance to see them in different ways and from different points of view, and to imagine new scenarios that overcome barriers and stereotypes that are no longer effective. This paper describes the process from data to visualization of the virtual cities and, in detail, the design of two interactive digital models (Trento and Lisbon).
Interactive exploration of coastal restoration modeling in virtual environments
NASA Astrophysics Data System (ADS)
Gerndt, Andreas; Miller, Robert; Su, Simon; Meselhe, Ehab; Cruz-Neira, Carolina
2009-02-01
Over the last decades, Louisiana has lost a substantial part of its coastal region to the Gulf of Mexico. The goal of the project depicted in this paper is to investigate the complex ecological and geophysical system not only to find solutions to reverse this development but also to protect the southern landscape of Louisiana from the disastrous impacts of natural hazards like hurricanes. This paper focuses on the interactive data handling of the Chenier Plain, which is only one scenario of the overall project. The challenge addressed is the interactive exploration of large-scale time-dependent 2D simulation results and of the high-resolution terrain data available for this region. Besides data preparation, efficient visualization approaches optimized for use in virtual environments are presented. These are embedded in a complex framework for scientific visualization of time-dependent large-scale datasets. To provide a straightforward interface for rapid application development, a software layer called VRFlowVis has been developed. Several architectural aspects to encapsulate complex virtual reality aspects like multi-pipe vs. cluster-based rendering are discussed. Moreover, the distributed post-processing architecture is investigated to prove its efficiency for the geophysical domain. Runtime measurements conclude this paper.
The effect of linguistic and visual salience in visual world studies.
Cavicchio, Federica; Melcher, David; Poesio, Massimo
2014-01-01
Research using the visual world paradigm has demonstrated that visual input has a rapid effect on language interpretation tasks such as reference resolution and, conversely, that linguistic material-including verbs, prepositions and adjectives-can influence fixations to potential referents. More recent research has started to explore how this effect of linguistic input on fixations is mediated by properties of the visual stimulus, in particular by visual salience. In the present study we further explored the role of salience in the visual world paradigm manipulating language-driven salience and visual salience. Specifically, we tested how linguistic salience (i.e., the greater accessibility of linguistically introduced entities) and visual salience (bottom-up attention grabbing visual aspects) interact. We recorded participants' eye-movements during a MapTask, asking them to look from landmark to landmark displayed upon a map while hearing direction-giving instructions. The landmarks were of comparable size and color, except in the Visual Salience condition, in which one landmark had been made more visually salient. In the Linguistic Salience conditions, the instructions included references to an object not on the map. Response times and fixations were recorded. Visual Salience influenced the time course of fixations at both the beginning and the end of the trial but did not show a significant effect on response times. Linguistic Salience reduced response times and increased fixations to landmarks when they were associated to a Linguistic Salient entity not present itself on the map. When the target landmark was both visually and linguistically salient, it was fixated longer, but fixations were quicker when the target item was linguistically salient only. Our results suggest that the two types of salience work in parallel and that linguistic salience affects fixations even when the entity is not visually present.
VIGOR: Interactive Visual Exploration of Graph Query Results.
Pienta, Robert; Hohman, Fred; Endert, Alex; Tamersoy, Acar; Roundy, Kevin; Gates, Chris; Navathe, Shamkant; Chau, Duen Horng
2018-01-01
Finding patterns in graphs has become a vital challenge in many domains from biological systems, network security, to finance (e.g., finding money laundering rings of bankers and business owners). While there is significant interest in graph databases and querying techniques, less research has focused on helping analysts make sense of underlying patterns within a group of subgraph results. Visualizing graph query results is challenging, requiring effective summarization of a large number of subgraphs, each having potentially shared node-values, rich node features, and flexible structure across queries. We present VIGOR, a novel interactive visual analytics system, for exploring and making sense of query results. VIGOR uses multiple coordinated views, leveraging different data representations and organizations to streamline analysts' sensemaking process. VIGOR contributes: (1) an exemplar-based interaction technique, where an analyst starts with a specific result and relaxes constraints to find other similar results or starts with only the structure (i.e., without node value constraints), and adds constraints to narrow in on specific results; and (2) a novel feature-aware subgraph result summarization. Through a collaboration with Symantec, we demonstrate how VIGOR helps tackle real-world problems through the discovery of security blindspots in a cybersecurity dataset with over 11,000 incidents. We also evaluate VIGOR with a within-subjects study, demonstrating VIGOR's ease of use over a leading graph database management system, and its ability to help analysts understand their results at higher speed and make fewer errors.
Engaging students in astronomy and spectroscopy through Project SPECTRA!
NASA Astrophysics Data System (ADS)
Wood, E. L.
2011-12-01
Computer simulations for minds-on learning with "Project Spectra!" How do we gain information about the Sun? How do we know Mars has CO2 or that Enceladus has H2O geysers? How do we use light in astronomy? These concepts are something students and educators struggle with because they are abstract. Using simulations and computer interactives (games) where students experience and manipulate the information makes concepts accessible. Visualizing lessons with multi-media solidifies understanding and retention of knowledge and is completely unlike its paper-and-pencil counterpart. Visualizations also enable teachers to forgo purchasing expensive laboratory equipment. "Project Spectra!" is a science and engineering program that uses computer-based Flash interactives to expose students to astronomical spectroscopy and actual data in a way that is not possible with traditional in-class activities. To engage students in "Project Spectra!", students are given a mission, which connects them with the research at hand. Missions range from exploring remote planetary atmospheres and surfaces, experimenting with the Sun using different filters, or analyzing the soil of a remote planet. Additionally, students have an opportunity to learn about NASA missions, view movies, and see images connected with their mission, which is something that is not practical to do during a typical paper-and-pencil activity. Since students can choose what to watch and explore, the interactives accommodate a broad range of learning styles. Students can go back and forth through the interactives if they've missed a concept or wish to view something again. In the end, students are asked critical thinking questions and conduct web-based research. These interactives complement in-class Project SPECTRA! activities exploring applications of the electromagnetic spectrum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta, Aritra; Poco, Jorge; Bertini, Enrico
2016-01-01
The gap between large-scale data production rate and the rate of generation of data-driven scientific insights has led to an analytical bottleneck in scientific domains like climate, biology, etc. This is primarily due to the lack of innovative analytical tools that can help scientists efficiently analyze and explore alternative hypotheses about the data, and communicate their findings effectively to a broad audience. In this paper, by reflecting on a set of successful collaborative research efforts between a group of climate scientists and visualization researchers, we examine how interactive visualization can help reduce the analytical bottleneck for domain scientists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rizzi, Silvio; Hereld, Mark; Insley, Joseph
In this work we perform in-situ visualization of molecular dynamics simulations, which can help scientists to visualize simulation output on-the-fly, without incurring storage overheads. We present a case study to couple LAMMPS, the large-scale molecular dynamics simulation code, with vl3, our parallel framework for large-scale visualization and analysis. Our motivation is to identify effective approaches for covisualization and exploration of large-scale atomistic simulations at interactive frame rates. We propose a system of coupled libraries and describe its architecture, with an implementation that runs on GPU-based clusters. We present the results of strong and weak scalability experiments, as well as future research avenues based on our results.
LinkWinds: An Approach to Visual Data Analysis
NASA Technical Reports Server (NTRS)
Jacobson, Allan S.
1992-01-01
The Linked Windows Interactive Data System (LinkWinds) is a prototype visual data exploration and analysis system resulting from a NASA/JPL program of research into graphical methods for rapidly accessing, displaying and analyzing large multivariate multidisciplinary datasets. It is an integrated multi-application execution environment allowing the dynamic interconnection of multiple windows containing visual displays and/or controls through a data-linking paradigm. This paradigm, which results in a system much like a graphical spreadsheet, is not only a powerful method for organizing large amounts of data for analysis, but provides a highly intuitive, easy to learn user interface on top of the traditional graphical user interface.
Visual analytics of geo-social interaction patterns for epidemic control.
Luo, Wei
2016-08-10
Human interaction and population mobility determine the spatio-temporal course of the spread of an airborne disease. This research views such spreads as geo-social interaction problems, because population mobility connects different groups of people over geographical locations through which the viruses are transmitted. Previous research argued that geo-social interaction patterns identified from population movement data can provide great potential in designing effective pandemic mitigation. However, little work has been done to examine the effectiveness of designing control strategies taking into account geo-social interaction patterns. To address this gap, this research proposes a new framework for effective disease control; specifically, this framework proposes that disease control strategies should start from identifying geo-social interaction patterns, designing effective control measures accordingly, and evaluating the efficacy of different control measures. This framework is used to structure the design of a new visual analytic tool that consists of three components: a reorderable matrix for geo-social mixing patterns, agent-based epidemic models, and combined visualization methods. With real-world human interaction data from a French primary school as a proof of concept, this research compares the efficacy of vaccination strategies between the spatial-social interaction patterns and the whole areas. The simulation results show that locally targeted vaccination has the potential to keep infection to a small number and prevent spread to other regions. With some small probability, the local control strategies will fail; in these cases other control strategies will be needed. This research further explores the impact of varying spatial-social scales on the success of local vaccination strategies. The results show that a proper spatial-social scale can help achieve the best control efficacy with a limited number of vaccines. The case study shows how GS-EpiViz does support the design and testing of advanced control scenarios in airborne disease (e.g., influenza). The geo-social patterns identified through exploring human interaction data can help target critical individuals, locations, and clusters of locations for disease control purposes. The varying spatial-social scales can help geographically and socially prioritize limited resources (e.g., vaccines).
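The comparison of targeted versus untargeted vaccination described above can be illustrated with a toy agent-based model; the sketch below runs a stochastic SIR process on a synthetic contact network with networkx and is not the paper's model or data.

```python
# Stochastic SIR spread on a contact network, comparing random vaccination
# against vaccination targeted at high-contact nodes. Network and rates are
# synthetic, not the French primary-school data used in the study.
import random
import networkx as nx

def run_sir(G, vaccinated, beta=0.05, gamma=0.1, seed=7):
    rng = random.Random(seed)
    status = {n: "S" for n in G}
    for n in vaccinated:
        status[n] = "R"                      # vaccinated nodes start removed
    patient_zero = rng.choice([n for n in G if status[n] == "S"])
    status[patient_zero] = "I"
    infected_ever = 1
    while any(s == "I" for s in status.values()):
        for n in [n for n, s in status.items() if s == "I"]:
            for nb in G.neighbors(n):
                if status[nb] == "S" and rng.random() < beta:
                    status[nb] = "I"
                    infected_ever += 1
            if rng.random() < gamma:
                status[n] = "R"
    return infected_ever

G = nx.watts_strogatz_graph(200, 8, 0.1, seed=1)     # stand-in contact network
budget = 20
random_vax = random.Random(2).sample(list(G), budget)
targeted_vax = sorted(G, key=G.degree, reverse=True)[:budget]

print("random vaccination:  ", run_sir(G, random_vax))
print("targeted vaccination:", run_sir(G, targeted_vax))
```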
Mania, Katerina; Wooldridge, Dave; Coxon, Matthew; Robinson, Andrew
2006-01-01
Accuracy of memory performance per se is an imperfect reflection of the cognitive activity (awareness states) that underlies performance in memory tasks. The aim of this research is to investigate the effect of varied visual and interaction fidelity of immersive virtual environments on memory awareness states. A between groups experiment was carried out to explore the effect of rendering quality on location-based recognition memory for objects and associated states of awareness. The experimental space, consisting of two interconnected rooms, was rendered either flat-shaded or using radiosity rendering. The computer graphics simulations were displayed on a stereo head-tracked Head Mounted Display. Participants completed a recognition memory task after exposure to the experimental space and reported one of four states of awareness following object recognition. These reflected the level of visual mental imagery involved during retrieval, the familiarity of the recollection, and also included guesses. Experimental results revealed variations in the distribution of participants' awareness states across conditions while memory performance failed to reveal any. Interestingly, results revealed a higher proportion of recollections associated with mental imagery in the flat-shaded condition. These findings comply with similar effects revealed in two earlier studies summarized here, which demonstrated that the less "naturalistic" interaction interface or interface of low interaction fidelity provoked a higher proportion of recognitions based on visual mental images.
Photo-realistic Terrain Modeling and Visualization for Mars Exploration Rover Science Operations
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Sims, Michael; Kunz, Clayton; Lees, David; Bowman, Judd
2005-01-01
Modern NASA planetary exploration missions employ complex systems of hardware and software managed by large teams of engineers and scientists in order to study remote environments. The most complex and successful of these recent projects is the Mars Exploration Rover mission. The Computational Sciences Division at NASA Ames Research Center delivered a 3D visualization program, Viz, to the MER mission that provides an immersive, interactive environment for science analysis of the remote planetary surface. In addition, Ames provided the Athena Science Team with high-quality terrain reconstructions generated with the Ames Stereo-pipeline. The on-site support team for these software systems responded to unanticipated opportunities to generate 3D terrain models during the primary MER mission. This paper describes Viz, the Stereo-pipeline, and the experiences of the on-site team supporting the scientists at JPL during the primary MER mission.
Traumaculture and Telepathetic Cyber Fiction
NASA Astrophysics Data System (ADS)
Drinkall, Jacquelene
This paper explores the interactive CD-ROM No Other Symptoms: Time Travelling with Rosalind Brodsky, using telepathetic socio-psychological, psychoanalytic and narrative theories. The CD-ROM exists as a contemporary artwork and published interactive hardcover book authored by painter and new-media visual artist Suzanne Treister. The artwork incorporates Treister's paintings, writing, Photoshop, animation, video and audio work with narrative structures taken from world history, the history of psychoanalysis, futurist science and science fiction, family history and biography.
Understanding interfirm relationships in business ecosystems with interactive visualization.
Basole, Rahul C; Clear, Trustin; Hu, Mengdie; Mehrotra, Harshit; Stasko, John
2013-12-01
Business ecosystems are characterized by large, complex, and global networks of firms, often from many different market segments, all collaborating, partnering, and competing to create and deliver new products and services. Given the rapidly increasing scale, complexity, and rate of change of business ecosystems, as well as economic and competitive pressures, analysts are faced with the formidable task of quickly understanding the fundamental characteristics of these interfirm networks. Existing tools, however, are predominantly query- or list-centric with limited interactive, exploratory capabilities. Guided by a field study of corporate analysts, we have designed and implemented dotlink360, an interactive visualization system that provides capabilities to gain systemic insight into the compositional, temporal, and connective characteristics of business ecosystems. dotlink360 consists of novel, multiple connected views enabling the analyst to explore, discover, and understand interfirm networks for a focal firm, specific market segments or countries, and the entire business ecosystem. System evaluation by a small group of prototypical users shows supporting evidence of the benefits of our approach. This design study contributes to the relatively unexplored, but promising area of exploratory information visualization in market research and business strategy.
Interactive client side data visualization with d3.js
NASA Astrophysics Data System (ADS)
Rodzianko, A.; Versteeg, R.; Johnson, D. V.; Soltanian, M. R.; Versteeg, O. J.; Girouard, M.
2015-12-01
Geoscience data associated with near surface research and operational sites is increasingly voluminous and heterogeneous (both in terms of providers and data types - e.g. geochemical, hydrological, geophysical, modeling data, of varying spatiotemporal characteristics). Such data allows scientists to investigate fundamental hydrological and geochemical processes relevant to agriculture, water resources and climate change. For scientists to easily share, model and interpret such data requires novel tools with capabilities for interactive data visualization. Under sponsorship of the US Department of Energy, Subsurface Insights is developing the Predictive Assimilative Framework (PAF): a cloud-based subsurface monitoring platform which can manage, process and visualize large heterogeneous datasets. Over the last year we transitioned our visualization method from a server-side approach (in which images and animations were generated using Jfreechart and Visit) to a client-side one that utilizes the D3 Javascript library. Datasets are retrieved using web service calls to the server, returned as JSON objects and visualized within the browser. Users can interactively explore primary and secondary datasets from various field locations. Our current capabilities include interactive data contouring and heterogeneous time series data visualization. While this approach is very powerful and not necessarily unique, special attention needs to be paid to latency and responsiveness issues as well as to cross-browser code compatibility so that users have an identical, fluid and frustration-free experience across different computational platforms. We gratefully acknowledge support from the US Department of Energy under SBIR Award DOE DE-SC0009732, the use of data from the Lawrence Berkeley National Laboratory (LBNL) Sustainable Systems SFA Rifle field site and collaboration with LBNL SFA scientists.
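The server side of the pattern described above (web service calls returning JSON that the browser-side D3 code renders) can be sketched as a small endpoint; Flask and the route and field names are assumptions for illustration, not the actual PAF interfaces.

```python
# Minimal JSON web service returning a synthetic hourly time series that a
# client-side D3 page could fetch and plot. Flask, the route, and the field
# names are illustrative assumptions.
from datetime import datetime, timedelta
import random

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/timeseries/<sensor_id>")
def timeseries(sensor_id):
    """Return a synthetic hourly time series as JSON for client-side plotting."""
    start = datetime(2015, 6, 1)
    points = [
        {"t": (start + timedelta(hours=i)).isoformat(),
         "value": 20.0 + random.gauss(0, 0.5)}
        for i in range(240)
    ]
    return jsonify({"sensor": sensor_id, "points": points})

if __name__ == "__main__":
    app.run(port=5000)
```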
HierarchicalTopics: visually exploring large text collections using topic hierarchies.
Dou, Wenwen; Yu, Li; Wang, Xiaoyu; Ma, Zhiqiang; Ribarsky, William
2013-12-01
Analyzing large textual collections has become increasingly challenging given the size of the data available and the rate that more data is being generated. Topic-based text summarization methods coupled with interactive visualizations have presented promising approaches to address the challenge of analyzing large text corpora. As the text corpora and vocabulary grow larger, more topics need to be generated in order to capture the meaningful latent themes and nuances in the corpora. However, it is difficult for most current topic-based visualizations to represent a large number of topics without being cluttered or illegible. To facilitate the representation and navigation of a large number of topics, we propose a visual analytics system, HierarchicalTopics (HT). HT integrates a computational algorithm, Topic Rose Tree, with an interactive visual interface. The Topic Rose Tree constructs a topic hierarchy based on a list of topics. The interactive visual interface is designed to present the topic content as well as temporal evolution of topics in a hierarchical fashion. User interactions are provided for users to make changes to the topic hierarchy based on their mental model of the topic space. To qualitatively evaluate HT, we present a case study that showcases how HierarchicalTopics aids expert users in making sense of a large number of topics and discovering interesting patterns of topic groups. We have also conducted a user study to quantitatively evaluate the effect of hierarchical topic structure. The study results reveal that HT leads to faster identification of a large number of relevant topics. We have also solicited user feedback during the experiments and incorporated some suggestions into the current version of HierarchicalTopics.
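The paper's Topic Rose Tree is a specific algorithm; as a simpler stand-in for building a topic hierarchy, the sketch below agglomeratively clusters synthetic topic-word distributions under Jensen-Shannon distance with SciPy.

```python
# Build a topic hierarchy by agglomerative clustering of topic-word
# distributions under Jensen-Shannon distance. This is a simplified stand-in,
# not the Topic Rose Tree algorithm; topic vectors are synthetic.
import numpy as np
from scipy.spatial.distance import jensenshannon, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(3)
n_topics, vocab = 12, 200
topics = rng.dirichlet(np.full(vocab, 0.1), size=n_topics)   # topic-word distributions

# Pairwise Jensen-Shannon distances between topic distributions.
dist = np.zeros((n_topics, n_topics))
for i in range(n_topics):
    for j in range(i + 1, n_topics):
        dist[i, j] = dist[j, i] = jensenshannon(topics[i], topics[j])

tree = linkage(squareform(dist), method="average")
print(dendrogram(tree, no_plot=True)["ivl"])   # leaf order of the topic hierarchy
```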
Predictors of Handwriting in Children with Autism Spectrum Disorder
ERIC Educational Resources Information Center
Hellinckx, Tinneke; Roeyers, Herbert; Van Waelvelde, Hilde
2013-01-01
During writing, perceptual, motor, and cognitive processes interact. This study explored the predictive value of several factors on handwriting quality as well as on speed in children with Autism Spectrum Disorder (ASD). Our results showed that, in this population, age, gender, and visual-motor integration significantly predicted handwriting…
Exploratory Visualization of Graphs Based on Community Structure
ERIC Educational Resources Information Center
Liu, Yujie
2013-01-01
Communities, also called clusters or modules, are groups of nodes which probably share common properties and/or play similar roles within a graph. They widely exist in real networks such as biological, social, and information networks. Allowing users to interactively browse and explore the community structure, which is essential for understanding…
CyberArts: Exploring Art and Technology.
ERIC Educational Resources Information Center
Jacobson, Linda, Ed.
This book takes the position that CyberArts(TM) is the new frontier in creativity, where the worlds of science and art meet. Computer technologies, visual design, music and sound, education and entertainment merge to form the new artistic territory of interactive multimedia. This diverse collection of essays, articles, and commentaries…
Finding the Right Words: Art Conversations and Poetry
ERIC Educational Resources Information Center
Reilly, Mary Ann
2008-01-01
Generative thinking is explored in this article by chronicling the development of middle school English language learners' poetry writing through their interaction with visual art. The author explains how art conversations (Reilly & Cohen, 2008) were used to help students engage in dialogue about the topic of journeys and how students' paintings…
Using Digital Libraries Non-Visually: Understanding the Help-Seeking Situations of Blind Users
ERIC Educational Resources Information Center
Xie, Iris; Babu, Rakesh; Joo, Soohyung; Fuller, Paige
2015-01-01
Introduction: This study explores blind users' unique help-seeking situations in interacting with digital libraries. In particular, help-seeking situations were investigated at both the physical and cognitive levels. Method: Fifteen blind participants performed three search tasks, including known-item search, specific information search, and…
Effects of Virtual Manipulatives with Different Approaches on Students' Knowledge of Slope
ERIC Educational Resources Information Center
Demir, Mustafa
2018-01-01
Virtual Manipulatives (VMs) are computer-based, dynamic, and visual representations of mathematical concepts that provide interactive learning environments to advance mathematics instruction (Moyer et al., 2002). Despite their broad use, little research has explored the integration of VMs into mathematics instruction (Moyer-Packenham & Westenskow, 2013).…
Common Ground: An Interactive Visual Exploration and Discovery for Complex Health Data
2015-04-01
working with Intermountain Healthcare on a new rich dataset extracted directly from medical notes using natural language processing (NLP) algorithms...probabilities based on state-of-the-art NLP classifiers. At that stage the data did not include geographic information or temporal information but we
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, Adam T.; Yagnik, Gargey B.; Hohenstein, Jessica D.
2015-04-27
Mass spectrometry imaging (MSI) is an emerging technology for high-resolution plant biology. It has been utilized to study plant–pest interactions, but limited to the surface interfaces. Here we expand the technology to explore the chemical interactions occurring inside the plant tissues. Two sample preparation methods, imprinting and fracturing, were developed and applied, for the first time, to visualize internal metabolites of leaves in matrix-assisted laser desorption ionization (MALDI)-MSI. This is also the first time nanoparticle-based ionization was implemented to ionize diterpenoid phytochemicals that were difficult to analyze with traditional organic matrices. The interactions between rice–bacterium and soybean–aphid were investigated as two model systems to demonstrate the capability of high-resolution MSI based on MALDI. Localized molecular information on various plant- or pest-derived chemicals provided valuable insight into the molecular processes occurring during the plant–pest interactions. Salicylic acid and isoflavone based resistance was visualized in the soybean–aphid system and antibiotic diterpenoids in rice–bacterium interactions.
Kozlikova, Barbora; Sebestova, Eva; Sustr, Vilem; Brezovsky, Jan; Strnad, Ondrej; Daniel, Lukas; Bednar, David; Pavelka, Antonin; Manak, Martin; Bezdeka, Martin; Benes, Petr; Kotry, Matus; Gora, Artur; Damborsky, Jiri; Sochor, Jiri
2014-09-15
The transport of ligands, ions or solvent molecules into proteins with buried binding sites or through the membrane is enabled by protein tunnels and channels. CAVER Analyst is a software tool for calculation, analysis and real-time visualization of access tunnels and channels in static and dynamic protein structures. It provides an intuitive graphical user interface for setting up the calculation and for interactive exploration of identified tunnels/channels and their characteristics. CAVER Analyst is multi-platform software written in Java. Binaries and documentation are freely available for non-commercial use at http://www.caver.cz. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Making memories: the development of long-term visual knowledge in children with visual agnosia.
Metitieri, Tiziana; Barba, Carmen; Pellacani, Simona; Viggiano, Maria Pia; Guerrini, Renzo
2013-01-01
There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2 years and 3.7 years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment.
Interactive 3D Mars Visualization
NASA Technical Reports Server (NTRS)
Powell, Mark W.
2012-01-01
The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The current toolset includes a tool for selecting a point of interest and a ruler tool for displaying the distance between, and positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.
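The ruler tool mentioned above needs a surface distance between two selected points; a minimal sketch of such a computation, assuming a spherical Mars with a mean radius of roughly 3389.5 km (a simplification; the actual system presumably measures against projected terrain coordinates):

```python
# Great-circle (haversine) distance between two (lat, lon) points on Mars.
# The spherical assumption and radius value are simplifications.
import math

MARS_MEAN_RADIUS_M = 3_389_500.0  # approximate mean radius in meters

def surface_distance(lat1, lon1, lat2, lon2):
    """Distance in meters between two points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * MARS_MEAN_RADIUS_M * math.asin(math.sqrt(a))

# Example: two nearby points of interest.
print(surface_distance(-4.5895, 137.4417, -4.60, 137.45))
```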
Panoptes: web-based exploration of large scale genome variation data.
Vauterin, Paul; Jeffery, Ben; Miles, Alistair; Amato, Roberto; Hart, Lee; Wright, Ian; Kwiatkowski, Dominic
2017-10-15
The size and complexity of modern large-scale genome variation studies demand novel approaches for exploring and sharing the data. In order to unlock the potential of these data for a broad audience of scientists with various areas of expertise, a unified exploration framework is required that is accessible, coherent and user-friendly. Panoptes is an open-source software framework for collaborative visual exploration of large-scale genome variation data and associated metadata in a web browser. It relies on technology choices that allow it to operate in near real-time on very large datasets. It can be used to browse rich, hybrid content in a coherent way, and offers interactive visual analytics approaches to assist the exploration. We illustrate its application using genome variation data of Anopheles gambiae, Plasmodium falciparum and Plasmodium vivax. Freely available at https://github.com/cggh/panoptes, under the GNU Affero General Public License. paul.vauterin@gmail.com. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Visualization of spatial-temporal data based on 3D virtual scene
NASA Astrophysics Data System (ADS)
Wang, Xianghong; Liu, Jiping; Wang, Yong; Bi, Junfang
2009-10-01
The main purpose of this paper is to realize three-dimensional dynamic visualization of spatial-temporal data in a three-dimensional virtual scene, combining three-dimensional visualization technology with GIS so that people's ability to cognize time and space is enhanced through dynamic symbols and interactive expression. Using particle systems, three-dimensional simulation, virtual reality, and other visual means, we can simulate the situations produced by changes in the spatial location and attribute information of geographical entities over time, explore and analyze their movement and transformation rules interactively, and both replay history and forecast the future. The main research objects in this paper are vehicle tracks and typhoon paths: through three-dimensional dynamic simulation of these tracks, trends can be monitored in a timely manner and historical tracks replayed. Visualization techniques for spatial-temporal data in a three-dimensional virtual scene provide an excellent cognitive instrument for spatial-temporal information that not only shows changes and developments clearly, but can also be used for prediction and deduction of future developments.
Building Stories about Sea Level Rise through Interactive Visualizations
NASA Astrophysics Data System (ADS)
Stephens, S. H.; DeLorme, D. E.; Hagen, S. C.
2013-12-01
Digital media provide storytellers with dynamic new tools for communicating about scientific issues via interactive narrative visualizations. While traditional storytelling uses plot, characterization, and point of view to engage audiences with underlying themes and messages, interactive visualizations can be described as 'narrative builders' that promote insight through the process of discovery (Dove, G. & Jones, S. 2012, Proc. IHCI 2012). Narrative visualizations are used in online journalism to tell complex stories that allow readers to select aspects of datasets to explore and construct alternative interpretations of information (Segel, E. & Heer, J. 2010, IEEE Trans. Vis. Comp. Graph.16, 1139), thus enabling them to participate in the story-building process. Nevertheless, narrative visualizations also incorporate author-selected narrative elements that help guide and constrain the overall themes and messaging of the visualization (Hullman, J. & Diakopoulos, N. 2011, IEEE Trans. Vis. Comp. Graph. 17, 2231). One specific type of interactive narrative visualization that is used for science communication is the sea level rise (SLR) viewer. SLR viewers generally consist of a base map, upon which projections of sea level rise scenarios can be layered, and various controls for changing the viewpoint and scenario parameters. They are used to communicate the results of scientific modeling and help readers visualize the potential impacts of SLR on the coastal zone. Readers can use SLR viewers to construct personal narratives of the effects of SLR under different scenarios in locations that are important to them, thus extending the potential reach and impact of scientific research. With careful selection of narrative elements that guide reader interpretation, the communicative aspects of these visualizations may be made more effective. This presentation reports the results of a content analysis of a subset of existing SLR viewers selected in order to comprehensively identify and characterize the narrative elements that contribute to this storytelling medium. The results describe four layers of narrative elements in these viewers: data, visual representations, annotations, and interactivity; and explain the ways in which these elements are used to communicate about SLR. Most existing SLR viewers have been designed with attention to technical usability; however, careful design of narrative elements could increase their overall effectiveness as story-building tools. The analysis concludes with recommendations for narrative elements that should be considered when designing new SLR viewers, and offers suggestions for integrating these components to balance author-driven and reader-driven design features for more effective messaging.
Nagayoshi, Michie; Hirose, Taiko; Toju, Kyoko; Suzuki, Shigenobu; Okamitsu, Motoko; Teramoto, Taeko; Omori, Takahide; Kawamura, Aki; Takeo, Naoko
2017-06-01
This study was conducted with infants diagnosed with bilateral retinoblastoma (RB) and their mothers. It explored characteristics of the mother-infant interaction, the infants' developmental characteristics and related risk factors. Cross-sectional statistical analysis was performed with 18 dyads of one-year-old infants with bilateral RB and their mothers. Using the Japanese Nursing Child Assessment Teaching Scale (JNCATS), results showed that infants with RB had significantly lower scores compared to normative Japanese scores on all of the infants' subscales and "Child's contingency" (p < 0.01). Five infants with visual impairment at high risk of developmental problems had a pass rate of 0% on six JNCATS items. There were positive correlations between Developmental quotients (DQ) and JNCATS score of "Responsiveness to caregiver" (ρ = 0.50, p < 0.05) and DQ and "Child's contingency" (ρ = 0.47, p < 0.05). Infants with visual impairment were characterized by high likelihood of developmental delays and problematic behaviors; they tended not to turn their face or eyes toward their mothers, smile in response to their mothers' talking to them or the latter's changing body language or facial expressions, or react in a contingent manner in their interactions. These infant behaviors noted by their mothers shared similarities with developmental characteristics of children with visual impairments. These findings indicated a need to provide support promoting mother-infant interactions consistent with the developmental characteristics of RB infants with visual impairment. Copyright © 2017 Elsevier Ltd. All rights reserved.
A web-based data visualization tool for the MIMIC-II database.
Lee, Joon; Ribey, Evan; Wallace, James R
2016-02-04
Although MIMIC-II, a public intensive care database, has been recognized as an invaluable resource for many medical researchers worldwide, becoming a proficient MIMIC-II researcher requires knowledge of SQL programming and an understanding of the MIMIC-II database schema. These are challenging requirements especially for health researchers and clinicians who may have limited computer proficiency. In order to overcome this challenge, our objective was to create an interactive, web-based MIMIC-II data visualization tool that first-time MIMIC-II users can easily use to explore the database. The tool offers two main features: Explore and Compare. The Explore feature enables the user to select a patient cohort within MIMIC-II and visualize the distributions of various administrative, demographic, and clinical variables within the selected cohort. The Compare feature enables the user to select two patient cohorts and visually compare them with respect to a variety of variables. The tool is also helpful to experienced MIMIC-II researchers who can use it to substantially accelerate the cumbersome and time-consuming steps of writing SQL queries and manually visualizing extracted data. Any interested researcher can use the MIMIC-II data visualization tool for free to quickly and conveniently conduct a preliminary investigation on MIMIC-II with a few mouse clicks. Researchers can also use the tool to learn the characteristics of the MIMIC-II patients. Since it is still impossible to conduct multivariable regression inside the tool, future work includes adding analytics capabilities. Also, the next version of the tool will aim to utilize MIMIC-III which contains more data.
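The Explore/Compare idea can be sketched in a few lines of pandas, assuming a hypothetical patient table with `age`, `gender`, and `icu_type` columns; this is not the MIMIC-II schema or the tool's code, only an illustration of cohort selection and comparison.

```python
# Cohort selection ("Explore") and cohort comparison ("Compare") on a toy
# patient table; column names and values are hypothetical.
import pandas as pd
import matplotlib.pyplot as plt

patients = pd.DataFrame({
    "age": [67, 45, 72, 55, 80, 39, 61, 70],
    "gender": ["M", "F", "F", "M", "M", "F", "M", "F"],
    "icu_type": ["MICU", "SICU", "MICU", "CCU", "MICU", "SICU", "CCU", "MICU"],
})

# Explore: select a cohort and plot the distribution of one variable.
micu = patients[patients["icu_type"] == "MICU"]
micu["age"].plot.hist(bins=5, title="Age distribution, MICU cohort")

# Compare: contrast two cohorts on the same variable.
sicu = patients[patients["icu_type"] == "SICU"]
summary = pd.DataFrame({"MICU": micu["age"].describe(),
                        "SICU": sicu["age"].describe()})
print(summary)
plt.show()
```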
OmicsNet: a web-based tool for creation and visual analysis of biological networks in 3D space.
Zhou, Guangyan; Xia, Jianguo
2018-06-07
Biological networks play increasingly important roles in omics data integration and systems biology. Over the past decade, many excellent tools have been developed to support creation, analysis and visualization of biological networks. However, important limitations remain: most tools are standalone programs, the majority of them focus on protein-protein interaction (PPI) or metabolic networks, and visualizations often suffer from 'hairball' effects when networks become large. To help address these limitations, we developed OmicsNet - a novel web-based tool that allows users to easily create different types of molecular interaction networks and visually explore them in a three-dimensional (3D) space. Users can upload one or multiple lists of molecules of interest (genes/proteins, microRNAs, transcription factors or metabolites) to create and merge different types of biological networks. The 3D network visualization system was implemented using the powerful Web Graphics Library (WebGL) technology that works natively in most major browsers. OmicsNet supports force-directed layout, multi-layered perspective layout, as well as spherical layout to help visualize and navigate complex networks. A rich set of functions have been implemented to allow users to perform coloring, shading, topology analysis, and enrichment analysis. OmicsNet is freely available at http://www.omicsnet.ca.
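Not OmicsNet's code, but a sketch of the underlying idea: merge uploaded molecule lists into one interaction network and compute a 3D force-directed layout whose coordinates could then feed a WebGL view. The interaction table and gene names are invented.

```python
# Merge two molecule lists into one network using a toy interaction table,
# then compute a 3D spring (force-directed) layout; data are illustrative only.
import networkx as nx

# Hypothetical prior-knowledge interactions (e.g., from a PPI database).
known_interactions = [("TP53", "MDM2"), ("TP53", "EP300"),
                      ("MDM2", "UBE2D1"), ("EP300", "CREBBP")]

genes_list_1 = {"TP53", "MDM2"}
genes_list_2 = {"EP300", "CREBBP"}
seeds = genes_list_1 | genes_list_2

G = nx.Graph()
for a, b in known_interactions:
    if a in seeds or b in seeds:      # keep edges touching an uploaded molecule
        G.add_edge(a, b)

# dim=3 yields (x, y, z) per node, ready to hand to a 3D renderer.
pos3d = nx.spring_layout(G, dim=3, seed=42)
print(pos3d["TP53"])
```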
High-quality and interactive animations of 3D time-varying vector fields.
Helgeland, Anders; Elboth, Thomas
2006-01-01
In this paper, we present an interactive texture-based method for visualizing three-dimensional unsteady vector fields. The visualization method uses a sparse and global representation of the flow, such that it does not suffer from the same perceptual issues as is the case for visualizing dense representations. The animation is made by injecting a collection of particles evenly distributed throughout the physical domain. These particles are then tracked along their path lines. At each time step, these particles are used as seed points to generate field lines using any vector field such as the velocity field or vorticity field. In this way, the animation shows the advection of particles while each frame in the animation shows the instantaneous vector field. In order to maintain a coherent particle density and to avoid clustering as time passes, we have developed a novel particle advection strategy which produces approximately evenly-spaced field lines at each time step. To improve rendering performance, we decouple the rendering stage from the preceding stages of the visualization method. This allows interactive exploration of multiple fields simultaneously, which sets the stage for a more complete analysis of the flow field. The final display is rendered using texture-based direct volume rendering.
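A minimal sketch of the particle-advection step described above: fourth-order Runge-Kutta integration of seed particles through a time-varying velocity field. The analytic 2D field and step sizes are assumptions for illustration; the paper's evenly-spaced seeding strategy is not reproduced.

```python
# RK4 advection of particles through an unsteady (time-varying) vector field.
# The analytic field below is a stand-in for real simulation data.
import numpy as np

def velocity(p, t):
    """Toy unsteady field: a rotating flow whose strength oscillates in time."""
    x, y = p
    s = 1.0 + 0.5 * np.sin(2.0 * np.pi * t)
    return np.array([-s * y, s * x])

def rk4_step(p, t, dt):
    k1 = velocity(p, t)
    k2 = velocity(p + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = velocity(p + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = velocity(p + dt * k3, t + dt)
    return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Seed a few particles and trace their path lines over time.
particles = [np.array([1.0, 0.0]), np.array([0.5, 0.5])]
dt, steps = 0.01, 200
paths = [[p.copy()] for p in particles]
for n in range(steps):
    t = n * dt
    particles = [rk4_step(p, t, dt) for p in particles]
    for path, p in zip(paths, particles):
        path.append(p.copy())
```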
GPU Accelerated Browser for Neuroimaging Genomics.
Zigon, Bob; Li, Huang; Yao, Xiaohui; Fang, Shiaofen; Hasan, Mohammad Al; Yan, Jingwen; Moore, Jason H; Saykin, Andrew J; Shen, Li
2018-04-25
Neuroimaging genomics is an emerging field that provides exciting opportunities to understand the genetic basis of brain structure and function. The unprecedented scale and complexity of the imaging and genomics data, however, have presented critical computational bottlenecks. In this work we present our initial efforts towards building an interactive visual exploratory system for mining big data in neuroimaging genomics. A GPU accelerated browsing tool for neuroimaging genomics is created that implements the ANOVA algorithm for single nucleotide polymorphism (SNP) based analysis and the VEGAS algorithm for gene-based analysis, and executes them at interactive rates. The ANOVA algorithm is 110 times faster than the 4-core OpenMP version, while the VEGAS algorithm is 375 times faster than its 4-core OpenMP counterpart. This approach lays a solid foundation for researchers to address the challenges of mining large-scale imaging genomics datasets via interactive visual exploration.
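The per-SNP ANOVA that the GPU version accelerates can be sketched on the CPU with NumPy/SciPy: each SNP splits subjects into three genotype groups and a one-way F statistic is computed against a quantitative imaging phenotype. The random data below are placeholders; this is not the authors' GPU kernel.

```python
# One-way ANOVA F statistic per SNP (genotype groups 0/1/2) against a
# quantitative phenotype; a plain CPU sketch of the per-SNP test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_snps = 200, 1000
genotypes = rng.integers(0, 3, size=(n_subjects, n_snps))   # 0, 1, or 2 copies
phenotype = rng.normal(size=n_subjects)                     # e.g., a ROI measure

f_stats = np.empty(n_snps)
p_values = np.empty(n_snps)
for j in range(n_snps):
    groups = [phenotype[genotypes[:, j] == g] for g in (0, 1, 2)]
    groups = [g for g in groups if len(g) > 1]               # guard rare genotypes
    f_stats[j], p_values[j] = stats.f_oneway(*groups)

print("smallest p-value:", p_values.min())
```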
CircularLogo: A lightweight web application to visualize intra-motif dependencies.
Ye, Zhenqing; Ma, Tao; Kalmbach, Michael T; Dasari, Surendra; Kocher, Jean-Pierre A; Wang, Liguo
2017-05-22
The sequence logo has been widely used to represent DNA or RNA motifs for more than three decades. Despite its intelligibility and intuitiveness, the traditional sequence logo is unable to display the intra-motif dependencies and therefore is insufficient to fully characterize nucleotide motifs. Many methods have been developed to quantify the intra-motif dependencies, but fewer tools are available for visualization. We developed CircularLogo, a web-based interactive application, which is able to not only visualize the position-specific nucleotide consensus and diversity but also display the intra-motif dependencies. Applying CircularLogo to HNF6 binding sites and tRNA sequences demonstrated its ability to show intra-motif dependencies and intuitively reveal biomolecular structure. CircularLogo is implemented in JavaScript and Python based on the Django web framework. The program's source code and user's manual are freely available at http://circularlogo.sourceforge.net . CircularLogo web server can be accessed from http://bioinformaticstools.mayo.edu/circularlogo/index.html . CircularLogo is an innovative web application that is specifically designed to visualize and interactively explore intra-motif dependencies.
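Intra-motif dependency is often quantified as mutual information between pairs of motif positions; a small sketch of that computation from aligned binding-site sequences is shown below (mutual information is a common choice, though not necessarily the exact statistic CircularLogo displays).

```python
# Pairwise mutual information between positions of aligned motif sequences,
# a common way to quantify intra-motif dependencies; sequences are toy data.
import math
from collections import Counter

sites = ["ATGCA", "ATGCA", "ACGTA", "ATGTA", "ACGCA", "ATGCA"]

def mutual_information(col_i, col_j):
    n = len(col_i)
    pi, pj = Counter(col_i), Counter(col_j)
    pij = Counter(zip(col_i, col_j))
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * math.log2(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi

length = len(sites[0])
columns = ["".join(s[k] for s in sites) for k in range(length)]
for i in range(length):
    for j in range(i + 1, length):
        print(i, j, round(mutual_information(columns[i], columns[j]), 3))
```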
PyPathway: Python Package for Biological Network Analysis and Visualization.
Xu, Yang; Luo, Xiao-Chun
2018-05-01
Life science studies represent one of the biggest generators of large data sets, mainly because of rapid sequencing technological advances. Biological networks, including interactive networks and human curated pathways, are essential to understand these high-throughput data sets. Biological network analysis offers a method to explore systematically not only the molecular complexity of a particular disease but also the molecular relationships among apparently distinct phenotypes. Currently, several packages for the Python community have been developed, such as BioPython and Goatools. However, tools to perform comprehensive network analysis and visualization are still needed. Here, we have developed PyPathway, an extensible free and open source Python package for functional enrichment analysis, network modeling, and network visualization. The network process module supports various interaction network and pathway databases such as Reactome, WikiPathway, STRING, and BioGRID. The network analysis module implements overrepresentation analysis, gene set enrichment analysis, network-based enrichment, and de novo network modeling. Finally, the visualization and data publishing modules enable users to share their analysis by using an easy web application. For package availability, see the first Reference.
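Overrepresentation analysis, one of the analyses listed above, reduces to a hypergeometric test per pathway; a minimal SciPy sketch of that test follows (the gene counts are invented, and this is not PyPathway's API).

```python
# Hypergeometric over-representation test for one pathway; counts are invented.
from scipy.stats import hypergeom

M = 20000   # genes in the background (universe)
K = 150     # genes annotated to the pathway
n = 300     # genes in the user's list of interest
k = 12      # overlap between the list and the pathway

# P(X >= k): chance of seeing at least k pathway genes in the list at random.
p_value = hypergeom.sf(k - 1, M, K, n)
print(f"enrichment p-value: {p_value:.3e}")
```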
Bodily-visual practices and turn continuation
Ford, Cecilia E.; Thompson, Sandra A.; Drake, Veronika
2012-01-01
This paper considers points in turn construction where conversation researchers have shown that talk routinely continues beyond possible turn completion, but where we find bodily-visual behavior doing such turn extension work. The bodily-visual behaviors we examine share many features with verbal turn extensions, but we argue that embodied movements have distinct properties that make them well-suited for specific kinds of social action, including stance display and by-play in sequences framed as subsidiary to a simultaneous and related verbal exchange. Our study is in line with a research agenda taking seriously the point made by Goodwin (2000a, b, 2003), Hayashi (2003, 2005), Iwasaki (2009), and others that scholars seeking to account for practices in language and social interaction do themselves a disservice if they privilege the verbal dimension; rather, as suggested in Stivers/Sidnell (2005), each semiotic system/modality, while coordinated with others, has its own organization. With the current exploration of bodily-visual turn extensions, we hope to contribute to a growing understanding of how these different modes of organization are managed concurrently and in concert by interactants in carrying out their everyday social actions. PMID:23526861
Morris, John H; Knudsen, Giselle M; Verschueren, Erik; Johnson, Jeffrey R; Cimermancic, Peter; Greninger, Alexander L; Pico, Alexander R
2015-01-01
By determining protein-protein interactions in normal, diseased and infected cells, we can improve our understanding of cellular systems and their reaction to various perturbations. In this protocol, we discuss how to use data obtained in affinity purification–mass spectrometry (AP-MS) experiments to generate meaningful interaction networks and effective figures. We begin with an overview of common epitope tagging, expression and AP practices, followed by liquid chromatography–MS (LC-MS) data collection. We then provide a detailed procedure covering a pipeline approach to (i) pre-processing the data by filtering against contaminant lists such as the Contaminant Repository for Affinity Purification (CRAPome) and normalization using the spectral index (SIN) or normalized spectral abundance factor (NSAF); (ii) scoring via methods such as MiST, SAInt and CompPASS; and (iii) testing the resulting scores. Data formats familiar to MS practitioners are then transformed to those most useful for network-based analyses. The protocol also explores methods available in Cytoscape to visualize and analyze these types of interaction data. The scoring pipeline can take anywhere from 1 d to 1 week, depending on one’s familiarity with the tools and data peculiarities. Similarly, the network analysis and visualization protocol in Cytoscape takes 2–4 h to complete with the provided sample data, but we recommend taking days or even weeks to explore one’s data and find the right questions. PMID:25275790
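One of the normalizations mentioned above, the normalized spectral abundance factor (NSAF), divides each protein's spectral counts by its length and then by the sum of those ratios across the run, NSAF_i = (SpC_i / L_i) / sum_j (SpC_j / L_j). A small sketch of that calculation with toy numbers (not data from the protocol):

```python
# NSAF normalization of spectral counts; protein names, counts, and lengths
# below are placeholders.
proteins = {
    # name      (spectral counts, protein length in residues)
    "BAIT":     (120, 450),
    "PREY_A":   (35, 300),
    "PREY_B":   (8, 1200),
}

saf = {name: spc / length for name, (spc, length) in proteins.items()}
total = sum(saf.values())
nsaf = {name: value / total for name, value in saf.items()}

for name, value in nsaf.items():
    print(f"{name}: NSAF = {value:.4f}")
```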
van den Broek, Ellen G C; van Eijden, Ans J P M; Overbeek, Mathilde M; Kef, Sabina; Sterkenburg, Paula S; Schuengel, Carlo
2017-01-01
Secure parent-child attachment may help children to overcome the challenges of growing up with a visual or visual-and-intellectual impairment. A large literature exists that provides a blueprint for interventions that promote parental sensitivity and secure attachment. The Video-feedback Intervention to promote Positive Parenting (VIPP) is based on that blueprint. While it has been adapted to several specific at risk populations, children with visual impairment may require additional adjustments. This study aimed to identify the themes that should be addressed in adapting VIPP and similar interventions. A Delphi-consultation was conducted with 13 professionals in the field of visual impairment to select the themes for relationship-focused intervention. These themes informed a systematic literature search. Interaction, intersubjectivity, joint attention, exploration, play and specific behavior were the themes mentioned in the Delphi-group. Paired with visual impairment or vision disorders, infants or young children (and their parents) the search yielded 74 articles, making the six themes for intervention adaptation more specific and concrete. The rich literature on six visual impairment specific themes was dominated by the themes interaction, intersubjectivity, and joint attention. These themes need to be addressed in adapting intervention programs developed for other populations, such as VIPP which currently focuses on higher order constructs of sensitivity and attachment.
Krajcovicova, Lenka; Mikl, Michal; Marecek, Radek; Rektorova, Irena
2014-01-01
Changes in connectivity of the posterior node of the default mode network (DMN) were studied when switching from baseline to a cognitive task using functional magnetic resonance imaging. In all, 15 patients with mild to moderate Alzheimer's disease (AD) and 18 age-, gender-, and education-matched healthy controls (HC) participated in the study. Psychophysiological interactions analysis was used to assess the specific alterations in the DMN connectivity (deactivation-based) due to psychological effects from the complex visual scene encoding task. In HC, we observed task-induced connectivity decreases between the posterior cingulate and middle temporal and occipital visual cortices. These findings imply successful involvement of the ventral visual pathway during the visual processing in our HC cohort. In AD, involvement of the areas engaged in the ventral visual pathway was observed only in a small volume of the right middle temporal gyrus. Additional connectivity changes (decreases) in AD were present between the posterior cingulate and superior temporal gyrus when switching from baseline to task condition. These changes are probably related to both disturbed visual processing and the DMN connectivity in AD and reflect deficits and compensatory mechanisms within the large scale brain networks in this patient population. Studying the DMN connectivity using psychophysiological interactions analysis may provide a sensitive tool for exploring early changes in AD and their dynamics during the disease progression.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelenyuk, Alla; Imre, D.; Wilson, Jacqueline M.
2015-02-01
Understanding the effect of aerosols on climate requires knowledge of the size and chemical composition of individual aerosol particles - two fundamental properties that determine an aerosol’s optical properties and ability to serve as cloud condensation or ice nuclei. Here we present miniSPLAT, our new aircraft compatible single particle mass spectrometer, that measures in-situ and in real-time size and chemical composition of individual aerosol particles with extremely high sensitivity, temporal resolution, and sizing precision on the order of a monolayer. miniSPLAT operates in dual data acquisition mode to measure, in addition to single particle size and composition, particle number concentrations, size distributions, density, and asphericity with high temporal resolution. When compared to our previous instrument, SPLAT II, miniSPLAT has been significantly reduced in size, weight, and power consumption without loss in performance. We also present ND-Scope, our newly developed interactive visual analytics software package. ND-Scope is designed to explore and visualize the vast amount of complex, multidimensional data acquired by our single particle mass spectrometers, along with other aerosol and cloud characterization instruments on-board aircraft. We demonstrate that ND-Scope makes it possible to visualize the relationships between different observables and to view the data in a geo-spatial context, using the interactive and fully coupled Google Earth and Parallel Coordinates displays. Here we illustrate the utility of ND-Scope to visualize the spatial distribution of atmospheric particles of different compositions, and explore the relationship between individual particle composition and their activity as cloud condensation nuclei.
3D Visualization of Machine Learning Algorithms with Astronomical Data
NASA Astrophysics Data System (ADS)
Kent, Brian R.
2016-01-01
We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
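A compact sketch of the MST-based clustering described above: build a minimum spanning tree over 3D positions with SciPy, then cut edges longer than a chosen threshold so the remaining connected components form clusters. The random catalog and the cut threshold are assumptions for illustration.

```python
# Unsupervised clustering of 3D points by cutting long edges of a minimum
# spanning tree; positions and threshold are illustrative only.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

rng = np.random.default_rng(1)
points = np.vstack([rng.normal(0, 1, (50, 3)),    # two toy "galaxy groups"
                    rng.normal(8, 1, (50, 3))])

dist = squareform(pdist(points))          # dense pairwise distance matrix
mst = minimum_spanning_tree(dist)         # sparse matrix of MST edge weights

threshold = 3.0                           # cut edges longer than this
mst_cut = mst.copy()
mst_cut.data[mst_cut.data > threshold] = 0.0
mst_cut.eliminate_zeros()
n_clusters, labels = connected_components(mst_cut, directed=False)
print(n_clusters, np.bincount(labels))
```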
JPL Earth Science Center Visualization Multitouch Table
NASA Astrophysics Data System (ADS)
Kim, R.; Dodge, K.; Malhotra, S.; Chang, G.
2014-12-01
The JPL Earth Science Center Visualization Table combines specialized software and hardware to allow multitouch, multiuser, and remote display control, creating seamlessly integrated experiences for visualizing JPL missions and their remote sensing data. The software is fully GIS capable through time-aware OGC WMTS, using the Lunar Mapping and Modeling Portal as the GIS backend to continuously ingest and retrieve real-time remote sensing data and satellite location data. 55-inch and 82-inch unlimited-finger-count multitouch displays allow multiple users to explore JPL Earth missions and visualize remote sensing data through a very intuitive and interactive touch graphical user interface. To improve the integrated experience, the Earth Science Center Visualization Table team developed network streaming, which allows the table software to stream data visualizations to nearby remote displays through a computer network. The purpose of this visualization/presentation tool is not only to support Earth science operations; it is also specifically designed for education and public outreach and will significantly contribute to STEM. Our presentation will include an overview of our software and hardware and a showcase of our system.
Kumar, Rajendra; Sobhy, Haitham
2017-01-01
Hi-C experiments generate data in the form of large genome contact maps (Hi-C maps). These show that chromosomes are arranged in a hierarchy of three-dimensional compartments. But to understand how these compartments form and by how much they affect genetic processes such as gene regulation, biologists and bioinformaticians need efficient tools to visualize and analyze Hi-C data. However, this is technically challenging because these maps are big. In this paper, we address this problem, partly by implementing an efficient file format, and we have developed the genome contact map explorer platform. Apart from tools to process Hi-C data, such as normalization methods and a programmable interface, we made a graphical interface that lets users browse, scroll and zoom Hi-C maps to visually search for patterns in the Hi-C data. In the software, it is also possible to browse several maps simultaneously and plot related genomic data. The software is openly accessible to the scientific community. PMID:28973466
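The browsing pattern described above, fetching only the zoomed region of a large contact map from disk before rendering it, can be sketched with h5py and matplotlib; the file layout and dataset name here are assumptions, not the tool's actual format.

```python
# Read only a rectangular window of a large Hi-C contact map stored in HDF5,
# then display it; file and dataset names are hypothetical.
import h5py
import numpy as np
import matplotlib.pyplot as plt

# Create a small synthetic map so the example is self-contained.
with h5py.File("toy_hic.h5", "w") as f:
    f.create_dataset("contact_map",
                     data=np.random.poisson(2, size=(2000, 2000)),
                     chunks=(200, 200), compression="gzip")

# "Zoom": slice the on-disk dataset, loading only the requested window.
with h5py.File("toy_hic.h5", "r") as f:
    window = f["contact_map"][500:900, 500:900]

plt.imshow(np.log1p(window), cmap="Reds", origin="lower")
plt.title("Zoomed Hi-C region (log scale)")
plt.show()
```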
ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.
Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng
2017-08-30
While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they used, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance- and subset-level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.
Data visualization and analysis tools for the MAVEN mission
NASA Astrophysics Data System (ADS)
Harter, B.; De Wolfe, A. W.; Putnam, B.; Brain, D.; Chaffin, M.
2016-12-01
The Mars Atmospheric and Volatile Evolution (MAVEN) mission has been collecting data at Mars since September 2014. We have developed new software tools for exploring and analyzing the science data. Our open-source Python toolkit for working with data from MAVEN and other missions is based on the widely-used "tplot" IDL toolkit. We have replicated all of the basic tplot functionality in Python, and use the bokeh and matplotlib libraries to generate interactive line plots and spectrograms, providing additional functionality beyond the capabilities of IDL graphics. These Python tools are generalized to work with missions beyond MAVEN, and our software is available on Github. We have also been exploring 3D graphics as a way to better visualize the MAVEN science data and models. We have constructed a 3D visualization of MAVEN's orbit using the CesiumJS library, which not only allows viewing of MAVEN's orientation and position, but also allows the display of selected science data sets and their variation over time.
NASA Astrophysics Data System (ADS)
Liu, Z.; Acker, J. G.; Kempler, S. J.
2016-12-01
The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) is one of twelve NASA Science Mission Directorate (SMD) Data Centers that provide Earth science data, information, and services to research scientists, applications scientists, applications users, and students around the world. The GES DISC is the home (archive) of NASA Precipitation and Hydrology, as well as Atmospheric Composition and Dynamics remote sensing data and information. To facilitate Earth science data access, the GES DISC has been developing user-friendly data services for users at different levels. Among them, the Geospatial Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI, http://giovanni.gsfc.nasa.gov/) allows users to explore satellite-based data using sophisticated analyses and visualizations without downloading data and software, which is particularly suitable for novices to use NASA datasets in STEM activities. In this presentation, we will briefly introduce GIOVANNI and recommend datasets for STEM. Examples of using these datasets in STEM activities will be presented as well.
NASA Technical Reports Server (NTRS)
Liu, Z.; Acker, J.; Kempler, S.
2016-01-01
The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) is one of twelve NASA Science Mission Directorate (SMD) Data Centers that provide Earth science data, information, and services to users around the world including research and application scientists, students, citizen scientists, etc. The GES DISC is the home (archive) of remote sensing datasets for NASA Precipitation and Hydrology, Atmospheric Composition and Dynamics, etc. To facilitate Earth science data access, the GES DISC has been developing user-friendly data services for users at different levels in different countries. Among them, the Geospatial Interactive Online Visualization ANd aNalysis Infrastructure (Giovanni, http://giovanni.gsfc.nasa.gov/) allows users to explore satellite-based datasets using sophisticated analyses and visualization without downloading data and software, which is particularly suitable for novices (such as students) to use NASA datasets in STEM (science, technology, engineering and mathematics) activities. In this presentation, we will briefly introduce Giovanni along with examples for STEM activities.
Visual exploration of parameter influence on phylogenetic trees.
Hess, Martin; Bremm, Sebastian; Weissgraeber, Stephanie; Hamacher, Kay; Goesele, Michael; Wiemeyer, Josef; von Landesberger, Tatiana
2014-01-01
Evolutionary relationships between organisms are frequently derived as phylogenetic trees inferred from multiple sequence alignments (MSAs). The MSA parameter space is exponentially large, so tens of thousands of potential trees can emerge for each dataset. A proposed visual-analytics approach can reveal the parameters' impact on the trees. Given input trees created with different parameter settings, it hierarchically clusters the trees according to their structural similarity. The most important clusters of similar trees are shown together with their parameters. This view offers interactive parameter exploration and automatic identification of relevant parameters. Biologists applied this approach to real data of 16S ribosomal RNA and protein sequences of ion channels. It revealed which parameters affected the tree structures. This led to a more reliable selection of the best trees.
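The clustering step described above, grouping inferred trees by structural similarity, can be sketched with SciPy once a pairwise tree-distance matrix (for example Robinson-Foulds distances computed elsewhere) is available; the small matrix below is invented.

```python
# Hierarchical clustering of phylogenetic trees from a precomputed pairwise
# distance matrix; the matrix values are made up for illustration.
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

# Symmetric structural-distance matrix between five candidate trees.
D = np.array([[0, 2, 8, 8, 9],
              [2, 0, 7, 9, 8],
              [8, 7, 0, 3, 2],
              [8, 9, 3, 0, 4],
              [9, 8, 2, 4, 0]], dtype=float)

Z = linkage(squareform(D), method="average")     # condensed form -> dendrogram
clusters = fcluster(Z, t=2, criterion="maxclust")
print(clusters)   # two groups of structurally similar trees
```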
A Haptic-Enhanced System for Molecular Sensing
NASA Astrophysics Data System (ADS)
Comai, Sara; Mazza, Davide
The science of haptics has received enormous attention in the last decade. One of the major application trends of haptics technology is data visualization and training. In this paper, we present a haptically-enhanced system for manipulation and tactile exploration of molecules. The geometrical models of molecules are extracted either from theoretical or empirical data using file formats widely adopted in chemical and biological fields. The addition of information computed with computational chemistry tools allows users to feel the interaction forces between an explored molecule and a charge associated with the haptic device, and to visualize a huge amount of numerical data in a more comprehensible way. The developed tool can be used either for teaching or research purposes due to its high reliance on both theoretical and experimental data.
Applying Infant Massage Practices: A Qualitative Study
ERIC Educational Resources Information Center
Lappin, Grace; Kretschmer, Robert E.
2005-01-01
This study explored the dynamic interaction between a mother and her 11-month-old visually impaired infant before and after the mother was taught infant massage. After the mother learned infant massage, she had more appropriate physical contact with her infant, engaged with him within his field of vision, directly vocalized to him, and had a…
Interactive visual analysis promotes exploration of long-term ecological data
T.N. Pham; J.A. Jones; R. Metoyer; F.J. Swanson; R.J. Pabst
2013-01-01
Long-term ecological data are crucial in helping ecologists understand ecosystem function and environmental change. Nevertheless, these kinds of data sets are difficult to analyze because they are usually large, multivariate, and spatiotemporal. Although existing analysis tools such as statistical methods and spreadsheet software permit rigorous tests of pre-conceived...
Interactive Visual Tools as Triggers of Collaborative Reasoning in Entry-Level Pathology
ERIC Educational Resources Information Center
Nivala, Markus; Rystedt, Hans; Saljo, Roger; Kronqvist, Pauliina; Lehtinen, Erno
2012-01-01
The growing importance of medical imaging in everyday diagnostic practices poses challenges for medical education. While the emergence of novel imaging technologies offers new opportunities, many pedagogical questions remain. In the present study, we explore the use of a new tool, a virtual microscope, for the instruction and the collaborative…
Beyond simple charts: Design of visualizations for big health data
Ola, Oluwakemi; Sedig, Kamran
2016-01-01
Health data is often big data due to its high volume, low veracity, great variety, and high velocity. Big health data has the potential to improve productivity, eliminate waste, and support a broad range of tasks related to disease surveillance, patient care, research, and population health management. Interactive visualizations have the potential to amplify big data’s utilization. Visualizations can be used to support a variety of tasks, such as tracking the geographic distribution of diseases, analyzing the prevalence of disease, triaging medical records, predicting outbreaks, and discovering at-risk populations. Currently, many health visualization tools use simple charts, such as bar charts and scatter plots, that only represent few facets of data. These tools, while beneficial for simple perceptual and cognitive tasks, are ineffective when dealing with more complex sensemaking tasks that involve exploration of various facets and elements of big data simultaneously. There is need for sophisticated and elaborate visualizations that encode many facets of data and support human-data interaction with big data and more complex tasks. When not approached systematically, design of such visualizations is labor-intensive, and the resulting designs may not facilitate big-data-driven tasks. Conceptual frameworks that guide the design of visualizations for big data can make the design process more manageable and result in more effective visualizations. In this paper, we demonstrate how a framework-based approach can help designers create novel, elaborate, non-trivial visualizations for big health data. We present four visualizations that are components of a larger tool for making sense of large-scale public health data. PMID:28210416
Tuikkala, Johannes; Vähämaa, Heidi; Salmela, Pekka; Nevalainen, Olli S; Aittokallio, Tero
2012-03-26
Graph drawing is an integral part of many systems biology studies, enabling visual exploration and mining of large-scale biological networks. While a number of layout algorithms are available in popular network analysis platforms, such as Cytoscape, it remains poorly understood how well their solutions reflect the underlying biological processes that give rise to the network connectivity structure. Moreover, visualizations obtained using conventional layout algorithms, such as those based on the force-directed drawing approach, may become uninformative when applied to larger networks with dense or clustered connectivity structure. We implemented a modified layout plug-in, named Multilevel Layout, which applies the conventional layout algorithms within a multilevel optimization framework to better capture the hierarchical modularity of many biological networks. Using a wide variety of real life biological networks, we carried out a systematic evaluation of the method in comparison with other layout algorithms in Cytoscape. The multilevel approach provided both biologically relevant and visually pleasant layout solutions in most network types, hence complementing the layout options available in Cytoscape. In particular, it could improve drawing of large-scale networks of yeast genetic interactions and human physical interactions. In more general terms, the biological evaluation framework developed here enables one to assess the layout solutions from any existing or future graph drawing algorithm as well as to optimize their performance for a given network type or structure. By making use of the multilevel modular organization when visualizing biological networks, together with the biological evaluation of the layout solutions, one can generate convenient visualizations for many network biology applications.
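A simplified two-level version of the multilevel idea, using networkx: detect communities, lay out the coarse community graph first, then run a local spring layout for each community centered on its coarse position. This is a sketch of the general approach, not the Multilevel Layout plug-in itself.

```python
# Two-level "multilevel" layout sketch: coarse layout of communities, then
# local layouts of each community around its coarse position.
import networkx as nx

G = nx.connected_caveman_graph(4, 8)   # toy modular network

communities = list(nx.algorithms.community.greedy_modularity_communities(G))

# Level 1: one node per community, with edges where any members are connected.
coarse = nx.Graph()
coarse.add_nodes_from(range(len(communities)))
member_of = {v: i for i, c in enumerate(communities) for v in c}
for u, v in G.edges():
    if member_of[u] != member_of[v]:
        coarse.add_edge(member_of[u], member_of[v])
coarse_pos = nx.spring_layout(coarse, scale=10.0, seed=1)

# Level 2: lay out each community locally, centered on its coarse position.
pos = {}
for i, c in enumerate(communities):
    pos.update(nx.spring_layout(G.subgraph(c), scale=1.5,
                                center=coarse_pos[i], seed=1))

nx.draw(G, pos, node_size=40)
```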
CytoCom: a Cytoscape app to visualize, query and analyse disease comorbidity networks.
Moni, Mohammad Ali; Xu, Haoming; Liò, Pietro
2015-03-15
CytoCom is an interactive plugin for Cytoscape that can be used to search, explore, analyse and visualize the human disease comorbidity network. It represents disease-disease associations in terms of bipartite graphs and provides International Classification of Diseases, Ninth Revision (ICD9)-centric and disease-name-centric views of disease information. It allows users to find associations between diseases based on two measures: Relative Risk (RR) and φ-correlation values. In the disease network, the size of each node is based on the prevalence of that disease. CytoCom is capable of clustering the disease network based on the ICD9 disease category. It provides user-friendly access that facilitates exploration of human diseases, and finds additional associated diseases when the user double-clicks a node in the existing network. Additional comorbid diseases are then connected to the existing network. It is able to assist users in the interpretation and exploration of human diseases through a variety of built-in functions. Moreover, CytoCom permits multi-colouring of disease nodes according to standard disease classification for expedient visualization. © The Author 2014. Published by Oxford University Press.
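The Relative Risk measure mentioned above is commonly defined for comorbidity networks as RR_ij = C_ij * N / (P_i * P_j), where C_ij is the number of patients having both diseases, P_i and P_j the individual disease prevalences, and N the total number of patients. A small sketch of that calculation from a binary patient-disease matrix (the data are invented):

```python
# Relative risk of co-occurrence for every disease pair from a binary
# patient x disease matrix; values are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
N, D = 1000, 5                               # patients, diseases
X = (rng.random((N, D)) < 0.1).astype(int)   # 1 = patient has the disease

prevalence = X.sum(axis=0)                   # P_i
co_occurrence = X.T @ X                      # C_ij (diagonal holds P_i)

with np.errstate(divide="ignore", invalid="ignore"):
    rr = co_occurrence * N / np.outer(prevalence, prevalence)

np.fill_diagonal(rr, np.nan)                 # self-pairs are not informative
print(np.round(rr, 2))
```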
New Tools for Sea Ice Data Analysis and Visualization: NSIDC's Arctic Sea Ice News and Analysis
NASA Astrophysics Data System (ADS)
Vizcarra, N.; Stroeve, J.; Beam, K.; Beitler, J.; Brandt, M.; Kovarik, J.; Savoie, M. H.; Skaug, M.; Stafford, T.
2017-12-01
Arctic sea ice has long been recognized as a sensitive climate indicator and has undergone a dramatic decline over the past thirty years. Antarctic sea ice continues to be an intriguing and active field of research. The National Snow and Ice Data Center's Arctic Sea Ice News & Analysis (ASINA) offers researchers and the public a transparent view of sea ice data and analysis. We have released a new set of tools for sea ice analysis and visualization. In addition to Charctic, our interactive sea ice extent graph, the new Sea Ice Data and Analysis Tools page provides access to Arctic and Antarctic sea ice data organized in seven different data workbooks, updated daily or monthly. An interactive tool lets scientists, or the public, quickly compare changes in ice extent and location. Another tool allows users to map trends, anomalies, and means for user-defined time periods. Animations of September Arctic and Antarctic monthly average sea ice extent and concentration may also be accessed from this page. Our tools help the NSIDC scientists monitor and understand sea ice conditions in near real time. They also allow the public to easily interact with and explore sea ice data. Technical innovations in our data center helped NSIDC quickly build these tools and more easily maintain them. The tools were made publicly accessible to meet the desire from the public and members of the media to access the numbers and calculations that power our visualizations and analysis. This poster explores these tools and how other researchers, the media, and the general public are using them.
BioSIGHT: Interactive Visualization Modules for Science Education
NASA Technical Reports Server (NTRS)
Wong, Wee Ling
1998-01-01
Redefining science education to harness emerging integrated media technologies with innovative pedagogical goals represents a unique challenge. The Integrated Media Systems Center (IMSC) is the only engineering research center in the area of multimedia and creative technologies sponsored by the National Science Foundation. The research program at IMSC is focused on developing advanced technologies that address human-computer interfaces, database management, and high-speed network capabilities. The BioSIGHT project at IMSC is a demonstration technology project in the area of education that seeks to address how such emerging multimedia technologies can make an impact on science education. The scope of this project will help solidify NASA's commitment to the development of innovative educational resources that promote science literacy for our students and the general population alike. These issues must be addressed as NASA marches toward the goal of enabling human space exploration that requires an understanding of life sciences in space. The IMSC BioSIGHT lab was established with the purpose of developing a novel methodology that will map a high school biology curriculum into a series of interactive visualization modules that can be easily incorporated into a space biology curriculum. Fundamental concepts in general biology must be mastered in order to allow a better understanding and application of space biology. Interactive visualization is a powerful component that can capture the students' imagination, facilitate their assimilation of complex ideas, and help them develop integrated views of biology. These modules will augment the role of the teacher and will establish the value of student-centered interactivity, both in an individual setting as well as in a collaborative learning environment. Students will be able to interact with the content material, explore new challenges, and perform virtual laboratory simulations. The BioSIGHT effort is truly cross-disciplinary in nature and requires expertise from many areas including Biology, Computer Science, Electrical Engineering, Education, and the Cognitive Sciences. The BioSIGHT team includes a scientific illustrator, an educational software designer, and computer programmers, as well as IMSC graduate and undergraduate students.
Interactive Streamline Exploration and Manipulation Using Deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, Xin; Chen, Chun-Ming; Shen, Han-Wei
2015-01-12
Occlusion presents a major challenge in visualizing three-dimensional flow fields with streamlines. Displaying too many streamlines at once makes it difficult to locate interesting regions, but displaying too few streamlines risks missing important features. A more ideal streamline exploration model is to allow the viewer to freely move across the field that has been populated with interesting streamlines and pull away the streamlines that cause occlusion so that the viewer can inspect the hidden ones in detail. In this paper, we present a streamline deformation algorithm that supports such user-driven interaction with three-dimensional flow fields. We define a view-dependent focus+context technique that moves the streamlines occluding the focus area using a novel displacement model. To preserve the context surrounding the user-chosen focus area, we propose two shape models to define the transition zone for the surrounding streamlines, and the displacement of the contextual streamlines is solved interactively with a goal of preserving their shapes as much as possible. Based on our deformation model, we design an interactive streamline exploration tool using a lens metaphor. Our system runs interactively so that users can move their focus and examine the flow field freely.
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow.
Wongsuphasawat, Kanit; Smilkov, Daniel; Wexler, James; Wilson, Jimbo; Mane, Dandelion; Fritz, Doug; Krishnan, Dilip; Viegas, Fernanda B; Wattenberg, Martin
2018-01-01
We present a design study of the TensorFlow Graph Visualizer, part of the TensorFlow machine intelligence platform. This tool helps users understand complex machine learning architectures by visualizing their underlying dataflow graphs. The tool works by applying a series of graph transformations that enable standard layout techniques to produce a legible interactive diagram. To declutter the graph, we decouple non-critical nodes from the layout. To provide an overview, we build a clustered graph using the hierarchical structure annotated in the source code. To support exploration of nested structure on demand, we perform edge bundling to enable stable and responsive cluster expansion. Finally, we detect and highlight repeated structures to emphasize a model's modular composition. To demonstrate the utility of the visualizer, we describe example usage scenarios and report user feedback. Overall, users find the visualizer useful for understanding, debugging, and sharing the structures of their models.
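The name-scope clustering step described above can be illustrated with a toy example. The sketch below is in Python rather than the tool's actual code, and the operation names are made up; it simply groups dataflow edges by the top-level scope in each operation name to build the coarse overview graph.

```python
# Toy sketch of hierarchical name-scope clustering for a dataflow graph.
from collections import defaultdict

def cluster_by_scope(edges):
    """edges: iterable of (src_op_name, dst_op_name) pairs with '/'-separated scopes."""
    scope = lambda name: name.split("/")[0]
    cluster_edges = defaultdict(int)
    for src, dst in edges:
        s, d = scope(src), scope(dst)
        if s != d:                      # keep only edges that cross cluster boundaries
            cluster_edges[(s, d)] += 1  # edge weight = number of underlying dataflow edges
    return dict(cluster_edges)

print(cluster_by_scope([
    ("input/x", "layer1/matmul"),
    ("layer1/matmul", "layer1/relu"),
    ("layer1/relu", "loss/softmax"),
]))
# {('input', 'layer1'): 1, ('layer1', 'loss'): 1}
```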
Towards a visual modeling approach to designing microelectromechanical system transducers
NASA Astrophysics Data System (ADS)
Dewey, Allen; Srinivasan, Vijay; Icoz, Evrim
1999-12-01
In this paper, we address initial design capture and system conceptualization of microelectromechanical system transducers based on visual modeling and design. Visual modeling frames the task of generating hardware description language (analog and digital) component models in a manner similar to the task of generating software programming language applications. A structured topological design strategy is employed, whereby microelectromechanical foundry cell libraries are utilized to facilitate the design process of exploring candidate cells (topologies), varying key aspects of the transduction for each topology, and determining which topology best satisfies design requirements. Coupled-energy microelectromechanical system characterizations at a circuit level of abstraction are presented that are based on branch constitutive relations and an overall system of simultaneous differential and algebraic equations. The resulting design methodology is called visual integrated-microelectromechanical VHDL-AMS interactive design (VHDL-AMS is the VHSIC Hardware Description Language with Analog and Mixed-Signal extensions).
Intelligent Visualization of Geo-Information on the Future Web
NASA Astrophysics Data System (ADS)
Slusallek, P.; Jochem, R.; Sons, K.; Hoffmann, H.
2012-04-01
Visualization is a key component of the "Observation Web" and will become even more important in the future as geo data becomes more widely accessible. The common statement that "Data that cannot be seen, does not exist" is especially true for non-experts, like most citizens. The Web provides the most interesting platform for making data easily and widely available. However, today's Web is not well suited for the interactive visualization and exploration that is often needed for geo data. Support for 3D data was added only recently and at an extremely low level (WebGL), but even the 2D visualization capabilities of HTML (e.g., images, canvas, SVG) are rather limited, especially regarding interactivity. We have developed XML3D as an extension to HTML-5. It allows for compactly describing 2D and 3D data directly as elements of an HTML-5 document. All graphics elements are part of the Document Object Model (DOM) and can be manipulated via the same set of DOM events and methods that millions of Web developers use on a daily basis. Thus, XML3D makes highly interactive 2D and 3D visualization easily usable, not only for geo data. XML3D is supported by any WebGL-capable browser, but we also provide native implementations in Firefox and Chromium. As an example, we show how OpenStreetMap data can be mapped directly to XML3D and visualized interactively in any Web page. We show how this data can be easily augmented with additional data from the Web via a few lines of JavaScript. We also show how embedded semantic data (via RDFa) allows for linking the visualization back to the data's origin, thus providing an immersive interface for interacting with and modifying the original data. XML3D is used as key input for standardization within the W3C Community Group on "Declarative 3D for the Web" chaired by the DFKI and has recently been selected as one of the Generic Enablers for the EU Future Internet initiative.
New solutions for climate network visualization
NASA Astrophysics Data System (ADS)
Nocke, Thomas; Buschmann, Stefan; Donges, Jonathan F.; Marwan, Norbert
2016-04-01
An increasing number of climate and climate impact research methods deal with geo-referenced networks, including energy, trade, supply-chain, disease dissemination and climatic tele-connection networks. At the same time, the size and complexity of these networks increase, resulting in networks of more than a hundred thousand or even millions of edges, which are often temporally evolving, have additional data at nodes and edges, and can consist of multiple layers even in real 3D. This poses challenges for both the static representation and the interactive exploration of these networks, first of all avoiding edge clutter ("edge spaghetti") and allowing interactivity even for unfiltered networks. Within this presentation, we illustrate potential solutions to these challenges. Therefore, we give a glimpse of a questionnaire conducted with climate and complex-system scientists about their network visualization requirements, and of a review of available state-of-the-art visualization techniques and tools for this purpose (see also Nocke et al., 2015). In the main part, we present alternative visualization solutions for several use cases (global, regional, and multi-layered climate networks) including alternative geographic projections, edge bundling, and 3-D network support (based on CGV and GTX tools), and implementation details to reach interactive frame rates. References: Nocke, T., S. Buschmann, J. F. Donges, N. Marwan, H.-J. Schulz, and C. Tominski: Review: Visual analytics of climate networks, Nonlinear Processes in Geophysics, 22, 545-570, doi:10.5194/npg-22-545-2015, 2015
AllAboard: Visual Exploration of Cellphone Mobility Data to Optimise Public Transport.
Di Lorenzo, G; Sbodio, M; Calabrese, F; Berlingerio, M; Pinelli, F; Nair, R
2016-02-01
The deep penetration of mobile phones offers cities the ability to opportunistically monitor citizens' mobility and use data-driven insights to better plan and manage services. With large-scale data on mobility patterns, operators can move away from costly, mostly survey-based transportation planning processes to a more data-centric view that places the instrumented user at the center of development. In this framework, using mobile phone data to perform transit analysis and optimization represents a new frontier with significant societal impact, especially in developing countries. In this paper we present AllAboard, an intelligent tool that analyses cellphone data to help city authorities in visually exploring urban mobility and optimizing public transport. This is performed within a self-contained tool, as opposed to current solutions which rely on a combination of several distinct tools for analysis, reporting, optimisation and planning. An interactive user interface allows transit operators to visually explore the travel demand in both space and time, correlate it with the transit network, and evaluate the quality of service that a transit network provides to the citizens at a very fine grain. Operators can visually test scenarios for transit network improvements, and compare the expected impact on the travellers' experience. The system has been tested using real telecommunication data for the city of Abidjan, Ivory Coast, and evaluated from a data mining, optimisation and user perspective.
NASA Astrophysics Data System (ADS)
Herold, Julia; Abouna, Sylvie; Zhou, Luxian; Pelengaris, Stella; Epstein, David B. A.; Khan, Michael; Nattkemper, Tim W.
2009-02-01
In recent years, bioimaging has turned from qualitative measurements towards a high-throughput, high-content modality, providing multiple variables for each biological sample analyzed. We present a system which combines machine learning-based semantic image annotation and visual data mining to analyze such new multivariate bioimage data. Machine learning is employed for automatic semantic annotation of regions of interest. The annotation is the prerequisite for a biological object-oriented exploration of the feature space derived from the image variables. With the aid of visual data mining, the obtained data can be explored simultaneously in the image as well as in the feature domain. Especially when little is known of the underlying data, for example in the case of exploring the effects of a drug treatment, visual data mining can greatly aid the process of data evaluation. We demonstrate how our system is used for image evaluation to obtain information relevant to diabetes studies and the screening of new anti-diabetes treatments. Cells of the islets of Langerhans and the whole pancreas in pancreas tissue samples are annotated, and object-specific molecular features are extracted from aligned multichannel fluorescence images. These are interactively evaluated for cell type classification in order to determine the cell number and mass. Only a few parameters need to be specified, which makes the system usable for non-computer experts as well and allows for high-throughput analysis.
Giovanni: The Bridge between Data and Science
NASA Technical Reports Server (NTRS)
Shen, Suhung; Lynnes, Christopher; Kempler, Steven J.
2012-01-01
NASA Giovanni (Goddard Interactive Online Visualization ANd aNalysis Infrastructure) is a web-based remote sensing and model data visualization and analysis system developed by the Goddard Earth Sciences Data and Information Services Center (GES DISC). This web-based tool facilitates data discovery, exploration and analysis of large amounts of global and regional data sets covering atmospheric dynamics, atmospheric chemistry, hydrology, oceanography, and the land surface. Data analysis functions include lat-lon maps, time series, scatter plots, correlation maps, differences, cross-sections, vertical profiles, and animations, among others. Visualization options enable comparisons of multiple variables and easier refinement. Recently, new features have been developed, such as interactive scatter plots and maps. The performance is also being improved, in some cases by an order of magnitude for certain analysis functions with optimized software. We are working toward merging current Giovanni portals into a single omnibus portal with all variables in one (virtual) location to help users find a variable easily and enhance the intercomparison capability.
The ‘when’ parietal pathway explored by lesion studies
Battelli, Lorella; Walsh, Vincent; Pascual-Leone, Alvaro; Cavanagh, Patrick
2016-01-01
The perception of events in space and time is at the root of our interactions with the environment. The precision with which we perceive visual events in time enables us to act upon objects with great accuracy, and the loss of such functions due to brain lesions can be catastrophic. We outline a visual timing mechanism that deals with the trajectory of an object’s existence across time, a critical function when keeping track of multiple objects that temporally overlap or occur sequentially. Recent evidence suggests these functions are served by an extended network of areas, which we call the ‘when’ pathway. Here we show that the when pathway is distinct from, and interacts with, the well-established ‘where’ and ‘what’ pathways. PMID:18708141
Freezing behavior as a response to sexual visual stimuli as demonstrated by posturography.
Mouras, Harold; Lelard, Thierry; Ahmaidi, Said; Godefroy, Olivier; Krystkowiak, Pierre
2015-01-01
Posturographic changes in motivational conditions remain largely unexplored in the context of embodied cognition. Over the last decade, sexual motivation has been used as a good canonical working model to study motivated social interactions. The objective of this study was to explore posturographic variations in response to visual sexual videos as compared to neutral videos. Our results demonstrate a freezing-type response to sexually explicit stimuli compared with other conditions, as shown by significantly decreased standard deviations of (i) the center-of-pressure displacement along the mediolateral and anteroposterior axes and (ii) the center of pressure's displacement surface. These results underscore the complexity of the motor correlates of sexual motivation, considered to be a canonical functional context for studying the motor correlates of motivated social interactions.
Agrafiotis, Dimitris K; Wiener, John J M
2010-07-08
We introduce Scaffold Explorer, an interactive tool that allows medicinal chemists to define hierarchies of chemical scaffolds and use them to explore their project data. Scaffold Explorer allows the user to construct a tree, where each node corresponds to a specific scaffold. Each node can have multiple children, each of which represents a more refined substructure relative to its parent node. Once the tree is defined, it can be mapped onto any collection of compounds and be used as a navigational tool to explore structure-activity relationships (SAR) across different chemotypes. The rich visual analytics of Scaffold Explorer afford the user a "bird's-eye" view of the chemical space spanned by a particular data set, allow any physicochemical property or biological activity of interest to be mapped onto the individual scaffold nodes, aggregate the properties of the compounds represented by these nodes, and help the user quickly distinguish promising chemotypes from less interesting or problematic ones. Unlike previous approaches, which focused on automated extraction and classification of scaffolds, the utility of the new tool rests on its interactivity and ability to accommodate the medicinal chemists' intuition by allowing the use of arbitrary substructures containing variable atoms, bonds, and/or substituents such as those employed in substructure search.
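The core mapping step, assigning compounds to user-defined scaffold nodes by substructure matching, can be sketched as follows. This is an illustrative stand-in rather than Scaffold Explorer's implementation; it assumes RDKit is available and uses two hypothetical scaffold definitions.

```python
# Hedged sketch: map compounds onto a (tiny, hypothetical) scaffold hierarchy
# by SMARTS substructure matching with RDKit.
from rdkit import Chem

scaffold_tree = {
    "benzene": Chem.MolFromSmarts("c1ccccc1"),          # parent scaffold
    "benzamide": Chem.MolFromSmarts("c1ccccc1C(=O)N"),  # more refined child scaffold
}

def assign_scaffolds(smiles_list, tree):
    assignments = {}
    for smi in smiles_list:
        mol = Chem.MolFromSmiles(smi)
        assignments[smi] = [name for name, patt in tree.items()
                            if mol is not None and mol.HasSubstructMatch(patt)]
    return assignments

print(assign_scaffolds(["c1ccccc1C(=O)NC", "CCO"], scaffold_tree))
# {'c1ccccc1C(=O)NC': ['benzene', 'benzamide'], 'CCO': []}
```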
Kim, Seokyeon; Jeong, Seongmin; Woo, Insoo; Jang, Yun; Maciejewski, Ross; Ebert, David S
2018-03-01
Geographic visualization research has focused on a variety of techniques to represent and explore spatiotemporal data. The goal of those techniques is to enable users to explore events and interactions over space and time in order to facilitate the discovery of patterns, anomalies and relationships within the data. However, it is difficult to extract and visualize data flow patterns over time for non-directional statistical data without trajectory information. In this work, we develop a novel flow analysis technique to extract, represent, and analyze flow maps of non-directional spatiotemporal data unaccompanied by trajectory information. We estimate a continuous distribution of these events over space and time, and extract flow fields for spatial and temporal changes utilizing a gravity model. Then, we visualize the spatiotemporal patterns in the data by employing flow visualization techniques. The user is presented with temporal trends of geo-referenced discrete events on a map. As such, overall spatiotemporal data flow patterns help users analyze geo-referenced temporal events, such as disease outbreaks, crime patterns, etc. To validate our model, we discard the trajectory information in an origin-destination dataset, apply our technique to the data, and compare the derived trajectories with the originals. Finally, we present spatiotemporal trend analysis for statistical datasets including Twitter data, maritime search and rescue events, and syndromic surveillance.
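As a loose illustration of the gravity-model idea (not the authors' exact formulation), the sketch below derives a flow vector field from two successive event-density grids: each cell is pulled toward cells whose density increases, with an inverse-square distance weighting. The grid resolution and the epsilon regularizer are assumptions of the example.

```python
# Loose sketch: gravity-style flow field from non-directional density changes.
import numpy as np

def gravity_flow(density_t0, density_t1, eps=1e-6):
    ny, nx = density_t0.shape
    drho = density_t1 - density_t0                      # local density change
    ys, xs = np.mgrid[0:ny, 0:nx]
    flow = np.zeros((ny, nx, 2))
    for i in range(ny):
        for j in range(nx):
            dx, dy = xs - j, ys - i
            dist2 = dx**2 + dy**2 + eps
            w = drho / dist2                            # gravity-like weights
            flow[i, j, 0] = np.sum(w * dx / np.sqrt(dist2))
            flow[i, j, 1] = np.sum(w * dy / np.sqrt(dist2))
    return flow

rng = np.random.default_rng(0)
f = gravity_flow(rng.random((20, 20)), rng.random((20, 20)))
print(f.shape)  # (20, 20, 2): one flow vector per grid cell
```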
Soto, Axel J; Zerva, Chrysoula; Batista-Navarro, Riza; Ananiadou, Sophia
2018-04-15
Pathway models are valuable resources that help us understand the various mechanisms underpinning complex biological processes. Their curation is typically carried out through manual inspection of published scientific literature to find information relevant to a model, which is a laborious and knowledge-intensive task. Furthermore, models curated manually cannot be easily updated and maintained with new evidence extracted from the literature without automated support. We have developed LitPathExplorer, a visual text analytics tool that integrates advanced text mining, semi-supervised learning and interactive visualization, to facilitate the exploration and analysis of pathway models using statements (i.e. events) extracted automatically from the literature and organized according to levels of confidence. LitPathExplorer supports pathway modellers and curators alike by: (i) extracting events from the literature that corroborate existing models with evidence; (ii) discovering new events which can update models; and (iii) providing a confidence value for each event that is automatically computed based on linguistic features and article metadata. Our evaluation of event extraction showed a precision of 89% and a recall of 71%. Evaluation of our confidence measure, when used for ranking sampled events, showed an average precision ranging between 61 and 73%, which can be improved to 95% when the user is involved in the semi-supervised learning process. Qualitative evaluation using pair analytics based on the feedback of three domain experts confirmed the utility of our tool within the context of pathway model exploration. LitPathExplorer is available at http://nactem.ac.uk/LitPathExplorer_BI/. sophia.ananiadou@manchester.ac.uk. Supplementary data are available at Bioinformatics online.
Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady
2015-01-01
The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material presentation formats, spatial abilities, and anatomical tasks. First, to understand the cognitive challenges a novice learner would be faced with when first exposed to 3D anatomical content, a six-step cognitive task analysis was developed. Following this, an experimental study was conducted to explore how presentation formats (dynamic vs. static visualizations) support learning of functional anatomy, and affect subsequent anatomical tasks derived from the cognitive task analysis. A second aim was to investigate the interplay between spatial abilities (spatial visualization and spatial relation) and presentation formats when the functional anatomy of a 3D scapula and the associated shoulder flexion movement are learned. Findings showed no main effect of the presentation formats on performances, but revealed the predictive influence of spatial visualization and spatial relation abilities on performance. However, an interesting interaction between presentation formats and spatial relation ability for a specific anatomical task was found. This result highlighted the influence of presentation formats when spatial abilities are involved as well as the differentiated influence of spatial abilities on anatomical tasks. © 2015 American Association of Anatomists.
Experiencing the Sights, Smells, Sounds, and Climate of Southern Italy in VR.
Manghisi, Vito M; Fiorentino, Michele; Gattullo, Michele; Boccaccio, Antonio; Bevilacqua, Vitoantonio; Cascella, Giuseppe L; Dassisti, Michele; Uva, Antonio E
2017-01-01
This article explores what it takes to make interactive computer graphics and VR attractive as a promotional vehicle, from the points of view of tourism agencies and the tourists themselves. The authors exploited current VR and human-machine interface (HMI) technologies to develop an interactive, innovative, and attractive user experience called the Multisensory Apulia Touristic Experience (MATE). The MATE system implements a natural gesture-based interface and multisensory stimuli, including visuals, audio, smells, and climate effects.
A new visual navigation system for exploring biomedical Open Educational Resource (OER) videos.
Zhao, Baoquan; Xu, Songhua; Lin, Shujin; Luo, Xiaonan; Duan, Lian
2016-04-01
Biomedical videos as open educational resources (OERs) are increasingly proliferating on the Internet. Unfortunately, seeking personally valuable content from among the vast corpus of quality yet diverse OER videos is nontrivial due to limitations of today's keyword- and content-based video retrieval techniques. To address this need, this study introduces a novel visual navigation system that facilitates users' information seeking from biomedical OER videos in mass quantity by interactively offering visual and textual navigational clues that are both semantically revealing and user-friendly. The authors collected and processed around 25 000 YouTube videos, which collectively last for a total length of about 4000 h, in the broad field of biomedical sciences for our experiment. For each video, its semantic clues are first extracted automatically through computationally analyzing audio and visual signals, as well as text either accompanying or embedded in the video. These extracted clues are subsequently stored in a metadata database and indexed by a high-performance text search engine. During the online retrieval stage, the system renders video search results as dynamic web pages using a JavaScript library that allows users to interactively and intuitively explore video content both efficiently and effectively. The authors produced a prototype implementation of the proposed system, which is publicly accessible at https://patentq.njit.edu/oer. To examine the overall advantage of the proposed system for exploring biomedical OER videos, the authors further conducted a user study of a modest scale. The study results encouragingly demonstrate the functional effectiveness and user-friendliness of the new system for facilitating information seeking from and content exploration among massive biomedical OER videos. Using the proposed tool, users can efficiently and effectively find videos of interest, precisely locate video segments delivering personally valuable information, as well as intuitively and conveniently preview essential content of a single or a collection of videos. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Perrone-Bertolotti, Marcela; Lemonnier, Sophie; Baciu, Monica
2013-01-01
HIGHLIGHTS: (i) The redundant bilateral visual presentation of verbal stimuli decreases asymmetry and increases the cooperation between the two hemispheres. (ii) The increased cooperation between the hemispheres is related to semantic information during lexical processing. (iii) The inter-hemispheric interaction is represented by both inhibition and cooperation. This study explores inter-hemispheric interaction (IHI) during a lexical decision task by using a behavioral approach, the bilateral presentation of stimuli within a divided visual field experiment. Previous studies have shown that compared to unilateral presentation, the bilateral redundant (BR) presentation decreases the inter-hemispheric asymmetry and facilitates the cooperation between hemispheres. However, it is still poorly understood which type of information facilitates this cooperation. In the present study, verbal stimuli were presented unilaterally (left or right visual hemi-field successively) and bilaterally (left and right visual hemi-field simultaneously). Moreover, during the bilateral presentation of stimuli, we manipulated the relationship between target and distractors in order to specify the type of information which modulates the IHI. Thus, three types of information were manipulated: perceptual, semantic, and decisional, respectively named pre-lexical, lexical and post-lexical processing. Our results revealed left hemisphere (LH) lateralization during the lexical decision task. In terms of inter-hemisphere interaction, the perceptual and decision-making information increased the inter-hemispheric asymmetry, suggesting the inhibition of one hemisphere upon the other. In contrast, semantic information decreased the inter-hemispheric asymmetry, suggesting cooperation between the hemispheres. We discussed our results according to current models of IHI and concluded that cerebral hemispheres interact and communicate according to various excitatory and inhibitory mechanisms, all which depend on specific processes and various levels of word processing.
NASA Astrophysics Data System (ADS)
Hoteit, I.; Hollt, T.; Hadwiger, M.; Knio, O. M.; Gopalakrishnan, G.; Zhan, P.
2016-02-01
Ocean reanalyses and forecasts are nowadays generated by combining ensemble simulations with data assimilation techniques. Most of these techniques resample the ensemble members after each assimilation cycle. Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially with the number of assimilation cycles. In general, a single possible path is not of interest, but only the probabilities that any point in space might be reached by a particle at some point in time. We present an approach using probability-weighted piecewise particle trajectories to allow for interactive probability mapping. This is achieved by binning the domain and splitting up the tracing process into the individual assimilation cycles, so that particles that fall into the same bin after a cycle can be treated as a single particle with a larger probability as input for the next cycle. As a result we lose the ability to track individual particles, but we can create probability maps for any desired seed at interactive rates. The technique is integrated in an interactive visualization system that enables the visual analysis of the particle traces side by side with other forecast variables, such as the sea surface height, and their corresponding behavior over time. By harnessing the power of modern graphics processing units (GPUs) for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real-time, view specific parameter settings or simulation models and move between different spatial or temporal regions without delay. In addition our system provides advanced visualizations to highlight the uncertainty, or show the complete distribution of the simulations at user-defined positions over the complete time series of the domain.
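A minimal one-dimensional sketch of the binned, probability-weighted tracing described above is given below; the domain, time step and ensemble sizes are arbitrary assumptions. Particles that land in the same bin after a cycle are merged into a single particle whose probability is the sum of its predecessors, so the work per cycle stays bounded no matter how many assimilation cycles follow.

```python
# Sketch: probability mapping with binned, probability-weighted piecewise trajectories.
import numpy as np

def probability_map(velocity_ensembles, n_bins=50, domain=(0.0, 1.0), seed_bin=25, dt=0.01):
    """velocity_ensembles: list over assimilation cycles; each entry is a list of
    1-D velocity fields (one per ensemble member) sampled on the bin centers."""
    lo, hi = domain
    centers = (np.arange(n_bins) + 0.5) * (hi - lo) / n_bins + lo
    prob = np.zeros(n_bins)
    prob[seed_bin] = 1.0                               # seed particle with probability 1
    for members in velocity_ensembles:                 # one assimilation cycle
        new_prob = np.zeros(n_bins)
        w_member = 1.0 / len(members)
        for b in np.nonzero(prob)[0]:                  # only occupied bins carry particles
            for vel in members:
                x = centers[b] + vel[b] * dt           # advect one piecewise trajectory
                dest = int(np.clip((x - lo) / (hi - lo) * n_bins, 0, n_bins - 1))
                new_prob[dest] += prob[b] * w_member   # merge particles landing in one bin
        prob = new_prob
    return prob

rng = np.random.default_rng(1)
cycles = [[rng.normal(1.0, 0.3, 50) for _ in range(4)] for _ in range(10)]
print(probability_map(cycles).round(2))
```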
Helo, Andrea; van Ommen, Sandrien; Pannasch, Sebastian; Danteny-Dordoigne, Lucile; Rämä, Pia
2017-11-01
Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while exploring everyday scenes which either contained an inconsistent (e.g., soap on a breakfast table) or consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance and clutter of the scene affected looking times in the toddler group during the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, while toddlers with higher vocabulary skills looked equally long at both consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers. Copyright © 2017 Elsevier Inc. All rights reserved.
Visual Analysis of MOOC Forums with iForum.
Fu, Siwei; Zhao, Jian; Cui, Weiwei; Qu, Huamin
2017-01-01
Discussion forums of Massive Open Online Courses (MOOC) provide great opportunities for students to interact with instructional staff as well as other students. Exploration of MOOC forum data can offer valuable insights for these staff to enhance the course and prepare the next release. However, it is challenging due to the large, complicated, and heterogeneous nature of relevant datasets, which contain multiple dynamically interacting objects such as users, posts, and threads, each one including multiple attributes. In this paper, we present a design study for developing an interactive visual analytics system, called iForum, that allows for effectively discovering and understanding temporal patterns in MOOC forums. The design study was conducted with three domain experts in an iterative manner over one year, including a MOOC instructor and two official teaching assistants. iForum offers a set of novel visualization designs for presenting the three interleaving aspects of MOOC forums (i.e., posts, users, and threads) at three different scales. To demonstrate the effectiveness and usefulness of iForum, we describe a case study involving field experts, in which they use iForum to investigate real MOOC forum data for a course on JAVA programming.
Computer-Based Tools for Inquiry in Undergraduate Classrooms: Results from the VGEE
NASA Astrophysics Data System (ADS)
Pandya, R. E.; Bramer, D. J.; Elliott, D.; Hay, K. E.; Mallaiahgari, L.; Marlino, M. R.; Middleton, D.; Ramamurhty, M. K.; Scheitlin, T.; Weingroff, M.; Wilhelmson, R.; Yoder, J.
2002-05-01
The Visual Geophysical Exploration Environment (VGEE) is a suite of computer-based tools designed to help learners connect observable, large-scale geophysical phenomena to underlying physical principles. Technologically, this connection is mediated by Java-based interactive tools: a multi-dimensional visualization environment, authentic scientific data-sets, concept models that illustrate fundamental physical principles, and an interactive web-based work management system for archiving and evaluating learners' progress. Our preliminary investigations showed, however, that the tools alone are not sufficient to empower undergraduate learners; learners have trouble organizing inquiry and using the visualization tools effectively. To address these issues, the VGEE includes an inquiry strategy and scaffolding activities that are similar to strategies used successfully in K-12 classrooms. The strategy is organized around the steps: identify, relate, explain, and integrate. In the first step, students construct visualizations from data to try to identify salient features of a particular phenomenon. They compare their previous conceptions of a phenomenon to the data to examine their current knowledge and motivate investigation. Next, students use the multivariable functionality of the visualization environment to relate the different features they identified. Explain moves the learner temporarily outside the visualization to the concept models, where they explore fundamental physical principles. Finally, in integrate, learners apply these fundamental principles by literally placing the concept model within the visualization environment as a probe and watching it respond to larger-scale patterns. This capability, unique to the VGEE, addresses the disconnect that novice learners often experience between fundamental physics and observable phenomena. It also allows learners the opportunity to reflect on and refine their knowledge as well as anchor it within a context for long-term retention. We are implementing the VGEE in one of two otherwise identical entry-level atmospheric courses. In addition to comparing student learning and attitudes in the two courses, we are analyzing student participation with the VGEE to evaluate its effectiveness and usability. In particular, we seek to identify the scaffolding students need to construct physically meaningful multi-dimensional visualizations, and evaluate the effectiveness of the visualization-embedded concept-models in addressing inert knowledge. We will also examine the utility of the inquiry strategy in developing content knowledge, process-of-science knowledge, and discipline-specific investigatory skills. Our presentation will include video examples of student use to illustrate our findings.
Marvel, Skylar W; To, Kimberly; Grimm, Fabian A; Wright, Fred A; Rusyn, Ivan; Reif, David M
2018-03-05
Drawing integrated conclusions from diverse source data requires synthesis across multiple types of information. The ToxPi (Toxicological Prioritization Index) is an analytical framework that was developed to enable integration of multiple sources of evidence by transforming data into integrated, visual profiles. Methodological improvements have advanced ToxPi and expanded its applicability, necessitating a new, consolidated software platform to provide functionality while preserving flexibility for future updates. We detail the implementation of a new graphical user interface for ToxPi that provides interactive visualization, analysis, reporting, and portability. The interface is deployed as a stand-alone, platform-independent Java application, with a modular design to accommodate inclusion of future analytics. The new ToxPi interface introduces several features, from flexible data import formats (including legacy formats that permit backward compatibility) to similarity-based clustering to options for high-resolution graphical output. We present the new ToxPi interface for dynamic exploration, visualization, and sharing of integrated data models. The ToxPi interface is freely available as a single compressed download that includes the main Java executable, all libraries, example data files, and a complete user manual, from http://toxpi.org.
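For intuition, a ToxPi-style score can be thought of as a weighted combination of data "slices" that are each scaled to the unit interval; the per-slice values also set the radii of the pie-slice glyph. The sketch below is a simplified illustration with made-up weights and data, not the ToxPi software's own scoring code.

```python
# Hedged sketch of a ToxPi-style weighted slice score.
import numpy as np

def toxpi_scores(data, weights):
    """data: (n_chemicals, n_slices) raw values; weights: (n_slices,) slice weights."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    scaled = (data - lo) / np.where(hi > lo, hi - lo, 1.0)  # min-max scale each slice
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                         # normalize the weights
    return scaled @ w, scaled                               # overall score, slice radii

data = np.array([[0.2, 10.0, 3.0],
                 [0.9,  2.0, 7.0],
                 [0.5,  6.0, 5.0]])
scores, slices = toxpi_scores(data, weights=[2, 1, 1])
print(scores.round(3))  # one prioritization score per chemical
```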
Mirel, Barbara; Eichinger, Felix; Keller, Benjamin J; Kretzler, Matthias
2011-03-21
Bioinformatics visualization tools are often not robust enough to support biomedical specialists’ complex exploratory analyses. Tools need to accommodate the workflows that scientists actually perform for specific translational research questions. To understand and model one of these workflows, we conducted a case-based, cognitive task analysis of a biomedical specialist’s exploratory workflow for the question: What functional interactions among gene products of high-throughput expression data suggest previously unknown mechanisms of a disease? From our cognitive task analysis, four complementary representations of the targeted workflow were developed. They include: usage scenarios, flow diagrams, a cognitive task taxonomy, and a mapping between cognitive tasks and user-centered visualization requirements. The representations capture the flows of cognitive tasks that led a biomedical specialist to inferences critical to hypothesizing. We created representations at levels of detail that could strategically guide visualization development, and we confirmed this by making a trial prototype based on user requirements for a small portion of the workflow. Our results imply that visualizations should make available to scientific users “bundles of features” consonant with the compositional cognitive tasks purposefully enacted at specific points in the workflow. We also highlight certain aspects of visualizations that: (a) need more built-in flexibility; (b) are critical for negotiating meaning; and (c) are necessary for essential metacognitive support.
iTTVis: Interactive Visualization of Table Tennis Data.
Wu, Yingcai; Lan, Ji; Shu, Xinhuan; Ji, Chenyang; Zhao, Kejian; Wang, Jiachen; Zhang, Hui
2018-01-01
The rapid development of information technology paved the way for the recording of fine-grained data, such as stroke techniques and stroke placements, during a table tennis match. This data recording creates opportunities to analyze and evaluate matches from new perspectives. Nevertheless, the increasingly complex data is difficult to make sense of and gain insights from. Analysts usually employ tedious and cumbersome methods that are limited to watching videos and reading statistical tables. However, existing sports visualization methods cannot be applied to visualizing table tennis competitions due to different competition rules and particular data attributes. In this work, we collaborate with data analysts to understand and characterize the sophisticated domain problem of analysis of table tennis data. We propose iTTVis, a novel interactive table tennis visualization system, which, to our knowledge, is the first visual analysis system for analyzing and exploring table tennis data. iTTVis provides a holistic visualization of an entire match from three main perspectives, namely, time-oriented, statistical, and tactical analyses. The proposed system with several well-coordinated views not only supports correlation identification through statistics and pattern detection of tactics with a score timeline but also allows cross-analysis to gain insights. Data analysts have obtained several new insights by using iTTVis. The effectiveness and usability of the proposed system are demonstrated with four case studies.
Integrating GIS and ABM to Explore Spatiotemporal Dynamics
NASA Astrophysics Data System (ADS)
Sun, M.; Jiang, Y.; Yang, C.
2013-12-01
Agent-based modeling, as a methodology for bottom-up exploration that accounts for the adaptive behavior and heterogeneity of system components, can help discover the development and patterns of complex social and environmental systems. However, ABM is a computationally intensive process, especially when the number of system components becomes large and the agent-agent/agent-environment interactions are modeled in a complex way. Most traditional ABM frameworks, developed for CPUs, do not have satisfactory computing capacity. To address this problem, and with the emergence of advanced techniques, GPU computing with CUDA can provide a powerful parallel structure to enable complex simulation of spatiotemporal dynamics. In this study, we first develop a GPU-based ABM system. Secondly, in order to visualize the dynamics generated from the movement of agents and the change of agent/environmental attributes during the simulation, we integrate GIS into the ABM system. Advanced geovisualization technologies can be utilized to represent spatiotemporal change events, such as 2D/3D maps with state-of-the-art symbols, space-time cubes, and multiple layers, each of which presents the pattern at one time stamp. Thirdly, visual analytics with interactive tools (e.g., grouping, filtering, and linking) is included in our ABM-GIS system to help users conduct real-time data exploration as the simulation progresses. Analyses such as flow analysis and spatial cluster analysis can be integrated according to the geographical problem we want to explore.
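The kind of per-agent update that maps well onto GPU parallelism can be illustrated with a small, CPU-only vectorized example. This is an assumption-laden stand-in, not the authors' CUDA implementation: every agent takes a random-walk step biased toward the most attractive cell of a shared environment grid.

```python
# Vectorized agent update: a CPU sketch of a GPU-friendly ABM step.
import numpy as np

def step(positions, env, rng, step_size=0.5):
    ny, nx = env.shape
    best = np.unravel_index(np.argmax(env), env.shape)         # shared environmental cue
    target = np.array([best[1], best[0]], dtype=float)         # (x, y) of best cell
    drift = target - positions
    drift /= np.linalg.norm(drift, axis=1, keepdims=True) + 1e-9
    positions += step_size * drift + rng.normal(0, 0.2, positions.shape)
    positions[:, 0] = positions[:, 0].clip(0, nx - 1)           # keep agents in the grid
    positions[:, 1] = positions[:, 1].clip(0, ny - 1)
    return positions

rng = np.random.default_rng(0)
agents = rng.uniform(0, 50, size=(1000, 2))                     # 1000 agents, (x, y)
env = rng.random((50, 50))                                      # environment attribute grid
for _ in range(100):
    agents = step(agents, env, rng)
print(agents.mean(axis=0))                                      # agents drift toward the cue
```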
SAFAS: Unifying Form and Structure through Interactive 3D Simulation
ERIC Educational Resources Information Center
Polys, Nicholas F.; Bacim, Felipe; Setareh, Mehdi; Jones, Brett D.
2015-01-01
There has been a significant gap between the tools used for the design of a building's architectural form and those that evaluate the structural physics of that form. Seeking to bring the perspectives of visual design and structural engineering closer together, we developed and evaluated a design tool for students and practitioners to explore the…
Preparing for Future Learning with a Tangible User Interface: The Case of Neuroscience
ERIC Educational Resources Information Center
Schneider, B.; Wallace, J.; Blikstein, P.; Pea, R.
2013-01-01
In this paper, we describe the development and evaluation of a microworld-based learning environment for neuroscience. Our system, BrainExplorer, allows students to discover the way neural pathways work by interacting with a tangible user interface. By severing and reconfiguring connections, users can observe how the visual field is impaired and,…
ERIC Educational Resources Information Center
Brown, Daniel
2013-01-01
Visualizing the three-dimensional distribution of stars within a constellation is highly challenging for both students and educators, but when carried out in an interactive collaborative way, it can create an ideal environment to explore common misconceptions about size and scale within astronomy. We present how the common tabletop activities…
Invariant Spatial Context Is Learned but Not Retrieved in Gaze-Contingent Tunnel-View Search
ERIC Educational Resources Information Center
Zang, Xuelian; Jia, Lina; Müller, Hermann J.; Shi, Zhuanghua
2015-01-01
Our visual brain is remarkable in extracting invariant properties from the noisy environment, guiding selection of where to look and what to identify. However, how the brain achieves this is still poorly understood. Here we explore interactions of local context and global structure in the long-term learning and retrieval of invariant display…
ERIC Educational Resources Information Center
Curtindale, Lori; Laurie-Rose, Cynthia; Bennett-Murphy, Laura; Hull, Sarah
2007-01-01
Applying optimal stimulation theory, the present study explored the development of sustained attention as a dynamic process. It examined the interaction of modality and temperament over time in children and adults. Second-grade children and college-aged adults performed auditory and visual vigilance tasks. Using the Carey temperament…
ERIC Educational Resources Information Center
Mundy, Amrit; Chan, Judy
2013-01-01
In the 2011-2012 academic year, the Organizational Development and Learning unit and the Centre for Teaching, Learning and Technology at the University of British Columbia co-developed an interactive theatre project, Conflict Theatre, to engage in discussion around conflict with our audience and to allow us to explore, engage with, and build…
Getting Graphic with the Past: Graphic Novels and the Teaching of History
ERIC Educational Resources Information Center
Cromer, Michael; Clark, Penney
2007-01-01
This article explores the potential of the graphic novel as a means to approach history and historiography in secondary school social studies and history classrooms. Because graphic novels convey their messages through the interaction of visuals and written text, they require reading that is across the grain. They have been likened to hypertext, a…
Effects of Various Sketching Tools on Visual Thinking in Idea Development
ERIC Educational Resources Information Center
Chu, Po Ying; Hung, Hsiu Yen; Wu, Chih Fu; Liu, Yen Te
2017-01-01
Due to the wide application of digital tools and the improvement in interactive technologies, design thinking might change in digital world comparing to that in traditional design process. This study aims to explore the difference of design thinking between three kinds of sketching tools, i.e. hand-sketch, tablet, and pen-input display, by means…
Authoring Tours of Geospatial Data With KML and Google Earth
NASA Astrophysics Data System (ADS)
Barcay, D. P.; Weiss-Malik, M.
2008-12-01
As virtual globes become widely adopted by the general public, the use of geospatial data has expanded greatly. With the popularization of Google Earth and other platforms, GIS systems have become virtual reality platforms. Using these platforms, a casual user can easily explore the world, browse massive data-sets, create powerful 3D visualizations, and share those visualizations with millions of people using the KML language. This technology has raised the bar for professionals and academics alike. It is now expected that studies and projects will be accompanied by compelling, high-quality visualizations. In this new landscape, a presentation of geospatial data can be the most effective form of advertisement for a project: engaging both the general public and the scientific community in a unified interactive experience. On the other hand, merely dumping a dataset into a virtual globe can be a disorienting, alienating experience for many users. To create an effective, far-reaching presentation, an author must take care to make their data approachable to a wide variety of users with varying knowledge of the subject matter, expertise in virtual globes, and attention spans. To that end, we present techniques for creating self-guided interactive tours of data represented in KML and visualized in Google Earth. Using these methods, we provide the ability to move the camera through the world while dynamically varying the content, style, and visibility of the displayed data. Such tours can automatically guide users through massive, complex datasets: engaging a broad user-base, and conveying subtle concepts that aren't immediately apparent when viewing the raw data. To the casual user these techniques result in an extremely compelling experience similar to watching video. Unlike video though, these techniques maintain the rich interactive environment provided by the virtual globe, allowing users to explore the data in detail and to add other data sources to the presentation.
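As a concrete illustration of the kind of self-guided tour discussed above, the sketch below writes a minimal KML document using the gx touring extension: the camera flies smoothly between a list of waypoints and pauses at each. The coordinates, durations and output file name are placeholders.

```python
# Hedged sketch: generate a minimal KML gx:Tour that flies between waypoints.
WAYPOINTS = [(-122.0, 37.0, 20000), (-121.5, 37.3, 8000)]  # (lon, lat, camera range in m)

def make_tour_kml(waypoints, name="Sample tour"):
    flytos = []
    for lon, lat, rng in waypoints:
        flytos.append(f"""
      <gx:FlyTo>
        <gx:duration>5.0</gx:duration>
        <gx:flyToMode>smooth</gx:flyToMode>
        <LookAt>
          <longitude>{lon}</longitude><latitude>{lat}</latitude>
          <range>{rng}</range><tilt>60</tilt><heading>0</heading>
        </LookAt>
      </gx:FlyTo>
      <gx:Wait><gx:duration>2.0</gx:duration></gx:Wait>""")
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2"
     xmlns:gx="http://www.google.com/kml/ext/2.2">
  <gx:Tour><name>{name}</name><gx:Playlist>{''.join(flytos)}
  </gx:Playlist></gx:Tour>
</kml>"""

with open("tour.kml", "w") as f:
    f.write(make_tour_kml(WAYPOINTS))  # open the file in Google Earth to play the tour
```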
InCHlib - interactive cluster heatmap for web applications.
Skuta, Ctibor; Bartůněk, Petr; Svozil, Daniel
2014-12-01
Hierarchical clustering is an exploratory data analysis method that reveals the groups (clusters) of similar objects. The result of the hierarchical clustering is a tree structure called a dendrogram that shows the arrangement of individual clusters. To investigate the row/column hierarchical cluster structure of a data matrix, a visualization tool called 'cluster heatmap' is commonly employed. In the cluster heatmap, the data matrix is displayed as a heatmap, a 2-dimensional array in which the colour of each element corresponds to its value. The rows/columns of the matrix are ordered such that similar rows/columns are near each other. The ordering is given by the dendrogram, which is displayed on the side of the heatmap. We developed InCHlib (Interactive Cluster Heatmap Library), a highly interactive and lightweight JavaScript library for cluster heatmap visualization and exploration. InCHlib enables the user to select individual or clustered heatmap rows, to zoom in and out of clusters or to flexibly modify heatmap appearance. The cluster heatmap can be augmented with additional metadata displayed in a different colour scale. In addition, to further enhance the visualization, the cluster heatmap can be interconnected with external data sources or analysis tools. Data clustering and the preparation of the input file for InCHlib are facilitated by the Python utility script inchlib_clust. The cluster heatmap is one of the most popular visualizations of large chemical and biomedical data sets originating, e.g., in high-throughput screening, genomics or transcriptomics experiments. The presented JavaScript library InCHlib is a client-side solution for cluster heatmap exploration. InCHlib can be easily deployed into any modern web application and configured to cooperate with external tools and data sources. Though InCHlib is primarily intended for the analysis of chemical or biological data, it is a versatile tool whose application domain is not limited to the life sciences.
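The preprocessing that produces the dendrogram and the row ordering used by a cluster heatmap can be sketched with SciPy as follows. This only illustrates the kind of computation involved; the actual inchlib_clust script's interface and output format may differ.

```python
# Sketch: hierarchical clustering of heatmap rows and the resulting leaf order.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

data = np.random.default_rng(0).random((12, 6))   # 12 rows x 6 features
row_linkage = linkage(data, method="ward")        # hierarchical clustering of the rows
tree = dendrogram(row_linkage, no_plot=True)      # tree structure, no plot rendered
print(tree["leaves"])                             # row order used to draw the heatmap
```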
Expeditious illustration of layer-cake models on and above a tactile surface
NASA Astrophysics Data System (ADS)
Lopes, Daniel Simões; Mendes, Daniel; Sousa, Maurício; Jorge, Joaquim
2016-05-01
Too often, illustrating and visualizing 3D geological concepts is done by sketching in 2D media, which may limit how well initial concepts can be drawn. Here, the potential of expeditious geological modeling brought by hand gestures is explored. A spatial interaction system was developed to enable rapid modeling, editing, and exploration of 3D layer-cake objects. User interactions are acquired with motion capture and touch screen technologies. Virtual immersion is guaranteed by using stereoscopic technology. The novelty consists of performing expeditious modeling of coarse geological features with only a limited set of hand gestures. Results from usability studies show that the proposed system is more efficient than a windows-icon-menu-pointer modeling application.
Real-time visual mosaicking and navigation on the seafloor
NASA Astrophysics Data System (ADS)
Richmond, Kristof
Remote robotic exploration holds vast potential for gaining knowledge about extreme environments accessible to humans only with great difficulty. Robotic explorers have been sent to other solar system bodies, and on this planet into inaccessible areas such as caves and volcanoes. In fact, the largest unexplored land area on earth lies hidden in the airless cold and intense pressure of the ocean depths. Exploration in the oceans is further hindered by water's high absorption of electromagnetic radiation, which both inhibits remote sensing from the surface, and limits communications with the bottom. The Earth's oceans thus provide an attractive target for developing remote exploration capabilities. As a result, numerous robotic vehicles now routinely survey this environment, from remotely operated vehicles piloted over tethers from the surface to torpedo-shaped autonomous underwater vehicles surveying the mid-waters. However, these vehicles are limited in their ability to navigate relative to their environment. This limits their ability to return to sites with precision without the use of external navigation aids, and to maneuver near and interact with objects autonomously in the water and on the sea floor. The enabling of environment-relative positioning on fully autonomous underwater vehicles will greatly extend their power and utility for remote exploration in the furthest reaches of the Earth's waters---even under ice and under ground---and eventually in extraterrestrial liquid environments such as Europa's oceans. This thesis presents an operational, fielded system for visual navigation of underwater robotic vehicles in unexplored areas of the seafloor. The system does not depend on external sensing systems, using only instruments on board the vehicle. As an area is explored, a camera is used to capture images and a composite view, or visual mosaic, of the ocean bottom is created in real time. Side-to-side visual registration of images is combined with dead-reckoned navigation information in a framework allowing the creation and updating of large, locally consistent mosaics. These mosaics are used as maps in which the vehicle can navigate and localize itself with respect to points in the environment. The system achieves real-time performance in several ways. First, wherever possible, direct sensing of motion parameters is used in place of extracting them from visual data. Second, trajectories are chosen to enable a hierarchical search for side-to-side links which limits the amount of searching performed without sacrificing robustness. Finally, the map estimation is formulated as a sparse, linear information filter allowing rapid updating of large maps. The visual navigation enabled by the work in this thesis represents a new capability for remotely operated vehicles, and an enabling capability for a new generation of autonomous vehicles which explore and interact with remote, unknown and unstructured underwater environments. The real-time mosaic can be used on current tethered vehicles to create pilot aids and provide a vehicle user with situational awareness of the local environment and the position of the vehicle within it. For autonomous vehicles, the visual navigation system enables precise environment-relative positioning and mapping, without requiring external navigation systems, opening the way for ever-expanding autonomous exploration capabilities. 
The utility of this system was demonstrated in the field at sites of scientific interest using the ROVs Ventana and Tiburon operated by the Monterey Bay Aquarium Research Institute. A number of sites in and around Monterey Bay, California were mosaicked using the system, culminating in a complete imaging of the wreck site of the USS Macon, where real-time visual mosaics containing thousands of images were generated while navigating using only sensor systems on board the vehicle.
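The navigation framework described in this record combines pairwise visual registrations with dead reckoning in a sparse information filter. As a rough, hypothetical illustration of that estimation idea (not the thesis implementation), the sketch below fuses dead-reckoned steps and one loop-closing registration as relative-position constraints in information form; a real system would store the information matrix sparsely, dense arrays are used here only for brevity.

```python
import numpy as np

# Minimal sketch of an information-filter map update: each mosaic node has a
# 2D position, dead reckoning links consecutive nodes, and a visual
# registration adds a relative-position constraint between two arbitrary
# nodes. Every constraint touches only the two nodes it connects, which is
# why the information (inverse-covariance) matrix stays sparse in practice.

def add_relative_constraint(Lambda, eta, i, j, dz, sigma):
    """Fuse a measurement dz = x_j - x_i with per-axis std dev sigma."""
    w = 1.0 / sigma**2
    for axis in range(2):
        a, b = 2 * i + axis, 2 * j + axis
        # Jacobian of (x_j - x_i) with respect to the state is [-1, +1]
        Lambda[a, a] += w; Lambda[b, b] += w
        Lambda[a, b] -= w; Lambda[b, a] -= w
        eta[a] -= w * dz[axis]
        eta[b] += w * dz[axis]

n = 4                                   # four mosaic nodes
Lambda = np.zeros((2 * n, 2 * n))       # information matrix (dense for brevity)
eta = np.zeros(2 * n)                   # information vector
Lambda[0, 0] = Lambda[1, 1] = 1e6       # anchor node 0 at the origin

# Dead-reckoned displacements between consecutive nodes (noisy).
for k, step in enumerate([(1.0, 0.0), (1.0, 0.1), (0.0, 1.0)]):
    add_relative_constraint(Lambda, eta, k, k + 1, np.array(step), sigma=0.2)

# A side-to-side visual registration closing a loop between node 0 and node 3.
add_relative_constraint(Lambda, eta, 0, 3, np.array((2.0, 1.0)), sigma=0.05)

x = np.linalg.solve(Lambda, eta).reshape(n, 2)   # recover node positions
print(np.round(x, 3))
```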
Patterns and Sequences: Interactive Exploration of Clickstreams to Understand Common Visitor Paths.
Liu, Zhicheng; Wang, Yang; Dontcheva, Mira; Hoffman, Matthew; Walker, Seth; Wilson, Alan
2017-01-01
Modern web clickstream data consists of long, high-dimensional sequences of multivariate events, making it difficult to analyze. Following the overarching principle that the visual interface should provide information about the dataset at multiple levels of granularity and allow users to easily navigate across these levels, we identify four levels of granularity in clickstream analysis: patterns, segments, sequences and events. We present an analytic pipeline consisting of three stages: pattern mining, pattern pruning and coordinated exploration between patterns and sequences. Based on this approach, we discuss properties of maximal sequential patterns, propose methods to reduce the number of patterns and describe design considerations for visualizing the extracted sequential patterns and the corresponding raw sequences. We demonstrate the viability of our approach through an analysis scenario and discuss the strengths and limitations of the methods based on user feedback.
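One concrete piece of the pipeline above is the pruning of mined sequential patterns. As a hedged sketch of the maximal-pattern idea mentioned in the abstract (the event names and the upstream miner are assumptions, not the authors' code), the snippet below keeps only patterns that are not subsequences of another frequent pattern:

```python
# Keep only *maximal* sequential patterns, i.e. frequent patterns that are
# not subsequences of another frequent pattern. The patterns themselves
# would come from a sequence miner such as PrefixSpan (not shown).

def is_subsequence(small, big):
    """True if `small` occurs in `big` preserving order (gaps allowed)."""
    it = iter(big)
    return all(any(ev == other for other in it) for ev in small)

def maximal_patterns(patterns):
    keep = []
    for p in patterns:
        if not any(p != q and is_subsequence(p, q) for q in patterns):
            keep.append(p)
    return keep

mined = [("home", "search"),
         ("home", "search", "product"),
         ("home", "search", "product", "checkout"),
         ("search", "product")]
print(maximal_patterns(mined))
# -> [('home', 'search', 'product', 'checkout')]
```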
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bethel, E. Wes; Frank, Randy; Fulcomer, Sam
Scientific visualization is the transformation of abstract information into images, and it plays an integral role in the scientific process by facilitating insight into observed or simulated phenomena. Visualization as a discipline spans many research areas, from computer science and cognitive psychology to art. Yet the most successful visualization applications are created when close synergistic interactions with domain scientists are part of the algorithmic design and implementation process, leading to visual representations with clear scientific meaning. Visualization is used to explore, to debug, to gain understanding, and as an analysis tool. Visualization is literally everywhere: images are present in this report, on television, on the web, in books and magazines. The common theme is the ability to present information visually so that it is rapidly assimilated by human observers and transformed into understanding or insight. As an indispensable part of a modern science laboratory, visualization is akin to the biologist's microscope or the electrical engineer's oscilloscope. Whereas the microscope is limited to small specimens or the use of optics to focus light, the power of scientific visualization is virtually limitless: visualization provides the means to examine data at galactic or atomic scales, or at any size in between. Unlike the traditional scientific tools for visual inspection, visualization offers the means to "see the unseeable." Trends in demographics or changes in levels of atmospheric CO2 as a function of greenhouse gas emissions are familiar examples of such unseeable phenomena. Over time, visualization techniques evolve in response to scientific need. Each scientific discipline has its "own language," verbal and visual, used for communication. The visual language for depicting electrical circuits is much different from the visual language for depicting theoretical molecules or trends in the stock market. There is no "one visualization tool" that can serve as a panacea for all science disciplines. Instead, visualization researchers work hand in hand with domain scientists as part of the scientific research process to define, create, adapt, and refine software that "speaks the visual language" of each scientific domain.
VIP Data Explorer: A Tool for Exploring 30 years of Vegetation Index and Phenology Observations
NASA Astrophysics Data System (ADS)
Barreto-munoz, A.; Didan, K.; Rivera-Camacho, J.; Yitayew, M.; Miura, T.; Tsend-Ayush, J.
2011-12-01
Continuous acquisition of global satellite imagery over the years has contributed to the creation of long-term data records from AVHRR, MODIS, TM, SPOT-VGT, and other sensors. These records now span more than 30 years, and as the archives grow they become invaluable tools for environmental, resource-management, and climate studies dealing with trends and changes from local and regional to global scales. In this project, the Vegetation Index and Phenology Lab (VIPLab) is processing 30 years of daily global surface reflectance data into an Earth Science Data Record (ESDR) of vegetation index and phenology metrics. Data from AVHRR (N07, N09, N11 and N14) and MODIS (AQUA and TERRA collection 5) for the periods 1981-1999 and 2000-2010, at CMG resolution, were processed into one seamless, sensor-independent data record using various filtering, continuity, and gap-filling techniques (Tsend-Ayush et al., AGU 2011; Rivera-Camacho et al., AGU 2011). An interactive online tool (VIP Data Explorer) was developed to support the visualization, qualitative and quantitative exploration, distribution, and documentation of these records using a simple web 2.0 interface. The VIP Data Explorer (http://vip.arizona.edu/viplab_data_explorer) can display any combination of multi-temporal and multi-source data, enabling quick exploration and cross-comparison of the various processing levels of these data. It uses the Google Earth (GE) model and was developed with the GE API for image rendering, manipulation, and geolocation. The ESDR records can be quickly animated in this environment and explored visually for trends and anomalies. Additionally, the tool can extract and visualize the time series of any land pixel while showing the different levels of processing it went through. Users can explore the ESDR database within the data explorer GUI, and any desired data can be placed into a dynamic "cart" to be ordered and downloaded later. More functionality is planned and will be added to this data explorer tool as the project progresses.
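The processing chain mentions filtering, continuity, and gap-filling techniques. The snippet below is only an illustrative stand-in for that step (the VIP Lab's actual methods are more sophisticated, and the NDVI values are invented): it fills missing observations in a vegetation-index time series by linear interpolation between the nearest valid samples.

```python
import numpy as np

def fill_gaps(vi):
    """Fill NaN gaps in a vegetation-index series by linear interpolation."""
    vi = np.asarray(vi, dtype=float)
    t = np.arange(vi.size)
    valid = ~np.isnan(vi)
    filled = vi.copy()
    filled[~valid] = np.interp(t[~valid], t[valid], vi[valid])
    return filled

ndvi = [0.31, 0.35, np.nan, np.nan, 0.47, 0.52, np.nan, 0.58]  # made-up values
print(np.round(fill_gaps(ndvi), 3))
```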
Touch Precision Modulates Visual Bias.
Misceo, Giovanni F; Jones, Maurice D
2018-01-01
The sensory precision hypothesis holds that different seen and felt cues about the size of an object resolve themselves in favor of the more reliable modality. To examine this precision hypothesis, 60 college students were asked to look at one size while manually exploring another unseen size either with their bare fingers or, to lessen the reliability of touch, with their fingers sleeved in rigid tubes. Afterwards, the participants estimated either the seen size or the felt size by finding a match from a visual display of various sizes. Results showed that the seen size biased the estimates of the felt size when the reliability of touch decreased. This finding supports the interaction between touch reliability and visual bias predicted by statistically optimal models of sensory integration.
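The statistically optimal account referenced above is commonly formalized as reliability-weighted (maximum-likelihood) cue combination, in which each modality is weighted by its inverse variance. A minimal sketch with made-up numbers shows how degrading touch reliability pulls the combined estimate, and hence the felt-size report, toward vision:

```python
# Reliability-weighted cue combination: weights are inverse variances, so the
# less reliable cue contributes less to the fused size estimate.

def combine(size_vision, sigma_vision, size_touch, sigma_touch):
    w_v = 1.0 / sigma_vision**2
    w_t = 1.0 / sigma_touch**2
    estimate = (w_v * size_vision + w_t * size_touch) / (w_v + w_t)
    combined_sigma = (w_v + w_t) ** -0.5   # fused estimate is more precise
    return estimate, combined_sigma

print(combine(50.0, 2.0, 60.0, 2.0))   # reliable touch: estimate near 55
print(combine(50.0, 2.0, 60.0, 6.0))   # sleeved fingers: estimate pulled toward 50
```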
NASA Astrophysics Data System (ADS)
Christensen, C.; Summa, B.; Scorzelli, G.; Lee, J. W.; Venkat, A.; Bremer, P. T.; Pascucci, V.
2017-12-01
Massive datasets are becoming more common due to increasingly detailed simulations and higher resolution acquisition devices. Yet accessing and processing these huge data collections for scientific analysis is still a significant challenge. Solutions that rely on extensive data transfers are increasingly untenable and often impossible due to a lack of sufficient storage at the client side as well as insufficient bandwidth to conduct such large transfers, which in some cases could entail petabytes of data. Large-scale remote computing resources can be useful, but utilizing such systems typically entails some form of offline batch processing with long delays, data replications, and substantial cost for any mistakes. Both types of workflows can severely limit the flexible exploration and rapid evaluation of new hypotheses that are crucial to the scientific process and thereby impede scientific discovery. In order to facilitate interactivity in both analysis and visualization of these massive data ensembles, we introduce a dynamic runtime system suitable for progressive computation and interactive visualization of arbitrarily large, disparately located spatiotemporal datasets. Our system includes an embedded domain-specific language (EDSL) that allows users to express a wide range of data analysis operations in a simple and abstract manner. The underlying runtime system transparently resolves issues such as remote data access and resampling while at the same time maintaining interactivity through progressive and interruptible processing. Computations involving large amounts of data can be performed remotely in an incremental fashion that dramatically reduces data movement, while the client receives updates progressively, thereby remaining robust to fluctuating network latency or limited bandwidth. This system facilitates interactive, incremental analysis and visualization of massive remote datasets up to petabytes in size. Our system is now available for general use in the community through both Docker and Anaconda.
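The key mechanism described above is progressive, interruptible processing that streams refined results to the client. This is not the authors' EDSL or runtime; it is a toy sketch of the pattern, computing successively refined estimates from growing subsets of the data so a client could stop or re-render after any update:

```python
import numpy as np

# Progressive computation sketch: yield refined estimates level by level so a
# client can render each update and break early; a random permutation stands
# in for a true multiresolution traversal of the data.

def progressive_mean(data, levels=5):
    rng = np.random.default_rng(0)
    order = rng.permutation(data.size)
    for level in range(1, levels + 1):
        take = order[: max(1, data.size * level // levels)]
        yield level, float(data[take].mean())   # refined estimate per level

data = np.random.default_rng(1).normal(10.0, 2.0, size=1_000_000)
for level, estimate in progressive_mean(data):
    print(f"level {level}: mean ~ {estimate:.4f}")   # client could break early
```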
Fast and Exact Fiber Surfaces for Tetrahedral Meshes.
Klacansky, Pavol; Tierny, Julien; Carr, Hamish; Zhao Geng
2017-07-01
Isosurfaces are fundamental geometrical objects for the analysis and visualization of volumetric scalar fields. Recent work has generalized them to bivariate volumetric fields with fiber surfaces, the pre-image of polygons in range space. However, the existing algorithm for their computation is approximate, and is limited to closed polygons. Moreover, its runtime performance does not allow instantaneous updates of the fiber surfaces upon user edits of the polygons. Overall, these limitations prevent a reliable and interactive exploration of the space of fiber surfaces. This paper introduces the first algorithm for the exact computation of fiber surfaces in tetrahedral meshes. It assumes no restriction on the topology of the input polygon, handles degenerate cases and better captures sharp features induced by polygon bends. The algorithm also allows visualization of individual fibers on the output surface, better illustrating their relationship with data features in range space. To enable truly interactive exploration sessions, we further improve the runtime performance of this algorithm. In particular, we show that it is trivially parallelizable and that it scales nearly linearly with the number of cores. Further, we study acceleration data-structures both in geometrical domain and range space and we show how to generalize interval trees used in isosurface extraction to fiber surface extraction. Experiments demonstrate the superiority of our algorithm over previous work, both in terms of accuracy and running time, with up to two orders of magnitude speedups. This improvement enables interactive edits of range polygons with instantaneous updates of the fiber surface for exploration purpose. A VTK-based reference implementation is provided as additional material to reproduce our results.
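A core acceleration idea behind both isosurface and fiber surface extraction is pruning cells whose scalar range cannot intersect the queried value; interval trees answer this stabbing query in logarithmic time. The sketch below shows the same candidate-cell filtering with a brute-force range check on invented tetrahedral data, which is the operation an interval tree (or the paper's generalization of it) accelerates:

```python
import numpy as np

# Toy data: a scalar value per vertex and random tetrahedra (vertex indices).
rng = np.random.default_rng(0)
f = rng.random(200)                              # per-vertex scalar values
tets = rng.integers(0, 200, size=(500, 4))       # invented tetrahedra

# Per-cell scalar intervals; only cells whose interval brackets the query
# value can contribute geometry, so everything else is pruned up front.
cell_min = f[tets].min(axis=1)
cell_max = f[tets].max(axis=1)

isovalue = 0.5
candidates = np.flatnonzero((cell_min <= isovalue) & (isovalue <= cell_max))
print(f"{candidates.size} of {tets.shape[0]} cells can intersect the level set")
```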
2012-01-01
Background Graph drawing is an integral part of many systems biology studies, enabling visual exploration and mining of large-scale biological networks. While a number of layout algorithms are available in popular network analysis platforms, such as Cytoscape, it remains poorly understood how well their solutions reflect the underlying biological processes that give rise to the network connectivity structure. Moreover, visualizations obtained using conventional layout algorithms, such as those based on the force-directed drawing approach, may become uninformative when applied to larger networks with dense or clustered connectivity structure. Methods We implemented a modified layout plug-in, named Multilevel Layout, which applies the conventional layout algorithms within a multilevel optimization framework to better capture the hierarchical modularity of many biological networks. Using a wide variety of real life biological networks, we carried out a systematic evaluation of the method in comparison with other layout algorithms in Cytoscape. Results The multilevel approach provided both biologically relevant and visually pleasant layout solutions in most network types, hence complementing the layout options available in Cytoscape. In particular, it could improve drawing of large-scale networks of yeast genetic interactions and human physical interactions. In more general terms, the biological evaluation framework developed here enables one to assess the layout solutions from any existing or future graph drawing algorithm as well as to optimize their performance for a given network type or structure. Conclusions By making use of the multilevel modular organization when visualizing biological networks, together with the biological evaluation of the layout solutions, one can generate convenient visualizations for many network biology applications. PMID:22448851
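To make the multilevel idea concrete, the sketch below (not the Multilevel Layout plug-in, just an assumed simplification using networkx) coarsens a graph by contracting a matching, lays out the coarse graph with a force-directed method, and then uses those positions to seed the layout of the full graph:

```python
import networkx as nx

# Multilevel layout sketch: coarsen, lay out the coarse graph, then refine the
# full graph starting from the projected coarse positions so clusters stay
# grouped instead of being tangled by a cold-start force-directed run.

def multilevel_layout(G, seed=0):
    matching = nx.maximal_matching(G)
    coarse, merged_into = G.copy(), {}
    for u, v in matching:                       # coarsen: merge matched pairs
        coarse = nx.contracted_nodes(coarse, u, v, self_loops=False)
        merged_into[v] = u
    coarse_pos = nx.spring_layout(coarse, seed=seed)
    init = {n: coarse_pos[merged_into.get(n, n)] for n in G}   # project back
    return nx.spring_layout(G, pos=init, iterations=30, seed=seed)  # refine

G = nx.connected_caveman_graph(6, 5)            # small clustered test graph
pos = multilevel_layout(G)
print(len(pos), "node positions computed")
```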
NASA Astrophysics Data System (ADS)
Ekenes, K.
2017-12-01
This presentation outlines the process of creating a web application for exploring large amounts of scientific geospatial data using modern automated cartographic techniques. Traditional cartographic methods, including data classification, may inadvertently hide geospatial and statistical patterns in the underlying data. This presentation demonstrates how to use smart web APIs that analyze the data as it loads and suggest the most appropriate visualizations based on its statistics. Because there are only a few ways to visualize any given dataset well, and because many users never move beyond default values, it is imperative to provide smart default color schemes tailored to the dataset rather than static defaults. Multiple functions for automating visualizations are available in the Smart APIs, along with UI elements that allow users to create more than one visualization for a dataset, since there is no single best way to visualize it. Because bivariate and multivariate visualizations are particularly difficult to create effectively, this automated approach takes the guesswork out of the process and provides a number of ways to generate multivariate visualizations for the same variables, letting the user choose which visualization is most appropriate for their presentation. The methods used in these APIs and the renderers they generate are not available elsewhere. The presentation will show how statistics can serve as the basis for automating default visualizations along continuous ramps, creating more refined visualizations that reveal the spread and outliers of the data. Adding interactive components that instantaneously alter visualizations allows users to unearth previously unknown spatial patterns among one or more variables. These applications may focus on a single, frequently updated dataset or be configured for a variety of datasets from multiple sources.
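As a hedged sketch of the statistics-driven defaults described above (the function and color values are invented, not the Smart APIs), the snippet below derives continuous color-ramp stops from the mean and standard deviation of the loaded data so outliers do not wash out the ramp:

```python
import numpy as np

# Derive default color-ramp stops from the data's statistics: span roughly one
# standard deviation around the mean and clamp the stops to the data range so
# extreme outliers do not flatten the rest of the visualization.

def smart_ramp_stops(values, colors=("#2166ac", "#f7f7f7", "#b2182b")):
    v = np.asarray(values, dtype=float)
    mean, std = v.mean(), v.std()
    stops = [mean - std, mean, mean + std]      # low / middle / high data stops
    return list(zip(np.clip(stops, v.min(), v.max()), colors))

population_density = np.random.default_rng(0).lognormal(3.0, 1.0, 5000)
for value, color in smart_ramp_stops(population_density):
    print(f"{value:10.2f} -> {color}")
```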
NASA Astrophysics Data System (ADS)
Baart, F.; van Gils, A.; Hagenaars, G.; Donchyts, G.; Eisemann, E.; van Velzen, J. W.
2016-12-01
A compelling visualization is captivating, beautiful, and narrative. Here we show how melding the skills of computer graphics, art, statistics, and environmental modeling can generate innovative, attractive, and very informative visualizations. We focus on visualizing forecasts and measurements of water (water level, waves, currents, density, and salinity). For computer graphics and the arts, water is an important topic because it occurs in many natural scenes. For environmental modeling and statistics, water is important because it is essential for transport, a healthy environment, fruitful agriculture, and safety. The different disciplines take different approaches to visualizing water. Computer graphics focuses on making water look as realistic as possible; this focus on realistic perception (versus the physical balance pursued by environmental scientists) has resulted in fascinating renderings, as seen in recent games and movies. Visualization techniques for statistical results have benefited from advances in design and journalism, resulting in enthralling infographics. The field of environmental modeling has absorbed advances in contemporary cartography, as seen in the latest interactive data-driven maps. We systematically review the emerging types of water visualization designs. The examples we analyze range from dynamically animated forecasts, interactive paintings, and infographics to modern cartography and web-based photorealistic rendering. By characterizing the intended audience, the design choices, the scales (e.g., time, space), and the explorability, we provide a set of guidelines and genres. The unique contributions of the different fields show how innovations in the current state of the art of water visualization have benefited from interdisciplinary collaborations.
Beyond image quality: designing engaging interactions with digital products
NASA Astrophysics Data System (ADS)
de Ridder, Huib; Rozendaal, Marco C.
2008-02-01
Ubiquitous computing (or Ambient Intelligence) promises a world in which information is available anytime and anywhere, and with which humans can interact in a natural, multimodal way. In such a world, perceptual image quality remains an important criterion since most information will be displayed visually, but other criteria such as enjoyment, fun, engagement, and hedonic quality are emerging. This paper deals with engagement, the intrinsically enjoyable readiness to put more effort into exploring and/or using a product than strictly required, thus attracting and keeping the user's attention for a longer period of time. The impact of the experienced richness of an interface, both in its visual appearance and in the degree of possible manipulations, was investigated in a series of experiments employing game-like user interfaces. This resulted in the extension of an existing conceptual framework relating engagement to richness by means of two intermediating variables, namely experienced challenge and sense of control. Predictions from this revised framework are evaluated against the results of an earlier experiment assessing the ergonomic and hedonic qualities of interactive media. The test material consisted of interactive CD-ROMs containing presentations of three companies for future customers.
Constraint programming based biomarker optimization.
Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng
2015-01-01
Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. However, most existing feature selection algorithms do not allow interactive input from users during the feature selection optimization process. This study addresses the question by fixing a few user-input features in the final selected feature subset and formulating these user-input features as constraints in a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than existing feature selection algorithms, even with constraints drawn from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet-lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
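fsCoP formulates the user-input features as constraints of a programming model; that formulation is not reproduced here. The sketch below conveys the same user interaction in a much simpler, assumption-labeled way: a greedy wrapper that always keeps the analyst's fixed features in the subset and grows it by cross-validated accuracy:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Constrained greedy feature selection: the user-fixed features always stay in
# the subset, and remaining slots are filled with whichever feature best
# improves cross-validated accuracy. Data and feature indices are synthetic.

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
user_fixed = [3, 7]                       # features the analyst insists on

def constrained_greedy_selection(X, y, fixed, k=5):
    selected = list(fixed)
    while len(selected) < k:
        scores = {
            j: cross_val_score(LogisticRegression(max_iter=1000),
                               X[:, selected + [j]], y, cv=5).mean()
            for j in range(X.shape[1]) if j not in selected
        }
        selected.append(max(scores, key=scores.get))
    return selected

print(constrained_greedy_selection(X, y, user_fixed))
```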
Freezing Behavior as a Response to Sexual Visual Stimuli as Demonstrated by Posturography
Mouras, Harold; Lelard, Thierry; Ahmaidi, Said; Godefroy, Olivier; Krystkowiak, Pierre
2015-01-01
Posturographic changes in motivational conditions remain largely unexplored in the context of embodied cognition. Over the last decade, sexual motivation has been used as a canonical working model to study motivated social interactions. The objective of this study was to explore posturographic variations in response to visual sexual videos as compared to neutral videos. Our results demonstrate a freezing-type response to sexually explicit stimuli compared to other conditions, as shown by significantly decreased standard deviations of (i) the center of pressure displacement along the mediolateral and anteroposterior axes and (ii) the center of pressure's displacement surface. These results underscore the complexity of the motor correlates of sexual motivation, which is considered a canonical functional context for studying the motor correlates of motivated social interactions. PMID:25992571
Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S
2014-12-01
We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.
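A minimal, hypothetical sketch of the component contract described above (class and field names are invented, not VASA's API): every component consumes and produces the same exchange record, so a cheap local proxy can stand in for a high-fidelity remote simulation while the analyst configures a run:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ExchangeRecord:                      # common data/parameter format
    time_step: int
    fields: Dict[str, float] = field(default_factory=dict)

class WeatherProxy:
    """Low-fidelity local stand-in for a remote weather simulation."""
    def step(self, rec: ExchangeRecord) -> ExchangeRecord:
        rec.fields["wind_speed"] = 20.0 + 0.5 * rec.time_step   # crude trend
        return rec

class SupplyChainProxy:
    """Toy downstream component coupled through the shared record."""
    def step(self, rec: ExchangeRecord) -> ExchangeRecord:
        delay = max(0.0, rec.fields.get("wind_speed", 0.0) - 25.0)
        rec.fields["delivery_delay_h"] = delay
        return rec

pipeline = [WeatherProxy(), SupplyChainProxy()]
rec = ExchangeRecord(time_step=0)
for t in range(3):                         # asynchronous in VASA; serial here
    rec.time_step = t
    for component in pipeline:
        rec = component.step(rec)
    print(rec)
```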
Experiences of Visually Impaired Students in Community College Math Courses
NASA Astrophysics Data System (ADS)
Swan, S. Tomeka
Blind and visually impaired students who attend community colleges face challenges in learning mathematics (Forrest, 2010). Scoy, McLaughlin, Walls, and Zuppuhaur (2006) claim these students are at a disadvantage in studying mathematics due to the visual and interactive nature of the subject, and by the way mathematics is taught. In this qualitative study six blind and visually impaired students attended three community colleges in one Mid-Atlantic state. They shared their experiences inside the mathematics classroom. Five of the students were enrolled in developmental level math, and one student was enrolled in college level math. The conceptual framework used to explore how blind and visually impaired students persist and succeed in math courses was Piaget's theory on constructivism. The data from this qualitative study was obtained through personal interviews. Based on the findings of this study, blind and visually impaired students need the following accommodations in order to succeed in community college math courses: Accommodating instructors who help to keep blind and visually impaired students motivated and facilitate their academic progress towards math completion, tutorial support, assistive technology, and a positive and inclusive learning environment.
Embedded Systems and TensorFlow Frameworks as Assistive Technology Solutions.
Mulfari, Davide; Palla, Alessandro; Fanucci, Luca
2017-01-01
In the field of deep learning, this paper presents the design of a wearable computer vision system for visually impaired users. The Assistive Technology solution exploits a powerful single-board computer and smart glasses with a camera to allow its user to explore objects in the surrounding environment, and it employs the Google TensorFlow machine learning framework to classify the acquired stills in real time. The proposed aid can therefore increase awareness of the explored environment, and it interacts with its user by means of audio messages.
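This is not the authors' system, but a minimal sketch of the classification step they describe, using an off-the-shelf ImageNet model in TensorFlow; a real aid would read frames from the smart-glasses camera and speak the top label rather than print it:

```python
import numpy as np
import tensorflow as tf

# Classify a single camera frame with a pretrained ImageNet model.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

def classify_frame(frame_rgb):
    """frame_rgb: HxWx3 uint8 image; returns (label, probability)."""
    img = tf.image.resize(frame_rgb, (224, 224))
    batch = tf.keras.applications.mobilenet_v2.preprocess_input(img)[tf.newaxis]
    preds = model.predict(batch, verbose=0)
    _, label, prob = tf.keras.applications.mobilenet_v2.decode_predictions(
        preds, top=1)[0][0]
    return label, float(prob)

dummy_frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
print(classify_frame(dummy_frame))        # random noise -> arbitrary label
```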
Rehabilitation of Reading and Visual Exploration in Visual Field Disorders: Transfer or Specificity?
ERIC Educational Resources Information Center
Schuett, Susanne; Heywood, Charles A.; Kentridge, Robert W.; Dauner, Ruth; Zihl, Josef
2012-01-01
Reading and visual exploration impairments in unilateral homonymous visual field disorders are frequent and disabling consequences of acquired brain injury. Compensatory therapies have been developed, which allow patients to regain sufficient reading and visual exploration performance through systematic oculomotor training. However, it is still…
Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study.
Eggenberger, Noëmi; Preisig, Basil C; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M
2016-01-01
Co-speech gestures are omnipresent and a crucial element of human interaction by facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task. Twenty aphasic patients and 30 healthy controls watched videos in which speech was either combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. In aphasic patients, the incongruent condition resulted in a significant decrease of accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline accuracy. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase the accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes.
Ponce Wong, Ruben D; Hellman, Randall B; Santos, Veronica J
2014-01-01
Upper-limb amputees rely primarily on visual feedback when using their prostheses to interact with others or objects in their environment. A constant reliance upon visual feedback can be mentally exhausting and does not suffice for many activities when line-of-sight is unavailable. Upper-limb amputees could greatly benefit from the ability to perceive edges, one of the most salient features of 3D shape, through touch alone. We present an approach for estimating edge orientation with respect to an artificial fingertip through haptic exploration using a multimodal tactile sensor on a robot hand. Key parameters from the tactile signals for each of four exploratory procedures were used as inputs to a support vector regression model. Edge orientation angles ranging from -90 to 90 degrees were estimated with an 85-input model having an R² of 0.99 and RMS error of 5.08 degrees. Electrode impedance signals provided the most useful inputs by encoding spatially asymmetric skin deformation across the entire fingertip. Interestingly, sensor regions that were not in direct contact with the stimulus provided particularly useful information. Methods described here could pave the way for semi-autonomous capabilities in prosthetic or robotic hands during haptic exploration, especially when visual feedback is unavailable.
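A hedged sketch of the regression setup (with synthetic stand-in features rather than the study's 85 tactile inputs): features summarizing each haptic exploration are mapped to an edge-orientation angle with support vector regression, and accuracy is reported as RMS error in degrees:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Synthetic tactile-like features mapped to edge orientation with SVR.
rng = np.random.default_rng(0)
angles = rng.uniform(-90, 90, size=400)                  # ground-truth angles
features = np.column_stack([np.sin(np.radians(angles)),  # toy "tactile" cues
                            np.cos(np.radians(angles)),
                            rng.normal(0, 0.05, (400, 10))])  # nuisance channels

X_train, X_test, y_train, y_test = train_test_split(features, angles,
                                                    random_state=0)
svr = SVR(kernel="rbf", C=10.0).fit(X_train, y_train)
rmse = np.sqrt(np.mean((svr.predict(X_test) - y_test) ** 2))
print(f"RMS error: {rmse:.2f} degrees")
```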
Power-Production Diagnostic Tools for Low-Density Wind Farms with Applications to Wake Steering
NASA Astrophysics Data System (ADS)
Takle, E. S.; Herzmann, D.; Rajewski, D. A.; Lundquist, J. K.; Rhodes, M. E.
2016-12-01
Hansen (2011) provided guidelines for wind farm wake analysis with applications to "high-density" wind farms (where the average distance between turbines is less than ten times the rotor diameter). For "low-density" wind farms (average distance greater than fifteen times the rotor diameter), or sections of wind farms, we demonstrate simpler sorting and visualization tools that reveal wake interactions and opportunities for wind farm power prediction and wake steering. SCADA data from a segment of a large mid-continent wind farm, together with surface flux measurements and lidar data, are subjected to analysis and visualization of wake interactions. A time-history animated visualization of a plan view of the power level of individual turbines provides a quick analysis of wake interaction dynamics. Yaw-based sectoral histograms of the enhancement or decline of wind speed and power relative to wind farm reference levels reveal the angular width of wake interactions and identify the turbine(s) responsible for the power reduction. Concurrent surface flux measurements within the wind farm allowed us to evaluate the influence of stability on wake loss. A one-season climatology is used to identify high-priority candidates for wake steering based on estimated power recovery. Typical clearing prices on the day-ahead market are used to estimate the added value of wake steering. Current research is exploring options for identifying candidate locations for wind farm "build-in" in existing low-density wind farms.
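The yaw-based sectoral histogram is straightforward to reproduce on SCADA-like records. The sketch below uses fabricated data with an artificial wake sector to show the binning step: records are grouped by yaw sector and each sector's mean power is compared with a farm reference to expose the angular width of the deficit:

```python
import numpy as np
import pandas as pd

# Fabricated 10-minute SCADA-like records with an imposed wake deficit when
# the (hypothetical) neighboring turbine is upwind, between 250 and 280 deg.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "yaw_deg": rng.uniform(0, 360, n),
    "power_kW": rng.normal(1500, 100, n),
})
waked = df["yaw_deg"].between(250, 280)
df.loc[waked, "power_kW"] *= 0.7

farm_reference = df["power_kW"].mean()
sectors = pd.cut(df["yaw_deg"], bins=np.arange(0, 361, 10))   # 10-deg sectors
deficit = df.groupby(sectors, observed=True)["power_kW"].mean() / farm_reference - 1
print(deficit.loc[deficit < -0.1])        # sectors with >10% power deficit
```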
Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus
2018-01-01
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
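A toy sketch of the S-NDF idea (CPU-side and greatly simplified relative to the paper's GPU implementation): fragments contribute their normals to a per-pixel histogram over quantized directions, and re-lighting integrates that stored distribution against a new light direction without re-rendering the particles:

```python
import numpy as np

# Per-pixel normal distributions on a tiny screen, using only normal azimuths
# for simplicity; re-lighting is a dot product of the stored distribution with
# a Lambertian response for the new light direction.
W, H, BINS = 4, 4, 8
rng = np.random.default_rng(0)
n_frag = 10_000
px = rng.integers(0, W, n_frag)
py = rng.integers(0, H, n_frag)
azimuth = rng.uniform(0, 2 * np.pi, n_frag)     # fragment normal azimuths

sndf = np.zeros((H, W, BINS))
np.add.at(sndf, (py, px, (azimuth / (2 * np.pi) * BINS).astype(int) % BINS), 1)
sndf /= sndf.sum(axis=2, keepdims=True)          # normalize to a distribution

def relight(sndf, light_azimuth):
    bin_dirs = (np.arange(BINS) + 0.5) / BINS * 2 * np.pi
    lambert = np.clip(np.cos(bin_dirs - light_azimuth), 0, None)
    return sndf @ lambert                        # per-pixel shading, no re-render

print(np.round(relight(sndf, np.pi / 3), 3))
```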
BDNF and TNF-α polymorphisms in memory.
Yogeetha, B S; Haupt, L M; McKenzie, K; Sutherland, H G; Okolicsyani, R K; Lea, R A; Maher, B H; Chan, R C K; Shum, D H K; Griffiths, L R
2013-09-01
Here, we investigate the genetic basis of human memory in healthy individuals and the potential role of two polymorphisms previously implicated in memory function. We explored aspects of retrospective and prospective memory, including semantic, short-term, working, and long-term memory, in conjunction with brain-derived neurotrophic factor (BDNF) and tumor necrosis factor-alpha (TNF-α). Memory scores for healthy individuals in the population were obtained for each memory type, and the population was genotyped via restriction fragment length polymorphism for the BDNF rs6265 (Val66Met) SNP and via pyrosequencing for the TNF-α rs113325588 SNP. Univariate ANOVA revealed a significant association of the BDNF polymorphism with visual and spatial memory retention and a significant association of the TNF-α polymorphism with spatial memory retention. In addition, a significant interactive effect between the BDNF and TNF-α polymorphisms was observed for spatial memory retention. In practice, visual memory involves spatial information and the two memory systems work together; however, our data demonstrate that individuals with the Val/Val BDNF genotype have poorer visual memory but higher spatial memory retention, indicating a level of interaction between TNF-α and BDNF in spatial memory retention. This is the first study to use genetic analysis to determine the interaction between BDNF and TNF-α in relation to memory in normal adults, and it provides important information regarding the effect of genetic determinants and gene interactions on human memory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krause, Josua; Dasgupta, Aritra; Fekete, Jean-Daniel
Dealing with the curse of dimensionality is a key challenge in high-dimensional data visualization. We present SeekAView to address three main gaps in the existing research literature. First, automated methods like dimensionality reduction or clustering suffer from a lack of transparency in letting analysts interact with their outputs in real-time to suit their exploration strategies. The results often suffer from a lack of interpretability, especially for domain experts not trained in statistics and machine learning. Second, exploratory visualization techniques like scatter plots or parallel coordinates suffer from a lack of visual scalability: it is difficult to present a coherent overview of interesting combinations of dimensions. Third, the existing techniques do not provide a flexible workflow that allows for multiple perspectives into the analysis process by automatically detecting and suggesting potentially interesting subspaces. In SeekAView we address these issues using suggestion based visual exploration of interesting patterns for building and refining multidimensional subspaces. Compared to the state-of-the-art in subspace search and visualization methods, we achieve higher transparency in showing not only the results of the algorithms, but also interesting dimensions calibrated against different metrics. We integrate a visually scalable design space with an iterative workflow guiding the analysts by choosing the starting points and letting them slice and dice through the data to find interesting subspaces and detect correlations, clusters, and outliers. We present two usage scenarios for demonstrating how SeekAView can be applied in real-world data analysis scenarios.
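As a hedged illustration of one possible suggestion metric (not SeekAView's algorithms), the snippet below ranks candidate two-dimensional subspaces of an invented dataset by absolute Pearson correlation so that strongly related dimension pairs surface as starting points:

```python
import numpy as np
from itertools import combinations

# Rank 2D subspaces by |Pearson correlation| as a simple "interestingness"
# score; one strong relation is planted in the synthetic data.
rng = np.random.default_rng(0)
n, d = 500, 8
X = rng.normal(size=(n, d))
X[:, 5] = 0.9 * X[:, 2] + 0.1 * rng.normal(size=n)

corr = np.corrcoef(X, rowvar=False)
suggestions = sorted(combinations(range(d), 2),
                     key=lambda ij: -abs(corr[ij]))
for i, j in suggestions[:3]:
    print(f"dims ({i}, {j}): |r| = {abs(corr[i, j]):.2f}")
```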
Fluorescence Microscopy of Nanochannel-Confined DNA.
Westerlund, Fredrik; Persson, Fredrik; Fritzsche, Joachim; Beech, Jason P; Tegenfeldt, Jonas O
2018-01-01
Stretching of DNA in nanoscale confinement allows for several important studies. The genetic contents of the DNA can be visualized on the single DNA molecule level and both the polymer physics of confined DNA and also DNA/protein and other DNA/DNA-binding molecule interactions can be explored. This chapter describes the basic steps to fabricate the nanostructures, perform the experiments and analyze the data.
Interactive Exploration on Large Genomic Datasets.
Tu, Eric
2016-01-01
The prevalence of large genomics datasets has made the need to explore these data more important. Large sequencing projects like the 1000 Genomes Project [1], which reconstructed the genomes of 2,504 individuals sampled from 26 populations, have produced over 200 TB of publicly available data. Meanwhile, existing genomic visualization tools have been unable to scale with the growing amount of larger, more complex data. This difficulty is acute when viewing large regions (over 1 megabase, or 1,000,000 bases of DNA), or when concurrently viewing multiple samples of data. While genomic processing pipelines have shifted towards using distributed computing techniques, such as with ADAM [4], genomic visualization tools have not. In this work we present Mango, a scalable genome browser built on top of ADAM that can run both locally and on a cluster. Mango combines several optimizations in a single application to drive novel genomic visualization techniques over terabytes of genomic data. By building visualization on top of a distributed processing pipeline, we can perform visualization queries over large regions that are not possible with current tools, and decrease the time for viewing large datasets. Mango is part of the Big Data Genomics project at the University of California, Berkeley [25] and is published under the Apache 2 license. Mango is available at https://github.com/bigdatagenomics/mango.
Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H
2017-08-01
Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with the concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.
Applying the metro map to software development management
NASA Astrophysics Data System (ADS)
Aguirregoitia, Amaia; Dolado, J. Javier; Presedo, Concepción
2010-01-01
This paper presents MetroMap, a new graphical representation model for controlling and managing the software development process. MetroMap uses metaphors and visual representation techniques to explore several key indicators in order to support problem detection and resolution. The resulting visualization addresses diverse management tasks, such as tracking deviations from the plan, analyzing patterns of failure detection and correction, assessing change management policies overall, and estimating product quality. The proposed visualization uses a metro map metaphor along with various interactive techniques to represent information about the software development process and to deal efficiently with multivariate visual queries. Finally, the paper describes the implementation of the tool in JavaFX with data from a real project and the results of testing the tool with those data and with users attempting several information retrieval tasks. The conclusion reports the results of analyzing user response time and efficiency using the MetroMap visualization system. The utility of the tool was positively evaluated.
Immersive Visual Analytics for Transformative Neutron Scattering Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Daniel, Jamison R; Drouhard, Margaret
The ORNL Spallation Neutron Source (SNS) provides the most intense pulsed neutron beams in the world for scientific research and development across a broad range of disciplines. SNS experiments produce large volumes of complex data that are analyzed by scientists with varying degrees of experience using 3D visualization and analysis systems. However, it is notoriously difficult to achieve proficiency with 3D visualizations. Because 3D representations are key to understanding the neutron scattering data, scientists are unable to analyze their data in a timely fashion, resulting in inefficient use of the limited and expensive SNS beam time. We believe a more intuitive interface for exploring neutron scattering data can be created by combining immersive virtual reality technology with high performance data analytics and human interaction. In this paper, we present our initial investigations of immersive visualization concepts as well as our vision for an immersive visual analytics framework that could lower the barriers to 3D exploratory data analysis of neutron scattering data at the SNS.
Multi-focus and multi-level techniques for visualization and analysis of networks with thematic data
NASA Astrophysics Data System (ADS)
Cossalter, Michele; Mengshoel, Ole J.; Selker, Ted
2013-01-01
Information-rich data sets bring several challenges in the areas of visualization and analysis, even when associated with node-link network visualizations. This paper presents an integration of multi-focus and multi-level techniques that enable interactive, multi-step comparisons in node-link networks. We describe NetEx, a visualization tool that enables users to simultaneously explore different parts of a network and its thematic data, such as time series or conditional probability tables. NetEx, implemented as a Cytoscape plug-in, has been applied to the analysis of electrical power networks, Bayesian networks, and the Enron e-mail repository. In this paper we briefly discuss visualization and analysis of the Enron social network, but focus on data from an electrical power network. Specifically, we demonstrate how NetEx supports the analytical task of electrical power system fault diagnosis. Results from a user study with 25 subjects suggest that NetEx enables more accurate isolation of complex faults compared to a specially designed software tool.
Science Education with the LSST
NASA Astrophysics Data System (ADS)
Jacoby, S. H.; Khandro, L. M.; Larson, A. M.; McCarthy, D. W.; Pompea, S. M.; Shara, M. M.
2004-12-01
LSST will create the first true celestial cinematography - a revolution in public access to the changing universe. The challenge will be to take advantage of the unique capabilities of the LSST while presenting the data in ways that are manageable, engaging, and supportive of national science education goals. To prepare for this opportunity for exploration, tools and displays will be developed using current deep-sky multi-color imaging data. Education professionals from LSST partners invite input from interested members of the community. Initial LSST science education priorities include: - Fostering authentic student-teacher research projects at all levels, - Exploring methods of visualizing the large and changing datasets in science centers, - Defining Web-based interfaces and tools for access and interaction with the data, - Delivering online instructional materials, and - Developing meaningful interactions between LSST scientists and the public.
Caçola, Priscila M; Pant, Mohan D
2014-10-01
The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.
Age-related differences in audiovisual interactions of semantically different stimuli.
Viggiano, Maria Pia; Giovannelli, Fabio; Giganti, Fiorenza; Rossi, Arianna; Metitieri, Tiziana; Rebai, Mohamed; Guerrini, Renzo; Cincotta, Massimo
2017-01-01
Converging results have shown that adults benefit from congruent multisensory stimulation in the identification of complex stimuli, whereas the developmental trajectory of the ability to integrate multisensory inputs in children is less well understood. In this study we explored the effects of audiovisual semantic congruency on identification of visually presented stimuli belonging to different categories, using a cross-modal approach. Four groups of children ranging in age from 6 to 13 years and adults were administered an object identification task of visually presented pictures belonging to living and nonliving entities. Stimuli were presented in visual, congruent audiovisual, incongruent audiovisual, and noise conditions. Results showed that children under 12 years of age did not benefit from multisensory presentation in speeding up the identification. In children the incoherent audiovisual condition had an interfering effect, especially for the identification of living things. These data suggest that the facilitating effect of the audiovisual interaction into semantic factors undergoes developmental changes and the consolidation of adult-like processing of multisensory stimuli begins in late childhood. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Manipulating and Visualizing Molecular Interactions in Customized Nanoscale Spaces
NASA Astrophysics Data System (ADS)
Stabile, Francis; Henkin, Gil; Berard, Daniel; Shayegan, Marjan; Leith, Jason; Leslie, Sabrina
We present a dynamically adjustable nanofluidic platform for formatting the conformations of and visualizing the interaction kinetics between biomolecules in solution, offering new time resolution and control of the reaction processes. This platform extends convex lens-induced confinement (CLiC), a technique for imaging molecules under confinement, by introducing a system for in situ modification of the chemical environment; this system uses a deep microchannel to diffusively exchange reagents within the nanoscale imaging region, whose height is fixed by a nanopost array. To illustrate, we visualize and manipulate salt-induced, surfactant-induced, and enzyme-induced reactions between small-molecule reagents and DNA molecules, where the conformations of the DNA molecules are formatted by the imposed nanoscale confinement. By using nanofabricated, nonabsorbing, low-background glass walls to confine biomolecules, our nanofluidic platform facilitates quantitative exploration of physiologically and biotechnologically relevant processes at the nanoscale. This device provides new kinetic information about dynamic chemical processes at the single-molecule level, using advancements in the CLiC design including a microchannel-based diffuser and postarray-based dialysis slit.
Sakurada, Takeshi; Ito, Koji; Gomi, Hiroaki
2016-01-01
Although strong motor coordination in intrinsic muscle coordinates has frequently been reported for bimanual movements, coordination in extrinsic visual coordinates is also crucial in various bimanual tasks. To explore the bimanual coordination mechanisms in terms of the frame of reference, here we characterized implicit bilateral interactions in visuomotor tasks. Visual perturbations (finger-cursor gain change) were applied while participants performed a rhythmic tracking task with both index fingers under an in-phase or anti-phase relationship in extrinsic coordinates. When they corrected the right finger's amplitude, the left finger's amplitude unintentionally also changed [motor interference (MI)], despite the instruction to keep its amplitude constant. Notably, we observed two specificities: one was large MI and low relative-phase variability (PV) under the intrinsic in-phase condition, and the other was large MI and high PV under the extrinsic in-phase condition. Additionally, using a multiple-interaction model, we successfully decomposed MI into intrinsic components caused by motor correction and extrinsic components caused by visual-cursor mismatch of the right finger's movements. This analysis revealed that the central nervous system facilitates MI by combining intrinsic and extrinsic components in the condition with in-phases in both intrinsic and extrinsic coordinates, and that under-additivity of the effects is explained by the brain's preference for the intrinsic interaction over extrinsic interaction. In contrast, the PV was significantly correlated with the intrinsic component, suggesting that the intrinsic interaction dominantly contributed to bimanual movement stabilization. The inconsistent features of MI and PV suggest that the central nervous system regulates multiple levels of bilateral interactions for various bimanual tasks. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Zhao, Shanrong; Xi, Li; Quan, Jie; Xi, Hualin; Zhang, Ying; von Schack, David; Vincent, Michael; Zhang, Baohong
2016-01-08
RNA sequencing (RNA-seq), a next-generation sequencing technique for transcriptome profiling, is being increasingly used, in part driven by the decreasing cost of sequencing. Nevertheless, the analysis of the massive amounts of data generated by large-scale RNA-seq remains a challenge. Multiple algorithms pertinent to basic analyses have been developed, and there is an increasing need to automate the use of these tools so as to obtain results in an efficient and user-friendly manner. Increased automation and improved visualization of the results will help make the results and findings of the analyses readily available to experimental scientists. By combining the best open source tools developed for RNA-seq data analyses and the most advanced web 2.0 technologies, we have implemented QuickRNASeq, a pipeline for large-scale RNA-seq data analyses and visualization. The QuickRNASeq workflow consists of three main steps. In Step #1, each individual sample is processed, including mapping RNA-seq reads to a reference genome, counting the numbers of mapped reads, quality control of the aligned reads, and SNP (single nucleotide polymorphism) calling. Step #1 is computationally intensive, and can be processed in parallel. In Step #2, the results from individual samples are merged, and an integrated and interactive project report is generated. All analysis results in the report are accessible via a single HTML entry webpage. Step #3 is the data interpretation and presentation step. The rich visualization features implemented here allow end users to interactively explore the results of RNA-seq data analyses, and to gain more insights into RNA-seq datasets. In addition, we used a real-world dataset to demonstrate the simplicity and efficiency of QuickRNASeq in RNA-seq data analyses and interactive visualizations. The seamless integration of automated capabilities with interactive visualizations in QuickRNASeq is not available in other published RNA-seq pipelines. The high degree of automation and interactivity in QuickRNASeq leads to a substantial reduction in the time and effort required prior to further downstream analyses and interpretation of the analysis findings. QuickRNASeq advances primary RNA-seq data analyses to the next level of automation, and is mature for public release and adoption.
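The three-step workflow lends itself to a simple driver: process samples independently in parallel, then merge. The sketch below mirrors only that shape with invented gene counts; it is not QuickRNASeq, and `process_sample` stands in for the alignment, counting, QC, and SNP-calling stages:

```python
from concurrent.futures import ProcessPoolExecutor
from collections import Counter
import random

GENES = ["TP53", "BRCA1", "EGFR", "MYC"]

def process_sample(sample_id: str) -> Counter:
    """Stand-in for Step #1: returns fake per-gene read counts."""
    rng = random.Random(sample_id)                       # deterministic fakes
    return Counter({g: rng.randint(0, 500) for g in GENES})

def run_pipeline(samples):
    with ProcessPoolExecutor() as pool:                  # Step 1: per-sample, parallel
        per_sample = dict(zip(samples, pool.map(process_sample, samples)))
    merged = {g: {s: per_sample[s][g] for s in samples} for g in GENES}
    return merged                                        # Step 2: merged count matrix

if __name__ == "__main__":
    print(run_pipeline(["sampleA", "sampleB", "sampleC"]))
```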
Metabolic rate and body size are linked with perception of temporal information
Healy, Kevin; McNally, Luke; Ruxton, Graeme D.; Cooper, Natalie; Jackson, Andrew L.
2013-01-01
Body size and metabolic rate both fundamentally constrain how species interact with their environment, and hence ultimately affect their niche. While many mechanisms leading to these constraints have been explored, their effects on the resolution at which temporal information is perceived have been largely overlooked. The visual system acts as a gateway to the dynamic environment and the relative resolution at which organisms are able to acquire and process visual information is likely to restrict their ability to interact with events around them. As both smaller size and higher metabolic rates should facilitate rapid behavioural responses, we hypothesized that these traits would favour perception of temporal change over finer timescales. Using critical flicker fusion frequency, the lowest frequency of flashing at which a flickering light source is perceived as constant, as a measure of the maximum rate of temporal information processing in the visual system, we carried out a phylogenetic comparative analysis of a wide range of vertebrates that supported this hypothesis. Our results have implications for the evolution of signalling systems and predator–prey interactions, and, combined with the strong influence that both body mass and metabolism have on a species' ecological niche, suggest that time perception may constitute an important and overlooked dimension of niche differentiation. PMID:24109147
Review: visual analytics of climate networks
NASA Astrophysics Data System (ADS)
Nocke, T.; Buschmann, S.; Donges, J. F.; Marwan, N.; Schulz, H.-J.; Tominski, C.
2015-09-01
Network analysis has become an important approach in studying complex spatiotemporal behaviour within geophysical observation and simulation data. This new field produces increasing numbers of large geo-referenced networks to be analysed. Particular focus lies currently on the network analysis of the complex statistical interrelationship structure within climatological fields. The standard procedure for such network analyses is the extraction of network measures in combination with static standard visualisation methods. Existing interactive visualisation methods and tools for geo-referenced network exploration are often either not known to the analyst or their potential is not fully exploited. To fill this gap, we illustrate how interactive visual analytics methods in combination with geovisualisation can be tailored for visual climate network investigation. Therefore, the paper provides a problem analysis relating the multiple visualisation challenges to a survey undertaken with network analysts from the research fields of climate and complex systems science. Then, as an overview for the interested practitioner, we review the state-of-the-art in climate network visualisation and provide an overview of existing tools. As a further contribution, we introduce the visual network analytics tools CGV and GTX, providing tailored solutions for climate network analysis, including alternative geographic projections, edge bundling, and 3-D network support. Using these tools, the paper illustrates the application potentials of visual analytics for climate networks based on several use cases including examples from global, regional, and multi-layered climate networks.
Review: visual analytics of climate networks
NASA Astrophysics Data System (ADS)
Nocke, T.; Buschmann, S.; Donges, J. F.; Marwan, N.; Schulz, H.-J.; Tominski, C.
2015-04-01
Network analysis has become an important approach in studying complex spatiotemporal behaviour within geophysical observation and simulation data. This new field produces increasing amounts of large geo-referenced networks to be analysed. Particular focus lies currently on the network analysis of the complex statistical interrelationship structure within climatological fields. The standard procedure for such network analyses is the extraction of network measures in combination with static standard visualisation methods. Existing interactive visualisation methods and tools for geo-referenced network exploration are often either not known to the analyst or their potential is not fully exploited. To fill this gap, we illustrate how interactive visual analytics methods in combination with geovisualisation can be tailored for visual climate network investigation. Therefore, the paper provides a problem analysis, relating the multiple visualisation challenges with a survey undertaken with network analysts from the research fields of climate and complex systems science. Then, as an overview for the interested practitioner, we review the state-of-the-art in climate network visualisation and provide an overview of existing tools. As a further contribution, we introduce the visual network analytics tools CGV and GTX, providing tailored solutions for climate network analysis, including alternative geographic projections, edge bundling, and 3-D network support. Using these tools, the paper illustrates the application potentials of visual analytics for climate networks based on several use cases including examples from global, regional, and multi-layered climate networks.
Hartzler, A L; Patel, R A; Czerwinski, M; Pratt, W; Roseway, A; Chandrasekaran, N; Back, A
2014-01-01
This article is part of the focus theme of Methods of Information in Medicine on "Pervasive Intelligent Technologies for Health". Effective nonverbal communication between patients and clinicians fosters both the delivery of empathic patient-centered care and positive patient outcomes. Although nonverbal skill training is a recognized need, few efforts to enhance patient-clinician communication provide visual feedback on nonverbal aspects of the clinical encounter. We describe a novel approach that uses social signal processing technology (SSP) to capture nonverbal cues in real time and to display ambient visual feedback on control and affiliation--two primary, yet distinct dimensions of interpersonal nonverbal communication. To examine the design and clinician acceptance of ambient visual feedback on nonverbal communication, we 1) formulated a model of relational communication to ground SSP and 2) conducted a formative user study using mixed methods to explore the design of visual feedback. Based on a model of relational communication, we reviewed interpersonal communication research to map nonverbal cues to signals of affiliation and control evidenced in patient-clinician interaction. Corresponding with our formulation of this theoretical framework, we designed ambient real-time visualizations that reflect variations of affiliation and control. To explore clinicians' acceptance of this visual feedback, we conducted a lab study using the Wizard-of-Oz technique to simulate system use with 16 healthcare professionals. We followed up with seven of those participants through interviews to iterate on the design with a revised visualization that addressed emergent design considerations. Ambient visual feedback on nonverbal communication provides a theoretically grounded and acceptable way to provide clinicians with awareness of their nonverbal communication style. We provide implications for the design of such visual feedback that encourages empathic patient-centered communication and include considerations of metaphor, color, size, position, and timing of feedback. Ambient visual feedback from SSP holds promise as an acceptable means for facilitating empathic patient-centered nonverbal communication.
Data visualization, bar naked: A free tool for creating interactive graphics.
Weissgerber, Tracey L; Savic, Marko; Winham, Stacey J; Stanisavljevic, Dejana; Garovic, Vesna D; Milic, Natasa M
2017-12-15
Although bar graphs are designed for categorical data, they are routinely used to present continuous data in studies that have small sample sizes. This presentation is problematic, as many data distributions can lead to the same bar graph, and the actual data may suggest different conclusions from the summary statistics. To address this problem, many journals have implemented new policies that require authors to show the data distribution. This paper introduces a free, web-based tool for creating an interactive alternative to the bar graph (http://statistika.mfub.bg.ac.rs/interactive-dotplot/). This tool allows authors with no programming expertise to create customized interactive graphics, including univariate scatterplots, box plots, and violin plots, for comparing values of a continuous variable across different study groups. Individual data points may be overlaid on the graphs. Additional features facilitate visualization of subgroups or clusters of non-independent data. A second tool enables authors to create interactive graphics from data obtained with repeated independent experiments (http://statistika.mfub.bg.ac.rs/interactive-repeated-experiments-dotplot/). These tools are designed to encourage exploration and critical evaluation of the data behind the summary statistics and may be valuable for promoting transparency, reproducibility, and open science in basic biomedical research. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.
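A minimal offline analogue of such a graphic (not the web tool itself) can be produced with matplotlib; the data and group names below are made up.

```python
# Offline analogue of the interactive dot plot: show every observation plus a
# summary line, instead of a bar that hides the distribution. Data are made up.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
groups = {"Control": rng.normal(5.0, 1.0, 8), "Treated": rng.normal(6.5, 1.5, 8)}

fig, ax = plt.subplots()
for i, (name, values) in enumerate(groups.items()):
    x = np.full(values.size, i, dtype=float)
    x += rng.uniform(-0.08, 0.08, values.size)      # jitter so points don't overlap
    ax.scatter(x, values, alpha=0.8)
    ax.hlines(np.median(values), i - 0.2, i + 0.2)  # group median, not just a bar

ax.set_xticks(range(len(groups)))
ax.set_xticklabels(groups.keys())
ax.set_ylabel("Measured value (arbitrary units)")
plt.show()
```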
EmailTime: visual analytics and statistics for temporal email
NASA Astrophysics Data System (ADS)
Erfani Joorabchi, Minoo; Yim, Ji-Dong; Shaw, Christopher D.
2011-01-01
Although the discovery and analysis of communication patterns in large and complex email datasets are difficult tasks, they can be a valuable source of information. We present EmailTime, a visual analysis tool of email correspondence patterns over the course of time that interactively portrays personal and interpersonal networks using the correspondence in the email dataset. Our approach is to put time as a primary variable of interest, and plot emails along a time line. EmailTime helps email dataset explorers interpret archived messages by providing zooming, panning, filtering, highlighting, etc. To support analysis, it also measures and visualizes histograms, graph centrality and frequency on the communication graph that can be induced from the email collection. This paper describes EmailTime's capabilities, along with a large case study with the Enron email dataset to explore the behaviors of email users within different organizational positions from January 2000 to December 2001. We defined email behavior as the email activity level of people with respect to a series of measured metrics, e.g., sent and received emails, numbers of email addresses, etc. These metrics were calculated through EmailTime. Results showed specific patterns in the use of email within different organizational positions. We suggest that integrating both statistics and visualizations to display information about email datasets may simplify their evaluation.
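The kinds of metrics described, per-period activity counts and centrality on the induced communication graph, can be sketched as follows; the tiny email log and the use of networkx are illustrative assumptions, not EmailTime's implementation.

```python
# Sketch of EmailTime-style metrics on a toy email log: per-month activity and
# degree centrality of the induced communication graph. The log is made up.
from collections import Counter
import networkx as nx

emails = [  # (sender, recipient, "YYYY-MM")
    ("alice", "bob", "2000-01"), ("alice", "carol", "2000-01"),
    ("bob", "alice", "2000-02"), ("carol", "bob", "2000-02"),
    ("alice", "bob", "2000-02"),
]

sent_per_month = Counter((s, m) for s, _, m in emails)  # activity-level metric
G = nx.DiGraph()
G.add_edges_from((s, r) for s, r, _ in emails)          # communication graph
centrality = nx.degree_centrality(G)

print(sent_per_month)
print(centrality)
```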
Real-time colouring and filtering with graphics shaders
NASA Astrophysics Data System (ADS)
Vohl, D.; Fluke, C. J.; Barnes, D. G.; Hassan, A. H.
2017-11-01
Despite the popularity of the Graphics Processing Unit (GPU) for general purpose computing, one should not forget about the practicality of the GPU for fast scientific visualization. As astronomers have increasing access to three-dimensional (3D) data from instruments and facilities like integral field units and radio interferometers, visualization techniques such as volume rendering offer means to quickly explore spectral cubes as a whole. As most 3D visualization techniques have been developed in fields of research like medical imaging and fluid dynamics, many transfer functions are not optimal for astronomical data. We demonstrate how transfer functions and graphics shaders can be exploited to provide new astronomy-specific explorative colouring methods. We present 12 shaders, including four novel transfer functions specifically designed to produce intuitive and informative 3D visualizations of spectral cube data. We compare their utility to classic colour mapping. The remaining shaders highlight how common computation like filtering, smoothing and line ratio algorithms can be integrated as part of the graphics pipeline. We discuss how this can be achieved by utilizing the parallelism of modern GPUs along with a shading language, letting astronomers apply these new techniques at interactive frame rates. All shaders investigated in this work are included in the open source software shwirl (Vohl 2017).
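A transfer function is easiest to see as a mapping from voxel intensity to colour and opacity; the sketch below is a CPU analogue in NumPy under assumed ramp shapes, not one of the shwirl shaders.

```python
# CPU analogue of a simple transfer function: map voxel intensities to RGBA.
# Real GPU shaders do this per fragment; this only illustrates the mapping.
import numpy as np

def transfer_function(intensity, vmin, vmax):
    """Linear grey ramp; opacity rises more slowly than brightness."""
    t = np.clip((intensity - vmin) / (vmax - vmin), 0.0, 1.0)
    return np.stack([t, t, t, t ** 2], axis=-1)   # R, G, B, alpha

cube = np.random.default_rng(1).normal(size=(32, 32, 32))  # stand-in spectral cube
colours = transfer_function(cube, cube.min(), cube.max())
print(colours.shape)  # (32, 32, 32, 4)
```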
NASA Technical Reports Server (NTRS)
Rosenchein, Stanley J.; Burns, J. Brian; Chapman, David; Kaelbling, Leslie P.; Kahn, Philip; Nishihara, H. Keith; Turk, Matthew
1993-01-01
This report is concerned with agents that act to gain information. In previous work, we developed agent models combining qualitative modeling with real-time control. That work, however, focused primarily on actions that affect physical states of the environment. The current study extends that work by explicitly considering problems of active information-gathering and by exploring specialized aspects of information-gathering in computational perception, learning, and language. In our theoretical investigations, we decomposed agents into their perceptual and action components and identified these with elements of a state-machine model of control. The mathematical properties of each were developed in isolation, and their interactions were then studied. We considered the complexity dimension and the uncertainty dimension and related these to intelligent-agent design issues. We also explored active information gathering in visual processing. Working within the active vision paradigm, we developed a concept of 'minimal meaningful measurements' suitable for demand-driven vision. We then developed and tested an architecture for ongoing recognition and interpretation of visual information. In the area of information gathering through learning, we explored techniques for coping with combinatorial complexity. We also explored information gathering through explicit linguistic action by considering the nature of conversational rules, coordination, and situated communication behavior.
Canning, Claire Ann; Loe, Alan; Cockett, Kathryn Jane; Gagnon, Paul; Zary, Nabil
2017-01-01
Curriculum mapping and dynamic visualization are quickly becoming an integral aspect of quality improvement in support of innovations that drive curriculum quality assurance processes in medical education. CLUE (Curriculum Explorer), a highly interactive, engaging and independent platform, was developed to support curriculum transparency, enhance student engagement, and enable granular search and display. Reflecting a design-based approach to meet the needs of the school's varied stakeholders, CLUE employs an iterative and reflective approach to drive the evolution of its platform, as it seeks to accommodate the ever-changing needs of our stakeholders in the fast-paced world of medicine and medical education today. CLUE exists independent of institutional systems and, in this way, is uniquely positioned to deliver a data-driven quality improvement resource, easily adaptable for use by any member of our health care professions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartmann, Anja, E-mail: hartmann@ipk-gatersleben.de; Schreiber, Falk; Martin-Luther-University Halle-Wittenberg, Halle
The characterization of biological systems with respect to their behavior and functionality based on versatile biochemical interactions is a major challenge. To understand these complex mechanisms at the systems level, modeling approaches are investigated. Different modeling formalisms allow metabolic models to be analyzed depending on the question to be solved, the biochemical knowledge and the availability of experimental data. Here, we describe a method for an integrative analysis of the structure and dynamics represented by qualitative and quantitative metabolic models. Using various formalisms, the metabolic model is analyzed from different perspectives. Determined structural and dynamic properties are visualized in the context of the metabolic model. Interaction techniques allow the exploration and visual analysis, thereby leading to a broader understanding of the behavior and functionality of the underlying biological system. The System Biology Metabolic Model Framework (SBM² Framework) implements the developed method and, as an example, is applied for the integrative analysis of the crop plant potato.
Volume curtaining: a focus+context effect for multimodal volume visualization
NASA Astrophysics Data System (ADS)
Fairfield, Adam J.; Plasencia, Jonathan; Jang, Yun; Theodore, Nicholas; Crawford, Neil R.; Frakes, David H.; Maciejewski, Ross
2014-03-01
In surgical preparation, physicians will often utilize multimodal imaging scans to capture complementary information to improve diagnosis and to drive patient-specific treatment. These imaging scans may consist of data from magnetic resonance imaging (MR), computed tomography (CT), or other various sources. The challenge in using these different modalities is that the physician must mentally map the two modalities together during the diagnosis and planning phase. Furthermore, the different imaging modalities will be generated at various resolutions as well as slightly different orientations due to patient placement during scans. In this work, we present an interactive system for multimodal data fusion, analysis and visualization. Developed with partners from neurological clinics, this work discusses initial system requirements and physician feedback at the various stages of component development. Finally, we present a novel focus+context technique for the interactive exploration of coregistered multi-modal data.
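The focus+context idea can be pictured on the CPU as a masked blend of two coregistered slices; the random stand-in volumes and circular focus region below are assumptions, not the paper's rendering technique.

```python
# Sketch of a focus+context blend of two coregistered modalities (e.g. CT, MR):
# inside a circular "focus" region show modality B, elsewhere show modality A.
import numpy as np

rng = np.random.default_rng(0)
ct = rng.random((128, 128))        # stand-ins for coregistered slices
mr = rng.random((128, 128))

yy, xx = np.mgrid[0:128, 0:128]
focus = (xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2  # circular focus mask

blended = np.where(focus, mr, ct)  # context from CT, focus region from MR
print(blended.shape)
```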
Interpreting Black-Box Classifiers Using Instance-Level Visual Explanations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamagnini, Paolo; Krause, Josua W.; Dasgupta, Aritra
2017-05-14
To realize the full potential of machine learning in diverse real-world domains, it is necessary for model predictions to be readily interpretable and actionable for the human in the loop. Analysts, who are the users but not the developers of machine learning models, often do not trust a model because of the lack of transparency in associating predictions with the underlying data space. To address this problem, we propose Rivelo, a visual analytic interface that enables analysts to understand the causes behind predictions of binary classifiers by interactively exploring a set of instance-level explanations. These explanations are model-agnostic, treating a model as a black box, and they help analysts in interactively probing the high-dimensional binary data space for detecting features relevant to predictions. We demonstrate the utility of the interface with a case study analyzing a random forest model on the sentiment of Yelp reviews about doctors.
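One way to picture a model-agnostic, instance-level explanation over binary features is to flip one feature at a time and record how the black-box prediction moves; the sketch below does exactly that on a toy random forest and is not Rivelo's actual explanation algorithm.

```python
# Not Rivelo's algorithm: a minimal model-agnostic sketch that treats a trained
# classifier as a black box and toggles one binary feature at a time to see
# which features move the predicted probability for a single instance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, 8))                 # binary feature matrix
y = (X[:, 0] & (1 - X[:, 3])).astype(int)             # toy label rule
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

instance = X[0].copy()
base = model.predict_proba([instance])[0, 1]
for j in range(instance.size):
    flipped = instance.copy()
    flipped[j] = 1 - flipped[j]                       # toggle one binary feature
    delta = model.predict_proba([flipped])[0, 1] - base
    print(f"feature {j}: change in P(class=1) = {delta:+.3f}")
```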
Helioviewer.org: Enhanced Solar & Heliospheric Data Visualization
NASA Astrophysics Data System (ADS)
Stys, J. E.; Ireland, J.; Hughitt, V. K.; Mueller, D.
2013-12-01
Helioviewer.org enables the simultaneous exploration of multiple heterogeneous solar data sets. In the latest iteration of this open-source web application, Hinode XRT and Yohkoh SXT join SDO, SOHO, STEREO, and PROBA2 as supported data sources. A newly enhanced user interface expands the utility of Helioviewer.org by adding annotations backed by data from the Heliospheric Events Knowledgebase (HEK). Helioviewer.org can now overlay solar feature and event data via interactive marker pins, extended regions, data labels, and information panels. An interactive timeline provides enhanced browsing and visualization of image data set coverage and solar events. The addition of a size-of-the-Earth indicator provides a sense of scale for solar and heliospheric features for education and public outreach purposes. Tight integration with the Virtual Solar Observatory and the SDO AIA cutout service enables solar physicists to seamlessly import science data into their SSW/IDL or SunPy/Python data analysis environments.
WebGL-enabled 3D visualization of a Solar Flare Simulation
NASA Astrophysics Data System (ADS)
Chen, A.; Cheung, C. M. M.; Chintzoglou, G.
2016-12-01
The visualization of magnetohydrodynamic (MHD) simulations of astrophysical systems such as solar flares often requires specialized software packages (e.g. Paraview and VAPOR). A shortcoming of using such software packages is the inability to share our findings with the public and scientific community in an interactive and engaging manner. By using the javascript-based WebGL application programming interface (API) and the three.js javascript package, we create an online in-browser experience for rendering solar flare simulations that will be interactive and accessible to the general public. The WebGL renderer displays objects such as vector flow fields, streamlines and textured isosurfaces. This allows the user to explore the spatial relation between the solar coronal magnetic field and the thermodynamic structure of the plasma in which the magnetic field is embedded. Plans for extending the features of the renderer will also be presented.
EdgeMaps: visualizing explicit and implicit relations
NASA Astrophysics Data System (ADS)
Dörk, Marian; Carpendale, Sheelagh; Williamson, Carey
2011-01-01
In this work, we introduce EdgeMaps as a new method for integrating the visualization of explicit and implicit data relations. Explicit relations are specific connections between entities already present in a given dataset, while implicit relations are derived from multidimensional data based on shared properties and similarity measures. Many datasets include both types of relations, which are often difficult to represent together in information visualizations. Node-link diagrams typically focus on explicit data connections, while not incorporating implicit similarities between entities. Multi-dimensional scaling considers similarities between items; however, explicit links between nodes are not displayed. In contrast, EdgeMaps visualizes both implicit and explicit relations by combining and complementing spatialization and graph drawing techniques. As a case study for this approach we chose a dataset of philosophers, their interests, influences, and birthdates. By introducing the limitation of activating only one node at a time, interesting visual patterns emerge that resemble the aesthetics of fireworks and waves. We argue that the interactive exploration of these patterns may allow the viewer to grasp the structure of a graph better than complex node-link visualizations.
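The combination described, similarity-based spatialization plus explicit links drawn on top, can be sketched with multidimensional scaling and matplotlib; the four-item dataset and feature vectors below are made up, and the layout details differ from EdgeMaps proper.

```python
# Sketch of the EdgeMaps idea: position items by similarity (implicit relations)
# with MDS, then draw explicit links on top. The tiny dataset is made up.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import MDS

names = ["P1", "P2", "P3", "P4"]
features = np.array([[1, 0, 1, 0],      # shared-interest vectors (implicit)
                     [1, 1, 1, 0],
                     [0, 1, 0, 1],
                     [0, 0, 1, 1]], dtype=float)
explicit = [(0, 1), (1, 2)]             # direct "influence" links (explicit)

pos = MDS(n_components=2, random_state=0).fit_transform(features)

fig, ax = plt.subplots()
ax.scatter(pos[:, 0], pos[:, 1])
for i, name in enumerate(names):
    ax.annotate(name, pos[i])
for a, b in explicit:                   # overlay explicit relations as lines
    ax.plot([pos[a, 0], pos[b, 0]], [pos[a, 1], pos[b, 1]])
plt.show()
```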
Sequence Diversity Diagram for comparative analysis of multiple sequence alignments.
Sakai, Ryo; Aerts, Jan
2014-01-01
The sequence logo is a graphical representation of a set of aligned sequences, commonly used to depict conservation of amino acid or nucleotide sequences. Although it effectively communicates the amount of information present at every position, this visual representation falls short when the domain task is to compare between two or more sets of aligned sequences. We present a new visual presentation called a Sequence Diversity Diagram and validate our design choices with a case study. Our software was developed using the open-source program called Processing. It loads multiple sequence alignment FASTA files and a configuration file, which can be modified as needed to change the visualization. The redesigned figure improves on the visual comparison of two or more sets, and it additionally encodes information on sequential position conservation. In our case study of the adenylate kinase lid domain, the Sequence Diversity Diagram reveals unexpected patterns and new insights, for example the identification of subgroups within the protein subfamily. Our future work will integrate this visual encoding into interactive visualization tools to support higher level data exploration tasks.
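The underlying quantity being compared, per-column residue frequencies in each aligned set, can be computed in a few lines; the toy alignments below are assumptions and the diagram's visual encoding is not reproduced.

```python
# Sketch of the underlying computation: per-column residue frequencies for two
# aligned sequence sets, the quantity a Sequence Diversity Diagram compares.
from collections import Counter

set_a = ["MKTA", "MKSA", "MRTA"]   # toy alignments of equal length
set_b = ["MKTG", "MQTG", "MKTG"]

def column_frequencies(seqs):
    """Return, for each alignment column, a residue -> relative frequency map."""
    out = []
    for col in zip(*seqs):
        counts = Counter(col)
        total = sum(counts.values())
        out.append({aa: n / total for aa, n in counts.items()})
    return out

for i, (fa, fb) in enumerate(zip(column_frequencies(set_a), column_frequencies(set_b))):
    print(f"position {i + 1}: set A {fa}  |  set B {fb}")
```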
Visualizing Alternative Phosphorus Scenarios for Future Food Security
Neset, Tina-Simone; Cordell, Dana; Mohr, Steve; VanRiper, Froggi; White, Stuart
2016-01-01
The impact of global phosphorus scarcity on food security has increasingly been the focus of scientific studies over the past decade. However, systematic analyses of alternative futures for phosphorus supply and demand throughout the food system are still rare and provide limited inclusion of key stakeholders. Addressing global phosphorus scarcity requires an integrated approach exploring potential demand reduction as well as recycling opportunities. This implies recovering phosphorus from multiple sources, such as food waste, manure, and excreta, as well as exploring novel opportunities to reduce the long-term demand for phosphorus in food production such as changing diets. Presently, there is a lack of stakeholder and scientific consensus around priority measures. To therefore enable exploration of multiple pathways and facilitate a stakeholder dialog on the technical, behavioral, and institutional changes required to meet long-term future phosphorus demand, this paper introduces an interactive web-based tool, designed for visualizing global phosphorus scenarios in real time. The interactive global phosphorus scenario tool builds on several demand and supply side measures that can be selected and manipulated interactively by the user. It provides a platform to facilitate stakeholder dialog to plan for a soft landing and identify a suite of concrete priority options, such as investing in agricultural phosphorus use efficiency, or renewable fertilizers derived from phosphorus recovered from wastewater and food waste, to determine how phosphorus demand to meet future food security could be attained on a global scale in 2040 and 2070. This paper presents four example scenarios, including (1) the potential of full recovery of human excreta, (2) the challenge of a potential increase in non-food phosphorus demand, (3) the potential of decreased animal product consumption, and (4) the potential decrease in phosphorus demand from increased efficiency and yield gains in crop and livestock systems. PMID:27840814
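The spirit of the scenario tool, combining demand-side and supply-side levers interactively, can be conveyed with toy arithmetic; the function, parameter names, and numbers below are illustrative assumptions, not the tool's underlying model.

```python
# Toy scenario arithmetic in the spirit of the web tool (not its actual model):
# start from a baseline demand, apply demand-reduction and recovery levers, and
# report the remaining demand for mined phosphate. All numbers are made up.
def remaining_mined_demand(baseline_mt, diet_reduction, efficiency_gain, recovery_rate):
    """Levers are fractions in [0, 1]; baseline is in Mt P per year."""
    demand = baseline_mt * (1 - diet_reduction) * (1 - efficiency_gain)
    recycled = demand * recovery_rate  # recovered from excreta, manure, food waste
    return demand - recycled

# Example: 25% lower animal-product demand, 15% efficiency gain, 40% recovery.
print(remaining_mined_demand(baseline_mt=25.0, diet_reduction=0.25,
                             efficiency_gain=0.15, recovery_rate=0.40))
```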
T.Rex Visual Analytics for Transactional Exploration
None
2018-01-16
T.Rex is PNNL's visual analytics tool that specializes in tabular structured data, like you might open with Excel. It's a client-server application, allowing the server to do a lot of the heavy lifting and the client to open spreadsheets with millions of rows. With datasets of that size, especially if you're unfamiliar with the contents, it's very hard to get a good grasp of what's in it using traditional tools. With T.Rex, the multiple views allow you to see categorical, temporal, numerical, relational, and summary data. The interactivity lets you look across your data and see how things relate to each other.
T.Rex Visual Analytics for Transactional Exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2014-07-01
T.Rex is PNNL's visual analytics tool that specializes in tabular structured data, like you might open with Excel. It's a client-server application, allowing the server to do a lot of the heavy lifting and the client to open spreadsheets with millions of rows. With datasets of that size, especially if you're unfamiliar with the contents, it's very hard to get a good grasp of what's in it using traditional tools. With T.Rex, the multiple views allow you to see categorical, temporal, numerical, relational, and summary data. The interactivity lets you look across your data and see how things relate to each other.
Online Analysis Enhances Use of NASA Earth Science Data
NASA Technical Reports Server (NTRS)
Acker, James G.; Leptoukh, Gregory
2007-01-01
Giovanni, the Goddard Earth Sciences Data and Information Services Center (GES DISC) Interactive Online Visualization and Analysis Infrastructure, has provided researchers with advanced capabilities to perform data exploration and analysis with observational data from NASA Earth observation satellites. In the past 5-10 years, examining geophysical events and processes with remote-sensing data required a multistep process of data discovery, data acquisition, data management, and ultimately data analysis. Giovanni accelerates this process by enabling basic visualization and analysis directly on the World Wide Web. In the last two years, Giovanni has added new data acquisition functions and expanded analysis options to increase its usefulness to the Earth science research community.
Visualize Your Data with Google Fusion Tables
NASA Astrophysics Data System (ADS)
Brisbin, K. E.
2011-12-01
Google Fusion Tables is a modern data management platform that makes it easy to host, manage, collaborate on, visualize, and publish tabular data online. Fusion Tables allows users to upload their own data to the Google cloud, which they can then use to create compelling and interactive visualizations with the data. Users can view data on a Google Map, plot data in a line chart, or display data along a timeline. Users can share these visualizations with others to explore and discover interesting trends about various types of data, including scientific data such as invasive species or global trends in disease. Fusion Tables has been used by many organizations to visualize a variety of scientific data. One example is the California Redistricting Map created by the LA Times: http://goo.gl/gwZt5 The Pacific Institute and Circle of Blue have used Fusion Tables to map the quality of water around the world: http://goo.gl/T4SX8 The World Resources Institute mapped the threat level of coral reefs using Fusion Tables: http://goo.gl/cdqe8 What attendees will learn in this session: This session will cover all the steps necessary to use Fusion Tables to create a variety of interactive visualizations. Attendees will begin by learning about the various options for uploading data into Fusion Tables, including Shapefile, KML file, and CSV file import. Attendees will then learn how to use Fusion Tables to manage their data by merging it with other data and controlling the permissions of the data. Finally, the session will cover how to create a customized visualization from the data, and share that visualization with others using both Fusion Tables and the Google Maps API.
Analysis of hand contact areas and interaction capabilities during manipulation and exploration.
Gonzalez, Franck; Gosselin, Florian; Bachta, Wael
2014-01-01
Manual human-computer interfaces for virtual reality are designed to allow an operator to interact with a computer simulation as naturally as possible. Dexterous haptic interfaces are the best suited for this goal. They give intuitive and efficient control over the environment with haptic and tactile feedback. This paper is aimed at helping in the choice of the interaction areas to be taken into account in the design of such interfaces. The literature dealing with hand interactions is first reviewed in order to point out the contact areas involved in exploration and manipulation tasks. Their frequencies of use are then extracted from existing recordings. The results are gathered in an original graphical interaction map allowing for a simple visualization of the way the hand is used, and compared with a map of mechanoreceptor densities. Then an interaction tree, mapping the relative amount of actions made available through the use of a given contact area, is built and correlated with the losses of hand function induced by amputations. A rating of some existing haptic interfaces and guidelines for their design are finally provided to illustrate a possible use of the developed graphical tools.
Visualization and Interaction in Research, Teaching, and Scientific Communication
NASA Astrophysics Data System (ADS)
Ammon, C. J.
2017-12-01
Modern computing provides many tools for exploring observations, numerical calculations, and theoretical relationships. The number of options is, in fact, almost overwhelming. But the choices provide those with modest programming skills opportunities to create unique views of scientific information and to develop deeper insights into their data, their computations, and the underlying theoretical data-model relationships. I present simple examples of using animation and human-computer interaction to explore scientific data and scientific-analysis approaches. I illustrate how a little programming ability can free scientists from the constraints of existing tools and can facilitate the development of a deeper appreciation of data and models. I present examples from a suite of programming languages ranging from C to JavaScript, including the Wolfram Language. JavaScript is valuable for sharing tools and insight (hopefully) with others because it is integrated into one of the most powerful communication tools in human history, the web browser. Although too much of that power is often spent on distracting advertisements, the underlying computation and graphics engines are efficient, flexible, and almost universally available in desktop and mobile computing platforms. Many are working to fulfill the browser's potential to become the most effective tool for interactive study. Open-source frameworks for visualizing everything from algorithms to data are available, but advance rapidly. One strategy for dealing with swiftly changing tools is to adopt common, open data formats that are easily adapted (often by framework or tool developers). I illustrate the use of animation and interaction in research and teaching with examples from earthquake seismology.
Visible Geology - Interactive online geologic block modelling
NASA Astrophysics Data System (ADS)
Cockett, R.
2012-12-01
Geology is a highly visual science, and many disciplines require spatial awareness and manipulation. For example, interpreting cross-sections, geologic maps, or plotting data on a stereonet all require various levels of spatial abilities. These skills are often not focused on in undergraduate geoscience curricula and many students struggle with spatial relations, manipulations, and penetrative abilities (e.g. Titus & Horsman, 2009). A newly developed program, Visible Geology, allows students to be introduced to many geologic concepts and spatial skills in a virtual environment. Visible Geology is a web-based, three-dimensional environment where students can create and interrogate their own geologic block models. The program begins with a blank model; users then add geologic beds (with custom thickness and color) and can add geologic deformation events like tilting, folding, and faulting. Additionally, simple intrusive dikes can be modelled, as well as unconformities. Students can also explore the interaction of geology with topography by drawing elevation contours to produce their own topographic models. Students can not only spatially manipulate their model, but can create cross-sections and boreholes to practice their visual penetrative abilities. Visible Geology is easy to access and use, with no downloads required, so it can be incorporated into current, paper-based, lab activities. Sample learning activities are being developed that target introductory and structural geology curricula with learning objectives such as relative geologic history, fault characterization, apparent dip and thickness, interference folding, and stereonet interpretation. Visible Geology provides a richly interactive and immersive environment for students to explore geologic concepts and practice their spatial skills. (Figure caption: screenshot of Visible Geology showing folding and faulting interactions on a ridge topography.)
A collaborative visual analytics suite for protein folding research.
Harvey, William; Park, In-Hee; Rübel, Oliver; Pascucci, Valerio; Bremer, Peer-Timo; Li, Chenglong; Wang, Yusu
2014-09-01
Molecular dynamics (MD) simulation is a crucial tool for understanding principles behind important biochemical processes such as protein folding and molecular interaction. With the rapidly increasing power of modern computers, large-scale MD simulation experiments can be performed regularly, generating huge amounts of MD data. An important question is how to analyze and interpret such massive and complex data. One of the (many) challenges involved in analyzing MD simulation data computationally is the high-dimensionality of such data. Given a massive collection of molecular conformations, researchers typically need to rely on their expertise and prior domain knowledge in order to retrieve certain conformations of interest. It is not easy to make and test hypotheses as the data set as a whole is somewhat "invisible" due to its high dimensionality. In other words, it is hard to directly access and examine individual conformations from a sea of molecular structures, and to further explore the entire data set. There is also no easy and convenient way to obtain a global view of the data or its various modalities of biochemical information. To this end, we present an interactive, collaborative visual analytics tool for exploring massive, high-dimensional molecular dynamics simulation data sets. The most important utility of our tool is to provide a platform where researchers can easily and effectively navigate through the otherwise "invisible" simulation data sets, exploring and examining molecular conformations both as a whole and at individual levels. The visualization is based on the concept of a topological landscape, which is a 2D terrain metaphor preserving certain topological and geometric properties of the high dimensional protein energy landscape. In addition to facilitating easy exploration of conformations, this 2D terrain metaphor also provides a platform where researchers can visualize and analyze various properties (such as contact density) overlayed on the top of the 2D terrain. Finally, the software provides a collaborative environment where multiple researchers can assemble observations and biochemical events into storyboards and share them in real time over the Internet via a client-server architecture. The software is written in Scala and runs on the cross-platform Java Virtual Machine. Binaries and source code are available at http://www.aylasoftware.org and have been released under the GNU General Public License. Copyright © 2014 Elsevier Inc. All rights reserved.
ImageX: new and improved image explorer for astronomical images and beyond
NASA Astrophysics Data System (ADS)
Hayashi, Soichi; Gopu, Arvind; Kotulla, Ralf; Young, Michael D.
2016-08-01
The One Degree Imager - Portal, Pipeline, and Archive (ODI-PPA) has included the Image Explorer interactive image visualization tool since it went operational. Portal users were able to quickly open up several ODI images within any HTML5 capable web browser, adjust the scaling, apply color maps, and perform other basic image visualization steps typically done on a desktop client like DS9. However, the original design of the Image Explorer required lossless PNG tiles to be generated and stored for all raw and reduced ODI images thereby taking up tens of TB of spinning disk space even though a small fraction of those images were being accessed by portal users at any given time. It also caused significant overhead on the portal web application and the Apache webserver used by ODI-PPA. We found it hard to merge in improvements made to a similar deployment in another project's portal. To address these concerns, we re-architected Image Explorer from scratch and came up with ImageX, a set of microservices that are part of the IU Trident project software suite, with rapid interactive visualization capabilities useful for ODI data and beyond. We generate a full resolution JPEG image for each raw and reduced ODI FITS image before producing a JPG tileset, one that can be rendered using the ImageX frontend code at various locations as appropriate within a web portal (for example: on tabular image listings, views allowing quick perusal of a set of thumbnails or other image sifting activities). The new design has decreased spinning disk requirements, uses AngularJS for the client side Model/View code (instead of depending on backend PHP Model/View/Controller code previously used), OpenSeaDragon to render the tile images, and uses nginx and a lightweight NodeJS application to serve tile images thereby significantly decreasing the Time To First Byte latency by a few orders of magnitude. We plan to extend ImageX for non-FITS images including electron microscopy and radiology scan images, and its featureset to include basic functions like image overlay and colormaps. Users needing more advanced visualization and analysis capabilities could use a desktop tool like DS9+IRAF on another IU Trident project called StarDock, without having to download Gigabytes of FITS image data.
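A tile pyramid of the kind ImageX serves can be sketched by repeatedly halving an image and cutting each level into fixed-size JPEG tiles; the function and file names below are hypothetical and omit ImageX's microservice plumbing.

```python
# Rough sketch of a JPEG tile pyramid: cut each resolution level into
# fixed-size tiles, then halve the image and repeat. Paths are made up.
from PIL import Image
import os

def build_tile_pyramid(src_path, out_dir, tile=256):
    img = Image.open(src_path).convert("RGB")
    level = 0
    while True:
        os.makedirs(os.path.join(out_dir, str(level)), exist_ok=True)
        w, h = img.size
        for ty in range(0, h, tile):
            for tx in range(0, w, tile):
                box = (tx, ty, min(tx + tile, w), min(ty + tile, h))
                img.crop(box).save(
                    os.path.join(out_dir, str(level), f"{tx}_{ty}.jpg"), quality=85)
        if max(w, h) <= tile:
            break
        img = img.resize((max(1, w // 2), max(1, h // 2)))  # next pyramid level
        level += 1

# build_tile_pyramid("full_resolution.jpg", "tiles/")  # hypothetical file names
```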
Exploring Interactions of Cultural Capital with Learner and Instructor Expectations: A Case Study
ERIC Educational Resources Information Center
Russell, Roxanne
2010-01-01
In this case study, we are able to take a close look at a situation not often encountered in the literature on ICTs in emerging economies: a private company from an emerging economy provided much-needed funding for a US NASA higher learning consortium through a contract for training and research and development in 3D visualization. This study…
The Diverse Data, User Driven Services and the Power of Giovanni at NASA GES DISC
NASA Technical Reports Server (NTRS)
Shen, Suhung
2017-01-01
This presentation provides an overview of remote sensing and model data at GES (Goddard Earth Sciences) DISC (Data and Information Services Center); Overview of data services at GES DISC (Registration with NASA data system; Searching and downloading data); Giovanni (Geospatial Interactive Online VisualizationANd aNalysis Infrastructure): online data exploration tool; and NASA Earth Data and Information System.
ERIC Educational Resources Information Center
Matsumoto, Paul S.
2014-01-01
The article describes the use of Mathematica, a computer algebra system (CAS), in a high school chemistry course. Mathematica was used to generate a graph, where a slider controls the value of parameter(s) in the equation; thus, students can visualize the effect of the parameter(s) on the behavior of the system. Also, Mathematica can show the…
ERIC Educational Resources Information Center
Akhter, Parven
2016-01-01
This paper is derived from a wider small-scale study of digital literacy practice that explores the ways in which a multilingual seven-year-old child, Bablu, interacts with his grandmother during Internet activities connected to Qur'anic literacy. The study aims to reveal how intergenerational learning support was given to Bablu by his…
Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T
2012-05-01
In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.
Heberle, Henry; Carazzolle, Marcelo Falsarella; Telles, Guilherme P; Meirelles, Gabriela Vaz; Minghim, Rosane
2017-09-13
The advent of "omics" science has brought new perspectives in contemporary biology through the high-throughput analyses of molecular interactions, providing new clues in protein/gene function and in the organization of biological pathways. Biomolecular interaction networks, or graphs, are simple abstract representations where the components of a cell (e.g. proteins, metabolites etc.) are represented by nodes and their interactions are represented by edges. An appropriate visualization of data is crucial for understanding such networks, since pathways are related to functions that occur in specific regions of the cell. The force-directed layout is an important and widely used technique to draw networks according to their topologies. Placing the networks into cellular compartments helps to quickly identify where network elements are located and, more specifically, concentrated. Currently, only a few tools provide the capability of visually organizing networks by cellular compartments. Most of them cannot handle large and dense networks. Even for small networks with hundreds of nodes the available tools are not able to reposition the network while the user is interacting, limiting the visual exploration capability. Here we propose CellNetVis, a web tool to easily display biological networks in a cell diagram employing a constrained force-directed layout algorithm. The tool is freely available and open-source. It was originally designed for networks generated by the Integrated Interactome System and can be used with networks from others databases, like InnateDB. CellNetVis has demonstrated to be applicable for dynamic investigation of complex networks over a consistent representation of a cell on the Web, with capabilities not matched elsewhere.
An Affordance-Based Framework for Human Computation and Human-Computer Collaboration.
Crouser, R J; Chang, R
2012-12-01
Visual Analytics is "the science of analytical reasoning facilitated by visual interactive interfaces". The goal of this field is to develop tools and methodologies for approaching problems whose size and complexity render them intractable without the close coupling of both human and machine analysis. Researchers have explored this coupling in many venues: VAST, Vis, InfoVis, CHI, KDD, IUI, and more. While there have been myriad promising examples of human-computer collaboration, there exists no common language for comparing systems or describing the benefits afforded by designing for such collaboration. We argue that this area would benefit significantly from consensus about the design attributes that define and distinguish existing techniques. In this work, we have reviewed 1,271 papers from many of the top-ranking conferences in visual analytics, human-computer interaction, and visualization. From these, we have identified 49 papers that are representative of the study of human-computer collaborative problem-solving, and provide a thorough overview of the current state-of-the-art. Our analysis has uncovered key patterns of design hinging on human and machine-intelligence affordances, and also indicates unexplored avenues in the study of this area. The results of this analysis provide a common framework for understanding these seemingly disparate branches of inquiry, which we hope will motivate future work in the field.
NASA Astrophysics Data System (ADS)
Keika, Kunihiro; Miyoshi, Yoshizumi; Machida, Shinobu; Ieda, Akimasa; Seki, Kanako; Hori, Tomoaki; Miyashita, Yukinaga; Shoji, Masafumi; Shinohara, Iku; Angelopoulos, Vassilis; Lewis, Jim W.; Flores, Aaron
2017-12-01
This paper introduces ISEE_3D, an interactive visualization tool for three-dimensional plasma velocity distribution functions, developed by the Institute for Space-Earth Environmental Research, Nagoya University, Japan. The tool provides a variety of methods to visualize the distribution function of space plasma: scatter, volume, and isosurface modes. The tool also has a wide range of functions, such as displaying magnetic field vectors and two-dimensional slices of distributions, to facilitate extensive analysis. The coordinate transformation to magnetic field coordinates is also implemented in the tool. The source code of the tool is written as scripts of a widely used data analysis software language, Interactive Data Language, which is widespread in the fields of space physics and solar physics. The current version of the tool can be used for data files of the plasma distribution function from the Geotail satellite mission, which are publicly accessible through the Data Archives and Transmission System of the Institute of Space and Astronautical Science (ISAS)/Japan Aerospace Exploration Agency (JAXA). The tool is also available in the Space Physics Environment Data Analysis Software to visualize plasma data from the Magnetospheric Multiscale and the Time History of Events and Macroscale Interactions during Substorms missions. The tool is planned to be applied to data from other missions, such as Arase (ERG) and the Van Allen Probes, after replacing or adding data-loading plug-ins. This visualization tool helps scientists better understand the dynamics of space plasma, particularly in regions where the magnetohydrodynamic approximation is not valid, for example, the Earth's inner magnetosphere, magnetopause, bow shock, and plasma sheet.
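One of the tool's basic views, a two-dimensional slice through a three-dimensional distribution function, can be approximated with a made-up drifting Maxwellian; ISEE_3D itself is written in IDL, so the Python below is only an analogue.

```python
# Sketch of a 2D slice through a 3D velocity distribution function, using a
# made-up drifting Maxwellian in place of real Geotail or MMS data.
import numpy as np
import matplotlib.pyplot as plt

v = np.linspace(-1000, 1000, 101)                 # km/s grid per axis
vx, vy, vz = np.meshgrid(v, v, v, indexing="ij")
f = np.exp(-((vx - 300) ** 2 + vy ** 2 + vz ** 2) / (2 * 200.0 ** 2))

plt.imshow(f[:, :, 50].T, origin="lower",
           extent=[v.min(), v.max(), v.min(), v.max()])
plt.xlabel("Vx (km/s)")
plt.ylabel("Vy (km/s)")
plt.colorbar(label="f (arbitrary units)")
plt.show()
```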
McGuckian, Thomas B; Cole, Michael H; Pepping, Gert-Jan
2018-04-01
To visually perceive opportunities for action, athletes rely on the movements of their eyes, head and body to explore their surrounding environment. To date, the specific types of technology and their efficacy for assessing the exploration behaviours of association footballers have not been systematically reviewed. This review aimed to synthesise the visual perception and exploration behaviours of footballers according to the task constraints, action requirements of the experimental task, and level of expertise of the athlete, in the context of the technology used to quantify the visual perception and exploration behaviours of footballers. A systematic search for papers that included keywords related to football, technology, and visual perception was conducted. All 38 included articles utilised eye-movement registration technology to quantify visual perception and exploration behaviour. The experimental domain appears to influence the visual perception behaviour of footballers, however no studies investigated exploration behaviours of footballers in open-play situations. Studies rarely utilised representative stimulus presentation or action requirements. To fully understand the visual perception requirements of athletes, it is recommended that future research seek to validate alternate technologies that are capable of investigating the eye, head and body movements associated with the exploration behaviours of footballers during representative open-play situations.
HiTC: exploration of high-throughput ‘C’ experiments
Servant, Nicolas; Lajoie, Bryan R.; Nora, Elphège P.; Giorgetti, Luca; Chen, Chong-Jian; Heard, Edith; Dekker, Job; Barillot, Emmanuel
2012-01-01
Summary: The R/Bioconductor package HiTC facilitates the exploration of high-throughput 3C-based data. It allows users to import and export ‘C’ data, to transform, normalize, annotate and visualize interaction maps. The package operates within the Bioconductor framework and thus offers new opportunities for future development in this field. Availability and implementation: The R package HiTC is available from the Bioconductor website. A detailed vignette provides additional documentation and help for using the package. Contact: nicolas.servant@curie.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22923296
An evaluation-guided approach for effective data visualization on tablets
NASA Astrophysics Data System (ADS)
Games, Peter S.; Joshi, Alark
2015-01-01
There is a rising trend of data analysis and visualization tasks being performed on a tablet device. Apps with interactive data visualization capabilities are available for a wide variety of domains. We investigate whether users grasp how to effectively interpret and interact with visualizations. We conducted a detailed user evaluation to study the abilities of individuals with respect to analyzing data on a tablet through an interactive visualization app. Based upon the results of the user evaluation, we find that most subjects performed well at understanding and interacting with simple visualizations, specifically tables and line charts. A majority of the subjects struggled with identifying interactive widgets, recognizing interactive widgets with overloaded functionality, and understanding visualizations which do not display data for sorted attributes. Based on our study, we identify guidelines for designers and developers of mobile data visualization apps that include recommendations for effective data representation and interaction.
Exploring 4D Flow Data in an Immersive Virtual Environment
NASA Astrophysics Data System (ADS)
Stevens, A. H.; Butkiewicz, T.
2017-12-01
Ocean models help us to understand and predict a wide range of intricate physical processes which comprise the atmospheric and oceanic systems of the Earth. Because these models output an abundance of complex time-varying three-dimensional (i.e., 4D) data, effectively conveying the myriad information from a given model poses a significant visualization challenge. The majority of the research effort into this problem has concentrated around synthesizing and examining methods for representing the data itself; by comparison, relatively few studies have looked into the potential merits of various viewing conditions and virtual environments. We seek to improve our understanding of the benefits offered by current consumer-grade virtual reality (VR) systems through an immersive, interactive 4D flow visualization system. Our dataset is a Regional Ocean Modeling System (ROMS) model representing a 12-hour tidal cycle of the currents within New Hampshire's Great Bay estuary. The model data was loaded into a custom VR particle system application using the OpenVR software library and the HTC Vive hardware, which tracks a headset and two six-degree-of-freedom (6DOF) controllers within a 5m-by-5m area. The resulting visualization system allows the user to coexist in the same virtual space as the data, enabling rapid and intuitive analysis of the flow model through natural interactions with the dataset and within the virtual environment. Whereas a traditional computer screen typically requires the user to reposition a virtual camera in the scene to obtain the desired view of the data, in virtual reality the user can simply move their head to the desired viewpoint, completely eliminating the mental context switches from data exploration/analysis to view adjustment and back. The tracked controllers become tools to quickly manipulate (reposition, reorient, and rescale) the dataset and to interrogate it by, e.g., releasing dye particles into the flow field, probing scalar velocities, placing a cutting plane through a region of interest, etc. It is hypothesized that the advantages afforded by head-tracked viewing and 6DOF interaction devices will lead to faster and more efficient examination of 4D flow data. A human factors study is currently being prepared to empirically evaluate this method of visualization and interaction.
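The numerical core of such a dye-particle system is simple advection of particle positions through the velocity field; the toy rotating field and forward-Euler step below are assumptions standing in for the interpolated ROMS currents and the VR rendering.

```python
# Sketch of the core of a flow-visualization particle system: advect dye
# particles through a made-up 2D velocity field with forward-Euler steps.
import numpy as np

def velocity(p, t):
    """Toy rotating, tide-like field; stands in for interpolated ROMS currents."""
    x, y = p[:, 0], p[:, 1]
    u = -y * np.cos(0.5 * t)
    v = x * np.cos(0.5 * t)
    return np.stack([u, v], axis=1)

rng = np.random.default_rng(0)
particles = rng.uniform(-1, 1, size=(500, 2))   # released "dye" particles
dt = 0.05
for step in range(240):                         # march through the tidal cycle
    particles += dt * velocity(particles, step * dt)

print(particles.mean(axis=0), particles.std(axis=0))
```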
NASA Astrophysics Data System (ADS)
Kilb, D. L.; Fundis, A. T.; Risien, C. M.
2012-12-01
The focus of the Education and Public Engagement (EPE) component of the NSF's Ocean Observatories Initiative (OOI) is to provide a new layer of cyber-interactivity for undergraduate educators to bring near real-time data from the global ocean into learning environments. To accomplish this, we are designing six online services including: 1) visualization tools, 2) a lesson builder, 3) a concept map builder, 4) educational web services (middleware), 5) collaboration tools and 6) an educational resource database. Here, we report on our Fall 2012 release that includes the first four of these services: 1) Interactive visualization tools allow users to interactively select data of interest, display the data in various views (e.g., maps, time-series and scatter plots) and obtain statistical measures such as mean, standard deviation and a regression line fit to select data. Specific visualization tools include a tool to compare different months of data, a time series explorer tool to investigate the temporal evolution of select data parameters (e.g., sea water temperature or salinity), a glider profile tool that displays ocean glider tracks and associated transects, and a data comparison tool that allows users to view the data either in scatter plot view comparing one parameter with another, or in time series view. 2) Our interactive lesson builder tool allows users to develop a library of online lesson units, which are collaboratively editable and sharable and provides starter templates designed from learning theory knowledge. 3) Our interactive concept map tool allows the user to build and use concept maps, a graphical interface to map the connection between concepts and ideas. This tool also provides semantic-based recommendations, and allows for embedding of associated resources such as movies, images and blogs. 4) Education web services (middleware) will provide an educational resource database API.
LOD map--A visual interface for navigating multiresolution volume visualization.
Wang, Chaoli; Shen, Han-Wei
2006-01-01
In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for examining, comparing, and validating different LOD selection algorithms. While traditional methods rely on the final rendered images for quality measurement, we introduce the LOD map, an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure of LOD quality is based on the formulation of entropy from information theory and takes into account the distortion and contribution of multiresolution data blocks. An LOD map is generated by mapping key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable updates of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make LOD adjustment a simple task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
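An entropy-based quality score of the kind described can be pictured roughly as follows. This Python sketch assumes per-block distortion and contribution values and combines them into a normalized distribution; it is an illustrative approximation, not the paper's exact formulation.

```python
# Hedged sketch of an entropy-style LOD quality score: combine each block's
# distortion and contribution into a probability-like weight and measure how
# evenly "importance" is spread across the selected blocks.
import numpy as np

def lod_entropy(distortion, contribution, eps=1e-12):
    """Entropy (bits) of the normalized distortion*contribution distribution."""
    w = np.asarray(distortion) * np.asarray(contribution)
    p = w / (w.sum() + eps)
    return float(-(p * np.log2(p + eps)).sum())

# Example: 8 blocks in the current LOD selection.
distortion   = np.array([0.9, 0.1, 0.4, 0.05, 0.3, 0.2, 0.6, 0.15])
contribution = np.array([0.2, 0.9, 0.5, 0.30, 0.4, 0.7, 0.1, 0.60])
print(f"LOD quality (entropy): {lod_entropy(distortion, contribution):.3f}")
```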
Exploratory Climate Data Visualization and Analysis Using DV3D and UVCDAT
NASA Technical Reports Server (NTRS)
Maxwell, Thomas
2012-01-01
Earth system scientists are being inundated by an explosion of data generated by ever-increasing resolution in both global models and remote sensors. Advanced tools for accessing, analyzing, and visualizing very large and complex climate data are required to maintain rapid progress in Earth system research. To meet this need, NASA, in collaboration with the Ultra-scale Visualization Climate Data Analysis Tools (UVCDAT) consortium, is developing exploratory climate data analysis and visualization tools which provide data analysis capabilities for the Earth System Grid (ESG). This paper describes DV3D, a UVCDAT package that enables exploratory analysis of climate simulation and observation datasets. DV3D provides user-friendly interfaces for visualization and analysis of climate data at a level appropriate for scientists. It features workflow interfaces, interactive 4D data exploration, hyperwall and stereo visualization, automated provenance generation, and parallel task execution. DV3D's integration with CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of high performance climate data analysis operations. DV3D expands the scientists' toolbox by incorporating a suite of rich new exploratory visualization and analysis methods for addressing the complexity of climate datasets.
Interactive Retro-Deformation of Terrain for Reconstructing 3D Fault Displacements.
Westerteiger, R; Compton, T; Bernadin, T; Cowgill, E; Gwinner, K; Hamann, B; Gerndt, A; Hagen, H
2012-12-01
Planetary topography is the result of complex interactions between geological processes, of which faulting is a prominent component. Surface-rupturing earthquakes cut and move landforms that develop across active faults, producing characteristic surface displacements across the fault. Geometric models of faults and their associated surface displacements are commonly applied to reconstruct these offsets and enable interpretation of the observed topography. However, current 2D techniques are limited in their capability to convey both the three-dimensional kinematics of faulting and the incremental sequence of events required by a given reconstruction. Here we present a real-time system for interactive retro-deformation of faulted topography to enable reconstruction of fault displacement within a high-resolution (sub-1 m/pixel) 3D terrain visualization. We employ geometry shaders on the GPU to intersect the surface mesh with fault segments interactively specified by the user and transform the resulting surface blocks in real time according to a kinematic model of fault motion. Our method facilitates a human-in-the-loop approach to the reconstruction of fault displacements by providing instant visual feedback while exploring the parameter space. Thus, scientists can evaluate the validity of traditional point-to-point reconstructions by visually examining a smooth interpolation of the displacement in 3D. We show the efficacy of our approach by using it to reconstruct segments of the San Andreas fault, California, as well as a graben structure in the Noctis Labyrinthus region on Mars.
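The core retro-deformation step, splitting the terrain along a fault trace and rigidly moving one block by the inverse of the slip, can be sketched on the CPU as below. The single vertical fault plane and pure-translation kinematics are simplifying assumptions; the paper performs the equivalent operation in a GPU geometry shader.

```python
# Hedged CPU sketch of retro-deformation: classify terrain vertices by which
# side of a vertical fault plane they lie on, then translate one block back by
# the slip vector. p0 and n are a 2D point and in-map-plane normal defining
# the fault trace; the kinematic model here is rigid translation only.
import numpy as np

def retro_deform(vertices, p0, n, slip):
    """vertices: (N, 3) terrain points (x, y, z); slip: 3D slip vector to undo."""
    n = np.asarray(n, float) / np.linalg.norm(n)
    side = (vertices[:, :2] - np.asarray(p0, float)) @ n > 0.0
    out = vertices.copy()
    out[side] -= np.asarray(slip, float)   # move the offset block back
    return out

# Example: undo 5 m of strike-slip on a synthetic terrain patch.
xy = np.random.rand(1000, 2) * 100.0
z = 10.0 * np.sin(xy[:, 0] / 10.0)
terrain = np.column_stack([xy, z])
restored = retro_deform(terrain, p0=(50.0, 50.0), n=(0.0, 1.0),
                        slip=(5.0, 0.0, 0.0))
print(restored.shape)
```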
Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization.
Bladin, Karl; Axelsson, Emil; Broberg, Erik; Emmart, Carter; Ljung, Patric; Bock, Alexander; Ynnerman, Anders
2017-08-29
Results of planetary mapping are often shared openly for use in scientific research and mission planning. In its raw format, however, the data is not accessible to non-experts due to the difficulty of grasping the context and the intricate acquisition process. We present work on tailoring and integrating multiple data processing and visualization methods to interactively contextualize geospatial surface data of celestial bodies for use in science communication. Because our approach handles dynamic data sources streamed from online repositories, it significantly shortens the time between discovery and dissemination of data and results. We describe the image acquisition pipeline, the pre-processing steps to derive a 2.5D terrain, and a chunked level-of-detail, out-of-core rendering approach that enables interactive exploration of global maps and high-resolution digital terrain models. The results are demonstrated for three different celestial bodies. The first case addresses high-resolution map data on the surface of Mars. A second case shows dynamic processes, such as concurrent weather conditions on Earth, which require temporal datasets. As a final example, we use data from the New Horizons spacecraft, which acquired images during a single flyby of Pluto. We visualize the acquisition process as well as the resulting surface data. Our work has been implemented in the OpenSpace software [8], which enables interactive presentations in a range of environments such as immersive dome theaters, interactive touch tables, and virtual reality headsets.
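A chunked level-of-detail scheme of the kind described typically refines a quadtree chunk while its geometric error, projected to screen space, exceeds a pixel threshold. The Python sketch below illustrates that refinement test with an assumed error model and chunk structure; it is not the OpenSpace implementation.

```python
# Hedged sketch of chunked-LOD selection for global terrain rendering.
import math
from dataclasses import dataclass

@dataclass
class Chunk:
    center: tuple           # (x, y) chunk centre in world units
    size: float             # edge length in world units
    geometric_error: float  # max world-space error of this chunk's mesh

    def children(self):
        h, (cx, cy) = self.size / 2.0, self.center
        return [Chunk((cx + dx * h / 2, cy + dy * h / 2), h, self.geometric_error / 2.0)
                for dx in (-1, 1) for dy in (-1, 1)]

def screen_space_error(err, dist, fov_y=math.radians(60), viewport_h=1080):
    """Approximate on-screen size in pixels of a world-space error at a distance."""
    return err * viewport_h / (2.0 * dist * math.tan(fov_y / 2.0))

def select_chunks(chunk, cam, max_level, threshold_px=2.0, level=0):
    """Recursively pick the set of chunks to render for the current camera."""
    dist = math.dist(chunk.center, cam) + 1e-6
    if level < max_level and screen_space_error(chunk.geometric_error, dist) > threshold_px:
        out = []
        for child in chunk.children():
            out += select_chunks(child, cam, max_level, threshold_px, level + 1)
        return out
    return [chunk]

root = Chunk(center=(0.0, 0.0), size=1000.0, geometric_error=50.0)
print(len(select_chunks(root, cam=(10.0, 10.0), max_level=6)))
```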
NASA Astrophysics Data System (ADS)
Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin
2014-01-01
This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day and night, Moon phases and seasons. These modules were used in a science and literacy unit for 35 second graders at an urban elementary school in the Midwestern USA. Data included pre- and post-interviews, audio-taped lessons and classroom observations. Post-interviews demonstrated that children's knowledge of the shapes and movements of the Earth and Moon, the alternation of day and night, the occurrence of the seasons, and the Moon's changing appearance increased. Second graders reported that they enjoyed expanding their knowledge through hands-on experiences; through its reality effect, 3D visualization enabled them to observe space objects moving in virtual space. The teachers noted that 3D visualization stimulated children's interest in space and that using 3D visualization in combination with other teaching methods (literacy experiences, videos and photos, simulations, discussions, and presentations) supported student learning. The teachers and students still experienced challenges using 3D visualization due to technical problems with 3D vision and time constraints. We conclude that 3D visualization offers hands-on experiences for challenging science concepts and may support young children's ability to view phenomena that would typically require direct, long-term observations in outer space. The results imply a reconsideration of the assumed capabilities of young children to understand astronomical phenomena.
Diller, Kyle I; Bayden, Alexander S; Audie, Joseph; Diller, David J
2018-01-01
There is growing interest in peptide-based drug design and discovery. Due to their relatively large size, polymeric nature, and chemical complexity, the design of peptide-based drugs presents an interesting "big data" challenge. Here, we describe an interactive computational environment, PeptideNavigator, for naturally exploring the tremendous amount of information generated during a peptide drug design project. The purpose of PeptideNavigator is the presentation of large and complex experimental and computational data sets, particularly 3D data, so as to enable multidisciplinary scientists to make optimal decisions during a peptide drug discovery project. PeptideNavigator provides users with numerous viewing options, such as scatter plots, sequence views, and sequence frequency diagrams. These views allow for the collective visualization and exploration of many peptides and their properties, ultimately enabling the user to focus on a small number of peptides of interest. To drill down into the details of individual peptides, PeptideNavigator provides users with a Ramachandran plot viewer and a fully featured 3D visualization tool. Each view is linked, allowing the user to seamlessly navigate from collective views of large peptide data sets to the details of individual peptides with promising property profiles. Two case studies, based on MHC-1A activating peptides and MDM2 scaffold design, are presented to demonstrate the utility of PeptideNavigator in the context of disparate peptide-design projects. Copyright © 2017 Elsevier Ltd. All rights reserved.
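The Ramachandran plot viewer mentioned above rests on a standard computation: the backbone dihedral angles phi and psi for each residue. The minimal Python sketch below assumes residues are supplied as dictionaries of N, CA, and C coordinates; that input layout is an illustrative assumption, not PeptideNavigator's data model.

```python
# Hedged sketch of the computation behind a Ramachandran plot.
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle (degrees) defined by four points."""
    b1, b2, b3 = p1 - p0, p2 - p1, p3 - p2
    n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
    m1 = np.cross(n1, b2 / np.linalg.norm(b2))
    return np.degrees(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

def ramachandran(residues):
    """Yield (phi, psi) for each interior residue of a peptide chain."""
    for i in range(1, len(residues) - 1):
        prev, cur, nxt = residues[i - 1], residues[i], residues[i + 1]
        phi = dihedral(prev["C"], cur["N"], cur["CA"], cur["C"])
        psi = dihedral(cur["N"], cur["CA"], cur["C"], nxt["N"])
        yield phi, psi

# Example with random coordinates standing in for a parsed structure.
rng = np.random.default_rng(0)
chain = [{k: rng.random(3) * 10 for k in ("N", "CA", "C")} for _ in range(5)]
for phi, psi in ramachandran(chain):
    print(f"phi={phi:7.1f}  psi={psi:7.1f}")
```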
Teaching Tectonics to Undergraduates with Web GIS
NASA Astrophysics Data System (ADS)
Anastasio, D. J.; Bodzin, A.; Sahagian, D. L.; Rutzmoser, S.
2013-12-01
Geospatial reasoning skills provide a means for manipulating, interpreting, and explaining structured information and are involved in higher-order cognitive processes that include problem solving and decision-making. Appropriately designed tools, technologies, and curricula can support spatial learning. We present Web-based visualization and analysis tools developed with JavaScript APIs to enhance tectonics curricula while promoting geospatial thinking and scientific inquiry. The Web GIS interface integrates graphics, multimedia, and animations that allow users to explore and discover geospatial patterns that are not easily recognized. Features include a swipe tool that enables users to see underneath layers, query tools useful in exploring earthquake and volcano data sets, a subduction and elevation profile tool that facilitates visualization between map and cross-sectional views, drafting tools, a location function, and interactive image dragging functionality on the Web GIS. The Web GIS is platform independent and can be used on tablets or computers. The GIS tool set enables learners to view, manipulate, and analyze rich data sets from local to global scales, including such data as geology, population, heat flow, land cover, seismic hazards, fault zones, continental boundaries, and elevation, using two- and three-dimensional visualization and analytical software. Coverages that allow users to explore plate boundaries and global heat flow processes aided learning in a Lehigh University Earth and environmental science Structural Geology and Tectonics class and are freely available on the Web.
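An elevation profile tool of the kind described essentially samples a gridded raster along a user-drawn line to build a cross-sectional view. The Python sketch below does this under illustrative assumptions about the raster layout and with simple bilinear interpolation; it is not the Web GIS implementation.

```python
# Hedged sketch: sample an elevation raster along a line to form a profile.
import numpy as np

def profile(elev, start, end, n_samples=200):
    """Bilinear-interpolated elevations along the segment start->end.
    elev: 2D array indexed [row, col]; start/end: (col, row) coordinates."""
    x = np.linspace(start[0], end[0], n_samples)
    y = np.linspace(start[1], end[1], n_samples)
    x0 = np.clip(np.floor(x).astype(int), 0, elev.shape[1] - 1)
    y0 = np.clip(np.floor(y).astype(int), 0, elev.shape[0] - 1)
    x1 = np.clip(x0 + 1, 0, elev.shape[1] - 1)
    y1 = np.clip(y0 + 1, 0, elev.shape[0] - 1)
    fx, fy = x - np.floor(x), y - np.floor(y)
    top = elev[y0, x0] * (1 - fx) + elev[y0, x1] * fx
    bot = elev[y1, x0] * (1 - fx) + elev[y1, x1] * fx
    dist = np.hypot(x - x[0], y - y[0])
    return dist, top * (1 - fy) + bot * fy

# Example: a synthetic raster with a "trench" running north-south.
cols = np.tile(np.arange(100.0), (100, 1))
elevation = -3000.0 + 40.0 * cols - 2000.0 * np.exp(-((cols - 30.0) ** 2) / 50.0)
d, z = profile(elevation, start=(5, 50), end=(95, 50))
print(d[-1], z.min(), z.max())
```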
Comprehension of Co-Speech Gestures in Aphasic Patients: An Eye Movement Study
Eggenberger, Noëmi; Preisig, Basil C.; Schumacher, Rahel; Hopfner, Simone; Vanbellingen, Tim; Nyffeler, Thomas; Gutbrod, Klemens; Annoni, Jean-Marie; Bohlhalter, Stephan; Cazzoli, Dario; Müri, René M.
2016-01-01
Background: Co-speech gestures are omnipresent and a crucial element of human interaction by facilitating language comprehension. However, it is unclear whether gestures also support language comprehension in aphasic patients. Using visual exploration behavior analysis, the present study aimed to investigate the influence of congruence between speech and co-speech gestures on comprehension in terms of accuracy in a decision task. Method: Twenty aphasic patients and 30 healthy controls watched videos in which speech was either combined with meaningless (baseline condition), congruent, or incongruent gestures. Comprehension was assessed with a decision task, while remote eye-tracking allowed analysis of visual exploration. Results: In aphasic patients, the incongruent condition resulted in a significant decrease in accuracy, while the congruent condition led to a significant increase in accuracy compared to baseline. In the control group, the incongruent condition resulted in a decrease in accuracy, while the congruent condition did not significantly increase accuracy. Visual exploration analysis showed that patients fixated significantly less on the face and tended to fixate more on the gesturing hands compared to controls. Conclusion: Co-speech gestures play an important role for aphasic patients as they modulate comprehension. Incongruent gestures evoke significant interference and deteriorate patients' comprehension. In contrast, congruent gestures enhance comprehension in aphasic patients, which might be valuable for clinical and therapeutic purposes. PMID:26735917
Neurons in the monkey amygdala detect eye-contact during naturalistic social interactions
Mosher, Clayton P.; Zimmerman, Prisca E.; Gothard, Katalin M.
2014-01-01
Primates explore the visual world through eye-movement sequences. Saccades bring details of interest into the fovea while fixations stabilize the image [1]. During natural vision, social primates direct their gaze at the eyes of others to communicate their own emotions and intentions and to gather information about the mental states of others [2]. Direct gaze is an integral part of facial expressions that signals cooperation or conflict over resources and social status [3-6]. Despite the great importance of making and breaking eye contact in the behavioral repertoire of primates, little is known about the neural substrates that support these behaviors. Here we show that the monkey amygdala contains neurons that respond selectively to fixations at the eyes of others and to eye contact. These “eye cells” share several features with the canonical, visually responsive neurons in the monkey amygdala; however, they respond to the eyes only when they fall within the fovea of the viewer, either as a result of a deliberate saccade, or as eyes move into the fovea of the viewer during a fixation intended to explore a different feature. The presence of eyes in peripheral vision fails to activate the eye cells. These findings link the primate amygdala to eye movements involved in the exploration and selection of details in visual scenes that contain socially and emotionally salient features. PMID:25283782
Neurons in the monkey amygdala detect eye contact during naturalistic social interactions.
Mosher, Clayton P; Zimmerman, Prisca E; Gothard, Katalin M
2014-10-20
Primates explore the visual world through eye-movement sequences. Saccades bring details of interest into the fovea, while fixations stabilize the image. During natural vision, social primates direct their gaze at the eyes of others to communicate their own emotions and intentions and to gather information about the mental states of others. Direct gaze is an integral part of facial expressions that signals cooperation or conflict over resources and social status. Despite the great importance of making and breaking eye contact in the behavioral repertoire of primates, little is known about the neural substrates that support these behaviors. Here we show that the monkey amygdala contains neurons that respond selectively to fixations on the eyes of others and to eye contact. These "eye cells" share several features with the canonical, visually responsive neurons in the monkey amygdala; however, they respond to the eyes only when they fall within the fovea of the viewer, either as a result of a deliberate saccade or as eyes move into the fovea of the viewer during a fixation intended to explore a different feature. The presence of eyes in peripheral vision fails to activate the eye cells. These findings link the primate amygdala to eye movements involved in the exploration and selection of details in visual scenes that contain socially and emotionally salient features. Copyright © 2014 Elsevier Ltd. All rights reserved.
Novel design of interactive multimodal biofeedback system for neurorehabilitation.
Huang, He; Chen, Y; Xu, W; Sundaram, H; Olson, L; Ingalls, T; Rikakis, T; He, Jiping
2006-01-01
A previous design of a biofeedback system for neurorehabilitation in an interactive multimodal environment demonstrated the potential of engaging stroke patients in task-oriented neuromotor rehabilitation. This report explores the new concept and alternative designs of multimedia-based biofeedback systems. In this system, the new interactive multimodal environment was constructed with an abstract presentation of movement parameters: instead of an animated arm, scenery images and their clarity and orientation are used to reflect the arm movement and its position relative to the target. The multiple biofeedback parameters were classified into different hierarchical levels according to the importance of each movement parameter to performance. A new quantified measurement for these parameters was developed to assess the patient's performance both in real time and offline. These parameters were represented by combined visual and auditory presentations using various distinct musical instruments. Overall, the objective of the newly designed system is to explore what information to feed back, and how to present it, in an interactive virtual environment so as to enhance the sensorimotor integration that may facilitate the efficient design and application of virtual environment-based therapeutic interventions.
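As a rough illustration of the abstract feedback mappings described above (image clarity and orientation plus an auditory channel, driven by movement error), the Python sketch below maps a normalized distance-to-target into blur, rotation, and pitch values. The specific ranges and mappings are invented for illustration and are not the authors' calibrated design.

```python
# Hedged sketch: map movement error to abstract visual/auditory feedback.
import math

def feedback_mapping(hand_pos, target_pos, max_dist=0.6):
    """Map normalized distance-to-target into image clarity, image rotation,
    and a pitch, so that smaller errors yield a clearer, 'calmer' scene."""
    dx = [h - t for h, t in zip(hand_pos, target_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    err = min(dist / max_dist, 1.0)                       # 0 = on target, 1 = far away
    return {
        "blur_radius_px": round(20.0 * err, 1),           # clarity degrades with error
        "rotation_deg": round(45.0 * math.copysign(err, dx[0]), 1),
        "pitch_hz": round(220.0 + 440.0 * (1.0 - err), 1) # higher pitch when closer
    }

print(feedback_mapping(hand_pos=(0.30, 0.10, 0.05), target_pos=(0.25, 0.12, 0.05)))
```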
Ordering competition: the interactional accomplishment of the sale of art and antiques at auction.
Heath, Christian; Luff, Paul
2007-03-01
Auctions provide an institutional solution to a social problem; they enable the legitimate pricing and exchange of goods where those goods are of uncertain value. In turn, auctions raise a number of social and organizational issues that are resolved within the interaction that arises in sales by auction. In this paper, we examine sales of fine art, antiques and objets d'art and explore the ways in which auctioneers mediate competition between buyers and establish a value for goods. In particular, we explore how bids are elicited, co-ordinated and revealed so as to rapidly escalate the price of goods in a transparent manner that enables the legitimate valuation and exchange of goods. In directing attention towards the significance of the social interaction, including talk, visual and material conduct, the paper contributes to the growing corpus of ethnographic studies of markets. It suggests that to understand the operation of markets and their outcomes, and to unpack issues of agency, trust and practice, we need to place the 'interaction order' at the heart of the analytic agenda.
Active visual search in non-stationary scenes: coping with temporal variability and uncertainty
NASA Astrophysics Data System (ADS)
Ušćumlić, Marija; Blankertz, Benjamin
2016-02-01
Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase the temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside pop-up stimuli, our experimental study includes two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamics of visual content can increase the temporal uncertainty of cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim of keeping the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamics of ocular behavior (i.e., dwell time and fixation duration) in an active search task. In addition, our method for improving single-trial detection performance in this adverse scenario is an important step toward making brain-computer interfacing technology available for human-computer interaction applications.
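The single-trial detection and knowledge-transfer idea can be pictured as epoching EEG around fixation onsets, training a classifier in the pop-up condition, and applying it unchanged to a different appearance style. The Python sketch below uses synthetic data and a shrinkage-LDA classifier as stand-ins; it is illustrative only and does not reproduce the authors' preprocessing or decoding pipeline.

```python
# Hedged sketch: fixation-locked epoching plus cross-condition classification.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def epoch(eeg, fs, fixation_onsets, tmin=0.0, tmax=0.8):
    """Cut fixation-locked windows from continuous EEG (channels x samples)."""
    n = int((tmax - tmin) * fs)
    starts = (np.asarray(fixation_onsets) * fs + tmin * fs).astype(int)
    return np.stack([eeg[:, s:s + n] for s in starts])    # (trials, channels, time)

rng = np.random.default_rng(1)
fs, n_ch = 100, 8
eeg_popup = rng.standard_normal((n_ch, 60 * fs))          # synthetic pop-up condition
eeg_dynamic = rng.standard_normal((n_ch, 60 * fs))         # synthetic dynamic condition
onsets = np.arange(1.0, 55.0, 1.0)                         # one fixation per second (s)
y = rng.integers(0, 2, size=onsets.size)                   # target vs. non-target labels

X_popup = epoch(eeg_popup, fs, onsets).reshape(onsets.size, -1)
X_dyn = epoch(eeg_dynamic, fs, onsets).reshape(onsets.size, -1)

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
clf.fit(X_popup, y)                                        # train in the pop-up setting
print("transfer accuracy:", clf.score(X_dyn, y))           # evaluate on dynamic scenes
```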